Calishat

#children #AI #MachineVision

"[Professor Vlad] Ayzenberg and researchers from Emory University compared the visual perceptual abilities of preschoolers and state-of-the-art AI models and found that these children outperformed the best computer vision models currently available."

news.temple.edu/news/2025-07-0

New research reveals superior visual perception in humans compared with AI


Temple Now | news.temple.edu
Karthik Srinivasan

I was interviewed by The Economist's Babbage podcast last month for their series "The science that built AI." My hour-long conversation was edited down to about six minutes!

I am glad the edit kept my perspective: that this big-data, big-compute, deep-net approach is orthogonal to human/biological vision, and that without incorporating biological principles (in this case, vision), autonomous visual navigation systems (e.g., self-driving cars) are unlikely to succeed, or at least will remain limited.

Unfortunately, the podcast requires a subscription to The Economist (I too had to access it from my university account!). But if you do have access, let me know what you think!

open.spotify.com/episode/4adN2

#Neuroscience #History #AI #Deepnets #BiologicalIntelligence #BiologicalVision #HumanVision #MachineVision #TheEconomist #Babbage #MachineLearning

Babbage: The science that built the AI revolution—part three


Spotify
Futurist Jim Carroll

Daily Inspiration: "The real magic begins once we start to chat with the machines" - Futurist Jim Carroll

In the TV series The Jetsons, the humans regularly talked to the robots.

That future isn't far away. Watch the video in which the Google DeepMind research group uses ChatGPT-like commands to instruct a robotic arm to identify and manipulate a particular object with its machine vision. In this case, it is asked to identify and lift the extinct animal. It figures out which animal figure is the extinct one using AI-based machine-vision analysis and proceeds accordingly. Imagine this: the next command could be something as simple as "Find the king of the jungle and place it next to the sports item used by LeBron James." Magical!

youtube.com/watch?v=F3xCTq15mQ

The full details of this not-too-small achievement can be found on an extensive page that details all the work behind the scenes: "RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control."

In other words, we're learning how to use large language models (the tech behind Bard, ChatGPT, and Bing) to figure out what to do, and to translate those results into actions that are given to the robots.
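The pipeline described above can be sketched in miniature. This is a toy illustration, not DeepMind's actual RT-2 system: the "language model" here is a stand-in keyword matcher, and all object names and descriptions are made up. A real vision-language-action model maps the command and camera image to robot actions end to end.

```python
# Toy sketch of the command -> action pipeline: a natural-language
# command is grounded in a scene, then turned into a robot action.
# The keyword matcher below is a hypothetical stand-in for an LLM/VLM.

def choose_target(command: str, scene_objects: dict) -> str:
    """Pretend-LLM: pick the scene object whose text description
    matches words in the command."""
    lowered = command.lower()
    for name, keywords in scene_objects.items():
        if any(word in lowered for word in keywords):
            return name
    raise ValueError("no object matches the command")

def plan_action(command: str, scene_objects: dict) -> dict:
    """Translate the chosen target into a low-level action record."""
    target = choose_target(command, scene_objects)
    return {"action": "pick", "object": target}

# Hypothetical scene: object names mapped to descriptive keywords
# (in a real system, these would come from machine-vision detection).
scene = {
    "dinosaur_figure": ["extinct", "dinosaur"],
    "basketball": ["basketball", "lebron"],
    "lion_figure": ["lion", "jungle", "king"],
}

print(plan_action("lift the extinct animal", scene))
# -> {'action': 'pick', 'object': 'dinosaur_figure'}
```

The point of the design is the separation of concerns: language understanding picks the target, and a separate step emits the motor command, which is roughly the structure the post's "figure out what to do, then translate into actions" phrase describes.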

The future is magic, and it's not that far away.

#robotics #machinelearning #machinevision #ai #artificialintelligence

Original post: jimcarroll.com/2023/09/daily-i

Jens Egholm

How do we fund software for event cameras?

Event cameras are wonderful, but the tooling is abysmal! I am in touch with developers and manufacturers in the field to push for open-source tools. There is plenty of talent, but how do we fund this?
#neuromorphic #eventcamera #machinevision