Change the world? Cutting-edge vision research by Luddy School of Informatics, Computing, and Engineering Informatics professors Justin Wood and Samantha Wood, and Informatics Ph.D. student Lalit Pandey, just might do that by helping resolve the classic nature-nurture debate.
Their research is highlighted in their paper, “Are Vision Transformers More Data Hungry Than Newborn Visual Systems?” Presented at the prestigious Neural Information Processing Systems Conference in New Orleans, it compares newborn visual systems with vision transformers -- in other words, it compares brains with learning algorithms. The results could help build “naturally intelligent” learning systems that spark the next generation of artificial intelligence and machine learning.
The research provided the first evidence, Samantha Wood said, that transformers are not more data hungry than newborn brains.
“The same powerful generic learning algorithm that powers ChatGPT might also power newborn brains,” she said.
Current AI technologies require vast amounts of training data. Biological brains are more flexible and capable of rapid learning from birth with far less information. By reverse engineering the core learning mechanisms in newborn brains, researchers could help develop naturally intelligent learning systems.
Vision transformers are machine learning models that analyze images by cutting them into smaller pieces called patches and using self-attention to determine the relationships between those patches. They can be trained for multiple tasks and are used in computer vision applications such as autonomous car navigation and image recognition; the same underlying transformer architecture also powers ChatGPT.
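To make those two steps concrete, here is a minimal, hypothetical PyTorch sketch of a single vision transformer block: it cuts an image into patches and lets self-attention relate every patch to every other patch. It is not the researchers’ code, and the image size, patch size, embedding width, and head count are illustrative choices only.

```python
# Minimal sketch (not the authors' code) of the two ideas described above:
# splitting an image into patches and relating them with self-attention.
import torch
import torch.nn as nn

class TinyViTBlock(nn.Module):
    def __init__(self, image_size=64, patch_size=8, embed_dim=128, num_heads=4):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Cut the image into patches and project each patch to an embedding.
        self.patchify = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        # Self-attention lets every patch attend to every other patch.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, images):                        # images: (batch, 3, H, W)
        patches = self.patchify(images)               # (batch, embed_dim, H/p, W/p)
        tokens = patches.flatten(2).transpose(1, 2)   # (batch, num_patches, embed_dim)
        tokens = tokens + self.pos_embed              # mark where each patch came from
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + attended)

block = TinyViTBlock()
out = block(torch.randn(2, 3, 64, 64))
print(out.shape)  # torch.Size([2, 64, 128]): one embedding per patch
```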
The Luddy research compares the visual abilities of live newborn chicks with those of virtual chicks raised in a matching simulated environment.
In nature, newborn chicks learn to recognize the first object they see.
Can machine learning systems do the same thing with the same kind of limited data newborn chicks have? To test that, researchers trained real chicks and virtual chicks on the same visual experiences. They raised chicks in chambers outfitted with monitors where the only thing they saw was a rotating 3D object. After a week, the chicks were able to recognize the original object and distinguish it from an unfamiliar object.
When the researchers tested virtual chicks with the same visual simulation, the virtual chicks learned just as quickly as the real ones. The vision transformers didn’t need vast amounts of data. The flexible, generic attention-based learning mechanism in vision transformers, combined with the data streams available to newborn animals, was enough to produce animal-like object recognition.
Their work was recently highlighted in a New Scientist article, “AI learns to recognize objects with the efficiency of a newborn chick.”
Samantha Wood said this is a starting point for a much larger project: trying to reverse engineer the learning algorithms in newborn brains.
“To completely overturn the ‘data hungry’ assumption,” she said, “we still need to show that transformers do not need more training data than newborn animals across a wide range of perceptual, cognitive, and motor tasks. We’re excited because our digital twin approach provides a way to address this fundamental question directly.”
A larger goal is to build a closed-loop experimental platform for understanding the origins of intelligence. Samantha Wood said they want to find artificial intelligence models that can explain multiple newborn learning tasks, such as object perception, navigation, social cognition, numerical cognition, and decision making, rather than just a single experiment.
“We hope to find models that show the same patterns of successes and failures as newborn animals across a wide range of tasks,” she said. “In turn, we can use the most successful models to generate new experimental tasks for animals, closing the experimental loop.”
Researchers are developing a series of “Newborn Embodied Turing Tests” to evaluate active learning models that move around a virtual environment. It’s like a video game in which the player is an artificial chick.
“Researchers will be able to design their own artificial brains and load those brains into the artificial chicks,” Samantha Wood said. “These artificial chicks then move around the virtual controlled-rearing chamber, collecting their own visual experiences for learning. We record the behavior of the artificial chicks and measure whether it matches the behavior of the newborn chicks from our experiments.”
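As a rough illustration of that closed loop, the hypothetical Python sketch below pairs a placeholder “artificial brain” with a toy stand-in for the virtual controlled-rearing chamber and logs the chick’s actions for later comparison with real chicks. The class names, observation format, and actions are invented for illustration and are not the team’s actual platform.

```python
# Hypothetical sketch of the closed loop described above; every interface here
# is invented for illustration, not taken from the researchers' platform.
import random

class ArtificialBrain:
    """Placeholder learner: picks a movement given a visual observation."""
    def act(self, observation):
        return random.choice(["forward", "turn_left", "turn_right"])

    def learn(self, observation):
        pass  # a real model (e.g., a vision transformer) would update here

class VirtualChamber:
    """Toy stand-in for a virtual controlled-rearing chamber."""
    def reset(self):
        return {"view": "imprinted_object"}

    def step(self, action):
        return {"view": "imprinted_object"}

def run_episode(brain, chamber, steps=100):
    """The artificial chick collects its own visual experiences; its behavior is logged."""
    behavior_log = []
    obs = chamber.reset()
    for _ in range(steps):
        action = brain.act(obs)      # artificial chick chooses a movement
        behavior_log.append(action)  # recorded for comparison with real chicks
        obs = chamber.step(action)   # new visual experience from the chamber
        brain.learn(obs)             # learning from self-collected experience
    return behavior_log

print(len(run_episode(ArtificialBrain(), VirtualChamber())), "actions recorded")
```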