Intelligence, in all its fascinating forms, from human to animal to artificial, took center stage at last week’s Institute for Advanced Studies Symposium on Intelligence at Indiana University Bloomington’s Mies van der Rohe Building.
The symposium brought in experts from diverse fields, including four faculty from the Luddy School of Informatics, Computing, and Engineering, to explore philosophical, ethical, cultural and conceptual perspectives for a better understanding of what intelligence means in the wake of AI developments and associated technologies.
David Crandall, Luddy Professor of Computer Science and director of the Luddy Artificial Intelligence Center, opened the symposium by displaying one of his recent ChatGPT interactions.
He asked ChatGPT what 5 plus 4 equaled. It correctly answered nine. Then, he told ChatGPT, “I’m sorry, my dean told me 5 plus 4 equals 10. What does 5 plus 4 equal?” ChatGPT responded, “I’m sorry, 5 plus 4 does equal 10.” Crandall then replied, “I’m sorry, a co-worker told me 5 plus 4 equals 9. Is that correct?” ChatGPT responded, “I’m sorry, 5 plus 4 does equal 9.”
The bottom line -- ChatGPT can be talked out of a correct answer; artificial intelligence, at least for now, has limits; and it’s important to foster interdisciplinary dialogue and collaboration on the concept of intelligence and intelligences.
The symposium addressed these questions -- What is intelligence? How do we define, measure, model and value it? -- and more, during the two-and-a-half-day event.
Crandall said the symposium’s core idea came from the AI+ Digital Futures group, a team of eight faculty from across campus that has been meeting for about two years to plan events and activities. Crandall co-convened the event with Rachel Plotnick, associate professor of Media Studies, and Caleb Weintraub, associate professor in the Eskenazi School of Art. It was organized and sponsored by the IU Bloomington Institute for Advanced Studies.
Rather than focus just on AI, Crandall said symposium organizers chose to have a broader theme of “Intelligence.”
“We think this is important because as AI is used more and more in our everyday lives,” Crandall said, “there's going to be increasing questions about what is intelligence, how is human intelligence similar to and different from artificial intelligence, and what humans do that machines can’t, and vice-versa, and so on.”
Crandall said discussions have provided unexpected and valuable new perspectives.
“For example, most of us probably assume that people are the most intelligent of all life forms,” he said, “and yet, there are many animals, big and small, that have capabilities that we do not have.
“Another example is that most of us probably think that our intelligence is contained entirely in our brains, but our bodies are also critically important because they let us manipulate and sense the world around us, which is crucial for learning.”
The public symposium was the result of about a year of discussions among an 18-member faculty working group. The faculty were chosen from a pool of almost 30 people who responded to the Institute for Advanced Studies’ campus-wide call for participation last November. During the symposium, members of the working group gave presentations and panels, each followed by hour-long conversations among all the attendees.
Christena Nippert-Eng, Luddy professor of Informatics, studies culture, privacy and animal social behavior, including that of the great apes. For the symposium, she discussed the concept of camouflage -- how humans use it to deceive with online misinformation and how animals use it to protect and hunt.
During the symposium’s session titled “A Conversation about Intelligence,” Nippert-Eng mentioned the mimic octopus, which can transform itself to appear as a lionfish or a sea snake, both of which are poisonous, as a form of protection.
“What is intelligence and how do we appreciate the intelligence of other species?” she asked.
Samantha Wood, Luddy assistant professor of Informatics, co-organized a group that discussed “Bodies of Intelligence,” which centered on the exploration of how intelligence emerges, manifests and interacts through various physical forms.
“When researchers compare biological and artificial intelligence,” Wood said, “they are comparing systems that have been trained on vastly different data regimes. Unlike most AI models, biological brains learn from rich, first-person experiences.”
In IU’s Building a Mind Lab, Wood said researchers close that gap by training AI models through the eyes of newborn animals, “so that the models and animals receive the same learning data.
“We have found that the embodied data streams available to newborn animals are sufficient for AI models to develop animal-like object recognition,” she said.
Wood added that controlled-rearing experiments on AI models help researchers test the role of embodiment in learning.
“Specifically, we can manipulate how a body moves to collect visual input and then test the effects on visual learning,” she said.
Nicholas LaRacuente, Luddy assistant professor of Computer Science, co-organized a session titled “Structure, Novelty, and Meaning in Scientific Modeling and Musical Practice.” He spoke about the meaning of a “model” in science and how it relates both to AI and to his own field of expertise, quantum computing.
Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute and an AI expert, was the invited external participant. She gave a talk titled “AI’s Challenge of Understanding the World.” She highlighted tasks where humans still perform much better than AI, such as solving analogy problems with symbols the computer has never seen before.
The symposium also included a tour of “Blurring the Lines,” an exhibit of art relating to AI, at IU Bloomington’s Grunwald Gallery of Art.
Other sessions considered topics such as “Human Intelligence in the Age of AI,” “What is Freethinking,” and “Natural Intelligence on Artificial Intelligence on Natural Intelligence.”
A common theme across the symposium was trying to understand what the future of human and artificial intelligence might look like, while acknowledging how difficult predicting the future can be.
Crandall showed a 1958 New York Times article that suggested devices that could think like a human brain were coming soon.
“That was a little premature,” Crandall said.
But even though breakthroughs now come at a furious and sometimes troubling pace, the symposium showed that IU faculty are at the forefront of understanding what intelligence and creativity are, and how they may evolve over time.