Understanding the nuances of human intelligence | MIT News

What can we learn about human intelligence by studying how machines “think”? Can we better understand ourselves if we better understand the artificial intelligence systems that are becoming an increasingly significant part of our daily lives?

These questions may be deeply philosophical, but for Phillip Isola, finding the answers requires computation as much as contemplation.

Isola, a newly appointed associate professor in the Department of Electrical Engineering and Computer Science (EECS), studies the fundamental mechanisms involved in human intelligence from a computational perspective.

While understanding intelligence is the ultimate goal, his work mainly focuses on computer vision and machine learning. Isola is particularly interested in exploring how intelligence emerges in artificial intelligence models, how these models learn to represent the world around them, and what their “brains” share with the brains of their human creators.

“I see that all types of intelligence have many things in common and I would like to understand these similarities. What do all animals, humans and artificial intelligence have in common?” says Isola, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

According to Isola, a better scientific understanding of the intelligence of AI agents will help the world safely and effectively integrate them into society, maximizing their potential to benefit humanity.

Asking questions

Isola began wondering about science at a young age.

Growing up in San Francisco, he and his father often hiked the northern California coast or camped around Point Reyes and in the Marin County hills.

He was fascinated by geological processes and often wondered what made the natural world function. In school, Isola was driven by an insatiable curiosity, and although he gravitated towards technical subjects such as math and science, he had no limits to what he wanted to learn.

Not entirely sure what to study as an undergraduate at Yale University, Isola explored a range of subjects until he came across cognitive science.

“Before, I was interested in nature – how the world works. But then I realized that the brain is even more interesting and complex than even the formation of planets. Now I wanted to know what makes us tick,” he says.

As a freshman, he began working in the lab of his cognitive science professor and future mentor, Brian Scholl, a member of the Yale Psychology Department. He remained in this laboratory throughout his studies.

After taking a year off to work with his childhood friends at an independent video game company, Isola was ready to return to the complex world of the human brain. He began graduate studies in brain and cognitive sciences at MIT.

“After college, I felt like I had finally found my place. I had many great experiences at Yale and in other parts of my life, but when I got to MIT, I realized this was work I truly loved, and these were people who thought the way I did,” he says.

Isola credits his doctoral advisor, Ted Adelson, the John and Dorothy Wilson Professor of Vision Science, with having a profound influence on his future path. He was inspired by Adelson's focus on understanding fundamental principles rather than just chasing new engineering benchmarks, the formalized tests used to measure a system's performance.

A computational perspective

At MIT, Isola's research turned to computer science and artificial intelligence.

“I still liked all the cognitive science questions, but I felt like I could have made more progress on some of them if I had approached them from a purely computational perspective,” he says.

His thesis focused on perceptual grouping: the mechanisms humans and machines use to organize separate parts of an image into a single, coherent object.

If machines can learn perceptual grouping on their own, artificial intelligence systems could recognize objects without human intervention. This type of self-supervised learning has applications in areas such as autonomous vehicles, medical imaging, robotics, and automatic language translation.
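Perceptual grouping as studied in vision science is far richer than raw pixel adjacency, but the core question (which pixels belong together as one object?) can be illustrated with a toy connected-components pass over a binary image. The `group_pixels` function and sample image below are hypothetical, purely for illustration, and are not taken from Isola's research:

```python
from collections import deque

def group_pixels(mask):
    """Group the foreground pixels (1s) of a binary image into
    4-connected components: a crude stand-in for perceptual grouping,
    deciding which pixels form one object, with no labels required."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                current += 1                      # start a new group
                queue = deque([(i, j)])
                labels[i][j] = current
                while queue:                      # flood-fill the group
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

image = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
labels, n_objects = group_pixels(image)
print(n_objects)  # → 2 (two separate blobs of connected pixels)
```

Modern self-supervised models learn far subtler grouping cues (texture, motion, semantics), but the output has the same shape: an assignment of image parts to coherent wholes.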

After graduating from MIT, Isola completed a postdoctoral fellowship at the University of California, Berkeley to expand his perspective by working in a lab focused solely on computer science.

“This experience helped make my work much more impactful, because I learned to balance understanding the basic, abstract principles of intelligence with pursuing more concrete benchmarks,” Isola recalls.

At Berkeley, he developed the image-to-image translation framework, an early form of generative artificial intelligence model that could, for example, turn a sketch into a photorealistic image or a black-and-white photo into a color one.

He entered the academic job market and accepted a teaching position at MIT, but Isola deferred it for a year to work at a then-small startup called OpenAI.

“It was a nonprofit, and I liked the idealistic mission at the time. They were really good at reinforcement learning, and I thought it was an important topic to learn more about,” he says.

He enjoyed working in a lab with a lot of scientific freedom, but after a year Isola was ready to return to MIT and start his own research group.

The study of human-like intelligence

He immediately liked running a research laboratory.

“I really love the early idea stage. I feel like my lab is a kind of startup incubator where I can constantly try new things and learn new things,” he says.

Drawing on his interest in cognitive science and his desire to understand the human brain, his group studies fundamental computations related to human-like intelligence that emerges in machines.

One of the main goals is representation learning, which is the ability of humans and machines to represent and perceive the sensory world around them.

In recent work, he and his colleagues have observed that many different types of machine learning models, from large language models to computer vision models to audio models, seem to represent the world in a similar way.

These models were designed to perform very different tasks, yet the representations they learn share many similarities. And as the models grow larger and are trained on more data, their internal structures become more and more alike.

This led Isola and his team to introduce the Platonic representation hypothesis (which takes its name from the Greek philosopher Plato), which states that the representations that all these models learn converge towards a common, underlying representation of reality.

“Language, images, sound – all these are different shadows on the wall from which you can infer that there is some underlying physical process – some kind of causal reality. If you train models on all these different types of data, they should eventually converge to this model of the world,” Isola says.
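One hedged way to make "converging representations" concrete is to measure representational similarity between two models, for instance with linear Centered Kernel Alignment (CKA). The sketch below uses synthetic embeddings derived from a shared latent "reality"; the data and setup are purely illustrative and are not the measurements used in the actual research:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices (n_samples x n_features). Returns a score in [0, 1];
    higher means more similar representational geometry."""
    X = X - X.mean(axis=0)   # center each feature dimension
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 16))            # shared underlying "reality"
vision = base @ rng.normal(size=(16, 32))    # one model's embedding of it
text = base @ rng.normal(size=(16, 24))      # another model's embedding
noise = rng.normal(size=(100, 32))           # an unrelated representation

# Both derived representations reflect the same latent structure, so
# their CKA score is noticeably higher than the unrelated baseline.
print(linear_cka(vision, text))
print(linear_cka(vision, noise))
```

The hypothesis, roughly, is that real models trained on different modalities behave like `vision` and `text` here: different surfaces, shared geometry.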

A related area his team is exploring is self-supervised learning: how AI models learn to group related pixels in an image, or related words in a sentence, without labeled examples to learn from.

Because labeling data is expensive and labeled examples are scarce, training models only on labeled data can limit the capabilities of AI systems. The goal of self-supervised learning is to develop models that can build an accurate internal representation of the world on their own.

“If you can create a good representation of the world, it should make it easier to solve problems later,” he explains.
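One common family of self-supervised methods is contrastive learning: embed two "views" of the same example close together and push different examples apart, with no labels involved. The numpy sketch of an InfoNCE-style loss below is a simplified, hypothetical illustration of that idea, not the specific method described here:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: each example's two augmented
    views should embed close together and away from other examples.
    z1, z2: (batch, dim) arrays of L2-normalized embeddings."""
    sims = (z1 @ z2.T) / temperature             # cosine similarities
    # Row-wise log-softmax; the matching ("positive") pair of views
    # for each example sits on the diagonal.
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    idx = np.arange(z1.shape[0])
    return -np.mean(log_probs[idx, idx])

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(1)
anchors = normalize(rng.normal(size=(8, 8)))                  # view 1
aligned = normalize(anchors + 0.1 * rng.normal(size=(8, 8)))  # view 2
shuffled = aligned[rng.permutation(8)]                        # mismatched

# Correctly aligned views give a much lower loss than mismatched ones;
# minimizing this gap is the training signal, no labels required.
print(info_nce_loss(anchors, aligned))
print(info_nce_loss(anchors, shuffled))
```

In practice the two views come from data augmentations (crops, color shifts, masked words), and the embeddings come from the network being trained.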

Isola's research focuses more on finding something new and surprising than on building complex systems that beat the latest machine learning benchmarks.

While this approach has been very successful in discovering innovative techniques and architectures, it means that the work sometimes lacks a clear end goal, which can lead to challenges.

For example, maintaining good team composition and financial flow can be difficult when the lab is focused on finding unexpected results, he says.

“In a sense, we are always working in the dark. It is a high-risk and highly rewarding job. Every now and then we find a grain of truth that is new and surprising,” he says.

In addition to gaining knowledge, Isola is passionate about passing on knowledge to the next generation of scientists and engineers. His favorite courses include 6.7960 (Deep Learning), which he and several other MIT faculty launched four years ago.

The class has seen exponential growth, going from 30 students in its initial offering to over 700 this fall.

And while the popularity of artificial intelligence means there's no shortage of interested students, the speed at which the field is evolving can make it difficult to separate the hype from the truly significant advances.

“I tell students that they have to take everything we say in class with a grain of salt. In a few years, we might tell them something different. In this course, we really are at the edge of knowledge,” he says.

But Isola also emphasizes to students that, despite all the hype surrounding the latest artificial intelligence models, smart machines are much simpler than most people suspect.

“Human ingenuity, creativity and emotion – many people think they can never be modeled. That may turn out to be true, but I think intelligence is quite simple once we understand it,” he says.

Although his current work focuses on deep learning models, Isola remains fascinated by the complexity of the human brain and continues to collaborate with researchers in the cognitive sciences.

Throughout, he has remained captivated by the beauty of the natural world that first drew him to science.

Although he has less time for hobbies these days, Isola enjoys hiking and backpacking in the mountains or on Cape Cod, skiing, kayaking, and finding scenic places to spend time when traveling to scientific conferences.

And while he's looking forward to exploring new topics in his lab at MIT, Isola can't help but wonder how the role of intelligent machines could change the way he works.

He believes that artificial general intelligence (AGI), the point at which machines can learn and apply knowledge much as humans do, is not that far away.

“I don't think AI will do everything for us and we'll enjoy life on the beach. I think there will be a coexistence of intelligent machines and humans who still have a lot of freedom and control. Now I'm thinking about interesting questions and applications when that happens. How can I help the world in a post-AGI future? I don't have any answers yet, but it's on my mind,” he says.
