
Undergraduates develop next-generation intelligence tools | MIT News

The coronavirus pandemic has driven us apart physically while reminding us of the power of technology to connect. When MIT shut its doors in March, much of campus moved online, to virtual classes, labs, and chatrooms. Among those making the pivot were students engaged in independent research under MIT’s Undergraduate Research Opportunities Program (UROP). 

Checking in regularly with their advisors via Slack and Zoom, many students pushed through to the end. One even carried on his experiments from his bedroom, after schlepping his Sphero Bolt robots home in a backpack. “I’ve been so impressed by their resilience and dedication,” says Katherine Gallagher, one of three artificial intelligence engineers at the MIT Quest for Intelligence who work with students each semester on intelligence-related applications. “There was that initial week of craziness and then they were right back to work.” Four projects from this spring are highlighted below.

Learning to explore the world with open eyes and ears

Robots rely heavily on images beamed through their built-in cameras, or surrogate “eyes,” to get around. MIT senior Alon Kosowsky-Sachs thinks they could do a lot more if they also used their microphone “ears.” 

From his home in Sharon, Massachusetts, where he retreated after MIT closed in March, Kosowsky-Sachs is training four baseball-sized Sphero Bolt robots to roll around a homemade arena. His goal is to teach the robots to pair sights with sounds, and to exploit this information to build better representations of their environment. He’s working with Pulkit Agrawal, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science, who is interested in designing algorithms with human-like curiosity.

While Kosowsky-Sachs sleeps, his robots putter away, gliding through an object-strewn rink he built for them from two-by-fours. Each burst of movement becomes a pair of one-second video and audio clips. By day, Kosowsky-Sachs trains a “curiosity” model aimed at pushing the robots to become bolder, and more skillful, at navigating their obstacle course.

“I want them to see something through their camera, and hear something from their microphone, and know that these two things happen together,” he says. “As humans, we combine a lot of sensory information to get added insight about the world. If we hear a thunderclap, we don’t need to see lightning to know that a storm has arrived. Our hypothesis is that robots with a better model of the world will be able to accomplish more difficult tasks.”
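
For a sense of what pairing sights with sounds could look like in code, here is a minimal PyTorch sketch of an audio-visual correspondence model. It is purely illustrative: the architecture, feature dimensions, and names are assumptions, not the project's actual code. The idea is to train a classifier to judge whether a one-second video clip and audio clip were recorded together, using shuffled mismatches as negatives.

```python
import torch
import torch.nn as nn

class AVCorrespondence(nn.Module):
    """Illustrative (hypothetical) audio-visual correspondence model:
    predicts whether a video clip and an audio clip co-occurred."""
    def __init__(self, video_dim=512, audio_dim=128, hidden=256):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)  # logit: "recorded together"

    def forward(self, video_feat, audio_feat):
        v = self.video_enc(video_feat)
        a = self.audio_enc(audio_feat)
        return self.head(torch.cat([v, a], dim=-1)).squeeze(-1)

model = AVCorrespondence()
loss_fn = nn.BCEWithLogitsLoss()

# Stand-ins for per-clip embeddings; a real pipeline would extract these
# from the robots' one-second video and audio recordings.
video = torch.randn(8, 512)
audio = torch.randn(8, 128)
labels = torch.ones(8)  # 1 = matched pair; shuffled pairs would get 0

loss = loss_fn(model(video, audio), labels)
loss.backward()
```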

Training a robot agent to design a more efficient nuclear reactor 

One important factor driving the cost of nuclear power is the layout of its reactor core. If fuel rods are arranged in an optimal fashion, reactions last longer, burn less fuel, and need less maintenance. As engineers look for ways to bring down the cost of nuclear energy, they are eyeing the redesign of the reactor core.

“Nuclear power emits very little carbon and is surprisingly safe compared to other energy sources, even solar or wind,” says third-year student Isaac Wolverton. “We wanted to see if we could use AI to make it more efficient.” 

In a project with Josh Joseph, an AI engineer at the MIT Quest, and Koroush Shirvan, an assistant professor in MIT’s Department of Nuclear Science and Engineering, Wolverton spent the year training a reinforcement learning agent to find the best way to lay out fuel rods in a reactor core. To simulate the process, he turned the problem into a game, borrowing a machine learning technique for producing agents with superhuman abilities at chess and Go.

He started by training his agent on a simpler problem: arranging colored tiles on a grid so that as few tiles as possible of the same color would touch. As Wolverton increased the number of options, from two colors to five, and four tiles to 225, he grew excited as the agent continued to find the best strategy. “It gave us hope we could teach it to swap the cores into an optimal arrangement,” he says.
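
To make the tile puzzle concrete, here is a minimal sketch of the objective such an agent would be minimizing. The 15-by-15 board is an assumption (the article says only that the hardest setting used 225 tiles and five colors), and the scoring function is a generic illustration, not the project's code.

```python
import numpy as np

def same_color_contacts(grid: np.ndarray) -> int:
    """Count horizontally and vertically adjacent pairs of same-colored
    tiles; the agent tries to drive this number as low as possible."""
    return int(np.sum(grid[:, :-1] == grid[:, 1:]) +
               np.sum(grid[:-1, :] == grid[1:, :]))

# Hypothetical 15x15 board (225 tiles) with five colors.
rng = np.random.default_rng(0)
grid = rng.integers(0, 5, size=(15, 15))
print(same_color_contacts(grid))  # a reward could be the negative count
```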

Eventually, Wolverton moved to an environment meant to simulate a 36-rod reactor core, with two enrichment levels and 2.1 million possible core configurations. With input from researchers in Shirvan’s lab, Wolverton trained an agent that arrived at the optimal solution.

The lab is now building on Wolverton’s code to try to train an agent in a life-sized 100-rod environment with 19 enrichment levels. “There’s no breakthrough at this point,” he says. “But we think it’s possible, if we can find enough compute resources.”

Making more livers available to patients who need them

About 8,000 patients in the United States receive liver transplants each year, but that’s only half the number who need one. Many more livers might be made available if hospitals had a faster way to screen them, researchers say. In a collaboration with Massachusetts General Hospital, MIT Quest is evaluating whether automation could help to boost the nation’s supply of viable livers.  

In approving a liver for transplant, pathologists estimate its fat content from a slice of tissue. If it’s low enough, the liver is deemed ready for transplant. But there are often not enough qualified doctors to review tissue samples on the tight timeline needed to match livers with recipients. A shortage of doctors, coupled with the subjective nature of analyzing tissue, means that viable livers are inevitably discarded.

This loss represents a huge opportunity for machine learning, says third-year student Kuan Wei Huang, who joined the project to explore AI applications in health care. The project involves training a deep neural network to pick out globules of fat on liver tissue slides to estimate the liver’s overall fat content.

One challenge, says Huang, has been figuring out how to handle variation in how different pathologists classify fat globules. “This makes it harder to tell whether I’ve created the appropriate masks to feed into the neural net,” he says. “However, after meeting with experts in the field, I received clarifications and was able to continue working.”

Trained on images labeled by pathologists, the model will eventually learn to isolate fat globules in unlabeled images on its own. The final output will be a fat content estimate with pictures of highlighted fat globules showing how the model arrived at its final count. “That’s the easy part — we just count up the pixels in the highlighted globules as a percentage of the overall biopsy and we have our fat content estimate,” says the Quest’s Gallagher, who is leading the project.
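
The final counting step is straightforward to sketch. The snippet below is a hypothetical illustration, not the project's code: given a binary mask of predicted fat globules and a mask of the biopsy tissue (both of which would come from the trained model and standard preprocessing), the fat content estimate is a ratio of pixel counts.

```python
import numpy as np

def fat_fraction(globule_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Estimate fat content as the percentage of tissue pixels that the
    model highlighted as fat globules."""
    return 100.0 * globule_mask.sum() / tissue_mask.sum()

# Toy masks; in practice these come from the segmentation model's output.
tissue = np.ones((512, 512), dtype=bool)
globules = np.zeros((512, 512), dtype=bool)
globules[100:150, 200:260] = True  # a hypothetical detected globule
print(f"{fat_fraction(globules, tissue):.2f}% fat")
```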

Huang says he’s excited by the project’s potential to help people. “Using machine learning to address medical problems is one of the best ways that a computer scientist can impact the world.”

Exposing the hidden constraints of what we mean in what we say

Language shapes our understanding of the world in subtle ways, with slight variations in the words we use conveying sharply different meanings. The sentence “Elephants live in Africa and Asia” looks a lot like the sentence “Elephants eat twigs and leaves.” But most readers will conclude that the elephants in the first sentence are split into distinct groups living on separate continents, yet will not apply the same reasoning to the second sentence: eating twigs and eating leaves can both be true of the same elephant in a way that living on different continents cannot.

Karen Gu is a senior majoring in computer science and molecular biology, but instead of putting cells under a microscope for her SuperUROP project, she chose to look at sentences like the ones above. “I’m fascinated by the complex and subtle things that we do to constrain language understanding, almost all of it subconsciously,” she says.

Working with Roger Levy, a professor in MIT’s Department of Brain and Cognitive Sciences, and postdoc MH Tessler, Gu explored how prior knowledge guides our interpretation of syntax and, ultimately, meaning. In the sentences above, prior knowledge about geography and mutual exclusivity interacts with syntax to produce different meanings.

After steeping herself in linguistics theory, Gu built a model to explain how, word by word, a given sentence produces meaning. She then ran a set of online experiments to see how human subjects would interpret analogous sentences in a story. Her experiments, she says, largely validated intuitions from linguistic theory.

One challenge, she says, was having to reconcile two approaches for studying language. “I had to figure out how to combine formal linguistics, which applies an almost mathematical approach to understanding how words combine, and probabilistic semantics-pragmatics, which has focused more on how people interpret whole utterances.”
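
Probabilistic semantics-pragmatics is often formalized with Rational Speech Acts (RSA)-style models, in which listeners and speakers reason recursively about one another. The sketch below is a generic textbook-style RSA model, not Gu's actual one; the utterances, candidate meanings, prior, and rationality parameter are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical setup loosely echoing "Elephants eat twigs / leaves."
utterances = ["twigs", "leaves"]
meanings = ["twigs_only", "leaves_only", "both"]

# Truth conditions [[u]](m): which meanings make each utterance true.
truth = np.array([
    [1, 0, 1],   # "twigs" is true if they eat only twigs, or both
    [0, 1, 1],   # "leaves" is true if they eat only leaves, or both
], dtype=float)

prior = np.ones(len(meanings)) / len(meanings)  # uniform prior (assumption)
alpha = 4.0                                     # speaker rationality

# Literal listener: L0(m|u) proportional to [[u]](m) * P(m)
L0 = truth * prior
L0 /= L0.sum(axis=1, keepdims=True)

# Pragmatic speaker: S1(u|m) proportional to exp(alpha * log L0(m|u))
with np.errstate(divide="ignore"):
    S1 = np.exp(alpha * np.log(L0)).T  # rows: meanings, cols: utterances
S1 /= S1.sum(axis=1, keepdims=True)

# Pragmatic listener: L1(m|u) proportional to S1(u|m) * P(m)
L1 = S1.T * prior
L1 /= L1.sum(axis=1, keepdims=True)
print(dict(zip(meanings, L1[0].round(3))))  # interpretation of "twigs"
```

Under the uniform prior shown, the pragmatic listener hearing “twigs” leans toward the twigs-only meaning; raising the prior on “both,” as world knowledge suggests for what elephants eat, weakens that exclusivity inference. That interaction between prior knowledge and literal meaning is the kind of effect Gu's work probes.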

After MIT closed in March, she was able to finish the project from her parents’ home in East Hanover, New Jersey. “Regular meetings with my advisor have been really helpful in keeping me motivated and on track,” she says. She says she also got to improve her web-development skills, which will come in handy when she starts work at Benchling, a San Francisco-based software company, this summer.

Spring semester Quest UROP projects were funded, in part, by the MIT-IBM Watson AI Lab and Eric Schmidt, technical advisor to Alphabet Inc., and his wife, Wendy.



MIT undergraduates pursue research opportunities through the pandemic | MIT News


Even in ordinary times, the scientific process is stressful, with its demand for open-ended exploration and persistence in the face of failure. But the pandemic has added to the strain. In this new world of physical isolation, there are fewer opportunities for spontaneity and connection, and fewer distractions and events to mark the passage of time. Days pass in a numbing blur of sameness.

Working from home this summer, students participating in MIT’s Undergraduate Research Opportunities Program (UROP) did their best to overcome these challenges. Checking in with their advisors over Zoom and Slack, from as far west as Los Angeles, California, and as far east as Skopje, North Macedonia, they completed two dozen projects sponsored by the MIT Quest for Intelligence. Four student projects are highlighted here.

Defending code-processing AI models against adversarial attacks 

Computer vision models have famously been fooled into classifying turtles as rifles, and planes as pigs, simply by making subtle changes to the objects and images the models are asked to interpret. But models that analyze computer code, part of recent efforts to build automated tools that design programs efficiently, are also susceptible to so-called adversarial examples.

The lab of Una-May O’Reilly, a principal research scientist at MIT, is focused on finding and fixing the weaknesses in code-processing models that can cause them to misbehave. As automated programming methods become more common, researchers are looking for ways to make this class of deep learning model more secure.

“Even small changes like giving a different name to a variable in a computer program can completely change how the model interprets the program,” says Tamara Mitrovska, a third-year student who worked on a UROP project this summer with Shashank Srikant, a graduate student in O’Reilly’s lab.

The lab is investigating two types of models used to summarize bits of a program as part of a broader effort to use machine learning to write new programs. One such model is Google’s seq2seq, originally developed for machine translation. A second is code2seq, which creates abstract representations of programs. Both are vulnerable to attack through a simple feature of programming: the human-readable elements that tell people what the code is doing, such as variable names, also give attackers an opening to exploit the model. Simply changing a variable name in a program or adding a print statement leaves the program functioning normally, yet can force the model processing it to give an incorrect answer.
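
A semantics-preserving edit of the kind described above is easy to illustrate. The sketch below is a generic example, not the lab's tooling: it renames one local variable in a Python function using the standard ast module. The program behaves exactly as before, but a code-summarization model now sees different tokens and may produce a different answer; inserting a harmless print statement works the same way.

```python
import ast

class RenameVariable(ast.NodeTransformer):
    """Rename one variable without changing program behavior, the kind
    of perturbation that can flip a code model's output."""
    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == self.old:
            node.id = self.new
        return node

src = "def total(prices):\n    result = sum(prices)\n    return result\n"
tree = RenameVariable("result", "zzq").visit(ast.parse(src))
print(ast.unparse(tree))  # same behavior, perturbed identifier (Python 3.9+)
```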

This summer, from her home near Skopje, in North Macedonia, Mitrovska learned how to sift through a database of more than 100,000 programs in Java and Python and modify them algorithmically to try to fool seq2seq and code2seq. “These systems are challenging to implement,” she says. “Finding even the smallest bug can take a significant amount of time. But overall, I’ve been having fun and the project has been a very good learning experience for me.”

One exploit that she uncovered: Both models could be tricked by inserting “print” commands in the programs they process. That exploit, and others discovered by the lab, will be used to update the models to make them more robust.

What everyday adjectives can tell us about human reasoning

Embedded in the simplest of words are assumptions about the world that vary even among closely related languages. Take the word “biggest.” Like other superlatives in English, this adjective has no equivalent in French or Spanish. Speakers simply use the comparative form, “bigger” — plus grand in French or más grande in Spanish — to differentiate among objects of various sizes.

To understand what these words mean and how they are actually used, Helena Aparicio, formerly a postdoc at MIT and now a professor at Cornell University, devised a set of psychology experiments with MIT Associate Professor Roger Levy and Boston University Professor Elizabeth Coppock. Curtis Chen, a second-year student at MIT interested in the four topics that converge in Levy’s lab — computer science, psychology, linguistics, and cognitive science — joined on as a UROP student.

From his home in Hillsborough, New Jersey, Chen orchestrated experiments to identify why English speakers prefer superlatives in some cases and comparatives in others. He found that the more similarly sized objects a scene contained, the more likely his human subjects were to prefer the word “biggest” to describe the largest object in the set. When objects appeared to fall within two clearly defined groups, subjects preferred the less precise “bigger.” Chen also built an AI model to simulate the inferences made by his human subjects and found that it showed a similar preference for the superlative in ambiguous situations.

Designing a successful experiment can take several tries. To ensure consistency among the shapes that subjects were asked to describe, Chen generated them on the computer using HTML Canvas and JavaScript. “This way, the size differentials were exact, and we could simply report the formula used to make them,” he says.

After discovering that some subjects seemed confused by rectangle and line shapes, he replaced them with circles. He also removed the default option on his reporting scale after realizing that some subjects were using it to breeze through the tasks. Finally, he switched to the crowdsourcing platform Prolific after a number of participants on Amazon’s Mechanical Turk failed at tasks designed to ensure they were taking the experiments seriously.

“It was discouraging, but Curtis went through the process of exploring the data and figuring out what was going wrong,” says his mentor, Aparicio. 

In the end, he wound up with strong results and promising ideas for follow-up experiments this fall. “There’s still a lot to be done,” he says. “I had a lot of fun cooking up and tweaking the model, designing the experiment, and learning about this deceptively simple puzzle.”

Levy says he looks forward to the results. “Ultimately, this line of inquiry helps us understand how different vocabularies and grammatical resources of English and thousands of other languages support flexible communication by their native speakers,” he says.

Reconstructing real-world scenes from sensor data

AI systems that have become expert at sizing up scenes in photos and video may soon be able to do the same for real-world scenes. It’s a process that involves stitching together snapshots of a scene from varying viewpoints into a coherent picture. The brain performs these calculations effortlessly as we move through the world, but computers require sophisticated algorithms and extensive training. 

MIT Associate Professor Justin Solomon focuses on developing methods to help computers understand 3D environments. He and his lab look for new ways to take point cloud data gathered by sensors — essentially, reflections of infrared light bounced off the surfaces of objects — to create a holistic representation of a real-world scene. Three-dimensional scene analysis has many applications in computer graphics, but the one that drove second-year student Kevin Shao to join Solomon’s lab was its potential as a navigation tool for self-driving cars.
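
One building block of such pipelines is pairwise rigid alignment: finding the rotation and translation that best map one scan of a scene onto another. The sketch below shows the classic SVD-based Kabsch solution for the simplified case where point correspondences are already known; it is a generic textbook method, not the lab's algorithm, and real multi-view registration must also estimate correspondences and reconcile many scans jointly.

```python
import numpy as np

def kabsch(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t mapping src onto dst,
    assuming src[i] corresponds to dst[i] (Kabsch algorithm)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Toy check: recover a known rotation and translation of a random cloud.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = cloud @ R_true.T + np.array([1.0, 2.0, 3.0])
R_est, t_est = kabsch(cloud, moved)
assert np.allclose(R_est, R_true, atol=1e-6)
```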

“Working on autonomous cars has been a childhood dream for me,” says Shao.

In the first phase of his UROP project, Shao downloaded the most important papers on 3D scene reconstruction and tried to reproduce their results. This improved his knowledge of PyTorch, the Python library that provides tools for training, testing, and evaluating models. It also gave him a deep understanding of the literature. In the second phase of the project, Shao worked with his mentor, PhD student Yue Wang, to improve on existing methods.

“Kevin implemented most of the ideas, and explained in detail why they would or wouldn’t work,” says Wang. “He didn’t give up on an idea until we had a comprehensive analysis of the problem.”

One idea they explored was the use of computer-drawn scenes to train a multi-view registration model. So far, the method works in simulation, but not on real-world scenes. Shao is now trying to incorporate real-world data to bridge the gap, and will continue the work this fall.

Wang is excited to see the results. “It sometimes takes PhD students a year to have a reasonable result,” he says. “Although we are still in the exploration phase, I think Kevin has made a successful transition from a smart student to a well-qualified researcher.”

When do infants become attuned to speech and music?

The ability to perceive speech and music has been traced to specialized parts of the brain, with infants as young as four months old showing sensitivity to speech-like sounds. MIT Professor Nancy Kanwisher and her lab are investigating how this special ear for speech and music arises in the infant brain.

Somaia Saba, a second-year student at MIT, was introduced to Kanwisher’s research last year in an introductory neuroscience class and immediately wanted to learn more. “The more I read up about cortical development, the more I realized how little we know about the development of the visual and auditory pathways,” she says. “I became very excited and met with [PhD student] Heather Kosakowski, who explained the details of her projects.”

Signing on for a project, Saba plunged into the “deep end” of cortical development research. Initially overwhelmed, she says she gained confidence through regular Zoom meetings with Kosakowski, who helped her to navigate MATLAB and other software for analyzing brain-imaging data. “Heather really helped motivate me to learn these programs quickly, which has also primed me to learn more easily in the future,” she says.

Before the pandemic shut down campus, Kanwisher’s lab collected functional magnetic resonance imaging (fMRI) data from two- to eight-week-old sleeping infants exposed to different sounds. This summer, from her home on Long Island, New York, Saba helped to analyze the data. She is now learning how to process fMRI data for awake infants, looking toward the study’s next phase. “This is a crucial and very challenging task that’s harder than processing child and adult fMRI data,” says Kosakowski. “Discovering how these specialized regions emerge in infants may be the key to unlocking mysteries about the origin of the mind.”

MIT Quest for Intelligence summer UROP projects were funded, in part, by the MIT-IBM Watson AI Lab and by Eric Schmidt, technical advisor to Alphabet Inc., and his wife, Wendy.

