
Machine-learning tool could help develop tougher materials | MIT News

For engineers developing new materials or protective coatings, there are billions of different possibilities to sort through. Lab tests or even detailed computer simulations to determine their exact properties, such as toughness, can take hours, days, or more for each variation. Now, a new artificial intelligence-based approach developed at MIT could reduce that to a matter of milliseconds, making it practical to screen vast arrays of candidate materials.

The system, which MIT researchers hope could be used to develop stronger protective coatings or structural materials — for example, to protect aircraft or spacecraft from impacts — is described in a paper in the journal Matter, by MIT postdoc Chi-Hua Yu, civil and environmental engineering professor and department head Markus J. Buehler, and Yu-Chuan Hsu at the National Taiwan University.

The focus of this work was on predicting the way a material would break or fracture, by analyzing the propagation of cracks through the material’s molecular structure. Buehler and his colleagues have spent many years studying fractures and other failure modes in great detail, since understanding failure processes is key to developing robust, reliable materials. “One of the specialties of my lab is to use what we call molecular dynamics simulations, or basically atom-by-atom simulations” of such processes, Buehler says.

These simulations provide a chemically accurate description of how fracturing happens, he says. But it’s slow, because it requires solving equations of motion for every single atom. “It takes a lot of time to simulate these processes,” he says. The team decided to explore ways of streamlining that process, using a machine-learning system.
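
To see the cost concretely, a single timestep must recompute the force on every atom from its interactions with the others, then advance positions and velocities by a femtosecond-scale increment. The sketch below is a generic velocity-Verlet integrator with a simple Lennard-Jones pair potential; it is illustrative only, not the team's simulation code.

```python
import numpy as np

def lennard_jones_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces; naively O(N^2) in the number of atoms."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            inv6 = (sigma**2 / d2) ** 3           # (sigma/d)^6
            f = 24 * eps * inv6 * (2 * inv6 - 1) / d2 * r
            forces[i] += f                        # Newton's third law:
            forces[j] -= f                        # equal and opposite
    return forces

def velocity_verlet_step(pos, vel, forces, mass=1.0, dt=1e-3):
    """Advance every atom by one timestep using velocity Verlet."""
    vel_half = vel + 0.5 * dt * forces / mass
    pos_new = pos + dt * vel_half
    forces_new = lennard_jones_forces(pos_new)    # recomputed every step
    vel_new = vel_half + 0.5 * dt * forces_new / mass
    return pos_new, vel_new, forces_new
```

Repeating that step millions of times, over large numbers of atoms, is what turns a single fracture simulation into hours of computation.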

“We’re kind of taking a detour,” he says. “We’ve been asking, what if you had just the observation of how fracturing happens [in a given material], and let computers learn this relationship itself?” To do that, artificial intelligence (AI) systems need a variety of examples to use as a training set, to learn about the correlations between the material’s characteristics and its performance.

In this case, they were looking at a variety of composite, layered coatings made of crystalline materials. The variables included the composition of the layers and the relative orientations of their orderly crystal structures, and the way those materials each responded to fracturing, based on the molecular dynamics simulations. “We basically simulate, atom by atom, how materials break, and we record that information,” Buehler says.

The team used atom-by-atom simulations to determine how cracks propagate through different materials. This animation shows one such simulation, in which the crack propagates all the way through.

They painstakingly generated hundreds of such simulations, with a wide variety of structures, and subjected each one to many different simulated fractures. Then they fed large amounts of data about all these simulations into their AI system, to see if it could discover the underlying physical principles and predict the performance of a new material that was not part of the training set.

And it did. “That’s the really exciting thing,” Buehler says, “because the computer simulation through AI can do what normally takes a very long time using molecular dynamics, or using finite element simulations, which are another way that engineers solve this problem, and it’s very slow as well. So, this is a whole new way of simulating how materials fail.”

How materials fail is crucial information for any engineering project, Buehler emphasizes. Materials failures such as fractures are “one of the biggest reasons for losses in any industry. For inspecting planes or trains or cars, or for roads or infrastructure, or concrete, or steel corrosion, or to understand the fracture of biological tissues such as bone, the ability to simulate fracturing with AI, and doing that quickly and very efficiently, is a real game changer.”

The improvement in speed produced by using this method is remarkable. Hsu explains that “for single simulations in molecular dynamics, it has taken several hours to run the simulations, but in this artificial intelligence prediction, it only takes 10 milliseconds to go through all the predictions from the patterns, and show how a crack forms step by step.”

“Over the past 30 years or so there have been multiple approaches to model crack propagation in solids, but it remains a formidable and computationally expensive problem,” says Pradeep Guduru, a professor of engineering at Brown University, who was not involved in this work. “By shifting the computational expense to training a robust machine-learning algorithm, this new approach can potentially result in a quick and computationally inexpensive design tool, which is always desirable for practical applications.”

The method they developed is quite generalizable, Buehler says. “Even though in our paper we only applied it to one material with different crystal orientations, you can apply this methodology to much more complex materials.” And while they used data from atomistic simulations, the system could also be used to make predictions on the basis of experimental data such as images of a material undergoing fracturing.

“If we had a new material that we’ve never simulated before,” he says, “if we have a lot of images of the fracturing process, we can feed that data into the machine-learning model as well.” Whatever the input, simulated or experimental, the AI system essentially goes through the evolving process frame by frame, noting how each image differs from the one before in order to learn the underlying dynamics.
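
In spirit, this is next-frame prediction: given the current state of the fracture field, output the state one step later. A minimal PyTorch sketch of that training setup follows; the toy architecture and placeholder data are illustrative assumptions, not the model from the Matter paper.

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Toy convolutional model: map the current crack field to the next frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)

model = NextFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# (time, channels, H, W): placeholder standing in for simulated or imaged crack fields
frames = torch.rand(50, 1, 64, 64)
for t in range(len(frames) - 1):
    pred = model(frames[t].unsqueeze(0))             # predict frame t+1 from frame t
    loss = loss_fn(pred, frames[t + 1].unsqueeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference time, such a model can be rolled forward from an initial state, tracing a crack's path in milliseconds rather than re-solving the underlying physics at every step.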

For example, as researchers make use of the new facilities in MIT.nano, the Institute’s facility dedicated to fabricating and testing materials at the nanoscale, vast amounts of new data about a variety of synthesized materials will be generated.

“As we have more and more high-throughput experimental techniques that can produce a lot of images very quickly, in an automated way, these kind of data sources can immediately be fed into the machine-learning model,” Buehler says. “We really think that the future will be one where we have a lot more integration between experiment and simulation, much more than we have in the past.”

The system could be applied not just to fracturing, as the team did in this initial demonstration, but to a wide variety of processes unfolding over time, he says, such as diffusion of one material into another, or corrosion processes. “Anytime where you have evolutions of physical fields, and we want to know how these fields evolve as a function of the microstructure,” he says, this method could be a boon.

The research was supported by the U.S. Office of Naval Research and the Army Research Office.


Fireflies helps companies get more out of meetings | MIT News

Many decisions are made and details sorted out in a productive business meeting. But in order for that meeting to translate into results, participants have to remember all those details, understand their assignments, and follow through on commitments.

The startup Fireflies.ai is helping people get the most out of their meetings with a note-taking, information-organizing virtual assistant named Fred. Fred transcribes every word of meetings and then uses artificial intelligence to help people sort and share that information later on.

“There’s a tremendous amount of data generated in meetings that can help your team stay on the same page,” says Sam Udotong ’16, who founded the company with Krish Ramineni in 2016. “We let people capture that data, search through it, and then share it to the places that matter most.”

The tool integrates with popular meeting and scheduling software like Zoom and Google Calendar so users can quickly add Fred to calls. It also works with collaboration platforms like Slack and customer management software like Salesforce to help ensure plans turn into coordinated action.

Fireflies is used by people working in roles including sales, recruiting, and product management. They can use the service to automate project management tasks, screen candidates, and manage internal team communications.

In the last few months, driven in part by the Covid-19 pandemic, Fred has sat through millions of minutes of meetings involving more than half a million people. And the founders believe Fred can do more than simply help people adjust to remote work; it can also help them collaborate more effectively than ever before.

“[Fred] is giving you perfect memory,” says Udotong, who serves as Fireflies’ chief technology officer. “The dream is for everyone to have perfect recall and make all their decisions based on the right information. So being able to search back to exact points in conversation and remember that is powerful. People have told us it makes them look smarter in front of clients.”

Taking the leap

Udotong was introduced to the power of machine learning in his first year at MIT while working on a project in which students built a drone that could lead people on campus tours. Later, during his first MIT hackathon, he sought to use machine learning in a cryptography solution. That’s when he met Ramineni, who was a student at the University of Pennsylvania. That’s also when Fireflies was born — although the founders would go on to change everything about the company besides its name as they sought to use artificial intelligence to improve efficiency in a range of fields.

“We ended up building six iterations of Fireflies before this current meeting assistant,” Udotong remembers. “And every time we would build a different iteration, we would tell our friends, ‘Download it, use it, and get back to us next week, we’ll grab coffee.’ We were making all these agreements and promises, and it became really challenging to keep track of all the conversations we were having to get our products out there. We thought, ‘What if we just had an AI that could keep track of conversations for us?’”

The founders’ initial note-taking solution, built in short bursts between classes and homework, tracked action items written in messages, sending reminders to users later on.

Following Udotong’s graduation with a degree in aeronautics and astronautics in 2016, the founders decided to use a $25,000 stipend they received from Rough Draft Ventures, along with $5,000 from the MIT Sandbox Innovation Fund, to work on Fireflies through the summer.

The plan was to work on Fireflies for another short burst: Ramineni was already making plans to attend Cambridge University for his master’s degree in the fall, and Udotong was weighing acceptance letters from graduate schools as well as job offers. By July, however, the founders had changed their plans.

“I think deciding [on a career path] is really hard these days, even if you identify your passion,” Udotong says. “The easy path for someone in tech is to follow the money and go work for Google or Facebook. We decided to go a different route and take the risk.”

They moved to Ramineni’s hometown of San Francisco to officially launch the company. Udotong remembers arriving in San Francisco with $100 in his bank account.

The founders had fully committed themselves to Fireflies, but it didn’t make starting the company any easier. They decided not to raise venture capital in the company’s early years, and Ramineni admits to questioning whether going all in on Fireflies was the right decision as recently as 2018.

The founders also weren’t sure businesses would readily embrace a radically new software category. Still, they continued investing in the voice AI space, believing the need for their technology was growing and the timing was right.

“We realized that there’s a ton of data generated every day through speech, either in meetings like Zoom or in person,” Ramineni says. “Today, two hours after your meeting, unless you’re taking good notes or recording, you’re not going to be able to recall everything. You might not even remember what action items you agreed to a few hours ago. It’s such a common problem that people don’t even know it’s an issue. You have meetings and you expect things to slip through the cracks.”

Illuminating conversations

Today the Fireflies solution shows little trace of the arduous journey the founders took to get to this point. In fact, building simplicity into the tool has been a major focus for the founders.

Fred can join calendar events automatically or be added to meetings using the fred@fireflies.ai address. Fred joins Zoom, Google Meet, Skype, or Microsoft Teams calls as a participant, silently transcribing and generating notes from the meeting. After the meeting, the AI assistant sends a full transcript to whomever the organizer chooses, allowing users to click on sections of the transcript to hear that part of the meeting audio. Users can also search the transcript and go through an hourlong meeting in five minutes, according to the company. The transcript can also surface action items, tasks, metrics, pricing, and other topics of interest.
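
The click-to-listen behavior implies a transcript stored as timestamped, speaker-attributed segments, so a text hit can be mapped back to an offset in the recording. Here is a minimal sketch of that idea; the data structures and names are hypothetical, not Fireflies’ actual API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start_sec: float   # hypothetical offset into the meeting recording
    end_sec: float
    speaker: str
    text: str

def search(transcript: list[Segment], query: str) -> list[Segment]:
    """Return the segments mentioning the query, each carrying its audio offset."""
    q = query.lower()
    return [seg for seg in transcript if q in seg.text.lower()]

transcript = [
    Segment(12.0, 18.5, "Alice", "Let's ship the pricing update by Friday."),
    Segment(18.5, 24.0, "Bob", "I'll draft the release notes tomorrow."),
]
for seg in search(transcript, "pricing"):
    print(f"{seg.speaker} at {seg.start_sec}s: {seg.text}")  # seek audio to start_sec
```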

After each meeting, Fireflies can automatically sync all this meeting data into apps from companies like Slack, Salesforce, and HubSpot.

“Fireflies is like a personal assistant that helps connect your systems of communication with your systems of record,” Udotong says. “If you’re having these meetings over Zoom and Google Meet every day, and you’re interacting with Slack or Trello, Fireflies is that middle router that can bring synchronicity to your work life.”

In the midst of the Covid-19 pandemic, millions of companies have been forced to operate remotely, and the founders think the impact of that response will be felt for far longer than the virus.

“I think the world’s now realizing that people can be fully distributed,” says Ramineni, who notes Fireflies’ team has been remote since he and Udotong began working together in college hackathons from different campuses in 2014.

And as the company has grown, customers have begun using Fred for use cases the founders hadn’t even considered, like sending Fred to meetings that they can’t attend and reviewing the notes later on. Customers, the founders believe, are realizing that being able to quickly search, sort, and otherwise collaborate across audio data unlocks a world of new possibilities.

“It’s kind of like what Google did with search,” Udotong says. “There was five to 10 years of web data building up, and there was no way for people to find what they were looking for. The same thing is true today of audio and meeting data. It’s out there, but there’s no way to actually find what you’re looking for because it’s never even stored in the first place.”


Undergraduates develop next-generation intelligence tools | MIT News

The coronavirus pandemic has driven us apart physically while reminding us of the power of technology to connect. When MIT shut its doors in March, much of campus moved online, to virtual classes, labs, and chatrooms. Among those making the pivot were students engaged in independent research under MIT’s Undergraduate Research Opportunities Program (UROP). 

With regular check-ins with their advisors via Slack and Zoom, many students succeeded in pushing through to the end. One even carried on his experiments from his bedroom, after schlepping his Sphero Bolt robots home in a backpack. “I’ve been so impressed by their resilience and dedication,” says Katherine Gallagher, one of three artificial intelligence engineers at MIT Quest for Intelligence who works with students each semester on intelligence-related applications. “There was that initial week of craziness and then they were right back to work.” Four projects from this spring are highlighted below.

Learning to explore the world with open eyes and ears

Robots rely heavily on images beamed through their built-in cameras, or surrogate “eyes,” to get around. MIT senior Alon Kosowsky-Sachs thinks they could do a lot more if they also used their microphone “ears.” 

From his home in Sharon, Massachusetts, where he retreated after MIT closed in March, Kosowsky-Sachs is training four baseball-sized Sphero Bolt robots to roll around a homemade arena. His goal is to teach the robots to pair sights with sounds, and to exploit this information to build better representations of their environment. He’s working with Pulkit Agrawal, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science, who is interested in designing algorithms with human-like curiosity.

While Kosowsky-Sachs sleeps, his robots putter away, gliding through an object-strewn rink he built for them from two-by-fours. Each burst of movement becomes a pair of one-second video and audio clips. By day, Kosowsky-Sachs trains a “curiosity” model aimed at pushing the robots to become bolder, and more skillful, at navigating their obstacle course.

“I want them to see something through their camera, and hear something from their microphone, and know that these two things happen together,” he says. “As humans, we combine a lot of sensory information to get added insight about the world. If we hear a thunderclap, we don’t need to see lightning to know that a storm has arrived. Our hypothesis is that robots with a better model of the world will be able to accomplish more difficult tasks.”
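
A common way to teach a model that two signals “happen together” is a correspondence task: show it matched clip pairs and deliberately mismatched ones, and train it to tell them apart. The PyTorch sketch below illustrates that generic setup over precomputed clip embeddings; it is a toy under stated assumptions, not the lab’s actual model.

```python
import torch
import torch.nn as nn

class CorrespondenceNet(nn.Module):
    """Predict whether a video embedding and an audio embedding come from
    the same one-second clip (label 1) or from different clips (label 0)."""
    def __init__(self, vid_dim=128, aud_dim=64):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(vid_dim + aud_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, vid_emb, aud_emb):
        return self.classifier(torch.cat([vid_emb, aud_emb], dim=-1))

model = CorrespondenceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder embeddings standing in for encoded one-second video/audio clips.
vid, aud = torch.randn(32, 128), torch.randn(32, 64)
matched = model(vid, aud)                         # true pairs
mismatched = model(vid, aud[torch.randperm(32)])  # shuffled audio
loss = (loss_fn(matched, torch.ones_like(matched)) +
        loss_fn(mismatched, torch.zeros_like(mismatched)))
opt.zero_grad()
loss.backward()
opt.step()
```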

Training a robot agent to design a more efficient nuclear reactor 

One important factor driving the cost of nuclear power is the layout of its reactor core. If fuel rods are arranged in an optimal fashion, reactions last longer, burn less fuel, and need less maintenance. As engineers look for ways to bring down the cost of nuclear energy, they are eyeing the redesign of the reactor core.

“Nuclear power emits very little carbon and is surprisingly safe compared to other energy sources, even solar or wind,” says third-year student Isaac Wolverton. “We wanted to see if we could use AI to make it more efficient.” 

In a project with Josh Joseph, an AI engineer at the MIT Quest, and Koroush Shirvan, an assistant professor in MIT’s Department of Nuclear Science and Engineering, Wolverton spent the year training a reinforcement learning agent to find the best way to lay out fuel rods in a reactor core. To simulate the process, he turned the problem into a game, borrowing a machine learning technique for producing agents with superhuman abilities at chess and Go.

He started by training his agent on a simpler problem: arranging colored tiles on a grid so that as few tiles as possible of the same color would touch. As Wolverton increased the number of options, from two colors to five and from four tiles to 225, he grew excited as the agent continued to find the best strategy. “It gave us hope we could teach it to swap the cores into an optimal arrangement,” he says.
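
That warm-up task has a natural score: count the adjacent tile pairs that share a color, and reward the agent for driving the count down. A minimal sketch of the scoring function, assuming a 15-by-15 grid (225 tiles) of five colors:

```python
import numpy as np

def same_color_contacts(grid: np.ndarray) -> int:
    """Count horizontally and vertically adjacent pairs of equal-valued tiles.
    The agent's goal is an arrangement that minimizes this count."""
    horizontal = np.sum(grid[:, :-1] == grid[:, 1:])  # left-right neighbors
    vertical = np.sum(grid[:-1, :] == grid[1:, :])    # up-down neighbors
    return int(horizontal + vertical)

rng = np.random.default_rng(0)
grid = rng.integers(0, 5, size=(15, 15))   # 225 tiles, 5 colors
reward = -same_color_contacts(grid)        # fewer contacts -> higher reward
print(reward)
```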

Eventually, Wolverton moved to an environment meant to simulate a 36-rod reactor core, with two enrichment levels and 2.1 million possible core configurations. With input from researchers in Shirvan’s lab, Wolverton trained an agent that arrived at the optimal solution.

The lab is now building on Wolverton’s code to try to train an agent in a life-sized 100-rod environment with 19 enrichment levels. “There’s no breakthrough at this point,” he says. “But we think it’s possible, if we can find enough compute resources.”

Making more livers available to patients who need them

About 8,000 patients in the United States receive liver transplants each year, but that’s only half the number who need one. Many more livers might be made available if hospitals had a faster way to screen them, researchers say. In a collaboration with Massachusetts General Hospital, MIT Quest is evaluating whether automation could help to boost the nation’s supply of viable livers.  

In approving a liver for transplant, pathologists estimate its fat content from a slice of tissue. If it’s low enough, the liver is deemed ready for transplant. But there are often not enough qualified doctors to review tissue samples on the tight timeline needed to match livers with recipients. A shortage of doctors, coupled with the subjective nature of analyzing tissue, means that viable livers are inevitably discarded.

This loss represents a huge opportunity for machine learning, says third-year student Kuan Wei Huang, who joined the project to explore AI applications in health care. The project involves training a deep neural network to pick out globules of fat on liver tissue slides to estimate the liver’s overall fat content.

One challenge, says Huang, has been figuring out how to handle variations in how various pathologists classify fat globules. “This makes it harder to tell whether I’ve created the appropriate masks to feed into the neural net,” he says. “However, after meeting with experts in the field, I received clarifications and was able to continue working.”

Trained on images labeled by pathologists, the model will eventually learn to isolate fat globules in unlabeled images on its own. The final output will be a fat content estimate with pictures of highlighted fat globules showing how the model arrived at its final count. “That’s the easy part — we just count up the pixels in the highlighted globules as a percentage of the overall biopsy and we have our fat content estimate,” says the Quest’s Gallagher, who is leading the project.
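
That last computation can be sketched in a few lines: given a binary mask of predicted fat globules and a mask of the biopsy tissue itself, the estimate is a single pixel ratio (the array names here are illustrative):

```python
import numpy as np

def fat_fraction(globule_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Fat content estimate: highlighted globule pixels over all biopsy pixels."""
    fat_pixels = np.count_nonzero(globule_mask & tissue_mask)
    tissue_pixels = np.count_nonzero(tissue_mask)
    return fat_pixels / tissue_pixels

# Toy 4x4 example: 3 globule pixels out of 12 tissue pixels -> 25 percent
tissue = np.ones((4, 4), dtype=bool)
tissue[0, :] = False                 # top row is background, not biopsy
globules = np.zeros((4, 4), dtype=bool)
globules[1, 0:3] = True              # three predicted fat pixels
print(f"{fat_fraction(globules, tissue):.0%}")   # prints 25%
```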

Huang says he’s excited by the project’s potential to help people. “Using machine learning to address medical problems is one of the best ways that a computer scientist can impact the world.”

Exposing the hidden constraints of what we mean in what we say

Language shapes our understanding of the world in subtle ways, with slight variations in the words we use conveying sharply different meanings. The sentence, “Elephants live in Africa and Asia,” looks a lot like the sentence “Elephants eat twigs and leaves.” But most readers will conclude that the elephants in the first sentence are split into distinct groups living on separate continents but not apply the same reasoning to the second sentence, because eating twigs and eating leaves can both be true of the same elephant in a way that living on different continents cannot.

Karen Gu is a senior majoring in computer science and molecular biology, but instead of putting cells under a microscope for her SuperUROP project, she chose to look at sentences like the ones above. “I’m fascinated by the complex and subtle things that we do to constrain language understanding, almost all of it subconsciously,” she says.

Working with Roger Levy, a professor in MIT’s Department of Brain and Cognitive Sciences, and postdoc MH Tessler, Gu explored how prior knowledge guides our interpretation of syntax and ultimately, meaning. In the sentences above, prior knowledge about geography and mutual exclusivity interact with syntax to produce different meanings.

After steeping herself in linguistics theory, Gu built a model to explain how, word by word, a given sentence produces meaning. She then ran a set of online experiments to see how human subjects would interpret analogous sentences in a story. Her experiments, she says, largely validated intuitions from linguistic theory.

One challenge, she says, was having to reconcile two approaches for studying language. “I had to figure out how to combine formal linguistics, which applies an almost mathematical approach to understanding how words combine, and probabilistic semantics-pragmatics, which has focused more on how people interpret whole utterances.”

After MIT closed in March, she was able to finish the project from her parents’ home in East Hanover, New Jersey. “Regular meetings with my advisor have been really helpful in keeping me motivated and on track,” she says. She says she also got to improve her web-development skills, which will come in handy when she starts work at Benchling, a San Francisco-based software company, this summer.

Spring semester Quest UROP projects were funded, in part, by the MIT-IBM Watson AI Lab and Eric Schmidt, technical advisor to Alphabet Inc., and his wife, Wendy.
