Tackling the misinformation epidemic with “In Event of Moon Disaster” | MIT News

Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be.

“Media misinformation is a longstanding phenomenon, but, exacerbated by deepfake technologies and the ease of disseminating content online, it’s become a crucial issue of our time,” says D. Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the MIT Center for Advanced Virtuality, part of MIT Open Learning. “With this project — and a course curriculum on misinformation being built around it — our powerfully talented XR Creative Director Francesca Panetta is pushing forward one of the center’s broad aims: using AI and technologies of virtuality to support creative expression and truth.”

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources. 

“This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Panetta.

Also part of the launch is a new documentary, “To Make a Deepfake,” a 30-minute film by Scientific American that uses “In Event of Moon Disaster” as a jumping-off point to explain the technology behind AI-generated media. The documentary features prominent scholars and thinkers on the state of deepfakes, on the stakes for the spread of misinformation and the twisting of our digital reality, and on the future of truth.

The project is supported by the MIT Open Documentary Lab and the Mozilla Foundation, which awarded “In Event of Moon Disaster” a Creative Media Award last year. These awards are part of Mozilla’s mission to realize more trustworthy AI in consumer technology. The latest cohort of awardees uses art and advocacy to examine AI’s effect on media and truth.

Says J. Bob Alotta, Mozilla’s vice president of global programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.”

“In Event of Moon Disaster” previewed last fall as a physical art installation at the International Documentary Film Festival Amsterdam, where it won the Special Jury Prize for Digital Storytelling; it was selected for the 2020 Tribeca Film Festival and Cannes XR. The new website is the project’s global digital launch, making the film and associated materials available for free to all audiences.

The past few months have seen the world move almost entirely online: schools, talk shows, museums, election campaigns, doctor’s appointments — all have made a rapid transition to virtual. When every interaction we have with the world is seen through a digital filter, it becomes more important than ever to learn how to distinguish between authentic and manipulated media. 

“It’s our hope that this project will encourage the public to understand that manipulated media plays a significant role in our media landscape,” says co-director Burgund, “and that, with further understanding and diligence, we can all reduce the likelihood of being unduly influenced by it.”

MIT-Takeda program launches | MIT News

In February, researchers from MIT and Takeda Pharmaceuticals joined together to celebrate the official launch of the MIT-Takeda Program. The MIT-Takeda Program aims to fuel the development and application of artificial intelligence (AI) capabilities to benefit human health and drug development. Centered within the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the program brings together the MIT School of Engineering and Takeda Pharmaceuticals to combine knowledge and address challenges of mutual interest.

Following a competitive proposal process, nine inaugural research projects were selected. The program’s flagship research projects include principal investigators from departments and labs spanning the School of Engineering and the Institute. Research includes diagnosis of diseases, prediction of treatment response, development of novel biomarkers, process control and improvement, drug discovery, and clinical trial optimization.

“We were truly impressed by the creativity and breadth of the proposals we received,” says Anantha P. Chandrakasan, dean of the School of Engineering, Vannevar Bush Professor of Electrical Engineering and Computer Science, and co-chair of the MIT-Takeda Program Steering Committee.

Engaging with researchers and industry experts from Takeda, each project team will bring together different disciplines, merging theory and practical implementation, while combining algorithm and platform innovations.

“This is an incredible opportunity to merge the cross-disciplinary and cross-functional expertise of both MIT and Takeda researchers,” says Chandrakasan. “This particular collaboration between academia and industry is of great significance as our world faces enormous challenges pertaining to human health. I look forward to witnessing the evolution of the program and the impact its research aims to have on our society.” 

“The shared enthusiasm and combined efforts of researchers from across MIT and Takeda have the opportunity to shape the future of health care,” says Anne Heatherington, senior vice president and head of the Data Sciences Institute (DSI) at Takeda, and co-chair of the MIT-Takeda Program Steering Committee. “Together we are building capabilities and addressing challenges, through interrogation of multiple data types, that we have not been able to solve with the power of humans alone, and that have the potential to benefit both patients and the greater community.”

The following are the inaugural projects of the MIT-Takeda Program. Included are the MIT teams collaborating with Takeda researchers, who are leveraging AI to positively impact human health.

“AI-enabled, automated inspection of lyophilized products in sterile pharmaceutical manufacturing”: Duane Boning, the Clarence J. LeBel Professor of Electrical Engineering and faculty co-director of the Leaders for Global Operations program; Luca Daniel, professor of electrical engineering and computer science; Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president for open learning; and Brian Subirana, research scientist and director of the MIT Auto-ID Laboratory within the Department of Mechanical Engineering.

“Automating adverse effect assessments and scientific literature review”: Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Jameel Clinic faculty co-lead; Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society; and Jacob Andreas, assistant professor of electrical engineering and computer science.

“Automated analysis of speech and language deficits for frontotemporal dementia”: James Glass, senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory; Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president for open learning; and Brian Subirana, research scientist and director of the MIT Auto-ID Laboratory within the Department of Mechanical Engineering.

“Discovering human-microbiome protein interactions with continuous distributed representation”: Jim Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science and Department of Biological Engineering, Jameel Clinic faculty co-lead, and MIT-Takeda Program faculty lead; and Timothy Lu, associate professor of electrical engineering and computer science and of biological engineering.

“Machine learning for early diagnosis, progression risk estimation, and identification of non-responders to conventional therapy for inflammatory bowel disease”: Peter Szolovits, professor of computer science and engineering, and David Sontag, associate professor of electrical engineering and computer science.

“Machine learning for image-based liver phenotyping and drug discovery”: Polina Golland, professor of electrical engineering and computer science; Brian W. Anthony, principal research scientist in the Department of Mechanical Engineering; and Peter Szolovits, professor of computer science and engineering.

“Predictive in silico models for cell culture process development for biologics manufacturing”: Connor W. Coley, assistant professor of chemical engineering, and J. Christopher Love, the Raymond A. (1921) and Helen E. St. Laurent Professor of Chemical Engineering.

“Automated data quality monitoring for clinical trial oversight via probabilistic programming”: Vikash Mansinghka, principal research scientist in the Department of Brain and Cognitive Sciences; Tamara Broderick, associate professor of electrical engineering and computer science; David Sontag, associate professor of electrical engineering and computer science; Ulrich Schaechtle, research scientist in the Department of Brain and Cognitive Sciences; and Veronica Weiner, director of special projects for the MIT Probabilistic Computing Project.

“Time series analysis from video data for optimizing and controlling unit operations in production and manufacturing”: Allan S. Myerson, professor of chemical engineering; George Barbastathis, professor of mechanical engineering; Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering; and Bernhardt Trout, the Raymond F. Baddour, ScD, (1949) Professor of Chemical Engineering.

“The flagship research projects of the MIT-Takeda Program offer real promise to the ways we can impact human health,” says Jim Collins. “We are delighted to have the opportunity to collaborate with Takeda researchers on advances that leverage AI and aim to shape health care around the globe.”

MIT and Toyota release innovative dataset to accelerate autonomous driving research | MIT News

The following was issued as a joint release from the MIT AgeLab and Toyota Collaborative Safety Research Center.

How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations?

These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg.

Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information.

“In sharing this dataset, we hope to encourage researchers, the industry, and other innovators to develop new insight and direction into temporal AI modeling that enables the next generation of assisted driving and automotive safety technologies,” says Bryan Reimer, principal researcher. “Our longstanding working relationship with Toyota CSRC has enabled our research efforts to impact future safety technologies.”

“Predictive power is an important part of human intelligence,” says Rini Sherony, Toyota CSRC’s senior principal engineer. “Whenever we drive, we are always tracking the movements of the environment around us to identify potential risks and make safer decisions. By sharing this dataset, we hope to accelerate research into autonomous driving systems and advanced safety features that are more attuned to the complexity of the environment around them.”

To date, self-driving data made available to the research community have primarily consisted of troves of static, single images that can be used to identify and track common objects found in and around the road, such as bicycles, pedestrians, or traffic lights, through the use of “bounding boxes.” By contrast, DriveSeg contains more precise, pixel-level representations of many of these same common road objects, but through the lens of a continuous video driving scene. This type of full-scene segmentation can be particularly helpful for identifying more amorphous objects — such as road construction and vegetation — that do not always have such defined and uniform shapes.
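
To make the distinction concrete, the short Python sketch below contrasts the two annotation styles. It is illustrative only: the class names, image dimensions, and coordinates are assumptions made for the example, not the DriveSeg label set or format.

```python
import numpy as np

# A bounding-box annotation localizes an object with just four coordinates.
# (Illustrative values only.)
bounding_box = {"label": "pedestrian", "x_min": 412, "y_min": 208, "x_max": 460, "y_max": 330}

# Full-scene (pixel-level) segmentation instead assigns a class ID to every pixel,
# which is what lets amorphous regions such as vegetation or road construction
# get exact outlines rather than rectangles.
CLASSES = ["void", "road", "sidewalk", "pedestrian", "vehicle", "vegetation"]  # assumed names
height, width = 720, 1280
mask = np.zeros((height, width), dtype=np.uint8)        # class 0 ("void") everywhere
mask[500:, :] = CLASSES.index("road")                   # lower part of the frame labeled road
mask[208:330, 412:460] = CLASSES.index("pedestrian")    # the same pedestrian, marked per pixel

# Per-pixel labels make it easy to measure how much of the scene each class occupies.
for class_id, name in enumerate(CLASSES):
    share = (mask == class_id).mean()
    if share:
        print(f"{name}: {share:.1%} of the frame")
```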

According to Sherony, video-based driving scene perception provides a flow of data that more closely resembles dynamic, real-world driving situations. It also allows researchers to explore data patterns as they play out over time, which could lead to advances in machine learning, scene understanding, and behavioral prediction.

DriveSeg is available for free and can be used by researchers and the academic community for non-commercial purposes at the links below. The data comprises two parts. DriveSeg (manual) is 2 minutes and 47 seconds of high-resolution video captured during a daytime trip around the busy streets of Cambridge, Massachusetts. The video’s 5,000 frames are densely annotated manually with per-pixel human labels of 12 classes of road objects.

DriveSeg (Semi-auto) is 20,100 video frames (67 10-second video clips) drawn from MIT Advanced Vehicle Technologies (AVT) Consortium data. DriveSeg (Semi-auto) is labeled with the same pixel-wise semantic annotation as DriveSeg (manual), except annotations were completed through a novel semiautomatic annotation approach developed by MIT. This approach leverages both manual and computational efforts to coarsely annotate data more efficiently at a lower cost than manual annotation. This dataset was created to assess the feasibility of annotating a wide range of real-world driving scenarios and assess the potential of training vehicle perception systems on pixel labels created through AI-based labeling systems.
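
As a rough illustration of how a frame-plus-mask dataset like this might be consumed, the sketch below walks through a video frame by frame alongside per-frame annotation masks. The paths, file naming, and mask encoding shown here are hypothetical assumptions for the example; the actual file layout, label encoding, and permitted uses are documented on the DriveSeg dataset page referenced below.

```python
import cv2  # OpenCV, for video and image I/O
import numpy as np

# Hypothetical layout: the real file structure and label encoding are defined
# on the DriveSeg dataset page, not here.
VIDEO_PATH = "driveseg_manual/video.mp4"
MASK_DIR = "driveseg_manual/masks"   # assumed: one grayscale PNG of class IDs per frame
NUM_CLASSES = 12                     # DriveSeg (manual) annotates 12 classes of road objects

capture = cv2.VideoCapture(VIDEO_PATH)
frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Load the per-pixel annotation aligned with this frame (values 0..NUM_CLASSES-1).
    mask = cv2.imread(f"{MASK_DIR}/{frame_index:05d}.png", cv2.IMREAD_GRAYSCALE)
    if mask is None:
        break
    assert mask.shape == frame.shape[:2], "mask and frame must align pixel for pixel"
    # Example use: count how many pixels of each class appear in this frame.
    class_pixel_counts = np.bincount(mask.ravel(), minlength=NUM_CLASSES)
    frame_index += 1

capture.release()
print(f"Processed {frame_index} annotated frames")
```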

To learn more about the technical specifications and permitted use-cases for the data, visit the DriveSeg dataset page.
