Study finds stronger links between automation and inequality | MIT News

This is part 3 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu. 

Modern technology affects different workers in different ways. In some white-collar jobs — designer, engineer — people become more productive with sophisticated software at their side. In other cases, forms of automation, from robots to phone-answering systems, have simply replaced factory workers, receptionists, and many other kinds of employees.

Now a new study co-authored by an MIT economist suggests automation has a bigger impact on the labor market and income inequality than previous research would indicate — and identifies the year 1987 as a key inflection point in this process, the moment when jobs lost to automation stopped being replaced by an equal number of similar workplace opportunities.

“Automation is critical for understanding inequality dynamics,” says MIT economist Daron Acemoglu, co-author of a newly published paper detailing the findings.

Within industries adopting automation, the study shows, the average “displacement” (or job loss) from 1947 to 1987 was 17 percent of jobs, while the average “reinstatement” (new opportunities) was 19 percent. But from 1987 to 2016, displacement was 16 percent, while reinstatement was just 10 percent. In short, those factory positions or phone-answering jobs are not coming back.
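
A quick back-of-the-envelope check, sketched in Python below using only the averages quoted above, shows why: the net change in opportunities flips from slightly positive in the earlier period to clearly negative in the later one.

```python
# Net effect of automation on job opportunities, using the period averages
# reported in the study (percent of jobs in automating industries).
periods = {
    "1947-1987": {"displacement": 17, "reinstatement": 19},
    "1987-2016": {"displacement": 16, "reinstatement": 10},
}

for period, p in periods.items():
    net = p["reinstatement"] - p["displacement"]
    print(f"{period}: net change of {net:+d} percentage points")

# 1947-1987: net change of +2 percentage points  (losses roughly offset)
# 1987-2016: net change of -6 percentage points  (losses no longer offset)
```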

“A lot of the new job opportunities that technology brought from the 1960s to the 1980s benefitted low-skill workers,” Acemoglu adds. “But from the 1980s, and especially in the 1990s and 2000s, there’s a double whammy for low-skill workers: They’re hurt by displacement, and the new tasks that are coming, are coming slower and benefitting high-skill workers.”

The new paper, “Unpacking Skill Bias: Automation and New Tasks,” will appear in the May issue of the American Economic Association’s Papers and Proceedings. The authors are Acemoglu, who is an Institute Professor at MIT, and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

Low-skill workers: Moving backward

The new paper is one of several studies Acemoglu and Restrepo have conducted recently examining the effects of robots and automation in the workplace. In a just-published paper, they concluded that across the U.S. from 1993 to 2007, each new robot replaced 3.3 jobs.

In still another new paper, Acemoglu and Restrepo examined French industry from 2010 to 2015. They found that firms that quickly adopted robots became more productive and hired more workers, while their competitors fell behind and shed workers — with jobs again being reduced overall.

In the current study, Acemoglu and Restrepo construct a model of technology’s effects on the labor market, while testing the model’s strength by using empirical data from 44 relevant industries. (The study uses U.S. Census statistics on employment and wages, as well as economic data from the Bureau of Economic Analysis and the Bureau of Labor Statistics, among other sources.)

The result is an alternative to the standard economic modeling in the field, which has emphasized the idea of “skill-biased” technological change — meaning that technology tends to benefit high-skilled workers more than low-skill workers, lifting the wages of the former while the value of other workers stagnates. Think again of highly trained engineers who use new software to finish more projects more quickly: They become more productive and valuable, while workers lacking synergy with new technology are comparatively less valued.

However, Acemoglu and Restrepo think even this scenario, with the prosperity gap it implies, is still too benign. Where automation occurs, lower-skill workers are not just failing to make gains; they are actively pushed backward financially. Moreover, Acemoglu and Restrepo note, the standard model of skill-biased change does not fully account for this dynamic; it estimates that productivity gains and real (inflation-adjusted) wages of workers should be higher than they actually are.

More specifically, the standard model implies an estimate of about 2 percent annual growth in productivity since 1963, whereas annual productivity gains have been about 1.2 percent; it also estimates wage growth for low-skill workers of about 1 percent per year, whereas real wages for low-skill workers have actually dropped since the 1970s.

“Productivity growth has been lackluster, and real wages have fallen,” Acemoglu says. “Automation accounts for both of those.” Moreover, he adds, “Demand for skills has gone down almost exclusively in industries that have seen a lot of automation.”

Why “so-so technologies” are so, so bad

Indeed, Acemoglu says, automation is a special case within the larger set of technological changes in the workplace. As he puts it, automation “is different than garden-variety skill-biased technological change,” because it can replace jobs without adding much productivity to the economy.

Think of a self-checkout system in your supermarket or pharmacy: It reduces labor costs without making the task more efficient; the work is simply shifted from paid employees to you, the customer. These kinds of systems are what Acemoglu and Restrepo have termed “so-so technologies,” because of the minimal value they offer.

“So-so technologies are not really doing a fantastic job, nobody’s enthusiastic about going one-by-one through their items at checkout, and nobody likes it when the airline they’re calling puts them through automated menus,” Acemoglu says. “So-so technologies are cost-saving devices for firms that just reduce their costs a little bit but don’t increase productivity by much. They create the usual displacement effect but don’t benefit other workers that much, and firms have no reason to hire more workers or pay other workers more.”

To be sure, not all automation resembles self-checkout systems, which were not around in 1987. Automation at that time consisted more of printed office records being converted into databases, or machinery being added to sectors like textiles and furniture-making. Robots became a more common addition to heavy industrial manufacturing in the 1990s. Automation is a suite of technologies, continuing today with software and AI, which are inherently worker-displacing.

“Displacement is really the center of our theory,” Acemoglu says. “And it has grimmer implications, because wage inequality is associated with disruptive changes for workers. It’s a much more Luddite explanation.”

After all, the Luddites — British textile mill workers who destroyed machinery in the 1810s — may be synonymous with technophobia, but their actions were motivated by economic concerns; they knew machines were replacing their jobs. That same displacement continues today, although, Acemoglu contends, the net negative consequences of technology for jobs are not inevitable. We could, perhaps, find more ways to produce job-enhancing technologies, rather than job-replacing innovations.

“It’s not all doom and gloom,” says Acemoglu. “There is nothing that says technology is all bad for workers. It is the choice we make about the direction to develop technology that is critical.”

Visualizing the world beyond the frame | MIT News

Most firetrucks come in red, but it’s not hard to picture one in blue. Computers aren’t nearly as creative.

Their understanding of the world is colored, often literally, by the data they’ve trained on. If all they’ve ever seen are pictures of red firetrucks, they have trouble drawing anything else.

To give computer vision models a fuller, more imaginative view of the world, researchers have tried feeding them more varied images. Some have tried shooting objects from odd angles, and in unusual positions, to better convey their real-world complexity. Others have asked the models to generate pictures of their own, using a form of artificial intelligence called GANs, or generative adversarial networks. In both cases, the aim is to fill in the gaps of image datasets to better reflect the three-dimensional world and make face- and object-recognition models less biased.

In a new study at the International Conference on Learning Representations, MIT researchers propose a kind of creativity test to see how far GANs can go in riffing on a given image. They “steer” the model into the subject of the photo and ask it to draw objects and animals close up, in bright light, rotated in space, or in different colors.

The model’s creations vary in subtle, sometimes surprising ways. And those variations, it turns out, closely track how creative human photographers were in framing the scenes in front of their lens. Those biases are baked into the underlying dataset, and the steering method proposed in the study is meant to make those limitations visible. 

“Latent space is where the DNA of an image lies,” says study co-author Ali Jahanian, a research scientist at MIT. “We show that you can steer into this abstract space and control what properties you want the GAN to express — up to a point. We find that a GAN’s creativity is limited by the diversity of images it learns from.” Jahanian is joined on the study by co-author Lucy Chai, a PhD student at MIT, and senior author Phillip Isola, the Bonnie and Marty (1964) Tenenbaum Career Development Assistant Professor of Electrical Engineering and Computer Science.
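
For readers who want a concrete picture of what “steering” in latent space involves, the sketch below shows the general recipe in PyTorch under illustrative assumptions: a pretrained generator G (a placeholder here, not the specific models from the study) and a learnable direction w trained so that walking along w in latent space mimics a simple pixel-space edit, such as a brightness change.

```python
# Minimal sketch of latent-space "steering" (hypothetical setup, not the
# authors' exact code): learn a direction w such that G(z + a*w) matches a
# simple pixel-space edit of G(z) -- here, a brightness shift of strength a.
import torch
import torch.nn.functional as F

def brightness_edit(images, alpha):
    """Target edit in pixel space: brighten or darken, clamped to [0, 1]."""
    return (images + alpha).clamp(0.0, 1.0)

def learn_steering_direction(G, latent_dim, steps=1000, batch=8, lr=1e-3):
    """Learn a single linear walk direction in the generator's latent space."""
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        z = torch.randn(batch, latent_dim)                 # random latent codes
        alpha = torch.empty(batch, 1, 1, 1).uniform_(-0.5, 0.5)
        with torch.no_grad():
            target = brightness_edit(G(z), alpha)          # edited originals
        steered = G(z + alpha.view(batch, 1) * w)          # walk in latent space
        loss = F.mse_loss(steered, target)                 # match the edit
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

# Usage, assuming a pretrained generator G mapping latents to [0, 1] images:
# w = learn_steering_direction(G, latent_dim=128)
# brighter = G(z + 1.0 * w)  # how far this stays realistic reveals dataset bias
```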

The researchers applied their method to GANs that had already been trained on ImageNet’s 14 million photos. They then measured how far the models could go in transforming different classes of animals, objects, and scenes. The level of artistic risk-taking, they found, varied widely by the type of subject the GAN was trying to manipulate. 

For example, a rising hot air balloon generated more striking poses than, say, a rotated pizza. The same was true for zooming out on a Persian cat rather than a robin, with the cat melting into a pile of fur the farther it recedes from the viewer while the bird stays virtually unchanged. The model happily turned a car blue, and a jellyfish red, they found, but it refused to draw a goldfinch or firetruck in anything but their standard-issue colors. 

The GANs also seemed astonishingly attuned to some landscapes. When the researchers bumped up the brightness on a set of mountain photos, the model whimsically added fiery eruptions to a volcano, but not to a geologically older, dormant relative in the Alps. It’s as if the GANs picked up on how lighting changes as day slips into night, and understood that only volcanoes grow brighter after dark.

The study is a reminder of just how deeply the outputs of deep learning models hinge on their data inputs, researchers say. GANs have caught the attention of intelligence researchers for their ability to extrapolate from data, and visualize the world in new and inventive ways. 

They can take a headshot and transform it into a Renaissance-style portrait or a favorite celebrity. But though GANs are capable of learning surprising details on their own, like how to divide a landscape into clouds and trees, or generate images that stick in people’s minds, they are still mostly slaves to data. Their creations reflect the biases of thousands of photographers, both in what they chose to shoot and in how they framed their subjects.

“What I like about this work is it’s poking at representations the GAN has learned, and pushing it to reveal why it made those decisions,” says Jaakko Lehtinen, a professor at Finland’s Aalto University and a research scientist at NVIDIA who was not involved in the study. “GANs are incredible, and can learn all kinds of things about the physical world, but they still can’t represent images in physically meaningful ways, as humans can.”

Marshaling artificial intelligence in the fight against Covid-19 | MIT News

Artificial intelligence could play a decisive role in stopping the Covid-19 pandemic. To give the technology a push, the MIT-IBM Watson AI Lab is funding 10 projects at MIT aimed at advancing AI’s transformative potential for society. The research will target the immediate public health and economic challenges of this moment. But it could have a lasting impact on how we evaluate and respond to risk long after the crisis has passed. The 10 research projects are highlighted below.

Early detection of sepsis in Covid-19 patients 

Sepsis is a deadly complication of Covid-19, the disease caused by the new coronavirus SARS-CoV-2. About 10 percent of Covid-19 patients get sick with sepsis within a week of showing symptoms, but only about half survive.

Identifying patients at risk for sepsis can lead to earlier, more aggressive treatment and a better chance of survival. Early detection can also help hospitals prioritize intensive-care resources for their sickest patients. In a project led by MIT Professor Daniela Rus, researchers will develop a machine learning system to analyze images of patients’ white blood cells for signs of an activated immune response against sepsis.

Designing proteins to block SARS-CoV-2

Proteins are the basic building blocks of life, and with AI, researchers can explore and manipulate their structures to address longstanding problems. Take perishable food: The MIT-IBM Watson AI Lab recently used AI to discover that a silk protein made by honeybees could double as a coating for quick-to-rot foods to extend their shelf life.

In a related project led by MIT professors Benedetto Marelli and Markus Buehler, researchers will enlist the protein-folding method used in their honeybee-silk discovery to try to defeat the new coronavirus. Their goal is to design proteins able to block the virus from binding to human cells, and to synthesize and test their unique protein creations in the lab.

Saving lives while restarting the U.S. economy

Some states are reopening for business even as questions remain about how to protect those most vulnerable to the coronavirus. In a project led by MIT professors Daron Acemoglu, Simon Johnson, and Asu Ozdaglar, researchers will model the effects of targeted lockdowns on the economy and public health.

In a recent working paper co-authored by Acemoglu, Victor Chernozhukov, Ivan Werning, and Michael Whinston, MIT economists analyzed the relative risk of infection, hospitalization, and death for different age groups. When they compared uniform lockdown policies against those targeted to protect seniors, they found that a targeted approach could save more lives. Building on this work, researchers will consider how antigen tests and contact tracing apps can further reduce public health risks.
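
As a purely illustrative sketch of the kind of modeling such comparisons involve (not the researchers' actual model), the toy multi-group SIR simulation below gives each age group its own fatality risk and its own lockdown intensity, so uniform and targeted policies can be compared on both deaths and a crude proxy for lost economic activity.

```python
# Toy multi-group SIR model (an illustrative sketch only, not the MIT model):
# compare death toll and a crude economic-cost proxy under a uniform lockdown
# versus one targeted at the higher-risk group. All parameters are made up.
import numpy as np

def simulate(contact_reduction, pop=(0.8, 0.2), ifr=(0.001, 0.05),
             beta=0.3, gamma=0.1, days=300):
    """contact_reduction: per-group lockdown strength in [0, 1]."""
    reduction = np.asarray(contact_reduction, dtype=float)
    pop = np.asarray(pop, dtype=float)
    ifr = np.asarray(ifr, dtype=float)
    activity = 1.0 - reduction
    S, I, R = pop.copy(), np.full(len(pop), 1e-4), np.zeros(len(pop))
    deaths = 0.0
    for _ in range(days):
        # Force of infection: own activity times total infectious activity.
        lam = beta * activity * np.sum(activity * I)
        new_inf = lam * S
        recovered = gamma * I
        S -= new_inf
        I += new_inf - recovered
        R += recovered * (1.0 - ifr)
        deaths += float(np.sum(recovered * ifr))
    cost = float(np.dot(pop, reduction))   # crude proxy for lost activity
    return deaths, cost

for name, policy in [("uniform ", [0.5, 0.5]), ("targeted", [0.3, 0.8])]:
    d, c = simulate(policy)
    print(f"{name} lockdown: deaths = {d:.4%} of population, cost proxy = {c:.2f}")
```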

Which materials make the best face masks?

Massachusetts and six other states have ordered residents to wear face masks in public to limit the spread of coronavirus. But apart from the coveted N95 mask, which traps 95 percent of airborne particles 300 nanometers or larger, the effectiveness of many masks remains unclear due to a lack of standardized methods to evaluate them.

In a project led by MIT Associate Professor Lydia Bourouiba, researchers are developing a rigorous set of methods to measure how well homemade and medical-grade masks do at blocking the tiny droplets of saliva and mucus expelled during normal breathing, coughs, or sneezes. The researchers will test materials worn alone and together, and in a variety of configurations and environmental conditions. Their methods and measurements will determine how well materials protect mask wearers and the people around them.

Treating Covid-19 with repurposed drugs

As Covid-19’s global death toll mounts, researchers are racing to find a cure among already-approved drugs. Machine learning can expedite screening by letting researchers quickly predict if promising candidates can hit their target.

In a project led by MIT Assistant Professor Rafael Gomez-Bombarelli, researchers will represent molecules in three dimensions to see if this added spatial information can help to identify drugs most likely to be effective against the disease. They will use NASA’s Ames and the U.S. Department of Energy’s NERSC supercomputers to further speed the screening process.
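
To make “representing molecules in three dimensions” concrete, here is a small, hedged illustration using the open-source RDKit toolkit (the project’s actual tools and pipeline are not specified in this account); aspirin stands in as a placeholder molecule whose flat chemical description is converted into 3D atomic coordinates that a spatially aware screening model could ingest.

```python
# Illustrative only: generate a 3D conformer for a molecule with RDKit.
# The SMILES string below is aspirin, used as a stand-in for any candidate.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"        # aspirin (placeholder candidate)
mol = Chem.MolFromSmiles(smiles)
mol = Chem.AddHs(mol)                       # add explicit hydrogens
AllChem.EmbedMolecule(mol, randomSeed=42)   # generate 3D coordinates
AllChem.MMFFOptimizeMolecule(mol)           # relax geometry with a force field

conf = mol.GetConformer()
coords = [(atom.GetSymbol(), conf.GetAtomPosition(i).x,
           conf.GetAtomPosition(i).y, conf.GetAtomPosition(i).z)
          for i, atom in enumerate(mol.GetAtoms())]
print(coords[:3])   # (element, x, y, z) tuples a 3D model could consume
```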

A privacy-first approach to automated contact tracing

Smartphone data can help limit the spread of Covid-19 by identifying people who have come into contact with someone infected with the virus, and thus may have caught the infection themselves. But automated contact tracing also carries serious privacy risks.

In collaboration with MIT Lincoln Laboratory and others, MIT researchers Ronald Rivest and Daniel Weitzner will use encrypted Bluetooth data to ensure personally identifiable information remains anonymous and secure.
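
The general flavor of privacy-first proximity tracing can be sketched as follows; this is a simplified illustration of the broad approach, not the researchers' actual protocol. Phones broadcast short-lived random tokens, a user who tests positive publishes only the tokens their own phone sent, and every other phone checks locally whether it overheard any of them, so no central party learns who met whom.

```python
# Simplified sketch of privacy-preserving proximity tracing (illustrative,
# not the actual protocol): phones broadcast rotating random tokens, and
# exposure checks happen entirely on the listener's own device.
import secrets

class Phone:
    def __init__(self):
        self.sent_tokens = []       # tokens this phone has broadcast
        self.heard_tokens = set()   # tokens overheard from nearby phones

    def broadcast(self):
        token = secrets.token_hex(16)   # fresh random ID, no identity inside
        self.sent_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)

    def check_exposure(self, published_tokens):
        # Matching happens locally; only the positive user's tokens are public.
        return bool(self.heard_tokens & set(published_tokens))

alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast())            # Bob was near Alice
carol.hear(bob.broadcast())            # Carol was near Bob, not Alice

published = alice.sent_tokens          # Alice tests positive, publishes tokens
print(bob.check_exposure(published))   # True  -> Bob is notified
print(carol.check_exposure(published)) # False -> Carol is not
```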

Overcoming manufacturing and supply hurdles to provide global access to a coronavirus vaccine

A vaccine against SARS-CoV-2 would be a crucial turning point in the fight against Covid-19. Yet, its potential impact will be determined by the ability to rapidly and equitably distribute billions of doses globally. This is an unprecedented challenge in biomanufacturing. 

In a project led by MIT professors Anthony Sinskey and Stacy Springs, researchers will build data-driven statistical models to evaluate tradeoffs in scaling the manufacture and supply of vaccine candidates. Questions include how much production capacity will need to be added, the impact of centralized versus distributed operations, and how to design strategies for fair vaccine distribution. The goal is to give decision-makers the evidence needed to cost-effectively achieve global access.

Leveraging electronic medical records to find a treatment for Covid-19

Developed as a treatment for Ebola, the anti-viral drug remdesivir is now in clinical trials in the United States as a treatment for Covid-19. Similar efforts to repurpose already-approved drugs to treat or prevent the disease are underway.

In a project led by MIT professors Roy Welsch and Stan Finkelstein, researchers will use statistics, machine learning, and simulated clinical drug trials to find and test already-approved drugs as potential therapeutics against Covid-19. Researchers will sift through millions of electronic health records and medical claims for signals indicating that drugs used to fight chronic conditions like hypertension, diabetes, and gastric reflux might also work against Covid-19 and other diseases.

Finding better ways to treat Covid-19 patients on ventilators 

Troubled breathing from acute respiratory distress syndrome is one of the complications that brings Covid-19 patients to the ICU. There, life-saving machines help patients breathe by mechanically pumping oxygen into the lungs. But even as towns and cities lower their Covid-19 infections through social distancing, there remains a national shortage of mechanical ventilators and serious health risks of ventilation itself.

In collaboration with IBM researchers Zach Shahn and Daby Sow, MIT researchers Li-Wei Lehman and Roger Mark will develop an AI tool to help doctors find better ventilator settings for Covid-19 patients and decide how long to keep them on a machine. Shortened ventilator use can limit lung damage while freeing up machines for others. To build their models, researchers will draw on data from intensive-care patients with acute respiratory distress syndrome, as well as Covid-19 patients at a local Boston hospital.

Returning to normal via targeted lockdowns, personalized treatments, and mass testing

In a few short months, Covid-19 has devastated towns and cities around the world. Researchers are now piecing together the data to understand how government policies can limit new infections and deaths and how targeted policies might protect the most vulnerable.

In a project led by MIT Professor Dimitris Bertsimas, researchers will study the effects of lockdowns and other measures meant to reduce new infections and deaths and prevent the health-care system from being swamped. In a second phase of the project, they will develop machine learning models to predict how vulnerable a given patient is to Covid-19, and what personalized treatments might be most effective. They will also develop an inexpensive, spectroscopy-based test for Covid-19 that can deliver results in minutes and pave the way for mass testing. The project will draw on clinical data from four hospitals in the United States and Europe, including Codogno Hospital, which reported Italy’s first infection.
