
AI, the law, and our future

MIT “Policy Congress” examines the complex terrain of artificial intelligence regulation.

Image caption: Professor Antonio Torralba, MIT director of the MIT-IBM Watson AI Lab and the inaugural director of the MIT Quest for Intelligence, addresses the audience at the MIT AI Policy Congress on Jan. 15. Credit: Caty Fairclough
Image caption: The closing panel of the MIT AI Policy Congress discusses the future of AI regulations. Credit: Caty Fairclough

Scientists and policymakers converged at MIT on Tuesday to discuss one of the hardest problems in artificial intelligence: how to govern it.

The first MIT AI Policy Congress featured seven panel discussions sprawling across a variety of AI applications, and 25 speakers — including two former White House chiefs of staff, former cabinet secretaries, homeland security and defense policy chiefs, industry and civil society leaders, and leading researchers.

Their shared focus: how to harness the opportunities that AI is creating — across areas including transportation and safety, medicine, labor, criminal justice, and national security — while vigorously confronting challenges, including the potential for social bias, the need for transparency, and missteps that could stall AI innovation while exacerbating social problems in the United States and around the world.

“When it comes to AI in areas of public trust, the era of moving fast and breaking everything is over,” said R. David Edelman, director of the Project on Technology, the Economy, and National Security (TENS) at the MIT Internet Policy Research Initiative (IPRI), and a former special assistant to the president for economic and technology policy in the Obama White House.

Added Edelman: “There is simply too much at stake for all of us not to have a say.”

Daniel Weitzner, founding director of IPRI and a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), said a key objective of the dialogue was to help policy analysts feel confident about their ability to actively shape the effects of AI on society.

“I hope the policymakers come away with a clear sense that AI technology is not some immovable object, but rather that the right interaction between computer science, government, and society at large will help shape the development of new technology to address society’s needs,” Weitzner said at the close of the event.

The MIT AI Policy Congress was organized by IPRI, alongside a two-day meeting of the Organization for Economic Cooperation and Development (OECD), the Paris-based intergovernmental association, which is developing AI policy recommendations for 36 countries around the world. As part of the event, OECD experts took part in a half-day, hands-on training session in machine learning, as they trained and tested a neural network under the guidance of Hal Abelson, the Class of 1922 Professor of Computer Science and Engineering at MIT.
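For a rough sense of what such an exercise involves, the sketch below trains and tests a small neural network in Python. The dataset, architecture, and settings are illustrative assumptions, not a record of what the OECD participants actually built.

# A minimal sketch of "training and testing a neural network." The dataset
# (scikit-learn's built-in 8x8 digit images) and all hyperparameters are
# illustrative assumptions, not details of the OECD session.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)

# Hold out a test set so the network is judged on data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A small feed-forward network with one hidden layer of 64 units.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

The held-out test set is the essential step: a trained model is evaluated on examples it never saw during training.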

Tuesday’s forum also began with a primer on the state of the art in AI from Antonio Torralba, a professor in CSAIL and the Department of Electrical Engineering and Computer Science (EECS), and director of the MIT Quest for Intelligence. Noting that “there are so many things going on” in AI, Torralba quipped: “It’s very difficult to know what the future is, but it’s even harder to know what the present is.”

A new “commitment to address ethical issues”

Tuesday’s event, co-hosted by IPRI and the MIT Quest for Intelligence, was held at a time when AI is receiving a significant amount of media attention — and an unprecedented level of financial investment and institutional support.

For its part, MIT announced in October 2018 that it was founding the MIT Stephen A. Schwarzman College of Computing, supported by a $350 million gift from Stephen Schwarzman, which will serve as an interdisciplinary nexus of research and education in computer science, data science, AI, and related fields. The college will also address policy and ethical issues relating to computing.

“Here at MIT, we are at a unique moment with the impending launch of the new MIT Schwarzman College of Computing,” Weitzner noted. “The commitment to address policy and ethical issues in computing will result in new AI research, and curriculum to train students to develop new technology to meet society’s needs.”

Other institutions are making an expanded commitment to AI as well — including the OECD.

“Things are evolving quite quickly,” said Andrew Wyckoff, director for science, technology, and innovation at the OECD. “We need to begin to try to get ahead of that.”

Wyckoff added that AI was a “top three” policy priority for the OECD in 2019-2020, and said the organization was forming a “policy observatory” to produce realistic assessments of AI’s impact, including the issue of automation replacing jobs.

“There’s a lot of fear out there about [workers] being displaced,” said Wyckoff. “We need to look at this and see what is reality, versus what is fear.”

Much of that worry stems more from fear than reality, said Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and a professor at the MIT Sloan School of Management, during a panel discussion on manufacturing and labor.

Compared to the range of skills needed in most jobs, “Today what machine learning can do is much more narrow,” Brynjolfsson said. “I think that’s going to be the status quo for a number of years.”

Brynjolfsson noted that his own research on the subject, evaluating the full range of specific tasks used in a wide variety of jobs, shows that automation tends to replace some but not all of those tasks.

“In not a single one of those occupations did machine learning run the table” of tasks, Brynjolfsson said. “You’re not just going to be able to plug in a machine very often.” However, he noted, the fact that computers can usurp certain tasks means that “reinvention and redesign” will be necessary for many jobs. Still, as Brynjolfsson emphasized, “That process is going to play out over years, if not decades.”
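A toy version of that task-level accounting makes the point concrete. The occupations, tasks, and "suitability for machine learning" scores below are invented for illustration; they are not Brynjolfsson's data or method.

# Toy illustration of task-level analysis of automation (invented data).
# Each occupation is a list of (task, ml_suitability) pairs, where the score
# is a hypothetical 0-1 rating of how well machine learning handles the task.
occupations = {
    "radiologist": [
        ("read routine scans", 0.9),
        ("explain findings to patients", 0.2),
        ("coordinate care with colleagues", 0.1),
    ],
    "loan officer": [
        ("score credit risk", 0.85),
        ("negotiate terms with applicants", 0.3),
        ("verify unusual documentation", 0.4),
    ],
}

THRESHOLD = 0.7  # assume tasks above this score are automatable

for job, tasks in occupations.items():
    automatable = [t for t, score in tasks if score >= THRESHOLD]
    share = len(automatable) / len(tasks)
    # In no occupation does ML "run the table": some tasks always remain.
    print(f"{job}: {share:.0%} of tasks automatable; redesign, not replacement")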

A varied policy landscape

One major idea underscored at the event is that AI policymaking could unfold quite differently from industry to industry. For autonomous vehicles — perhaps the most widely touted application of AI — U.S. states have significant rulemaking power, and laws could vary greatly across state lines.

In a panel discussion on AI and transportation, Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of CSAIL, remarked that she sees transportation “as one of the main targets and one of the main points of adoption for AI technologies in the present and near future.”

Rus suggested that the use of autonomous vehicles in some low-speed, less-complex environments might be possible within five years or so, but she also made clear that autonomous vehicles fare less well in more complicated, higher-speed situations, and struggle in bad weather.

Partly for those reasons, many autonomous vehicles are likely to feature systems that let drivers take over the controls. But as Rus noted, that “depends on people’s ability to take over instantaneously,” while studies currently show that it takes drivers about nine seconds to assume control of their vehicles.

The transportation panel discussion also touched on the use of AI in nautical and aerial systems. In the latter case, “you can’t look into your AI co-pilot’s eyes and judge their confidence,” said John-Paul Clarke, the vice president of strategic technologies at United Technologies, regarding the complex dynamics of human-machine interfaces.

In other industries, fundamental AI challenges involve access to data, a point emphasized by both Torralba and Regina Barzilay, an MIT professor in both CSAIL and EECS. During a panel on health care, Barzilay presented one aspect of her research, which uses machine learning to analyze mammogram results for better early detection of cancer. In Barzilay’s view, key technical challenges in her work that could be addressed by AI policy include access to more data and testing across populations — both of which can help refine automated detection tools.
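Why "testing across populations" matters can be seen in a small, invented example: a detection model's overall accuracy can look respectable while hiding a large gap for one subgroup. All labels, predictions, and groups below are made up.

# Minimal sketch of disaggregated evaluation: aggregate accuracy can mask
# subgroup gaps. Labels, predictions, and groups are invented.
from collections import defaultdict

records = [  # (true_label, predicted_label, population_group)
    (1, 1, "A"), (0, 0, "A"), (1, 1, "A"), (0, 0, "A"),
    (1, 0, "B"), (0, 0, "B"), (1, 0, "B"), (0, 1, "B"),
]

correct = defaultdict(int)
total = defaultdict(int)
for truth, pred, group in records:
    for key in (group, "overall"):
        total[key] += 1
        correct[key] += (truth == pred)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# Group A: 100%; group B: 25%; overall: 62%. The aggregate hides the gap.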

The matter of how best to create access to patient data, however, led to some lively subsequent exchanges. Tom Price, former secretary of health and human services in the Trump administration, suggested that “de-identified data is absolutely the key” to further progress, while some MIT researchers in the audience suggested that it is virtually impossible to create totally anonymous patient data.
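The skeptics' point rests on a well-known style of linkage attack: quasi-identifiers left in a "de-identified" dataset, such as ZIP code, birth year, and sex, can be joined against public records to recover identities. A minimal sketch, with entirely invented rows:

# Toy illustration of re-identification by record linkage. All data invented.
deidentified_medical = [
    {"zip": "02139", "birth_year": 1957, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "02139", "birth_year": 1984, "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "02139", "birth_year": 1957, "sex": "F"},
    {"name": "John Roe", "zip": "02142", "birth_year": 1990, "sex": "M"},
]

# Join the two tables on the shared quasi-identifiers.
for record in deidentified_medical:
    matches = [
        v for v in public_voter_roll
        if (v["zip"], v["birth_year"], v["sex"])
        == (record["zip"], record["birth_year"], record["sex"])
    ]
    if len(matches) == 1:  # a unique match re-identifies the "anonymous" row
        print(f'{matches[0]["name"]} -> {record["diagnosis"]}')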

Jason Furman, a professor of the practice of economic policy at the Harvard Kennedy School and a former chair of the Council of Economic Advisers in the Obama White House, addressed the concern that insurers would deny coverage to people based on AI-generated predictions about which people would most likely develop diseases later in life. Furman suggested that the best solution for this lies outside the AI domain: preventing denial of care based on pre-existing conditions, an element of the Affordable Care Act.

But overall, Furman added, “the real problem with artificial intelligence is we don’t have enough of it.”

For his part, Weitzner suggested that, in lieu of perfectly anonymous medical data, “we should agree on what are the permissible uses and the impermissible uses” of data, since “the right way of enabling innovation and taking privacy seriously is taking accountability seriously.”

Public accountability

For that matter, the accountability of organizations constituted another touchstone of Tuesday’s discussions, especially in a panel on law enforcement and AI.

“Government entities need to be transparent about what they’re doing with respect to AI,” said Jim Baker, Harvard Law School lecturer and former general counsel of the FBI. “I think that’s obvious.”

Carol Rose, executive director of the American Civil Liberties Union’s Massachusetts chapter, warned against overuse of AI tools in law enforcement.

“I think AI has tremendous promise, but it really depends if the data scientists and law enforcement work together,” Rose said, suggesting that a certain amount of “junk science” had already made its way into tools being marketed to law-enforcement officials. Rose also cited Joy Buolamwini of the MIT Media Lab as a leader in the evaluation of such AI tools; Buolamwini founded the Algorithmic Justice League, a group scrutinizing the use of facial recognition technologies.

“Sometimes I worry we have an AI hammer looking for a nail,” Rose said.

All told, as Edelman noted in closing remarks, the policy world consists of “very different bodies of law,” and policymakers will need to ask themselves to what extent general regulations are meaningful, or if AI policy issues are best addressed in more specific ways — whether in medicine, criminal justice, or transportation.  

“Our goal is to see the interconnection among these fields … but as we do, let’s also ask ourselves if ‘AI governance’ is the right frame at all — it might just be that in the near future, all governance deals with AI issues, one way or another,” Edelman said.

Weitzner concluded the conference with a call for governments to continue engagement with the computer science and artificial intelligence technical communities. “The technologies that are shaping the world’s future are being developed today. We have the opportunity to be sure that they serve society’s needs if we keep up this dialogue as a way of informing technical design and cross-disciplinary research.”
