
Tackling the misinformation epidemic with “In Event of Moon Disaster”

New website from the MIT Center for Advanced Virtuality rewrites an important moment in history to educate the public on the dangers of deepfakes.
To illustrate the dangers of deepfakes (highly realistic manipulated audio and video), a team from the Center for Advanced Virtuality created a website showcasing a “complete” deepfake of President Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon.
Image: MIT Center for Advanced Virtuality
Using sophisticated AI and machine learning technologies, the "In Event of Moon Disaster" team merged Nixon's face with the movements of an actor reading a speech the former president never actually delivered.
Image: MIT Center for Advanced Virtuality
Co-director Halsey Burgund, a fellow at MIT Open Documentary Lab, hopes that the project will help raise awareness of the "significant" role deepfake technology plays in today's media landscape, and says that "with further understanding and diligence, we can all reduce the likelihood of being unduly influenced by it."
Image: MIT Center for Advanced Virtuality

Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be. 

“Media misinformation is a longstanding phenomenon, but, exacerbated by deepfake technologies and the ease of disseminating content online, it’s become a crucial issue of our time,” says D. Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the MIT Center for Advanced Virtuality, part of MIT Open Learning. “With this project — and a course curriculum on misinformation being built around it — our powerfully talented XR Creative Director Francesca Panetta is pushing forward one of the center’s broad aims: using AI and technologies of virtuality to support creative expression and truth.”

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources. 

“This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Panetta.

Also part of the launch is a new documentary, “To Make a Deepfake,” a 30-minute film by Scientific American that uses “In Event of Moon Disaster” as a jumping-off point to explain the technology behind AI-generated media. The documentary features prominent scholars and thinkers on the state of deepfakes, on the stakes for the spread of misinformation and the twisting of our digital reality, and on the future of truth.

The project is supported by the MIT Open Documentary Lab and the Mozilla Foundation, which awarded “In Event of Moon Disaster” a Creative Media Award last year. These awards are part of Mozilla’s mission to realize more trustworthy AI in consumer technology. The latest cohort of awardees uses art and advocacy to examine AI’s effect on media and truth.

Says J. Bob Alotta, Mozilla’s vice president of global programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.” 

“In Event of Moon Disaster” previewed last fall as a physical art installation at the International Documentary Film Festival Amsterdam, where it won the Special Jury Prize for Digital Storytelling; it was selected for the 2020 Tribeca Film Festival and Cannes XR. The new website is the project’s global digital launch, making the film and associated materials available for free to all audiences.

The past few months have seen the world move almost entirely online: schools, talk shows, museums, election campaigns, doctor’s appointments — all have made a rapid transition to virtual. When every interaction we have with the world is seen through a digital filter, it becomes more important than ever to learn how to distinguish between authentic and manipulated media. 

“It’s our hope that this project will encourage the public to understand that manipulated media plays a significant role in our media landscape,” says co-director Burgund, “and that, with further understanding and diligence, we can all reduce the likelihood of being unduly influenced by it."

Press Mentions

The Boston Globe

Writing for The Boston Globe, Prof. D. Fox Harrell, Francesca Panetta and Pakinam Amer of the MIT Center for Advanced Virtuality explore the potential dangers posed by deepfake videos. “Combatting misinformation in the media requires a shared commitment to human rights and dignity — a precondition for addressing many social ills, malevolent deepfakes included,” they write.

Fortune

Researchers at MIT’s Center for Advanced Virtuality have created a deepfake video of President Richard Nixon discussing a failed moon landing. “[The video is] meant to serve as a warning of the coming wave of impressively realistic deepfake false videos about to hit us that use A.I. to convincingly reproduce the appearance and sound of real people,” write Aaron Pressman and David Z. Morris for Fortune.

Boston 25 News

Boston 25’s Chris Flanagan reports that MIT researchers developed a website aimed at educating the public about deepfake technology and misinformation. “This project is part of an awareness campaign to get people aware of what is possible with both AI technologies like our deepfake, but also really simple video editing technologies,” says Francesca Panetta, XR creative director at MIT’s Center for Advanced Virtuality.

Scientific American

Scientific American explores how MIT researchers created a new website aimed at exploring the potential perils and possibilities of deepfakes. “One of the things I most love about this project is that it’s using deepfakes as a medium and the arts to address the issue of misinformation in our society,” says Prof. D. Fox Harrell.

Space.com

MIT researchers created a deepfake video and website to help educate the public about the dangers of deepfakes and misinformation, reports Mike Wall for Space.com. “This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Francesca Panetta, XR creative director at MIT’s Center for Advanced Virtuality.
