
Artificial intelligence


New York Times

Prof. Christopher Knittel speaks with New York Times reporter Claire Brown about the development of AI data centers and the potential for increased utility costs. “If it’s just a few industrial customers with behind-the-meter power plants, it doesn’t really matter,” says Knittel. [As data centers grow and expand] “these things are going to matter so much. We can get it right, but sadly, too, if we don’t do it right, we can get it really wrong.”

The Boston Globe

“In Event of Moon Disaster,” a short deepfake film on display at the MIT Museum’s “AI: Mind the Gap” exhibit depicts an alternate reality where the Apollo 11 mission ended in disaster, reports Mark Feeney for The Boston Globe. The “unnervingly realistic deepfake” depicts President Richard Nixon addressing the nation regarding the failed mission. The film “manages to be both frightening, in showing how convincing deepfakes can be, and, however paradoxically, inspiring,” writes Feeney. 

The Guardian

Prof. Pat Pataranutaporn speaks with The Guardian reporter Andrew Gregory about the lack of safety warnings and disclaimers in AI overviews, specifically in AI-generated health materials. “The absence of disclaimers when users are initially served medical information creates several critical dangers,” says Pataranutaporn. “Disclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive.”

The Atlantic

Writing for The Atlantic, Prof. Deb Roy explores the impact of chatbots on language and learning development. “The ordinary forces that tether speech to consequence—social sanction, legal penalty, reputational loss—presuppose a continuous agent whose future can be made worse by what they say,” writes Roy. “With LLMs, there is no such locus. …When the speaker is an LLM, the human stakes that ordinarily anchor speech have nowhere to attach.” 

Bloomberg

Prof. David Autor speaks with Bloomberg reporter David Westin about the shift toward automation in the workforce and the impact on workers. “There are many ways for us to use AI,” says Autor. “It’s incredibly flexible, malleable, plastic technology. You could use it to try to automate people out of existence. You could also use it to collaborate with people to make them more effective. But I also think that it depends on how we invest, how we build out those technologies.”

Fast Company

Jerry Lu MFin ’24 speaks with Fast Company reporter Grace Snelling about his work developing a new AI tool that can be used to help figure skaters land their jumps and Olympic audiences better understand just how challenging a quadruple Axel is. “Some of the artistic sports were missing this data-driven storytelling ability—if you watch hockey on TV, it looks slow, but if you watch it in person, it looks fast,” Lu explains. 

Forbes

President Sally Kornbluth and MIT Corporation member Noubar Afeyan PhD '87 served as panelists at the 2026 Davos Imagination in Action event to discuss “upholding scientific principles in the era of LLMs,” reports John Werner for Forbes. “We want all of our students to have a foundational facility with AI,” said Kornbluth. “What we want them to know, now, is how they can really be passionate about the content that they care about, whether it's materials design, whether it's aerospace, whether it's biochemical innovation, and understanding the many ways in which AI can help in that innovation.”

Venture Beat

Researchers at MIT have “developed a new technique that enables large language models to learn new skills and knowledge without forgetting their past capabilities,” reports Ben Dickson for Venture Beat. “Their technique, called self-distillation fine-tuning (SDFT), allows models to learn directly from demonstrations and their own experiments by leveraging the inherent in-context learning abilities of modern LLMs,” explains Dickson. “Experiments show that SDFT consistently outperforms traditional supervised fine-tuning (SFT) while addressing the limitations of reinforcement learning algorithms.” 

The Wall Street Journal

Prof. Andrew Lo speaks with Wall Street Journal reporter Peter Coy about why he feels current AI systems aren’t suited to serving as financial advisers and his goal to create “an AI financial adviser that is a true fiduciary—namely, an entity that always puts the client’s interests first and tailors its advice to their particular needs, including emotional needs.” Lo notes: “The AI people are using now can be dangerous, especially if the user isn’t fully aware of the biases, inaccuracies and other limits” of large language models.

New York Times

Senior Research Scientist Leo Anthony Celi speaks with New York Times reporter Gina Kolata about the use of AI in health care. “The real concern isn’t AI itself,” says Celi. “It’s that AI is being deployed to optimize a profoundly broken system rather than to reimagine it.”

USA Today

USA Today reporter Dinah Voyles Pulver spotlights Research Scientist Judah Cohen’s research studying how weather systems and climate patterns are related to the increase in Arctic blasts and deep freezes this winter. 

New York Times

Research Scientist Judah Cohen speaks with New York Times reporter Eric Niiler about his research studying “how global warming might also be causing colder winters in the eastern United States.” Cohen says, “It’s weird what’s going on now in the stratosphere. These stretching events happen every winter, but just how the pattern is stuck is really remarkable.”

Forbes

Prof. Olivier de Weck speaks with Forbes reporter Alex Knapp about the challenges and opportunities posed by building data centers in space. Data centers are “physically secure from intrusion and environmentally friendly once operational,” says de Weck. “Essentially, the three primary resources required on Earth—land, power, and cooling—are available ‘for free’ in space after the initial launch and deployment costs are covered.”

The Boston Globe

Prof. Marzyeh Ghassemi and Monica Agrawal PhD '23 speak with Boston Globe reporter Hiawatha Bray about the risks of relying solely on AI for medical information. “What I’m really, really worried about is economically disadvantaged communities,” says Ghassemi. “You might not have access to a health care professional who you can quickly call and say, ‘Hey… Should I listen to this?’”

GBH

Prof. David Karger speaks with GBH’s Morning Edition host Mark Herz about the rapid development of new AI tools, the need for generative AI regulation, and the importance of transparency when it comes to AI-generated content. “I think we need to involve more entities, more people, more sources in the fact-checking process,” says Karger. “We need to figure out how to ensure that the fact checking can propagate into the platforms, even though the platforms are not doing the fact checking themselves.”