Topic

Technology and society


The Guardian

Prof. Pat Pataranutaporn speaks with The Guardian reporter Andrew Gregory about the lack of safety warnings and disclaimers in AI overviews, specifically in AI-generated health materials. “The absence of disclaimers when users are initially served medical information creates several critical dangers,” says Pataranutaporn. “Disclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive.”

Aesthetica Magazine

Aesthetica Magazine reporter Eleanor Sutherland spotlights “Freezing Time,” a new exhibit at the MIT Museum featuring the work of Harold “Doc” Edgerton, a “pioneer of high-speed imaging who made it possible to see what the human eye cannot.” This is “the first exhibition to really interrogate Edgerton’s experimental journey in developing his innovative image-making processes,” says Michael John Gorman, director of the MIT Museum. 

The Atlantic

Writing for The Atlantic, Prof. Deb Roy explores the impact of chatbots on language and learning development. “The ordinary forces that tether speech to consequence—social sanction, legal penalty, reputational loss—presuppose a continuous agent whose future can be made worse by what they say,” writes Roy. “With LLMs, there is no such locus. …When the speaker is an LLM, the human stakes that ordinarily anchor speech have nowhere to attach.” 

Bloomberg

Prof. David Autor speaks with Bloomberg reporter David Westin about the shift toward automation in the workforce and the impact on workers. “There are many ways for us to use AI,” says Autor. “It’s incredibly flexible, malleable, plastic technology. You could use it to try to automate people out of existence. You could also use it to collaborate with people to make them more effective. But I also think that it depends on how we invest, how we build out those technologies.” 

Forbes

President Sally Kornbluth and MIT Corporation member Noubar Afeyan PhD '87 served as panelists at the 2026 Davos Imagination in Action event to discuss “upholding scientific principles in the era of LLMs,” reports John Werner for Forbes. “We want all of our students to have a foundational facility with AI,” said Kornbluth. “What we want them to know, now, is how they can really be passionate about the content that they care about, whether it's materials design, whether it's aerospace, whether it's biochemical innovation, and understanding the many ways in which AI can help in that innovation.”

VentureBeat

Researchers at MIT have “developed a new technique that enables large language models to learn new skills and knowledge without forgetting their past capabilities,” reports Ben Dickson for VentureBeat. “Their technique, called self-distillation fine-tuning (SDFT), allows models to learn directly from demonstrations and their own experiments by leveraging the inherent in-context learning abilities of modern LLMs,” explains Dickson. “Experiments show that SDFT consistently outperforms traditional supervised fine-tuning (SFT) while addressing the limitations of reinforcement learning algorithms.”

The Boston Globe

“No photographer so clearly, or memorably, demonstrated the relationship between time and technology as did Harold ‘Doc’ Edgerton,” writes Boston Globe reporter Mark Feeney. “The stroboscopic cameras he developed...could register almost-infinitesimal gradations of motion.” A new exhibit at the MIT Museum called “Freezing Time: Edgerton and the Beauty of the Machine Age” showcases the breadth of Edgerton’s work, featuring “20 Edgerton photographs, some later works by others inspired by his example, a dozen pages from his notebooks, a selection of his photographic equipment.”

The Wall Street Journal

Prof. Andrew Lo speaks with Wall Street Journal reporter Peter Coy about why he feels current AI systems aren’t suited to serving as financial advisors and his goal to create “an AI financial adviser that is a true fiduciary—namely, an entity that always puts the client’s interests first and tailors its advice to their particular needs, including emotional needs.” Lo notes: “The AI people are using now can be dangerous, especially if the user isn’t fully aware of the biases, inaccuracies and other limits” of large language models.

The New York Times

Senior Research Scientist Leo Anthony Celi speaks with New York Times reporter Gina Kolata about the use of AI in health care. “The real concern isn’t AI itself,” says Celi. “It’s that AI is being deployed to optimize a profoundly broken system rather than to reimagine it.”

USA Today

USA Today reporter Dinah Voyles Pulver spotlights Research Scientist Judah Cohen’s research studying how weather systems and climate patterns are related to the increase in Arctic blasts and deep freezes this winter. 

The New York Times

Research Scientist Judah Cohen speaks with New York Times reporter Eric Niiler about his research studying “how global warming might also be causing colder winters in the eastern United States.” Cohen says, “It’s weird what’s going on now in the stratosphere. These stretching events happen every winter, but just how the pattern is stuck is really remarkable.”

NBC

Prof. Carlo Ratti speaks with Matt Fortin of NBC Boston about his work designing this year’s Olympic torch. “For us it’s very exciting to do this,” says Ratti, “because it’s a way you can actually push design beyond what you normally do.”

Popular Science

The torch for this year’s Winter Olympics was designed by Prof. Carlo Ratti, reports Laura Baisas for Popular Science. Dubbed “Essential,” the torch clocks in at just under 2.5 pounds, and “boasts a unique internal mechanism that can be seen through a vertical opening along its side. This means that audiences can peek inside and see the burner in action. From a design perspective, that reinforces Ratti’s desire to keep the emphasis on the flame itself and not the object.”

The Boston Globe

Prof. Marzyeh Ghassemi and Monica Agrawal PhD '23 speak with Boston Globe reporter Hiawatha Bray about the risks of relying solely on AI for medical information. “What I’m really, really worried about is economically disadvantaged communities,” says Ghassemi. “You might not have access to a health care professional who you can quickly call and say, ‘Hey… Should I listen to this?’”

GBH

Prof. David Karger speaks with GBH’s Morning Edition host Mark Herz about the rapid development of new AI tools, the need for generative AI regulation, and the importance of transparency when it comes to AI-generated content. “I think we need to involve more entities, more people, more sources in the fact-checking process,” says Karger. “We need to figure out how to ensure that the fact-checking can propagate into the platforms, even though the platforms are not doing the fact-checking themselves.”