
Topic

Machine learning


Displaying 1 - 15 of 874 news clips related to this topic.

Financial Times

Writing for the Financial Times, Prof. Danielle Li examines the risks for highly skilled workers whose expertise is used as training data for AI systems. “As workers, people should think about how to use AI to expand their skills: whether by building complementary capabilities or by finding ways to scale their expertise through AI systems,” Li writes. “As citizens, they should press for policies that give workers clearer rights over the data generated by their work and compensation for it.” 

The Boston Globe

During his time as a visiting artist at the Media Lab, keyboardist Jordan Rudess worked with researchers from the Responsive Environments Group to develop “jam_bot, a machine learning model designed to emulate his playing style and improvise while performing alongside him,” reports Annie Sarlin for The Boston Globe.

NPR

A new essay by Profs. Daron Acemoglu, David Autor and Simon Johnson has offered “a more hopeful vision for the future of human work” in a world infused with AI, reports Greg Rosalsky for NPR’s Planet Money. The authors “spend much of the essay providing a thought-provoking analysis of how new technologies can affect human jobs in general,” writes Rosalsky. “In short, it's complicated. Yes, often they do kill jobs. Other times they can make jobs less lucrative by, for example, making those jobs easier to do — or ‘de-skilling’ them — which means the supply of workers who can do these jobs goes up and wages for the occupation can go down.”

Forbes

Researchers at MIT have developed an “AI-driven optimization method that works like ‘ChatGPT for spreadsheets’ – a tabular foundation model designed to handle spreadsheet-style data common in engineering design problems,” reports Gene Marks for Forbes. “The AI system identifies which design variables matter most and focuses search efforts on those, making problem solving less cumbersome,” writes Marks. 

Smithsonian Magazine

Prof. Sara Beery speaks with Smithsonian Magazine reporter Mike Bock about the benefits of AI use in ecological research. “There’s an increasing need to build strong machine learning skills directly in the ecological community,” says Beery. “These students don’t need to be AI researchers. But they do need access to the skills to apply these techniques to their research problems.” 

Boston.com

Prof. Robert Langer, Prof. Giovanni Traverso and former postdoctoral fellow Thomas von Erlach founded Vivtex, a biotechnology startup that has “created a high-tech system called a ‘GI tract on a chip’ that uses robotics and AI to test how drugs move through the human digestive system,” reports Beth Treffeisen for Boston.com. “The technology allows Vivtex to quickly test thousands of drug formulations and predict how they will be absorbed in people, much more accurately than traditional lab methods.” 

The Boston Globe

Profs. Robert Langer, Giovanni Traverso and former postdoctoral fellow Thomas von Erlach have founded Vivtex, a biotechnology startup specializing in “oral alternatives to drugs administered by injections,” reports Jonathan Saltzman for The Boston Globe. Vivtex, now working in collaboration with Novo Nordisk, is looking to develop a new class of “pills to treat obesity and diabetes,” explains Saltzman.

TechCrunch

Guide Labs, co-founded by Julius Adebayo SM ’15, SM ’16, PhD ’22, has debuted a large language model designed to make “its actions easily interpretable,” reports Tim Fernholz for TechCrunch. “Every token produced by the model can be traced back to its origins in the LLM’s training data,” explains Fernholz.

Fortune

Prof. Daron Acemoglu speaks with Fortune reporter Jake Angelo about his work studying the “origins of economic and political decay,” and the need for the U.S. to crack down on economic inequality and temper job destruction. “If we go down this path of destroying jobs [and] creating more inequality, U.S. democracy is not going to survive,” says Acemoglu.

The Boston Globe

“In Event of Moon Disaster,” a short deepfake film on display at the MIT Museum’s “AI: Mind the Gap” exhibit, depicts an alternate reality in which the Apollo 11 mission ended in disaster, reports Mark Feeney for The Boston Globe. The “unnervingly realistic deepfake” depicts President Richard Nixon addressing the nation regarding the failed mission. The film “manages to be both frightening, in showing how convincing deepfakes can be, and, however paradoxically, inspiring,” writes Feeney.

The Guardian

Prof. Pat Pataranutaporn speaks with The Guardian reporter Andrew Gregory about the lack of safety warnings and disclaimers in AI overviews, specifically in AI-generated health materials. “The absence of disclaimers when users are initially served medical information creates several critical dangers,” says Pataranutaporn. “Disclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive.”

The Atlantic

Writing for The Atlantic, Prof. Deb Roy explores the impact of chatbots on language and learning development. “The ordinary forces that tether speech to consequence—social sanction, legal penalty, reputational loss—presuppose a continuous agent whose future can be made worse by what they say,” writes Roy. “With LLMs, there is no such locus. …When the speaker is an LLM, the human stakes that ordinarily anchor speech have nowhere to attach.” 

Bloomberg

Prof. David Autor speaks with Bloomberg reporter David Westin about the shift toward automation in the workforce and the impact on workers. “There are many ways for us to use AI,” says Autor. “It’s incredibly flexible, malleable, plastic technology. You could use it to try to automate people out of existence. You could also use it to collaborate with people to make them more effective. But I also think that it depends on how we invest, how we build out those technologies.”

Forbes

President Sally Kornbluth and MIT Corporation member Noubar Afeyan PhD '87 served as panelists at the 2026 Davos Imagination in Action event to discuss “upholding scientific principles in the era of LLMs,” reports John Werner for Forbes. “We want all of our students to have a foundational facility with AI,” said Kornbluth. “What we want them to know, now, is how they can really be passionate about the content that they care about, whether it's materials design, whether it's aerospace, whether it's biochemical innovation, and understanding the many ways in which AI can help in that innovation.”

VentureBeat

Researchers at MIT have “developed a new technique that enables large language models to learn new skills and knowledge without forgetting their past capabilities,” reports Ben Dickson for VentureBeat. “Their technique, called self-distillation fine-tuning (SDFT), allows models to learn directly from demonstrations and their own experiments by leveraging the inherent in-context learning abilities of modern LLMs,” explains Dickson. “Experiments show that SDFT consistently outperforms traditional supervised fine-tuning (SFT) while addressing the limitations of reinforcement learning algorithms.”
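The clip describes SDFT only at a high level. As a rough sketch of the general idea it names — using the model’s own in-context completions as fine-tuning targets rather than the raw demonstrations — the following assumes a standard Hugging Face setup; the stand-in model, the `sdft_step` helper, and all hyperparameters are illustrative, not the paper’s actual recipe:

```python
# Hedged sketch of self-distillation fine-tuning as described above: the model
# first produces its own target by conditioning on a demonstration in context,
# then is fine-tuned on (prompt -> self-generated target), distilling the
# in-context behavior into the weights. Names here are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the reported work uses larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def sdft_step(demonstration: str, prompt: str) -> float:
    # 1) In-context phase: let the model imitate the demonstration.
    ctx = tok(demonstration + "\n" + prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**ctx, max_new_tokens=64, do_sample=True)
    target = tok.decode(out[0, ctx["input_ids"].shape[1]:],
                        skip_special_tokens=True)

    # 2) Distillation phase: fine-tune on the prompt alone, with the
    #    self-generated continuation as the supervised target.
    batch = tok(prompt + target, return_tensors="pt")
    labels = batch["input_ids"].clone()
    # Mask prompt tokens so only response tokens contribute to the loss
    # (approximate: BPE boundaries of prompt vs. prompt+target may differ).
    prompt_len = tok(prompt, return_tensors="pt")["input_ids"].shape[1]
    labels[:, :prompt_len] = -100
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Because the fine-tuning targets come from the model’s own distribution rather than from out-of-distribution demonstrations, updates of this kind tend to perturb existing capabilities less than plain SFT, which is consistent with the anti-forgetting claim quoted above.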