Topic

Machine learning

Gizmodo

Researchers at MIT have developed a new method that can predict how plasma will behave in a tokamak reactor given a set of initial conditions, reports Gayoung Lee for Gizmodo. The findings “may have lowered one of the major barriers to achieving large-scale nuclear fusion,” explains Lee. 

Forbes

Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota '08, SM '16, MBA '16 emphasizes the importance of implementing ethical frameworks when developing AI systems designed for use in healthcare. “The future of AI in healthcare not only needs to be intelligent,” writes Hayes-Mota. “It needs to be trusted. And in healthcare, trust is the ultimate competitive edge.” 

Tech Brew

Researchers at MIT have studied how chatbots perceived the political environment in the run-up to the 2024 election, and how that perception shaped their automatically generated election-related responses, reports Patrick Kulp for Tech Brew. The researchers “fed a dozen leading LLMs 12,000 election-related questions on a nearly daily basis, collecting more than 16 million total responses through the contest in November,” explains Kulp.

New York Times

Institute Prof. Daron Acemoglu participated in a “global dialogue on artificial intelligence governance” at the United Nations, reports Steve Lohr for The New York Times. “The AI quest is currently focused on automating a lot of things, sidelining and displacing workers,” says Acemoglu. 

Forbes

Researchers from MIT and Stanford tracked 11 large language models during the 2024 presidential campaign and found that “AI models answered differently over time… [and] they changed in response to events, prompts, and even demographic cues,” reports Ron Schmelzer for Forbes.

Financial Times

Financial Times reporter Melissa Heikkilä spotlights how MIT researchers have uncovered evidence that increased use of AI tools by medical professionals risks “leading to worse health outcomes for women and ethnic minorities.” One study found that numerous AI models “recommended a much lower level of care for female patients,” writes Heikkilä. “A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also displayed answers that had less compassion towards Black and Asian people seeking support for mental health problems.” 

Forbes

Prof. Dimitris Bertsimas, vice provost for MIT Open Learning, speaks with Forbes contributor Aviva Legatt about AI usage among university students. “Universities have a responsibility to ensure students, faculty, and staff gain a strong foundation in AI’s concepts, opportunities, and risks so they can help solve society’s biggest challenges,” says Bertsimas.

Forbes

Edwin Chen '08 speaks with Forbes reporter Phoebe Liu about his journey to founding Surge AI, a startup that “helps tech companies get the high-quality data they need to improve their AI models.”

Fortune

Prof. Anant Agarwal speaks with Fortune reporter Nino Paoli about the benefits of a four-year college degree. “In this environment, learning deeply and building real expertise is more important than ever because the AI roles and applications are in the context of these other fields,” says Agarwal. “Degrees also future-proof your career by preparing you for the next big technology, whatever it might be.”

Forbes

Forbes reporter Martina Castellanos spotlights Edwin Chen '08, founder of Surge AI, as one of the 10 youngest billionaires on the 2025 Forbes 400 list. After working in machine learning, Chen saw “the lack of quality training data for AI,” and “launched Surge AI in 2020 to fix the problem,” writes Castellanos. 

CNN

Prof. Dylan Hadfield-Menell speaks with CNN reporter Hadas Gold about the need for AI safeguards and increased education on large language models. “The way these systems are trained is that they are trained in order to give responses that people judge to be good,” explains Hadfield-Menell. 

Forbes

Researchers at MIT have found that generative AI “not only repeats the same irrational tendencies of humans during the decision making process but also lacks some of the positive traits that humans do possess,” reports Tamsin Gable for Forbes. “This led the researchers to suggest that AI cannot replace many tasks and that human expertise remains important,” adds Gable. 

Time Magazine

MIT Dean of Digital Learning Cynthia Breazeal SM ’93, ScD ’00, Profs. Regina Barzilay and Priya Donti, and a number of MIT alumni have been named to Time’s TIME 100 AI 2025 list. The list spotlights “innovators, leaders, and thinkers reshaping our world through groundbreaking advances in artificial intelligence.”

Boston Globe

Prof. Marzyeh Ghassemi speaks with Boston Globe reporter Hiawatha Bray about her work uncovering issues with bias and trustworthiness in medical AI systems. “I love developing AI systems,” says Ghassemi. “I’m a professor at MIT for a reason. But it’s clear to me that naive deployments of these systems, that do not recognize the baggage that human data comes with, will lead to harm.”

CNN

In a video for CNN, graduate student Alex Kachkine explains his work on a method that uses AI to restore damaged oil paintings with a reversible polymer film, a process far faster than manual restoration. Kachkine explains that he hopes his work helps “get more paintings out of storage and into public view as there are many paintings that are damaged that I would love to see and it’s a real shame that there aren’t the resources necessary to restore them.”