

Laboratory for Information and Decision Systems (LIDS)


Displaying 1 - 15 of 43 news clips related to this topic.

Gizmodo

Researchers at MIT have developed a new method that can predict how plasma will behave in a tokamak reactor given a set of initial conditions, reports Gayoung Lee for Gizmodo. The findings “may have lowered one of the major barriers to achieving large-scale nuclear fusion,” explains Lee. 

Financial Times

Financial Times reporter Melissa Heikkilä spotlights how MIT researchers have uncovered evidence that increased use of AI tools by medical professionals risks “leading to worse health outcomes for women and ethnic minorities.” One study found that numerous AI models “recommended a much lower level of care for female patients,” writes Heikkilä. “A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also displayed answers that had less compassion towards Black and Asian people seeking support for mental health problems.” 

Time Magazine

MIT Dean of Digital Learning Cynthia Breazeal SM ’93, ScD ’00, Profs. Regina Barzilay and Priya Donti, and a number of MIT alumni have been named to Time’s TIME 100 AI 2025 list. The list spotlights “innovators, leaders, and thinkers reshaping our world through groundbreaking advances in artificial intelligence.”


Boston Globe

Prof. Marzyeh Ghassemi speaks with Boston Globe reporter Hiawatha Bray about her work uncovering issues with bias and trustworthiness in medical AI systems. “I love developing AI systems,” says Ghassemi. “I’m a professor at MIT for a reason. But it’s clear to me that naive deployments of these systems, that do not recognize the baggage that human data comes with, will lead to harm.”

Is Business Broken?

Prof. Asu Ozdaglar, Deputy Dean of the MIT Schwarzman College of Computing, speaks with Is Business Broken? podcast host Curt Nickisch to explore AI's opportunities and risks — and whether it can be regulated without stifling progress. “AI is a very promising and transformative technology,” says Ozdaglar. “But regulation should be designed very carefully so that it does not block or impede the development of the technology.” Given AI's potential harms or misuses, she adds that it's important to think about the correct regulatory framework. “For it to be successful, it should focus on where harms can come from.”

WBUR

Principal Research Scientist Kalyan Veeramachaneni speaks with WBUR On Point host Meghna Chakrabarti about the benefits and risks of training AI on synthetic data. “I think the AI that we have as of today and we are using is largely very small; I don't mean that as in size, but in the tasks that it can do,” says Veeramachaneni. “And as days go by, we are asking more and more of it… that requires us to provide more data, train more models that are much more efficient in reasoning, and can solve problems that we haven't thought of solving.”

NPR

Prof. Pulkit Agrawal speaks with NPR Short Wave host Regina Barber and science correspondent Geoff Brumfiel about his work developing a new technique that allows robots to train in simulations of scanned home environments. “The power of simulation is that we can collect very large amounts of data,” explains Agrawal. “For example, in three hours' worth of simulation, we can collect 100 days' worth of data.” 

Forbes

Prof. Sarah Millholland, Prof. Christian Wolf, Prof. Emil Verner, Prof. Darcy McRose, Prof. Marzyeh Ghassemi, Prof. Mohsen Ghaffari and Prof. Ariel Furst have received the 2025 Sloan Research Fellowship for “being among the most promising scientific researchers currently working in their fields,” reports Michael T. Nietzel for Forbes. “Sloan Research Fellows are chosen in seven scientific and technical fields—chemistry, computer science, Earth system science, economics, mathematics, neuroscience, and physics,” explains Nietzel. 

Interesting Engineering

MIT engineers have developed a new training method to help ensure the safe operation of multiagent systems, including robots, search-and-rescue drones and self-driving cars, reports Jijo Malayil for Interesting Engineering. The new approach “doesn’t focus on rigid paths but rather enables agents to continuously map their safety margins—the boundaries within which they must stay,” writes Malayil. 

Forbes

Prof. David Autor has been named a Senior Fellow in the Schmidt Sciences AI2050 Fellows program, and Profs. Sara Beery, Gabriele Farina, Marzyeh Ghassemi, and Yoon Kim have been named Early Career AI2050 Fellows, reports Michael T. Nietzel for Forbes. The AI2050 fellowships provide funding and resources, while challenging “researchers to imagine the year 2050, where AI has been extremely beneficial and to conduct research that helps society realize its most beneficial impacts,” explains Nietzel. 

Forbes

Researchers at MIT have developed “Clio,” a new technique that “enables robots to make intuitive, task-relevant decisions,” reports Jennifer Kite-Powell for Forbes. The team’s new approach allows “a robot to quickly map a scene and identify the items they need to complete a given set of tasks,” writes Kite-Powell. 

TechCrunch

Researchers at MIT have found that commercially available AI models “were more likely to recommend calling police when shown Ring videos captured in minority communities,” reports Kyle Wiggers for TechCrunch. “The study also found that, when analyzing footage from majority-white neighborhoods, the models were less likely to describe scenes using terms like ‘casing the property’ or ‘burglary tools,’” writes Wiggers. 

Interesting Engineering

Researchers at MIT have developed a new method that “enables robots to intuitively identify relevant areas of a scene based on specific tasks,” reports Baba Tamim for Interesting Engineering. “The tech adopts a distinctive strategy to make robots effective and efficient at sorting a cluttered environment, such as finding a specific brand of mustard on a messy kitchen counter,” explains Tamim. 

TechCrunch

TechCrunch reporter Kyle Wiggers writes that MIT researchers have developed a new tool, called SigLLM, that uses large language models to flag problems in complex systems. In the future, SigLLM could be used to “help technicians flag potential problems in equipment like heavy machinery before they occur.” 

Fast Company

Principal Research Scientist Kalyan Veeramachaneni speaks with Fast Company reporter Sam Becker about his work developing the Synthetic Data Vault, a tool for creating synthetic data sets. “Fake data is randomly generated,” says Veeramachaneni, “while synthetic data is trying to create data from a machine learning model that looks very realistic.”