
Computer Science and Artificial Intelligence Laboratory (CSAIL)


Displaying 1–15 of 768 news clips related to this topic.

VentureBeat

Researchers at MIT have “developed a new technique that enables large language models to learn new skills and knowledge without forgetting their past capabilities,” reports Ben Dickson for VentureBeat. “Their technique, called self-distillation fine-tuning (SDFT), allows models to learn directly from demonstrations and their own experiments by leveraging the inherent in-context learning abilities of modern LLMs,” explains Dickson. “Experiments show that SDFT consistently outperforms traditional supervised fine-tuning (SFT) while addressing the limitations of reinforcement learning algorithms.”

The Wall Street Journal

Prof. Andrew Lo speaks with Wall Street Journal reporter Peter Coy about why he feels current AI systems aren’t suited to serving as financial advisors and his goal to create “an AI financial adviser that is a true fiduciary—namely, an entity that always puts the client’s interests first and tailors its advice to their particular needs, including emotional needs.” Lo notes: “The AI people are using now can be dangerous, especially if the user isn’t fully aware of the biases, inaccuracies and other limits” of large language models.

GBH

Prof. David Karger speaks with GBH’s Morning Edition host Mark Herz about the rapid development of new AI tools, the need for generative AI regulation, and the importance of transparency when it comes to AI-generated content. "I think we need to involve more entities, more people, more sources in the fact-checking process,” says Karger. “We need to figure out how to ensure that the fact checking can propagate into the platforms, even though the platforms are not doing the fact checking themselves.” 

Wired

Graduate student Stephen Casper speaks with Wired reporter Matt Burgess about the rise of “deepfake video abuse and its role in nonconsensual intimate imagery generation.” “This ecosystem is built on the back of open-source models,” says Casper. “Oftentimes it’s just an open-source model that has been used to develop an app that then a user uses.” 

Forbes

Forbes reporter Craig Smith spotlights Prof. Regina Barzilay for her work using her personal health experience to develop transformative medical technology. In response to her breast cancer diagnosis, Barzilay “developed a deep learning model that analyzes mammography images to predict breast cancer risk up to five years in advance,” writes Smith. 

CNBC

Prof. Armando Solar-Lezama and Prof. Daniela Rus, director of CSAIL, speak with CNBC reporter Trevor Laurence Jockims about the impact of AI in the workforce. “These transitions are about efficiency, but also about trust and transparency: workers will need to trust that companies aren’t simply using AI as a cover for cost-cutting,” says Rus. 

VICE

Researchers at MIT have “found a way to transform a flat sheet into a functional 3D object with a single pull of a string,” reports Luis Prada for VICE. “The team developed a computational method that lets users design three-dimensional objects that can be fabricated as flat grids and then deployed almost instantly with a single tug,” explains Prada.

Gizmodo

Researchers at MIT have developed a new type of material that can transform into a 3D structure with the simple pull of a string, reports Gayoung Lee for Gizmodo. The new material could “have an impressive range of applications, from transportable medical devices and foldable robots to modular space habitats on Mars,” Lee explains. 

Scientific American

MIT researchers have developed “GelSight,” a system that provides robots with a sense of touch, reports Ben Guarino for Scientific American. “GelSight can identify by touch the tiny letters spelling out LEGO on the stud of a toy brick,” explains Guarino. 

CNN

Prof. Anand Natarajan speaks with CNN reporter Lisa Eadicicco about the promise of quantum computing. “The big hope is that a quantum computer can simulate any sort of chemical or biological experiment you would do in the lab,” says Natarajan. He adds that quantum computing could be very influential for cryptography and cybersecurity, as it could be used to break codes. “That’s also a major motivation, to make sure that our adversaries cannot do it and that we have this capability.” 

The Verge

Prof. Emeritus Tim Berners-Lee speaks with The Verge’s Decoder host Nilay Patel about his hopes and concerns for the future of the world wide web. “In the early days of the web, anybody used to be able to make a website,” explains Berners-Lee. “That feeling of sovereignty as an individual being enabled and being a peer with all of the other people on the web, that is what we’re still fighting for and what we need to rebuild.”

Forbes

Forbes reporter Gemma Allen spotlights Prof. Daniela Rus, director of CSAIL, and her work revolutionizing the field of robotics by bringing “empathy into engineering and proving that responsibility is as radical and as commercially attractive as unguarded innovation.” Rus says of her vision for the future of robotics and AI: “With robots, we can amplify strength and precision. With AI, we can amplify cognition, creativity, empathy, and foresight. These tools should help us become better versions of ourselves.”

Wired

Wired reporter Steven Levy spotlights Research Scientist Sarah Schwettmann PhD '21 and her work investigating the unknown behaviors of AI agents. Schwettmann has co-founded Transluce, a nonprofit interpretability startup “to further study such phenomena,” writes Levy.

Fortune

Prof. Srini Devadas speaks with Fortune reporter Beatrice Nolan about data and privacy concerns surrounding AI assistants. “The challenge is that if you want the AI assistant to be useful, you need to give it access to your data and your privileges, and if attackers can trick the AI assistant, it is as if you were tricked,” says Devadas. 

Wired

A new study by researchers at MIT suggests that “the biggest and most computationally intensive AI models may soon offer diminishing returns compared to smaller models,” reports Will Knight for Wired. “By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models whereas efficiency gains could make models running on more modest hardware increasingly capable over the next decade.”