
Topic

Machine learning



Wired

Writing for Wired, Prof. Daniela Rus, director of CSAIL, highlights the future of “physical intelligence, a new form of intelligent machine that can understand dynamic environments, cope with unpredictability, and make decisions in real time.” Rus writes: “Unlike the models used by standard AI, physical intelligence is rooted in physics; in understanding the fundamental principles of the real world, such as cause-and-effect.”

Fortune

Sloan research fellow Michael Schrage speaks with Fortune reporter Sheryl Estrada about the impact of AI on CFO roles. “The ongoing ‘Compound AI’ revolution, which involves approaching AI tasks by combining multiple interacting components, will increasingly transform the CFO role into that of an AI-powered chief capital officer (CCO),” says Schrage. “This is an analytics-driven shift that isn’t optional but imperative for enterprise growth.”

Knowable Magazine

Knowable Magazine reporter Katherine Ellison spotlights Future You, a new program developed by researchers at MIT that “offers young people a chance to chat with an online, AI-generated simulation of themselves at age 60.” 

Fast Company

Prof. Daron Acemoglu highlights the importance of adopting alternative technologies in the face of AI advancements, reports Jared Newman for Fast Company. “We need investment for alternative approaches to AI, and alternative technologies, those that I would say are more centered on making workers more productive, and providing better information to workers,” says Acemoglu.

Forbes

Forbes reporter Joe McKendrick spotlights a study by researchers from the MIT Center for Collective Intelligence evaluating “the performance of humans alone, AI alone, and combinations of both.” The researchers found that “human–AI systems do not necessarily achieve better results than the best of humans or AI alone,” explain graduate student Michelle Vaccaro and her colleagues. “Challenges such as communication barriers, trust issues, ethical concerns and the need for effective coordination between humans and AI systems can hinder the collaborative process.”

CNBC

In an interview with CNBC, Prof. Max Tegmark highlights the importance of increased AI regulation, specifically as a method to mitigate potential harm from large language models. “All other technologies in the United States, all other industries, have some kind of safety standards,” says Tegmark. “The only industry that is completely unregulated right now, which has no safety standards, is AI.” 

New York Times

Researchers from MIT and elsewhere have found that “AI doesn’t even understand itself,” reports Peter Coy for The New York Times. The researchers “asked AI models to explain how they were thinking about problems as they worked through them,” writes Coy. “The models were pretty bad at introspection.” 

The Boston Globe

Liquid AI, an MIT startup, is developing technology that “holds the same promise of writing, analyzing, and creating content as its rivals while using far less computing power,” reports Aaron Pressman for The Boston Globe.

Forbes

Prof. David Autor has been named a Senior Fellow in the Schmidt Sciences AI2050 Fellows program, and Profs. Sara Beery, Gabriele Farina, Marzyeh Ghassemi, and Yoon Kim have been named Early Career AI2050 Fellows, reports Michael T. Nietzel for Forbes. The AI2050 fellowships provide funding and resources, while challenging “researchers to imagine the year 2050, where AI has been extremely beneficial and to conduct research that helps society realize its most beneficial impacts,” explains Nietzel. 

Wired

Using a new technique developed to examine the risks of multimodal large language models used in robots, MIT researchers were able to have a “simulated robot arm do unsafe things like knocking items off a table or throwing them by describing actions in ways that the LLM did not recognize as harmful and reject,” writes Will Knight for Wired. “With LLMs a few wrong words don’t matter as much,” explains Prof. Pulkit Agrawal. “In robotics a few wrong actions can compound and result in task failure more easily.”

Forbes

Researchers from MIT and elsewhere have compared 12 large language models (LLMs) against 925 human forecasters in a three-month forecasting tournament aimed at predicting real-world events, including geopolitical events, reports Tomas Gorny for Forbes. “Our results suggest that LLMs can achieve forecasting accuracy rivaling that of human crowd forecasting tournaments,” the researchers explain.

Forbes

Forbes reporter John M. Bremen spotlights a new study by MIT researchers that “shows the most skilled scientists and innovators benefited the most from AI – doubling their productivity – while lower-skilled staff did not experience similar gains.” The study “showed that specialized AI tools foster radical innovation at the technical level within a domain-specific scope, but also risk narrowing human roles and diversity of thought,” writes Bremen.

Forbes

Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota SB '08, MS '16, MBA '16 shares insight into how entrepreneurs can use AI to build successful startups. AI “can be a strategic advantage when implemented wisely and used as a tool to support, rather than replace, the human touch,” writes Hayes-Mota. 

Financial Times

Prof. Daniela Rus, director of CSAIL, and Prof. Russ Tedrake speak with the Financial Times about how advances in AI have made it possible for robots to learn new skills and perform complex tasks. “All these cool things that we only dreamed of, we can now begin to realize,” says Rus. “Now we have to make sure that what we do with all these superpowers is good.”

Forbes

Research from the Data Provenance Initiative, led by MIT researchers, has “found that many web sources used for training AI models have restricted their data, leading to a rapid decline in accessible information,” reports Gary Drenik for Forbes.