
Topic: Machine learning



Wired

Prof. Pattie Maes speaks with Wired reporter Reece Rogers about the potential benefits and challenges posed by AI agents. “The way these systems are built, right now, they're optimized from a technical point of view, an engineering point of view,” says Maes. “But, they're not at all optimized for human-design issues.” 

The Boston Globe

Noubar Afeyan PhD '87, a member of the MIT Corporation, speaks with Boston Globe reporter Aaron Pressman about the future of artificial general intelligence (AGI) and “superintelligent” AI. “Humans have long developed tools, microscopes, mass spectrometers, you name it, to help them be able to understand nature better,” says Afeyan. “Now one of the tools, in the case of machine [learning], we’re elevating to the level of a whole new intelligence.”

Ars Technica

Ars Technica reporter Jacek Krywko spotlights how MIT researchers have developed a new photonic chip that can “compute the entire deep neural net, including both linear and non-linear operations, using photons.” Visiting scientist Saumil Bandyopadhyay '17, MEng '18, PhD '23 explains: “We’re focused on a very specific metric here, which is latency. We aim for applications where what matters the most is how fast you can produce a solution. That’s why we are interested in systems where we’re able to do all the computations optically.”

Financial Times

Prof. Daron Acemoglu speaks with Financial Times reporter Rana Foroohar about the impact of automation on the labor market. “It’s likely that the short- to midterm gains from AI will be distributed unequally, and will benefit capital more than labor,” says Acemoglu. 

NPR

Iqbal Dhaliwal, executive director of the Abdul Latif Jameel Poverty Action Lab (J-PAL), speaks with NPR reporter Ari Daniel about the positive social impact that can be brought forth by AI. "As this technical revolution unfolds in real time," says Dhaliwal, "we have a responsibility to rigorously study how these technologies can help or harm people's well-being, particularly people who experience poverty, and scale only the most effective AI solutions."

Wired

Writing for Wired, Prof. Daniela Rus, director of CSAIL, highlights the future of “physical intelligence, a new form of intelligent machine that can understand dynamic environments, cope with unpredictability, and make decisions in real time.” Rus writes: “Unlike the models used by standard AI, physical intelligence is rooted in physics; in understanding the fundamental principles of the real world, such as cause-and-effect.”

Fortune

Sloan research fellow Michael Schrage speaks with Fortune reporter Sheryl Estrada about the impact of AI on CFO roles. “The ongoing ‘Compound AI’ revolution, which involves approaching AI tasks by combining multiple interacting components, will increasingly transform the CFO role into that of an AI-powered chief capital officer (CCO),” says Schrage. “This is an analytics-driven shift that isn’t optional but imperative for enterprise growth.”

Knowable Magazine

Knowable Magazine reporter Katherine Ellison spotlights Future You, a new program developed by researchers at MIT that “offers young people a chance to chat with an online, AI-generated simulation of themselves at age 60.” 

Fast Company

Prof. Daron Acemoglu highlights the importance of adopting alternative technologies in the face of AI advancements, reports Jared Newman for Fast Company. “We need investment for alternative approaches to AI, and alternative technologies, those that I would say are more centered on making workers more productive, and providing better information to workers,” says Acemoglu.

Forbes

Forbes reporter Joe McKendrick spotlights a study by researchers from the MIT Center for Collective Intelligence evaluating “the performance of humans alone, AI alone, and combinations of both.” The researchers found that “human–AI systems do not necessarily achieve better results than the best of humans or AI alone,” explain graduate student Michelle Vaccaro and her colleagues. “Challenges such as communication barriers, trust issues, ethical concerns and the need for effective coordination between humans and AI systems can hinder the collaborative process.”

CNBC

In an interview with CNBC, Prof. Max Tegmark highlights the importance of increased AI regulation, specifically as a method to mitigate potential harm from large language models. “All other technologies in the United States, all other industries, have some kind of safety standards,” says Tegmark. “The only industry that is completely unregulated right now, which has no safety standards, is AI.” 

New York Times

Researchers from MIT and elsewhere have found that “AI doesn’t even understand itself,” reports Peter Coy for The New York Times. The researchers “asked AI models to explain how they were thinking about problems as they worked through them,” writes Coy. “The models were pretty bad at introspection.” 

The Boston Globe

Liquid AI, an MIT startup, is developing technology that “holds the same promise of writing, analyzing, and creating content as its rivals while using far less computing power,” reports Aaron Pressman for The Boston Globe.

Forbes

Prof. David Autor has been named a Senior Fellow in the Schmidt Sciences AI2050 Fellows program, and Profs. Sara Beery, Gabriele Farina, Marzyeh Ghassemi, and Yoon Kim have been named Early Career AI2050 Fellows, reports Michael T. Nietzel for Forbes. The AI2050 fellowships provide funding and resources, while challenging “researchers to imagine the year 2050, where AI has been extremely beneficial and to conduct research that helps society realize its most beneficial impacts,” explains Nietzel. 

Wired

Using a new technique developed to examine the risks of multimodal large language models used in robots, MIT researchers were able to have a “simulated robot arm do unsafe things like knocking items off a table or throwing them by describing actions in ways that the LLM did not recognize as harmful and reject,” writes Will Knight for Wired. “With LLMs a few wrong words don’t matter as much,” explains Prof. Pulkit Agrawal. “In robotics a few wrong actions can compound and result in task failure more easily.”