
Topic: Technology and society


Displaying 16–30 of 1161 news clips related to this topic.

Forbes

Forbes reporter Joe McKendrick spotlights a study by researchers from the MIT Center for Collective Intelligence evaluating “the performance of humans alone, AI alone, and combinations of both.” The researchers found that “human–AI systems do not necessarily achieve better results than the best of humans or AI alone,” explain graduate student Michelle Vaccaro and her colleagues. “Challenges such as communication barriers, trust issues, ethical concerns and the need for effective coordination between humans and AI systems can hinder the collaborative process.”

CNBC

In an interview with CNBC, Prof. Max Tegmark highlights the importance of increased AI regulation, specifically as a method to mitigate potential harm from large language models. “All other technologies in the United States, all other industries, have some kind of safety standards,” says Tegmark. “The only industry that is completely unregulated right now, which has no safety standards, is AI.” 

The New York Times

Researchers from MIT and elsewhere have found that “AI doesn’t even understand itself,” reports Peter Coy for The New York Times. The researchers “asked AI models to explain how they were thinking about problems as they worked through them,” writes Coy. “The models were pretty bad at introspection.” 

The Boston Globe

Liquid AI, an MIT startup, is developing technology that “holds the same promise of writing, analyzing, and creating content as its rivals while using far less computing power,” reports Aaron Pressman for The Boston Globe.

CNBC

A new study by researchers at MIT and elsewhere has found that “87% of people say employees in their organization are confused to a certain degree about where to turn for data and tech services and issues,” reports Rachel Curry for CNBC. “Most of the organizations whose leaders responded to the survey had multiple executive roles in the tech and data spaces,” explains Curry. 

NPR

Prof. Seth Lloyd speaks with NPR Morning Edition host Adam Bearne about recent advancements in quantum chips and the future of quantum computing. "Quantum computers, their ability to do multiple tasks at once, allows them to explore a much larger range of possibilities than is available to classical computers, which can really only do one thing at a time," says Lloyd. 

Forbes

Prof. David Autor has been named a Senior Fellow in the Schmidt Sciences AI2050 Fellows program, and Profs. Sara Beery, Gabriele Farina, Marzyeh Ghassemi, and Yoon Kim have been named Early Career AI2050 Fellows, reports Michael T. Nietzel for Forbes. The AI2050 fellowships provide funding and resources, while challenging “researchers to imagine the year 2050, where AI has been extremely beneficial and to conduct research that helps society realize its most beneficial impacts,” explains Nietzel. 

NBC Boston

Prof. Daniela Rus, director of CSAIL, speaks with NBC Boston reporter Colton Bradford about her work developing a new AI system aimed at making grocery shopping easier, more personalized and more efficient. “I think there is an important synergy between what people can do and what machines can do,” says Rus. “You can think of it as machines have speed, but people have wisdom. Machines can lift heavy things, but people can reason about what to do with those heavy things.” 

The New York Times

Writing for The New York Times, Prof. Anant Agarwal describes AI’s potential to “revolutionize education by enhancing paths to individual students in ways we never thought possible.” Agarwal emphasizes: “A.I. will never replace the human touch that is so vital to education. No algorithm can replicate the empathy, creativity and passion a teacher brings to the classroom. But A.I. can certainly amplify those qualities. It can be our co-pilot, our chief of staff helping us extend our reach and improve our effectiveness.”

Wired

Using a new technique developed to examine the risks of multimodal large language models used in robots, MIT researchers were able to have a “simulated robot arm do unsafe things like knocking items off a table or throwing them by describing actions in ways that the LLM did not recognize as harmful and reject,” writes Will Knight for Wired. “With LLMs a few wrong words don’t matter as much,” explains Prof. Pulkit Agrawal. “In robotics a few wrong actions can compound and result in task failure more easily.”

Forbes

Researchers from MIT and elsewhere have compared 12 large language models (LLMs) against 925 human forecasters in a three-month forecasting tournament predicting real-world events, including geopolitical events, reports Tomas Gorny for Forbes. "Our results suggest that LLMs can achieve forecasting accuracy rivaling that of human crowd forecasting tournaments,” the researchers explain.

Forbes

Forbes reporter John M. Bremen spotlights a new study by MIT researchers that “shows the most skilled scientists and innovators benefited the most from AI – doubling their productivity – while lower-skilled staff did not experience similar gains.” The study “showed that specialized AI tools foster radical innovation at the technical level within a domain-specific scope, but also risk narrowing human roles and diversity of thought,” writes Bremen.

Forbes

Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota SB '08, MS '16, MBA '16 shares insight into how entrepreneurs can use AI to build successful startups. AI “can be a strategic advantage when implemented wisely and used as a tool to support, rather than replace, the human touch,” writes Hayes-Mota. 

Knowable Magazine

Research Scientist Susan Amrose speaks with Knowable Magazine reporter Lele Nargi about the use of inland desalination for farming communities. Amrose, who studies inland desalination in the Middle East and North Africa, is “testing a system that uses electrodialysis instead of reverse osmosis,” explains Nargi. “This sends a steady surge of voltage across water to pull salt ions through an alternating stack of positively charged and negatively charged membranes.” 

Forbes

Research from the Data Provenance Initiative, led by MIT researchers, has “found that many web sources used for training AI models have restricted their data, leading to a rapid decline in accessible information,” reports Gary Drenik for Forbes.