Topic

Artificial intelligence

Displaying 1 - 15 of 1285 news clips related to this topic.

Fortune

Prof. Srini Devadas speaks with Fortune reporter Beatrice Nolan about data and privacy concerns surrounding AI assistants. “The challenge is that if you want the AI assistant to be useful, you need to give it access to your data and your privileges, and if attackers can trick the AI assistant, it is as if you were tricked,” says Devadas. 

Nature

Prof. Alex Shalek and his colleagues developed a deep-learning model called DrugReflector aimed at speeding up the process of drug discovery, reports Heidi Ledford for Nature. “They used DrugReflector to find chemicals that can affect the generation of platelets and red blood cells — a characteristic that could be useful in treating some blood conditions,” explains Ledford. The researchers found that “DrugReflector was up to 17 times more effective at finding relevant compounds than standard, brute-force drug screening that depends on randomly selecting compounds from a chemical library.”

The Guardian

Prof. Pat Pataranutaporn speaks with The Guardian reporter Madeleine Aggeler about the impact of AI on human relationships. “If you converse more and more with the AI instead of going to talk to your parents or your friends, the social fabric degrades,” says Pataranutaporn. “You will not develop the skills to go and talk to real humans.” 

Time Magazine

Time reporter Brian Elliott spotlights Prof. Zeynep Ton’s comments at a recent conference regarding the importance of businesses having an employee-focused strategy when implementing new AI tools. “The status quo mindset in leaders is to see labor as a cost to be minimized,” Ton explains. “Exemplary companies think of employees as drivers of customer satisfaction, profitability and growth.”

New York Times

Prof. Daron Acemoglu speaks with New York Times reporter Karen Weise about workplace automation at Amazon. “Nobody else has the same incentive as Amazon to find the way to automate,” says Acemoglu. “Once they work out how to do this profitably, it will spread to others, too.” 

Reuters

Vertical Semiconductor, an MIT spinoff, is working to “commercialize chip technology that can deliver electricity to artificial intelligence servers more efficiently,” reports Stephen Nellis for Reuters. “We do believe we offer a compelling next-generation solution that is not just a couple of percentage points here and there, but actually a step-wise transformation,” says Cynthia Liao MBA '24.

Wired

A new study by researchers at MIT suggests that “the biggest and most computationally intensive AI models may soon offer diminishing returns compared to smaller models,” reports Will Knight for Wired. “By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models whereas efficiency gains could make models running on more modest hardware increasingly capable over the next decade.” 

Gizmodo

Researchers at MIT have developed a new method that can predict how plasma will behave in a tokamak reactor given a set of initial conditions, reports Gayoung Lee for Gizmodo. The findings “may have lowered one of the major barriers to achieving large-scale nuclear fusion,” explains Lee. 

Forbes

Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota '08, SM '16, MBA '16 emphasizes the importance of implementing ethical frameworks when developing AI systems designed for use in healthcare. “The future of AI in healthcare not only needs to be intelligent,” writes Hayes-Mota. “It needs to be trusted. And in healthcare, trust is the ultimate competitive edge.” 

Tech Brew

Researchers at MIT have studied how chatbots perceived the political environment leading up to the 2024 election and its impact on automatically generated election-related responses, reports Patrick Kulp for Tech Brew. The researchers “fed a dozen leading LLMs 12,000 election-related questions on a nearly daily basis, collecting more than 16 million total responses through the contest in November,” explains Kulp.  

Financial Times

Prof. Daron Acemoglu speaks with Financial Times reporters Claire Jones and Melissa Heikkilä about the economic implications of the AI boom. “There is a lot of pressure on managers to do something with AI… and there is the hype that is contributing to it,” says Acemoglu. “But not many people are doing anything super creative with it yet.” 

The Scientist

In an effort to better understand how protein language models (PLMs) think and better judge their reliability, MIT researchers applied a tool called sparse autoencoders, which can be used to make large language models more interpretable. The findings “may help scientists better understand how PLMs come to certain conclusions and increase researchers’ trust in them,” writes Andrea Luis for The Scientist.

Smithsonian Magazine

Noman Bashir, a fellow with MIT’s Climate and Sustainability Consortium, speaks with Smithsonian Magazine reporter Amber X. Chen about the impact of AI data centers on the country’s electric grid and infrastructure. Bashir notes “that the industry’s environmental impacts can also be seen farther up the supply chain,” writes Chen. “The GPUs that power A.I. data centers are made with rare earth elements, the extraction of which Bashir notes is resource intensive and can cause environmental degradation.” 

New York Times

Institute Prof. Daron Acemoglu participated in a “global dialogue on artificial intelligence governance” at the United Nations, reports Steve Lohr for The New York Times. “The AI quest is currently focused on automating a lot of things, sidelining and displacing workers,” says Acemoglu. 

Forbes

Researchers from MIT and Stanford tracked 11 large language models during the 2024 presidential campaign and found that “AI models answered differently over time… [and] they changed in response to events, prompts, and even demographic cues,” reports Ron Schmelzer for Forbes.