
Topic

Artificial intelligence

Newsweek

Prof. Daron Acemoglu speaks with Newsweek reporter Hugh Cameron about the impact of AI on layoffs at major retailers. “I don't think we are at the cusp of mass unemployment,” says Acemoglu. “AI models have many limitations, and while there will be companies such as Amazon that will attempt to organize work to get more out of AI and reduce their headcount, at the macroeconomic level things will go more slowly.”

Wired

Wired reporter Steven Levy spotlights Research Scientist Sarah Schwettmann PhD '21 and her work investigating the unknown behaviors of AI agents. Schwettmann has co-founded Transluce, a nonprofit interpretability startup “to further study such phenomena,” writes Levy.

Science

At a recent conference, Prof. Sergey Ovchinnikov and his colleagues presented a paper demonstrating how they have used advanced versions of ChatGPT to “generate amino acid sequences that code for biologically active proteins with a structural feature called a four-helix bundle,” reports Jeffrey Brainard for Science. “To Ovchinnikov’s surprise, ChatGPT produced gene sequences without further refinement of his team’s query,” writes Brainard. “Still, the application of ChatGPT to this task needs refinement, Ovchinnikov found. Most of the sequences his team produced did not garner ‘high confidence’ on a score predicting whether they would form the desired protein structure.”

Fortune

Prof. Srini Devadas speaks with Fortune reporter Beatrice Nolan about data and privacy concerns surrounding AI assistants. “The challenge is that if you want the AI assistant to be useful, you need to give it access to your data and your privileges, and if attackers can trick the AI assistant, it is as if you were tricked,” says Devadas. 

Nature

Prof. Alex Shalek and his colleagues developed a deep-learning model called DrugReflector aimed at speeding up the process of drug discovery, reports Heidi Ledford for Nature. “They used DrugReflector to find chemicals that can affect the generation of platelets and red blood cells — a characteristic that could be useful in treating some blood conditions,” explains Ledford. The researchers found that “DrugReflector was up to 17 times more effective at finding relevant compounds than standard, brute-force drug screening that depends on randomly selecting compounds from a chemical library.”

The Guardian

Prof. Pat Pataranutaporn speaks with The Guardian reporter Madeleine Aggeler about the impact of AI on human relationships. “If you converse more and more with the AI instead of going to talk to your parents or your friends, the social fabric degrades,” says Pataranutaporn. “You will not develop the skills to go and talk to real humans.” 

Time Magazine

Time reporter Brian Elliott spotlights Prof. Zeynep Ton’s comments at a recent conference regarding the importance of businesses having an employee-focused strategy when implementing new AI tools. “The status quo mindset in leaders is to see labor as a cost to be minimized,” Ton explains. “Exemplary companies think of employees as drivers of customer satisfaction, profitability and growth.”

New York Times

Prof. Daron Acemoglu speaks with New York Times reporter Karen Weise about workplace automation at Amazon. “Nobody else has the same incentive as Amazon to find the way to automate,” says Acemoglu. “Once they work out how to do this profitably, it will spread to others, too.”

Reuters

Vertical Semiconductor, an MIT spinoff, is working to “commercialize chip technology that can deliver electricity to artificial intelligence servers more efficiently,” reports Stephen Nellis for Reuters. “We do believe we offer a compelling next-generation solution that is not just a couple of percentage points here and there, but actually a step-wise transformation,” says Cynthia Liao MBA '24.

Wired

A new study by researchers at MIT suggests that “the biggest and most computationally intensive AI models may soon offer diminishing returns compared to smaller models,” reports Will Knight for Wired. “By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models whereas efficiency gains could make models running on more modest hardware increasingly capable over the next decade.” 

Gizmodo

Researchers at MIT have developed a new method that can predict how plasma will behave in a tokamak reactor given a set of initial conditions, reports Gayoung Lee for Gizmodo. The findings “may have lowered one of the major barriers to achieving large-scale nuclear fusion,” explains Lee. 

Forbes

Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota '08, SM '16, MBA '16 emphasizes the importance of implementing ethical frameworks when developing AI systems designed for use in healthcare. “The future of AI in healthcare not only needs to be intelligent,” writes Hayes-Mota. “It needs to be trusted. And in healthcare, trust is the ultimate competitive edge.” 

Tech Brew

Researchers at MIT have studied how chatbots perceived the political environment leading up to the 2024 election, and how that environment shaped automatically generated election-related responses, reports Patrick Kulp for Tech Brew. The researchers “fed a dozen leading LLMs 12,000 election-related questions on a nearly daily basis, collecting more than 16 million total responses through the contest in November,” explains Kulp.

Financial Times

Prof. Daron Acemoglu speaks with Financial Times reporters Claire Jones and Melissa Heikkilä about the economic implications of the AI boom. “There is a lot of pressure on managers to do something with AI… and there is the hype that is contributing to it,” says Acemoglu. “But not many people are doing anything super creative with it yet.” 

The Scientist

In an effort to better understand how protein language models (PLMs) think and to better judge their reliability, MIT researchers applied a tool called sparse autoencoders, which can be used to make large language models more interpretable. The findings “may help scientists better understand how PLMs come to certain conclusions and increase researchers’ trust in them,” writes Andrea Luis for The Scientist.