
Topic: Artificial intelligence



Yahoo! News

Prof. Marzyeh Ghassemi speaks with Yahoo News reporter Rebecca Corey about the benefits and risks posed by the use of AI tools in health care. “I think the problem is when you try to naively replace humans with AI in health care settings, you get really poor results,” says Ghassemi. “You should be looking at it as an augmentation tool, not as a replacement tool.”

Vox

Prof. Kevin Esvelt and his students have found that language-generating AI models could make it easier to create pathogens with pandemic potential, reports Kelsey Piper for Vox.

Axios

MIT researchers and an undergraduate class found that chatbots could be prompted to suggest pandemic pathogens, including specific information not commonly known among experts, reports Ryan Heath for Axios. The MIT researchers recommend "pre-release evaluations of LLMs by third parties, curating training datasets to remove harmful concepts, and verifiably screening all DNA generated by synthesis providers or used by contract research organizations."

The Conversation

Writing for The Conversation, postdoc Ziv Epstein SM ’19, PhD ’23, graduate student Robert Mahari, and Jessica Fjeld of Harvard Law School explore how the use of generative AI will impact creative work. “The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression,” the authors note.

TechCrunch

Researchers at MIT have developed a new artificial intelligence system aimed at helping autopilot systems avoid obstacles while maintaining a desirable flight path, reports Kyle Wiggers for TechCrunch. “Any old algorithm can propose wild changes to direction in order to not crash, but doing so while maintaining stability and not pulping anything inside is harder,” writes Wiggers.

Bloomberg

A study by MIT researchers shows that “workers have cost employers a 25% tax rate, while the rate of software and equipment has stood around 5%,” write Diego Areas Munhoz and Samantha Handler for Bloomberg. “This lopsidedness in tax code gives employers more reason to invest in automating goods like machines and computer software instead of workers.”

Science

Science reporter Robert F. Service spotlights how Prof. Kevin Esvelt is sounding the alarm that “AI could help somebody with no science background and evil intentions design and order a virus capable of unleashing a pandemic.” 

Financial Times

“Power and Progress,” a new book by Institute Prof. Daron Acemoglu and Prof. Simon Johnson, has been named one of the best new books on economics by the Financial Times. “The authors’ nuanced take on technological development provides insights on how we can ensure the coming AI revolution leads to widespread benefits for the many, not just the tech bros,” writes Tej Parikh.

New York Times

Writing for The New York Times, Institute Prof. Daron Acemoglu and Prof. Simon Johnson make the case that “rather than machine intelligence, what we need is ‘machine usefulness,’ which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies.”

The New York Times

New York Times reporter Natasha Singer spotlights the Day of AI, an MIT RAISE program aimed at teaching K-12 students about AI. “Because AI is such a powerful new technology, in order for it to work well in society, it really needs some rules,” said MIT President Sally Kornbluth. Prof. Cynthia Breazeal, MIT’s dean of digital learning, added: “We want students to be informed, responsible users and informed, responsible designers of these technologies.”

Inside Higher Ed

Graduate student Kartik Chandra writes for Inside Higher Ed about how many of this year’s college graduates are feeling anxiety about new AI technologies. “We scientists are still debating the details of how AI is and is not humanlike in its use of language,” writes Chandra. “But let’s not forget the big picture: unlike AI, you speak because you have something to say.”

NPR

Prof. Danielle Li and graduate student Lindsey Raymond speak with NPR hosts Wailin Wong and Adrian Ma about how generative artificial intelligence could impact the workplace based on their research examining how an AI chatbot affected workers at customer contact centers. “A lot of what customer service is, is about managing people's feelings 'cause people come, they're tired or whatever,” says Li. “And so in some sense there's kind of this sort of human soft skills component that these technologies are able to capture in a way that prior technologies couldn't.”

GBH

Institute Prof. Daron Acemoglu and Prof. Aleksander Mądry join GBH’s Greater Boston to explore how AI can be regulated and safely integrated into our lives. “With much of our society driven by informational spaces — in particular social media and online media in general — AI and, in particular, generative AI accelerates a lot of problems like misinformation, spam, spear phishing and blackmail,” Mądry explains. Acemoglu adds that he feels AI reforms should be approached “more broadly so that AI researchers actually work in using these technologies in human-friendly ways, trying to make humans more empowered and more productive.”

Vox

Prof. Daron Acemoglu speaks with VoxTalks host Tim Phillips about his new book written with Prof. Simon Johnson, “Power and Progress.” The book explores “how we can redirect the path of innovation,” Phillips explains.

The Washington Post

MIT researchers have developed a new method to make chatbots more factual, reports Gerrit De Vynck for The Washington Post. “The researchers proposed using different chatbots to produce multiple answers to the same question and then letting them debate each other until one answer won out,” explains De Vynck. “The researchers found using this ‘society of minds’ method made them more factual.”