
Topic: Technology and society


Displaying 1–15 of 1,112 news clips related to this topic.

The Washington Post

Prof. David Autor speaks with Washington Post reporter Cat Zakrzewski about the anticipated impact of AI on various industries. “We are just learning how to use AI and what it’s good for, and it will take a while to figure out how to use it really productively,” says Autor.

Bloomberg

Prof. Daron Acemoglu speaks with Bloomberg reporter Jeran Wittenstein about the current state of AI and the technology’s economic potential. “You need highly reliable information or the ability of these models to faithfully implement certain steps that previously workers were doing,” says Acemoglu of the state of current large language models. “They can do that in a few places with some human supervisory oversight” — like coding — “but in most places they cannot. That’s a reality check for where we are right now.”

Bloomberg Law

MIT researchers have found that workers without college degrees are more optimistic about AI and automation initiatives in the workplace than workers with a college degree, reports Rebecca Klar for Bloomberg Law. The study found “27.4% of workers without a college degree estimated that automation will be beneficial for their job security, compared to 23.7% of workers with a college degree,” explains Klar.

Fast Company

Researchers at MIT have found that “60% of workers who work with robotics and AI think they’ll see positive career impacts as a result in terms of productivity, satisfaction, and job safety,” reports Sam Becker for Fast Company.

Scientific American

Writing for Scientific American, MIT Prof. David Rand and University of Pennsylvania postdoctoral fellow Jennifer Allen highlight new challenges in the fight against misinformation. “Combating misbelief is much more complicated—and politically and ethically fraught—than reducing the spread of explicitly false content,” they write. “But this challenge must be bested if we want to solve the ‘misinformation’ problem.”

Fast Company

Researchers at MIT have developed “AstroAnts,” autonomous, magnetic, robotic rovers roughly the size of a Hot Wheels toy car designed to monitor space vehicles and other hard-to-reach machinery, reports Jesus Diaz for Fast Company. “The idea is that, by constantly watching over the temperature and structural integrity of their cosmic rides, spaceships will be more resilient to the extreme conditions of space and astronauts will be safer,” explains Diaz.

The Washington Post

Writing for The Washington Post, Prof. Daniela Rus, director of CSAIL, and Nico Enriquez, a graduate student at Stanford, make the case that the United States should not only be building more efficient AI software and better computer chips, but also creating “interstate-type corridors to transmit sufficient, reliable power to our data centers.” They emphasize: “The United States has the talent, investor base, corporations and research institutions to write the most advanced AI models. But without a powerful data highway system, our great technology advances will be confined to back roads.”

CNN

CNN visits the lab of Prof. Canan Dagdeviren to learn more about her work developing wearable ultrasound devices that could help screen for early-stage breast cancer, monitor kidney health, and detect other cancers deep within the body. “Wearable technology will grow rapidly in the near future,” says Dagdeviren. “But in the far future, they will be one of the most powerful tools that we will be seeing in our daily life.” 

Bloomberg

Prof. Daron Acemoglu speaks with Bloomberg reporters John Lee and Katia Dmitrieva about the social and economic impacts of AI. “We don’t know where the future lies,” says Acemoglu. “There are many directions, the technology is malleable, we can make different choices.” 

TechCrunch

TechCrunch reporters Kyle Wiggers and Devin Coldewey spotlight a new generative AI model developed by MIT researchers that can help counteract conspiracy theories. The researchers “had people who believed in conspiracy-related statements talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments,” explain Wiggers and Coldewey. “These conversations led the humans involved to stating a 20% reduction in the associated belief two months later.”

Newsweek

New research by Prof. David Rand and his colleagues has utilized generative AI to address conspiracy theory beliefs, reports Marie Boran for Newsweek. “The researchers had more than 2,000 Americans interact with ChatGPT about a conspiracy theory they believe in,” explains Boran. “Within three rounds of conversation with the chatbot, participants’ belief in their chosen conspiracy theory was reduced by 20 percent on average.”

Fortune

Researchers from MIT and elsewhere have found that LLM-based AI chatbots are more effective at implanting false memories than “other methods of trying to implant memories, such as old-fashioned surveys with leading questions or conversations with a pre-scripted chatbot,” reports Jeremy Kahn for Fortune. “It seems the ability of the generative AI chatbot to shape each question based on the previous answers of the test subjects gave it particular power,” explains Kahn.

Bloomberg

Researchers from MIT and Stanford University have found “staff at one Fortune 500 software firm became 14% more productive on average when using generative AI tools,” report Olivia Solon and Seth Fiegerman for Bloomberg.

Popular Science

A new study by researchers from MIT and elsewhere tested a generative AI chatbot’s ability to debunk conspiracy theories, reports Mack Degeurin for Popular Science. “In the end, conversations with the chatbot reduced the participant’s overall confidence in their professed conspiracy theory by an average of 20%,” writes Degeurin.

Los Angeles Times

A new study by researchers from MIT and elsewhere has found that an AI chatbot is capable of combating conspiracy theories, reports Karen Kaplan for the Los Angeles Times. The researchers found that conversations with the chatbot made people “less generally conspiratorial,” says Prof. David Rand. “It also increased their intentions to do things like ignore or block social media accounts sharing conspiracies, or, you know, argue with people who are espousing those conspiracy theories.”