
Topic

Artificial intelligence


The Boston Globe

Prof. Yossi Sheffi speaks with Boston Globe reporter Hiawatha Bray about the challenges and risks of implementing automation amid the dockworkers' strike. Sheffi emphasized the importance of gradually introducing new technologies and offering workers training to work with AI. “There will be new jobs,” says Sheffi. “And we want the current workers to be able to get these new jobs.”

Associated Press

Prof. Yossi Sheffi speaks with Associated Press reporter Cathy Bussewitz about how automation could impact the workforce, specifically dockworkers. “You cannot bet against the march of technology,” says Sheffi. “You cannot ban automation, because it will creep up in other places... The trick is to make it over time, not do it haphazardly.” 

The Washington Post

Prof. David Autor speaks with Washington Post reporter Cat Zakrzewski about the anticipated impact of AI in various industries. “We are just learning how to use AI and what it's good for, and it will take a while to figure out how to use it really productively,” says Autor. 

Forbes

Researchers at MIT have found large language models “often struggle to handle more complex problems that require true understanding,” reports Kirimgeray Kirimli for Forbes. “This underscores the need for future versions of LLMs to go beyond just these basic, shared capabilities,” writes Kirimli. 

Bloomberg

Prof. Daron Acemoglu speaks with Bloomberg reporter Jeran Wittenstein about the current state of AI and the technology’s economic potential. “You need highly reliable information or the ability of these models to faithfully implement certain steps that previously workers were doing,” says Acemoglu of the state of current large language models. “They can do that in a few places with some human supervisory oversight” — like coding — “but in most places they cannot. That’s a reality check for where we are right now.”

Fortune

Researchers at MIT have developed “Future You,” a generative AI chatbot that enables users to speak with potential older versions of themselves, reports Sharon Goldman for Fortune. The tool “uses a large language model and information provided by the user to help young people ‘improve their sense of future self-continuity, a psychological concept that describes how connected a person feels with their future self,’” writes Goldman. “The researchers explained that the tool cautions users that its results are only one potential version of their future self, and they can still change their lives,” Goldman adds.

Bloomberg Law

MIT researchers have found that workers without college degrees are more optimistic about AI and automation initiatives in the workplace than workers with a college degree, reports Rebecca Klar for Bloomberg Law. The study found “27.4% of workers without a college degree estimated that automation will be beneficial for their job security, compared to 23.7% of workers with a college degree,” explains Klar.

Fast Company

Researchers at MIT have found that “60% of workers who work with robotics and AI think they’ll see positive career impacts as a result in terms of productivity, satisfaction, and job safety,” reports Sam Becker for Fast Company.

Interesting Engineering

Researchers at MIT have developed a new method that “enables robots to intuitively identify relevant areas of a scene based on specific tasks,” reports Baba Tamim for Interesting Engineering. “The tech adopts a distinctive strategy to make robots effective and efficient at sorting a cluttered environment, such as finding a specific brand of mustard on a messy kitchen counter,” explains Tamim. 

New Scientist

Researchers at MIT and elsewhere have found that “human memories can be distorted by photos and videos edited by artificial intelligence,” reports Matthew Sparkes for New Scientist. “I think the worst part here, that we need to be aware or concerned about, is when the user isn’t aware of it,” says postdoctoral fellow Samantha Chan. “We definitely have to be aware and work together with these companies, or have a way to mitigate these effects. Maybe have sort of a structure where users can still control and say ‘I want to remember this as it was’, or at least have a tag that says ‘this was a doctored photo, this was a changed photo, this was not a real one’.”

WHDH 7

Prof. Regina Barzilay has received the WebMD Health Heroes award for her work developing a new system that uses AI to detect breast cancer up to five years earlier, reports WHDH. “We do have a right to know our risk and then we, together with our healthcare providers, need to manage them,” says Barzilay.

Forbes

Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota '08 MS '16, MBA '16 explores the challenges, opportunities, and future of AI-driven drug development. “I see the opportunities for AI in drug development as vast and transformative,” writes Hayes-Mota. “AI can help potentially uncover new drug candidates that would have been impossible to find through traditional methods.”

The Washington Post

Writing for The Washington Post, Prof. Daniela Rus, director of CSAIL, and Nico Enriquez, a graduate student at Stanford, make the case that the United States should not only be building more efficient AI software and better computer chips, but also creating “interstate-type corridors to transmit sufficient, reliable power to our data centers.” They emphasize: “The United States has the talent, investor base, corporations and research institutions to write the most advanced AI models. But without a powerful data highway system, our great technology advances will be confined to back roads.”

Bloomberg

Prof. Daron Acemoglu speaks with Bloomberg reporters John Lee and Katia Dmitrieva about the social and economic impacts of AI. “We don’t know where the future lies,” says Acemoglu. “There are many directions, the technology is malleable, we can make different choices.” 

TechCrunch

TechCrunch reporters Kyle Wiggers and Devin Coldewey spotlight a new generative AI model developed by MIT researchers that can help counteract conspiracy theories. The researchers “had people who believed in conspiracy-related statements talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments,” explain Wiggers and Coldewey. “These conversations led to the humans involved stating a 20% reduction in the associated belief two months later.”