
Topic: Artificial intelligence


Displaying 331–345 of 1,173 news clips related to this topic.

The Wall Street Journal

Prof. Max Tegmark speaks with The Wall Street Journal reporter Emily Bobrow about the importance of companies and governments working together to mitigate the risks of new AI technologies. Tegmark “recommends the creation of something like a Food and Drug Administration for AI, which would force companies to prove their products are safe before releasing them to the public,” writes Bobrow.

The Guardian

Prof. D. Fox Harrell writes for The Guardian about the importance of ensuring AI systems are designed to “reflect the ethically positive culture we truly want.” Harrell emphasizes: “We need to be aware of, and thoughtfully design, the cultural values that AI is based on. With care, we can build systems based on multiple worldviews – and address key ethical issues in design such as transparency and intelligibility.”

Wired

Undergraduate Isabella Struckman and Sofie Kupiec ’23 reached out to the first hundred signatories of the Future of Life Institute’s open letter calling for a pause on AI development to learn more about their motivations and concerns, reports Will Knight for Wired. “The duo’s write-up of their findings reveals a broad array of perspectives among those who put their name to the document,” writes Knight. “Despite the letter’s public reception, relatively few were actually worried about AI posing a looming threat to humanity.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, speaks with TechCrunch reporter Brian Heater about liquid neural networks and how this emerging technology could impact robotics. “The reason we started thinking about liquid networks has to do with some of the limitations of today’s AI systems,” says Rus, “which prevent them from being very effective for safety-critical systems and robotics. Most of the robotics applications are safety critical.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.

USA Today

A working paper co-authored by Prof. John Horton and graduate students Emma van Inwegen and Zanele Munyikwa has found that “AI has the potential to level the playing field for non-native English speakers applying for jobs by helping them better present themselves to English-speaking employers,” reports Medora Lee for USA Today. “Between June 8 and July 14, 2021, [van Inwegen] studied 480,948 job seekers, who applied for jobs that require English to be spoken but who mostly lived in nations where English is not the native language,” explains Lee. “Those who used AI were 7.8% more likely to be hired.”

CNN

Researchers at MIT have developed “PhotoGuard,” a tool that can be used to protect images from AI manipulation, reports Catherine Thorbecke for CNN. The tool “puts an invisible ‘immunization’ over images that stops AI models from being able to manipulate the picture,” writes Thorbecke.

The Boston Globe

Boston Globe reporter Aaron Pressman speaks with alumnus Jeremy Wertheimer, co-founder of ITA Software, about the state of AI innovation in the Greater Boston area. “Back in the day, we called it good old-fashioned AI,” says Wertheimer. “But the future is to forget all that clever coding. You want to have an incredibly simple program with enough data and enough computing power.”

Forbes

A number of MIT alumni including Elaheh Ahmadi, Alexander Amini, and Jose Amich have been named to the Forbes 30 Under 30 Local Boston list.

Financial Times

Prof. David Autor speaks with Delphine Strauss of the Financial Times about the risks AI poses to jobs and job quality, but also the technology’s potential to help rebuild middle-class jobs. “The good case for AI is where it enables people with foundational expertise or judgment to do more expert work with less expertise,” says Autor. He adds, “My hope is that we can use AI to reinstate the value of skills held by people without as high a degree of formal education.”

The Boston Globe

Prof. Daron Acemoglu speaks with Boston Globe reporters Alex Kantrowitz and Douglas Gorman about how to address the advance of AI in the workplace. “We know from many areas that have rapidly automated that they don’t deliver the types of returns that they promised,” says Acemoglu. “Humans are underrated.”

Reuters

Prof. Simon Johnson speaks with Reuters reporter Mark John about the impact of AI on the economy. “AI has got a lot of potential – but potential to go either way,” says Johnson. “We are at a fork in the road.”

The Daily Beast

Researchers at MIT and Dana-Farber Cancer Institute have published a paper showcasing the development of OncoNPC, an artificial intelligence model that can predict where a patient’s cancer came from in their body, reports Tony Ho Tran for The Daily Beast. This information “can help determine more effective treatment decisions for patients and caregivers,” writes Tran.

Financial Times

Prof. Carlo Ratti writes for the Financial Times about how new AI algorithms can impact the property market. “To train a real estate bot, our lab at MIT used pictures of 20,000 houses around Boston, as well as data that measured how their prices changed over time,” writes Ratti. “When other variables were added — such as structural information and neighbourhood amenities — our algorithm was able to make very accurate predictions of how prices would change over time.”

The Washington Post

Prof. Manish Raghavan speaks with The Washington Post reporter Danielle Abril about the risk of bias in employers’ use of AI for recruiting. “For example, AI could appear to be biased in matching mostly Harvard graduates to some jobs when those graduates may just have a higher likelihood to match certain requirements,” explains Abril. “Humans already struggle with implicit biases, often favoring people like themselves, and that could get replicated through AI.”