
Topic

Technology and society


Displaying 436–450 of 1341 news clips related to this topic.

Freakonomics Radio

Prof. Simon Johnson speaks with Freakonomics guest host Adam Davidson about his new book, economic history, and why new technologies impact people differently. “What do people creating technology, deploying technology — what exactly are they seeking to achieve? If they’re seeking to replace people, then that’s what they’re going to be doing,” says Johnson. “But if they’re seeking to make people individually more productive, more creative, enable them to design and carry out new tasks — let’s push the vision more in that direction. And that’s a naturally more inclusive version of the market economy. And I think we will get better outcomes for more people.”

Fortune

In an article he co-authored for Fortune, postdoctoral associate Matthew Hughes explains how extreme heat affects different kinds of machines. “In general, the electronics contained in devices like cellphones, personal computers and data centers consist of many kinds of materials that all respond differently to temperature changes,” he and his co-author write. “So as the temperature increases, different kinds of materials deform differently, potentially leading to premature wear and failure.”

Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”

MSNBC

Graduate students Martin Nisser and Marisa Gaetz co-founded Brave Behind Bars, a program designed to provide incarcerated individuals with coding and digital literacy skills to better prepare them for life after prison, reports Morgan Radford for MSNBC. Computers and coding skills “are really kind of paramount for fostering success in the modern workplace,” says Nisser.

The Washington Post

Writing for The Washington Post, graduate student Thomas Roberts underscores the importance of investing in new technologies to mitigate the risks posed by space debris. “Space operators can control how some large objects return to Earth. But this requires extra fuel reserves and adaptive control technologies, which translate into higher costs,” writes Roberts. 

The Wall Street Journal

Prof. Max Tegmark speaks with The Wall Street Journal reporter Emily Bobrow about the importance of companies and governments working together to mitigate the risks of new AI technologies. Tegmark “recommends the creation of something like a Food and Drug Administration for AI, which would force companies to prove their products are safe before releasing them to the public,” writes Bobrow.

The Guardian

Prof. D. Fox Harrell writes for The Guardian about the importance of ensuring AI systems are designed to “reflect the ethically positive culture we truly want.” Harrell emphasizes: “We need to be aware of, and thoughtfully design, the cultural values that AI is based on. With care, we can build systems based on multiple worldviews – and address key ethical issues in design such as transparency and intelligibility.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, speaks with TechCrunch reporter Brian Heater about liquid neural networks and how this emerging technology could impact robotics. “The reason we started thinking about liquid networks has to do with some of the limitations of today’s AI systems,” says Rus, “which prevent them from being very effective for safety-critical systems and robotics. Most of the robotics applications are safety critical.”

TechCrunch

Vaikkunth Mugunthan MS ’19, PhD ’22 and Christian Lau MS ’20, PhD ’22 co-founded DynamoFL – a software company that “offers software to bring large language models (LLMs) to enterprise and fine-tune those models on sensitive data,” reports Kyle Wiggers for TechCrunch. “Generative AI has brought to the fore new risks, including the ability for LLMs to ‘memorize’ sensitive training data and leak this data to malicious actors,” says Mugunthan. “Enterprises have been ill-equipped to address these risks, as properly addressing these LLM vulnerabilities would require recruiting teams of highly specialized privacy machine learning researchers to create a streamlined infrastructure for continuously testing their LLMs against emerging data security vulnerabilities.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.

USA Today

A working paper co-authored by Prof. John Horton and graduate students Emma van Inwegen and Zanele Munyikwa has found that “AI has the potential to level the playing field for non-native English speakers applying for jobs by helping them better present themselves to English-speaking employers,” reports Medora Lee for USA Today. “Between June 8 and July 14, 2021, [Inwegen] studied 480,948 job seekers, who applied for jobs that require English to be spoken but who mostly lived in nations where English is not the native language,” explains Lee. “Of those who used AI, 7.8% were more likely to be hired.”

Boston 25 News

Researchers at MIT have developed a wearable ultrasound device that can be used to detect early signs of breast cancer, report Rachel Keller and Bob Dumas for Boston 25 News. “This technology will be able to let you know if there’s a question mark, if there’s an anomaly, in your breast tissue,” says Prof. Canan Dagdeviren.

CNN

Researchers at MIT have developed “PhotoGuard,” a tool that can be used to protect images from AI manipulation, reports Catherine Thorbecke for CNN. The tool “puts an invisible ‘immunization’ over images that stops AI models from being able to manipulate the picture,” writes Thorbecke.

Financial Times

Prof. David Autor speaks with Delphine Strauss of the Financial Times about the risks AI poses to jobs and job quality, but also the technology’s potential to help rebuild middle-class jobs. “The good case for AI is where it enables people with foundational expertise or judgment to do more expert work with less expertise,” says Autor. He adds, “My hope is that we can use AI to reinstate the value of skills held by people without as high a degree of formal education.”

The Boston Globe

Prof. Daron Acemoglu speaks with Boston Globe reporters Alex Kantrowitz and Douglas Gorman about how to address the advance of AI in the workplace. “We know from many areas that have rapidly automated that they don’t deliver the types of returns that they promised,” says Acemoglu. “Humans are underrated.”