Topic

Technology and society

Displaying 376 - 390 of 1286 news clips related to this topic.

The New York Times

MIT AgeLab Director Joseph Coughlin speaks with New York Times opinion columnist Farhad Manjoo about why the tech industry tends to ignore older people. “The market is aging,” says Coughlin. “The market is numerous. The market has got more money than the people they have been building products for.”

TechCrunch

Prof. Arnaud Costinot and Prof. Iván Werning speak with TechCrunch reporter Brian Heater about their research examining the potential impact of a robot tax on automation and jobs. “The potential wages people can earn may become more unequal with new technologies and the idea is that the tax can mitigate these effects,” Costinot and Werning explain. “In a sense, one can think of this as pre-distribution, affecting earnings before taxes, instead of redistribution.”

The Boston Globe

Prof. Tod Machover speaks with Boston Globe reporter A.Z. Madonna about the restaging of his opera “VALIS” at MIT, which features an AI-assisted musical instrument developed by Nina Masuelli ’23. “In all my career, I’ve never seen anything change as fast as AI is changing right now, period,” said Machover. “So to figure out how to steer it towards something productive and useful is a really important question right now.”

Popular Science

Researchers at MIT and elsewhere have developed a medical device that uses AI to evade scar tissue buildup, reports Andrew Paul for Popular Science. “The technology’s secret weapon is its conductive, porous membrane capable of detecting when it is becoming blocked by scar tissue,” writes Paul.

Freakonomics Radio

Prof. Simon Johnson speaks with Freakonomics guest host Adam Davidson about his new book, economic history, and why new technologies impact people differently. “What do people creating technology, deploying technology— what exactly are they seeking to achieve? If they’re seeking to replace people, then that’s what they’re going to be doing,” says Johnson. “But if they’re seeking to make people individually more productive, more creative, enable them to design and carry out new tasks — let’s push the vision more in that direction. And that’s a naturally more inclusive version of the market economy. And I think we will get better outcomes for more people.”

Fortune

In an article he co-authored for Fortune, postdoctoral associate Matthew Hughes explains how extreme heat affects different kinds of machines. “In general, the electronics contained in devices like cellphones, personal computers and data centers consist of many kinds of materials that all respond differently to temperature changes,” the authors write. “So as the temperature increases, different kinds of materials deform differently, potentially leading to premature wear and failure.”

Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”

MSNBC

Graduate students Martin Nisser and Marisa Gaetz co-founded Brave Behind Bars, a program designed to provide incarcerated individuals with coding and digital literacy skills to better prepare them for life after prison, reports Morgan Radford for MSNBC. Computers and coding skills “are really kind of paramount for fostering success in the modern workplace,” says Nisser.

The Washington Post

Writing for The Washington Post, graduate student Thomas Roberts underscores the importance of investing in new technologies to mitigate the risks posed by space debris. “Space operators can control how some large objects return to Earth. But this requires extra fuel reserves and adaptive control technologies, which translate into higher costs,” writes Roberts. 

The Wall Street Journal

Prof. Max Tegmark speaks with The Wall Street Journal reporter Emily Bobrow about the importance of companies and governments working together to mitigate the risks of new AI technologies. Tegmark “recommends the creation of something like a Food and Drug Administration for AI, which would force companies to prove their products are safe before releasing them to the public,” writes Bobrow.

The Guardian

Prof. D. Fox Harrell writes for The Guardian about the importance of ensuring AI systems are designed to “reflect the ethically positive culture we truly want.” Harrell emphasizes: “We need to be aware of, and thoughtfully design, the cultural values that AI is based on. With care, we can build systems based on multiple worldviews – and address key ethical issues in design such as transparency and intelligibility.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, speaks with TechCrunch reporter Brian Heater about liquid neural networks and how this emerging technology could impact robotics. “The reason we started thinking about liquid networks has to do with some of the limitations of today’s AI systems,” says Rus, “which prevent them from being very effective for safety-critical systems and robotics. Most of the robotics applications are safety critical.”

TechCrunch

Vaikkunth Mugunthan MS ’19, PhD ’22 and Christian Lau MS ’20, PhD ’22 co-founded DynamoFL – a software company that “offers software to bring large language models (LLMs) to enterprise and fine-tune those models on sensitive data,” reports Kyle Wiggers for TechCrunch. “Generative AI has brought to the fore new risks, including the ability for LLMs to ‘memorize’ sensitive training data and leak this data to malicious actors,” says Mugunthan. “Enterprises have been ill-equipped to address these risks, as properly addressing these LLM vulnerabilities would require recruiting teams of highly specialized privacy machine learning researchers to create a streamlined infrastructure for continuously testing their LLMs against emerging data security vulnerabilities.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.