
Topic

Technology and society


Displaying 436 - 450 of 1306 news clips related to this topic.

Quartz

Prof. Daron Acemoglu and graduate student Todd Lensman have created “the first economic model of how to regulate transformative technologies,” like artificial intelligence, reports Tim Fernholz for Quartz. “Their tentative conclusion is that slower deployments is likely better, and that a machine learning tax combined with sector-specific restrictions on the use of the technology could provide the best possible outcomes,” writes Fernholz.

Forbes

Writing for Forbes, Prof. Daniela Rus, director of CSAIL, makes the case that liquid neural networks “offer an elegant and efficient computational framework for training and inference in machine learning. With their compactness, adaptability, and streamlined computation, these networks have the potential to reshape the landscape of artificial intelligence and drive further breakthroughs in the field.”

The Wall Street Journal

Wall Street Journal reporter Emily Bobrow spotlights Laurel Braitman PhD '13 for her work teaching writing and communication skills to healthcare workers. “We need people who are trained in science and medicine to be able to tell stories about what matters in public health in a way that makes people listen,” says Braitman. “But to do that, they have to be in touch with what they really feel.”

TechCrunch

Researchers at MIT have developed PIGINet (Plans, Images, Goal and Initial facts), a neural network designed to bring task and motion planning to home robotics, reports Brian Heater for TechCrunch. “The system is largely focused on kitchen-based activities at present. It draws on simulated home environments to build plans that require interactions with various different elements of the environment, like counters, cabinets, the fridge, sinks, etc,” says Heater.

The Guardian

Prof. Max Tegmark speaks with Guardian reporter Steve Rose about the potential of artificial intelligence. “The positive, optimistic scenario is that we responsibly develop superintelligence in a way that allows us to control it and benefit from it,” says Tegmark. “If we can build and control superintelligence, we can quickly go from being limited by our own stupidity to being limited by the laws of physics. It could be the greatest empowerment moment in human history.”

Axios

MIT Schwarzman College of Computing Dean Daniel Huttenlocher discusses how artificial intelligence has impacted print media at the Aspen Ideas Festival, reports John Frank for Axios. “Most of us grew up in a world where the word print was something that was authoritative,” says Huttenlocher, of how people will need to be on the lookout for misinformation.

NBC News

MIT Schwarzman College of Computing Dean Daniel Huttenlocher speaks at the Aspen Ideas Festival on how to regulate AI while maximizing its positive impact, reports NBC. “I think when we think about regulation [of artificial intelligence] we need to think about this in the ways we’ve traditionally thought about things – risk, reward, tradeoffs – and that tends to be domain specific,” says Huttenlocher. “It’s hard to have sort of an abstract notion of this new technology and what the risk [and] reward is across all domains.”

CNBC

Prof. Deb Roy speaks with CNBC reporter Deirdre Bosa about “the relationship between machine-learning technology and humans.”

Yahoo! News

Prof. Marzyeh Ghassemi speaks with Yahoo News reporter Rebecca Corey about the benefits and risks posed by the use of AI tools in health care. “I think the problem is when you try to naively replace humans with AI in health care settings, you get really poor results,” says Ghassemi. “You should be looking at it as an augmentation tool, not as a replacement tool.”

Vox

Prof. Kevin Esvelt and his students have found that language-generating AI models could make it easier to create pathogens with pandemic potential, reports Kelsey Piper for Vox.

The Conversation

Writing for The Conversation, postdoc Ziv Epstein SM ’19, PhD ’23, graduate student Robert Mahari and Jessica Fjeld of Harvard Law School explore how the use of generative AI will impact creative work. “The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression,” the authors note.

Financial Times

“Power and Progress,” a new book by Institute Prof. Daron Acemoglu and Prof. Simon Johnson, has been named one of the best new books on economics by the Financial Times. “The authors’ nuanced take on technological development provides insights on how we can ensure the coming AI revolution leads to widespread benefits for the many, not just the tech bros,” writes Tej Parikh.

The New York Times

Writing for The New York Times, Institute Prof. Daron Acemoglu and Prof. Simon Johnson make the case that “rather than machine intelligence, what we need is ‘machine usefulness,’ which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies.”

Inside Higher Ed

Graduate student Kartik Chandra writes for Inside Higher Ed about how many of this year’s college graduates are feeling anxiety about new AI technologies. “We scientists are still debating the details of how AI is and is not humanlike in its use of language,” writes Chandra. “But let’s not forget the big picture: unlike AI, you speak because you have something to say.”