
Topic

Technology and society


Forbes

Research from the Data Provenance Initiative, led by MIT researchers, has “found that many web sources used for training AI models have restricted their data, leading to a rapid decline in accessible information,” reports Gary Drenik for Forbes.

Forbes

Researchers at MIT have developed a new AI model capable of assessing a patient’s risk of pancreatic cancer, reports Erez Meltzer for Forbes. “The model could potentially expand the group of patients who can benefit from early pancreatic cancer screening from 10% to 35%,” explains Meltzer. “These kinds of predictive capabilities open new avenues for preventive care.” 

TechCrunch

Arago, an AI startup co-founded by alumnus Nicolas Muller, has been named to the Future 40 list by Station F, which selects “the 40 most promising startups,” reports Romain Dillet for TechCrunch. Arago is “working on new AI-focused chips that use optical technology at the chipset level to speed up operations,” explains Dillet.

TechCrunch

Neural Magic, an AI optimization startup co-founded by Prof. Nir Shavit and former Research Scientist Alex Matveev, aims to “process AI workloads on processors and GPUs at speeds equivalent to specialized AI chips,” reports Kyle Wiggers for TechCrunch. “By running models on off-the-shelf processors, which usually have more available memory, the company’s software can realize these performance gains,” explains Wiggers. 

New Scientist

Researchers at MIT have developed a new virtual training program for four-legged robots by taking “popular computer simulation software that follows the principles of real-world physics and inserting a generative AI model to produce artificial environments,” reports Jeremy Hsu for New Scientist. “Despite never being able to ‘see’ the real world during training, the robot successfully chased real-world balls and climbed over objects 88 per cent of the time after the AI-enhanced training,” writes Hsu. “When the robot relied solely on training by a human teacher, it only succeeded 15 per cent of the time.”

Financial Times

Research Scientist Nick van der Meulen speaks with Financial Times reporter Bethan Staton about how automation could be used to help employers plug the skills gap. “You can give people insight into how their skills stack up . . . you can say this is the level you need to be for a specific role, and this is how you can get there,” says van der Meulen. “You cannot do that over 80 skills through active testing, it would be too costly.”

The Boston Globe

Designer and artist Es Devlin has been named the recipient of the 2025 Eugene McDermott Award in the Arts at MIT, reports Arushi Jacob for The Boston Globe. The award recognizes and honors “individuals in the arts, spanning a variety of mediums,” explains Jacob. “The award aims to invest in the careers of cross-disciplinary artists, like Devlin.”

Mashable

Graduate student Aruna Sankaranarayanan speaks with Mashable reporter Cecily Mauran about the impact of political deepfakes and the importance of AI literacy, noting that deepfakes of important figures who are not widely known are among her biggest concerns. “Fabrication coming from them, distorting certain facts, when you don’t know what they look like or sound like most of the time, that’s really hard to disprove,” says Sankaranarayanan.

Forbes

Postdoctoral associate Peter Slattery speaks with Forbes reporter Tor Constantino about the importance of developing new technologies to easily identify AI-generated content. “I think we need to be very careful to ensure that watermarks are robust against tampering and that we do not have scenarios where they can be faked,” explains Slattery. “The ability to fake watermarks could make things worse than having no watermarks as it would give the illusion of credibility.”

The Boston Globe

MIT Humanist Chaplain Greg Epstein speaks with Boston Globe reporter Christine Mehta about his new book “Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation.” “Epstein is not the first to argue that a secular phenomenon has become a religion (the list grows longer every year), but to him, obsessions like the American workplace, sports, or CrossFit are at best middling cults,” writes Mehta. “Technology, on the other hand, is the religion of today’s world, Epstein says, displacing the influence of everything else on our lives.”

Nature

Prof. Jacopo Buongiorno speaks with Nature reporter Davide Castelvecchi about how AI has increased energy demand and the future of nuclear energy. 

Wired

Liquid AI, an MIT startup, is unveiling a new AI model based on a liquid neural network that “has the potential to be more efficient, less power-hungry, and more transparent than the ones that underpin everything from chatbots to image generators to facial recognition systems,” reports Will Knight for Wired.

CNBC

Prof. Daron Acemoglu, a recipient of the 2024 Nobel Prize in economic sciences, speaks with CNBC about the challenges facing the American economy. In his view, the coming economic storm is really “both a challenge and an opportunity,” explains Acemoglu. “I talk about AI, I talk about aging, I talk about the remaking of globalization. All of these things are threats because they are big changes, but they’re also opportunities that we could use in order to make ourselves more productive, workers more productive, workers earn more. In fact, even reduce inequality, but the problem is that we’re not prepared for it.”

NPR

Prof. Daron Acemoglu speaks with Greg Rosalsky of NPR’s Planet Money about a recent survey that claims “almost 40% of Americans, ages 18 to 64, have used generative AI.” “My concern with their numbers is that it does not distinguish fundamentally productive uses of generative AI from occasional/frivolous uses,” says Acemoglu.

Forbes

Sloan Visiting Senior Lecturer Paul McDonagh-Smith speaks with Joe McKendrick of Forbes about ongoing discussions over AI safety guidelines. “While ensuring safety is crucial, especially for frontier AI models, there is also a need to strike a balance where AI is a catalyst for innovation without putting our organizations and broader society at risk,” explains McDonagh-Smith.