
Machine learning



Politico

Writing for Politico, MIT Prof. Armando Solar-Lezama and University of Texas at Austin Prof. Swarat Chaudhuri examine the recent executive order on AI. “Especially as new ways to train models with limited resources emerge, and as the price of computing goes down,” they write, “such regulations could start hurting the outsiders — the researchers, small companies, and other independent organizations whose work will be necessary to keep a fast-moving technology in check.”

Curiosity Stream

Four faculty members from across MIT, Professors Song Han, Simon Johnson, Yoon Kim, and Rosalind Picard, speak with Curiosity Stream about the opportunities and risks posed by the rapid advancements in the field of AI. “We do want to think about which human capabilities we treasure,” says Picard. She adds that during the Covid-19 pandemic, “we saw a lot of loss of people's ability to communicate with one another face-to-face when their world moved online. I think we need to be thoughtful and intentional about what we're building with the technology and whether it's diminishing who we are or enhancing it.”


TechCrunch

Prof. Russ Tedrake and Max Bajracharya '21, MEng '21 speak with TechCrunch reporter Brian Heater about the impact of generative AI on the future of robotics. “Generative AI has the potential to bring revolutionary new capabilities to robotics,” says Tedrake. “Not only are we able to communicate with robots in natural language, but connecting to internet-scale language and image data is giving robots a much more robust understanding and reasoning about the world.”

The Daily Beast

Researchers from MIT and elsewhere have developed a new 3D printing process that “allows users to create more elastic materials along with rigid ones using slow-curing polymers,” reports Tony Ho Tran for the Daily Beast. The researchers used the system to create a “3D printed hand complete with bones, ligaments, and tendons. The new process also utilizes a laser sensor array developed by researchers at MIT that allows the printer to actually ‘see’ what it’s creating as it creates it.”


Fortune

Principal Research Scientist Andrew McAfee discussed the impact of artificial intelligence and technology on businesses at the Fortune CFO Collaborative, reports Sheryl Estrada for Fortune. Generative AI is “going to diffuse throughout the economy,” said McAfee. “It’s going to separate winners from losers, and it’s going to turbocharge the winners faster than you and I have been expecting.”

The Economist

The Economist reporter Rachel Lloyd predicts a “distinct change” in topics for bestselling books in 2024. Lloyd predicts artificial intelligence will take a lead, spotlighting “The Heart and Chip: Our Bright Future with Robots,” by Prof. Daniela Rus, director of CSAIL, as a leading example of the shift.

The Wall Street Journal

Prof. David Rand speaks with Wall Street Journal reporter Christopher Mims about the impact of generative AI on the spread of misinformation. “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says Rand.


Fortune

Writing for Fortune, Sloan research fellow Michael Schrage and his colleagues explain how AI-enabled key performance indicators (KPIs) can help companies better understand and measure success. “Driving strategic alignment within their organization is an increasingly important priority for senior executives,” they write. “AI-enabled KPIs are powerful tools for achieving this. By getting their data right, using appropriate organizational constructs, and driving a cultural shift towards data-driven decision making, organizations can effectively govern the creation and deployment of AI-enabled KPIs.”

Los Angeles Times

Los Angeles Times reporter Brian Merchant spotlights Joy Buolamwini PhD '22 and her new book, “Unmasking AI: My Mission to Protect What is Human in a World of Machines.” “Buolamwini’s book recounts her journey to become one of the nation’s preeminent scholars and critics of artificial intelligence — she recently advised President Biden before the release of his executive order on AI — and offers readers a compelling, digestible guide to some of the most pressing issues in the field,” writes Merchant.

The Boston Globe

Joy Buolamwini PhD '22 speaks with Brian Bergstein of The Boston Globe’s “Say More” podcast about her academic and professional career studying bias in AI. “As I learned more and also became familiar with the negative impacts of things like facial recognition technologies, it wasn’t just the call to say let’s make systems more accurate but a call to say let’s reexamine the ways in which we create AI in the first place and let’s reexamine our measures of progress because so far they have been misleading,” says Buolamwini.

The Boston Globe

Joy Buolamwini PhD '22 writes for The Boston Globe about her experience uncovering bias in artificial intelligence through her academic and professional career. “I critique AI from a place of having been enamored with its promise, as an engineer more eager to work with machines than with people at times, as an aspiring academic turned into an accidental advocate, and also as an artist awakened to the power of the personal when addressing the seemingly technical,” writes Buolamwini. “The option to say no, the option to halt a project, the option to admit to the creation of dangerous and harmful though well-intentioned tools must always be on the table.”

The Washington Post

Graduate student Shayne Longpre speaks with Washington Post reporter Nitasha Tiku about the ethical and legal implications surrounding language model datasets. Longpre says “the lack of proper documentation is a community-wide problem that stems from modern machine-learning practices.”


Axios

Researchers from MIT and elsewhere have developed a transparency index used to assess 10 key AI foundation models, reports Ryan Heath for Axios. Heath writes that the researchers emphasized that “unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.”


TechCrunch

TechCrunch reporter Kyle Wiggers spotlights the SmartEM project by researchers from MIT and Harvard, which is aimed at enhancing lab work by using “a computer vision system and ML control system inside a scanning electron microscope” to examine a specimen intelligently. “It can avoid areas of low importance, focus on interesting or clear ones, and do smart labeling of the resulting image as well,” writes Wiggers.

The World

Research scientist Nataliya Kosmyna speaks with The World host Chris Harland-Dunaway about the science behind using non-invasive brain scans and artificial intelligence to understand people’s different thoughts and mental images.