

Artificial intelligence



Researchers at MIT have developed an autonomous vehicle with “mini sensors to allow it to see the world and also with an artificially intelligent computer brain that can allow it to drive,” explains postdoctoral associate Alexander Amini in an interview with Mashable. “Our autonomous vehicles are able to learn directly from humans how to drive a car so they can be deployed and interact in brand new environments that they’ve never seen before,” Amini notes.


The Washington Post

MIT researchers have developed a new AI tool called Sybil that could help predict whether a patient will get lung cancer up to six years in advance, reports Pranshu Verma for The Washington Post. “Much of the technology involves analyzing large troves of medical scans, data sets or images, then feeding them into complex artificial intelligence software,” Verma explains. “From there, computers are trained to spot images of tumors or other abnormalities.”


Researchers at MIT developed SoFi, a soft robotic fish designed to study underwater organisms and their environments, reports Mashable. “The soft robotic fish serves a nice purpose for hopefully minimizing impact on the environments that we’re studying and also helps us study different types of behaviors and also study the actual mechanics of these organisms as well,” says graduate student Levi Cai.


Prof. Nick Montfort speaks with GBH All Things Considered host Arun Rath about ChatGPT, its potential impact on the future of academia and how instructors could adapt their courses in light of this new technology.


Research fellow Michael Schrage speaks with Fortune reporter Sheryl Estrada about how generative A.I. will impact finance. “I think, increasingly, we’re going to be seeing generative A.I. used for financial forecasts and scenario generation,” says Schrage.

National Geographic

National Geographic reporter Maya Wei-Haas explores how the ancient art of origami is being applied to fields such as robotics, medicine and space exploration. Wei-Haas notes that Prof. Daniela Rus and her team developed a robot that can fold to fit inside a pill capsule, while Prof. Erik Demaine has designed complex, curving fold patterns. “You get these really impressive 3D forms with very simple creasing,” says Demaine.


Prof. Daniela Rus, director of CSAIL, discusses the future of artificial intelligence, emphasizing the importance of balancing the development of new technologies with the need to ensure they are deployed in a way that benefits humanity. “We have to advance the science and engineering of autonomy and the science and engineering of intelligence to create the kinds of machines that will be friendly to people, that will be assistive and supportive for people and that will augment people with the tasks that they need help with,” Rus explains.


NBC 1st Look host Chelsea Cabarcas visits MIT to learn more about how faculty, researchers and students are “pioneering the world of tomorrow.” Cabarcas meets the MIT Solar Electric Vehicle team and gets a peek at Nimbus, the single-occupant vehicle that team members raced in the American Solar Challenge from Kansas City to New Mexico. Cabarcas also sees the back-flipping MIT mini cheetah that could one day be used in disaster-relief operations.


Researchers from MIT and Mass General Hospital have developed “a deep learning model named ‘Sybil’ that can be used to predict lung cancer risk, using data from just a single CT scan,” writes Sai Balasubramanian for Forbes. “Sybil is able to predict a patient’s future lung cancer risk to a certain extent of accuracy, using the data from just one LDCT [low-dose computed tomography scan],” writes Balasubramanian.


Prof. Joshua Tenenbaum speaks with Wired reporter Will Knight about AI image generators and the limitations of AI tools. “It's amazing what they can do,” says Tenenbaum, “but their ability to imagine what the world might be like from simple descriptions is often very limited and counterintuitive.”

The Wall Street Journal

Graduate student Matthew Groh discusses Detect Fakes, a research project he co-created aimed at teaching people how to detect deepfakes, with Wall Street Journal reporter Ann-Marie Alcántara. Groh recommends people pay attention to the context of an image or video, noting that people can “pay attention to incentives and what someone is saying and why someone might be saying this.”


Kevin Hu SB ’13, SM ’15, PhD ’19 co-founded Metaplane, a startup aimed at providing users with data analytics-focused tools, reports Kyle Wiggers for TechCrunch. “Metaplane monitors data using anomaly detection models trained primarily on historical metadata. The monitors try to account for seasonality, trends and feedback from customers, Hu says, to minimize alert fatigue,” writes Wiggers.


A review led by Prof. Marzyeh Ghassemi has found that a major issue in health-related machine learning models “is the relative scarcity of publicly available data sets in medicine,” reports Emily Sohn for Nature.

Fast Company

Researchers from the MIT-IBM Watson AI Lab and the Harvard Natural Language Processing Group developed the Giant Language model Test Room (GLTR), an algorithm that attempts to detect if text was written by a bot, reports Megan Morrone for Fast Company. “Using the ‘it takes one to know one’ method, if the GLTR algorithm can predict the next word in a sentence, then it will assume that sentence has been written by a bot,” explains Morrone.
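The “it takes one to know one” heuristic Morrone describes can be sketched in a few lines: score each word by whether a language model would have predicted it, and treat highly predictable text as likely machine-written. The toy bigram model below is a stand-in for the large language model GLTR actually uses; the function names and corpus are hypothetical, so this is an illustration of the idea, not GLTR’s implementation.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Build bigram counts as a tiny stand-in for a real language model."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predictability(model, tokens, top_k=2):
    """Fraction of tokens that fall within the model's top-k predictions.

    A score near 1.0 means every word was easy to predict -- the
    GLTR-style signal that text may have been machine-generated.
    """
    hits = 0
    for prev, nxt in zip(tokens, tokens[1:]):
        top = [w for w, _ in model[prev].most_common(top_k)]
        if nxt in top:
            hits += 1
    return hits / max(len(tokens) - 1, 1)

# Train on a toy corpus, then score a predictable vs. an unusual sentence.
model = train_bigram("the cat sat on the mat the cat sat on the mat".split())
print(predictability(model, "the cat sat on the mat".split()))  # high
print(predictability(model, "mat on cat the".split()))          # low
```

In the real tool, the bigram counts are replaced by a large neural language model’s next-word distribution, and GLTR visualizes each word’s rank rather than reducing the text to a single score.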