Topic

Artificial intelligence

Displaying 496 - 510 of 1217 news clips related to this topic.

NBC

NBC 1st Look host Chelsea Cabarcas visits MIT to learn more about how faculty, researchers and students are “pioneering the world of tomorrow.” Cabarcas meets the MIT Solar Electric Vehicle team and gets a peek at Nimbus, the single-occupant vehicle that team members raced in the American Solar Challenge from Kansas City to New Mexico. Cabarcas also sees the back-flipping MIT mini cheetah that could one day be used in disaster-relief operations.

Forbes

Researchers from MIT and Mass General Hospital have developed “a deep learning model named ‘Sybil’ that can be used to predict lung cancer risk, using data from just a single CT scan,” writes Sai Balasubramanian for Forbes. “Sybil is able to predict a patient’s future lung cancer risk to a certain extent of accuracy, using the data from just one LDCT [low-dose computed tomography scan],” writes Balasubramanian.

Wired

Prof. Joshua Tenenbaum speaks with Wired reporter Will Knight about AI image generators and the limitations of AI tools. “It's amazing what they can do,” says Tenenbaum, “but their ability to imagine what the world might be like from simple descriptions is often very limited and counterintuitive.”

The Wall Street Journal

Graduate student Matthew Groh discusses Detect Fakes, a research project he co-created aimed at teaching people how to detect deepfakes, with Wall Street Journal reporter Ann-Marie Alcántara. Groh recommends people pay attention to the context of an image or video, noting that people can “pay attention to incentives and what someone is saying and why someone might be saying this.”

TechCrunch

Kevin Hu SB ’13, SM ’15, PhD ’19 co-founded Metaplane, a startup aimed at providing users with data analytics-focused tools, reports Kyle Wiggers for TechCrunch. “Metaplane monitors data using anomaly detection models trained primarily on historical metadata. The monitors try to account for seasonality, trends and feedback from customers, Hu says, to minimize alert fatigue,” writes Wiggers.

Nature

A review led by Prof. Marzyeh Ghassemi has found that a major issue in health-related machine learning models “is the relative scarcity of publicly available data sets in medicine,” reports Emily Sohn for Nature.

Fast Company

Researchers from the MIT-IBM Watson AI Lab and the Harvard Natural Language Processing Group developed the Giant Language model Test Room (GLTR), an algorithm that attempts to detect if text was written by a bot, reports Megan Morrone for Fast Company. “Using the ‘it takes one to know one’ method, if the GLTR algorithm can predict the next word in a sentence, then it will assume that sentence has been written by a bot,” explains Morrone.

The Economist

Research scientist Ryan Hamerly and his team are working to harness “the low power consumption of hybrid optical devices for smart speakers, lightweight drones and even self-driving cars,” reports The Economist.

TechCrunch

MIT spinout Gaia A is developing a forest management tool aimed at providing foresters with the resources to make data-driven decisions, report Haje Jan Kamps and Brian Heater for TechCrunch. “The company is currently using lidar and computer vision tech to gather data but is ultimately building a data platform to tackle some of the big questions in forestry,” write Kamps and Heater.

Marketplace

Research affiliate Ramin Hasani speaks with Kimberly Adams of Marketplace about how he and his CSAIL colleagues solved a differential equation dating back to the early 1900s, enabling researchers to create an AI algorithm that can learn on the spot and adapt to evolving patterns. The new algorithm “will enable larger-scale brain simulations,” Hasani explains.

Fortune

Fortune reporter Gabby Shacknai spotlights Joy Buolamwini PhD ’22 and her research on racial bias in AI. “After finishing grad school, Buolamwini decided to continue her research on A.I.’s racial bias and quickly realized that much of this was a result of the non-diverse datasets and imagery used by a disproportionately white, male tech workforce to train A.I. and inform its algorithms,” writes Shacknai.

The Boston Globe

Graduate student Kevin Frans co-founded OpenAI, a for-profit research lab that aims to provide free public access to artificial intelligence systems, reports Hiawatha Bray for The Boston Globe. “Our mission is to put AI in the hands of everyone,” says Frans.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have developed a new machine learning model that can depict how the sound around a listener changes as they move through a certain space. “We’re mostly modeling the spatial acoustics, so the [focus is on] reverberations,” explains graduate student Yilun Du. “Maybe if you’re in a concert hall, there are a lot of reverberations, maybe if you’re in a cathedral, there are many echoes versus if you’re in a small room, there isn’t really any echo.”

Gizmodo

MIT researchers have developed a new technique aimed at protecting images from AI generators, reports Kyle Barr for Gizmodo. The program uses “data poisoning techniques to essentially disturb pixels within an image to create invisible noise, effectively making AI art generators incapable of generating realistic deepfakes based on the photos they’re fed,” writes Barr.

Fortune

Researchers from MIT and other institutions have found that an AI system called KataGo can be consistently tricked into losing at the strategy game of Go, reports Jeremy Kahn for Fortune. This research highlights potential similar vulnerabilities in “other AI systems trained in a similar manner,” writes Kahn.