
Topic: Computer vision



The New York Times

Prof. Steven Barrett speaks with New York Times reporter Paige McClanahan about the pressing need to make air travel more sustainable and his research exploring the impact of contrails on the planet’s temperature. “Eliminating contrails is quite a big lever on mitigating the climate impact of aviation,” said Barrett.

TechCrunch

MIT spinout Gaia AI is building a forest management tool aimed at giving foresters the resources to make data-driven decisions, report Haje Jan Kamps and Brian Heater for TechCrunch. “The company is currently using lidar and computer vision tech to gather data but is ultimately building a data platform to tackle some of the big questions in forestry,” write Kamps and Heater.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have developed a new machine learning model that can depict how the sound around a listener changes as they move through a certain space. “We’re mostly modeling the spatial acoustics, so the [focus is on] reverberations,” explains graduate student Yilun Du. “Maybe if you’re in a concert hall, there are a lot of reverberations, maybe if you’re in a cathedral, there are many echoes versus if you’re in a small room, there isn’t really any echo.”

TechCrunch

Scientists at MIT have developed “a machine learning model that can capture how sounds in a room will propagate through space,” report Kyle Wiggers and Devin Coldewey for TechCrunch. “By modeling the acoustics, the system can learn a room’s geometry from sound recordings, which can then be used to build a visual rendering of a room,” write Wiggers and Coldewey.

Fast Company

Fast Company reporter Elissaveta Brandon writes that a team of scientists from MIT and elsewhere have developed an amphibious artificial vision system inspired by the fiddler crab’s compound eye, which has an almost 360-degree field of view and can see on both land and water. “When translated into a machine,” writes Brandon, “this could mean more versatile cameras for self-driving cars and drones, both of which can become untrustworthy in the rain.”

TechCrunch

MIT researchers have developed FuseBot, a new system that combines RFID tagging with a robotic arm to retrieve hidden objects from a pile, reports Brian Heater for TechCrunch. “As long as some objects within the pile are tagged, the system can determine where its subject is most likely located and the most efficient way to retrieve it,” writes Heater.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have developed an “electronics chip design that allows for sensors and processors to be easily swapped out or added on, like bricks of LEGO.” Hu writes that “a reconfigurable, modular chip like this could be useful for upgrading smartphones, computers, or other devices without producing as much waste.”

The Daily Beast

MIT engineers have developed a wireless, reconfigurable chip that could easily be snapped onto existing devices like a LEGO brick, reports Miriam Fauzia for The Daily Beast. “Having the flexibility to customize and upgrade an old device is a modder’s dream,” writes Fauzia, “but the chip may also help reduce electronic waste, which is estimated at 50 million tons a year worldwide.”

The Wall Street Journal

CSAIL researchers have developed a robotic arm equipped with a sensorized soft brush that can untangle hair, reports Douglas Belkin for The Wall Street Journal. “The laboratory brush is outfitted with sensors that detect tension,” writes Belkin. “That tension reads as pain and is used to determine whether to use long strokes or shorter ones.”

TechCrunch

TechCrunch reporter Kyle Wiggers spotlights how MIT researchers have developed a new computer vision algorithm that can identify and classify objects in images down to the individual pixel. The new algorithm is a “vast improvement over the conventional method of ‘teaching’ an algorithm to spot and classify objects in pictures and videos,” writes Wiggers.

Gizmodo

Gizmodo reporter Andrew Liszewski writes that MIT researchers “used a high-resolution video camera with excellent low-light performance (the amount of sensor noise has to be as minimal as possible) to capture enough footage of a blank wall that special processing techniques were able to not only see the shadow’s movements, but extrapolate who was creating them.”

TechCrunch

TechCrunch reporter Brian Heater spotlights RFusion, a fully integrated robotic arm developed by MIT researchers that “uses an RF antenna and camera mounted to an arm to find lost objects.”

The Wall Street Journal

Wall Street Journal reporters Angus Loten and Kevin Hand spotlight how MIT researchers are developing robots with humanlike senses that will be able to assist with a range of tasks. GelSight, a technology developed by CSAIL researchers, outfits robot arms with a small gel pad that can be pressed into objects to sense their size and texture, while another team of researchers is “working to bridge the gap between touch and sight by training an AI system to predict what a seen object feels like and what a felt object looks like.”

The Economist

Graduate student Shashank Srikant speaks with The Economist about his work developing a new model that can detect computer bugs and vulnerabilities that have been maliciously inserted into computer code.

The Wall Street Journal

MIT researchers have developed a new robot that can help locate hidden items using AI and wireless technologies, reports Benoit Morenne for The Wall Street Journal. “The latest version of the robot has a 96% success rate at finding and picking up objects in a lab setting, including clothes and household items,” writes Morenne. “In the future, this home helper could also retrieve a specific wrench or screwdriver from a toolbox and assist a human in assembling a piece of furniture.”