
Topic: Computer vision



Fast Company

Fast Company reporter Elissaveta Brandon writes that a team of scientists from MIT and elsewhere has developed an amphibious artificial vision system inspired by the fiddler crab’s compound eye, which has an almost 360-degree field of view and can see on both land and water. “When translated into a machine,” writes Brandon, “this could mean more versatile cameras for self-driving cars and drones, both of which can become untrustworthy in the rain.”

TechCrunch

MIT researchers have developed FuseBot, a new system that combines RFID tagging with a robotic arm to retrieve hidden objects from a pile, reports Brian Heater for TechCrunch. “As long as some objects within the pile are tagged, the system can determine where its subject is most likely located and the most efficient way to retrieve it,” writes Heater.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have developed an “electronics chip design that allows for sensors and processors to be easily swapped out or added on, like bricks of LEGO.” Hu writes that “a reconfigurable, modular chip like this could be useful for upgrading smartphones, computers, or other devices without producing as much waste.”

The Daily Beast

MIT engineers have developed a wireless, reconfigurable chip that could easily be snapped onto existing devices like a LEGO brick, reports Miriam Fauzia for The Daily Beast. “Having the flexibility to customize and upgrade an old device is a modder’s dream,” writes Fauzia, “but the chip may also help reduce electronic waste, which is estimated at 50 million tons a year worldwide.”

The Wall Street Journal

CSAIL researchers have developed a robotic arm equipped with a sensorized soft brush that can untangle hair, reports Douglas Belkin for The Wall Street Journal. “The laboratory brush is outfitted with sensors that detect tension,” writes Belkin. “That tension reads as pain and is used to determine whether to use long strokes or shorter ones.”

TechCrunch

TechCrunch reporter Kyle Wiggers spotlights how MIT researchers have developed a new computer vision algorithm that can identify images down to the individual pixel. The new algorithm is a “vast improvement over the conventional method of ‘teaching’ an algorithm to spot and classify objects in pictures and videos,” writes Wiggers.

Gizmodo

Gizmodo reporter Andrew Liszewski writes that MIT researchers “used a high-resolution video camera with excellent low-light performance (the amount of sensor noise has to be as minimal as possible) to capture enough footage of a blank wall that special processing techniques were able to not only see the shadow’s movements, but extrapolate who was creating them.”

TechCrunch

TechCrunch reporter Brian Heater spotlights RFusion, a fully integrated robotic arm developed by MIT researchers that “uses an RF antenna and camera mounted to an arm to find lost objects.”

The Wall Street Journal

Wall Street Journal reporters Angus Loten and Kevin Hand spotlight how MIT researchers are developing robots with humanlike senses that will be able to assist with a range of tasks. GelSight, a technology developed by CSAIL researchers, outfits robot arms with a small gel pad that can be pressed into objects to sense their size and texture, while another team of researchers is “working to bridge the gap between touch and sight by training an AI system to predict what a seen object feels like and what a felt object looks like.”

Economist

Graduate student Shashank Srikant speaks with The Economist about his work developing a new model that can detect bugs and vulnerabilities maliciously inserted into computer code.

The Wall Street Journal

MIT researchers have developed a new robot that can help locate hidden items using AI and wireless technologies, reports Benoit Morenne for The Wall Street Journal. “The latest version of the robot has a 96% success rate at finding and picking up objects in a lab setting, including clothes and household items,” writes Morenne. “In the future, this home helper could also retrieve a specific wrench or screwdriver from a toolbox and assist a human in assembling a piece of furniture.”

Mashable

MIT researchers have developed a new robot with a tactile sensing finger that can find objects buried in sand or rice, reports Emmett Smith for Mashable. “The robot could eventually perform other underground duties like identifying buried cables or disarming bombs or land mines.”

Mashable

Mashable spotlights Strolling Cities, a video project from the MIT-IBM Watson AI Lab, which uses AI to allow users to imagine what different words would look like as a location. “Unlike other image-generating AI systems, Strolling Cities creates fictional cities every time,” Mashable notes.

Fast Company

Fast Company reporter Mark Wilson spotlights Strolling Cities, a new AI video project developed by researchers from the MIT-IBM Watson AI Lab, which recreates the streets of Italy based on millions of photos and words. “I decided that the beauty and sentiment, the social, historical, and psychological contents of my memories of Italy could become an artistic project, probably a form of emotional consolation,” says Mauro Martino of the MIT-IBM Watson AI Lab. “Something beautiful always comes out of nostalgia.”

TechCrunch

TechCrunch reporter Brian Heater writes that MIT researchers have developed a new robotic finger, dubbed the Digger Finger, that can sense and identify objects underground. “It’s a useful skill that could someday be deployed for landmines, finding underground cables and a variety of other tasks.”