
Topic

Machine learning


Displaying 721–735 of 847 news clips related to this topic.

Wired

Wired reporter Matt Simon writes that CSAIL researchers have developed a new virtual system that could eventually be used to teach robots how to perform household chores. Researchers hope the system could one day help robots “learn to anticipate future actions and be able to change the environment for the human,” explains PhD student Xavier Puig.

Popular Science

MIT researchers have developed a LiDAR-based autonomous vehicle navigation system for rural roads that requires “no detailed, three-dimensional map for the vehicle to reference,” reports Rob Verger of Popular Science. “The solution for urban mapping really doesn’t scale very well to a huge portion of the country,” explains graduate student Teddy Ort.

Motherboard

CSAIL researchers have developed a system that uses LiDAR and GPS to allow self-driving cars to navigate rural roads without detailed maps, writes Tracey Lindeman of Motherboard. Autonomous ride-hailing or car-sharing is important in rural communities because “the carless in these areas have few transportation options; many small communities don’t even have public buses,” notes Lindeman.

Forbes

Eric Mack writes for Forbes about a new system from MIT researchers that uses GPS in conjunction with LiDAR and IMU sensors to power self-driving vehicle navigation. Graduate student Teddy Ort says the system “shows the potential of self-driving cars being able to actually handle roads beyond the small number that tech companies have mapped.”

Co.Design

MapLite, a new system developed by CSAIL, aims to help autonomous vehicles navigate uncharted areas, writes Jesus Diaz for Co.Design. “[I]f autonomous cars can reach the millions of people who live beyond the city and are unable to pilot their own vehicles,” said graduate student Teddy Ort, “they will be uniquely capable of providing mobility to those who have very few alternatives.”

The Verge

Writing for The Verge, Angela Chen highlights advances in AI that are allowing researchers to discover and understand new materials at a rapid pace. Chen cites a study co-authored by Assistant Prof. Elsa Olivetti, who “developed a machine-learning system that scans academic papers to figure out which ones include instructions for making certain materials.”

PBS NOVA

MIT researchers have developed “the first artificial system to mimic the way the brain interprets sound – and it rivals humans in its accuracy,” reports Samia Bouzid for NOVA Next. “The research offers a tantalizing new way to study the brain…[and] could boost some neuroscience research into the fast track,” writes Bouzid.

Co.Design

After several years of experimentation, graduate student Arnav Kapur developed AlterEgo, a device to interpret subvocalization that can be used to control digital applications. Describing the implications as “exciting,” Katharine Schwab at Co.Design writes, “The technology would enable a new way of thinking about how we interact with computers, one that doesn’t require a screen but that still preserves the privacy of our thoughts.”

The Guardian

AlterEgo, a device developed by Media Lab graduate student Arnav Kapur, “can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin,” writes Samuel Gibbs of The Guardian. “Kapur and team are currently working on collecting data to improve recognition and widen the number of words AlterEgo can detect.”

Popular Science

Researchers at the Media Lab have developed a device, known as “AlterEgo,” which allows an individual to discreetly query the internet and control devices by using a headset “where a handful of electrodes pick up the miniscule electrical signals generated by the subtle internal muscle motions that occur when you silently talk to yourself,” writes Rob Verger for Popular Science.

New Scientist

A new headset developed by graduate student Arnav Kapur reads the small muscle movements in the face that occur when the wearer thinks about speaking, and then uses “artificial intelligence algorithms to decipher their meaning,” writes Chelsea Whyte for New Scientist. Known as AlterEgo, the device “is directly linked to a program that can query Google and then speak the answers.”

The Guardian

In a forthcoming book excerpted in The Guardian, Alex Beard describes Prof. Deb Roy’s project to record his infant son’s learning behaviors. Beard explains that while Roy set out to create machines that learned like humans, he was ultimately blown away by “the incredible sophistication of what a language learner in the flesh actually looks like and does.” “The learning process wasn’t decoding, as he had originally thought, but something infinitely more continuous, complex and social.”

WGBH

A recent study from Media Lab graduate student Joy Buolamwini examines errors in facial recognition software that raise concerns for civil liberties. “If programmers are training artificial intelligence on a set of images primarily made up of white male faces, their systems will reflect that bias,” writes Cristina Quinn for WGBH.

Quartz

In a new working paper, Prof. Daron Acemoglu and his co-author argue that the rise in automation is linked to the aging of the blue-collar population. “The study shows that workers feeling the brunt of automation in lost jobs and lower wages are between the ages of 36 and 55. Those findings should make it easier for policy makers to track down the most affected workers—and help them survive the robot rush,” writes Ana Campoy for Quartz.

Mashable

Mashable highlights a robotic system, developed by researchers at MIT and Princeton, that can pick up, recognize, and place assorted objects. The researchers created an algorithm that allows the crane to “grab and sort objects (such as medicine bottles) into bins, making it a potential timesaver for medical experts.”