Topic

Machine learning

Displaying 751 - 765 of 869 news clips related to this topic.

The Guardian

AlterEgo, a device developed by Media Lab graduate student Arnav Kapur, “can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin,” writes Samuel Gibbs of The Guardian. “Kapur and team are currently working on collecting data to improve recognition and widen the number of words AlterEgo can detect.”

Popular Science

Researchers at the Media Lab have developed a device, known as “AlterEgo,” which allows an individual to discreetly query the internet and control devices by using a headset “where a handful of electrodes pick up the miniscule electrical signals generated by the subtle internal muscle motions that occur when you silently talk to yourself,” writes Rob Verger for Popular Science.

New Scientist

A new headset developed by graduate student Arnav Kapur reads the small muscle movements in the face that occur when the wearer thinks about speaking, and then uses “artificial intelligence algorithms to decipher their meaning,” writes Chelsea Whyte for New Scientist. Known as AlterEgo, the device “is directly linked to a program that can query Google and then speak the answers.”

The Guardian

In a forthcoming book excerpted in The Guardian, Alex Beard describes Prof. Deb Roy’s project to record his infant son’s learning behaviors. Beard explains that while Roy set out to create machines that learned like humans, he was ultimately blown away by “the incredible sophistication of what a language learner in the flesh actually looks like and does.” “The learning process wasn’t decoding, as he had originally thought, but something infinitely more continuous, complex and social.”

WGBH

A recent study from Media Lab graduate student Joy Buolamwini addresses errors in facial recognition software that create concern for civil liberties. “If programmers are training artificial intelligence on a set of images primarily made up of white male faces, their systems will reflect that bias,” writes Cristina Quinn for WGBH.

Quartz

In a new working paper, Prof. Daron Acemoglu and his co-author argue that the rise in automation is linked to the aging of the blue-collar population. “The study shows that workers feeling the brunt of automation in lost jobs and lower wages are between the ages of 36 and 55. Those findings should make it easier for policy makers to track down the most affected workers—and help them survive the robot rush,” writes Ana Campoy for Quartz.

Mashable

Mashable highlights a robotic system, developed by researchers at MIT and Princeton, that can pick up, recognize, and place assorted objects. The researchers created an algorithm that allows the system to “grab and sort objects (such as medicine bottles) into bins, making it a potential timesaver for medical experts.”

Financial Times

In an article for the Financial Times, CSAIL Director Daniela Rus explains why humans should collaborate with AI rather than compete against it. “Technology and people do not have to be in competition,” writes Rus. “Collaborating with AI systems, we can augment and amplify many aspects of work and life.”

Quartz

Lecturer Luis Perez-Breva writes for Quartz about why most retail corporations’ definition of AI is flawed. “'AI' is at its best when we program it to address problems that are hard for humans; when not used to upskill humans, however, all it does is shift work from employees to customers,” Perez-Breva writes.

Xinhuanet

AI leader SenseTime is the first company to join the MIT Intelligence Quest since its launch, writes Xinhua editor Xiang Bo. “As the largest provider of AI algorithms in China, we are very excited to work with MIT to lead global AI research into the next frontier,” said Xu Li, CEO of SenseTime.

Financial Times

A video from the Financial Times highlights work being done by CSAIL to develop robot teams. Prof. Daniela Rus discusses how partnering robots has the potential to “form much more adaptive and complex systems that will be able to take on a wider set of tasks.”

TechCrunch

Spun out from MIT, Feature Labs helps companies identify, implement, and deploy impactful machine learning products, writes Ron Miller of TechCrunch. By automating the manual process of feature engineering, data scientists “can spend more time figuring out what they need to predict,” says co-founder Max Kanter ’15.

The Economist

An article in The Economist states that new research by MIT graduate student Joy Buolamwini supports the suspicion that facial recognition software is better at processing white faces than those of other people. The bias probably arises “from the sets of data the firms concerned used to train their software,” the article suggests.

Quartz

Dave Gershgorn writes for Quartz, highlighting Congress’s concerns about the dangers of inaccurate facial recognition programs. He cites Joy Buolamwini’s Media Lab research on facial recognition, which he says “maintains that facial recognition is still significantly worse for people of color.”

Forbes

A new paper from graduate students in EECS details a newly developed chip that allows neural networks to function offline while drastically reducing power usage. “That means smartphones and even appliances and smaller Internet of Things devices could run neural networks locally,” writes Eric Mack for Forbes.