Topic

Machine learning

Displaying 736 - 750 of 847 news clips related to this topic.

Financial Times

In an article for Financial Times, CSAIL Director Daniela Rus explains why humans should collaborate rather than compete with AI. “Technology and people do not have to be in competition,” writes Rus. “Collaborating with AI systems, we can augment and amplify many aspects of work and life.”

Quartz

Lecturer Luis Perez-Breva writes for Quartz about why most retail corporations’ definition of AI is flawed. “'AI' is at its best when we program it to address problems that are hard for humans; when not used to upskill humans, however, all it does is shift work from employees to customers,” Perez-Breva writes.

Xinhuanet

AI leader SenseTime is the first company to join the MIT Intelligence Quest since its launch, writes Xinhua editor Xiang Bo. “As the largest provider of AI algorithms in China, we are very excited to work with MIT to lead global AI research into the next frontier,” said Xu Li, CEO of SenseTime.

Financial Times

A video from Financial Times highlights work being done by CSAIL to develop robot teams. Prof. Daniela Rus discusses how partnering robots has the potential to “form much more adaptive and complex systems that will be able to take on a wider set of tasks.”

TechCrunch

Spun out from MIT, Feature Labs helps companies identify, implement, and deploy impactful machine learning products, writes Ron Miller of TechCrunch. By automating the manual process of feature engineering, data scientists “can spend more time figuring out what they need to predict,” says co-founder Max Kanter ’15.

The Economist

An article in The Economist states that new research by MIT grad student Joy Buolamwini supports the suspicion that facial recognition software is better at processing white faces than those of other people. The bias probably arises “from the sets of data the firms concerned used to train their software,” the article suggests.

Quartz

Dave Gershgorn writes for Quartz, highlighting Congress’s concerns about the dangers of inaccurate facial recognition programs. He cites Joy Buolamwini’s Media Lab research on facial recognition, which he says “maintains that facial recognition is still significantly worse for people of color.”

Forbes

A new paper from graduate students in EECS details a newly developed chip that allows neural networks to function offline while drastically reducing power usage. “That means smartphones and even appliances and smaller Internet of Things devices could run neural networks locally,” writes Eric Mack for Forbes.

TechCrunch

Brian Heater of TechCrunch covers how researchers are creating a system that will allow robots to develop motor skills and process abstract concepts. “With this system, the robots can perform complex tasks without getting bogged down in the minutiae required to complete them,” Heater writes.

TechCrunch

MIT researchers have designed a new chip to enhance the functionality of neural networks while simultaneously reducing the consumption of power, writes Darrell Etherington of TechCrunch. “The basic concept involves simplifying the chip design so that shuttling of data between different processors on the same chip is taken out of the equation,” he explains.

New Scientist

Graduate student Joy Buolamwini tested three different face-recognition systems and found that the accuracy is best when the subject is a lighter skinned man, reports Timothy Revell for New Scientist. With facial recognition software being used by police to identify suspects, “this means inaccuracies could have consequences, such as systematically ingraining biases in police stop and searches,” writes Revell.

Marketplace

Molly Wood at Marketplace speaks with Media Lab graduate student Joy Buolamwini about the findings of her recent research, which examined widespread bias in AI-supported facial recognition programs. “At the end of the day, data reflects our history, and our history has been very biased to date,” Buolamwini said.

co.design

Recent research from graduate student Joy Buolamwini shows that facial recognition programs, which are increasingly being used by law enforcement, are failing to identify non-white faces. “When these systems can’t recognize darker faces with as much accuracy as lighter faces, there’s a higher likelihood that innocent people will be targeted by law enforcement,” writes Katharine Schwab for Co.Design.

Gizmodo

Writing for Gizmodo, Sidney Fussell explains that a new Media Lab study finds facial-recognition software is most accurate when identifying men with lighter skin and least accurate for women with darker skin. The software analyzed by graduate student Joy Buolamwini “misidentified the gender of dark-skinned females 35 percent of the time,” explains Fussell.

Quartz

A study co-authored by MIT graduate student Joy Buolamwini finds that facial-recognition software is less accurate when identifying darker skin tones, especially those of women, writes Josh Horwitz of Quartz. According to the study, these errors could cause AI services to “treat individuals differently based on factors such as skin color or gender,” explains Horwitz.