
Topic: Artificial intelligence


Displaying 616 - 630 of 1027 news clips related to this topic.

Gizmodo

Gizmodo reporter Andrew Liszewski writes that MIT researchers have developed a robot that can play Jenga using visual and physical cues. The ability to feel “facilitated the robot’s ability to learn how to play all on its own, both in terms of finding a block that was loose enough to remove, and repositioning it on the top of the tower without upsetting the delicate balance.”

Popular Science

A new robot developed by MIT researchers uses AI and sensors to play the game of Jenga, reports Rob Verger for Popular Science. “It decides on its own which block to push, [and] which blocks to probe; it decides on its own how to extract them; and it decides on its own when it’s a good idea to keep extracting them, or to move to another one,” says Prof. Alberto Rodriguez.

Wired

Wired reporter Matt Simon writes that MIT researchers have engineered a robot that can teach itself to play the game of Jenga. As Simon explains, the development is a “big step in the daunting quest to get robots to manipulate objects in the real world.”

WSJ at Large

President Reif speaks with Gerry Baker of WSJ at Large about the impact of AI on the future of education and work. “Part of the goal of the [MIT Schwarzman] college is, as we educate people to use these [AI] tools, to educate them in a way that empowers human beings, not replaces human beings,” says Reif. 

Associated Press

Associated Press reporter Tali Arbel writes that MIT researchers have found that Amazon’s facial-detection technology often misidentifies women, particularly women with darker skin. Arbel writes that the study “warns of the potential for abuse and threats to privacy and civil liberties from facial-detection technology.”

The Washington Post

A new study by Media Lab researchers finds that Amazon’s Rekognition facial recognition system performed more accurately when identifying lighter-skinned faces, reports Drew Harwell for The Washington Post. The system “performed flawlessly in predicting the gender of lighter-skinned men,” writes Harwell, “but misidentified the gender of darker-skinned women in roughly 30 percent of their tests.”

The Verge

Verge reporter James Vincent writes that Media Lab researchers have found that the facial recognition system Rekognition performed worse at identifying an individual’s gender if they were female or dark-skinned. In experiments, the researchers found that the system “mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time,” Vincent explains.

New York Times

MIT researchers have found that the Rekognition facial recognition system has more difficulty than similar services in identifying the gender of female and darker-skinned faces, reports Natasha Singer for The New York Times. Graduate student Joy Buolamwini said “the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations,” writes Singer.

The New Yorker

New Yorker contributor Caroline Lester writes about the Moral Machine, an online platform developed by MIT researchers to crowdsource public opinion on the ethical issues posed by autonomous vehicles. 

New York Times

New York Times reporter Steve Lohr writes about the MIT AI Policy Conference, which examined how society, industry and governments should manage the policy questions surrounding the evolution of AI technologies. “If you want people to trust this stuff, government has to play a role,” says CSAIL principal research scientist Daniel Weitzner.

Forbes

Prof. Max Tegmark speaks with Forbes contributor Peter High about his work trying to ensure that AI technologies are implemented in a way that is beneficial to society. “If we plan accordingly and steer technology in the right direction, we can create an inspiring future that will allow humanity to flourish in a way that we have never seen before,” says Tegmark.

Boston Globe

Writing for The Boston Globe, members of the Media Lab’s Scalable Cooperation research group argue that independent oversight is needed to ensure that new AI technologies are developed in an ethical manner. “AI is the new framework of our lives,” they write. “We need to ensure it’s a safe, human-positive framework, from top to bottom.”

Forbes

Writing for Forbes, Prof. David Mindell explores the concept of using work, in particular the duties a home health aide performs, as a Turing test for the abilities of AI systems. “In this era of anxiety about AI technologies changing the nature of work,” writes Mindell, “everything we know about work should also change the nature of AI.”

Wired

Prof. Daniela Rus and R. David Edelman, director of the Project on Technology, Economy, and National Security at MIT, speak with Matt Simon at Wired about working with robots. “The robots have a fixed architecture and they have a fixed vocabulary,” explains Rus. “So, people will continue to have to learn that and understand what the tool is useful for.”

The Wall Street Journal

Provost Martin Schmidt and SHASS Dean Melissa Nobles speak with Wall Street Journal reporter Sara Castellanos about MIT’s efforts to advance the study of AI and its ethical and societal implications through the MIT Stephen A. Schwarzman College of Computing. Schmidt says this work “requires a deep partnership between the technologists and the humanists.”