
Topic: Computer science and technology


Displaying 571 - 585 of 1169 news clips related to this topic.

Indvstrvs

Writing for Indvstrvs, Prof. Eugene Fitzgerald, CEO and director of the Singapore-MIT Alliance for Research and Technology (SMART), explores new advances in silicon technologies. “With silicon computing saturating, the key to the future is interconnecting with other systems wirelessly and at lower power,” writes Fitzgerald.

ZDNet

A new tool developed by MIT researchers sheds light on the operations of generative adversarial network models and allows users to edit these machine learning models to generate new images, reports Daphne Leprince-Ringuet for ZDNet. "The real challenge I'm trying to breach here," says graduate student David Bau, "is how to create models of the world based on people's imagination."

TechCrunch

A wireless system developed by CSAIL researchers can monitor movement and vital signs without using video, while also preserving privacy, reports Darrell Etherington for TechCrunch. “The system has the potential to be used at long-term care and assisted living facilities to provide a higher standard of support, while also ensuring that the privacy of the residents of those facilities is respected,” writes Etherington.

Fortune

Writing for Fortune, graduate student Nina McMurry examines how public health authorities can allay fears about contact tracing apps. “Public health authorities need to make sure that the public understands what the technology is doing,” McMurry and her co-authors write. “Even if an app is privacy-preserving, the public may not perceive it as such.”

New Scientist

Writing for New Scientist, Vijaysree Venkatraman spotlights “Coded Bias,” a new documentary that chronicles graduate student Joy Buolamwini’s “journey to uncover racial and sexist bias in face-recognition software and other artificial intelligence systems.”

Fortune

Researchers at MIT’s Center for Advanced Virtuality have created a deepfake video of President Richard Nixon discussing a failed moon landing. “[The video is] meant to serve as a warning of the coming wave of impressively realistic deepfake false videos about to hit us that use A.I. to convincingly reproduce the appearance and sound of real people,” write Aaron Pressman and David Z. Morris for Fortune.

Fast Company

Fast Company reporter Amy Farley spotlights graduate student Joy Buolamwini and her work battling bias in artificial intelligence systems, noting that “when it comes to AI injustices, her voice resonates.” Buolamwini emphasizes that “we have a voice and a choice in the kind of future we have.”

Boston 25 News

Boston 25’s Chris Flanagan reports that MIT researchers developed a website aimed at educating the public about deepfake technology and misinformation. “This project is part of an awareness campaign to get people aware of what is possible with both AI technologies like our deepfake, but also really simple video editing technologies,” says Francesca Panetta, XR creative director at MIT’s Center for Advanced Virtuality.

Space.com

MIT researchers created a deepfake video and website to help educate the public about the dangers of deepfakes and misinformation, reports Mike Wall for Space.com. “This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Francesca Panetta, XR creative director at MIT’s Center for Advanced Virtuality.

Scientific American

Scientific American explores how MIT researchers created a new website aimed at exploring the potential perils and possibilities of deepfakes. “One of the things I most love about this project is that it’s using deepfakes as a medium and the arts to address the issue of misinformation in our society,” says Prof. D. Fox Harrell.

New York Times

Prof. Fox Harrell speaks with New York Times reporter Joshua Rothkopf about the educational potential of deepfake technology. “To have the savvy to negotiate a political media landscape where a video could potentially be a deepfake, or a legitimate video could be called a deepfake, I think those are cases people need to be aware of,” says Harrell.

WBUR

WBUR’s Bob Shaffer reports on a deepfake video created by MIT's Center for Advanced Virtuality, which aims to spark awareness of deepfake technologies. The goal is to highlight how deepfakes are an extension “of a continuum of misinformation that we all should be aware of and should have our ears tuned to, if we can,” said co-director Halsey Burgund.

STAT

MIT researchers have developed an AI system that can predict Alzheimer’s risk by forecasting how patients will perform on a test measuring cognitive decline up to two years in advance, reports Casey Ross for STAT.

WHDH 7

7 News spotlights how CSAIL researchers have developed two new software systems aimed at allowing anyone to design and customize their own knitting patterns. “The researchers tested the software by having people with no knitting experience design gloves and hats,” explains 7 News reporter Keke Vencill.

Smithsonian Magazine

Smithsonian reporter Emily Matchar spotlights AlterEgo, a device developed by MIT researchers to help people with speech pathologies communicate. “A lot of people with all sorts of speech pathologies are deprived of the ability to communicate with other people,” says graduate student Arnav Kapur. “This could restore the ability to speak for people who can’t.”