
Topic: Media Lab



The Boston Globe

Joy Buolamwini PhD '22 speaks with Brian Bergstein of The Boston Globe’s “Say More” podcast about her academic and professional career studying bias in AI. “As I learned more and also became familiar with the negative impacts of things like facial recognition technologies, it wasn’t just the call to say let’s make systems more accurate but a call to say let’s reexamine the ways in which we create AI in the first place and let’s reexamine our measures of progress because so far they have been misleading,” says Buolamwini.

Popular Science

MIT researchers have developed a new programmable, shape-changing smart fiber called FibeRobo that can change its structure in response to hot or cold temperatures, reports Andrew Paul for Popular Science. “FibeRobo is flexible and strong enough to use within traditional manufacturing methods like embroidery, weaving looms, and knitting machines,” writes Paul. “With an additional ability to combine with electrically conductive threads, a wearer could directly control their FibeRobo clothing or medical wearables like compression garments via wireless inputs from a controller or smartphone.”

National Geographic

MIT researchers have designed a wearable ultrasound device that could help make breast cancer screening more accessible, reports Carrie Arnold for National Geographic.  “Early detection is the key for survival,” says Prof. Canan Dagdeviren. “Our humble calculation shows that this technology has the potential to save 12 million lives per year globally.”

The Boston Globe

Joy Buolamwini PhD '22 writes for The Boston Globe about her experience uncovering bias in artificial intelligence through her academic and professional career. “I critique AI from a place of having been enamored with its promise, as an engineer more eager to work with machines than with people at times, as an aspiring academic turned into an accidental advocate, and also as an artist awakened to the power of the personal when addressing the seemingly technical,” writes Buolamwini. “The option to say no, the option to halt a project, the option to admit to the creation of dangerous and harmful though well-intentioned tools must always be on the table.”

The Washington Post

Graduate student Shayne Longpre speaks with Washington Post reporter Nitasha Tiku about the ethical and legal implications surrounding language model datasets. Longpre says “the lack of proper documentation is a community-wide problem that stems from modern machine-learning practices.”

The Boston Globe

Writing for The Boston Globe, Prof. Mitchel Resnick explores how a new coding app developed by researchers in the Lifelong Kindergarten group lets young people use mobile phones to create interactive stories, games, and animations. Resnick makes the case that with “appropriate apps and support, mobile phones can provide opportunities for young people to imagine, create, and share projects.”

The World

Research scientist Nataliya Kosmyna speaks with The World host Chris Harland-Dunaway about the science behind using non-invasive brain scans and artificial intelligence to understand people’s thoughts and mental images.

Scientific American

Researchers at MIT have designed “a wearable ultrasound scanner that could be used at home to detect breast tumors earlier,” reports Simon Makin for Scientific American. “The researchers incorporated the scanner into a flexible, honeycombed 3-D-printed patch that can be fixed into a bra,” explains Makin. “The wearer moves the scanner among six different positions on the breast, where it snaps into place with magnets, allowing reproducible scanning of the whole breast.”

The Atlantic

Writing for The Atlantic, Prof. Deb Roy makes the case that “new kinds of social networks can be designed for constructive communication—for listening, dialogue, deliberation, and mediation—and they can actually work.” Roy adds: “We can and should create social networks designed for public discourse that prioritize inclusion, where underheard voices and perspectives can flourish, and where people take and offer disagreement in good faith.”

Axios

Axios reporter Alison Snyder writes that a new study by MIT researchers finds preconceived notions about AI chatbots can shape people’s experiences with them. Prof. Pattie Maes explains that the technology's developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned, but we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don't just depend on the AI and the quality of the AI. It depends on how the human responds to the AI.”

Scientific American

MIT researchers have found that user bias can drive interactions with AI chatbots, reports Nick Hilden for Scientific American.  “When people think that the AI is caring, they become more positive toward it,” graduate student Pat Pataranutaporn explains. “This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well.”

GBH

Prof. Eric Klopfer, co-director of the RAISE initiative (Responsible AI for Social Empowerment in Education), speaks with GBH reporter Diane Adame about the importance of providing students guidance on navigating artificial intelligence systems. “I think it's really important for kids to be aware that these things exist now, because whether it's in school or out of school, they are part of systems where AI is present,” says Klopfer. “Many humans are biased. And so the [AI] systems express those same biases that they've seen online and the data that they've collected from humans.”

WBUR

WBUR’s Lloyd Schwartz spotlights Prof. Tod Machover’s revival of “VALIS” at MIT, staged by Prof. Jay Scheib. “The score is an inventive and often hauntingly beautiful arrangement of synthesizer, live instruments, and electronically expanded instruments,” writes Schwartz, “which Machover calls ‘hyper-instruments,’ a compelling amalgamation of minimalism, medieval, Wagner and rock.”

Popular Science

Popular Science reporter Andrew Paul writes that MIT researchers have developed a new long-range, low-power underwater communication system. Installing underwater communication networks “could help continuously measure a variety of oceanic datasets such as pressure, CO2, and temperature to refine climate change modeling,” writes Paul, “as well as analyze the efficacy of certain carbon capture technologies.”

The Boston Globe

Prof. Tod Machover speaks with Boston Globe reporter A.Z. Madonna about the restaging of his opera “VALIS” at MIT, which features an AI-assisted musical instrument developed by Nina Masuelli ’23. “In all my career, I’ve never seen anything change as fast as AI is changing right now, period,” said Machover. “So to figure out how to steer it towards something productive and useful is a really important question right now.”