Topic

Computer science and technology

Curiosity Stream

Four faculty members from across MIT - Professors Song Han, Simon Johnson, Yoon Kim and Rosalind Picard - speak with Curiosity Stream about the opportunities and risks posed by the rapid advancements in the field of AI. “We do want to think about which human capabilities we treasure,” says Picard. She adds that during the Covid-19 pandemic, “we saw a lot of loss of people's ability to communicate with one another face-to-face when their world moved online. I think we need to be thoughtful and intentional about what we're building with the technology and whether it's diminishing who we are or enhancing it.”

Forbes

Forbes contributor Lucio Ribeiro spotlights Andrew Ng MS '98 and Jaime Teevan SM '01, PhD '07 as two of eight “AI superheroes whose work is transforming technology and challenges our understanding of what’s possible.” Ng is the CEO of Landing AI, and “his efforts in educating the masses about AI through platforms like Coursera, which he co-founded, have democratized AI knowledge, bridging the gap between academia and industry,” writes Ribeiro. Teevan’s “work is focused on making AI more accessible and useful to people in their everyday lives.”

Nature

MIT researchers have “used an algorithm to sort through millions of genomes to find new, rare types of CRISPR systems that could eventually be adapted into genome-editing tools,” writes Sara Reardon for Nature. “We are just amazed at the diversity of CRISPR systems,” says Prof. Feng Zhang. “Doing this analysis kind of allows us to kill two birds with one stone: both study biology and also potentially find useful things.”

TechCrunch

Prof. Russ Tedrake and Max Bajracharya '21 MEng '21 speak with TechCrunch reporter Brian Heater about the impact of generative AI on the future of robotics. “Generative AI has the potential to bring revolutionary new capabilities to robotics,” says Tedrake. “Not only are we able to communicate with robots in natural language, but connecting to internet-scale language and image data is giving robots a much more robust understanding and reasoning about the world.”

Marketplace

Prof. Héctor Beltrán speaks with Lily Jamali of Marketplace about his new book, “Code Work: Hacking across the US/México Techno-Borderlands,” which explores the culture of hackathons and entrepreneurship in Mexico. “Ultimately, it’s about difference, thinking about Silicon Valley from Mexico,” says Beltrán. “Also, from a Chicano/Latino perspective, because as I show throughout the book, there’s these connections, tensions, intersections between the Latino community in the U.S., the Latin American community, the Mexican community.”

Fortune

Principal Research Scientist Andrew McAfee discussed the impact of artificial intelligence and technology on businesses at the Fortune CFO Collaborative, reports Sheryl Estrada for Fortune. Generative AI is “going to diffuse throughout the economy,” said McAfee. “It’s going to separate winners from losers, and it’s going to turbocharge the winners faster than you and I have been expecting.”

The Economist

The Economist reporter Rachel Lloyd predicts a “distinct change” in topics for bestselling books in 2024. Lloyd predicts artificial intelligence will take a lead, spotlighting “The Heart and the Chip: Our Bright Future with Robots,” by Prof. Daniela Rus, director of CSAIL, as a leading example of the shift.

The Wall Street Journal

Prof. David Rand speaks with Wall Street Journal reporter Christopher Mims about the impact of generative AI on the spread of misinformation. “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says Rand.

Inside Higher Ed

Prof. Nick Montfort speaks with Inside Higher Ed reporter Lauren Coffey about the use of AI tools in art and education. “The primary concern should be how can we provide new education, how can we inform students, artists, instructors,” says Montfort. “How can we, not so much bolt policies or create guardrails, but how can we help them understand new technologies better and their implications?”

Living on Earth

Prof. Kerry Emanuel speaks with Living on Earth host Jenni Doering about the future of extreme weather forecasting. “We have to do a much better job projecting long term risk, and how that's changing as the climate changes so that people can make intelligent decisions about where they're going to live, what they're going to build, and so on,” says Emanuel. “We need better models, we need better computers, so that we can resolve the atmosphere better, we need to make better measurements of the ocean below the surface, that's really tough to do.”

Los Angeles Times

Los Angeles Times reporter Brian Merchant spotlights Joy Buolamwini PhD '22 and her new book, “Unmasking AI: My Mission to Protect What is Human in a World of Machines.” “Buolamwini’s book recounts her journey to become one of the nation’s preeminent scholars and critics of artificial intelligence — she recently advised President Biden before the release of his executive order on AI — and offers readers a compelling, digestible guide to some of the most pressing issues in the field,” writes Merchant.

The Boston Globe

Joy Buolamwini PhD '22 speaks with Brian Bergstein of The Boston Globe’s “Say More” podcast about her academic and professional career studying bias in AI. “As I learned more and also became familiar with the negative impacts of things like facial recognition technologies, it wasn’t just the call to say let’s make systems more accurate but a call to say let’s reexamine the ways in which we create AI in the first place and let’s reexamine our measures of progress because so far they have been misleading,” says Buolamwini.

Yahoo! News

Prof. Sinan Aral speaks with Yahoo! Finance Live host Julie Hyman about President Biden’s executive order on artificial intelligence regulation. “This is big, it's bold, it's broad,” says Aral. “It has a number of provisions. It has provisions for safety, for privacy, for equity, for workers, for competition and innovation, and leadership abroad. And it really targets those foundation models, the big AI companies in terms of their safety and security standards.”

The Boston Globe

Joy Buolamwini PhD '22 writes for The Boston Globe about her experience uncovering bias in artificial intelligence through her academic and professional career. “I critique AI from a place of having been enamored with its promise, as an engineer more eager to work with machines than with people at times, as an aspiring academic turned into an accidental advocate, and also as an artist awakened to the power of the personal when addressing the seemingly technical,” writes Buolamwini. “The option to say no, the option to halt a project, the option to admit to the creation of dangerous and harmful though well-intentioned tools must always be on the table.”

Axios

Researchers from MIT and elsewhere have developed a transparency index used to assess 10 key AI foundation models, reports Ryan Heath for Axios. Heath writes that the researchers emphasized that “unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.”