
Computer science and technology


The Economist

The Economist reporter Rachel Lloyd predicts a “distinct change” in topics for bestselling books in 2024, with artificial intelligence taking the lead. Lloyd spotlights “The Heart and Chip: Our Bright Future with Robots,” by Prof. Daniela Rus, director of CSAIL, as a leading example of the shift.

The Wall Street Journal

Prof. David Rand speaks with Wall Street Journal reporter Christopher Mims about the impact of generative AI on the spread of misinformation. “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says Rand.

Inside Higher Ed

Prof. Nick Montfort speaks with Inside Higher Ed reporter Lauren Coffey about the use of AI tools in art and education. “The primary concern should be how can we provide new education, how can we inform students, artists, instructors,” says Montfort. “How can we, not so much bolt policies or create guardrails, but how can we help them understand new technologies better and their implications?”

Living on Earth

Prof. Kerry Emanuel speaks with Living on Earth host Jenni Doering about the future of extreme weather forecasting. “We have to do a much better job projecting long term risk, and how that's changing as the climate changes so that people can make intelligent decisions about where they're going to live, what they're going to build, and so on,” says Emanuel. “We need better models, we need better computers, so that we can resolve the atmosphere better, we need to make better measurements of the ocean below the surface, that's really tough to do.”

Los Angeles Times

Los Angeles Times reporter Brian Merchant spotlights Joy Buolamwini PhD '22 and her new book, “Unmasking AI: My Mission to Protect What is Human in a World of Machines.” “Buolamwini’s book recounts her journey to become one of the nation’s preeminent scholars and critics of artificial intelligence — she recently advised President Biden before the release of his executive order on AI — and offers readers a compelling, digestible guide to some of the most pressing issues in the field,” writes Merchant.

The Boston Globe

Joy Buolamwini PhD '22 speaks with Brian Bergstein of The Boston Globe’s “Say More” podcast about her academic and professional career studying bias in AI. “As I learned more and also became familiar with the negative impacts of things like facial recognition technologies, it wasn’t just the call to say let’s make systems more accurate but a call to say let’s reexamine the ways in which we create AI in the first place and let’s reexamine our measures of progress because so far they have been misleading,” says Buolamwini.

Yahoo! News

Prof. Sinan Aral speaks with Yahoo! Finance Live host Julie Hyman about President Biden’s executive order on artificial intelligence regulation. “This is big, it's bold, it's broad,” says Aral. “It has a number of provisions. It has provisions for safety, for privacy, for equity, for workers, for competition and innovation, and leadership abroad. And it really targets those foundation models, the big AI companies in terms of their safety and security standards.”

The Boston Globe

Joy Buolamwini PhD '22 writes for The Boston Globe about her experience uncovering bias in artificial intelligence through her academic and professional career. “I critique AI from a place of having been enamored with its promise, as an engineer more eager to work with machines than with people at times, as an aspiring academic turned into an accidental advocate, and also as an artist awakened to the power of the personal when addressing the seemingly technical,” writes Buolamwini. “The option to say no, the option to halt a project, the option to admit to the creation of dangerous and harmful though well-intentioned tools must always be on the table.”

Axios

Researchers from MIT and elsewhere have developed a transparency index used to assess 10 key AI foundation models, reports Ryan Heath for Axios. Heath writes that the researchers emphasized that “unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.”

Forbes

Tom Davenport, a visiting scholar at the MIT Initiative on the Digital Economy, writes for Forbes about how organizations are approaching generative AI. “If organizations are to succeed with generative AI, they need to increase the focus on data preparation for it, which is a primary prerequisite for success,” writes Davenport.

CBS Boston

Graduate student Kaylee Cunningham speaks with CBS Boston about her work using social media to help educate and inform the public about nuclear energy. Cunningham, who is known as Ms. Nuclear Energy on TikTok, recalls how as a child she was involved in musical theater, a talent she has now combined with her research interests as an engineer. She adds that she also hopes her platform inspires more women to pursue STEM careers. “You don't have to look like the stereotypical engineer,” Cunningham emphasizes.

Fortune

Graduate student Sarah Gurev and her colleagues have developed a new AI system named EVEscape that can “predict alterations likely to occur to viruses as they evolve,” reports Erin Prater for Fortune. Gurev says that with the amount of data the system has amassed, it “can make surprisingly accurate predictions.”

TechCrunch

Arvid Lunnemark '22, Michael Truell '22, Sualeh Asif '22, and Aman Sanger '22 co-founded Anysphere, a startup building an “AI-native” software development environment called Cursor, reports Kyle Wiggers for TechCrunch. “In the next several years, our mission is to make programming an order of magnitude faster, more fun and creative,” says Truell. “Our platform enables all developers to build software faster.”

Forbes

Curtis Northcutt SM '17, PhD '21, Jonas Mueller PhD '18, and Anish Athalye SB '17, SM '17, PhD '23 have co-founded Cleanlab, a startup aimed at fixing data problems in AI models, reports Alex Konrad for Forbes. “The reality is that every single solution that’s data-driven — and the world has never been more data-driven — is going to be affected by the quality of the data,” says Northcutt.

Axios

Axios reporter Alison Snyder writes about a new study by MIT researchers finding that preconceived notions about AI chatbots can impact people’s experiences with them. Prof. Pattie Maes explains that the technology’s developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned, but we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don’t just depend on the AI and the quality of the AI. It depends on how the human responds to the AI.”