Topic

Artificial intelligence

Displaying 346 - 360 of 1229 news clips related to this topic.

Fortune

Principal Research Scientist Andrew McAfee discussed the impact of artificial intelligence and technology on businesses at the Fortune CFO Collaborative, reports Sheryl Estrada for Fortune. Generative AI is “going to diffuse throughout the economy,” said McAfee. “It’s going to separate winners from losers, and it’s going to turbocharge the winners faster than you and I have been expecting.”  

The Economist

The Economist reporter Rachel Lloyd predicts a “distinct change” in topics for bestselling books in 2024. Lloyd predicts artificial intelligence will take the lead, spotlighting “The Heart and the Chip: Our Bright Future with Robots,” by Prof. Daniela Rus, director of CSAIL, as a leading example of the shift.

The Wall Street Journal

Prof. David Rand speaks with Wall Street Journal reporter Christopher Mims about the impact of generative AI on the spread of misinformation. “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says Rand.

Fortune

Writing for Fortune, Sloan research fellow Michael Schrage and his colleagues explain how AI-enabled key performance indicators (KPIs) can help companies better understand and measure success. “Driving strategic alignment within their organization is an increasingly important priority for senior executives,” they write. “AI-enabled KPIs are powerful tools for achieving this. By getting their data right, using appropriate organizational constructs, and driving a cultural shift towards data-driven decision making, organizations can effectively govern the creation and deployment of AI-enabled KPIs.”

Inside Higher Ed

Prof. Nick Montfort speaks with Inside Higher Ed reporter Lauren Coffey about the use of AI tools in art and education. “The primary concern should be how can we provide new education, how can we inform students, artists, instructors,” says Montfort. “How can we, not so much bolt policies or create guardrails, but how can we help them understand new technologies better and their implications?”

Los Angeles Times

Los Angeles Times reporter Brian Merchant spotlights Joy Buolamwini PhD '22 and her new book, “Unmasking AI: My Mission to Protect What is Human in a World of Machines.” “Buolamwini’s book recounts her journey to become one of the nation’s preeminent scholars and critics of artificial intelligence — she recently advised President Biden before the release of his executive order on AI — and offers readers a compelling, digestible guide to some of the most pressing issues in the field,” writes Merchant.

The Boston Globe

Joy Buolamwini PhD '22 speaks with Brian Bergstein of The Boston Globe’s “Say More” podcast about her academic and professional career studying bias in AI. “As I learned more and also became familiar with the negative impacts of things like facial recognition technologies, it wasn’t just the call to say let’s make systems more accurate but a call to say let’s reexamine the ways in which we create AI in the first place and let’s reexamine our measures of progress because so far they have been misleading,” says Buolamwini.

Yahoo! News

Prof. Sinan Aral speaks with Yahoo! Finance Live host Julie Hyman about President Biden’s executive order on artificial intelligence regulation. “This is big, it's bold, it's broad,” says Aral. “It has a number of provisions. It has provisions for safety, for privacy, for equity, for workers, for competition and innovation, and leadership abroad. And it really targets those foundation models, the big AI companies in terms of their safety and security standards.”

The Boston Globe

Joy Buolamwini PhD '22 writes for The Boston Globe about her experience uncovering bias in artificial intelligence through her academic and professional career. “I critique AI from a place of having been enamored with its promise, as an engineer more eager to work with machines than with people at times, as an aspiring academic turned into an accidental advocate, and also as an artist awakened to the power of the personal when addressing the seemingly technical,” writes Buolamwini. “The option to say no, the option to halt a project, the option to admit to the creation of dangerous and harmful though well-intentioned tools must always be on the table.”

The Washington Post

Graduate student Shayne Longpre speaks with Washington Post reporter Nitasha Tiku about the ethical and legal implications surrounding language model datasets. Longpre says “the lack of proper documentation is a community-wide problem that stems from modern machine-learning practices.”

Higher Ed Spotlight

As MIT’s fall semester was starting, President Sally Kornbluth spoke with Ben Wildavsky, host of the Higher Ed Spotlight podcast, about the importance of incorporating the humanities in STEM education and the necessity of breaking down silos between disciplines to tackle pressing issues like AI and climate change. “Part of the importance of us educating our students is they’re going to be out there in the world deploying these technologies. They’ve got to understand the implications of what they’re doing,” says Kornbluth. “Our students will find themselves in positions where they’re going to have to make decisions as to whether these technologies that were conceived for good are deployed in ways that are not beneficial to society. And we want to give them a context in which to make those decisions.” 

Axios

Researchers from MIT and elsewhere have developed a transparency index used to assess 10 key AI foundation models, reports Ryan Heath for Axios. Heath writes that the researchers emphasized that “unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.”

TechCrunch

TechCrunch reporter Kyle Wiggers spotlights the SmartEM project by researchers from MIT and Harvard, which is aimed at enhancing lab work using “a computer vision system and ML control system inside a scanning electron microscope” to examine a specimen intelligently. “It can avoid areas of low importance, focus on interesting or clear ones, and do smart labeling of the resulting image as well,” writes Wiggers.

The World

Research scientist Nataliya Kosmyna speaks with The World host Chris Harland-Dunaway about the science behind using non-invasive brain scans and artificial intelligence to understand people’s different thoughts and mental images.  

Forbes

Tom Davenport, a visiting scholar at the MIT Initiative on the Digital Economy, writes for Forbes about how organizations are approaching generative AI. “If organizations are to succeed with generative AI, they need to increase the focus on data preparation for it, which is a primary prerequisite for success,” writes Davenport.