Topic

Ethics

Displaying 1 - 15 of 95 news clips related to this topic.

Fast Company

Writing for Fast Company, Senior Lecturer Guadalupe Hayes-Mota SB '08, MS '16, MBA '16, explores new approaches to improving the drug development process and more effectively connecting scientific discoveries with treatments. “Transforming scientific discoveries into better treatments is a complex challenge, but it is also an opportunity to rethink our approach to healthcare innovation,” writes Hayes-Mota. “Through cross-disciplinary collaboration, leveraging AI, focusing on patient-centered innovation, and rethinking R&D, we can create a future where scientific breakthroughs translate into meaningful, accessible treatments for all.”

TechCrunch

Researchers at MIT have found that commercially available AI models “were more likely to recommend calling police when shown Ring videos captured in minority communities,” reports Kyle Wiggers for TechCrunch. “The study also found that, when analyzing footage from majority-white neighborhoods, the models were less likely to describe scenes using terms like ‘casing the property’ or ‘burglary tools,’” writes Wiggers.

Forbes

Prof. Devavrat Shah is interviewed by Forbes’ Gary Drenik on balancing AI innovation with ethical considerations, noting that governance helps ensure the benefits of AI are fairly distributed across society. “Our responsibility is to harness [AI’s] potential while safeguarding against its risks,” Shah explains. “This approach to promoting responsible AI development hinges on governance rooted in collaboration, transparency and actionable guidance.”

VOA News

Prof. David Rand speaks with VOA News about the potential impact of adding watermarks to AI-generated materials. “My concern is if you label as AI-generated, everything that’s AI-generated regardless of whether it’s misleading or not, people essentially are going to stop really paying attention to it,” says Rand.

Politico

Researchers at MIT and elsewhere have found that while AI systems could help doctors come to the right diagnosis more often, the diagnostic gains aren’t always distributed evenly, with more improvements tied to patients with lighter skin, report Daniel Payne, Erin Schumaker, and Ruth Reader for Politico. “AI could be a powerful tool to improve care and potentially offer providers a check on their blindspots,” they write. “But that doesn’t mean AI will reduce bias. In fact, the study suggests, AI could cause greater disparities in care.”

Higher Ed Spotlight

As MIT’s fall semester was starting, President Sally Kornbluth spoke with Ben Wildavsky, host of the Higher Ed Spotlight podcast, about the importance of incorporating the humanities in STEM education and the necessity of breaking down silos between disciplines to tackle pressing issues like AI and climate change. “Part of the importance of us educating our students is they’re going to be out there in the world deploying these technologies. They’ve got to understand the implications of what they’re doing,” says Kornbluth. “Our students will find themselves in positions where they’re going to have to make decisions as to whether these technologies that were conceived for good are deployed in ways that are not beneficial to society. And we want to give them a context in which to make those decisions.” 

GBH

Prof. Eric Klopfer, co-director of the RAISE initiative (Responsible AI for Social Empowerment in Education), speaks with GBH reporter Diane Adame about the importance of providing students guidance on navigating artificial intelligence systems. “I think it's really important for kids to be aware that these things exist now, because whether it's in school or out of school, they are part of systems where AI is present,” says Klopfer. “Many humans are biased. And so the [AI] systems express those same biases that they've seen online and the data that they've collected from humans.”

The Boston Globe

President Sally Kornbluth joined The Boston Globe’s Shirley Leung on her Say More podcast to discuss the future of AI, ethics in science, and climate change. “I view [the climate crisis] as an existential issue to the extent that if we don’t take action there, all of the many, many other things that we’re working on, not that they’ll be irrelevant, but they’ll pale in comparison,” Kornbluth says.

Freakonomics Radio

Prof. Simon Johnson speaks with Freakonomics guest host Adam Davidson about his new book, economic history, and why new technologies impact people differently. “What do people creating technology, deploying technology — what exactly are they seeking to achieve? If they’re seeking to replace people, then that’s what they’re going to be doing,” says Johnson. “But if they’re seeking to make people individually more productive, more creative, enable them to design and carry out new tasks — let’s push the vision more in that direction. And that’s a naturally more inclusive version of the market economy. And I think we will get better outcomes for more people.”

Forbes

Lecturer Guadalupe Hayes-Mota SB '08, MS '16, MBA '16 writes for Forbes about the ethical framework needed to mitigate risks in artificial intelligence. “[A]s we continue to unlock AI's capabilities, it is crucial to address the ethical challenges that emerge,” writes Hayes-Mota. “By establishing a comprehensive ethical framework grounded in beneficence, non-maleficence, autonomy, justice and responsibility, we can ensure that AI's deployment in life sciences aligns with humanity's best interests.”

The Conversation

Writing for The Conversation, postdoc Ziv Epstein SM ’19, PhD ’23, graduate student Robert Mahari, and Jessica Fjeld of Harvard Law School explore how the use of generative AI will impact creative work. “The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression,” the authors note.

Financial Times

“Power and Progress,” a new book by Institute Prof. Daron Acemoglu and Prof. Simon Johnson, has been named one of the best new books on economics by the Financial Times. “The authors’ nuanced take on technological development provides insights on how we can ensure the coming AI revolution leads to widespread benefits for the many, not just the tech bros,” writes Tej Parikh.

New York Times

Writing for The New York Times, Institute Prof. Daron Acemoglu and Prof. Simon Johnson make the case that “rather than machine intelligence, what we need is ‘machine usefulness,’ which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies.”

The New York Times

New York Times reporter Natasha Singer spotlights the Day of AI, an MIT RAISE program aimed at teaching K-12 students about AI. “Because AI is such a powerful new technology, in order for it to work well in society, it really needs some rules,” said MIT President Sally Kornbluth. Prof. Cynthia Breazeal, MIT’s dean of digital learning, added: “We want students to be informed, responsible users and informed, responsible designers of these technologies.”

GBH

Institute Prof. Daron Acemoglu and Prof. Aleksander Mądry join GBH’s Greater Boston to explore how AI can be regulated and safely integrated into our lives. “With much of our society driven by informational spaces — in particular social media and online media in general — AI and, in particular, generative AI accelerates a lot of problems like misinformation, spam, spear phishing and blackmail,” Mądry explains. Acemoglu adds that he feels AI reforms should be approached “more broadly so that AI researchers actually work in using these technologies in human-friendly ways, trying to make humans more empowered and more productive.”