
Topic: Artificial intelligence


Axios

Axios reporter Alison Snyder writes about a new study by MIT researchers finding that preconceived notions about AI chatbots can shape people's experiences with them. Prof. Pattie Maes explains that the technology's developers "always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned, but we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don't just depend on the AI and the quality of the AI. It depends on how the human responds to the AI."

Scientific American

MIT researchers have found that user bias can drive interactions with AI chatbots, reports Nick Hilden for Scientific American. "When people think that the AI is caring, they become more positive toward it," graduate student Pat Pataranutaporn explains. "This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well."

The Boston Globe

Prof. Thomas Kochan and Prof. Thomas Malone speak with Boston Globe reporter Hiawatha Bray about the recent deal between the Writers Guild of America and the Alliance of Motion Picture and Television Producers, which will “protect movie screenwriters from losing their jobs to computers that could use artificial intelligence to generate screenplays.” Kochan notes that when it comes to AI, “where workers don’t have a voice through a union, most companies are not engaging their workers on these issues, and the workers have no rights, no redress.”

Fortune

Researchers from MIT and elsewhere have identified some of the benefits and disadvantages of generative AI when used for specific tasks, report Paige McGlauflin and Joseph Abrams for Fortune. "The findings show a 40% performance boost for consultants using the chatbot for the creative product project, compared to the control group that did not use ChatGPT, but a 23% decline in performance when used for business problem-solving," explain McGlauflin and Abrams.

The Wall Street Journal

A study by researchers from MIT and Harvard examined the potential impact of the use of AI technologies on the field of radiology, reports Laura Landro for The Wall Street Journal. “Both AI models and radiologists have their own unique strengths and areas for improvement,” says Prof. Nikhil Agarwal.

GBH

Prof. Eric Klopfer, co-director of the RAISE initiative (Responsible AI for Social Empowerment in Education), speaks with GBH reporter Diane Adame about the importance of providing students guidance on navigating artificial intelligence systems. “I think it's really important for kids to be aware that these things exist now, because whether it's in school or out of school, they are part of systems where AI is present,” says Klopfer. “Many humans are biased. And so the [AI] systems express those same biases that they've seen online and the data that they've collected from humans.”

Forbes

Maria Telleria ’08, SM ’10, PhD ’13 speaks with Forbes contributor Stuart Anderson about her experience immigrating to the U.S. as a teenager, earning her PhD at MIT, and co-founding a company. "I don’t think I would have had these opportunities if I could not have come to the United States," said Telleria. "I think it helped me grow by being exposed to two cultures. When you have had to think in two different ways, I think it makes you better understand other people and why they’re different. Coming to America has been an amazing opportunity."

The Boston Globe

Writing for The Boston Globe, MIT Prof. Carlo Ratti and Harvard Prof. Antoine Picon examine AI and the future of cities, noting that their research has shown that "once trained, visual AI is shockingly accurate at predicting property values, crime rates, and even public health outcomes — just by analyzing photos." They add: "Tireless, penetrating artificial eyes are coming to our streets, promising to show us things we have never seen before. They will be incredible tools to guide us — but only if we keep our own eyes open."

The Economist

Prof. Regina Barzilay speaks with The Economist about how AI can help advance medicine in areas such as uncovering new drugs. With AI, “the type of questions that we will be asking will be very different from what we’re asking today,” says Barzilay.

The Boston Globe

President Sally Kornbluth joined The Boston Globe’s Shirley Leung on her Say More podcast to discuss the future of AI, ethics in science, and climate change. “I view [the climate crisis] as an existential issue to the extent that if we don’t take action there, all of the many, many other things that we’re working on, not that they’ll be irrelevant, but they’ll pale in comparison,” Kornbluth says.

Time

Prof. Max Tegmark has been named to TIME’s list of the 100 most influential people in AI. “Our best course of action is to follow biotech’s example, and ensure that potentially dangerous products need to be approved by AI-experts at an AI [version of the] FDA before they can be launched,” says Tegmark of how government should regulate the development of AI. “More than 60% of Americans support such an approach.”

Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

Forbes

Forbes reporter Rob Toews spotlights Prof. Daniela Rus, director of CSAIL, and research affiliate Ramin Hasani and their work with liquid neural networks. “The ‘liquid’ in the name refers to the fact that the model’s weights are probabilistic rather than constant, allowing them to vary fluidly depending on the inputs the model is exposed to,” writes Toews.

The Boston Globe

Prof. Tod Machover speaks with Boston Globe reporter A.Z. Madonna about the restaging of his opera "VALIS" at MIT, which features an AI-assisted musical instrument developed by Nina Masuelli ’23. "In all my career, I’ve never seen anything change as fast as AI is changing right now, period," said Machover. "So to figure out how to steer it towards something productive and useful is a really important question right now."