
Topic

Machine learning


Displaying 226 - 240 of 747 news clips related to this topic.

Scientific American

MIT researchers have found that user bias can drive interactions with AI chatbots, reports Nick Hilden for Scientific American.  “When people think that the AI is caring, they become more positive toward it,” graduate student Pat Pataranutaporn explains. “This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well.”

The Boston Globe

Prof. Thomas Kochan and Prof. Thomas Malone speak with Boston Globe reporter Hiawatha Bray about the recent deal between the Writers Guild of America and the Alliance of Motion Picture and Television Producers, which will “protect movie screenwriters from losing their jobs to computers that could use artificial intelligence to generate screenplays.” Kochan notes that when it comes to AI, “where workers don’t have a voice through a union, most companies are not engaging their workers on these issues, and the workers have no rights, no redress.”

Fortune

Researchers from MIT and elsewhere have identified some of the benefits and disadvantages of generative AI when used for specific tasks, report Paige McGlauflin and Joseph Abrams for Fortune. “The findings show a 40% performance boost for consultants using the chatbot for the creative product project, compared to the control group that did not use ChatGPT, but a 23% decline in performance when used for business problem-solving,” explain McGlauflin and Abrams.

The Wall Street Journal

A study by researchers from MIT and Harvard examined the potential impact of the use of AI technologies on the field of radiology, reports Laura Landro for The Wall Street Journal. “Both AI models and radiologists have their own unique strengths and areas for improvement,” says Prof. Nikhil Agarwal.

GBH

Prof. Eric Klopfer, co-director of the RAISE initiative (Responsible AI for Social Empowerment in Education), speaks with GBH reporter Diane Adame about the importance of providing students guidance on navigating artificial intelligence systems. “I think it's really important for kids to be aware that these things exist now, because whether it's in school or out of school, they are part of systems where AI is present,” says Klopfer. “Many humans are biased. And so the [AI] systems express those same biases that they've seen online and the data that they've collected from humans.”

Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

The Ojo-Yoshida Report

Research scientist Bryan Reimer speaks with The Ojo-Yoshida Report host Junko Yoshida about the future of the autonomous vehicle industry. “We cannot let the finances drive here,” explains Reimer. “We need to manage the finances to let society win over the long haul.”

Forbes

Forbes reporter Rob Toews spotlights Prof. Daniela Rus, director of CSAIL, and research affiliate Ramin Hasani for their work on liquid neural networks. “The ‘liquid’ in the name refers to the fact that the model’s weights are probabilistic rather than constant, allowing them to vary fluidly depending on the inputs the model is exposed to,” writes Toews.

Financial Times

Researchers at MIT and elsewhere have used artificial intelligence to develop a new antibiotic to combat Acinetobacter baumannii, a bacterium known for developing resistance to antibiotics, reports Hannah Kuchler for the Financial Times. “It took just an hour and a half — a long lunch — for the AI to serve up a potential new antibiotic, an offering to a world contending with the rise of so-called superbugs: bacteria, viruses, fungi and parasites that have mutated and no longer respond to the drugs we have available,” writes Kuchler.

Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”

Fast Company

Principal Research Scientist Kalyan Veeramachaneni speaks with Fast Company reporter Sam Becker about his work developing the Synthetic Data Vault, a tool for creating synthetic data sets. “Fake data is randomly generated,” says Veeramachaneni. “While synthetic data is trying to create data from a machine learning model that looks very realistic.”

TechCrunch

Researchers from MIT and Harvard have explored astrocytes, a type of brain cell, from a computational perspective and developed a mathematical model that shows how they can be used to build a biological transformer, reports Kyle Wiggers for TechCrunch. “The brain is far superior to even the best artificial neural networks that we have developed, but we don’t really know exactly how the brain works,” says research staff member Dmitry Krotov. “There is scientific value in thinking about connections between biological hardware and large-scale artificial intelligence networks. This is neuroscience for AI and AI for neuroscience.”

The Wall Street Journal

Prof. Max Tegmark speaks with The Wall Street Journal reporter Emily Bobrow about the importance of companies and governments working together to mitigate the risks of new AI technologies. Tegmark “recommends the creation of something like a Food and Drug Administration for AI, which would force companies to prove their products are safe before releasing them to the public,” writes Bobrow.

The Guardian

Prof. D. Fox Harrell writes for The Guardian about the importance of ensuring AI systems are designed to “reflect the ethically positive culture we truly want.” Harrell emphasizes: “We need to be aware of, and thoughtfully design, the cultural values that AI is based on. With care, we can build systems based on multiple worldviews – and address key ethical issues in design such as transparency and intelligibility.”