Algorithms

Displaying 1 - 15 of 547 news clips related to this topic.

Politico

MIT researchers have found that “when an AI tool for radiologists produced a wrong answer, doctors were more likely to come to the wrong conclusion in their diagnoses,” report Daniel Payne, Carmen Paun, Ruth Reader and Erin Schumaker for Politico. “The study explored the findings of 140 radiologists using AI to make diagnoses based on chest X-rays,” they write. “How AI affected care wasn’t dependent on the doctors’ levels of experience, specialty or performance. And lower-performing radiologists didn’t benefit more from AI assistance than their peers.”

TechCrunch

Birago Jones SM '12 and Karthik Dinakar SM '12, PhD '17 co-founded Pienso, an AI platform that “lets users build and deploy models without having to write code,” reports Kyle Wiggers for TechCrunch. “Pienso’s flexible, no-code interface allows teams to train models directly using their own company’s data,” says Jones. “This alleviates the privacy concerns of using … models, and also is more accurate, capturing the nuances of each individual company.”

The Daily Beast

MIT researchers have developed a new technique “that could allow most large language models (LLMs) like ChatGPT to retain memory and boost performance,” reports Tony Ho Tran for the Daily Beast. “The process is called StreamingLLM and it allows for chatbots to perform optimally even after a conversation goes on for more than 4 million words,” explains Tran.

The Boston Globe

Researchers at MIT and elsewhere have estimated that the use of algorithms in the public sector may provide “real value to the public while also saving the government money,” reports Kevin Lewis for The Boston Globe. The researchers point to algorithms that target workplace safety inspections, decide whether to refer patients for medical testing, and suggest whether to assign remedial coursework to college students.

The Boston Globe

Researchers from MIT and elsewhere have developed an AI model that is capable of identifying 3 ½ times more people who are at high risk of developing pancreatic cancer than current standards, reports Felice J. Freyer for The Boston Globe. “This work has the potential to enlarge the group of pancreatic cancer patients who can benefit from screening from 10 percent to 35 percent,” explains Freyer. “The group hopes its model will eventually help detect risk of other hard-to-find cancers, like ovarian.”

Forbes

Researchers at MIT have discovered how a new computational imaging algorithm can capture user interactions through ambient light sensors commonly found in smartphones, reports Davey Winder for Forbes. “By combining the smartphone display screen, an active component, with the ambient light sensor, which is passive, the researchers realized that capturing images in front of that screen was possible without using the device camera,” explains Winder.

The Washington Post

MIT researchers are working to uncover new ways to avoid contrails and minimize their impact on global warming, reports Nicolas Rivero for The Washington Post. “Whether [the contrail impact is] exactly 20 percent or 30 percent or 50 percent, I don’t think anybody knows that answer, really,” says research scientist Florian Allroggen. “But it also doesn’t really matter. It’s a big contributor and we need to worry about it.”

The Boston Globe

Boston Globe reporters Aaron Pressman and Jon Chesto spotlight Liquid AI, a new startup founded by MIT researchers that is developing an AI system that relies on neural-network models that are “much simpler and require significantly less computer power to train and operate” than generative AI systems. “You need a fraction of the cost of developing generative AI, and the carbon footprint is much lower,” explains Liquid AI CEO Ramin Hasani, a research affiliate at CSAIL. “You get the same capabilities with a much smaller representation.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, and research affiliates Ramin Hasani, Mathias Lechner, and Alexander Amini have co-founded Liquid AI, a startup building a general-purpose AI system powered by a liquid neural network, reports Kyle Wiggers for TechCrunch. “Accountability and safety of large AI models is of paramount importance,” says Hasani. “Liquid AI offers more capital efficient, reliable, explainable and capable machine learning models for both domain-specific and generative AI applications.”

Nature

MIT researchers have “used an algorithm to sort through millions of genomes to find new, rare types of CRISPR systems that could eventually be adapted into genome-editing tools,” writes Sara Reardon for Nature. “We are just amazed at the diversity of CRISPR systems,” says Prof. Feng Zhang. “Doing this analysis kind of allows us to kill two birds with one stone: both study biology and also potentially find useful things.”

Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

Forbes

Forbes reporter Rob Toews spotlights Prof. Daniela Rus, director of CSAIL, and research affiliate Ramin Hasani and their work with liquid neural networks. “The ‘liquid’ in the name refers to the fact that the model’s weights are probabilistic rather than constant, allowing them to vary fluidly depending on the inputs the model is exposed to,” writes Toews.

Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.