
Topic: Algorithms


Displaying 1–15 of 544 news clips related to this topic.

The Daily Beast

MIT researchers have developed a new technique “that could allow most large language models (LLMs) like ChatGPT to retain memory and boost performance,” reports Tony Ho Tran for the Daily Beast. “The process is called StreamingLLM and it allows for chatbots to perform optimally even after a conversation goes on for more than 4 million words,” explains Tran.

The Boston Globe

Researchers at MIT and elsewhere have estimated that the use of algorithms in public domains may provide “real value to the public while also saving the government money,” reports Kevin Lewis for The Boston Globe. The researchers cite algorithms “that target workplace safety inspections, decide whether to refer patients for medical testing, and suggest whether to assign remedial coursework to college students” as examples of such public-domain uses.

The Boston Globe

Researchers from MIT and elsewhere have developed an AI model that can identify 3 ½ times as many people at high risk of developing pancreatic cancer as current screening standards, reports Felice J. Freyer for The Boston Globe. “This work has the potential to enlarge the group of pancreatic cancer patients who can benefit from screening from 10 percent to 35 percent,” explains Freyer. “The group hopes its model will eventually help detect risk of other hard-to-find cancers, like ovarian.”

Forbes

Researchers at MIT have demonstrated how a new computational imaging algorithm can capture user interactions through the ambient light sensors commonly found in smartphones, reports Davey Winder for Forbes. “By combining the smartphone display screen, an active component, with the ambient light sensor, which is passive, the researchers realized that capturing images in front of that screen was possible without using the device camera,” explains Winder.

The Washington Post

MIT researchers are working to uncover new ways to avoid contrails and minimize their impact on global warming, reports Nicolas Rivero for The Washington Post. “Whether [the contrail impact is] exactly 20 percent or 30 percent or 50 percent, I don’t think anybody knows that answer, really,” says research scientist Florian Allroggen. “But it also doesn’t really matter. It’s a big contributor and we need to worry about it.”

The Boston Globe

Boston Globe reporters Aaron Pressman and Jon Chesto spotlight Liquid AI, a new startup founded by MIT researchers that is developing an AI system built on neural-network models that are “much simpler and require significantly less computer power to train and operate” than generative AI systems. “You need a fraction of the cost of developing generative AI, and the carbon footprint is much lower,” explains Liquid AI CEO Ramin Hasani, a research affiliate at CSAIL. “You get the same capabilities with a much smaller representation.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, and research affiliates Ramin Hasani, Mathias Lechner, and Alexander Amini have co-founded Liquid AI, a startup building a general-purpose AI system powered by a liquid neural network, reports Kyle Wiggers for TechCrunch. “Accountability and safety of large AI models is of paramount importance,” says Hasani. “Liquid AI offers more capital efficient, reliable, explainable and capable machine learning models for both domain-specific and generative AI applications.”

Nature

MIT researchers have “used an algorithm to sort through millions of genomes to find new, rare types of CRISPR systems that could eventually be adapted into genome-editing tools,” writes Sara Reardon for Nature. “We are just amazed at the diversity of CRISPR systems,” says Prof. Feng Zhang. “Doing this analysis kind of allows us to kill two birds with one stone: both study biology and also potentially find useful things.”

Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

Forbes

Forbes reporter Rob Toews spotlights Prof. Daniela Rus, director of CSAIL, and research affiliate Ramin Hasani and their work with liquid neural networks. “The ‘liquid’ in the name refers to the fact that the model’s weights are probabilistic rather than constant, allowing them to vary fluidly depending on the inputs the model is exposed to,” writes Toews.

Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.

CNN

Researchers at MIT have developed “PhotoGuard,” a tool that can be used to protect images from AI manipulation, reports Catherine Thorbecke for CNN. The tool “puts an invisible ‘immunization’ over images that stops AI models from being able to manipulate the picture,” writes Thorbecke.

Forbes

At CSAIL’s Imagination in Action event, Prof. Stefanie Jegelka’s presentation provided insight into “the failures and successes of neural networks and explored some crucial context that can help engineers and other human observers to focus in on how learning is happening,” reports research affiliate John Werner for Forbes.

Forbes

Prof. Jacob Andreas explored the concept of language-guided program synthesis at CSAIL’s Imagination in Action event, reports research affiliate John Werner for Forbes. “Language is a tool,” said Andreas during his talk. “Not just for training models, but actually interpreting them and sometimes improving them directly, again, in domains, not just involving languages (or) inputs, but also these kinds of visual domains as well.”