AI to help researchers see the bigger picture in cell biology
By providing holistic information on a cell, an AI-driven method could help scientists better understand disease mechanisms and plan experiments.
Strahinja Janjusevic brings an international perspective and US Naval Academy education to his graduate research in the MIT Technology and Policy Program.
By minimizing the need to drive around looking for a parking spot, this technique can save drivers up to 35 minutes — and give them a realistic estimate of total travel time.
The context of long-term conversations can cause an LLM to begin mirroring the user’s viewpoints, possibly reducing accuracy or creating a virtual echo chamber.
Removing just a tiny fraction of the crowdsourced data that informs online ranking platforms can significantly change the results.
New research detects hidden evidence of mistaken correlations — and provides a method to improve accuracy.
While the growing energy demands of AI are worrying, some techniques can also help make power grids cleaner and more efficient.
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
The technique can help scientists in economics, public health, and other fields understand whether to trust the results of their experiments.
Using a versatile problem-solving framework, researchers show how early relapse in lymphoma patients influences their chances of survival.
This new technique enables LLMs to dynamically adjust the amount of computation they use for reasoning, based on the difficulty of the question.
With insect-like speed and agility, the tiny robot could someday aid in search-and-rescue missions.
MIT CSAIL and LIDS researchers developed a mathematically grounded system that lets soft robots deform, adapt, and interact with people and objects, without violating safety limits.
Large language models can learn to mistakenly link certain sentence patterns with specific topics — and may then repeat these patterns instead of reasoning.
MIT researchers developed a way to identify the smallest dataset that guarantees optimal solutions to complex problems.