Improving the reliability of circuits for quantum computers
A new technique helps scientists measure a phenomenon that can cause quantum circuits to perform differently than expected, increasing the error in computations.