Reasoning skills of large language models are often overestimated
New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, raising questions about whether they truly reason or merely rely on memorization.
More accurate uncertainty estimates could help users decide how and when to use machine-learning models in the real world.
This new tool offers an easier way for people to analyze complex tabular data.
In a retrospective talk spanning multiple decades of work, Professor Al Oppenheim looked back on the birth of digital signal processing and shared his thoughts on the future of the field.
This tiny, biocompatible sensor may overcome one of the biggest hurdles that prevent the devices from being completely implanted.
Twelve faculty members have been granted tenure in six units across MIT’s School of Engineering.
These models, which can predict a patient’s race, gender, and age, seem to use those traits as shortcuts when making medical diagnoses.
Known for building connections between the social sciences, data science, and computation, the political science professor will lead IDSS into its next chapter.
This novel circuit architecture cancels out unwanted signals at the earliest opportunity.
VEIR, founded by alumnus Tim Heidel, has developed technology that can move more power over long distances, with the same footprint as traditional lines.
MosaicML, co-founded by an MIT alumnus and a professor, made deep-learning models faster and more efficient. Its acquisition by Databricks broadened that mission.
The dedicated teacher and academic leader transformed research in computer architectures, parallel computing, and digital design, enabling faster and more efficient computation.
The program focused on AI in health care, drawing on Takeda’s R&D experience in drug development and MIT’s deep expertise in AI.
Graduate engineering program is No. 1 in the nation; MIT Sloan is No. 5.
LLMs trained primarily on text can generate complex visual concepts as code, refining their output through self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.