Reasoning skills of large language models are often overestimated
New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, raising questions about whether their performance reflects genuine reasoning or reliance on memorization.
LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.
The method uses language-based inputs instead of costly visual data to direct a robot through a multistep navigation task.
A new approach could streamline virtual training processes or aid clinicians in reviewing diagnostic videos.
A new “consensus game,” developed by MIT CSAIL researchers, elevates AI’s text comprehension and generation skills.
Associate Professor Jonathan Ragan-Kelley optimizes how computer graphics and images are processed for the hardware of today and tomorrow.
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.
For the first time, researchers use a combination of MEG and fMRI to map the spatio-temporal human brain dynamics of a visual image being recognized.
Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.
MIT Center for Transportation and Logistics Director Matthias Winkenbach uses AI to make vehicle routing more efficient and adaptable for unexpected events.
A CSAIL study highlights why it is so challenging to program a quantum computer to run a quantum algorithm, and offers a conceptual model for a more user-friendly quantum computer.
The MIT Schwarzman College of Computing building will form a new cluster of connectivity across a spectrum of disciplines in computing and artificial intelligence.
Researchers create a curiosity-driven machine-learning model that finds a wider variety of prompts for training a chatbot to avoid hateful or harmful output.
Researchers developed a simple yet effective solution for a puzzling problem that can worsen the performance of large language models such as ChatGPT.
The ambient light sensors that adjust smart devices’ screen brightness can be exploited by hackers to capture images of touch interactions such as swiping and tapping.