Looking for a specific action in a video? This AI-based method can find it for you
A new approach could streamline virtual training processes or aid clinicians in reviewing diagnostic videos.
The “Alchemist” system adjusts the material attributes of specific objects within images, which could help modify video game models to fit different environments, fine-tune VFX, and diversify robotic training.
Fifteen new faculty members join six of the school’s academic departments.
The 10 Design Fellows are MIT graduate students working at the intersection of design and multiple disciplines across the Institute.
A new technique that can automatically classify phases of physical systems could help scientists investigate novel materials.
A new “consensus game,” developed by MIT CSAIL researchers, elevates AI’s text comprehension and generation skills.
A new algorithm learns to squish, bend, or stretch a robot’s entire body to accomplish diverse tasks like avoiding obstacles or retrieving items.
MIT CSAIL and Project CETI researchers reveal complex communication patterns in sperm whales, deepening our understanding of animal language systems.
The conversation in Kresge Auditorium touched on the promise and perils of the rapidly evolving technology.
Associate Professor Jonathan Ragan-Kelley optimizes how computer graphics and images are processed for the hardware of today and tomorrow.
Together, the Hasso Plattner Institute and MIT are working toward novel solutions to the world’s problems as part of the Designing for Sustainability research program.
TorNet, a public artificial intelligence dataset, could help models reveal when and why tornadoes form, improving forecasters' ability to issue warnings.
At MIT’s Festival of Learning 2024, panelists stressed the importance of developing critical thinking skills while leveraging technologies like generative AI.
MIT professors Roger Levy, Tracy Slatyer, and Martin Wainwright are appointed to the 2024 class of “trail-blazing fellows.”
For the first time, researchers use a combination of MEG and fMRI to map the spatio-temporal human brain dynamics of a visual image being recognized.