A new way to bring personal items to mixed reality
“InteRecon” enables users to capture items in a mobile app and reconstruct their interactive features in mixed reality. The tool could assist in education, medical environments, museums, and more.
Professor of media technology honored for research in human-computer interaction that is considered both fundamental and influential.
The Tactile Vega-Lite system, developed at MIT CSAIL, streamlines the tactile chart design process; could help educators efficiently create these graphics and aid designers in making precise changes.
“Xstrings” method enables users to produce cable-driven objects, automatically assembling bionic robots, sculptures, and dynamic fashion designs.
The system uses reconfigurable electromechanical building blocks to create structural electronics.
New research could allow a person to correct a robot’s actions in real-time, using the kind of feedback they’d give another human.
The consortium will bring researchers and industry together to focus on impact.
Projects from MIT course 4.043/4.044 (Interaction Intelligence) were presented at NeurIPS, showing how AI transforms creativity, education, and interaction in unexpected ways.
Rapid development and deployment of powerful generative AI models come with environmental consequences, including increased electricity demand and water consumption.
Inspired by the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.
The neuroscientist turned entrepreneur will focus on advancing the intersection of behavioral science and AI across MIT.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.
MIT CSAIL director and EECS professor named a co-recipient of the honor for her robotics research, which has expanded our understanding of what a robot can be.