MIT tool visualizes and edits “physically impossible” objects
By visualizing Escher-like optical illusions in 2.5 dimensions, the “Meschers” tool could help scientists understand physics-defying shapes and spark new designs.