Advancing urban tree monitoring with AI-powered digital twins
The Tree-D Fusion system integrates generative AI and genus-conditioned algorithms to create precise, simulation-ready models of 600,000 existing urban trees across North America.
MIT CSAIL researchers used AI-generated images to train a robot dog in parkour, without real-world data. Their LucidSim system demonstrates generative AI's potential for creating robotics training data.
Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo will start postgraduate studies at Oxford next fall.
A new design tool uses UV and RGB lights to change the colors and textures of everyday objects. The system could enable surfaces to display dynamic patterns, such as health data and fashion designs.
The new Tayebati Postdoctoral Fellowship Program will support leading postdocs to bring cutting-edge AI to bear on research in scientific discovery or music.
Inspired by large language models, researchers develop a training technique that pools diverse data to teach robots new skills.
“MouthIO” is an in-mouth device that users can digitally design and 3D print, with integrated sensors and actuators that capture health data and let wearers interact with a computer or phone.
By allowing users to clearly see data referenced by a large language model, this tool speeds manual validation to help users spot AI errors.
A new method can train a neural network to sort through corrupted data while anticipating next steps. It can make flexible plans for robots, generate high-quality video, and help AI agents navigate digital environments.
By using a 3D printer like an iron, researchers can precisely control the color, shade, and texture of fabricated objects, using only one material.
Associate Professor Julian Shun develops high-performance algorithms and frameworks for large-scale graph processing.
MIT CSAIL researchers created an AI-powered method for low-discrepancy sampling, which uniformly distributes data points to boost simulation accuracy.
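For readers unfamiliar with the term, low-discrepancy sampling simply means choosing sample points so they cover a space more evenly than independent random draws do. The sketch below is a rough illustration of that idea, not the CSAIL team's method: it uses SciPy's quasi-Monte Carlo utilities to compare ordinary pseudo-random points against a Sobol sequence, and the point count and the toy π-estimation task are assumptions made for the example.

```python
# Illustrative sketch only: compares plain pseudo-random sampling with a
# low-discrepancy (Sobol) sequence using SciPy's quasi-Monte Carlo module.
import numpy as np
from scipy.stats import qmc

n = 1024  # number of sample points (a power of 2, which Sobol sequences prefer)

rng = np.random.default_rng(0)
random_pts = rng.random((n, 2))                               # ordinary uniform samples
sobol_pts = qmc.Sobol(d=2, scramble=True, seed=0).random(n)   # low-discrepancy samples

# Lower discrepancy means the points cover the unit square more uniformly.
print("discrepancy (random):", qmc.discrepancy(random_pts))
print("discrepancy (Sobol): ", qmc.discrepancy(sobol_pts))

# Toy simulation: estimate pi from the fraction of points inside the quarter-circle.
def estimate_pi(pts):
    return 4.0 * np.mean(np.sum(pts**2, axis=1) <= 1.0)

print("pi estimate (random):", estimate_pi(random_pts))
print("pi estimate (Sobol): ", estimate_pi(sobol_pts))
```

With the same number of samples, the more uniform Sobol points typically yield a lower discrepancy score and a steadier estimate, which is the effect the sampling work aims to exploit in simulations.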
New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.
“Co-LLM” algorithm helps a general-purpose AI model collaborate with an expert large language model by combining the best parts of both answers, leading to more factual responses.
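As a rough illustration of token-level collaboration between two models (not the Co-LLM implementation itself), the sketch below mixes the next-token distributions of a stand-in "general" model and a stand-in "expert" model using a per-token gating weight; the vocabulary, the probability tables, and the mixing rule are all assumptions made for the example.

```python
# Toy sketch of token-level collaboration between a "general" model and an
# "expert" model, loosely in the spirit of the idea described above.
# Everything here (vocabulary, distributions, gate values) is a placeholder.
import numpy as np

vocab = ["the", "drug", "treats", "hypertension", "weather"]

def general_model(step):
    # Placeholder next-token distributions from a general-purpose model.
    table = [
        [0.60, 0.05, 0.15, 0.05, 0.15],  # confident about the function word
        [0.10, 0.25, 0.20, 0.15, 0.30],  # unsure about the domain term
        [0.10, 0.10, 0.60, 0.10, 0.10],  # confident about the verb
        [0.15, 0.10, 0.10, 0.25, 0.40],  # unsure again
    ]
    return np.array(table[step])

def expert_model(step):
    # Placeholder next-token distributions from a domain-expert model.
    table = [
        [0.40, 0.20, 0.15, 0.15, 0.10],
        [0.05, 0.70, 0.10, 0.10, 0.05],  # confidently picks "drug"
        [0.10, 0.10, 0.55, 0.20, 0.05],
        [0.05, 0.10, 0.10, 0.70, 0.05],  # confidently picks "hypertension"
    ]
    return np.array(table[step])

def gate(step):
    # Placeholder deferral weight (learned per token in the real system):
    # lean on the expert where the general model is uncertain.
    return [0.1, 0.9, 0.2, 0.9][step]

tokens = []
for step in range(4):
    w = gate(step)
    p = w * expert_model(step) + (1.0 - w) * general_model(step)
    tokens.append(vocab[int(np.argmax(p))])

print(" ".join(tokens))  # -> "the drug treats hypertension"
```

The point of the toy is only that the combined output takes routine tokens from the general model and domain-specific tokens from the expert, which is the kind of division of labor the blurb describes.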
“ScribblePrompt” is an interactive AI framework that can efficiently highlight anatomical structures across different medical scans, helping medical workers delineate regions of interest and abnormalities.