MIT-Takeda Program wraps up with 16 publications, a patent, and nearly two dozen projects completed
The program focused on AI in health care, drawing on Takeda’s R&D experience in drug development and MIT’s deep expertise in AI.
This technique could lead to safer autonomous vehicles, more efficient AR/VR headsets, or faster warehouse robots.
LLMs trained primarily on text can render complex visual concepts as code, correcting their own errors along the way. Researchers used the resulting illustrations to train a computer vision system to recognize real photos without ever training it on one.
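A minimal sketch of that generate-and-self-correct loop, with a hypothetical `query_llm` stub standing in for any real LLM API (here it returns a canned drawing script so the example runs end to end):

```python
# Sketch: ask a model for drawing code, run it, and feed any error
# back so it can self-correct. `query_llm` is a hypothetical stand-in.
import os, subprocess, sys, tempfile

def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned script here.
    return (
        "import matplotlib\n"
        "matplotlib.use('Agg')\n"
        "import matplotlib.pyplot as plt\n"
        "fig, ax = plt.subplots()\n"
        "ax.add_patch(plt.Circle((0.5, 0.5), 0.3))\n"
        "plt.savefig('out.png')\n"
    )

def render_concept(concept: str, max_attempts: int = 3) -> str | None:
    """Ask for code that draws `concept`; on failure, append the error
    message so the model can revise its own program."""
    prompt = f"Write a matplotlib script that draws {concept} and saves it to out.png."
    for _ in range(max_attempts):
        code = query_llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path], capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0 and os.path.exists("out.png"):
            return "out.png"   # one synthetic training image
        prompt += f"\nThe last attempt failed with:\n{result.stderr}\nFix the code."
    return None

print(render_concept("a red circle"))
```

Images collected this way can then be fed to an ordinary image classifier as its training set.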
The SPARROW algorithm automatically identifies the best molecules to test as potential new medicines, given the vast number of factors affecting each choice.
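SPARROW's actual formulation is more involved; purely as a hypothetical illustration of the trade-off it automates, here is a greedy selection that balances predicted utility against synthesis cost under a budget (all fields and numbers invented):

```python
# Toy molecule triage, not SPARROW's optimization: rank candidates by
# expected utility per unit synthesis cost, then fill a fixed budget.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    predicted_utility: float   # e.g., model-predicted binding score
    synthesis_cost: float      # estimated cost of the synthesis route
    success_prob: float        # estimated chance the synthesis works

def select_molecules(candidates: list[Candidate], budget: float) -> list[Candidate]:
    """Greedy selection by expected utility per unit cost."""
    ranked = sorted(
        candidates,
        key=lambda c: c.predicted_utility * c.success_prob / c.synthesis_cost,
        reverse=True,
    )
    chosen, spent = [], 0.0
    for c in ranked:
        if spent + c.synthesis_cost <= budget:
            chosen.append(c)
            spent += c.synthesis_cost
    return chosen

pool = [
    Candidate("mol-A", 0.9, 120.0, 0.7),
    Candidate("mol-B", 0.6, 40.0, 0.9),
    Candidate("mol-C", 0.8, 200.0, 0.5),
]
print([c.name for c in select_molecules(pool, budget=160.0)])
```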
Combining natural language and programming, the method enables LLMs to solve numerical, analytical, and language-based tasks transparently.
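One way to picture the transparency claim: the model emits a runnable program, and the program itself is the inspectable reasoning. A minimal sketch, with the generated program hard-coded here rather than produced by an LLM:

```python
# The model's output is a Python program whose execution yields the
# answer, so every intermediate step can be read and checked.
generated_program = """
# Task: how many weekdays fall between two dates?
from datetime import date, timedelta

start, end = date(2024, 1, 1), date(2024, 1, 31)
answer = sum(
    1
    for i in range((end - start).days + 1)
    if (start + timedelta(days=i)).weekday() < 5
)
"""

namespace: dict = {}
exec(generated_program, namespace)   # run the model's program
print(namespace["answer"])           # -> 23; the code is the rationale
```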
Co-hosted by the McGovern Institute, MIT Open Learning, and others, the symposium stressed the role of emerging technologies in advancing the understanding of mental health and neurological conditions.
The method uses language-based inputs instead of costly visual data to direct a robot through a multistep navigation task.
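A rough sketch of such a language-only control loop, assuming hypothetical `caption` and `ask_llm` stand-ins for a captioning model and an LLM:

```python
# Replace visual inputs with text: a captioner describes the current
# view, and an LLM picks the next action from a small discrete set.
ACTIONS = ["move_forward", "turn_left", "turn_right", "stop"]

def caption(observation) -> str:
    """Stand-in for an off-the-shelf image captioner."""
    return "A hallway with an open door on the left."

def ask_llm(prompt: str) -> str:
    """Stand-in for an LLM call; expected to return one of ACTIONS."""
    return "turn_left"

def next_action(observation, instruction: str) -> str:
    scene = caption(observation)
    prompt = (
        f"Instruction: {instruction}\n"
        f"Current view: {scene}\n"
        f"Choose one action from {ACTIONS}."
    )
    choice = ask_llm(prompt)
    return choice if choice in ACTIONS else "stop"

print(next_action(None, "Go through the door and stop at the desk."))
```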
A new downscaling method leverages machine learning to speed up climate model simulations at finer resolutions, making them usable on local levels.
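As a toy illustration of learned downscaling (not the paper's architecture), a small convolutional network can map coarse simulation grids to finer ones; the shapes below are invented:

```python
# A tiny PyTorch model that upsamples a coarse climate field
# (e.g., a 32x32 temperature grid) to a finer 128x128 grid.
import torch
import torch.nn as nn

class Downscaler(nn.Module):
    def __init__(self, scale: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),   # fine-resolution field
        )

    def forward(self, coarse):
        return self.net(coarse)

model = Downscaler()
coarse = torch.randn(8, 1, 32, 32)   # batch of coarse simulations
fine = model(coarse)                 # (8, 1, 128, 128)
print(fine.shape)
```

In practice such a model would be fit against high-resolution reference runs, then applied to cheap coarse simulations to produce locally usable output.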
DenseAV, developed at MIT, learns to parse and understand the meaning of language just by watching videos of people talking, with potential applications in multimedia search, language learning, and robotics.
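DenseAV's exact objective is more detailed, but the core idea of pairing sound with sight can be sketched as a standard contrastive loss between audio and video embeddings (features stubbed with random tensors):

```python
# Illustrative contrastive objective, not DenseAV's actual loss: pull
# each clip's audio embedding toward its own video embedding and away
# from the other clips in the batch.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(audio_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired audio/video embeddings."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(len(a))     # matching pairs on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

audio = torch.randn(16, 256)   # stand-in audio features for 16 clips
video = torch.randn(16, 256)   # stand-in video features, same clips
print(clip_contrastive_loss(audio, video).item())
```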
The technique characterizes a material’s electronic properties 85 times faster than conventional methods.
In the new economics course 14.163 (Algorithms and Behavioral Science), students investigate the deployment of machine-learning tools and their potential to understand people, reduce bias, and improve society.
The startup Augmental allows users to operate phones and other devices using their tongue, mouth, and head gestures.
MIT CSAIL’s frugal deep-learning model infers the hidden physical properties of objects, then adapts to find the most stable grasps for robots in unstructured environments like homes and fulfillment centers.
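Purely as a hypothetical sketch of that two-step control flow (infer a hidden property, then pick a grasp), not the CSAIL model itself:

```python
# Invented example: estimate an object's center of mass from a few
# pokes, then rank grasp candidates by distance to that estimate.
import numpy as np

def estimate_center_of_mass(poke_points: np.ndarray, tilts: np.ndarray) -> np.ndarray:
    """Weight poke locations by how little the object tilted there
    (less tilt ~ closer to the mass). A stand-in for a learned model."""
    weights = 1.0 / (tilts + 1e-6)
    return (poke_points * weights[:, None]).sum(0) / weights.sum()

def best_grasp(grasp_candidates: np.ndarray, com: np.ndarray) -> np.ndarray:
    """Prefer the grasp nearest the estimated center of mass."""
    return grasp_candidates[np.argmin(np.linalg.norm(grasp_candidates - com, axis=1))]

pokes = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])
tilts = np.array([0.30, 0.10, 0.25])                      # observed responses
grasps = np.array([[0.05, 0.02], [0.12, 0.01], [0.20, 0.03]])
print(best_grasp(grasps, estimate_center_of_mass(pokes, tilts)))
```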
With generative AI models, researchers combined robotics data from different sources to help robots learn better.
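The combination step can be pictured, very loosely, as blending action predictions from policies trained on different data sources; the stub policies and weights below are invented, not the researchers' method:

```python
# Toy blend of per-source policies trained on heterogeneous robot data.
import numpy as np

def sim_policy(obs):    return np.array([0.2, 0.0])   # trained on simulation data
def teleop_policy(obs): return np.array([0.1, 0.3])   # trained on human teleoperation

def combined_action(obs, weights=(0.5, 0.5)):
    """Weighted blend of per-source action predictions."""
    preds = [sim_policy(obs), teleop_policy(obs)]
    return sum(w * p for w, p in zip(weights, preds))

print(combined_action(obs=None))   # -> [0.15, 0.15]
```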
A new approach could streamline virtual training processes or aid clinicians in reviewing diagnostic videos.