Researchers enhance peripheral vision in AI models
By enabling models to see the world more like humans do, the work could help improve driver safety and shed light on human behavior.
The team used machine learning to analyze satellite and roadside images of areas where small farms predominate and agricultural data are sparse.
The ambient light sensors that adjust smart devices’ screen brightness can be exploited by hackers to capture images of touch interactions such as swiping and tapping.
PhD students interning with the MIT-IBM Watson AI Lab look to improve natural language usage.
A multimodal system uses models trained on language, vision, and action data to help robots develop and execute plans for household, construction, and manufacturing tasks.
This new method draws on 200-year-old geometric foundations to give artists control over the appearance of animated characters.
“Minimum viewing time” benchmark gauges image recognition complexity for AI systems by measuring the time needed for accurate human identification.
Justin Solomon applies modern geometric techniques to solve problems in computer vision, machine learning, statistics, and beyond.
MIT CSAIL researchers innovate with synthetic imagery to train AI, paving the way for more efficient and bias-reduced machine learning.
Computer vision enables contact-free 3D printing, letting engineers print with high-performance materials they couldn’t use before.
By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.
AI models that prioritize similarity falter when asked to design something completely new.
Amid the race to make AI bigger and better, Lincoln Laboratory is developing ways to reduce power, train efficiently, and make energy use transparent.
Inspired by physics, a new generative model, PFGM++, outperforms diffusion models in image generation.
Researchers have multiple AI models collaborate and debate to refine their reasoning, improving LLM performance while increasing accountability and factual accuracy.