What does the future hold for generative AI?
Rodney Brooks, co-founder of iRobot, kicks off an MIT symposium on the promise and potential pitfalls of increasingly powerful AI tools like ChatGPT.
The Nano Summit highlights nanoscale research across multiple disciplines at MIT.
Human Guided Exploration (HuGE) enables AI agents to learn quickly with some help from humans, even if the humans make mistakes.
Twelve teams of students and postdocs across the MIT community presented innovative startup ideas with potential for real-world impact.
Jörn Dunkel and Surya Ganguli ’98, MNG ’98 receive Science Polymath awards; Josh Tenenbaum is named AI2050 Senior Fellow.
The Graduate Student Coaching Program teaches students the “coaching mindset” to help them reach their personal and professional goals.
MIT CSAIL researchers innovate with synthetic imagery to train AI, paving the way for more efficient and bias-reduced machine learning.
Seed projects and posters represent a wide range of labs working on technologies, therapeutic strategies, and fundamental research to advance understanding of age-related neurodegenerative disease.
Computer vision enables contact-free 3D printing, letting engineers print with high-performance materials they couldn’t use before.
A pivotal talk led postdoc Kristina Monakhova to develop smart, computational cameras and microscopes for intelligent systems.
The lifelong athlete, pilot, aviation enthusiast, and educator taught at the Institute for 40 years.
How do powerful generative AI systems like ChatGPT work, and what makes them different from other types of artificial intelligence?
MIT CSAIL researchers combine AI and electron microscopy to expedite detailed brain network mapping, aiming to enhance connectomics research and clinical pathology.
Ten years after the founding of the undergraduate research program, its alumni reflect on the unexpected gifts of their experiences.
By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.