Remembering Professor Emerita Jeanne Shapiro Bamberger, a pioneer in music education
The former department chair was an early innovator in the use of artificial intelligence to both study and influence how children learn music.
Media Lab PhD student Kimaya Lecamwasam researches how music can shape well-being.
Brain imaging suggests people with musical training may be better than others at filtering out distracting sounds.
A groundbreaking MIT concert featuring electronic and computer-generated music was part of the 2025 International Computer Music Conference.
From the classroom to expanded research opportunities, students in MIT Music Technology use design to push the frontier of digital instruments and software for human expression and empowerment.
Widely known for his Synthetic Performer, Csound language, and work on the MPEG-4 audio standard, Vercoe positioned MIT as a hub for music technology through leadership roles with the Media Lab and Music and Theater Arts Section.
Jay Keyser’s new book, “Play It Again, Sam,” makes the case that repeated motifs enhance our experience of artistic works.
Offerings included talks, concerts, and interactive installations.
The professor of history expanded MIT’s arts infrastructure, championed its arts faculty, and created new opportunities for students and faculty alike.
MIT researchers lay out the design principles behind the TeleAbsence vision, showing how it could help people cope with loss and plan for how they might be remembered.
An exuberant performance included five premieres by MIT composers, a fitting tribute to open the new home of MIT Music and launch the MIT arts festival Artfinity.
Connected by the MIT Human Insight Collaborative, Lecturer Mi-Eun Kim and Research Scientist Praneeth Namburi are working to better understand musical expression and skill development.
Events connected the MIT community through exhibitions, performances, interactive installations, and more.
Inspired by the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.