
Why is the piano player wearing mittens?

When Zachary Smith graduated from Brigham Young University with degrees in music and electrical engineering, he looked around for a graduate research program that would bring the two fields together. He found the Harvard-MIT Division of Health Sciences and Technology's Speech and Hearing Bioscience and Technology Program.

He also found Ray Goldsworthy, a fellow graduate student in the program. Goldsworthy has had a cochlear implant since losing his hearing after contracting spinal meningitis at age 12. Although Goldsworthy understands nearly all of what is said to him in a quiet area, noise and music present problems.

"[Cochlear implant] users do not have the same auditory resolution that normal hearing listeners have," Goldsworthy said. "A fitting metaphor is that CI users are listening to a piano where the player has mittens on. Tonal discrimination and harmonic relation are usually greatly diminished.

"I listen mostly to Eastern music where the concept of melody plays far more of an important role compared to harmony," he said. "Generally, I find music more enjoyable if there is only one or two instruments, since multiple instruments become jumbled in the poorly resolved realm of CI sound."

When Smith came to the Health Sciences and Technology (HST) program three years ago, he began quizzing Goldsworthy about his experiences with music. That led Smith to team up with Bertrand Delgutte, principal research scientist in MIT's Research Laboratory of Electronics (RLE) Auditory Physiology group, and Andrew J. Oxenham, research scientist in RLE's Sensory Communication group. Their work may lead to changes in cochlear implants that would provide a much richer musical experience.

At MIT and the Massachusetts Eye and Ear Infirmary, Smith is beginning work on a thesis that investigates how cochlear implants can most effectively encode the small timing differences of sounds arriving at the two ears. For normal-hearing listeners, these subtle interaural timing differences make all the difference when trying to pick out a single voice in a noisy room.
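To make the idea concrete, here is a minimal sketch (not from Smith's thesis) of how an interaural time difference can be estimated by cross-correlating the signals at the two ears. The sample rate, the noise-burst "voice," and the 20-sample delay are all invented for illustration.

# Hypothetical sketch, not from the article: estimating the interaural
# time difference (ITD) between two ear signals by cross-correlation.
# Sample rate, noise-burst source, and 20-sample delay are all invented.
import numpy as np

fs = 44100                                   # sample rate in Hz
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 20)       # 50 ms noise burst as the "voice"

delay = 20                                   # about 450 microseconds at 44.1 kHz
left = source
right = np.concatenate([np.zeros(delay), source[:-delay]])

# Cross-correlate the ear signals; the lag of the peak is the ITD estimate.
corr = np.correlate(right, left, mode="full")
lag = np.argmax(corr) - (len(left) - 1)
print(f"Estimated ITD: {lag / fs * 1e6:.0f} microseconds")

The peak lag of the correlation corresponds to the arrival-time difference between the ears, a cue on the order of hundreds of microseconds that a single-sided implant cannot convey.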

"Speech in noise is difficult for CI users because they only have stimulation on one side," Goldsworthy said. "I have absolutely no sense of sound direction. So when I'm in a noisy environment, all of the sounds become spatially jumbled, whereas a normal hearing person can distinguish voices depending on their incoming direction."

Goldsworthy's research, like Smith's, looks at how multiple microphones can be used to enhance speech in noise and give the binaural advantage back to CI users.
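As a rough illustration of that idea, the following sketch (again hypothetical, not the actual research code) implements a two-microphone delay-and-sum beamformer, one standard way to reinforce sound arriving from a known direction; every signal and the four-sample alignment delay are invented.

# Hypothetical sketch, not the actual research code: a two-microphone
# delay-and-sum beamformer, one standard way to enhance speech arriving
# from a known direction. Every signal and parameter here is invented.
import numpy as np

fs = 16000
n = fs // 2                                  # half a second of audio
rng = np.random.default_rng(1)

speech = rng.standard_normal(n)              # stand-in for the target voice
noise1 = rng.standard_normal(n)              # independent noise at mic 1
noise2 = rng.standard_normal(n)              # independent noise at mic 2

d = 4                                        # inter-mic delay (samples) for the target direction
mic1 = speech + noise1
mic2 = np.concatenate([np.zeros(d), speech[:-d]]) + noise2

# Advance mic2 by d samples so the speech lines up, then average:
# speech adds coherently while the uncorrelated noise partially cancels.
aligned = np.concatenate([mic2[d:], np.zeros(d)])
output = 0.5 * (mic1 + aligned)

# Compare SNR before and after (ignoring the last d edge samples).
resid = output[: n - d] - speech[: n - d]
snr_in = 10 * np.log10(np.var(speech) / np.var(noise1))
snr_out = 10 * np.log10(np.var(speech[: n - d]) / np.var(resid))
print(f"SNR gain: {snr_out - snr_in:.1f} dB")  # roughly +3 dB

Because the target speech adds coherently while uncorrelated noise averages down, even this two-microphone version yields roughly a 3 dB signal-to-noise gain toward the steered direction.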

A version of this article appeared in MIT Tech Talk on March 13, 2002.
