David Pesetsky, the Ferrari P. Ward Professor of Modern Languages and Linguistics, was recently named a fellow of the American Association for the Advancement of Science (AAAS). Pesetsky, who is also a Margaret MacVicar Faculty Fellow at MIT, was chosen for “his innovative and critical research on syntactic theory, connecting it to issues in phonology, morphology, reading, language acquisition and neuroscience, and for his contributions to linguistic education at many levels.” He is the author of two groundbreaking books on syntactic theory as well as numerous articles that have contributed to the understanding of “Universal Grammar.” Pesetsky spoke about linguistics recently with writer Kathryn O'Neill.
Q. What does the study of linguistics tell us about how we think and learn?
A. It teaches us that there are laws governing how we think and laws governing how we learn. They’re so much a part of us that even though they characterize every second of our existence, they are not really accessible to consciousness — you hardly know they’re there. So it’s a scientific activity to try to discover what they are. These laws are hidden, the way most interesting ones are.
Q. Why is the idea of Universal Grammar controversial?
A. I think to some extent it’s because you have to dig a bit to discover what languages have in common. This is a young science, so there isn’t as much agreement about precisely how to dig — compared to other sciences that have been going for 500 years or more. I also think some people hear the term “Universal Grammar,” which is perhaps an unfortunate phrase, and instead of trying to understand it as a technical term, they just treat it as a grammar that is universal, which is not quite what the concept is.
Q. How would you define Universal Grammar?
A. We say Universal Grammar because [MIT Professor Noam] Chomsky decided to use that term in a famous and influential book written in the early 1960s called Aspects of the Theory of Syntax — and he didn’t invent the term. He was reviving and adapting a term that French grammarian philosophers from the 17th century had used; their idea was that there was a logical basis to the structure of all languages — a proposal somewhat different from modern linguistics, though clearly related. One of the things Chomsky was doing in Aspects was looking for historical roots for the ideas he was coming up with as a result of his own technical work. So he borrowed the term from Enlightenment philosophy, and we’re stuck with it now, but it doesn’t quite mean what it sounds like.
Q. So what does it mean?
A. It’s whatever underlies our ability to acquire and to use the languages that we as a species do in fact acquire (and use) — in particular those aspects of this ability that seem to be particular to language. Obviously there are a lot of preconditions to speaking English, like having a mouth that’s a particular shape, so that’s not part of Universal Grammar. But the mental properties that seem to underlie the laws that govern the structure of sentences — those fall under Universal Grammar to the extent that they are common to English and all the other languages of the world.
Q. I hear you have been working on the relationship between language and music. How did you end up working in that area, and what have you learned?
A. I spend a lot of my time working with the fantastic students we have here, and much of what I learn, I learn from working with them. This music work is an example — it’s a joint project with a graduate student who just finished his PhD last year (in phonetics, in fact): Jonah Katz. Our starting point was a brilliant 1983 book that was itself a collaboration — between a linguist, Ray Jackendoff PhD ’69, and a composer, Fred Lerdahl. This book asked the following question: if we were to look at the structure of music with the eyes of a linguist, what might we learn about music?
Now what they came up with was a model that benefited from the insights of linguistics, but didn’t look very much like language. So, if they were right, we have these two complicated cognitive systems in our head, one of them that underlies our ability to use language and this other system that underlies music — both fascinating, but quite different. What Jonah and I tried to argue is that if you ask their questions in a slightly different way, their own results might actually support the opposite general conclusion. Music and language might actually be one and the same cognitive system. That’s the claim we’re making, and of course it’s pretty controversial.
After all, there are big and obvious differences between music and language. No language uses pitch, for example, in anything like the way music does. And no musical system has a lexicon (learned pairings of sound and meaning — e.g. words) the way language does. Those are big differences, and they are real. But our thought is that once you acknowledge that the basic building blocks of language and music are different (words vs. pitches, for example), maybe the ways in which these building blocks get combined and recombined are the same in the two systems. A slogan we’ve used to summarize the proposal is “same recipe, different ingredients.”
It’s really a bold paper in that sense, and that’s what excites us.