Hard drives will evolve into soft hearts ... or not

CSAIL debate on machine consciousness ends in a draw
From left, Ray Kurzweil, Rodney Brooks and David Gelernter speak at 'Creativity: The mind, machines and mathematics' held at MIT Nov. 30.
Photos / Donna Coveney

The topic of the Nov. 30 debate--the limits of intelligent machines--might have been ripped from classic science fiction. The format echoed a presidential campaign slugfest. Nobody won a round, but the audience scored insight from two of the brightest minds in the field of artificial intelligence, Ray Kurzweil and David Gelernter.

Held to mark the 70th anniversary of Alan Turing's 1936 groundbreaking paper, "On Computable Numbers," the debate at MIT's Stata Center explored the possibility of human-like intelligence, emotional intuition and even consciousness in computers. Or: "Can we build super-intelligent machines or are we limited to building super-intelligent zombies," as moderator Rodney Brooks, director of the Computer Science and Artificial Intelligence Laboratory, put it.

Taking the position that machines will achieve a level of human intelligence was Kurzweil, a prodigious inventor, winner of the 1999 National Medal of Technology and author of "The Age of Intelligent Machines" and "The Age of Spiritual Machines."

Taking an opposing "anti-cognitivist" viewpoint was Gelernter, Yale professor of computer science, chief scientist at Mirror Worlds Technologies and contributing editor at the Weekly Standard.

A key point of contention was defining consciousness, or even whether it could be defined. "There's no consciousness detector that we can imagine creating … that doesn't have some philosophical assumptions built into it," Kurzweil said. But Gelernter insisted that "you can't possibly understand the human mind if you don't understand consciousness."

While Gelernter agreed with Kurzweil that emotional intelligence was a key component of human intelligence, he argued that building a conscious machine "out of software seems to be virtually impossible."

Software, by definition, can be peeled away and run on another computer platform, but "the mind cannot be ported to any other platform or even to an instance of the same platform," Gelernter said. Innovation may create "powerful unconscious intelligence but I can't see it creating a new node of consciousness. I can't even see where that node would be--floating in the air in someplace?"

Human mental states are privately circumscribed, hidden from analysis, unlike software code, he said. As for spirituality--which Gelernter defines as a "thirst for the living God"--"Can we build a robot with a physical need for a non-physical thing? Maybe, but don't count on it. And forget software."

"That's because we're thinking of software as it is today," Kurzweil countered. Not only is informational technology expanding exponentially, but research on the brain is yielding new information on brain chemistry and neural functions, he said.

Indeed, a brain that shuffles chemicals is not that different from a computer that shuffles symbols, Kurzweil said.

Consciousness is an "emergent property of a complex system. It's not dependent on substrates," he said. However, "there's no way … to measure the subjective experience of another entity. We assume that each other are conscious."

Certainly consciousness is an emergent property, Gelernter said; but one could run hugely complex programs, with billions of processes, and "I don't have any reason to believe that consciousness would emerge."

If true intelligence involves emotion, keep in mind that "you don't just think with your brain, you think with your body," Gelernter said. "We don't have the right to dismiss out of hand the role the chemical makeup of the brain plays in creating the emergent property of consciousness."

An audience member wondered if "everything" had consciousness and whether humans could learn to communicate with machines. Kurzweil responded that history was filled with examples of humans who didn't accept the consciousness of other human races and cited current debates over animal consciousness and hence animal rights.

Machines may be another matter, Gelernter said. "If we think we are communicating with a software-powered robot we are kidding ourselves because we use words in a fundamentally different way."

After the debate, Brooks said neither party had won, adding with diplomatic aplomb, "I disagreed with both of them."

To view a webcast of the debate and a lecture on Turing by University of Canterbury scholar Jack Copeland, go to

A version of this article appeared in MIT Tech Talk on December 6, 2006.
