
Brain circuitry findings could shape computer design

Guosong Liu, a neuroscientist at the Picower Center for Learning and Memory at MIT, reports new information on neuron design and function in the March 7 issue of Nature Neuroscience that he says could lead to new directions in how computers are made.

Computers keep getting faster, yet they still lack any form of human intelligence. A computer may beat us at balancing a checkbook or dominating a chessboard, but it still cannot easily drive a car or carry on a conversation.

Computers also lag in raw processing power: even the most powerful machines are dwarfed by the brain's 100 billion cells. But their biggest deficit may be that they are designed without knowledge of how the brain itself computes.

While computers process information using a binary system of zeros and ones, the neuron, Liu discovered, communicates its electrical signals in trinary, using not only zeros and ones but also minus ones. This allows additional interactions to occur during processing: two signals can add together or cancel each other out, and different pieces of information can link up or try to override one another.
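
One way to picture the difference from binary is the minimal Python sketch below. The encoding of signals as -1, 0, and +1 integers and the function name combine are assumptions made for illustration, not details reported in the study.

    # Illustrative sketch: each signal is -1 (inhibitory), 0 (silent),
    # or +1 (excitatory), so inputs can cancel as well as add.
    def combine(signals):
        return sum(signals)

    print(combine([+1, +1]))   # 2: two excitatory signals add together
    print(combine([+1, -1]))   # 0: excitation and inhibition cancel out
    print(combine([0, -1]))    # -1: inhibition overrides a silent input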

One reason the brain might need the extra complexity of another computation component is that it has the ability to ignore information when necessary; for instance, if you are concentrating on something, you can ignore your surroundings. "Computers don't ignore information," Liu said. "This is an evolutionary advantage that's unique to the brain."

Liu, associate professor of brain and cognitive sciences, said an important element of how brain circuits work involves wiring the correct positive, or "excitatory," wires to the correct negative, or "inhibitory," wires. His work demonstrates that brain cells contain many individual processing modules, each of which collects a set number of excitatory and inhibitory inputs. When the two types of inputs are correctly connected, powerful processing can occur at each module.
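
A rough way to picture such a module is sketched below in Python. The class name, the slot counts, and the assumption that a module simply reports excitation minus inhibition are all illustrative, not figures from the study.

    # Hypothetical module that collects a set number of excitatory (+)
    # and inhibitory (-) inputs and reports their net sum.
    class Module:
        def __init__(self, n_excitatory, n_inhibitory):
            self.n_exc = n_excitatory   # excitatory input slots
            self.n_inh = n_inhibitory   # inhibitory input slots

        def process(self, exc_inputs, inh_inputs):
            # Pool only as many inputs as the module has slots for.
            excitation = sum(exc_inputs[:self.n_exc])
            inhibition = sum(inh_inputs[:self.n_inh])
            return excitation - inhibition   # net signal passed onward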

This work provides the first experimental evidence for a theory put forward more than 20 years ago by MIT neuroscientist Tomaso Poggio, the Eugene McDermott Professor in the Brain Sciences, who proposed that neurons use an excitatory/inhibitory scheme to process information.

By demonstrating the existence of tiny excitation/inhibition modules within brain cells, the work also addresses a huge question in neuroscience: What is the brain's transistor, or fundamental processing unit? For many years, neuroscientists believed that this basic unit of computing was the cell itself, which collects and processes signals from other cells. By showing that each cell is built from hundreds of tiny modules, each of which computes independently, Liu's work adds to a growing view that the heart of computation might be something even smaller than the cell.

Once all the modules have completed their processing, they funnel signals to the cell body, where all of the signals are integrated and passed on. "With cells composed of so many smaller computational parts, the complexity attributed to the nervous system begins to make more sense," Liu said.
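
Continuing the hypothetical Module sketch above, the integration step amounts to the cell body summing whatever each module reports:

    # The cell body integrates the outputs of all its modules.
    def cell_output(modules, exc_inputs, inh_inputs):
        return sum(m.process(exc_inputs, inh_inputs) for m in modules)

    # Example: a cell built from three identical modules.
    cell = [Module(4, 2) for _ in range(3)]
    print(cell_output(cell, [1, 1, 0, 1], [1, 0]))   # 6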

Liu found that these microprocessors automatically form all along the surface of the cell as the brain develops. The modules also have their own built-in intelligence that seems to allow them to accommodate defects in the wiring or electrical storms in the circuitry: if any of the connections break, new ones automatically form to replace the old ones. If the positive, "excitatory" connections are overloading, new negative, "inhibitory" connections quickly form to balance out the signaling, immediately restoring the capacity to transmit information.
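
That balancing act might be caricatured as follows, again as a hedged sketch: the tolerance threshold and the one-connection-at-a-time rule are invented for illustration, not measurements from the study.

    # When excitatory drive exceeds what inhibition can offset, new
    # inhibitory connections form until balance is restored.
    def rebalance(excitation, inhibition, tolerance=1):
        while excitation - inhibition > tolerance:
            inhibition += 1   # a new inhibitory connection forms
        return inhibition

    print(rebalance(excitation=7, inhibition=2))   # 6: four new connections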

The discovery of this balancing act, which occurs repeatedly all over the cell, provides new insight into the mechanisms by which our neural circuits adapt to changing conditions.

This work is funded by the National Institutes of Health and the RIKEN-MIT Neuroscience Research Center.

A version of this article appeared in MIT Tech Talk on March 10, 2004.
