"The great scientific breakthroughs in artificial intelligence are still ahead of us," Professor Patrick Winston predicted in his opening remarks to Rethinking Artificial Intelligence, a corporate briefing held at MIT on September 24-25.
"Assuming that the science of AI is a 100-year enterprise that began in 1950, 2000 will be the halfway point," said Dr. Winston, the Ford Professor of Engineering in the Department of Electrical Engineering and Computer Science. "Molecular biology reached its halfway point when Watson and Crick discovered DNA. That discovery shifted everything -- it changed the world. We're waiting for the DNA equivalent in artificial intelligence."
About 300 senior technical management and corporate strategists from industries as diverse as aerospace and advertising attended the three-part seminar, which focused on how AI-based systems have evolved, where their impact is felt and what AI means for corporate strategy and revenue. The briefing was held at Kresge Auditorium and was jointly sponsored by the Artificial Intelligence Laboratory and the Industrial Liaison Program.
The goal of AI in the business world has changed over the past decade, moving away from the old idea of "replacing humans" to the higher-impact idea of "making people smarter." AI will become an "essential element in mainstream business systems" of the future, said Professor Winston, who was director of the AI Lab from 1975 through June 1997.
According to Professor Winston, the science behind AI is ready for a paradigm shift directed at understanding the contributions of linguistic, visual and motor faculties to human intelligence.
In describing some of the lessons learned by researchers since the eighties, he quoted Esther Dyson, editor of the influential trade publication Release 1.0, who predicted that AI would not become commercially significant until it became strategically important, "like raisins embedded in raisin bread." Professor Winston went on to describe how AI was doing just that.
"We finally realized that nobody cares about saving money by using cutting-edge technology to replace expensive experts," he said. Instead, AI is now being used more often to boost the capabilities of information gathering and retrieval systems, to improve the user/computer interface, and dramatically alter -- and save costs -- in medicine and medical education.
While some of the business applications described by the speakers were arcane (database mining, for example), other applications had an element of the fantastic. The seminar attendees were treated to videotaped demonstrations of running robots that could perform gymnastics, insect-like robots that could find land mines under water and surgical simulations that let medical students practice their craft without using human subjects.
Not all the technology is fully operational yet: the Star Trek-like computer called HAL II, created in the lab of Dr. Howard Shrobe, associate director of the AI Lab, is expected to debut in January. Other examples are up and running, like Professor Eric Grimson's X-Ray Vision for Surgeons, which is currently being used at Brigham and Women's Hospital in Boston.
"Science fiction? No, it's science fact," said Professor Grimson of EECS. "These systems are laying the foundation for a revolution. They will change the way surgery is done in this country."
Artificial intelligence could also change the way children play. Professor Rodney Brooks, director of the AI Lab, showed a videotaped demonstration of a child's doll designed with AI features. Sensors make it appear to interact with humans -- it cries when held upside down, laughs when tickled under the chin and coos when fed a bottle -- all with the appropriate facial expressions.
"Perception and motor action are the things that evolution spent the most time on. I believe that if we get those down, the rest will follow," said Professor Brooks, who also showed videotapes of robots that could explore the surface of Mars or seek out and destroy land mines. These robots are able to detect mines made from different types of materials, and they can differentiate those mines from natural objects by means of acoustic sensors in their feet.
"In this case, robots are replacing people, which goes against mine and Patrick's maxim," he explained, "but they're replacing people who might get blown up."
A version of this article appeared in MIT Tech Talk on October 1, 1997.