Or: What the Future Holds, In 1000 Words or Less
Whenever I am privy to discussions about artificial intelligence, I am reminded of the words of a comedian I saw on television once: “You know, I always thought that the brain was the most fascinating organ in the human body. But then I realized: Look what’s telling me that…”
I’ve been thinking about this a lot since attending Cyberfest, a week-long “birthday party” for HAL, certainly the best-known character in the legendary film 2001: A Space Odyssey (monoliths don’t count as characters), and probably the best-known computer in the world. Taking its cue from Arthur C. Clarke’s novel, which placed HAL’s birth on January 12, 1997 in Urbana-Champaign, the University of Illinois organized a week of activities that spanned the disciplines in celebration of the fictional event. Not surprisingly, most participants focused on HAL’s breakdown, and on why thinking machines are really scary. Some attempted to draw parallels between HAL’s apparent malfunction and certain human psychological conditions. Others, enthusiastically embracing the idea of AI, struggled with the reasons why HAL’s non-fiction counterparts don’t already exist.
That someday something resembling HAL will exist seemed a foregone conclusion to all. Approaches varied. Dr. Stephen Wolfram, Beckman fellow and creator of Mathematica, believes that AI goals remain frustratingly out of reach not because of some ineffable and uniquely human quality of “intelligence,” but because of an equally inexplicable assumption: that in order for a machine to think, it must think as we do. Citing recent breakthroughs in cloning as examples, he theorizes that advances in AI, as in other areas of science long plagued by unanswered fundamental questions, will come from radically new approaches to their resolution.
Others, including Ray Kurzweil, Murray Campbell of IBM’s Deep Blue team, and NEC’s David Kuck, disagree, insisting that a greater understanding of how the human mind works and learns is necessary for the creation of an artificial intelligence. Some even foresee an eventual merging of creator and creation: Kuck envisions implants designed to enhance certain physical abilities becoming popular, populating suburbs everywhere with “bionic men and women” and blurring the line between the biological and the artificial; Kurzweil (sharing a vision with MIT researcher Marvin Minsky) imagines a future in which personalities are housed in metal casings not unlike the one housing this word processor, leaving their flesh-and-blood packaging behind.
Hans Moravec explored similar territory in his 1988 book Mind Children, in which he predicted that machines will evolve beyond their creators. While describing his current work on what I imagine to be an armada of little robot Merry Maids, Moravec made it clear during his turn at the podium that the intervening years have not altered his opinion, pointing out that the raw computing power required to match the processing capabilities of the brain will soon be available. Nothing indicates that the upward climb won’t continue.
This prompted some to ask: “Why would we let it continue?” This is an interesting question, I think, and links up somewhere with the idea that there will be one big computer – call it HAL or Skynet or Colossus – that will wake up and decide we don’t measure up. I think that our demands for more convenience, faster connections and more free time will ensure that advances in processing continue, and those advances will in turn require more from our machines. Moravec’s house-cleaning drones, for example, will be designed to learn from their mistakes, and possibly from others’ mistakes through information exchange. What other things might they learn and tell one another about as they become more sophisticated? I don’t think there will be one monstrous machine tormenting us, but rather many little ones surprising us.
As I sit and write this I remember sitting in the darkened Virginia Theater in Champaign and being struck suddenly by how child-like HAL seemed to me. Was he really testing Dave and Frank, or was he trying to tell them his secret when he asked if the mission hadn’t seemed irregular to them? Was he, as Roger Ebert asserted, the only “human” character in the film, or was he merely not as adept at deceit as his more mature companions? When we do create things we can all agree are intelligent, will we expect too much of them too soon, lacking any compassion for our little prodigy progeny, while always expecting the worst from them?
Clarke joked, when asked about the discrepancy between the dates the movie and the book give for HAL’s “birth” (the movie gives the year as 1992), that “they would never send a nine-year-old computer on a mission like that.” It’s interesting to ponder what his words would mean if we did find that these little silicon entities still needed to be raised: they would face the ironic probability that their bodies had become outmoded well before their minds reached maturity. What a human fate.
As interesting as anything currently on the horizon from the world of science is our reaction to having caught sight of it. As we have come closer to the AI promised land, our fear of arriving has transformed the dream of the Electric Grandmother into the nightmare of the Terminator. Reality will likely incorporate both, and neither. And, as much as we howl that we are unique, we will surely share something with these strange offspring. They will, after all, be in some way a part of us – either literally or figuratively. If we can let ourselves recognize this, we may begin to put our fears to rest.
All of this is based, of course, on the musings of a woman madly missing her own children, trapped in a room full of strangers. Yet it really does seem that in this case a little anthropomorphizing wouldn’t hurt – but look what’s telling you that.
[Originally published in February 1997 by Panorama Inc., and reprinted with permission in the June 1997 edition of Incommunicado: the e-zine.]