It is often claimed that the Turing test for intelligence could be passed by a computer driven by a gigantic lookup table that would tell it what response to make. The most recent one I have come across was in a review by Donald Perlis (2005) of Hawkins and Blakeslee's (2004) On Intelligence (a book most charitably described as "shallow," by the way). Perlis says:
It seems to me that Turing is, in effect, taking the position that his Imitation Game is simply a prima facie test of intelligence, not a necessary and sufficient condition. That is, it may be a reasonable practical guide, if we find ourselves in need of making a decision on the matter, say for legal or other practical purposes, and in the absence of contravening evidence. What might such evidence be (that, despite passing the Test, a system is not intelligent)? One answer is obvious: the system turns out to have an enormous database of answers to everything, and simply performs a lookup whenever called upon to respond.
But, as I said, countless further examples might be adduced (not to mention when members of my own family hit me with this egregious claim). In a footnote, Perlis adds the inevitable qualification that "such a database would have to be large beyond any conceivable practicality, or even infinite...." Actually, Perlis is unusual in including the case of an infinite DB, which it seems to me should be ruled out on the grounds that Turing was clearly talking about physical machines, not mathematical ones. (See the note on infinite machines below.)
The "gigantic database" argument is an "in principle" argument, because it would obviously be impossible to actually put such a thing together. Some "in principle" arguments are so absurd that one feels compelled to protest. Example: All the air in this room might suddenly rush to one corner. "In principle" it could. We are justified in assuming it won't, for two reasons. One is that the probability of it happening is so low that our assumption will with virtual certainty never be contradicted. But another is that any scenario in which the improbable event happens will violate many other modeling assumptions as well. If we seriously want to think about the air compressing itself, then we can't really talk about "the air in this room" any more, because the time period over which the room, the observer's species, or even the planet they sit on, can be assumed to exist is much smaller than the time we would have to wait to have an even chance of seeing all the air molecules land in a corner of the room. But never mind all that. A worse problem is that to get to a part of the phase space that gives a zero probability to any molecule being outside the corner zone, we must model the set of molecules as classical objects, but what we really have is a wave function that yields observations of individual molecules under the right circumstances. Finding all the molecules in one corner of the room really means interacting with the wave function in such a way as to produce an observation that is the eigenfunction of an observable that assigns a narrow range of positions to all the molecules in the room. It's only a matter of faith that such an interaction exists.
I claim the Gigantic Database Machine (GDM) is similarly inconceivable. In order to build it, you would have to envision (virtually) every possible series of conversations that could happen to a character I will call "the robot" (the human talking to it being called "the judge". Occasionally we'll need to refer to the real, live person that the judge has available as the alternative choice when trying to decide who's who; we'll call that person "the human"). I say "virtually" because you don't have to envision everything the robot might say, just every way it might react to things the judge said to it. Obviously, you are allowed to bound the length of the overall conversation; the judge has to decide after seeing a finite amount of material, fixed in advance, or the test isn't fully specified. If we envision a conversation as a turn-taking game, then you don't have to construct the entire game tree, just a subtree with the property that every move by the opponent has a response and every move by the machine is followed by every conceivable remark the judge might make. To put it less mathematically, you don't have to implement both a set of cheerful reactions and a set of gloomy reactions. All you have to do is imagine every conversation a cheerful robot might ever take part in, and what its cheerful reaction would be at every step, or, if you prefer, do the same for a gloomy robot, or a bipolar robot, or a clinically detached one.
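The subtree requirement above can be sketched concretely. This is a toy illustration, not anything buildable: the alphabet of judge remarks and the "cheerful" reply function are made-up stand-ins, and a real judge can type any string at all.

```python
# A lookup-table "policy" need not contain the whole game tree, only a
# subtree in which every judge move gets exactly one robot reply, while
# every robot reply must be followed by *every* possible judge move.
# Toy alphabet of judge remarks; in reality this set is unbounded.

JUDGE_REMARKS = ["hello", "tell me a story", "goodbye"]

def build_policy_subtree(reply_fn, depth):
    """Map each judge remark to (robot reply, subtree for the next turn)."""
    if depth == 0:
        return {}
    return {remark: (reply_fn(remark),
                     build_policy_subtree(reply_fn, depth - 1))
            for remark in JUDGE_REMARKS}

# Even with only 3 possible remarks per turn, the table holds 3^d
# entries at depth d, so the total grows geometrically with depth.
cheerful = lambda remark: "How delightful! You said: " + remark
tree = build_policy_subtree(cheerful, 3)

def count(t):
    return len(t) + sum(count(sub) for _, sub in t.values())

print(count(tree))  # 3 + 9 + 27 = 39
```

Note that only one reply per judge remark is stored — the "cheerful robot" choice — yet the tree still explodes, because every branch of the judge's side must be anticipated.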
The table driving this robot would have to be very big. No conceivable process could create it, a point I'll return to below. But were it to be created, there would be no place to put it. It wouldn't fit anywhere. The judge is allowed to type any string at all when it's his or her turn to speak, and just storing all the possible strings would take roughly 10^L bytes, where L is the bound on the length of the conversation. If L is in the range of 1000, then it is safe to say the human race will never build a memory system that large. We can generously estimate the number of atoms in the universe to be 10^50, which isn't even a small dent in 10^1000, even if we store 10^3 bits per atom. Sometimes philosophers employ the device of assuming that some very improbable process could bring an object into existence. A sandstorm or supernova swirls by, and there sits something that looks and behaves exactly like an alligator or a U.S. $20 bill. (The bill wouldn't be genuine in spite of its physical properties; would the alligator?) This maneuver won't work for the GDM, because no physical event can bring into existence something bigger than the universe.
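The storage arithmetic can be checked directly with exact integer arithmetic, using the figures from the text (10^50 atoms, 10^3 bits per atom, L = 1000):

```python
# Storage needed just to index every possible judge utterance of length L,
# versus a generous estimate of the universe's total storage capacity.
L = 1000                      # bound on conversation length, in characters
possible_strings = 10 ** L    # ~10^1000 distinct strings to index
atoms = 10 ** 50              # generous estimate of atoms in the universe
bits_per_atom = 10 ** 3       # generous storage density
capacity = atoms * bits_per_atom   # 10^53 bits in all

# The shortfall is itself astronomical: a factor of 10^947.
shortfall = possible_strings // capacity
print(shortfall == 10 ** 947)   # True
```

Even granting every atom a thousand bits, the universe falls short by nearly a thousand orders of magnitude.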
Oh, well, the proposers of this idea will say: We don't care whether the GDM can exist for real. Let's suppose God creates it (possibly with the help of invisible elves) and keeps it in hyperspace somewhere. He then beams its inputs and outputs back and forth to our universe telepathically. God and the hyperspace elves have their work cut out for them, as outlined above, even if real estate is cheap in hyperspace. Suppose the judge asks the cheerful robot to tell it a story. After the first sentence the judge says, "No, not about a prince, about a frog." So the elves will have to imagine a large number of stories, given the possible ways the judge could intervene, or demand that you start again. And don't forget all the state information that must be encoded in the table. The judge might begin by saying, "My dog's name is Rover. Remember that, because I'm going to ask you about it later." Because the dog's name could be almost anything, the story and all its variants is going to be duplicated somewhere in the tree for every possible dog name. You might try to wiggle out of this by having the robot just say later, "Oh, I've forgotten the dog's name," but that only works in branches where the judge doesn't later remind the robot to think of it again and verify that it hasn't forgotten it. (Perhaps a refractory robot would be easier than a cheerful one, but if you make your task too easy — perhaps by following someone's (Weizenbaum's?) facetious suggestion to make the robot "autistic" and design it never to respond at all — the judge will too easily figure out that your robot is not the human.)
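The dog's-name problem can be made vivid with a toy sketch. A pure lookup table has no variables, so its "memory" must be encoded in the key itself — the entire transcript so far — and every downstream branch gets duplicated once per possible name. The names and story variants here are, of course, made up:

```python
# In a pure lookup table, the key is the whole conversation history.
# Every branch downstream of "My dog's name is X" must therefore be
# duplicated once for each possible value of X.

names = ["Rover", "Fido", "Spot"]            # in reality: any string at all
story_variants = ["prince story", "frog story"]

table = {}
for name in names:
    for story in story_variants:
        transcript = f"My dog's name is {name}. Remember that. / {story}"
        table[transcript + " / What was the dog's name?"] = name

# The table grows multiplicatively: every variant is copied per name.
print(len(table) == len(names) * len(story_variants))  # True
```

With three names the duplication is a factor of three; with "almost any" name, the factor is again the number of possible strings.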
If God or elves construct the table, then one could plausibly argue that the table is not really a kind of algorithm; it's really a recording of intelligent activity, namely, all the imagination that went into anticipating all possible conversations. So naturally a human judge could not tell its output from that of an intelligent being, because its output is the output of an intelligent being, namely God. I submit that this would not count as a computer in the sense Turing meant. To make the point really clear, suppose one of the elves in R&D sends a memo making this observation:
It's a shame to have to build the entire DB, when we only use one small piece of it each time. Why not construct a "Virtual" GDM, in which we wait until the next response by the robot is needed, wake up the relevant elf, and have him record what the judge just said and an appropriate response, then run the lookup program on the resulting DB. No one would be able to tell the output of the VGDM from that of the GDM.

Obviously, if this suggestion is followed, then the table becomes a mere buffering device between the elf and the judge. The judge is not talking to the GDM, but to the elf (or God, or whomever). Creating the whole table in advance just increases the delay between writing the buffer and reading it, as well as creating an incredible number of buffers that are never read. Imagining a conversation requires intelligence. Imagining all of them requires an almost unlimited supply of intelligence. For the GDM to pass the Turing test would merely verify that "real" intelligence went into the construction of the tree, not that the GDM is itself intelligent.
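The elf's memo can be realized almost literally as a lazily filled table: entries are written only when first looked up, by consulting the intelligent party. The "elf" here is a hypothetical oracle function standing in for whoever actually does the thinking:

```python
# The "Virtual GDM": a table filled in only when a key is first looked
# up, by consulting the elf (an oracle function). The table itself does
# no cognitive work; it is a buffer between the elf and the judge.

class VirtualGDM(dict):
    def __init__(self, elf):
        super().__init__()
        self.elf = elf               # the intelligent party doing the real work

    def __missing__(self, judge_remark):
        reply = self.elf(judge_remark)   # wake the elf, record the answer
        self[judge_remark] = reply       # write the buffer...
        return reply                     # ...and read it right back

elf = lambda remark: "A thoughtful reply to: " + remark   # placeholder oracle
vgdm = VirtualGDM(elf)
print(vgdm["Tell me about Shakespeare."])
print(len(vgdm) == 1)   # exactly one buffer entry written, then read
```

The precomputed GDM differs only in writing all the buffers up front, almost all of them never to be read — which is exactly the point: the intelligence resides in the writer, not the buffer.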
The robot in the conversation can't know what time it is, because the GDM is supposed to be a static structure. Given the time required to build it, even knowledge of the year or century would be dicey, but over longer time scales we have to worry even about the stability of the language we're using. Suppose we get funding to build the GDM at the height of the Cold War when research money is plentiful. We agree to freeze the intended language as Standard (i.e., American Midwest) English of that era, and assume common knowledge that an educated person of that era would have. Several billion years later, when the GDM is finally complete, it might be hard to find a judge and a human competitor who still speak Standard English, and the judge will spot the robot right away because of the antiquity of its dialect, and its inability to understand any word coined after "groovy." If those points somehow don't count, then the robot will be spotted by its ignorance of the outcomes of recent national elections and scandals, or even what team won the World Series or World Cup last.
In Turing's original essay, imagined dialogues with the robot involve references to Shakespeare and chess. But the GDM must stand outside of time. You have to engage in a conversation that mentions nothing the human might have heard of that the robot can't have heard of. I submit that there is no such conversation; conversation doesn't work that way. If we are talking to someone who knows no Shakespeare, we ask about NASCAR, or where the person is from, or, in desperation, the weather. How do we rule out any "context-dependent" topic?
Now you might begin to plead for a little slack. Can't you add a clock to the system, so that, when asked what time it is, the robot's utterance can be produced by filling in a template like, "When did we start? About ___?", with the approximate time put in the blank? You could add variables to the system. If the judge asks the robot to remember their dog's name, you could store the dog's name in a variable, say N, and have the table, when asked to respond to a request for the name, contain a template something like, "... ___? I think." with a note to put N in the blank. Sure, you can do those things, but now you're writing a program, and you've gotten sucked into the AI game. Nope, if you want to claim an ordinary Gigantic Database can do the job, then you've got to stick to your guns.
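To see why the concession is fatal, here is a minimal sketch of the "table with a clock and a variable N" — the remark patterns and replies are illustrative inventions. The moment it exists, the responses are computed, not looked up, and we are doing AI:

```python
# A "table" with a clock and a variable N is no longer a table: the
# response is computed from state. Minimal sketch; patterns are made up.
import time

state = {}   # named variables the judge asked us to remember

def respond(remark, started_at):
    if remark.startswith("My dog's name is "):
        state["N"] = remark[len("My dog's name is "):].rstrip(".")
        return "Got it."
    if remark == "What was the dog's name?":
        return f"... {state.get('N', '(hmm)')}? I think."   # fill N in the blank
    if remark == "What time is it?":
        minutes = int((time.time() - started_at) / 60)
        return f"When did we start? About {minutes} minutes ago?"
    return "Go on."

t0 = time.time()
print(respond("My dog's name is Rover.", t0))    # Got it.
print(respond("What was the dog's name?", t0))   # ... Rover? I think.
```

A handful of `if` branches and one variable already constitute a (bad) program, which is precisely the move the GDM proponent is not allowed to make.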
I hope that by belaboring this issue I have succeeded in transmitting to you my strong sense that the "gigantic database" machine is not a sharp enough idea to be taken seriously. But I don't think those who propose it really want it to be taken seriously. It's just there to fill a certain "functional role," as a device that talks without having mental states. The real definition of the GDM is:
A computer that behaves as though it were intelligent, even though (somehow) it really isn't.

Alas, this is where we came in. The original challenge for doubters of the Turing test was to come up with such a device. What they have done instead is to assume there must be one.
Note on infinite machines: If we start talking about mathematical Turing machines passing the Turing test, then we will have to find a mathematical definition of "coherent conversation," which is even more difficult than finding a (scientific but nonmathematical) definition of "intelligence," which is what Turing was trying to avoid in the first place.
Hawkins, Jeff and Sandra Blakeslee (2004). On Intelligence. New York: Henry Holt.
Perlis, Donald (2005). Hawkins on intelligence: fascination and frustration. Artificial Intelligence 169, pp. 184–191.