Erica Lucast
Gustavus Adolphus College
Linking Symbols with the World: The Turing Test and Recognizing Intelligence
Fifty years ago when Alan Turing first proposed his
imitation game, the main concern was whether it was possible to create a
machine capable of fooling a human into thinking it was human too. Half a century later we have yet to
accomplish this feat, but the “test” Turing outlined in “Computing Machinery
and Intelligence,”[1] the paper
generally cited as the founding work in artificial intelligence, has not grown
cobwebby with age. The genius of the
test was that it avoided the problem of giving a workable definition of
intelligence; instead it gave a practical criterion for attributing
intelligence to a machine. It is still an
important tool in AI research, and passing the test remains the aim of some
programmers. Yet our question is no
longer about the possibility of
achieving this level of sophistication in computing power, as Turing’s was;
rather, philosophers of mind and artificial intelligence now challenge the
adequacy of the Turing test.
Philosophers’ concerns have centered around whether the
test provides necessary and sufficient conditions for intelligence. Is passing the test enough to constitute
intelligence? Will something
intelligent pass the test? The more
sophisticated our “smart” machines become, the more we learn about what true
intelligence is not, and it is now a common opinion that passing the Turing
test will not be the definitive mark of an “intelligent” machine. It is the view of this author that this is
incorrect. Numerous intuitively
appealing objections to the Turing test have been raised, and in the course of
discussing them this paper will present the view that an entity, be it machine
or something else, can pass the test if it is capable of sensory interaction
with the world, and that this will be enough to constitute intelligence.*
1. The Test
A brief discussion of the test itself will provide some
background for the rest of the discussion.
Turing opens his 1950 paper by proposing “to consider the question, ‘Can
machines think?’”[2] He quickly dismisses that question, however,
citing the difficulty of defining the terms “machine” and “think.” To replace it, Turing proposes an “imitation
game” in which it is the task of an interrogator, physically separated from the
human and computer “contestants,” to determine which is the computer and which
the human.
Here we must be careful to distinguish what Turing
actually says from what we read into his argument. A casual reading of the paper will leave one with the impression
that if a machine can pass the test, then it is intelligent. But nowhere does Turing actually state that
success in the imitation game constitutes intelligence, or that something
intelligent will necessarily succeed in the game. His aim is merely to approach the artificial intelligence
question from a different direction:
“Will the interrogator decide wrongly as often when the game is played
[between a human and a computer] as he does when the game is played between a
man and a woman? These questions
replace our original, ‘Can machines think?’”[3] Turing is asking what we will say about a
computer which can pass the test; he is clearly not making any claim that we
will or that we should count it intelligent.
Since Turing published his paper, however, the Turing
test has been interpreted and adapted as philosophical thinking and actual
technology catch up to it. The most
common assumption is that somewhere in the test are necessary or sufficient
conditions for intelligence.
Philosophers fall on both sides of this question; some consider the test
to give a necessary condition (i.e. they believe that anything intelligent is
supposed to pass the test), while others interpret the test as giving a
sufficient condition of intelligence (i.e. something which passes is
intelligent). Their objections are
directed against one of these interpretations.
This is the discussion begun by Turing’s paper: what will we say about a
computer’s performance in an imitation game?
2. The Argument from Consciousness: Searle’s Chinese Room
John Searle stands squarely in the “sufficient condition”
camp. He makes the assumption that the
test is supposed to provide sufficient conditions for intelligence, and
proceeds to argue against the test as he interprets it. If a machine succeeds in Turing’s imitation
game, he argues, we still cannot conclude that it is intelligent. His Chinese Room argument,[4]
after Turing’s paper perhaps the most famous in the field of artificial
intelligence, is as follows: Suppose Searle (or anyone else who knows English
but no Chinese) is sent into a room which is empty except for a table and
chair. On the table are stacks of
papers: several batches of Chinese writing and a set of instructions in English
on how to correlate some Chinese symbols with others and to put out strings of
symbols accordingly. Searle, innocent of the meanings of the symbols, cannot know that among the Chinese batches are a story and some questions about it, that the strings of symbols he generates constitute answers to those questions, or that the English instructions are a “program” for communicating in Chinese. Suppose further that both Searle and the programmers get so good at this symbol game that the answers Searle produces are indistinguishable from those of a native Chinese speaker. The thrust of Searle’s argument is that this
Chinese Room system of inputs and outputs could pass a Chinese Turing test
(i.e. it could fool a native speaker into thinking it understood Chinese), but
Searle himself is still entirely illiterate in Chinese. By analogy, therefore, the Turing test is inadequate because it
will count as intelligent a machine which clearly has no understanding of what
it is doing. Thus the test does not
stipulate a sufficient condition for intelligence.
At this point, nearly twenty years after Searle’s paper
appeared, it seems that we have actually produced a computer which will
function much as the Chinese Room system does. Programming a computer to work with
natural language is not an impossible task, as the work of two computational
linguists in Santa Monica, California demonstrates.[5] Kathleen Dahlgren and Ed Stabler have developed a search engine
“designed to find exact information rather than contend with a mountain of
useless information most search engine software dishes up.” The software will
respond to questions asked in natural language, using contextual hints to find
what the user is actually looking for rather than listing thirty-five web sites
dealing with other uses of the word.
For example, when asked in a Biblical context, “What was Job’s job?” it
will reply that Job was the servant of the Lord. Regular search engines would have a terrible time with such a
question, being unable to distinguish between the use of “Job” as a name and
“job” as a noun.
Yet Searle’s point stands: even software as sophisticated
as this does not “understand” what it is doing and would not fool a human into
believing it to be human. It answers
questions; it does not reason on its own or laugh at a joke. It cannot be said to understand what it is “talking about” when it retrieves
information, no matter how skillful it may be at doing so. What the computer is
doing is exactly what Searle was doing in the Chinese Room: following
rules. Its function is purely
syntactic, for the symbols it is manipulating are not linked semantically with
the world. They do not mean anything.
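To make the point concrete, here is a minimal sketch in Python of what purely syntactic processing amounts to. It is not the Dahlgren and Stabler software, nor Searle’s rule book; every rule and string in it is an invented illustration. The program maps uninterpreted input symbols to output symbols without ever touching what they are about.

    # A minimal, invented sketch of purely syntactic symbol manipulation:
    # input strings are matched against rules and mapped to output strings.
    # Nothing here links any symbol to anything in the world.

    RULES = {
        "A MAN ORDERED A HAMBURGER. IT WAS BURNED. DID HE EAT IT?":
            "NO, HE DID NOT.",
        "WHAT WAS JOB'S JOB?":
            "JOB WAS THE SERVANT OF THE LORD.",
    }

    def respond(symbols: str) -> str:
        """Return the output string paired with the input string, if any.

        The function never consults meanings or the world; it only compares
        uninterpreted character sequences, much as Searle compares the shapes
        of Chinese characters against his English rule book.
        """
        return RULES.get(symbols.strip().upper(), "NO RULE MATCHES THESE SYMBOLS.")

    print(respond("What was Job's job?"))

However large such a rule table is made, nothing in it connects the symbol for a hamburger to hamburgers; the program’s competence is exhausted by its rules, which is precisely Searle’s complaint.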
Searle himself concedes that the system could be
sophisticated enough to draw inferences from a story which did not explicitly
contain the information to answer a question:
[S]uppose you are given the following story: “A
man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man
stormed out of the restaurant angrily, without paying for the hamburger or
leaving a tip.” Now, if you are asked
“Did the man eat the hamburger?” you will presumably answer, “No, he did
not.” ...Now [certain][6]
machines can similarly answer questions about restaurants in this fashion. To do this, they have a “representation” of
the sort of information that human beings have about restaurants, which enables
them to answer such questions....[7]
But it seems we still would not
grant the computer intelligence on that basis, for it is clear that it is
merely performing syntactic manipulations.
Neither it nor Searle in his room can write a sonnet or express an
opinion, because all it or he has to go on is what it is fed through stories
and other chunks of information.
At the crux of the difficulty is the enormous task of
programming into a computer from scratch all the linguistic and sensory
experience necessary for it to behave as a human in a convincing manner. It is not likely we could program enough
into it by hand for the computer to be convincing in every situation. How can we get around this? By giving the computer a semantics to
supplement its syntax. It needs to have
a way to link the symbols it is manipulating with things in the world. It needs to have its own discoveries and
make its own connections. If Searle’s
English “program” contained sentences such as “‘Squiggle squiggle’ means
‘house’ and ‘squoggle squoggle’ means ‘inside,’” we would say that Searle
understands what he is doing, because he now knows what ‘squiggle squiggle’ and
‘squoggle squoggle’ mean.
Searle knows what the terms mean because he lives and
acts in the world. They have
significance beyond mere values of a variable, and he has spent a lifetime
building the significances in his mind. Language gives him a tool for
interaction with the world; Searle’s language is not empty, as a computer’s is.
His semantics is not merely a theory of reference, either; Searle relies on
analogy and idiom to go beyond clear-cut reference. With language, unusual comparisons and connections can be made,
so that the world underdetermines language.
There is no practical way that an “empty” computer
language can capture all these relationships and their vastly tangled
connections. Theoretically, it is
conceivable; but practically, it would demand far more hand-programming than is feasible. The alternative is much more realizable:
build a computer (or rather, a robot) that will acquire a semantics the way
Searle did. Let it interact with the
world and form its own “cognitive connection web.” Its symbols would no longer be empty, and in that case Searle’s argument would no longer apply.
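To give this proposal some shape, here is a hedged sketch of the contrast between a symbol defined only by other symbols and a symbol tied to a machine’s own experience. The classes and field names are hypothetical illustrations, not a working robot architecture.

    # A hypothetical sketch: one symbol is defined only in terms of other
    # symbols; the other accumulates links to the robot's own sensorimotor
    # episodes, forming a small "cognitive connection web."

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SensoryEpisode:
        """A fragment of the robot's own experience of the world."""
        camera_frame_id: int
        touch_reading: float
        location: str

    @dataclass
    class GroundedSymbol:
        """A symbol whose significance is a web of the robot's own episodes."""
        name: str
        episodes: List[SensoryEpisode] = field(default_factory=list)

        def ground(self, episode: SensoryEpisode) -> None:
            # Each new encounter with the referent thickens the web, instead
            # of adding another definition phrased in yet more symbols.
            self.episodes.append(episode)

    # A Chinese-Room-style system has only entries of the first kind; the
    # robot proposed here accumulates entries of the second kind on its own.
    dictionary_only = {"house": "a building in which people live"}
    house = GroundedSymbol("house")
    house.ground(SensoryEpisode(camera_frame_id=4021, touch_reading=0.2, location="front porch"))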
3. Sensory Input Will Do: Harnad’s Turing Test Hierarchy
In two papers concerning the Turing Test, Stevan Harnad[8]
proposes just that. He argues that
although “Searle thought he was refuting... the Turing Test,” he was actually
only refuting a specific version of it.[9] Like Searle, Harnad believes the test gives
a sufficient condition of intelligence.
He, however, does not take Turing’s test at face value as Turing
proposed it, but instead argues that a hierarchy of Turing tests with different
strengths can be inferred from Turing’s paper.
Here is an outline of Harnad’s versions of the test:
Level t1: This is the level at which research currently stands. The models at this level are specialized
fractions of humans’ total capacities.
These include functions such as playing chess, or Kenneth Colby’s PARRY,
which simulates the behavior of a paranoid psychiatric patient extremely well,
but cannot interact in other capacities.[10]
Level T2: This is the level Harnad calls the “conventional” Turing
test. It is the kind of machine Searle
had in mind for his Chinese Room argument; it functions as a pen-pal would,
“words in and words out.”[11] He also points out that at this level, all
interaction is purely symbolic, which is why, if the test is taken to mean only
this level, Searle’s argument can be seen to refute the Turing test.
Level T3: Harnad dubs this the “Robotic Turing Test.” He describes very nearly what I have
outlined above: mere “pen-pal capacities,” as Harnad calls them, are inadequate,
and will be detectable somehow. Symbols
the machine employs need to be anchored to the world by dynamic sensory input
and output capacities. Harnad finds it
doubtful, as I do, that pen-pal capacities are independent of such
sensorimotor, or robotic, capabilities.[12]
Level T4: On this level, the machine in question would have “internal
microfunctional indistinguishability”[13]
from us. The materials used to build
the machine can be different from our own, but the machine’s internal (and
external) workings will be no different than ours. Such a machine would have the same physical reactions to
situations as we do (such as blushing, rushes of adrenaline, bleeding when cut,
and so on).
Level T5: This is T4 implemented with actual biochemicals indistinguishable
from our own.
Harnad argues that the T3 level
is the decisive one, the one sufficient for intelligence. Obviously, the first level will never be
taken for intelligence, for it is incomplete.
The second level is subject to arguments of Searle’s sort. The fourth and fifth would count as
intelligent, but are overdetermined; they have more than the minimum capacity
necessary for intelligence. He
demonstrates this in a thought experiment: suppose we have nine candidates,
three from each of the top three Turing levels. “All nine,” Harnad notes, “can correspond with you as a pen-pal
for a lifetime; all nine can interact robotically with the people, objects,
events and states in the world indistinguishably from you and me for a
lifetime.” Now suppose it is revealed
to you that these friends of yours are not in fact people.
You are now being consulted as to which of the
nine you feel can safely be deprived of their civil rights.... Which ones should it be, and why? ...I think we are still squarely facing the
Turing intuition here, and the overwhelming answer is that none should be. ...By all
we can know that matters for such
moral decisions about people with minds, both empirically and intuitively, they
all have minds, as surely as any of the rest of us do.[14]
It seems quite reasonable to
surmise that we are not going to discriminate against the T3 robots simply
because their physical makeup is different from ours; if they have been
convincing friends for a reasonably long period of time, knowing that their bodies
are artificial will not alter our estimation of them as thinking beings.
The dividing line, then, lies between levels T2 and
T3. What exactly is the
difference? Both function in natural
language. Because of the problem of
other minds, language and conversational capacity are our everyday criteria for
assuming the others with whom we interact have “something up there” and are not
mere automatons or zombies. These
machines both have that capacity, at least in theory, so why should sensory and
motor interaction with the world make a difference? In a practical sense, we want to say that there is no way to
program into the T2 machine the vast store of background context and other
unconscious connections that go into our understanding of words. So the T2 will never happen. The T3, which does appear to be possible
given the current state of technology, would pick up those unconscious
connections on its own. As Harnad
mentions, real-life pen-pals can do many other things besides correspond. “But,” he adds, “that version of the [Turing
test] would no longer be T2, the pen-pal version, but T3, the robotic version.”[15]
He points out, too, that we do not know whether our linguistic functions are
fully separable from our “robotic” ones; he cites the fact that many organisms
have robotic functions without linguistic ones but none are endowed with
language capacities without robotic ones.
In an intuitive sense, we can say that having
sensorimotor capacities makes a difference because the machine which works with
language it has learned from the world (i.e. the T3) can “understand” what it
is talking about, whereas the T2 machine is only shuffling symbols, as Searle
did in his Chinese Room. The T3 level
gets around Searle’s argument, however, for here not all the information given
to the machine is grounded only in symbols.
The symbols are connected to the world and to the machine’s experience
in it (however remotely, given the underdetermination of language by the
world). They are complete with
connotations and attachments to memory; they have meaning. A computer which
is merely told what a symbol means,
in terms of other symbols, will not have the same kind of understanding as its
counterpart, which has experience of
what the symbol means.
We are ready to grant, then, that at Harnad’s T3 level,
the Turing test provides a sufficient condition for intelligence.
4. It’s Too Narrow: French’s Seagull Test
Let us now turn to the converse objections to the Turing
test. The complement to Searle’s
Chinese Room argument is one attacking the position that the Turing test
provides a necessary condition for intelligence. Such an argument is Robert French’s Seagull Test.[16] Unlike Searle, French agrees that something
which can pass the Turing test is intelligent, but he claims that nothing but
an actual human will have the capacity to do so. As an analogy, he sets up a “Seagull Test” for flight.[17] Suppose there is a pair of philosophers on
an island whose only flying animals are seagulls. They wish to “pin down what ‘flying’ is all about,” and since no definitional criteria are
satisfactory, they devise a Seagull Test to determine what can fly and what
cannot.
The Seagull Test works much like the Turing
Test. Our philosophers have two
three-dimensional radar screens, one of which tracks a real seagull; the other
will track the putative flying machine.
They may run any imaginable experiment on the two objects in an attempt
to determine which is the seagull and which is the machine, but they may watch
them only on their radar screens. The
machine will be said to have passed the Seagull Test for flight if both
philosophers are indefinitely unable to distinguish the seagull from the
machine.[18]
The philosophers will claim
nothing if the machine does not pass the test; thus they acknowledge that
something which does not pass may yet be able to fly. Without a theoretical understanding of the principles of flight, and with seagulls as the only available flying prototype, the only way to tell for certain whether an object can fly is to see whether it passes their test.
French asserts that the test is too stringent, for
machines such as jets and helicopters, which we all agree really do fly, will
not pass. They fly, but not in the way
a seagull does; and so the test will detect them every time. “For the Turing Test,” French claims, “the
implications of this metaphor are clear; an entity could conceivably be
extremely intelligent but, if it did not respond to the interrogator’s
questions in a thoroughly human way, it would not pass the Test.”[19] He bases this claim on a certain kind of
question an interrogator might put to the contestants in the imitation game:
questions of a “subcognitive” nature, which he frames as various rating games. His examples include such questions as “On a
scale of 0 (completely implausible) to 10 (completely plausible), please rate
[the following]: ‘Flugblogs’ as a name Kellogg’s would give to a new breakfast
cereal” and, on the same scale, “Rate dry leaves as hiding places.”[20] Questions such as these would unmask a
computer every time, French argues, because it does not have the subcognitive
structure a human does, and therefore it will not be able to make the same
kinds of associations we do.
Blay Whitby makes a similar kind of argument in his
article, “Why the Turing Test is AI’s Biggest Blind Alley.”[21] He takes issue with several assumptions he
sees as implicit in many readings of Turing’s paper, namely that “Intelligence
is (or is nearly, or includes) being able to deceive a human interlocutor” and
that “[t]he best approach to the problem of defining intelligence is through
some sort of operational test.”[22] The first assumption is the one at which
French’s Seagull Test is aimed.
Objecting to the second is actually Whitby’s main point: before we can
hope to create intelligent machines, he asserts, we must understand the
underlying principles of intelligence.
He too draws an analogy to flight: “It is true that many serious
aviation pioneers did make detailed study of bird flight, ...but it must be
stressed that working aircraft were developed by achieving greater
understanding of the principles of aerodynamics.”[23]
The difficulty of developing such principles in the field
of intelligence need not be reiterated here.
Avoiding the need for them is the genius of the Turing test, and it
seems as though a workable understanding of intelligence can be reached without
them. We bypass the problem of other
minds every day, after all. It does not
seem unreasonable, therefore, to argue that we would grant a machine
intelligence even when we can distinguish it from the human in the imitation
game. Of course its subcognitive network will be different from ours; it
is a different kind of entity. Now,
French apparently assumes that the computer has been given all of its structure
by a programmer, and thus lacks the subliminal associations the human brain
forms as it learns through its experiences.
If this were the case, it is likely that we would not grant it
intelligence at all, or at least not full intelligence, for it would have no
such associations, and its scope of reasoning would therefore be
severely limited. A computer which
could interact with the world on its own, however, would be able to form such
subcognitive associations, for every situation would be accompanied by many
factors a dry program would never capture.
In all likelihood, these would be different from human ones. But this does not in any way entail that we
would not recognize them as intelligent.
To take French’s own example, consider
a being that resembled us precisely in all
physical respects except that its eyes were attached to its knees. This physical difference alone would
engender enormous differences in its associative concept network compared to
our own. Bicycle riding, crawling on
the floor, wearing various articles of clothing (e.g. long pants) and
negotiating crowded hallways would all be experienced in a vastly different way
by this individual.[24]
Yet we would never say such an
individual was unintelligent. It might
give different ratings in French’s rating games (so we could tell it from a
normal human every time we played), but if it could support its answers with
explanations, we would surely agree that it too was a thinking being. The subcognitive associations which carry such weight in our assessment of thinking beings are probably too complicated to program directly into a computer; that much is true.
But they could be acquired through sensorimotor interaction with the
environment in which an entity finds itself, and then, even if we can differentiate the entity from a human, it is likely that we will still treat it as intelligent on the same grounds we do other humans.
5. Conclusion
Let me summarize what I have presented so far. Turing’s imitation game presented a new way
to approach the question of whether machines are capable of thought, initiating
a philosophical debate that is still in progress. Critiques of the Turing test have argued over whether the test
provides necessary or sufficient conditions for intelligence. Searle’s Chinese Room is an argument against
the test’s providing a sufficient condition; his position is that something
which can pass the test may still not be said to understand what it is talking
about. Against the test’s giving a
necessary condition for intelligence, Robert French constructed a Seagull Test
to demonstrate by analogy that the Turing test is too narrow a criterion to
demonstrate intelligence; some intelligent things would not pass the test. The way around both of these arguments, I
have proposed, is to build a machine which can interact with the world in
sensorimotor capacities similar to ours; thus it will acquire a grounding for
its syntactic symbols, and although it may form an impression of the world
different from ours, it will be recognizably intelligent to us in the way any
other person in everyday life is.
Furthermore, with Harnad I agree that this kind of interaction is
enough, for creating machines which for all intents and purposes are actually human is overkill.
One could now ask whether this is technologically
possible. This is a question I of
course cannot answer for certain. I
can, however, cite some evidence which leads me to believe that it is quite
possible, and may even occur in our lifetimes.
First of all, there exist today computers which run learning algorithms
and can pick up on environmental cues.
One example of such machines is the Furby. The toys are sensitive to being flipped upside down, to loud noises, and to motion. When first purchased, they speak nothing but the pre-programmed Furby language, but as the owner interacts with them over time, they pick up some vocabulary from English (or whatever language they hear).
Along similar lines, MIT’s Cynthia Breazeal has created
Kismet, a robot which seeks human interaction.[25] Kismet is only a head, with a doll’s blue
eyes, pink rubber lips, fuzzy eyebrows and curly pink ears. Yet those few features give it an impressive
range of facial expression. Anyone
watching can tell what the robot is “feeling” just by looking at its face. And what it is feeling is a result of what
is going on around it. When its creator
is present, it greets her by wiggling its ears and raising its eyebrows: it is
happy to see her. If she stimulates it
too much, it gets annoyed. If she plays
with it for a long time, it gets tired and goes to sleep. There are emotions it does not yet display, for the project is still incomplete.
Still, Breazeal says, “The behavior is not canned. It is being computed and is not a random
thing. The interaction is rich enough
that you can’t tell what’s going to happen next. The overarching behavior is that the robot is seeking someone
out, but the internal factors are changing all the time.”[26]
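The flavor of such behavior can be suggested with a toy sketch in code. It is not Kismet’s actual control architecture, and the drive names, thresholds, and events below are invented; the point is only to show how an expression can be computed from continuously changing internal factors rather than played back from a script.

    # A toy sketch (not Kismet's architecture): internal drive levels change
    # with events from the world, and the displayed expression is read off
    # the current drives rather than scripted in advance.

    drives = {"social": 0.5, "arousal": 0.5, "fatigue": 0.0}

    def perceive(event: str) -> None:
        """Update the internal drives in response to an event."""
        if event == "caregiver_appears":
            drives["social"] = min(1.0, drives["social"] + 0.3)
        elif event == "intense_stimulation":
            drives["arousal"] = min(1.0, drives["arousal"] + 0.4)
        elif event == "long_play":
            drives["fatigue"] = min(1.0, drives["fatigue"] + 0.5)

    def express() -> str:
        """Choose a facial expression from the current drive levels."""
        if drives["fatigue"] > 0.8:
            return "close eyes and sleep"
        if drives["arousal"] > 0.8:
            return "pull back lips and eyebrows: annoyed"
        if drives["social"] > 0.7:
            return "wiggle ears, raise eyebrows: greeting"
        return "neutral gaze, look for a face"

    for event in ["caregiver_appears", "intense_stimulation",
                  "intense_stimulation", "long_play", "long_play"]:
        perceive(event)
        print(event, "->", express())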
When she came to MIT, Breazeal worked with Rodney
Brooks. Returning from a sabbatical shortly after Breazeal began her work, Brooks set out to build an android “which would be given human experiences and would learn intelligence by interacting with the world.”[27] Cog, short for “cognitive,” was the
result. Cog is a robot which can
distinguish between several sensory stimuli and focus on one; it can catch and
throw a ball and play with a Slinky.
Here is a robot which interacts with the world.
Another approach to creating artificial intelligence is
through neural network processing.
Without going into the details or motivation of it here, it is worth
mentioning Paul Churchland’s discussion of Garrison Cottrell’s work at the
University of California, San Diego.[28] Cottrell and his group developed a network
which could recognize faces, first as faces, and then, to a lesser but still impressive degree, identify the gender of the face.
It “learned” this skill from a training set of photographs containing
faces and non-faces, much the way the human brain does.
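As a rough illustration of the kind of training just described, here is a minimal sketch of a small feedforward network learning a face versus non-face distinction by gradient descent. It is not Cottrell’s network; the synthetic 8-by-8 “images,” the architecture, and every parameter are assumptions made for the example.

    # A minimal sketch, not Cottrell's network: a tiny feedforward net trained
    # by gradient descent to separate "face" from "non-face" inputs. The 8x8
    # "images" are synthetic stand-ins for a photograph training set.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        """Fake 8x8 images: 'faces' get a brighter central patch than non-faces."""
        images = rng.normal(0.0, 1.0, size=(n, 8, 8))
        labels = rng.integers(0, 2, size=n)
        images[labels == 1, 2:6, 2:6] += 1.5   # crude stand-in for facial structure
        return images.reshape(n, 64), labels.astype(float)

    X, y = make_data(400)
    W1 = rng.normal(0.0, 0.1, size=(64, 16))   # input-to-hidden weights
    W2 = rng.normal(0.0, 0.1, size=(16, 1))    # hidden-to-output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(300):                   # plain batch gradient descent
        H = np.tanh(X @ W1)                    # hidden-layer activations
        p = sigmoid(H @ W2).ravel()            # predicted probability of "face"
        grad_out = (p - y).reshape(-1, 1) / len(y)        # cross-entropy gradient
        grad_hidden = (grad_out @ W2.T) * (1.0 - H ** 2)  # backpropagate through tanh
        W2 -= 0.5 * (H.T @ grad_out)
        W1 -= 0.5 * (X.T @ grad_hidden)

    accuracy = ((p > 0.5) == (y == 1)).mean()
    print(f"training accuracy after 300 epochs: {accuracy:.2f}")

The network is never given an explicit definition of a face; its weights are simply adjusted until its outputs match the labels in the training set, which is the sense in which such a system “learns” the skill.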
One last example is relevant here. In an article for Discover Magazine in June 1998, Gary Taubes writes about a pair of
computer scientists at the University of Sussex, Inman Harvey and Adrian
Thompson, who work in “evolutionary electronics.”[29] Thompson has been working on “evolving”
computer chips to perform specific tasks.
Essentially, he takes silicon processors that can change their configurations quickly, grades them on their performance at the task, and then “mates” them together to form a new chip. The
process continues until he has evolved a chip which is “flabbergastingly
efficient,” as he puts it. Right now,
the process by which this takes place is still mysterious. Thompson is skeptical about using his kind
of processors in artificial intelligence applications, Taubes writes, but
Harvey believes that by applying Thompson’s process to a system with billions
or trillions of components (rather than Thompson’s 100), much like the number
of neurons in a human brain, it just might be possible to evolve a conscious
machine. After all, biological
evolution made us humans conscious; if consciousness is a property which makes
its tasks easier, the machine would presumably evolve it as well.[30]
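The grade, select, mate, and mutate cycle Taubes describes can be sketched in a few lines of software. This is not Thompson’s hardware process; the bit-string “configurations” and the target-matching fitness function below are stand-ins invented to show the shape of the loop.

    # A minimal software sketch of an evolutionary loop: candidate "chip
    # configurations" are bit strings, graded by a stand-in fitness function,
    # and the best are mated and mutated to form the next generation.

    import random

    random.seed(1)
    N_BITS, POP_SIZE, GENERATIONS = 100, 50, 200
    TARGET = [random.randint(0, 1) for _ in range(N_BITS)]   # stand-in for "performs the task well"

    def fitness(config):
        """Grade a configuration by how many bits agree with the target behavior."""
        return sum(c == t for c, t in zip(config, TARGET))

    def mate(a, b):
        """Single-point crossover: splice two parent configurations together."""
        cut = random.randrange(1, N_BITS)
        return a[:cut] + b[cut:]

    def mutate(config, rate=0.01):
        """Flip each bit with a small probability."""
        return [1 - c if random.random() < rate else c for c in config]

    population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 5]                # keep the best fifth
        population = [mutate(mate(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print(f"best fitness after {GENERATIONS} generations: {fitness(best)} / {N_BITS}")

Notice that the loop never specifies how a good configuration works, only how well it performs, which is one reason the designs such a process produces can be as opaque as Thompson finds his evolved chips to be.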
The point in mentioning these examples is that much of human behavior, and many human developmental processes, are already being imitated by computer scientists and electrical engineers.
Most likely, it is only a matter of time before these technologies come
together to implement the kind of machine I have argued will be regarded as intelligent.
* I am thus claiming that the test does provide a sufficient condition for intelligence, although, as we will see, something which we recognize as intelligent may also be recognizable as nonhuman and would not pass the Turing test. The test therefore does not give a necessary condition for intelligence.
[1] Turing, Alan. “Computing Machinery and Intelligence.” Mind. Vol. 59, No. 236, pp. 433-460. In Readings in Cognitive Science: A Perspective from Psychology and Artificial Intelligence. A. Collins and E.E. Smith, Eds. San Mateo, CA: Kaufmann, 1988. Internet: http://dangermouse.uark.edu/ai/Turing.html (19 Sept. 1999). n. pag.
[2] Turing.
[3] Turing.
[4] Searle, John. “Minds, Brains, and Programs.” The Behavioral and Brain Sciences. Vol. 3. Cambridge: Cambridge UP, 1980. Internet: http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html (19 Sept. 1999). n. pag.
[5] Weinstein, Bob. “A search engine that uses linguistic analysis to cut to the chase.” From the Boston Globe’s web site: http://www.boston.com (12 September 1999). n. pag.
[6] Searle refers to the work of Roger Schank in particular, but claims that other systems had been produced at the time that could perform similar tasks.
[7] Searle.
[8] Harnad, Stevan. “Minds, Machines, and Searle.” Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25. 1989. Internet: http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad89.searle.html (11 Oct. 1999). n. pag.
---. “Turing on Reverse-Engineering the Mind.” Preprint of draft submitted to Journal of Logic, Language, and Information special issue on “Alan Turing and Artificial Intelligence” (to appear early 2001). Internet: http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html (11 Oct. 1999). n. pag.
[9] Harnad, R-E.
[10] See Dennett, Daniel. “Can Machines Think?” How We Know. Ed. Michael Shafto. San Francisco: Harper & Row, 1985.
[11] Harnad, R-E.
[12] Harnad, R-E.
[13] Harnad, R-E.
[14] Harnad, R-E.
[15] Harnad, R-E.
[16] French, Robert. “Subcognition and the Limits of the Turing Test.” Mind. Vol. 99, No. 393. 1990. pp. 53-65. Internet: ftp://forum.fapse.ulg.ac.be/pub/techreports/turing.pdf (11 Oct. 1999). 9 p.
[17] French 2.
[18] French 2.
[19] French 3.
[20] French 4,5.
[21] Whitby, Blay. “Why the Turing Test is AI’s Biggest Blind Alley.” 1997. Based on a paper presented at the Turing 1990 Colloquium. Internet: http://www.cogs.susx.ac.uk/users/blayw/tt.html (19 Sept. 1999). n. pag.
[22] Whitby.
[23] Whitby.
[24] French 6-7.
[25] Whynott, Douglas. “The Robot that Loves People.” Discover. Oct. 1999. 66-73.
[26] Whynott 68.
[27] Whynott 70.
[28] Churchland, Paul. The Engine of Reason, the Seat of the Soul. Cambridge, MA: MIT Press, 1995. 40-48.
[29] Taubes, Gary. “Evolving a Conscious Machine.” Discover. June 1998. Internet: http://www.discover.com/june_story/cmachine.html (3 Sept. 1998).
[30] For this thought, I am partially indebted to John Beloff in his “Minds or Machines” from Truth: A Journal of Modern Thought. Reprinted from Vol. 2, 1988. Internet: http://www.leaderu.com/truth/2truth04.html (11 Oct. 1999). 5 p.