Could we somehow construct an entity that would be intelligent, without also being human?
We have already built amazingly powerful computers that can add, subtract, multiply and
divide better than any human being. Are computers, therefore, intelligent? Something in us
says no: being intelligent implies a wider range of skills than arithmetic. The devil’s advocate
will announce that ‘computer’ used to mean ‘a person who computes.’ We must admit
that, yes, computers can do some clever tricks with numbers. But an intelligent man
needs only amateur arithmetic skills; a truly intelligent agent can employ strategic
reasoning and marshal logic to play a good game of chess. On the other hand, the chess
machine Deep Blue beat the world champion Garry Kasparov in 1997 (Campbell
et al., 2002). Chess has little of the uncertainty present in other domains, so maybe we
ought to say that an intellectual should be able to act decisively in the face of an
uncertain fate while outwitting his opponents for his own gain, as in the old game
of backgammon. However, the TD-gammon algorithm can also play backgammon
to a master level (Tesauro, 1994). The goal posts keep moving further away; our
standard for intelligence increases each time a non-human entity reaches a measurable
standard.
Turing (1950) suggested that we could identify intelligence through natural-language
interrogation: have a judge who is known to be intelligent converse with some
entity, and let the judge decide whether the entity is intelligent. The Turing test defines
intelligence in a circular way: ‘if a known intelligent agent thinks the agent under test is
intelligent, then the agent under test is intelligent.’ We escape the logical loop because we
assume that humans are intelligent, or at least one of us is. The beauty of Turing’s test of
machine intelligence is that it can encompass arithmetic, chess, backgammon and
many other domains by encoding them into natural language questions. Google and
Wolfram|Alpha both make laudable attempts to answer natural language questions in a
staggering range of domains well beyond the latitude of any one human expert. Has an
algorithm passed the Turing test of intelligence? Sort of. Google answers some questions
with super-human ability and others with decidedly sub-human ability. You would
probably be better off asking your mother for relationship advice, while she would
refer you to Google if you asked her about the capital of Madagascar. I suggest that
instead of thinking of Google as being intelligent, we should think of Google as
changing what ‘intelligence’ means. Google and the internet change the way we think
(Carr, 2008) and some worry that these new media are infecting our minds, deleting
previously relevant skills and replacing them with images of cats with mysteriously
hovering witless captions written in crippled English. But people consistently adopt new
technologies that enhance our abilities to think and partially abandon old skills as
technology replaces them. Writing and calculators have decreased the importance of
memory and arithmetic, respectively. A piece of paper can remember hundreds of
words, but we do not consider that paper is endowed with intelligence. As a machine
gets closer to being ‘intelligent,’ we hand over progressively higher level cognitive
functions. Google is not intelligent; rather, we are more intelligent when we have
Google.
Part of the beauty of the Turing test is that it can encode many other intelligence tests by way
of word problems: “Let us play a game of chess. You are white. What is your first move?” or
“Suppose Bob is a business executive with the opportunity to practice anti-competitive tactics.
How should he balance the chance of being sued with the potential advantage to his company,
if the situation is …?” But part of the trouble of the Turing test is that no exact programme
of questioning is supplied. What must we ask an intelligent agent to prove their
intelligence? We have devised a range of IQ and schooling tests for people, but perhaps
those only test ‘academic’ intelligence: some people perform poorly in school but
excel in other areas of life. We could require an intelligent agent to pass all possible
interrogations but this would take an infinitely long time. In some ways, Turing provided
us with an excellent framework for testing machine intelligence, but in other ways,
I consider that the open-ended nature of the test is a bug as well as a feature. If we
took a modern chatbot algorithm and tested its ability to fool a room full of
educated men and women from 1950, I imagine the people would find the chatbot
to be indistinguishable from a person. But if we replaced the pool of testers with
adults living today, would the algorithm perform that well? My point is that people’s
expectations of the performance of machines have greatly increased in the last 60
years. I expect a machine to be able to converse about the weather and maybe even
current events. But if I were recruited by an undergraduate psychology or computer
science student to undergo a Turing-style test, I would intentionally ask questions
that I thought a computer would not be able to answer. As computers get better at
answering some questions (like IBM’s machine that plays Jeopardy), we humans
invent new questions that computers are not so good at answering. Computers are
good at chess, but in 1988 Walter Zamkauskas invented Amazons, a game that has a
much larger game-tree complexity (Hensgens, 2001). To be able to answer all lines of
reasoning, even those specifically contrived to defeat algorithms, is a very hard task
indeed.
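The gap between chess and Amazons can be made concrete with a back-of-the-envelope calculation. Game-tree complexity grows roughly as $b^d$ for an average branching factor $b$ over a game of $d$ moves; the figures below are rough, commonly quoted ballpark values (an average branching factor around 35 for chess, and a far larger one for Amazons), not exact measurements:

```python
import math

def log10_game_tree(branching, depth):
    """Base-10 exponent (order of magnitude) of branching**depth."""
    return depth * math.log10(branching)

# Illustrative ballpark figures only, not exact measurements.
chess = log10_game_tree(35, 80)      # chess is commonly estimated near 10^123
amazons = log10_game_tree(500, 80)   # an Amazons-like branching factor

print(f"chess ~ 10^{chess:.0f}, amazons ~ 10^{amazons:.0f}")
```

The point of the sketch is only that a modest increase in branching factor, compounded over the length of a game, yields an enormously larger tree, which is why brute-force search carries over poorly from chess to Amazons.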
Turing thought that people would consider machines to be able to think by now. ‘Nevertheless I
believe that at the end of the century the use of words and general educated opinion will have
altered so much that one will be able to speak of machines thinking without expecting to be
contradicted’ (Turing, 1950, p. 442). We anthropomorphise our computers when they hang,
saying that the computer ‘is thinking,’ but I sense that Turing’s prediction has not come true
yet. Why do we hesitate to call a non-human entity intelligent? Partly, because we fail to define
intelligence in a complete and unchanging way. A further difficulty may be that we simply do
not want to call a machine intelligent. Turing (1950) calls this the ‘Heads in the Sand’
objection to the possibility of thinking machines. Our species has a unique corner on being
smart. Cheetahs run faster and lions have more deadly natural weapons, but
no animal can match us in the battle of intelligence. We live by our cunning. We
classify other parts of the world as being agent-ish or non-agent-ish because the
agent-ish things may turn on us with malice and deception. Our tendency towards
self-interest and deception leads us to deny even other people the coveted property of
‘intelligence.’ The more stupid something is, the easier we can exploit it to our own
ends.
Perhaps an important attribute of intelligence is that ‘someone who is intelligent cannot be
easily manipulated or exploited.’ For this reason, a cat may be more intelligent than Google
because conducting searches with the website is usually easy, whereas cats resist many of our
suggestions. We cannot defining intelligence merely as resilience to manipulation, because such
a definition contradicts our intuition that a child is more intelligent than a mountain.
Furthermore, we would have to admit that the chess and backgammon algorithms really are
intelligent because they are difficult to overcome. Maybe Deep Blue and TD-gammon really are
intelligent; at least they have the property of being winners in their respective domains of
competition. The questions asked by a judge of intelligence should include social,
multi-agent dimensions such as competition and cooperation. The social aspects of
intelligence are especially relevant in multi-agent reinforcement learning (Buşoniu
et al., 2008). ‘Agent-aware’ algorithms have the advantage over algorithms that believe
they are the only actor in an otherwise random world. Minimax-Q (Littman, 1994)
is a wonderful example of a Machiavellian algorithm: it plays in such a way as to
maximise its payoff while assuming that its opponents will try to do it maximum
harm.
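The maximin principle at the core of Minimax-Q can be sketched in a few lines: pick the action whose worst-case payoff, assuming the opponent does us maximum harm, is largest. This is only the pure-strategy kernel of the idea; the full Minimax-Q algorithm learns Q-values from experience and mixes strategies via linear programming, which the sketch omits. The payoff matrix is invented for illustration:

```python
def maximin(payoffs):
    """Pick the action with the best worst-case payoff.

    payoffs[a][o] = our payoff when we play action a and the
    opponent plays action o. Returns (action, guaranteed value).
    """
    best_action, best_value = None, float("-inf")
    for a, row in enumerate(payoffs):
        worst = min(row)  # the opponent inflicts maximum harm
        if worst > best_value:
            best_action, best_value = a, worst
    return best_action, best_value

# Hypothetical 3x3 zero-sum game, rows = our actions, columns = theirs.
payoffs = [[3, -1, 2],
           [0,  1, 1],
           [-2, 4, 0]]
print(maximin(payoffs))  # -> (1, 0): action 1 guarantees at least payoff 0
```

The Machiavellian flavour is visible in the `min` over each row: the algorithm never hopes for a cooperative opponent, it plans for the most damaging response.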
So are computers intelligent? By Turing’s original definition, using people of his
era as testers, I’d say yes. If we used people of the present era, then I’d say no. My
conclusion: our definition of intelligence has been changing as our machines gain new
abilities.
References
Buşoniu, L., Babuška, R., and De Schutter, B. (2008). A comprehensive survey
of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and
Cybernetics, Part C: Applications and Reviews, 38(2):156–172.
Campbell, M., Hoane Jr., A., and Hsu, F. (2002). Deep Blue. Artificial Intelligence,
134(1–2):57–83.
Carr, N. (2008). Is Google making us stupid? Yearbook of the National Society for
the Study of Education, 107(2):89–94.
Hensgens, P. (2001). A Knowledge-Based Approach of the Game of Amazons. M.Sc.
thesis, Universiteit Maastricht, Maastricht.
Littman, M. L. (1994). Markov games as a framework for multi-agent reinforcement
learning. In Proceedings of the 11th International Conference on Machine Learning,
pages 157–163. Morgan Kaufmann.
Tesauro, G. (1994). TD-Gammon, a self-teaching backgammon program, achieves
master-level play. Neural Computation, 6(2):215–219.
Turing, A. M. (1950). Computing machinery and intelligence. Mind,
LIX(236):433–460.