The Meaning of Deep Blue's Victory

By Charles Krauthammer

“What we have is the world’s best chess player vs. Garry Kasparov.”--Louis Gerstner, CEO of IBM


When on May 11 Deep Blue, an IBM computer, defeated Garry Kasparov in the sixth and
deciding game of their man-vs.-machine match, the world took notice. It made front pages
everywhere. Great story: BOX DEFEATS WORLD CHESS CHAMPION. Indeed: BOX
DEFEATS BEST PLAYER OF ALL TIME. Kasparov is so good that in his entire life he
has never lost a match - and he has been involved in some of the epic matches in chess history, including several Ali-Frazier-like classics with former world champion Anatoly Karpov.

Deep Blue won 2-1, with three draws. Yet the real significance of the match lay
not in the outcome, however stunning. Why? Because the match was tied until Game Six, and Game Six was decided by a simple misplay of the opening. Kasparov played the wrong move order - making what should have been move 9 on move 7 - and simply could not recover.

It was a temporary lapse of memory. (Most openings have been tested so many times by trial and error that there is no need to figure them out during the game. You come in knowing them by heart.)
Such lapses are fatal against Deep Blue, however. This brute contains in its memory
every opening of every recorded game of every grandmaster ever. Deep Blue's "opening book" spotted the transposition immediately and pounced. Twelve moves later, his position in ruins, Kasparov resigned.

Blunders of this sort are entertaining and sensational. But they are not very illuminating.
The real illumination in the match, the lightning flash that shows us the terrors to come, came in Game Two, the likes of which had never been seen before.

What was new about Game Two - so new and so terrifying that Kasparov subsequently
altered his style, went on the defensive, and eventually suffered a self-confessed psychological collapse ("I lost my fighting spirit") - was that the machine played like a human. Grandmaster observers said that had they not known who was playing, they would have imagined that Kasparov was playing one of the great human players, maybe even himself. Machines are not supposed to play this way.

Playing Like a Computer

What did Deep Blue do? What does it mean to play like a human?
We must start by looking at what it means to play like a computer. When computers play
chess, or for that matter, when they do anything, they do not reason. They do not think. They simply calculate.

In chess, it goes something like this. In any given position, the machine calculates:

“If I do A, and he does B, and then I do C, and he does D...then I will end up with position X.”

“On the other hand, if I do A and he does B, and I do C and he does not D but E...I’ll end
up with position Y.”

Deep Blue, the most prodigious calculator in the history of man or machine, can perform
this logic operation 200 million times every second. This means that in the three minutes, on average, allotted for examining a position, it is actually weighing 36 billion different outcomes.

Each outcome is a different position - how the board will look a few moves down the road (in
our example: X and Y). The machine then totes up the pluses and minuses of each final
position (for instance, the loss of a queen is a big minus; bishops stuck behind their own pawns are a smaller minus), chooses the one in 36 billion that has the highest number, and makes a move.
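The search described in the last few paragraphs can be sketched in a few lines of Python. This is a toy illustration, not Deep Blue's actual code: the move generator and the scoring function below are hypothetical stand-ins, and real chess programs prune the tree rather than enumerate it exhaustively.

```python
# Toy brute-force search: explore every line of play to a fixed depth,
# score each final position, and pick the move leading to the best one.
# "Positions" here are just integers, and "moves" just nudge them - stand-ins
# for real board states and legal chess moves.

def moves(position):
    """Hypothetical move generator: each position offers three moves."""
    return [position + 1, position - 1, position + 2]

def evaluate(position):
    """Hypothetical scoring: tote up the pluses and minuses of a final position."""
    return position  # higher is better for the machine

def search(position, depth, machine_to_move=True):
    """Best achievable score from here, assuming both sides play their best."""
    if depth == 0:
        return evaluate(position)  # a "final" position: score it
    scores = [search(p, depth - 1, not machine_to_move) for p in moves(position)]
    # The machine picks its maximum; the opponent picks the machine's minimum.
    return max(scores) if machine_to_move else min(scores)

def best_move(position, depth):
    """Choose the move whose line carries the highest guaranteed score."""
    return max(moves(position), key=lambda p: search(p, depth - 1, False))
```

Deep Blue ran essentially this loop 200 million times a second; over the three minutes allotted to a move, that is 200,000,000 × 180, or 36 billion positions.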

This is called "brute force" calculation, and it is how Deep Blue and all good chess computers
work. This is not artificial intelligence, which was the alternative approach to making computers play chess and do other intellectual tasks. In artificial intelligence you try to get the machine to emulate human thinking. You try to teach it discrimination, pattern recognition, and the like. Unfortunately, artificial-intelligence machines turned out to be a bust at chess.

The successful machines simply calculate. And it is with this kind of calculating ability that
Deep Blue beat Kasparov last year in Game One of their maiden match in Philadelphia. It was the first time a computer had ever won a game from a world champion and it caused a sensation.

It happened this way: Late in the game, Deep Blue found its king under fierce attack by
Kasparov. Yet Deep Blue momentarily ignored the threat (lose the king and you lose the game) and blithely expended two moves going after a lowly stray Kasparov pawn. The experts were aghast. No human player would have dared do this. When your king is exposed, to give Kasparov two extra moves in which to press his attack is an invitation to suicide.

Deep Blue, however, having calculated every possible outcome of the next 10 or 15 moves,
had determined that it could (1) capture the pawn, then (2) bring its expeditionary force back to defend its king exactly a hairsbreadth before Kasparov could deliver the fatal checkmate, thus (3) foil Kasparov's attack, no matter how he tried it, and then (4) win the game thanks to the extra pawn it had captured on its hair-raising gambit.

So it calculated. And so, being exactly right, it won.

No human would have tried this, because no human could have been certain that in this
incredibly complex position he had seen every combination. Deep Blue did try it because, up to a certain horizon (10-15 moves into the future), it is omniscient.

Game One in Philadelphia became legend. It was a shock to Kasparov’s pride and a tribute
to brute tactical calculation. But that’s all it was: tactics.

Playing Like a Human

Fast forward to Game Two of this year's match, on May 4. This time the machine won, but
in a totally different way.

It did not use fancy tactics - tactics being the calculation of parry and thrust, charge and
retreat, the tit-for-tat of actual engagement, the working out of "If I do A...". Game Two allowed for no clever tactics. The position was closed, meaning that both sides' pieces were fairly locked in and had very few tactical and combinational opportunities.

Kasparov had deliberately maneuvered the game into this structure. He knew (from Game
One in Philadelphia) that when the armies are out in the open and exchanging fire rapidly, the machine can outcalculate him. He knew that his best chance lay in a game of closed positions, where nothing immediate is happening, where the opposing armies make little contact, just eyeing each other warily across the board, maneuvering their units, making subtle changes in their battle lines.

Such strategic, structural contests favor humans. After all, Kasparov does not evaluate 200
million positions per second. He can evaluate three per second at the most. But he has such intuition, such feel for the nuances and subtleties that lie in the very structure of any position, that he can instinctively follow the few lines that are profitable and discard the billions of combinations Deep Blue must look at. Kasparov knows in advance which positions "look" and "feel" right. And in a closed strategic game like Game Two, look and feel are everything.

The great chess master Savielly Tartakower once said: "Tactics is what you do when there
is something to do. Strategy is what you do when there is nothing to do.” Strategic contests
are contests of implied force and feints, of hints and muted thrusts. They offer nothing
(obvious) to do, and they are thus perfectly suited to human flexibility and “feel”.

Calculators, on the other hand, are not good at strategy. Which is why historically, when
computers - even the great Deep Blue - have been given nothing tactically to do, no tit-for-tat combinations to play with, they have tended to make aimless moves devoid of strategic sense.

Not this time. To the amazement of all, not least Kasparov, in this game drained of tactics,
Deep Blue won. Brilliantly. Creatively. Humanly. It played with - forgive me - nuance and subtlety.

How subtle? When it was over, one grandmaster was asked where Kasparov went wrong.
He said he didn’t know. Kasparov had done nothing untoward. He made no obvious errors. He had not overlooked some razzle-dazzle combination. He had simply been gradually, imperceptibly squeezed to death by a machine that got the “feel” of the position better than he.

Why is this important? Because when Deep Blue played like a human, even though reaching its conclusions in a way completely different from a human's, something monumental happened:
Deep Blue passed the Turing test.

The Turing Test

In 1950, the great mathematician and computer scientist Alan Turing proposed the Turing
test for "artificial intelligence". It is brilliantly simple: you put a machine and a human behind a curtain and ask them questions. If you find that you cannot tell which is the human and which is the machine, then the machine has achieved artificial intelligence.

This is, of course, a mechanistic and functional way of defining artificial intelligence. It is
not interested in how the machine - or, to be sure, even the human - comes to its conclusions. It is not interested in what happens in the black box, just in what comes out: results. You cannot tell the man and machine apart? Then there is no reason to deny that the machine has artificially recreated or recapitulated human intelligence.

In Game Two, Deep Blue passed the Turing test. Yes, of course, it was for chess only, a
very big caveat. But, first, no one was ever sure a machine would pass even this limited test. Kasparov himself was deeply surprised and unnerved by the human-like quality of Deep Blue’s play.
He was so unnerved, in fact, that after Game Two he spoke darkly of some "hand of God"
intervening, a not-so-veiled suggestion that some IBM programmer must have altered Deep Blue's instructions in mid-game. Machines are not supposed to play the way Deep Blue played Game Two. Well, Deep Blue did. (There is absolutely no evidence of human tampering.)

And second, if a computer has passed the Turing test for chess, closed logical system
though it may be, that opens the possibility that computers might in time pass the Turing test in other areas.

One reason to believe so is that, in this case, Deep Blue's Turing-like artificial intelligence
was achieved by inadvertence. Joe Hoane, one of Deep Blue's programmers, was asked,
"How much of your work was devoted specifically to artificial intelligence in emulating human thought?" His answer: "No effort was devoted [to that]. It is not an artificial intelligence project in any way. It is a project in - we play chess through sheer speed of calculation and we just sift through the possibilities and we just pick one line."

You build a machine that does nothing but calculation, and it crosses over and creates
poetry. This is alchemy. You build a device with enough number-crunching algorithmic power and speed - and, lo, quantity becomes quality, tactics becomes strategy, calculation becomes intuition. Or so it seems.
And, according to Turing, what seems is what counts.

From Ape to Archimedes

But is that not what evolution did with us humans? Build a device--the brain--of enough neuronal size and complexity that, lo, squid begat man, quantity begat quality, reflex begat intuition, brain begat mind?

After all, how do humans get intuition and thought and feel? Unless you believe in some
metaphysical homunculus hovering over (in?) the brain directing its bits and pieces, you must attribute our strategic, holistic mental abilities to the incredibly complex firing of neurons in the brain. Kasparov does not get the gestalt of a position because some angel whispers in his ear. (Well, maybe Bobby Fischer does. But he's mad.) His brain goes through complex sequences of electrical and chemical events that produce the ability to "see" and "feel" what is going on. It does not look like neurons firing. It does not feel like neurons firing. But it certainly is neurons firing, as confirmed by the lack of chess ability among the dead.

And the increasing size and complexity of the neuronal environment has produced in
humans not just the capacity for strategic thought, but consciousness, too. Where does that come from, if not from neurons firing? A million years ago, human ancestors were swinging from the trees and composing no poetry. They led, shall we say, an unexamined life. And yet, with the gradual, non-magical development of ever more complex neuronal attachments and connections, we went from simians to Socrates. But somehow along the way - we know not how it happened but we know that it happened - a thought popped up like an overhead cartoon balloon. We became self-aware, like Adam in the Garden.

Unless you are ready to posit that this breakthrough occurred as the result of some
physics-defying rupture of nature, you must believe that human intelligence, thought, self-consciousness itself are the evolutionary product of an increasingly complex brain.

But then if the speed and complexity of electrochemical events in the brain can produce
thought and actual self-consciousness, why in principle could this not occur in sufficiently complex machines? If it can be done with a carbon-based system, why not with silicon (the stuff of computer chips)?

An even more powerful mystery about human agency is free will. Yet even here we have an
inkling of how it might derive from a physical-material base. We know from chaos theory that when systems become complex enough, one goes from the mechanistic universe, where one can predict every molecular collision down to the last one, to a universe of contingency, where one cannot predict the final event. When that final event is human action, we call the contingency that underlies it free will.

I ask again: If contingency, and with it free will, evolved out of the complexity of a carbon-based system, why not with silicon?

“You Can Never Know for Sure...”

On May 4, in New York City, a computer demonstrated subtlety and nuance in chess. A
more general intelligence will require a level of complexity that might take decades more of advances in computer speed and power. (Not bad, actually, considering that it took nature, using its raw materials, three billion years to produce intelligence in us.) And it will take perhaps a few centuries more for computers to reach the final, terrifying point of self-awareness, contingency, and autonomous will.

It is, of course, a very long way to go from a chess game on the 35th floor of the Equitable
Center to sharing the planet with logic monsters descended distantly from Deep Blue. But we’ve had our glimpse. For me, the scariest moment of the match occurred when Murray Campbell, one of the creators of Deep Blue, was asked about a particular move the computer made. He replied: “The system searches through many billions of possibilities before it makes its move decision, and to actually figure out exactly why it made its move is impossible. It takes forever. You can look at various lines, and get some ideas, but you can never know for sure exactly why it did what it did.”

You can never know for sure why it did what it did. The machine has already reached such
a level of complexity that its own creators cannot trace its individual decisions in a
mechanistic A to B to C way. It is simply too complicated. Deep Blue's actions have already eclipsed the power of its own makers to fully fathom. Why did Deep Blue reposition its king's rook on move 23 of Game Two? Murray Campbell isn't sure. Why did Adam eat from the apple? Does his Maker know?

We certainly know the rules, the equations, the algorithms, the database by which Deep
Blue decides. But its makers have put in so many and so much at such levels of complexity - so many equations to be reconciled and to “collide” at once - that we get a result that already has the look of contingency. Indeed, one of the most intriguing and unnerving aspects of Deep Blue is that it does not always make the same move in a given position.

We have the idea that all computers (at least ones that aren't on the blink) are totally
predictable adding machines. Put your question in and you will get the answer out - the same answer, every time. This is true of your hand-held calculator. Do 7 times 6 and you will get 42 every time. It is not true of the kind of problems Deep Blue deals with.

Why? Because Deep Blue consists of 32 computer nodes (of 16 co-processors each)
talking to one another at incredible speed. If you present the same question to it a second time, the nodes might be talking to one another in a slightly different order (depending on minute alterations in the way various tasks are farmed out to the various chips), yielding a different result. In other words, in a replay of Game Two, Deep Blue might not reposition its king’s rook on move 23.
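That order-dependence can be illustrated with a short hypothetical sketch in Python (the move names and scores below are invented, not from the actual games): when several candidate moves tie on score, a "best seen so far" rule returns whichever answer the nodes happened to deliver first.

```python
# Hypothetical illustration of order-dependent choice: with a strict
# "better than the best so far" rule, a tie goes to the earliest arrival,
# so the scheduler's ordering of the nodes' answers decides the move.

def choose(results):
    """Pick the highest-scoring move; ties go to the result that arrived first."""
    best_move, best_score = None, float("-inf")
    for move, score in results:      # results arrive in scheduler order
        if score > best_score:       # strict '>' keeps the first of any tie
            best_move, best_score = move, score
    return best_move

# The same two evaluations, delivered by the nodes in two different orders:
run_1 = [("Rook to e1", 0.31), ("Knight to d5", 0.31)]
run_2 = [("Knight to d5", 0.31), ("Rook to e1", 0.31)]
```

Identical inputs, identical scores - yet `choose(run_1)` and `choose(run_2)` name different moves, simply because the answers arrived in a different order.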

To have achieved this level of artificial intelligence - passing the Turing test against the
greatest chess player in history - less than 40 years after the invention of the integrated circuit, less than 30 years after the introduction of the microprocessor, should give us pause about the future possibilities of this creation. It will grow ever further beyond our control, even our understanding. It will do things that leave its creators baffled - even as Deep Blue's creators are baffled by their baby's moves.

The skeptics have a final fallback, however. Okay, they say, maybe we will be able to
create machines with the capacity for nuance, subtlety, strategic thinking, and even
consciousness. But they still could never feel, say, pain - i.e., have the subjective experience we have when a pin is pushed into our finger. No pain, no sadness, no guilt, no jealousy, no joy. Just logic. What kind of creature is that?

The most terrifying of all. Assume the skeptics are right. (I suspect they are.) All they are
saying is that we cannot fully replicate humans in silicon. No kidding. The fact is that we will instead be creating a new and different form of being. And one infinitely more monstrous: creatures sharing our planet who not only imitate and surpass us in logic, who have perhaps even achieved consciousness and free will, but who are utterly devoid of the kind of feelings and emotions that, literally, humanize human beings.

Be afraid.

You might think it is a little too early for fear. Well, Garry Kasparov doesn’t think so. “I’m
not afraid to admit that I’m afraid,” said perhaps the most fearless player in the history of chess when asked about his tentative play. When it was all over, he confessed why: “I’m a human being, you know... when I see something that is well beyond my understanding, I’m scared.”

We have just seen the ape straighten his back, try out his thumb, utter his first words, and
fashion his first arrow. The rest of the script is predictable. Only the time frame is in question.