Sunday, March 23, 2008

Book Review:

The Age of Spiritual Machines

By Raymond Kurzweil

Published by Viking Adult on January 1, 1999

I had more respect for Raymond Kurzweil before reading The Age of Spiritual Machines than I did after I finished it. His arguments--if they can indeed be called "arguments"--are flimsy at best and downright deceptive at worst. What I can't decide is whether Kurzweil (who has a long list of astonishing credentials) meant to assemble a book full of fallacious arguments, or whether he simply lacks the philosophical rigor necessary to deal with the issues his book pretends to deal with.

The first difficult pill to swallow is Kurzweil's attempted generalization of Moore's Law (a "law" which posits an accelerating increase in computational power along with a corresponding decrease in the price of computers). In the opening chapter of his book, Kurzweil seeks to establish a similar but far more sweeping Law. This Law (audaciously named "The Law of Time and Chaos") holds that ever since the universe began, everything in existence has operated under a principle that can be stated in two parts:

1) "The Law of Increasing Chaos: As chaos exponentially increases, time exponentially slows down."

2) "The Law of Accelerating Returns: As order exponentially increases, time exponentially speeds up (that is, the time interval between salient events grows shorter as time passes.)"

Kurzweil argues that Moore's Law is just an instantiation of the Law of Accelerating Returns, which has been operating on Earth ever since the dawn of evolution (a process of ever-increasing order). If we accept that evolution is a process of increasing order and that the Law of Accelerating Returns is valid, then (Kurzweil argues) we must accept that time is speeding up. As long as we define "time is speeding up" as "the rate at which interesting things happen is increasing," I'm happy to give Kurzweil the benefit of the doubt here--in spite of his wonky use of the term "time." After all, this is what he seems to mean in his aforementioned parenthetical addendum to the Law of Accelerating Returns.
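To see what that charitable reading commits Kurzweil to, here is one way to formalize it (the notation is mine, not his): let t_n be the clock time of the n-th "salient event." The Law of Accelerating Returns then amounts to the claim that the gaps between salient events shrink geometrically:

    t_{n+1} - t_n = c * r^n,    where c > 0 and 0 < r < 1

so that the number of salient events per year grows without bound. Stated this way, it is at least a clear empirical claim about event counts--which makes it easier to see, below, how little follows from it about any particular event.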

The really misguided portions of Kurzweil's discourse occur when he tries to use the Law of Accelerating Returns to make predictions about the future. To make a long story short: evolution makes time speed up (stuff always happens quicker than it used to), the accelerating production of technology is the result of this temporal speedup, technological production--because it orders things--makes time speed up even more (stuff happens even more quickly), and therefore... machine intelligence will rival human intelligence by the year 2039. This is the very definition of a non sequitur fallacy. To be fair, Kurzweil doesn't state his case as simplistically as I have just now; but he does rely heavily on the Law of Accelerating Returns to suggest that the coming intelligence of machines is "inevitable"--to use his word.

Here's one reason why this conclusion is ridiculous even if we grant that the Law of Accelerating Returns is valid. Even if more and more stuff is happening more and more quickly, there are still a great many conceivable biological and technological events that won't be happening anytime soon. For example, human beings won't be evolving to have purple skin. And we won't be inventing technology to suck all the marrow out of our bones and inject it into trees. The problem with Kurzweil's argument is that you can use it to prove just about anything: stuff is happening faster and faster, it will continue to do so, therefore we will have purple skin by the year 2039. This is obviously silly. And so are Kurzweil's conclusions.

Here's another reason we should be suspicious of drawing the "intelligent machines are inevitable" conclusion from the Law of Accelerating Returns. The Law of Accelerating Returns can just as well be used to predict events that would preclude the creation of intelligent machines. For example, one can argue that the constant increase in technology is bringing us ever closer to total nuclear annihilation. Should that happen, it would constitute a major setback for advances in computing machinery, let alone the human race.

Am I being unfair by attacking what is clearly one of Raymond Kurzweil's wonkiest arguments? I don't think so. He himself treats the Law of Accelerating Returns as a crucial element in the overall argument. Moore's Law alone isn't enough to show that intelligent machinery is inevitable. In the introduction to his chapter on the Law of Time and Chaos, he states, "For the past forty years, in accordance with Moore's Law, the power of transistor-based computing has been growing exponentially. But by the year 2020, Moore's Law will have run its course. What then? To answer this critical question, we will need to understand the exponential nature of time" (vi). In other words, Moore's Law doesn't predict the advent of machines fast enough to approximate human intelligence. Part of the reason is that Moore's Law only applies to transistor-based computers, which have a theoretical maximum speed. Kurzweil needs another Law if he is going to make the predictions he wants to make. Unfortunately, the Law he uses and the conclusions he draws from it are seriously flawed, as we have seen.

Let's give Kurzweil a little more slack, however. Let's pretend computers will keep getting faster from now until the end of time. And let's pretend that the human race won't be destroying itself between now and then. This still isn't enough to justify the jump to the "machines will be as intelligent as human beings in X number of years" conclusion. The gist of Kurzweil's argument in this direction is, "Look at how dumb computers used to be. Look at how many people thought they would never be able to do X, Y, and Z. And now look at how smart they are--able to do X, Y, and Z. Now we have grounds to doubt people who say that computers will never be as intelligent as human beings." One of Kurzweil's favorite examples is the game of chess. Prior to Deep Blue's win over world champion Garry Kasparov, the idea of a computer playing grandmaster-level chess was considered ludicrous by many people, including notable experts in artificial intelligence such as Hubert Dreyfus. Now that computers have achieved this milestone, Kurzweil argues, "the sky is the limit."

Wait a second. Not so fast. What has actually been accomplished by this so-called milestone in computer chess playing abilities? Computers like Deep Blue are designed to crunch numbers, analyzing millions upon millions of chess positions every fraction of a second. Needless to say, this is not what human beings do when they play chess. Even a fleet of grandmasters won't look at millions upon millions of chess positions every second--or even over the course of their lifetimes. Indeed, during each move of a chess game, grandmasters tend to look at three or four possible variations--maximum. But (as chess champion Jose Capablanca once said) "those variations happen to be the right ones." What is it that makes the chess variations a human being looks at "happen to be right," whereas a computer must look at several million variations to find the right ones? This is a question that many philosophers and artificial intelligence researchers have examined. Raymond Kurzweil, unfortunately, is not one of them. He seems to think that computers will become fast enough that they don't need the kind of intuition that human beings have. However, if (as philosophers like Hubert Dreyfus argue) human intuition is the key to human intelligence, then machine intelligence--if it occurs at all--will never be of the same type as human intelligence.
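Some rough arithmetic makes the contrast vivid (the numbers are mine, not Kurzweil's, and assume the commonly cited average of roughly 35 legal moves per chess position):

    35^2 = 1,225 positions to exhaust one full move (one choice by each side)
    35^4 ≈ 1.5 million positions for two full moves
    35^6 ≈ 1.8 billion positions for three full moves

Brute-force search drowns in this explosion within a few moves; the grandmaster who considers only the right three or four variations sidesteps it entirely. Whatever faculty lets human beings do that, raw speed is not obviously a substitute for it.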

Kurzweil's book suffers because he posits the inevitability of intelligent machines without ever defining what he means by "intelligence." Let's give him a little more slack and assume that a machine can be intelligent even if it isn't doing the same kind of calculating that human beings are doing. This means that we can consider Deep Blue and Garry Kasparov both to be "intelligent" when it comes to playing chess, even though they are playing chess in very different ways. Even with these assumptions, Kurzweil's arguments remain unconvincing.

Kurzweil sides with Turing in believing that computers will one day be able to intelligently converse with human beings in a variety of natural languages. He further believes they will converse so well that their responses will be indistinguishable from those a human being might make. (In other words, they will pass the Turing test.)

Unfortunately, the number of chess positions a computer needs to calculate in order to play decent chess is minuscule compared to the number of linguistic responses a computer would need to calculate in order to respond as a human being does in everyday conversation. Kurzweil asserts that choosing the proper response could be as simple as performing the following algorithm on a really, really, really fast computer:

"For my next step, take my best next step. If I'm done, I'm done."

Yep. That's all there is to it. Kurzweil describes this as "a really simple formula to create intelligent solutions to difficult problems" (282). Let's give Kurzweil the benefit of the doubt on this one: matters are certainly more complicated and he knows it, but he has to keep things simple for his readers. Beyond that, let's assume with him that all the complexities really do boil down to the above recursive formula. When playing chess, for example, he suggests that a program follow this variation of the formula:

"PICK MY NEXT BEST MOVE: Pick my next best move. If I've won, I'm done."

For chess, this works. It works because the idea of "best move" is defined in terms of whether or not you've "won." Luckily, the winning conditions for chess (checkmate) are easy to evaluate computationally, so it makes sense to define "best move" in terms of those winning conditions. But this formula becomes ludicrous in areas like natural language production:

"PICK MY NEXT BEST WORD: Pick my next best word. If I'm done, I'm done."

What does it mean to be "done" in conversation? When do you stop talking? At any given moment, how do we (as human beings) know when we've said the right things? How do we know we're going to say the right things the next time we open our mouths? The fact is, we don't always know. And if we don't always know, how are we supposed to write a stopping condition into a recursive formula for a computer's "next best word"?
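The asymmetry jumps out if you try to turn the two formulas into actual code. Here is a minimal sketch (my code, not Kurzweil's; every type and method below is a hypothetical stand-in, not any real chess or language library):

import java.util.List;

// Hypothetical stand-in types for the chess version of the formula.
interface Move {}
interface Board {
    boolean isGameOver();     // checkmate, stalemate, draw: mechanically testable
    List<Move> legalMoves();  // finite and mechanically enumerable
    int scoreFor(Move m);     // a computable evaluation of each candidate
}
interface Conversation { boolean isDone(); }

class KurzweilFormula {
    // "PICK MY NEXT BEST MOVE: Pick my next best move. If I've won, I'm done."
    static Move pickNextBestMove(Board b) {
        if (b.isGameOver()) return null;  // "If I've won, I'm done": a test a program can run
        Move best = null;
        for (Move m : b.legalMoves()) {
            if (best == null || b.scoreFor(m) > b.scoreFor(best)) best = m;
        }
        return best;
    }

    // "PICK MY NEXT BEST WORD: Pick my next best word. If I'm done, I'm done."
    static String pickNextBestWord(Conversation c) {
        if (c.isDone()) return null;  // ...but nobody knows how to implement isDone(),
        // nor how to enumerate and score "all possible next words" the way
        // legalMoves() enumerates chess moves.
        throw new UnsupportedOperationException("no computable stopping condition");
    }
}

The chess version bottoms out because both its stopping test and its move generator are well-defined; the conversation version is just a formula-shaped restatement of the entire unsolved problem.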

My point is that chess lends itself to mathematical analysis, whereas language production does not. So the argument "once upon a time computers couldn't play chess; now they can; so one day they will understand language" is a false analogy.

I'd rather not spend any more time lampooning Raymond Kurzweil's shoddy reasoning. He obviously wants very badly to believe that machines will one day be intelligent. I don't know why. I'm content to let him believe what he wants.

But I should mention that the claims I've dealt with in this essay are not the only examples of Kurzweil's silliness. For example, he builds upon his claims about computers understanding natural language to suggest that they will one day be conscious and thus that they will one day be spiritual. "Just being--experiencing, being conscious--is spiritual, and reflects the essence of spirituality. Machines, derived from human thinking and surpassing humans in their capacity for experience, will claim to be conscious, and thus to be spiritual." Yowza! Let's list the critical terms that Kurzweil neglects to define: "being," "experiencing," "conscious," "spiritual," and "essence." These are some of the most vexed terms in the entire history of philosophy. Yet in one fell swoop, in one fell sentence, Kurzweil ignores all the subtleties involved in these concepts--everything that thinkers have wrestled with for over two thousand years.

Kurzweil routinely exhibits a similar lack of sensitivity. And when he does attempt to show some sensitivity, he lacks rigor. For example, he spends the majority of Chapter Three listing and briefly describing different views about the meaning of "consciousness." Then, rather than giving any sort of analysis of these views, he sums everything up by saying: "My own view is that all of these schools are correct when viewed together, but insufficient when viewed one at a time... On its face, my view is contradictory and makes little sense. The other schools at least can claim some level of consistency and coherence" (60). And that's all. He doesn't say in what way the individual views are incorrect, or in what way they can be synthesized. He doesn't say how his own synthetic view can sustain itself in the face of the internal contradictions that he himself admits. This kind of laziness is damning, in my humble opinion. I expected more from a writer I had heard so many good things about.

Monday, March 3, 2008

Journal 6

On the SRS, I wrote a rather long entry under Other Requirements. It deals with a requirement that, before our software can receive any widespread use, it needs two kinds of associated texts: a user's manual (covering basic things like how to move, how to drop items, and so on) and a more advanced, textbook-like resource that would provide teachers with creative ways to use the software (such as real lesson plans for teaching specific aspects of a foreign language curriculum). These are both things that we cannot expect general users (aside from Dr. Carl) to figure out for themselves. Dr. Carl, I hope, will be able to help us with the second text--because we developers naturally know very little about teaching foreign languages.

Also, I wrote several functional requirements pertaining to various aspects of Play Mode and Edit Mode.

Journal 5

This is a little late, but I still remember what I did a few weeks ago because it was fairly noteworthy. I had been having doubts about our (my) decision to use an external API called JGN (Java Game Networking) to facilitate our game's multiplayer interaction. It's a decent API; however, the documentation is severely lacking. In fact, the only documentation that exists consists of the code's comments and a few wiki pages (which I wrote myself). Needless to say, we were having trouble tracking down several bugs. Specifically, character avatars (as mentioned in my previous post) were not being correctly removed from the scene graph. But despite the lack of documentation, we managed to clean things up. The virtual collaboration environment is almost complete now.
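For what it's worth, the shape of the fix is just the usual cleanup-on-disconnect pattern, roughly like the sketch below (all of these names are illustrative stand-ins--none of them are JGN's or our scene graph's real API):

import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a scene graph node, not a real engine type.
interface SceneNode {
    void attachChild(SceneNode child);
    void detachChild(SceneNode child);
}

class AvatarManager {
    private final Map<String, SceneNode> avatarsByPlayer = new HashMap<String, SceneNode>();
    private final SceneNode sceneRoot;

    AvatarManager(SceneNode sceneRoot) { this.sceneRoot = sceneRoot; }

    void onPlayerJoined(String playerId, SceneNode avatar) {
        avatarsByPlayer.put(playerId, avatar);
        sceneRoot.attachChild(avatar);
    }

    // The symptom we saw amounts to this step never running cleanly:
    // the avatar's node stayed attached after its owner disconnected.
    void onPlayerDisconnected(String playerId) {
        SceneNode avatar = avatarsByPlayer.remove(playerId);
        if (avatar != null) {
            sceneRoot.detachChild(avatar);
        }
    }
}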