Sunday, April 13, 2008
Killer Robot
A quick synopsis: A robot arm kills its operator because the software which controlled the arm contained a significant bug. Who is at fault?
The programmer? The programmer misinterpreted the formulas that he was supposed to be using in order to create the arm-controlling algorithm. His misinterpretation was due, arguably, to his lack of domain-specific knowledge about robotics and physics. Also, arguably, his inability to correct his mistake was due to his tendency to reject criticism of his code.
The manager? The manager, overly concerned with "doing things his way," dismissed the prototyping method in favor of the waterfall method. The result was a product that could not be tested until late in its development cycle.
The user interface designers? The software's user interface was difficult to use and violated several principles of good interface design. The result was that the robot operator was unable to shut down the machine when it began to malfunction.
The Ivory Snow philosophy? The management at the offending software company adhered to a philosophy holding that perfection is impossible in any software product and, furthermore, that imperfections should not be fretted over. The result was that the software's glitches were treated as necessary evils, when in fact they could have been prevented.
The software tester? The software tester faked the results of certain unit tests, which would have demonstrated that critical parts of the code were not working properly. She did this in compliance with a direct order from her superior, raising the question of exactly who is at fault.
The company as a whole? The company was under an obligation to produce software that met certain safety requirements. Since it clearly failed to do so, one must wonder whether the company's senior management should be held responsible.
"The Case of the Killer Robot" does a pretty good job of avoiding the temptation to place blame on any particular individual or group. In fact, it presents several fairly unbiased points of view. But to be honest, I think it fails to raise some of the more salient ethical issues at the heart of the matter.
Who do I think is at fault? Capitalism.
Long before the advent of robotic arms, Karl Marx foresaw almost all of the evils that "The Case of the Killer Robot" seeks to illuminate.
Where was the robot operator killed? In a factory--the main vehicle for what Marx calls the "means of production." Why was the operator in the factory? Because he was a member of a socio-economic class that must trade its labor for capital. Why is there such a class of people? Because capitalism requires a workforce and perpetuates the growth of the working class.
Why did the programmer make such fatal mistakes? He was clearly "alienated from his labor" (as Marx puts it). Because he did not own the product of his labor (because he did not own the software), he had no reason to be invested in its quality. Why did he have to create products that did not belong to him? Because he had to feed his children, which requires capital. Why does it require capital? Because we live in a capitalist society.
Why did the company's management act so deplorably? Because they are corrupt members of the capitalist upper-class. They own the means of production, growing richer and richer on the sweat of those beneath them. Because the amount of capital they gain depends on the speed of production, they naturally adhere to the Ivory Snow philosophy, preferring quantity over quality.
The case of the killer robot is merely a modern version of the many tragic tales we learn about in history classes: tales of young chimney sweeps getting lung cancer, tales of children losing limbs in factories during the industrial revolution, tales of men and women dying in coal mines, and many other tales of the casualties sustained by the working class. There are too many stories to tell in one lifetime, but they all have something in common: the economic class of the victims is always the same. Marx and Adorno said it best when they highlighted the inherent violence of the capitalist system. It is a system that kills members of its workforce routinely.
"The Case of the Killer Robot" has a lot of technical jargon about "waterfall methods," "Ivory Snow," "prototyping," "non-functional requirements," "unit tests," "computer ethics," and "robots." But ultimately, this language merely obfuscates important issues. Capitalism is an inherently corrupt system that infects everything it touches. And it touches everything. Why should we be at all surprised that the field of software engineering has become infected?
Sunday, March 23, 2008
Book Review
The Age of Spiritual Machines
By Raymond Kurzweil
Published by Viking Adult on January 1, 1999
I had more respect for Raymond Kurzweil before reading The Age of Spiritual Machines than I did after I finished it. His arguments--if they can indeed be called "arguments"--are flimsy at best and downright deceiving at worst. What I can't decide is whether Kurzweil (who has a long list of astonishing credentials) meant to assemble a book full of fallacious arguments, or whether he simply lacks the kind of philosophical rigor necessary to deal with the issues that his book pretends to deal with.
The first difficult pill to swallow is Kurzweil's attempted generalization of Moore's Law (a "law" positing exponential growth in computational power along with a corresponding fall in the price of computers). In the opening chapter of his book, Kurzweil posits a similar Law, audaciously named "The Law of Time and Chaos," which holds that ever since the universe began, everything in existence has operated under a Law that can be stated in two parts:
1) "The Law of Increasing Chaos: As chaos exponentially increases, time exponentially slows down."
2) "The Law of Accelerating Returns: As order exponentially increases, time exponentially speeds up (that is, the time interval between salient events grows shorter as time passes.)"
Kurzweil argues that Moore's Law is just an instantiation of the Law of Accelerating Returns, which has been operating on Earth ever since the dawn of evolution (a process of ever-increasing order). If we accept that evolution is a process of increasing order and that the Law of Accelerating Returns is valid, then (Kurzweil argues) we must accept that time is speeding up. As long as we define "time is speeding up" as "the rate at which interesting things happen is increasing," I'm happy to give Kurzweil the benefit of the doubt here--in spite of his wonky use of the term "time." After all, this is what he seems to mean in his aforementioned parenthetical addendum to the Law of Accelerating Returns.
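To be as charitable as possible, here is one way to make that definition precise (the notation is mine, not Kurzweil's). Suppose salient events occur at times t(1), t(2), t(3), and so on. The Law of Accelerating Returns then amounts to the claim that the gaps between events shrink geometrically:

    t(n+1) - t(n) = c * r^n, with constants c > 0 and 0 < r < 1

so the rate of salient events grows exponentially even though clock time itself ticks along as usual. Read this way, the Law is at least coherent, whatever its empirical merits.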
The really misguided portions of Kurzweil's discourse occur when he tries to use the Law of Accelerating Returns to make predictions about the future. To make a long story short: evolution makes time speed up (stuff always happens quicker than it used to), the accelerating production of technology is the result of this temporal speedup, technological production--because it orders things--makes time speed up even more (stuff happens even more quickly), and therefore... machine intelligence will rival human intelligence by the year 2039. This is the very definition of a non sequitur fallacy. To be fair, Kurzweil doesn't state his case as simplistically as I have just now; but he does rely heavily on the Law of Accelerating Returns to suggest that the coming intelligence of machines is "inevitable"--to use his word.
Here's one reason why this conclusion is ridiculous even if we consider the Law of Accelerating Returns valid. Even if more and more stuff is happening more and more quickly, there are still a great many conceivable biological and technological events that won't be happening anytime soon. For example, human beings won't be evolving to have purple skin. And we won't be inventing technology to suck all the marrow out of our bones and inject it into trees. The problem with Kurzweil's argument is that you can use it to prove just about anything: stuff is happening faster and faster, it will continue to do so, therefore we will have purple skin by the year 2039. This is obviously silly. And so are Kurzweil's conclusions.
Here's another reason we should be suspicious of drawing the "intelligent machines are inevitable" conclusion from the Law of Accelerating Returns. The Law of Accelerating Returns can just as easily be used to predict events that preclude the creation of intelligent machines. For example, one can argue that the constant increase in technology is bringing us ever closer to total nuclear annihilation. Should this happen, it would constitute a major setback for advances in computing machinery, let alone the human race.
Am I being unfair by attacking what is clearly one of Raymond Kurzweil's wonkiest arguments? I don't think so. He himself recognizes the Law of Accelerating Returns to be a crucial element in his overall argument. Moore's Law alone isn't enough to show that intelligent machinery is inevitable. In the introduction to his chapter on the Law of Time and Chaos, he states, "For the past forty years, in accordance with Moore's Law, the power of transistor-based computing has been growing exponentially. But by the year 2020, Moore's Law will have run its course. What then? To answer this critical question, we will need to understand the exponential nature of time" (vi). In other words, Moore's Law doesn't predict the advent of machines fast enough to approximate human intelligence. Part of the reason is that Moore's Law applies only to transistor-based computers, which have a theoretical maximum speed. Kurzweil needs another Law if he is going to make the predictions he wants to make. Unfortunately, the Law he uses and the conclusions he draws from it are seriously flawed, as we have seen.
Let's give Kurzweil a little more slack, however. Let's pretend computers will keep getting faster from now until the end of time. And let's pretend that the human race won't be destroying itself between now and then. This still isn't enough to justify the jump to the "machines will be as intelligent as human beings in X number of years" conclusion. The gist of Kurzweil's argument in this direction is, "Look at how dumb computers used to be. Look at how many people thought they would never be able to do X, Y, and Z. And look at how smart they are now--able to do X, Y, and Z. Now we have grounds to doubt people who say that computers will never be as intelligent as human beings." One of Kurzweil's favorite examples is the game of chess. Prior to Deep Blue's win over world champion Garry Kasparov, the idea of a computer playing grandmaster-level chess was considered ludicrous by many people, including notable experts in artificial intelligence such as Hubert Dreyfus. Now that computers have achieved this milestone, Kurzweil argues, "the sky is the limit."
Wait a second. Not so fast. What has actually been accomplished by this so-called milestone in computer chess playing abilities? Computers like Deep Blue are designed to crunch numbers, analyzing millions upon millions of chess positions every fraction of a second. Needless to say, this is not what human beings do when they play chess. Even a fleet of grandmasters won't look at millions upon millions of chess positions every second--or even over the course of their lifetimes. Indeed, during each move of a chess game, grandmasters tend to look at three or four possible variations--maximum. But (as chess champion Jose Capablanca once said) "those variations happen to be the right ones." What is it that makes the chess variations a human being looks at "happen to be right," whereas a computer must look at several million variations to find the right ones? This is a question that many philosophers and artificial intelligence researchers have examined. Raymond Kurzweil, unfortunately, is not one of them. He seems to think that computers will become fast enough that they don't need the kind of intuition that human beings have. However, if (as philosophers like Hubert Dreyfus argue) human intuition is the key to human intelligence, then machine intelligence--if it occurs at all--will never be of the same type as human intelligence.
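To make the contrast concrete, here is a minimal sketch in Java of the kind of brute-force search a chess engine performs. This is my illustration, not Deep Blue's actual code (real engines add alpha-beta pruning, opening books, and elaborate evaluation functions), and the Position interface is hypothetical:

    interface Position {
        java.util.List<Position> successors(); // every legal next position
        int evaluate();                        // static score, from White's point of view
        boolean isTerminal();                  // checkmate, stalemate, etc.
    }

    class BruteForce {
        // Classic minimax: exhaustively score every position reachable within
        // 'depth' plies and propagate the best score back up the tree. With
        // branching factor b, this visits on the order of b^depth positions:
        // millions, even at modest depths. No intuition required, or used.
        static int minimax(Position p, int depth, boolean whiteToMove) {
            if (depth == 0 || p.isTerminal()) return p.evaluate();
            int best = whiteToMove ? Integer.MIN_VALUE : Integer.MAX_VALUE;
            for (Position next : p.successors()) {
                int score = minimax(next, depth - 1, !whiteToMove);
                best = whiteToMove ? Math.max(best, score) : Math.min(best, score);
            }
            return best;
        }
    }

A grandmaster, by contrast, prunes the tree before ever searching it; the three or four variations he considers are selected by the very intuition that is in question.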
Kurzweil's book suffers because he posits the inevitability of intelligent machines without ever defining what he means by "intelligence." Let's give him a little more slack and assume that a machine can be intelligent even if it isn't doing the same kind of calculating that human beings are doing. This means we can consider Deep Blue and Garry Kasparov both to be "intelligent" when it comes to playing chess, even though they play chess in very different ways. Even with these assumptions, Kurzweil's arguments remain unconvincing.
Kurzweil sides with Turing in believing that computers will one day be able to intelligently converse with human beings in a variety of natural languages. He further believes they will converse so well that their responses will be indistinguishable from those a human being might make. (In other words, they will pass the Turing test.)
Unfortunately, the number of chess positions a computer needs to calculate in order to play decent chess is minuscule compared to the number of linguistic responses a computer would need to calculate in order to respond as a human being does in everyday conversation. Kurzweil asserts that choosing the proper response could be as simple as performing the following algorithm on a really, really, really fast computer:
"For my next step, take my best next step. If I'm done, I'm done."
Yep. That's all there is to it. Kurzweil describes this as "a really simple formula to create intelligent solutions to difficult problems" (282). Let's give Kurzweil the benefit of the doubt on this one: matters are certainly more complicated and he knows it, but he has to keep things simple for his readers. Let's also assume with him that all the complexities really do boil down to the above recursive formula. For chess, for example, he suggests a program follows this variation of the formula:
"PICK MY NEXT BEST MOVE: Pick my next best move. If I've won, I'm done."
For chess, this works. It works because the idea of "best move" is defined in terms of whether or not you've "won." Luckily the winning conditions for chess (checkmate) are easy to evaluate computationally, so it makes sense to define "best move" in terms of those winning conditions. But this formula becomes ludicrous in areas like natural language production:
"PICK MY NEXT BEST WORD: Pick my next best word. If I'm done, I'm done."
What does it mean to be "done" in conversation? When do you stop talking? At any given moment, how do we (as human beings) know when we've said the right things? How do we know we're going to say the right things the next time we open our mouths? The fact is, we don't always know. And if we don't always know, how are we supposed to write a stopping condition in a recursive formula for a computer's "next best word"?
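To see just how much weight that stopping condition carries, here is Kurzweil's formula rendered as code (the rendering and the names are mine, not his):

    interface Solver<Step> {
        Step bestNextStep();    // "take my best next step"
        void apply(Step step);  // commit to it
        boolean done();         // chess: a mechanical checkmate test; conversation: ???
    }

    class KurzweilFormula {
        // Kurzweil's formula, taken literally: keep taking the best next step
        // until done. Everything hangs on done(). For chess it is a fixed,
        // computable test on the board position; for everyday conversation
        // nobody can say what it should even compute.
        static <Step> void solve(Solver<Step> solver) {
            while (!solver.done()) {            // "If I'm done, I'm done."
                solver.apply(solver.bestNextStep());
            }
        }
    }

The formula looks the same for both domains; the difference is that only one of them lets us write the body of done().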
My point is that chess lends itself to mathematical analysis, whereas language production does not. So the argument "once upon a time computers didn't play chess; now they do; so one day they will understand language" is a false analogy.
I'd rather not spend any more time lampooning Raymond Kurzweil's shoddy reasoning. He obviously wants very badly to believe that machines will one day be intelligent. I don't know why. I'm content to let him believe what he wants.
But I should mention that the claims I've dealt with in this essay are not the only examples of Kurzweil's silliness. For example, he builds upon his claims about computers understanding natural language to suggest that they will one day be conscious and thus that they will one day be spiritual. "Just being--experiencing, being conscious--is spiritual, and reflects the essence of spirituality. Machines, derived from human thinking and surpassing humans in their capacity for experience, will claim to be conscious, and thus to be spiritual." Yowza! Let's list the critical terms that Kurzweil neglects to define: "being," "experiencing," "conscious," "spiritual," and "essence." These are some of the most vexed terms in the entire history of philosophy. Yet in one fell swoop, in one fell sentence, Kurzweil ignores all the subtleties involved with these concepts--everything that thinkers have wrestled with for over two thousand years.
Kurzweil routinely exhibits a similar lack of sensitivity. And when he does attempt to show some sensitivity, he lacks rigor. For example, he spends the majority of Chapter Three listing and briefly describing different views about the meaning of "consciousness." Then, rather than giving any sort of analysis of these views, he sums everything up by saying: "My own view is that all of these schools are correct when viewed together, but insufficient when viewed one at a time... On its face, my view is contradictory and makes little sense. The other schools at least can claim some level of consistency and coherence" (60). And that's all. He doesn't say in what way the individual views are incorrect, or in what way they can be synthesized. He doesn't say how his own synthetic view can sustain itself in the face of the internal contradictions that he himself admits. This kind of laziness is damning, in my humble opinion. I expected more from a writer about whom I had heard so many good things.
Monday, March 3, 2008
Journal 6
Also, I wrote several functional requirements pertaining to various aspects of Play Mode and Edit Mode.
Journal 5
Wednesday, February 20, 2008
Journal 4
Article:
http://news.bbc.co.uk/1/hi/technology/7254078.stm
Wednesday, February 6, 2008
What I'm doing on the project
Book to Review
Tuesday, January 29, 2008
Who's Doing What
Carl has always had excellent input and possesses a noteworthy ability to put himself in the shoes of the end user when discussing the implementation of various features. He is familiar with all sectors of the code and has recently taken the lead on implementing our client-server network software, which requires integrating our software with an external API called Java Game Networking.
Bobby has saved our asses several times. For example, without him, we would never have been able to set up an SVN repository (which took us four hours of pain and suffering until we finally decided to call Bobby--after which we were done in five minutes). He's a Linux expert and has also been working with databases for years. So naturally he has solved many of our platform-related problems (because we develop on Linux), and he has helped integrate our software with an external Java database API.
Kyle will be working on the database system, which is a fairly young component of our game and will have plenty of opportunities to grow. I assume he will also become extremely familiar with the portions of our software that interact closely with the database component: the network system and the game entity system.
Matt will be working on the entity system, which means he will become very familiar with our process of loading 3D models and 3D animations. He will also probably take part in building the aspects of game-play that involve interactions between the player and objects (entities) in the game's virtual world.
Jon hasn't told me what he's doing yet. But he managed to build our project on a Windows box, which is quite impressive. And I hope, at the very least, that he will help our artists build the project on their Windows machines too. This is something we haven't been able to do yet, and I think our artists would really like to be able to get and build the latest version of our software whenever we make updates.
Project Manager
One thing I DON'T do is call the shots about the future directions of the project. These directions have historically been decided through discussion, with equal input from everyone involved. My job is not to say, "Here's what we're doing next." Instead, I say, "Where should we go from here?" And I try to facilitate conversation to that effect.
Monday, January 28, 2008
Wednesday, January 23, 2008
First Post
Report on meeting with Glenda Carl
In particular I think Dr. Carl's suggestion of ways for a teacher to examine students' progress during the game (or during game sessions) was very interesting. An in-game chat that logs what students say to each other would provide a record for teachers to assess how well students seem to be using the language they are supposed to be learning.
What I Could Contribute to Dr. Buchele's Project:
To be honest, I don't know enough about the project to make a reasonable guess as to how my skills could be useful. If the "digitization" of textbooks took the form of converting them into a web application, then I could possibly use the skills I've gained from developing web applications for NITLE. However, it is my understanding that the project will not involve web applications so much as creating an on-board resource for the laptops running Sugar.
I think this project has definite social significance and should be developed further. It is a shame, however, that the Ministry of Education in Ghana opposes the idea of using freely available software, preferring instead to port their pre-existing textbooks to an electronic form (which will be costly and probably not as advantageous to the children who are supposed to be using the software).