Friday, November 4, 2011

A Levitation Spell

Continuing in the same vein as the previous post, I've been adding more details to the spell-casting UE.

In total, these details are as follows: 1) The broom animates when enchanted, 2) The sunlight grows dim, 3) The broom emits light and particles, and 4) A sound effect plays (although you can't hear it in the screenshot below). In the case of the levitation spell, the broom rises slowly off the ground to the specified height and begins to pulse up and down.
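To make the levitation behavior concrete, here's a minimal sketch of the kind of update logic involved: the broom rises slowly until it reaches the specified height, then pulses up and down around it. This is just an illustration in Java; the class and parameter names are my own and aren't taken from the actual game code.

    // Illustrative sketch only: rise to a target height, then pulse around it.
    // All names here are placeholders, not the game's real code.
    public class LevitationEffect {
        private final float targetHeight;   // height specified by the spell
        private final float riseSpeed;      // units per second while rising
        private final float pulseAmplitude; // size of the up/down pulse
        private final float pulseSpeed;     // how quickly the pulse oscillates
        private float elapsed;              // time since the pulse phase began
        private boolean rising = true;

        public LevitationEffect(float targetHeight, float riseSpeed,
                                float pulseAmplitude, float pulseSpeed) {
            this.targetHeight = targetHeight;
            this.riseSpeed = riseSpeed;
            this.pulseAmplitude = pulseAmplitude;
            this.pulseSpeed = pulseSpeed;
        }

        // Returns the broom's new height given its current height and the frame time.
        public float update(float currentHeight, float dt) {
            if (rising) {
                float next = currentHeight + riseSpeed * dt;
                if (next < targetHeight) {
                    return next;
                }
                rising = false;   // reached the target; switch to pulsing
            }
            elapsed += dt;
            return targetHeight + pulseAmplitude * (float) Math.sin(pulseSpeed * elapsed);
        }
    }

Calling update() once per frame with the frame's delta time produces exactly the slow rise followed by the gentle pulse.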

One critical thing we're missing is that the character doesn't animate during the act of spell casting. This is a big deal. Even from the video below, it's clear that the broom has been enchanted, but not that the character cast a spell. So my next goal is to model our main character and the associated animations (running, walking, idling, jumping, and spell casting).




Animation

I want casting spells to feel fun and exciting for our users, so I'm putting a lot of time into crafting the user experience (UE) and the user interface (UI).

We can divide the spell casting experience into three parts: Before spell casting, during spell casting, and after spell casting. The UE/UI concerns are different in each case. In a future post, I'll start itemizing these concerns.

At the moment, I'm working on the "during spell casting" phase. Here, I'd like the objects under enchantment to really LOOK like they've been enchanted. To that end, I created a simple broom model in Blender and textured it in Photoshop. Then I rigged and animated the broom in Blender so that it wriggles and writhes when it becomes enchanted.

Although this is just one detail that pertains to the "during spell casting" UE, I believe that the accumulation of artistic details is what makes for a truly immersive UE. Here's a video of the broom animation:


Sunday, April 13, 2008

Killer Robot

"The Case of the Killer Robot" raises some interesting issues.

A quick synopsis: A robot arm kills its operator because the software which controlled the arm contained a significant bug. Who is at fault?

The programmer? The programmer misinterpreted the formulas that he was supposed to be using in order to create the arm-controlling algorithm. His misinterpretation was due, arguably, to his lack of domain-specific knowledge about robotics and physics. Also, arguably, his inability to correct his mistake was due to his tendency to reject criticism of his code.

The manager? The manager, overly concerned with "doing things his way," dismissed the prototyping method in favor of the waterfall method. The result was a product that could not be tested until late in its development cycle.

The user interface designers? The software's user interface was difficult to use and violated several principles of good interface design. The result was that the robot operator was unable to shut down the machine when it began to malfunction.

The Ivory Snow philosophy? The management at the offending software company adhered to a philosophy that advocated the impossibility of perfection in any software product and furthermore suggested that imperfections should not be fretted over. The result was that the software's glitches were considered necessary evils, when in fact they could have been prevented.

The software tester? The software tester faked the results of certain unit tests, which would have demonstrated that critical parts of the code were not working properly. She did this in compliance with a direct order from her superior, raising the question of exactly who is at fault.

The company as a whole? The company was under an obligation to produce software that met certain safety requirements. Having clearly failed to do so, one must wonder if the company's senior management should be held responsible.


"The Case of the Killer Robot" does a pretty good job of avoiding the temptation to place blame on any particular individual or group. In fact, it presents several fairly unbiased points of view. But to be honest, I think it fails to raise some of the more salient ethical issues at the heart of the matter.

Who do I think is at fault? Capitalism.

Long before the advent of robotic arms, Karl Marx foresaw almost all of the evils that "The Case of the Killer Robot" seeks to illuminate.

Where was the robot operator killed? In a factory--the main vehicle for what Marx calls the "means of production." Why was the operator in the factory? Because he belonged to a socio-economic class that must trade its labor for capital. Why is there such a class of people? Because capitalism requires there to be a workforce and perpetuates the growth of the working class.

Why did the programmer make such fatal mistakes? He was clearly "alienated from his labor" (as Marx puts it.) Because he did not own the product of his labor (because he did not own the software) he had no reason to be invested in its quality. Why did he have to create products that did not belong to him? Because he had to feed his children, which requires capital. Why does it require capital? Because we live in a capitalist society.

Why did the company's management act so deplorably? Because they are corrupt members of the capitalist upper-class. They own the means of production, growing richer and richer on the sweat of those beneath them. Because the amount of capital they gain depends on the speed of production, they naturally adhere to the Ivory Snow philosophy, preferring quantity over quality.

The case of the killer robot is merely a modern version of the many tragic tales we learn about in history classes: tales of young chimney sweeps getting lung cancer, tales of children losing limbs in factories during the industrial revolution, tales of men and women dying in coal mines, and many other tales of the casualties sustained by the working class. There are too many stories to tell in one lifetime, but they all have something in common: the economic class of the victims is always the same. Marx and Adorno said it best when they highlighted the inherent violence of the capitalist system. It is a system that kills members of its workforce routinely.

"The Case of the Killer Robot" has a lot of technical jargon about "waterfall methods," "Ivory Snow," "prototyping," "non-functional requirements," "unit tests," "computer ethics," and "robots." But ultimately, this language merely obfuscates important issues. Capitalism is an inherently corrupt system that infects everything it touches. And it touches everything. Why should we be at all surprised that the field of software engineering has become infected?

Sunday, March 23, 2008

Book Review

Book Review:

The Age of Spiritual Machines

By Raymond Kurzweil

Published by Viking Adult on January 1, 1999

I had more respect for Raymond Kurzweil before reading The Age of Spiritual Machines than I did after I finished it. His arguments--if they can indeed be called "arguments"--are flimsy at best and downright deceiving at worst. What I can't decide is whether Kurzweil (who has a long list of astonishing credentials) meant to assemble a book full of fallacious arguments, or whether he simply lacks the kind of philosophical rigor necessary to deal with the issues that his book pretends to deal with.

The first difficult pill to swallow is Kurzweil's attempted generalization of Moore's Law (a "law" which posits an accelerating rate of increase in computational power along with a corresponding decrease in the price of computers.) In the opening chapter of his book, Kurzweil posits a similar Law of his own. This Law (audaciously named "The Law of Time and Chaos") holds that ever since the universe began, everything in existence has operated under a Law that can be stated in two parts:

1) "The Law of Increasing Chaos: As chaos exponentially increases, time exponentially slows down."

2) "The Law of Accelerating Returns: As order exponentially increases, time exponentially speeds up (that is, the time interval between salient events grows shorter as time passes.)"

Kurzweil argues that Moore's Law is just an instantiation of the Law of Accelerating Returns, which has been operating on Earth ever since the dawn of evolution (a process of ever increasing order.) If we accept that evolution is a process of increasing order and that the Law of Accelerating Returns is valid, then (Kurzweil argues) we must accept that time is speeding up. As long as we define "time is speeding up" as "the rate at which interesting things happen is increasing," I'm happy to give Kurzweil the benefit of the doubt here--in spite of his wonky use of the term "time." After all, this is what he seems to mean in his aforementioned parenthetical addendum to the Law of Accelerating Returns.

The really misguided portions of Kurzweil's discourse occur when he tries to use the Law of Accelerating Returns to make predictions about the future. To make a long story short: evolution makes time speed up (stuff always happens quicker than it used to), the accelerating production of technology is the result of this temporal speedup, technological production--because it orders things--makes time speed up even more (stuff happens even more quickly), and therefore... machine intelligence will rival human intelligence by the year 2039. This is the very definition of a non sequitur fallacy. To be fair, Kurzweil doesn't state his case as simplistically as I have just now; but he does rely heavily on the Law of Accelerating Returns to suggest that the coming intelligence of machines is "inevitable"--to use his word.

Here's one reason why this conclusion is ridiculous even if we consider the Law of Accelerating Returns valid. Even if more and more stuff is happening more and more quickly, there are still a great many conceivable biological and technological events that won't be happening anytime soon. For example, human beings won't be evolving to have purple skin. And we won't be inventing technology to suck all the marrow out of our bones and inject it into trees. The problem with Kurzweil's argument is that you can use it to prove just about anything: stuff is happening faster and faster, it will continue to do so, therefore we will have purple skin by the year 2039. This is obviously silly. And so are Kurzweil's conclusions.

Here's another reason we should be suspicious of drawing the "intelligent machines are inevitable" conclusion from the Law of Accelerating Returns. The Law of Accelerating Returns can be used to predict the occurrence of events that exclude the creation of intelligent machines. For example, one can argue that the constant increase in technology is bringing us ever closer to total nuclear annihilation. Should this happen, it would constitute a major setback for advances in computing machinery, let alone the human race.

Am I being unfair by attacking what is clearly one of Raymond Kurzweil's wonkiest arguments? I don't think so. He himself recognizes the Law of Accelerating Returns to be a crucial element in the overall argument. Moore's Law alone isn't enough to show that intelligent machinery is inevitable. In the introduction to his chapter on the Law of Time and Chaos, he states, "For the past forty years, in accordance with Moore's Law, the power of transistor-based computing has been growing exponentially. But by the year 2020, Moore's Law will have run its course. What then? To answer this critical question, we will need to understand the exponential nature of time" (vi). In other words, Moore's Law doesn't predict the advent of machines sufficiently fast for approximating human intelligence. Part of the reason for this is that Moore's Law only applies to transistor-based computers, which have a theoretical maximum speed. Kurzweil needs another Law if he is going to make the predictions he wants to make. Unfortunately, the Law he uses and the conclusions he draws from it are seriously flawed, as we have seen.

Let's give Kurzweil a little more slack, however. Let's pretend computers will keep getting faster from now until the end of time. And let's pretend that the human race won't be destroying itself between now and then. This still isn't enough to jump to the "machines will be as intelligent as human beings in X number of years" conclusion. The gist of Kurzweil's argument in this direction is, "Look at how dumb computers used to be. Look at how many people thought they would never be able to do X, Y, and Z. And look at how smart they are now--able to do X, Y, and Z. Now we have grounds to doubt people who say that computers will never be as intelligent as human beings." One of Kurzweil's favorite examples is the game of chess. Prior to Deep Blue's win over world champion Garry Kasparov, the idea of a computer playing grandmaster-level chess was considered ludicrous by many people, including notable experts in artificial intelligence such as Hubert Dreyfus. Now that computers have achieved this milestone, Kurzweil argues, "the sky is the limit."

Wait a second. Not so fast. What has actually been accomplished by this so-called milestone in computer chess playing abilities? Computers like Deep Blue are designed to crunch numbers, analyzing millions upon millions of chess positions every fraction of a second. Needless to say, this is not what human beings do when they play chess. Even a fleet of grandmasters won't look at millions upon millions of chess positions every second--or even over the course of their lifetimes. Indeed, during each move of a chess game, grandmasters tend to look at three or four possible variations--maximum. But (as chess champion Jose Capablanca once said) "those variations happen to be the right ones." What is it that makes the chess variations a human being looks at "happen to be right," whereas a computer must look at several million variations to find the right ones? This is a question that many philosophers and artificial intelligence researchers have examined. Raymond Kurzweil, unfortunately, is not one of them. He seems to think that computers will become fast enough that they don't need the kind of intuition that human beings have. However, if (as philosophers like Hubert Dreyfus argue) human intuition is the key to human intelligence, then machine intelligence--if it occurs at all--will never be of the same type as human intelligence.

Kurzweil's book suffers because he posits the inevitability of intelligent machines without ever defining what he means by "intelligence." Let's give him a little more slack and assume that a machine can be intelligent even if it isn't doing the same kind of calculating that human beings are doing. This means that we can consider Deep Blue and Garry Kasparov both to be "intelligent" when it comes to playing chess, even though they are playing chess in very different ways. Even with these assumptions, Kurzweil's arguments remain unconvincing.

Kurzweil sides with Turing in believing that computers will one day be able to intelligently converse with human beings in a variety of natural languages. He further believes they will converse so well that their responses will be indistinguishable from those a human being might make. (In other words, they will pass the Turing test.)

Unfortunately, the number of chess positions a computer needs to calculate in order to play decent chess is minuscule compared to the number of linguistic responses a computer would need to calculate in order to respond as a human being does in everyday conversation. Kurzweil asserts that choosing the proper response could be as simple as performing the following algorithm on a really, really, really fast computer:

"For my next step, take my best next step. If I'm done, I'm done."

Yep. That's all there is to it. Kurzweil describes this as "a really simple formula to create intelligent solutions to difficult problems" (282). Let's give Kurzweil the benefit of the doubt on this one: matters are certainly more complicated and he knows it, but he has to make things simple for his readers. As well as giving him the benefit of the doubt, let's assume with him that all the complexities really do boil down to the above recursive formula. For example, when playing chess he suggests a chess program follows this variation of the formula:

"PICK MY NEXT BEST MOVE: Pick my next best move. If I've won, I'm done."

For chess, this works. It works because the idea of "best move" is defined in terms of whether or not you've "won." Luckily the winning conditions for chess (checkmate) are easy to evaluate computationally, so it makes sense to define "best move" in terms of those winning conditions. But this formula becomes ludicrous in areas like natural language production:

"PICK MY NEXT BEST WORD: Pick my next best word. If I'm done, I'm done."

What does it mean to be "done" in conversation? When do you stop talking? At any given moment, how do we (as human beings) know when we've said the right things? How do we know we're going to say the right things the next time we open our mouths? The fact is, we don't always know. And if we don't always know, how are we supposed to write a stopping condition in a recursive formula for a computer's "next best word?"
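To make the contrast concrete, here is a rough sketch of what Kurzweil's chess formula looks like once it's spelled out as a recursive search. Everything here (GameState, legalMoves, the scoring) is my own placeholder for illustration, not anything from the book; the point is only that "If I've won, I'm done" is a cheaply computable test in chess, while no comparable test exists for being "done" in conversation.

    import java.util.List;

    // Illustrative sketch of the "pick my next best move; if I've won, I'm done"
    // recursion. The interface and names are hypothetical placeholders.
    interface GameState {
        boolean isGameOver();          // checkmate, stalemate, draw -- easy to test
        int evaluate();                // score from the current player's point of view
        List<GameState> legalMoves();  // successor positions
    }

    class BestMovePicker {
        // Recursively score positions down to a fixed depth. The recursion
        // bottoms out because "done" (game over) is a well-defined predicate.
        static int bestScore(GameState state, int depth) {
            if (state.isGameOver() || depth == 0) {
                return state.evaluate();
            }
            int best = Integer.MIN_VALUE;
            for (GameState next : state.legalMoves()) {
                // Negamax convention: the opponent's best outcome is our worst.
                best = Math.max(best, -bestScore(next, depth - 1));
            }
            return best;
        }
    }

There is no analogous isGameOver() test for everyday conversation, which is exactly the problem.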

My point is that chess lends itself to mathematical analysis, whereas language production does not. So the argument "once upon a time computers didn't play chess; now they do; so one day they will understand language" is a real fallacy.

I'd rather not spend any more time lampooning Raymond Kurzweil's shoddy reasoning. He obviously wants very badly to believe that machines will one day be intelligent. I don't know why. I'm content to let him believe what he wants.

But I should mention that the claims I've dealt with in this essay are not the only examples of Kurzweil's silliness. For example, he builds upon his claims about computers understanding natural language to suggest that they will one day be conscious and thus that they will one day be spiritual. "Just being--experiencing, being conscious--is spiritual, and reflects the essence of spirituality. Machines, derived from human thinking and surpassing humans in their capacity for experience, will claim to be conscious, and thus to be spiritual." Yowza! Let's list the critical terms that Kurzweil neglects to define: "being," "experiencing," "conscious," "spiritual," and "essence." These are some of the most vexed terms in the entire history of philosophy. Yet in one fell swoop, in one fell sentence, Kurzweil ignores all the subtleties involved with the concepts, everything that thinkers have wrestled with for over two thousand years.

Kurzweil routinely exhibits a similar lack of sensitivity. And when he does attempt to show some sensitivity, he lacks rigor. For example, he spends the majority of Chapter Three listing and briefly describing different views about the meaning of "consciousness." Then, rather than giving any sort of analysis of these views, he sums everything up by saying: "My own view is that all of these schools are correct when viewed together, but insufficient when viewed one at a time... On its face, my view is contradictory and makes little sense. The other schools at least can claim some level of consistency and coherence" (60). And that's all. He doesn't say in what way the individual views are incorrect, or in what way they can be synthesized. He doesn't say how his own synthetic view can sustain itself in the face of the internal contradictions that he himself admits. This kind of laziness is damning, in my humble opinion. I expected more from a writer that I had heard so many good things about.

Monday, March 3, 2008

Journal 6

On the SRS, I wrote a rather long entry under Other Requirements. This entry dealt with a requirement that (before our software can receive any widespread use) it needs two kinds of associated texts: a user's manual (which covers basic things like how to move, how to drop items, and so on), and a more advanced textbook-like resource that would provide teachers with creative ways to use the software (such as real lesson plans for teaching specific aspects of a foreign language curriculum.) These are both things that we cannot expect general users (aside from Dr. Carl) to be able to figure out for themselves. Dr. Carl, I hope, will be able to help us with the second text--because we developers naturally know very little about teaching foreign languages.

Also, I wrote several functional requirements pertaining to various aspects of Play Mode and Edit Mode.

Journal 5

This is a little late, but I still remember what I did a few weeks ago because it was fairly noteworthy. I had been having doubts about our (my) decision to use an external API called JGN (Java Game Networking) to facilitate our game's multiplayer interaction. It's a decent API; however, the documentation is severely lacking. In fact, the only documentation that exists consists of the code's comments and a few wiki pages (which I wrote myself.) Needless to say, we were having trouble tracking down several bugs. Specifically, character avatars (as mentioned in my previous post) were not being correctly removed from the scene graph. But despite the lack of documentation, we managed to clean things up. The virtual collaboration environment is almost complete now.
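In essence, the behavior we needed is for an avatar's node to be detached from the scene graph as soon as its owner leaves--something like the sketch below. The class and method names are illustrative placeholders of mine, not the project's or JGN's actual API.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch only: track avatar nodes by client id and detach them
    // from the scene graph when that client disconnects.
    class AvatarRegistry {
        // Hypothetical stand-in for the engine's own scene-graph node type.
        interface Node {
            void removeFromParent();
        }

        private final Map<Long, Node> avatarsByClientId =
                new ConcurrentHashMap<Long, Node>();

        void register(long clientId, Node avatarNode) {
            avatarsByClientId.put(clientId, avatarNode);
        }

        // Called when the networking layer reports that a client has left.
        void onClientDisconnected(long clientId) {
            Node avatar = avatarsByClientId.remove(clientId);
            if (avatar != null) {
                avatar.removeFromParent();  // detach so the avatar disappears for everyone
            }
        }
    }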

Wednesday, February 20, 2008

Journal 4

Currently, I am overseeing the creation of the virtual collaboration environment (VCE), which is one of our main goals this semester. Objects created on one client must be reflected on another client. And furthermore, any object moved, scaled, rotated, or otherwise manipulated on any client must undergo a parallel manipulation on all clients in order for the virtual world to be kept in sync. We currently have two kinds of objects: common objects and avatars. Common objects are like the ones I have previously mentioned. They can be manipulated by anyone; they are the building blocks of the virtual world. The avatars, on the other hand, are representations of the players who are currently in the world. These are subject to the movements and rotations that a given client's camera undergoes during the user's navigation in the virtual world. Avatars exist so that people can better interact while they virtually collaborate in the VCE.
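In practice, keeping clients in sync comes down to broadcasting small update messages whenever an object changes and applying them on every other client. The sketch below illustrates the general idea; the class names and fields are my own placeholders rather than the project's actual message types.

    import java.io.Serializable;

    // Illustrative transform-sync message: enough information for every other
    // client to reproduce a move, scale, or rotation on its local copy of an object.
    class TransformUpdateMessage implements Serializable {
        long objectId;                 // which common object (or avatar) changed
        float x, y, z;                 // new position
        float rotX, rotY, rotZ, rotW;  // new rotation, as a quaternion
        float scale;                   // new uniform scale
    }

    class TransformSyncHandler {
        // Hypothetical stand-in for however the project represents scene objects.
        interface WorldObject {
            void setPosition(float x, float y, float z);
            void setRotation(float x, float y, float z, float w);
            void setScale(float s);
        }

        // Called on each client when a TransformUpdateMessage arrives: look up the
        // local copy of the object and apply the new transform to it.
        void apply(TransformUpdateMessage msg, WorldObject local) {
            local.setPosition(msg.x, msg.y, msg.z);
            local.setRotation(msg.rotX, msg.rotY, msg.rotZ, msg.rotW);
            local.setScale(msg.scale);
        }
    }

Avatar updates work the same way, except that the source of the transform is the owning client's camera rather than a direct manipulation by another user.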

Article:

http://news.bbc.co.uk/1/hi/technology/7254078.stm