Thursday, September 24, 2009

Star Trek: The Original Series / The Doomsday Machine: A cautionary tale for the near future of military robotic technology?

Trek geeks might, paraphrasing Whitehead, say that a good deal of ethical and philosophical thought is but a footnote to Trek, the original series. Case in point:

THE DOOMSDAY MACHINE (click to watch entire episode)

I would have embedded the video, but for some reason CBS has requested that YouTube disable the embed code. Why? I don't know. It seems to me that would reduce the audience. But, what do I know.


This episode is not only a classic Trek battle-of-the-hams between the Williams (Shatner and Windom), but is particularly apropos of recent trends in military technology, namely the use of robotics, the ethical and tactical implications of which my colleagues here at the Naval Academy's Stockdale Center are giving a very good going-over. It is sure to produce some very interesting cutting-edge literature on the subject.

The doomsday machine is a planet-destroying machine. Even though it looks very much like a gigantic cheesy tin-foil cornucopia, it has a decided mean streak. It is the last surviving artifact of a long-ago interplanetary war. Created by one belligerent planetary civilization, with a simple program design, it was to use an anti-proton beam to reduce the enemy planet to rubble and, in the process, feed and defend itself so that it could continue its destructive function. It was designed to move about, take in the rubble, and convert it to usable energy. It was also programmed to pursue, destroy, and consume any attacking enemy ships. The programming, because it was relatively simple, and the robot, because it was able to fuel itself, apparently succeeded; but no programming was included that would enable or cause the machine to cease functioning. By the time the Federation starship Constellation ran into it, the machine had wandered far from its original space, continuing to operate according to its programming.


This machine is what we would these days call a fully autonomous weapon. Like a living organism, it can seek and find fuel to ensure its continued functioning. Like an organism, it has the ability to recognize threats and deal with them. In the machine's case, it has no built-in "evade" function, because it does not need one. It is impervious to external attack.
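
To make the design flaw concrete, here is a minimal toy sketch in Python of a control loop of the kind the episode describes: defend itself, refuel itself, keep moving, with no termination condition anywhere in the program. Every name below is my own invention for illustration; nothing here comes from the episode's fictional "code" or any real system.

```python
import random

# A toy simulation of the doomsday machine's described behavior:
# defend, refuel, drift onward. All names are hypothetical.

class DoomsdayMachine:
    def __init__(self):
        self.energy = 100.0
        self.distance_from_home = 0.0  # how far it has wandered

    def step(self):
        if random.random() < 0.1:           # an attacking ship appears
            self.energy += 20.0             # destroying it doubles as refueling
        elif self.energy < 50.0:            # fuel low: consume the nearest planet
            self.energy += 80.0             # convert the rubble to usable energy
        else:
            self.distance_from_home += 1.0  # drift onward into uninvolved space
        self.energy -= 5.0                  # operating cost of each step

    def run(self, steps):
        # The machine's own loop is effectively `while True:`; the designers
        # included no "war is over" check, no recall signal, no self-shutdown.
        for _ in range(steps):              # bounded here only so the demo halts
            self.step()

machine = DoomsdayMachine()
machine.run(1000)
print(f"Still functioning, {machine.distance_from_home:.0f} units from home.")
```

The point of the sketch is what is absent: there is no mission-complete condition, so the only way the loop ends is the destruction of the machine itself.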


Now, the designers of this machine no doubt reasoned that their considerable investment should be protected to the utmost: it was costly, it would guarantee victory, and, most importantly, it would save the lives of their own people, in particular their military personnel and the home planet's population. The home planet would be spared world-wide war and the resultant carnage. The military would be spared the need to attack the enemy planet or repel counterattacks. Loss of life would be minimized, and indeed would be unnecessary, given the machine. The civilization's leaders may have reasoned that they had a moral obligation to use the technology instead of people to fight the war, given that the technology was feasible and affordable. They may have reasoned that the other planet posed an existential threat and was on the verge of creating a similar weapon, thus morally permitting the creation of the doomsday machine. We are not fully informed about the machine's history, so we speculate as we watch the episode unfold.

So, they might have reasoned, there were very good reasons for developing the weapon. What is more, difficulties with remote control might have pushed them to develop the autonomous functionality: time lags and the vagaries of communication over hundreds of thousands, perhaps millions, of miles might have rendered effective remote control impossible.


All of this pushed them toward creating a fully autonomous instrument that survived the war and the belligerent civilizations, and became a threat to worlds in no way involved. The law of unintended consequences writ large. Galactic large.


What lessons does this episode have for us as we evolve toward the ability to create fully autonomous robots for military applications?
In terms of "smarts," the machine is probably no smarter than a primitive unicellular organism. In fact, a good case can be made that it is considerably dumber than your run-of-the-mill cell. Because it is so dim, it is not capable of discriminating belligerents from non-belligerents, and is therefore not able to tell that its job is done. To program such discriminatory ability would be a difficult task, perhaps an impossible one, if the program code is like ours. The more complex the battle environment, and the more "sorts" of objects with which the machine has to contend, the less plausible is success in mimicking human capabilities in this regard. This could lead to tragic consequences: death by "friendly fire," death of innocent non-combatants, etc.
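To see why the number of "sorts" matters, here is a toy sketch in Python, offered only as an illustration (every field name and rule is my invention, not anything from a real system): a rule-based discriminator needs a hand-written rule for every kind of object it might meet, and whatever the rules never anticipated falls through to a default.

```python
# A toy, purely hypothetical rule-based discriminator; every field name
# and rule below is invented for illustration.

def classify_contact(contact: dict) -> str:
    if contact.get("transponder") == "FRIENDLY":
        return "hold fire"
    if contact.get("uniformed") and contact.get("armed"):
        return "engage"
    if contact.get("vehicle_type") in {"tank", "artillery"}:
        return "engage"
    # No rule matched: a refugee convoy, an ambulance, a surrendering
    # soldier, an unmarked allied unit. Every new "sort" of object needs
    # its own hand-written rule, and a dim machine falls back on its one
    # function when the rules run out.
    return "engage"

# A contact the rule-writers never anticipated:
print(classify_contact({"vehicle_type": "bus", "armed": False}))  # prints "engage"
```

The rule count grows with the variety of the environment, while the cost of a missing rule is paid in exactly the tragic consequences just named.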
Perhaps this is something the creators thought would not arise as a problem. After all, they may have reasoned, we are sending this thing far away, to another planet. Around that planet there will be only the enemy. [Apparently, they did not consider it a moral problem that they were destroying the planet, belligerent force and all. But we will leave this aside.] This being the case, we can see that it greatly simplifies the programmer's task.
Similarly, one can imagine, in the not-too-distant future, a decision to send fully autonomous robot weaponry only into relatively "simple" environments that will not present difficult discriminatory tasks (for example, discriminating combatants from non-combatants when there are no obvious visual differences because, say, the combatants do not wear uniforms), i.e., into environments where there is good reason to believe only combatants will be found. In such cases, one could argue that we should rely on the technology if we have it, if it is indeed feasible and affordable, rather than send in human beings at the risk of their lives.
Interestingly, though, the story of the doomsday machine argues against the use of fully autonomous robots of such lethality, by reason of the unintended consequences that might arise. Because of their dimwittedness, they are prone to function in ways we would not intend.
Perhaps a better option would be remote control, as with Predator drones. But this would necessarily increase the risk to personnel by placing them close enough to make remote control and sensing reliably feasible. The positive, though, is that by doing this you can combine the relative imperviousness of the machine with the discriminatory powers of human beings, at considerably less risk to the lives of those human beings.
However, does this make resort to war too "easy"? Do we want to do this? On the one hand, we preserve the lives of our own servicemen and women if we use these things; on the other, using them may tempt us to resort to war sooner rather than later in any given scenario, because the risk to our own is relatively small compared with conventional warfare. Similar questions have arisen with each innovation in military technology that allowed the distance between combatants to increase while not diminishing, or even while increasing, the ability to kill. Think of the introduction of archers, the disdain felt by infantry toward them, and the similar attitudes toward firearms, machine guns, tanks, and missiles in their respective eras. The introduction of new technology for purposes of killing during war always raises difficult questions.
One last consideration: in refusing to commit to the creation of fully autonomous robot weapons, a civilization that has that capability does, at least in the short term, prevent the technology from falling into the hands of its enemies. Yes, remotely controlled weaponry might be recovered, and the enemy might develop that windfall into fully autonomous weaponry, but there would, at the very least, be time to prepare. This problem becomes acute when the enemy has a demonstrated propensity for not giving one whit about discrimination or proportionality, and/or is sworn to utterly destroy its foes.

In fact, it is this very possibility that is raised by the episode. The civilization that made the gigantic star-roving killer cornucopia appears to have had little compunction, and created the machine knowing it would utterly annihilate the home planet of its enemy. This raises a worry: given enough time, technological developments are inevitable, and, whether through black markets or their own invention, groups with little compunction can acquire doomsday machines.
Given enough time, the bad guys will develop this technology. So why not beat them to the punch and stay that far ahead of them? That would deter them, hopefully, unless they are death cultists. In that case, perhaps we should annihilate them first? On the other hand, is it better to simply design defenses, or to attempt sabotage? The latter, of course, relies on the good guys having good intel and people in the enemy camp who can actually do something. These are never certainties.
Once again, similar points of concern have been made in the past about what were, at the time, innovations in distance killing.
These points are well taken, if also troubling.