John P. Sullins, Assistant Professor of Philosophy at Sonoma State University, has substantially contributed to the fields of philosophy of technology and cognitive science as well as to the fields of artificial intelligence, robotics and computer ethics. In addition John P. Sullins is a Military Master at Arms and directs the Sonoma State University Fencing Master's Certificate Program.
How and why did you get interested in the field of military robots?
It was not intentional. My PhD program focused on artificial intelligence, artificial life and consciousness. During my studies I was persuaded by the work of Rodney Brooks and others, who suggested that embedding these systems in real-world situations was the only way to gain traction on the big issues troubling AI. So, I began studying autonomous robotics, evolutionary systems, and artificial life. Right away I began to be troubled by a number of ethical issues that harried this research and the military technological applications it was helping to create. Just before I finished my doctorate the events of September eleventh occurred, followed closely by a great deal of interest and money being directed at military robotics. Instead of going into defense contract research, as a number of my peers were doing, I decided to go into academic philosophy, as this seemed like the best angle from which to speak to the ethics of robotics. Like the rest of us, I have been swept up by historical events, and I am doing my best to try to understand this dangerous new epoch we are moving into.
In your work you have engaged questions regarding ethics of artificial life, ethical aspects of autonomous robots and the question of artificial moral agency. Where do you see the main challenges in the foreseeable future in these fields?
In the near term the main issue is that we are creating task-accomplishing agents, be they AI (Artificial Intelligence), Alife (Artificial Life), or robotic in nature, which are being deployed in very ethically charged situations.
In Alife, work is proceeding on the creation of protocells, which will challenge our commonsense conception of life and may open the door to designer biological weapons that will make today's weapons look the way the horse looks next to modern transportation technology. Autonomous robotics faces two main challenges. The most important is its use in warfare, which we will talk more about later; the second is the emergence of social robotics, machines designed as companions, helpers, and, most notoriously, sexual objects.
I believe that a more fully understood concept of artificial moral agency is vital to the proper design and use of these technologies. What worries me most is that in robotics we are rushing headlong into deploying them as surrogate soldiers and sex workers, two activities that are surrounded by constellations of tricky ethical problems that even human agents find immensely difficult to make sense of. I wish we could have spent some additional time working out the inevitable bugs in the design of artificial moral agents in more innocuous situations first.
Concerning the use of robots by the military, Ronald Arkin has worked on an ethical governor system for unmanned systems. Do you think similar developments will be used in other application areas of robots in society? The impact of robots on health care and care for the elderly, in particular, concerns ethically sensitive areas.
Yes, I do think that some sort of ethical governor or computational application of moral logic will be a necessity in nearly every application of robotics technology. All of one's personal interactions with other humans are shaped by one's own moral sentiments. This comes so naturally to us that it can be hard to notice until someone transgresses a social norm and draws our attention to it. So if we expect robots to succeed in close interactions with people, we need to solve the problem Arkin has addressed with his work. Right now, our most successful industrial robots have to be carefully cordoned off from human workers for safety reasons, so there is no pressing need for an ethical governor in those applications. But when it comes to replacing a human nurse with a robot, suddenly the machine is thrust into a situation where a rather dense set of moral situations develops continuously around the patients and caregivers. For instance, one might think that passing out medication could be easily automated by just modifying one of the existing mail-delivery robots in use in offices around the world. But there is a significant difference: a small error in mail delivery is just an inconvenience, whereas a mistake in medication could be lethal. And suppose we could make a foolproof delivery system and get around that objection; even then we have a more subtle problem. Patients in a hospital or nursing home often tire of the prodding, poking, testing and constant regimen of medication. They can easily come to resist or even resent their caregivers.
So, a machine dropped into this situation would have to not only get the right medication to the right patient, but also engage the patient in a conversation: convince him or her that it is interested in the patient's well-being and wants only what is best, listen attentively and caringly to the patient's concerns, and then, hopefully, persuade the patient to take the medication. So we can see that this simple task is embedded in a very complex and nuanced moral situation that will greatly tax any known technology we might use to implement general moral intelligence. I therefore think the medical-assistant sector of robotics will not reach its full potential until some sort of general moral reasoning system is developed.
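To make the contrast concrete, here is a minimal Python sketch of the decision logic such a medication robot would need. Every name and rule here is hypothetical, not a description of any real system; the "escalate" branch stands in for exactly the open problem of general moral intelligence described above:

```python
from dataclasses import dataclass

# Toy sketch of the decision logic discussed above. Every name and rule
# here is hypothetical; no real care-robot system is being described.

@dataclass
class Patient:
    id: int
    refuses: bool  # stands in for the whole spectrum of patient resistance

@dataclass
class Dose:
    patient_id: int

def deliver_medication(dose: Dose, patient: Patient) -> str:
    """Return the action a hypothetical medication robot would take."""
    if dose.patient_id != patient.id:
        # Unlike misdelivered mail, this error could be lethal.
        return "abort: wrong patient"
    if patient.refuses:
        # The genuinely hard branch: the robot must converse, listen,
        # and persuade. No known technology fills this in reliably.
        return "escalate: requires conversation and trust"
    return "deliver"
```

The first two branches are ordinary safety engineering; the point of the sketch is that the third branch, the one a human nurse handles routinely, is the unsolved part.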
This is the Turing Test with a vengeance. The question really comes down to this: How do you, and indeed CAN you build a machine that will evince the sort of sensibilities that the average nurse possesses, that makes him/her of such high value to patients?
Well, there seem to be two answers to this complex question, depending on what you are referring to by 'evince'. Is the word referring to observable behaviors, or rather to the sort of inner life that, we know from our own experience, is at the root of our own moral behavior?
Taking the latter meaning (the inner life) first, we can answer the question this way: yes, we can build a machine that has the requisite range of emotional, cognitive, moral and ethical reasoning abilities, and we will then have a good nurse, or a potential nurse, if it needs training or experience.
Taking the question in its other, behavioral sense: no, we needn't try to do all that. Let's build a machine that may very well not have emotions, conscious or unconscious cognition, etc., to rely on, and which may not even have the moral intuitions or theories that your typical nurse or philosopher has, but which nevertheless is able to emulate the visible behavior that all of these things bring about in human nurses.
It seems to me that these are the only two possible goals for anyone who wants to create artificial moral agents. (Now, having said that, I confess that anything that satisfies the emulation conception just does not appear to me to be a moral agent. It cannot know anything about itself, nor its environment, nor the consequences of its actions. It only emulates behaviors that are 'evinced' by beings such as ourselves, beings that do have these characteristics. And, finally, since it cannot know these things, it cannot feel responsibility, and indeed cannot be held responsible by others. Its creators are the only responsible parties involved and should be held responsible.) So, paradoxically put, this sort of AMA ain't a moral agent.
In any case, we are not very near being able to accomplish any of this. The closest we get to the former sort of created moral agent is good old human reproduction and child rearing, that is, if you consider humans to be very sophisticated machines. Many of them become nurses, do they not?
Now, it may or may not be the case that non-biological machines can be created that will be conscious, have emotions, and so on. I don't see how we can tell one way or the other right now whether this is indeed possible, but all would agree that the eventuality lies in the distant future, if it comes at all. My own tentative hunch is that we are going to find, as we try, and try again, that it is not possible to create conscious moral agents unless we create beings that are physically very much like us, that is, biological beings, made out of essentially the same stuff.
Now, supposing we can do this (create not only artificial life, but artificial life that is at the same time a moral agent), this does raise a whole host of questions. Some of them: To what degree can we engineer improvements in their emotional and cognitive function that will make them better-functioning moral creatures than us? If this is possible, would we want to engineer ourselves in this way as well? Are there unintended consequences in such attempts? Will the artificially created beings come to see us as children, or an inferior species that must be supervised, controlled or even reared? All these questions are the fodder of science fiction right now, and of the budding "singularity" debate, and admittedly geared toward the distant future, but they are fascinating and necessary fodder for careful thinking as well. Now, moving on to the behavioral sense of the word "evince":
To succeed in this second endeavour (creating machines that emulate the complex behavior of nurses without the subjective goings-on) looks to be another tall order. So far, we have managed to create machines that can emulate human calculation; machines that carry out simulations of physical objects and environments, thus emulating real-world natural processes and prototype testing, and generating end products that count as designed physical objects, which can themselves be "emulated", or better yet "instantiated", in real matter; and we have even built and programmed machines that can 'talk' with human beings, in attempts to pass Turing's test. But all of this is a far cry from the emotional, social and moral skills a nurse has.
No program that takes on Turing's challenge can fool a human for long when it comes to emulating natural conversation. Why? To put it simply, these programs are, after all, programs: rigid sets of instructions that correlate electronic inputs with electronic outputs. They instruct the machine to emit certain outputs if it receives certain inputs. The only flexibility in such programs is the flexibility written into the coded instructions. And even if programmers attempt to build flexibility in by providing more lines of coded instructions, human interlocutors can pretty quickly tell what's going on and detect the repetitive, stereotyped responses.
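The rigidity being described can be illustrated with a minimal ELIZA-style responder. The two rules below are invented for illustration: the program's entire conversational repertoire is its hand-coded pattern table, and anything outside it falls through to a canned default.

```python
import re

# Minimal ELIZA-style responder. Its "flexibility" is exactly the
# flexibility coded into this rule table, and nothing more.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(match.group(1))
    # The stereotyped fallback a human interlocutor soon learns to spot.
    return "Please tell me more."
```

Anticipated inputs get a templated reply, but anything the rule writers did not foresee gets the same stock response, which is precisely the repetitiveness that gives such programs away.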
Finally, if Searle is right that, in the unlikely event that a code-using machine does succeed in perfectly emulating the behavior of a nurse, it nevertheless does not understand, empathize, and so on, in the way that nurses do, then I suspect that common knowledge of that fact amongst patients and prospective patients would be cold comfort. We'd much rather have the warm-blooded comfort provided by the under-appreciated human nursing community.
OK, back to the interview:
A lot of the challenges concerning the use of robots in society seem to stem from the question of robot autonomy and especially from the question of robots possibly becoming moral agents. Where do you see the main challenges in this field?
This is a great question and I have much to say about it. I have a complete technical argument, which can be found in the chapter I wrote on artificial moral agency in technoethics in the Handbook of Research on Technoethics, Volume One, edited by Rocci Luppicini and Rebecca Adell, but I will try to distil that argument here. The primary challenge is that no ethical theory has ever given serious consideration even to non-human moral agents, much less artificial moral agents, so we exist in a conceptual void, and most ethicists would find the concept unthinkable or even foolish. I think it is important to challenge this standard moral certainty that humans are the only things that count as moral agents, and instead entertain the notion that it is possible, and in fact desirable, to admit non-humans and even artifacts into the club of entities worthy of moral concern. So, if you will allow me to quote myself from the work cited above: “…briefly put, if technoethics makes the claim that ethics is, or can be, a branch of technology, then it is possible to argue that technologies could be created that are autonomous technoethical agents, artificial agents that have moral worth and responsibilities—artificial moral agents.”
There is a conceptual ambiguity in this paragraph that needs to be clarified. Let me begin to spell it out by offering quick thumbnail definitions of the terms Sullins uses here:
1. Moral agent - an entity that is capable of moral responsibility.
2. Moral patient - an entity that is deserving of some level of moral consideration.
3. Moral consideration - the act of taking into account the interests of moral patients or moral agents.
Now, Sullins says two things: that we have not heretofore taken seriously the idea that non-humans can be moral agents; and that we also need to consider admitting non-human things to the list of moral patients, and thus of things deserving moral consideration.
As to the former point, I think it is probably false, both as a matter of philosophical history and as a matter of broader cultural 'ruminations', if you will.
Firstly, Kant in particular took up the possibility of non-human moral agents, and argued that insofar as such beings exhibited rationality and freedom, and were able to formulate maxims and subject them to the categorical-imperative testing procedure, to that extent they would be on a par with us as moral agents.
Secondly, fiction (literature high-brow, middle-brow and low-brow) and other elements of pop culture have been replete with stories that raise this issue. Can anyone remember HAL, or the M5 computer, or Mr. Data, or I, Robot?
Much the same can be said about his suggestion that we entertain the possibility of expanding the realm of moral patient-hood. That seems very much underway, and has been for some time, once again both in academic life and in the general culture (e.g., Peter Singer and PETA). Arguments have been made that inanimate objects, including the entirety of Earth, are moral patients. The literature is legion, the advocacy groups equally so.
Quibbles aside, he does make the good point that if we are going to aim for the creation of artificial devices that are largely autonomous and which interact with moral patients, then we should entertain the possibility that it would be a good idea to build them in such a way that they can be moral agents. Once again, I revert to my first comment: if we think this is wise, the road to success in this endeavor is the road that actually tries to build artificial objects that have the inner life we have, the complex inner life that eventuates in behavior we judge morally and can recognize as coming from the moral perspective. Going the emulation path seems a dead end.
Let me explain myself a bit more clearly. Every ethical theory presupposes that the agents in the proposed system are persons who have the capacity to reason about morality, cause and effect, and value. But I don't see the necessity of requiring personhood; wouldn't the capacity to reason about morality, cause and effect, and value be enough for an entity to count as a moral agent?
I don't see how you can conceptually cut this joint. Only persons can pull off this task. Why? I think Kant had it right, as did Aristotle. Animals can reason about cause and effect, but cannot reason about morality and value because, among other things, they do not have the cognitive capacity to recognize other entities as moral patients, to recognize that these other entities have interests and can suffer. To do these things, one has to have empathetic abilities, and ultimately self-consciousness. For, in order to have empathetic abilities, one has to be able to take up the position of the other and be able to ask and answer the question 'what would it be like for me if I were in that position?' No animal other than man can do this, and no non-conscious machine could do this. As to the possibility that a non-conscious machine could emulate the behavior of such entities in situations like warfare... I'm skeptical.
And further, you probably do not even need that to count as an entity worthy of moral concern, a "moral patient" as these things are often called in the technical literature. So for me a thing just needs to be novel and/or irreplaceable to be a moral patient; that would include lots of things such as animals, ecosystems, business systems, artwork, intellectual property, some software systems, etc. When it comes to moral agency the requirements are a little more restrictive. To be an artificial moral agent the system must display autonomy, intentionality, and responsibility. I know those words have different meanings for different people, but by "autonomy" I do not mean possessing a complete capacity for free will; I just mean that the system is making decisions for itself. My requirements for intentionality are similar, in that I simply mean that the system has to have some intention to shape or alter the situation it is in. And finally, the system has to have some moral responsibility delegated to it. When all of these are in place in an artificial system, it is indeed an artificial moral agent.
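Sullins's three conditions can be read as a simple conjunctive test. The sketch below only encodes that conjunction (the attribute names are mine, not his); deciding whether each predicate actually holds of a real system is, of course, the substantive philosophical question.

```python
from dataclasses import dataclass

@dataclass
class System:
    # Each flag stands in for a substantive judgment that is itself
    # contested; the attribute names are illustrative, not Sullins's.
    makes_own_decisions: bool          # "autonomy" in the weak sense above
    intends_to_alter_situation: bool   # "intentionality", likewise weak
    has_delegated_responsibility: bool

def is_artificial_moral_agent(s: System) -> bool:
    # On Sullins's account, all three conditions must hold at once.
    return (s.makes_own_decisions
            and s.intends_to_alter_situation
            and s.has_delegated_responsibility)
```

A caged industrial robot fails the test (no delegated moral responsibility), while an armed drone making its own engagement decisions under delegated rules of engagement would, on this account, pass.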
Once again, at the risk of being accused of quibbling: such entities are not moral agents, but tools of moral agents. If they cannot be responsible, they cannot be moral agents. I don't think that they can be responsible; rather, others are responsible for them and their use. To use a simple example: if I choose to have my robot rescue the proverbial drowning kid from the pond instead of going in myself, and as a result of my inputs it does in some sense of the word "intend" to do so, yet fails, then even though it is true in some sense of the phrase that I 'delegated responsibility' to the machine, in the important sense I did no such thing. I am responsible for the kid drowning, not the machine. Why? The method I chose to use failed, while I had other methods at my disposal. I am the only moral agent involved here.
If we speak about a moral judgment made by a machine or artificial life-form, what would be the impact of this on society and human self-conception?
There are many examples throughout science fiction of how it might turn out badly. But I do not think any of those scenarios will fully realize themselves. I believe this could be a very positive experience if we do it correctly. Right now, research in moral cognition suggests that human moral agents make their decisions based largely on emotion, guided by some general notions acquired from the religion or ethical norms of their culture, and then they abduce from this the behavior they exhibit. Working on artificial moral agents will force us to build a system that can more rationally justify its actions. If we are successful, then our artificial moral agents might be able to teach us how to be more ethical ourselves. We are taking on a great responsibility; as the intelligent designers of these systems, it is ultimately our responsibility to make sure they are fully functioning and capable moral agents. If we can't do that, we shouldn't try to build them.
This view of the psychological studies of moral cognition does not mention that much of that research in fact points to something like a set of innate ethical theories that shape our judgements, theories at least as basic and powerful as cultural traditions, and indeed cross-cultural. Not surprisingly, these innate theories line up quite well with standard philosophical ethical theories. He's quite right to point out that it will be of value to our research, viz. getting clear about these theories and their inter-relations, to try to emulate them in machinery and computer code. No quibbles there, and no quibbles with his contention that it is we who are ultimately the morally responsible agents. But, given what he had just said, I'm a bit confused. Does he or does he not think artificial objects can be moral agents?
We are not guaranteed success in this endeavor; we might also build systems that are amoral and that actively work to change the way we perceive the world, thus stripping ourselves of the requirements of moral agency. This is what I am working to help us avoid.
You have argued that telerobotic systems change the way we perceive the situation we are in and that this factor and its effect on warfare is insufficiently addressed. Where do you see the main ethical challenges of this effect and what could be done to solve or at least mitigate these problems?
The main issue is what I call telepistemological distancing: how does looking at the world through a robot color one's beliefs about the world? A technology like a telerobotic drone is not epistemically passive, as a traditional set of binoculars would be. The systems of which the drone and pilot are part are active, with sensors and subsystems that look for, and pre-process, information for the human operator's consumption. These systems are tasked with finding enemy agents who are actively trying to deceive them, in an environment filled with other friendly and/or neutral agents. This is hard enough for general reconnaissance operations, but when these systems are armed and targets are engaged, it obviously becomes a monumental problem that will tax our telepistemological systems to the limit. And it does not stop there: once the images enter the mind of the operator or soldier, myriad social, political, and ethical prejudgments may color the perceived image with further epistemic noise.
No doubt all of this is true, but, as with the introduction of the telescope, telerobotic systems, if designed well, will be tremendous aids in waging ethical warfare, in that they will increase discriminative abilities. This is already the case. I know there is much hand-wringing about collateral damage with use of remotely piloted aircraft, but as compared to conventional operations, the numbers are in fact much lower. Not to mention, the risk to soldiers is considerably less.
As we can see, there are two loci of epistemic noise: 1) the technological medium the message is contained in, and 2) the preconditioning of the agent receiving the message. So if we are to solve or mitigate these problems, they have to be approached from both directions. First, the technological medium must not obscure information needed to make proper ethical decisions. I am not convinced that the systems in use today meet that standard, so I feel we should back off from using armed drones.
I'm not sure on what Sullins bases this judgment. There is enough 'drone' footage available to demonstrate that the technology allows careful and considered monitoring of potential targets, and timing of attacks to minimize killing of innocents. The evidence is overwhelming.
The preconditioning of the operator is a much harder problem. Today's soldiers are from the Xbox generation and as such come into the situation already quite desensitized to violence, and not at all habituated to the high level of professionalism needed to follow the strict dictates of the various rules of engagement, the laws of war, or just war theory.
A recent report by the US Surgeon General, in which US Marines and soldiers were interviewed after returning home from combat operations in the Middle East, suggests that even highly trained soldiers have a very pragmatic attitude toward bending the rules of engagement they are subject to. As it stands, only officers receive any training in just war theory, but drones are now regularly flown by non-officers and even non-military personnel, such as in the operations flown by the CIA, so I am worried that the pilots themselves are not provided with the cognitive tools they need to make just decisions. To mitigate this we need better training and very close command and control maintained over these technologies, and we should think long and hard before giving covert air-strike capabilities to agencies with little or no public accountability.
Concur on the advisability of ethics training. I would suggest, though, that just war theory is just the tip of the iceberg. Why not provide ethics education more broadly construed? Why not delve into the aspects of just war theory that bring into play ideas from other areas of ethical thought, in particular the main theories that have been offered: Kantian, utilitarian, virtue-ethics, contractual, and natural-law perspectives?
Are remote pilots less sensitive to the lethality they wield? Are they indeed Xbox players who are more likely to rush unthinkingly into killing? I don't think so, and the easy assumption that they are should not be made. The lack of sensitivity toward killing in games is best explained by the fact that the players know they are playing games. I see no reason to believe that the same individuals would not be well aware that, in piloting lethal drones, they are engaged in actual warfare. Once again, commonly available video backs this up.
As far as CIA UAV operations are concerned, one can witness a continuous increase. As you mentioned, there are various problems connected with them. To single out just one: do you think the problem of accountability for these actions, i.e. the question of the locus of responsibility, could be solved in an adequate manner?
This is a very hard problem that puts a lot of stress on just war theory. A minimal criterion for a just action in war is, obviously, that it be an action accomplished in the context of a war. If it is, then we can use just war theory and the laws of war to try to make some sense of the action and determine whether it is a legal and/or moral action. In situations where a telerobot is used to project lethal force against a target, it is not clear whether the actions are acts of war or not. Typically, the missions flown by intelligence agencies like the CIA are flown over territory that is not part of the overall conflict. So the "War on Terror" can spill out into shadowy government operators engaging an ill-defined set of enemy combatants anywhere on the globe they happen to be. When this new layer of difficulties is added to the others I have mentioned in this interview, one is left with a very morally suspect situation. As an example we can look at the successful Predator strike against Abu Ali al-Harithi in Yemen back in 2002. This was the first high-profile terrorist target engaged successfully by intelligence operatives using this technology. The act was widely applauded in the US but was uncomfortably received elsewhere in the world, even by other countries allied in the war on terror. Since then, the use of armed drones has become the method of choice for finding and eliminating suspected terrorists who seek sanctuary in countries like Pakistan, Yemen, Sudan, Palestine, etc. It is politically expedient because no human intelligence agents are at risk and the drone can loiter high and unseen for many hours, waiting for the target to emerge. But it can also turn the entire planet into a potential battlefield, putting at risk civilians who are completely unaware that they are anywhere near a potential firefight.
I can easily see the pragmatic reasons for conducting these strikes, but there is no way they can be morally justified, because you have a non-military entity using lethal force that has caused the death and maiming of civilians from countries that are not at war with the aggressor. I am amazed that there has not been sharper criticism of this behavior in international settings.
It is not only politically expedient but also morally defensible to use "drones" in this way. Consider the alternatives: each offers increased risk to soldiers and/or innocents. Being able to hover and monitor actually increases the time available for reasoned and thoughtful action. Now, I think Sullins overstates things considerably by saying we run the risk of making the entire planet a battlefield, but he does raise the salient point that unsuspecting innocents are often around targets. That is true, but it is also true that low-yield missiles and careful monitoring allow us to pick times of attack that lessen their risk of death. When more conventional alternatives are considered, one quickly realizes that they bring greater risk to innocents than "drone" attacks do. Even special operations involve small arms, and small arms combined with the high stress of such operations, the likelihood of firefights and the darkness often favored put innocents at equal or greater risk, I would offer.
Negotiations and treaties will no doubt be needed to create specific rules of engagement and laws of war to cover this growing area of conflict. Yet even if the major players can agree on rules of engagement and laws for the use of drones, that does not necessarily mean the rules and laws obtained will be ethically justified. For that, we have to operate this technology in a way that respects the self-determination of the countries it is operated in, so that we do not spread the conflict to new territories, and we must use it with the double intention of hitting only confirmed military targets and of doing so in such a way that no civilians are intentionally or collaterally harmed. I would personally also suggest that these missions be flown by trained military personnel, so that there is a clear chain of responsibility for any lethal force used. Without these precautions we will see ever more adventurous use of these weapons systems.
Good point about wanting to create a clear chain of responsibility, but I see no a priori reason to believe that such responsibility cannot be extended to CIA employees.
One of the problems you have identified with UAV piloting is that these aircraft tend to be controlled not only by trained pilots, typically officers with in-depth military training, but also by younger enlisted men. Do you also see the future possibility of contracting UAV piloting to civil operators? What would be the main challenges in these cases, and what kind of special training do you think would be necessary for these UAV operators?
Yes, there is a wide variety of UAVs in operation today. Many of them do not require much training to use, so we are seeing a trend in which they are piloted by younger war fighters. Personally, I would prefer that we maintain the tradition of officer training for pilots, but if that is impossible and we are going to continue to use enlisted persons, then these drone pilots must be adequately trained in the ethical challenges peculiar to these technologies, so they can make the right decisions when faced with them in combat situations.
Since the larger and more complex aircraft like the Predator and Reaper are typically piloted from locations many thousands of miles away, it is quite probable that civil contractors might be employed to fly these missions. That eventuality must be avoided, at least when it comes to the use of lethal force in combat missions. The world does not need a stealthy telerobotic mercenary air force. But if we can avoid that, I do think there is a place for this technology in civil settings. For instance, just recently a Reaper drone was diverted from combat operations in Afghanistan and used to help locate survivors of the earthquake in Haiti. Certainly, that is a job that civil pilots could do. These machines are also useful for scientific research, fire patrols, law enforcement, etc., all of which are missions it would be appropriate for civilians to accomplish. The ethical issues here are primarily those of privacy protection, expansion of the surveillance society, and accident prevention. So with that in mind, I would hope that civil aviation authorities will work to regulate the potential abuses these new systems represent.
Regarding the impact of telerobotic weapon systems on warfare, where do you see the main challenges in the field of just war theory and how should the armed forces respond to these challenges?
Just war theory is by no means uncontroversial, but I use it because no rival theory does a better job than just war theory, even with its flaws. It is, of course, preferable to resolve political differences through diplomacy and cultural exchange, but I do think that if conflict is inevitable, we must attempt to fight only just wars and prosecute those wars in an ethical manner. If we can assume our war is just, then for a weapons system to be used ethically in that conflict, it must be rationally and consciously controlled toward just end results.
Telerobotic weapons systems impact our ability to fight just wars in the following ways. First, they seem to be contributing to what I call the normalization of warfare: telerobots contribute to the acceptance of warfare as a normal part of everyday life. These systems can be controlled from across the globe, so pilots living in Las Vegas can work a shift fighting the war in the Middle East and then drive home and spend time with the family. While this may seem preferable, I think it subtly turns combat into a normal everyday activity, in direct conflict with just war theory, which demands that warfare be a special circumstance prosecuted only in an effort to return quickly to peaceful relations.
This is an odd argument. It seems to say that we should eschew this technology so as to limit our war-making options to those that incur wars of high cost, wars very much like conventional wars, in which the risk of deaths, soldier and civilian, is large, so as to discourage warfare in general. In response it can be said that judicious use of remotely piloted lethal force would obviate the need for such recourse, and in fact would be morally preferable for that very reason. It allows us to target the people who need targeting, and eliminate them, without having to invade or do other things that risk larger swathes of innocents. We can also ask whether he means this as a general point: should we, generally speaking, eschew technologies that risk making warfare too easy, so as to keep the choice horrible enough that warfare remains a last and rare option? One cannot but note that such arguments could have been, and indeed were, made in the past when new technologies were invented, firearms for instance. Yet, empirically, has warfare increased since then? I doubt it. What is more, one cannot in fact effectively eschew emerging technologies. They will be developed, by the civilized warrior or by the barbarian. We had better assure that the civilized warrior develops them first and maintains overwhelming technological superiority. A moral argument to that effect is not hard to construct.
Finally, part of the reason Sullins has trouble deciding whether or not the use of lethal RPAs amounts to war, I would suggest, is that such tactics, while military in nature and indeed involving lethal force, are not warfare but preventative of war. As such, they are morally defensible, given the alternative.
Also, telerobots contribute to the myth of surgical warfare and limit our ability to view our enemies as fellow moral agents.
What is the import of the use of the word "myth" here? Tendentious.
That last bit is often hard for people to understand, but moral agents have to be given special regard even when they are your enemies. Just war theory attempts to seek a quick and efficient end to hostilities and a return to a point where the enemy combatants can again respect one another’s moral worth. For instance, look how many of the European belligerents in WWII are now closely allied with each other. Hostilities must not be conducted in a way that prevents future cooperation, yet telerobotic weapons seem to be doing just the opposite. The victims of these weapons have claimed that they are cowardly and that, far from being surgical, they create devastating civilian casualties. These allegations may or may not be true, but they shape the image much of the world has of the countries using these weapons, fanning the flames of intergenerational hatred between cultures.
One suspects that our enemies are being disingenuous here, and would be more than glad to use the same methods, against greater swathes of targets, if they could. One must also point out that the targets (AQ, the Taliban crime family, and the like) already do not 'respect our moral worth,' and nothing we do can change that. What we need to worry about is not the tender perceptions of the Islamist totalitarians, but those of the general populace in countries like Pakistan and Yemen, countries that provide a goodly number of the fighters we are engaged with. I do not believe that we need to eschew the technology in order to alter their inaccurate perceptions of the U.S. I am not naive about the prospects, and it may indeed not matter what we do; the perceptions may be set and unalterable. However, the argument needs to be made, loudly and repeatedly, that conventional means of warfare are certainly available to us, and we could use them, but we choose not to, instead choosing 'surgical' warfare, using drones, because we do not want to harm non-combatants.
One would think this would be obvious to them, but it is not, thanks to an effective propaganda campaign by our enemies. We counter that propaganda not by limiting ourselves, and thus playing into enemy hands by resorting to tactics that will lead to increased casualties, nor by choosing retreat, but by clearly making the moral case for the kind of warfare we are engaged in, across its full spectrum, from COIN to the use of remotely piloted aircraft. If I had my druthers, we would hijack the radio and television in the area (Al Jazeera?) and broadcast 24 hours a day.
So what you are saying is that the current method of using UAVs might actually endanger one of the principles of just war theory, the probability of obtaining a lasting peace (iustus finis); in other words, short-term military achievements might curb the long-term goals of peace?
Yes, that is exactly right. People who have had this technology used against them are unlikely to forgive or reconcile. When these technologies are used to strike in areas that are not combat zones, they tend to fan the flames of future conflict even if they succeed in eliminating a current threat. This can cause a state of perpetual warfare, or greatly exacerbate one that is already well underway. For instance, we can see that the use of remote-controlled bombs, missiles, and drones by both sides of the conflict in Palestine is not ending the fight but is instead driving that conflict to new heights of violence.
The armed forces should respond to this by understanding the long-term political costs that come with short-term political expediency. Right now, a drone strike that causes civilian casualties hardly raises concern among the home audience, but in the rest of the world it is a source of great concern. It is also important to resist the temptation to normalize telerobotic combat operations. I would suggest backing off from using these weapons for the delivery of lethal force and moving back to reconnaissance missions. And yes, I do know that that will never happen, but at the very least we should use these weapons only under tight scrutiny, in declared combat zones, with the intent both to prosecute the conflict justly and to eliminate noncombatant casualties.
This simply concedes too much to the enemy. Suppose we find, via reconnaissance, that UBL is in fact in Pakistan and can be 'surgically' removed from this mortal coil. Is it responsible to take that option off the table in such an a priori fashion?
One question connected to the normalization of warfare through telerobotics is so-called shift-work fighting. Where do you see the main challenges in this blending of war and civilian life, and how could it be countered?
I need to be careful here so that I am not misunderstood. I do understand that these technologies take the war fighters who would have had to risk their own lives in these missions out of danger and put an easily replaceable machine in their place. That is a moral good. But what I want to emphasize is that it is not an unequivocal good. Even if our people are not getting hurt, there are real human agents on the other end of the crosshairs. Making a shoot or don’t-shoot decision is one of the most profound decisions a moral agent can be called on to make. It cannot be made in an unthinking or business-as-usual way. So when we blend war fighting with daily life, we remove these decisions from the special moral territory they inhabit in just war theory and relocate them to the much more casual and pragmatic world of daily life.
This psychology just seems false on its face. Police are analogous to these pilots, are they not? They have home lives and often work in dangerous environments, yet they do not suffer from a "normalization" of the decision making in their work. Why should it be any different for remote pilots, especially if they are trained military folks?
Realistically, I do not think there is any way to counter this trend. It is politically expedient from the viewpoint of the commanders, it is preferable to the individual war fighters, and there does not seem to be any international will to challenge the countries that are using UAVs in this way. As the technology advances we will see more and more naval craft and armored fighting vehicles operated telerobotically and semi-autonomously as well. This, for instance, is a major plank of future warfare planning in America, and quite a bit of money is being directed at making it a reality. It is my hope, though, that these planners will take some of these critiques seriously and work to keep the operators of these future machines as well trained and professional as possible, and to ensure that they operate them without cognitive dissonance. By that I mean the operators should be well aware that they are operating lethal machinery in a war zone and that it is not just another day at the office.
I understand that in your talk at the IEEE International Conference on Robotics and Automation 2009 in Kobe, you also presented recommendations for the use of telerobotic weapon systems. What should be our top priority at the moment?
The conference in Kobe was very interesting. Roboticists such as Ronald Arkin are working hard on designing systems that will act as “ethical governors,” in the hope that future autonomous and semi-autonomous military robots will be able to behave more ethically than humans do in combat situations. So the top priority right now should be to take this idea seriously, so we can make sure that these ethical governors become more than just an idea and are an actual functioning part of new systems. The main sticking point right now is that, at least theoretically, a system with a functioning ethical governor would refuse orders it deemed unethical, and this is proving to be a difficult technology to sell. If I may be permitted one more top priority, it would be to investigate some of the claims I have made in order to provide more detailed information. Is telepistemological distancing real? Do drone pilots view the war as just a kind of super-realistic video game? The military has the funds and personnel to carry out these studies, and without this data we cannot rationally and consciously use these weapons and therefore cannot use them ethically.
Yes, to this research project. I suspect it will render negative results for some of the questions, thus obviating some of the worries here presented.
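To make the "ethical governor" idea discussed above concrete, here is a minimal illustrative sketch in Python. It is not Arkin's actual architecture; every name, field, and rule below is a hypothetical assumption chosen only to show the core behavior described in the interview: a software layer that can restrict, but never initiate, the use of force, and that refuses orders violating its encoded constraints.

```python
from dataclasses import dataclass

# Hypothetical, simplified sketch of ethical-governor logic. The governor
# sits between the tactical system and the weapon and may veto (refuse)
# an engagement order. All names and rules are illustrative assumptions.

@dataclass
class EngagementOrder:
    target_id: str
    target_confirmed_combatant: bool
    inside_declared_combat_zone: bool
    expected_noncombatant_casualties: int

def ethical_governor(order: EngagementOrder) -> tuple[bool, str]:
    """Return (permitted, reason). The governor can only restrict,
    never initiate, the use of lethal force."""
    if not order.target_confirmed_combatant:
        return False, "target not positively identified as a combatant"
    if not order.inside_declared_combat_zone:
        return False, "strike outside a declared combat zone"
    if order.expected_noncombatant_casualties > 0:
        return False, "expected noncombatant casualties"
    return True, "order within encoded constraints"

# A strike outside a declared combat zone is refused even when the
# target is confirmed and no civilian casualties are expected.
permitted, reason = ethical_governor(EngagementOrder("T-104", True, False, 0))
print(permitted, "-", reason)  # → False - strike outside a declared combat zone
```

The point the sketch makes is the one Sullins identifies as the sticking point: by construction, such a system sometimes answers "no" to its operators, and that refusal is the feature, not a bug.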
To mitigate the most detrimental effects of telepistemological distancing, there are five aspects one might consider:
1. Constant attention must be paid to the design of the remote sensing capabilities of the weapon system. Not only should target information be displayed, but information relevant to making ethical decisions must not be filtered out. Human agents must be easily identifiable as human, not objectified by the mediation of the sensors and their displays to the operator. If this is impossible, then the machine should not be operated as a weapon.
An interesting point is made here: looking at video of real human beings in real time is quite a different experience from looking at blips or abstract visual representations, and the latter interface may tend to make decisions less morally deft. This is another area where research would help evaluate the veracity of the claim.
2. A moral agent must be in full control of the weapon at all times. This cannot be limited to an abort button: every aspect of the shoot or don’t-shoot decision must pass through a moral agent. Note that I am not ruling out the possibility that this agent may not be human; an artificial moral agent (AMA) would suffice. It is also important to note that AMAs that can intelligently make these decisions are a long way off. Until then, if it is impossible to keep a human in the decision loop, these machines must not be used as weapons.
Another interesting point, and one which in part makes me more than a bit nervous. I don't think it would be wise to turn control completely over to AMAs, especially if they are merely 'emulative' AMAs. Keep the humans in the OODA loop.
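The in-the-loop requirement of point 2 can also be sketched in code. This is an illustrative toy, not any real system's API; the class and method names are assumptions. The design choice it illustrates is that the positive fire decision itself must come from a moral agent: absent one, the machine refuses outright rather than deferring, queueing, or deciding for itself, and a mere abort button after the fact would not satisfy this.

```python
# Illustrative sketch only; all names below are hypothetical assumptions.

class NoMoralAgentInLoop(Exception):
    """Raised when a fire request arrives without a moral agent's decision."""
    pass

class TeleroboticWeapon:
    def __init__(self):
        self.engagement_log = []  # record of (target, deciding agent)

    def request_fire(self, target, authorized_by=None):
        # No default authorization exists that the machine can supply
        # for itself: without an explicit decision, the request fails.
        if authorized_by is None:
            raise NoMoralAgentInLoop(
                "fire on %s refused: no moral agent made the decision" % target)
        self.engagement_log.append((target, authorized_by))
        return "engaging %s (decision by %s)" % (target, authorized_by)
```

Every path to lethal force runs through the `authorized_by` argument, so the shoot or don’t-shoot decision cannot be bypassed by omission; whether that agent may someday be an AMA rather than a human is exactly the question raised above.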
3. Since the operator him- or herself is a source of epistemic noise, it matters a great deal whether that person has been fully trained in just war theory. Since only officers currently receive this training, only officers should be controlling armed telerobots. If this is impossible, then these machines should not be used as weapons.
Concur on the ethics training, with the minor caveats already mentioned.
4. These weapons must not be used in any way that normalizes or trivializes war or its consequences; thus shift-work fighting should be avoided. Placing telerobotic weapons control centers near civilian populations must also be avoided, because such a center is a legitimate military target and anyone near it is in danger of military or terrorist retaliation.
Interesting point here, considering that Langley is already so situated.
5. These weapons must never be used in such a way as to prolong or intensify the hatred induced by the conflict. They are used ethically if and only if they contribute to a quick return to peaceful relations.
This seems to be something for which it is impossible to design a practicable metric, and it also does not take into account situations where you face an intractable enemy that has no interest in ever 'returning to peaceful relations.' When such an enemy is involved, how exactly are you to apply #5, and to whom?
All in all, this is a very interesting and thought-provoking interview, and an excellent exposition of cutting-edge worries concerning emerging military technology and the ethical implications that come with it.