Friday, May 3, 2013

The U.S. endangers world health by using vaccination workers to find UBL?

In an unattributed op-ed, Scientific American makes the case that it was immoral for the CIA to make use of a vaccination campaign to gain intelligence on the whereabouts of UBL.

Some key snippets and then reactions:

The operation that led to his death may yet kill hundreds of thousands more. In its zeal to identify bin Laden or his family, the CIA used a sham hepatitis B vaccination project to collect DNA in the neighborhood where he was hiding. The effort apparently failed, but the violation of trust threatens to set back global public health efforts by decades…

It is hard enough to distribute, for example, polio vaccines to children in desperately poor, politically unstable regions that are rife with 10-year-old rumors that the medicine is a Western plot to sterilize girls—false assertions that have long since been repudiated by the Nigerian religious leaders who first promoted them. Now along come numerous credible reports of a vaccination campaign that is part of a CIA plot—one the U.S. has not denied.

The deadly consequences have already begun. Villagers along the Pakistan-Afghanistan border chased off legitimate vaccine workers, accusing them of being spies. Taliban commanders banned polio vaccinations in parts of Pakistan, specifically citing the bin Laden ruse as justification. Then, last December, nine vaccine workers were murdered in Pakistan, eventually prompting the United Nations to withdraw its vaccination teams. Two months later gunmen killed 10 polio workers in Nigeria—a sign that the violence against vaccinators may be spreading.

Dire predictions of what COULD occur as an unintended consequence:

The distrust sowed by the sham campaign in Pakistan could conceivably postpone polio eradication for 20 years, leading to 100,000 more cases that might otherwise not have occurred, says Leslie F. Roberts of Columbia University's Mailman School of Public Health. “Forevermore, people would say this disease, this crippled child is because the U.S. was so crazy to get Osama bin Laden,” he argues.


OK. Seems a bit overheated, not to mention overconfident in regard to predicting the distant future.

“20 year postponement of eradication” 


That sounds like a case of the vapors.

If the deeply conspiratorial mindset in AfPak and other areas predates this program by ten years (or more, I would argue), then it seems likely that Roberts is exaggerating, just a smidgen, the number of polio or other disease victims attributable to the aftermath of the Abbottabad charade alone. His extrapolation 20 years into the future (let alone into the realm of the ‘forevermore’) seems no more plausible than this one:

Continued work by legitimate vaccination campaigns will, on the whole, be successful, and memories will fade, such that the level of deaths attributable to anti-Western conspiracy mongering will be no more than it would have been sans Abbottabad. Eradication will occur more or less on schedule.

I see nothing in the piece that convinces me that there will be a significant long-term uptick in distrust over and above what would have existed if that operation had not been revealed, nor any numbers showing worldwide increase in cases of such diseases since last May.  But, that brings us to another point:

The program itself is not to blame for the unintended consequences he does mention; the revelation of the program is to blame. Whoever it is that revealed the program is obviously more directly responsible for those deaths than poor Dr. Shakil Afridi  languishing in a Pak jail, or his CIA partners. If their cooperation had remained a secret, then the alleged rise in deaths attributable to the publicity obviously would not have occurred.

Lastly, the article does make an interesting argument using the Hippocratic Oath: Is it the business of medical professionals to take part in what are essentially military or counter-terror operations? A medical professional’s first and sworn duty is to the welfare of those he/she treats. The article makes plain that the people around the Abbottabad compound did not receive the full complement of vaccinations once the intel had been gathered.

Physicians take a Hippocratic oath to do no harm. Humanitarian workers adhere to an international code of conduct that requires that their services be provided independently of national agendas, on the basis of need alone. The misguided vaccine program in Pakistan was started in a poor neighborhood of Abbottabad, no doubt to give it an air of legitimacy. Yet after the first in a standard series of three hepatitis B shots was given, the effort was abandoned so that the team could move to bin Laden's wealthier community. This lapse in protocol proves that the best interests of the recipients were not the guiding principle of the effort—while not coincidently betraying the program for the sham it was.

A couple of things can be said in response. Firstly, medical personnel take part in both of these sorts of activities all the time. Medical personnel are employed at Gitmo, on the front during wartime and in other such settings. It is possible to serve both interests while meeting one’s primary obligation. There is nothing logically impossible about that. Obligations MAY come into conflict, but they need not. Additionally, humanitarian personnel have aided military activities in Iraq, arguably sincerely believing it was in service to the indigenous populations they promise to serve, that is, for their protection. Once again: there is nothing necessarily incompatible in that sort of case.

Now, having said that, I believe there is a good point to be made, that the full program of vaccinations should have been carried out. That is well taken. The locals deserved it, and there was no risk to the vaccination providers.

Second point, related to the first: Suppose ‘agendas’ (national or otherwise) really are anathema to the values inherent in the Hippocratic Oath, or to the principles of a humanitarian organization. If, even so, by taking part in intelligence, military, police or other operations a doctor or humanitarian could significantly further the welfare of those he normally serves, would he not have an obligation to do so, precisely because it helps him fulfill his primary professional obligation? The ‘agenda’ of Bin Laden may not have been national, but it was political. Additionally, it was antithetical to the values of these professional entities, and, more directly, because of his presence, and because of the deeply conspiratorial mindset he perpetuated, Afghans and Paks are not living the lives they could. UBL certainly did not adhere to the ‘do no harm’ stricture, and his actions brought down the wrath of the U.S. not only militarily, but economically, in ways that impacted AfPak. Does this not make a case that Dr. Afridi was morally correct, or at least morally OK, in doing what he did? By extension, does it not also make a case for the moral permissibility of the CIA’s operation as a humanitarian operation, even as it also served the dual purpose of U.S. national interest?

Things to consider as you read the hand-wringing op-ed.



U.N. Springs into action on autonomous (and not quite autonomous) weapons: recommends moratorium, while it forms commission and international law confab.

The Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, has authored a cautionary document that recommends nations halt the use and/or development of autonomous weapons systems until the UN forms a commission to talk about it for a year. More here:

Some of the weapons systems mentioned in the report are in section 45:
While much of their development is shrouded in secrecy, robots with full lethal autonomy have not yet been deployed. However, robotic systems with various degrees of autonomy and lethality are currently in use, including the following:
The US Phalanx system for Aegis-class cruisers automatically detects, tracks and engages anti-air warfare threats such as anti-ship missiles and aircraft.
The US Counter Rocket, Artillery and Mortar (C-RAM) system can automatically destroy incoming artillery, rockets and mortar rounds.
Israel’s Harpy is a “Fire-and-Forget” autonomous weapon system designed to detect, attack and destroy radar emitters.
The United Kingdom Taranis jet-propelled combat drone prototype can autonomously search, identify and locate enemies but can only engage with a target when authorized by mission command. It can also defend itself against enemy aircraft.
The Northrop Grumman X-47B is a fighter-size drone prototype commissioned by the US Navy to demonstrate autonomous launch and landing capability on aircraft carriers and navigate autonomously.
The Samsung Techwin surveillance and security guard robots, deployed in the demilitarized zone between North and South Korea, detect targets through infrared sensors. They are currently operated by humans but have an “automatic mode”.
So is the Special Rapporteur suggesting the U.S., Israel, the UK and the ROK stop using these things and other systems like them until the commission gets a chance to talk about them and put together a team of international lawyers that can craft some robust body of code that will govern their use? Seriously? Part of the message to the U.S. is this: Open your Navy to attack; make it more vulnerable by complying. Turn that damn Phalanx system off till we get all this squared away.
Well. Okeydokey then. We’ll get right on that.
Good luck with that project, Special Rapporteur. When, in the history of technology (military and non), have human beings sat back and waited for the legal regulatory apparatus of governments to catch up with and exhaustively take account of all possible moral and legal ramifications of the introduction of new technology? Never. It is always a ‘make it up as you go’ process of adaptation for extant governments and bodies of law. Why? For the very simple reason that innovation is a constant, and, being constant, at any one time you care to pick (at least in the modern world) it outstrips the moral and legal systems in place. They never quite catch up. By the time that they do ‘catch up’, further innovations are rendering them obsolete to some degree.
Unless we wish to outlaw such innovation, there is no real way to remedy this permanent gap.  Additionally, no technologically advanced nation, if it has the ability to make use of a technology that will not only assure its security, but allow it unprecedented ability to successfully adhere to the strictures of just war doctrine that demand discrimination and proportionality, will in good conscience pass on that opportunity. Additionally, use of such technology often obviates the need to resort to war as classically understood. That last resort gets resorted to less frequently, not more.  The Star Wars Droid Army scenarios are simply not panning out as to how the technology is actually used. Dystopian sci-fi stories are not arguments, especially when they are not borne out by historical trends.
Some of the worries that you see in the document, worries as to the potential for making the use of force too easy or something other than a last resort, are as old as military innovation. Nothing new in that argumentation. As I recall, ancient Greek infantry types tended to dislike or lack respect for missile troops (archers and slingers) because it was too easy for them to kill from a distance. Similar things were said when firearms were introduced, and upon each improvement of the design.

And as to the use of lethal force. First, warfare itself: Warfare is often resorted to earlier than is morally allowed, and it does not seem that technological innovation is uniquely to blame for that. Besides, the very same technology that is supposed to provide such temptation can also be used to deter such temptation. The surveillance possibilities introduced by some of this technology make it that much more difficult for states and non-state actors to undertake acts of aggression, either internally or externally, without detection and with plausible deniability.

Secondly, force is often resorted to in contexts that are well short of war, and actually such uses make it unnecessary to resort to war. Targeted assassinations and sabotage are obvious cases in point.  There is no reason to think that technology will not (and should not) be used for such purposes (Stuxnet anyone?).
Such reports as this one generated by the U.N. do tend to focus on the potential hazards of new developments in technology with considerably less emphasis put on the potential benefits. That does not lead to a fair appraisal.
In any case, it is entirely unrealistic to expect nations such as those fingered in the report to simply give up (even temporarily) the advantages that this technology provides them, especially when they are faced with a very dangerous world, full of actors that have considerably less moral compunction about acting badly. There is an obligation, for instance, on the part of the U.S. to protect its sailors, that it would inexcusably neglect if it were to entertain the moratorium.

In the end, this report's primary contribution will no doubt be as intellectual/conceptual fodder for academics and lawyers.

And, there is nothing wrong with that. It always pays to discuss the moral/legal ramifications of technological innovations. It needs doing in our schools and in government. However, as a serious policy proposal, it would be a vain attempt at halting the natural momentum of technological discovery and application, while also allowing those who would not heed its call to catch up.