Wednesday, June 25, 2014
Really just an excuse to work in a reference to an old Star Trek TOS episode. Wait for it...
By electing to drive a typically equipped car I engage in behavior that can end up in death or injury.
By electing to drive a cutting-edge, AI-enabled, self-driving, high-tech gee-whiz car, I engage in behavior that can end up in death or injury, but the likelihood is substantially decreased. However, I will spend considerably more money than for a typically equipped car.
I might decide to refrain from driving, in order to assure myself of no future bad consequences.
By electing to refrain from driving, I will not be engaging in that behavior; will not run the risk, but will not benefit from having a car.
In a rough nutshell, we can use this to illustrate the so-called ‘precautionary principle.’ It says something like this:
If an activity is known to carry certain risks, it may be the case that, all things considered, it would be best to refrain from it, even in cases of epistemological uncertainty; that is, when we know the risks are present within a certain degree or range of probability, but we do not know whether we will incur them.
This principle, in some form or another, is often cited in reference to climate change, to support either mitigation technology or limitation/elimination of carbon-generating technologies. The mitigation technologies are analogous to the gee-whiz car; the limitation/elimination regime is analogous to electing not to drive automobiles.
Quasi-formally, the argument for elimination goes like this.
In any case where an activity has avoidable negative consequences that are likely enough and severe enough, we should avoid it.
Activity A has probability P (high enough) of causing negative consequences C (severe enough).
Therefore, we should avoid doing A.
Applying this to the car example, the conclusion is that we should not drive cars, assuming you think the accident rates are high enough.
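The elimination argument amounts to a simple decision rule: avoid the activity when the harm is both likely enough and severe enough. Here is a minimal sketch of that rule; the function name, thresholds, and driving figures are all invented for illustration, not real accident statistics.

```python
# Hypothetical sketch of the elimination argument as a threshold rule.
# All numbers are invented placeholders, not real statistics.

def should_avoid(p_harm: float, severity: float,
                 p_threshold: float = 0.01,
                 severity_threshold: float = 100.0) -> bool:
    """Avoid activity A if harm is both likely enough and severe enough."""
    return p_harm >= p_threshold and severity >= severity_threshold

# Driving with made-up figures (severity in arbitrary units of harm):
print(should_avoid(p_harm=0.0001, severity=1000.0))  # False: severe, but not likely enough
print(should_avoid(p_harm=0.02, severity=1000.0))    # True: likely and severe
```

Whether the conclusion "we should not drive cars" follows, then, depends entirely on where you set the two thresholds.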
Quasi-formally, the argument for mitigation is similar, but has this first premise:
In any case where an activity & etc…, we should introduce devices that will reduce the likelihood and severity of the consequences of that activity.
This leads to the conclusion, contained in the latter clause:
We should introduce devices that will reduce….& etc.
To apply this to our car example, the conclusion is that we should drive cars that have mitigation technologies, such as airbags, computer aided traction, or self-driving tech; drive with helmets; etc., including some combination of such ‘devices’ as innovation provides them.
Now, obviously, in this latter case, we can take greater or lesser measures, depending on how much risk we would like to eliminate. The more mitigation we opt for, the greater the cost. That expenditure must come from somewhere. Sacrifices must be made in some other aspect of life. Money ‘don’t grow on trees,’ even if it is created by fiat. At some point, excessive caution will price the technology out of existence, in that it will become prohibitively expensive, drying up the market, and thus the providers.
In the car case, we could require all the above mitigation measures, include some sort of Michelin Man body suit, an ejection seat, and self-driving AI in all cars (if it is demonstrably better than human drivers), and drive the cost of cars up so high that only the very wealthy could afford them. That being the case, we would need to factor in the negative repercussions this ‘carlessness’ would have in other spheres of human activity, and determine whether the negative utilitarian impact outweighs the precautionary benefits.
In the elimination case, abandoning the activity will, just as obviously, bring with it negative consequences. So we will have to carry out a similar projection of impacts.
In either case, we need to ascertain probabilities of the various negative impacts. We should also take into account the positive impacts for all options, and their probabilities. We also need to ascertain how allocation of finite monetary and other resources will be impacted.
So, in order to properly and thoroughly carry out the sort of thinking instantiated in the precautionary principle, we need not only consider the positive impact of mitigation or elimination, but the negative impact as well. We should also consider the positive and negative impacts of retaining the status quo, and we should compare the three proposed regimes.
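That three-way comparison can be mocked up as a toy expected-utility calculation. Everything below is a placeholder assumption (the probabilities, costs, and benefits are made up); the point is only the shape of the comparison, not the numbers.

```python
# Toy expected-utility comparison of the three regimes: status quo,
# mitigation, and elimination. All figures are invented placeholders.

regimes = {
    "status quo":  {"p_harm": 0.02,  "harm_cost": 500.0, "upfront_cost": 0.0,  "benefit": 100.0},
    "mitigation":  {"p_harm": 0.005, "harm_cost": 500.0, "upfront_cost": 30.0, "benefit": 100.0},
    "elimination": {"p_harm": 0.0,   "harm_cost": 500.0, "upfront_cost": 0.0,  "benefit": 0.0},
}

def expected_net(r: dict) -> float:
    # benefit of the activity, minus mitigation spending, minus expected harm
    return r["benefit"] - r["upfront_cost"] - r["p_harm"] * r["harm_cost"]

for name, r in regimes.items():
    print(f"{name}: {expected_net(r):.2f}")
```

With these particular placeholder numbers the status quo comes out ahead; raise `p_harm` or `harm_cost` and mitigation or elimination can win. That sensitivity to the inputs is exactly why the full projection of impacts matters.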
In any case, the plausibility of the arguments for action will also rely on the degree of certainty we have as to the causal connection between the activity and the targeted negative consequences under consideration.
In the car case, the certainty is quite high that human driving has a direct causal connection to injuries and deaths on our roads. In the case of climate change, there is not as high a level of certainty as to whether carbon dioxide levels are a cause of climate change or a symptom, or as to the role of human-produced CO2. Indeed, there is even disagreement as to whether any significant change is occurring at all, and whether it is caused by anything we can significantly affect. If, for instance, it is due to long-cycle solar minima/maxima, then short of altering dear ole’ Sol, there is nothing we can really do about it.
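One rough way to capture this dependence on causal certainty is to discount an intervention's expected benefit by our confidence in the causal link. Again, the function and every number below are hypothetical illustrations, not measured values.

```python
# Hedged sketch: weighting the expected payoff of an intervention by c,
# our confidence that the activity actually causes the harm. Invented numbers.

def expected_harm_averted(c: float, p_harm: float,
                          harm_cost: float, reduction: float) -> float:
    """Expected harm averted by an intervention that cuts the harm
    probability by the fraction `reduction`, discounted by causal confidence c."""
    return c * (p_harm * reduction) * harm_cost

# Driving: causal link is near-certain.
print(expected_harm_averted(c=0.99, p_harm=0.02, harm_cost=500.0, reduction=0.5))
# A climate-style analogue with lower assumed causal certainty.
print(expected_harm_averted(c=0.5, p_harm=0.02, harm_cost=500.0, reduction=0.5))
```

The same intervention looks half as attractive when the causal confidence drops by half, which is the intuition behind demanding more certainty before adopting costlier regimes.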
So, in general, it looks as if there are three basic attitudes we can take to such risks.
1. Retain the status quo.
2. Mitigate: keep the activity, but add devices or practices that reduce the likelihood and severity of its negative consequences.
3. Eliminate: refrain from the activity altogether.
In each case, we have to consider viability of the proposed measures, costs and benefits, and do this for all involved parties. Using the car analogy again, it may be proposed that we require all cars in the world to have a certain set of precautionary gizmos. If so, we would have to consider whether or not such a thing is feasible and affordable across the globe, for all (or most) drivers, and how they should be provided with the gizmos.
Should the regime be regulated into effect? Who enforces? Who pays? Who can afford the new cars? Who cannot? Should we rely on market forces? If so, do we not retain the status quo? All these questions and others need to be addressed. We’d also have to establish that the gizmos would actually do the job.
In the case of climate change, we might have a proposal to eliminate oil, gas or coal fired power plants in favor of solar, wind or hydro powered plants. Less drastically, we might introduce mitigation requirements, such as “filters” on power plants, or we might introduce one of the so-called ‘carbon credit’ schemes. If so, we not only have to consider if the U.S. can afford such regimes, but Namibia or Sudan or Pakistan. We have to ascertain the likelihood of compliance across the lines of the world’s many sovereign nations. We have to compare the use this money would be put toward with other possible uses, and ask if it is overall the best ‘bang for the buck.’
It would seem that these proposals would be compelling only if it is the case that the consequences of inaction are perceived to be sufficiently dire and likely. Hollywood analogy time: It would have to be something as dire and likely as the scenario presented in the Bruce Willis film Armageddon. We would have to be in a position where we literally see the damn hammer of the gods hurtling toward us. Only then would we come together, override sovereignty or cost considerations and build the damn obelisk of Kirok.