Thursday, July 25, 2013
Bertrand Russell presents a famous example involving chickens in a barnyard in order to discuss the logic of inductive inference from repeated instances. It runs like this:
You are a chicken. Your life is quite comfortable, regular, and predictable. Your world is a large, warm enclosure within which you are free to roam, along with other chickens. Among the regularities is this: each day, shortly after sun-up, you are visited by a creature in overalls that feeds you and the other birds. This goes on day after day for years. Not that you are capable of counting, but many such days pass. You are comfy and contented. Life is pretty good.
Day 2460 rolls around. You wake as the environs brighten; you hear the familiar sound of the barn door opening and spy the familiar overall-clad creature. Now, what can you expect? What should you expect if you are a rational chicken?
Note your data set. You have a large set of experiences just like this one, at least in their beginning portions. For 2,459 repetitions, your days have started just like this. Each time, that overall-clad creature came in, accompanied by that same squeak of the barn door and the same ambient light level. Each and every time these conditions have been met, you received food. So, anticipating another repetition, you make your way to the creature, bounding happily, as chickens sometimes do. Quite suddenly your neck is most securely grasped, your body is violently jerked upward and around, your neck snaps; there is pain, then shock, then you lose consciousness.
You’re dinner for that evening.
If you are a particularly bright bird, it might dawn on you in that last fleeting moment that even though it was true that on each of those 2,459 past days you were fed and lived to see another, that record turned out to be no guarantee that things would continue just so into the future.
This example highlights the difference between deductive and inductive inference. Deductive arguments are such that the truth of the premises guarantees the truth of the conclusion, while this is not the case with inductive arguments. The truth of the premises, in such arguments, does not guarantee the truth of the conclusion. We would like it to be the case that the truth of the premises renders the conclusion likely, but we have to admit that there is no guarantee-level dead certainty on offer.
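The chicken's reasoning can be caricatured in a few lines of code. This is only an illustrative sketch of inductive inference, not anything from the original story beyond its day count: the pattern-matching rule and the variable names are invented for the example.

```python
# A caricature of the chicken's inductive inference.
# Premise data: on every observed day so far, dawn + farmer -> food.
observations = [{"dawn": True, "farmer": True, "fed": True} for _ in range(2459)]

def inductive_prediction(history):
    """Predict today's outcome by assuming the past pattern continues.

    This is induction: the premises (the history) can all be true
    while the conclusion (the prediction) is false.
    """
    if all(day["fed"] for day in history):
        return "fed"
    return "unknown"

prediction = inductive_prediction(observations)  # "fed"

# Day 2460: the same premises hold, but nothing entails the conclusion.
actual = "dinner"
print(prediction, "vs", actual)  # the inference was reasonable, and wrong
```

A deductive argument, by contrast, would be one in which no assignment of truth to the premises could leave the conclusion false; no such rule is available to the chicken here.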
There are some interesting corollaries to this example. It is used to point out that it is possible for rational agents to be in a position where they are confident they have all the data they need, indeed may believe there is no more relevant data to be had, but are very wrong about all this. It may well be that there is a larger context within which they reside that, if only known, would provide them with additional relevant data that would force a very different conclusion. If the chicken had known that beyond his ‘world’ the overall-clad creature not only ate, but ate chicken daily, had knives and other butchering tools, he might have run to the other side of the coop!
A second corollary, related to the first: notice that the chicken is not in a position to know these things, for two reasons.
He is physically confined so that he is only capable of a very limited set of experiences. Obviously, his data set is constrained for this reason.
Another reason it is constrained: he's a chicken. He ain't too bright. He is mentally unable to comprehend that human beings like eating chickens, and he does not and cannot comprehend human purposes in raising chickens. Indeed, he cannot grasp the concept of "raising" anything. These sorts of things are quite beyond his ken.
So, we can be in the epistemological dark for two reasons. First, our environment may be quite limited and not truly representative of the typical; for that reason we miss a great many things we would otherwise see. Second, we may be, to put it bluntly, too dim to know that we are in the dark.
This has application to human beings in two ways:
1. We may not have access to enough relevant data when we make inductive inferences in the sciences (pure or applied) or in politics, economics, or warfare. What is more, we may not know that we do not have enough relevant data, and may even be sure that we have all we would ever need. This can occur because our environment is sparse.
2. Additionally, it may be the case that we are constitutionally unable to comprehend significant chunks of relevant data that would render us able to produce sound judgments, due to cognitive, mental, or biological limitations. Otherwise put, we may be constitutionally unable even to comprehend that we lack these data, and cannot access them, even though they are part of the environment.
In the former case, if we somehow do discover the fact that there is more relevant data to be had, we will look back at our earlier selves with a bit of pity, while being more wary, and better informed for the future.
In the latter case, we will never know that we do not know these things. Never.
It is probably hubris to assume that all unknowns are of the first sort: things about which we can learn, things which we can ultimately comprehend. There is no good reason to think that we differ from the chicken so completely that there is absolutely nothing we cannot ultimately, fully comprehend. It seems highly doubtful that we would be that lucky.
On the other hand, it would behoove us to act as if we can discover and comprehend all the unknowns that exist. What other option is there but to try our darnedest? For it seems that we cannot know, with certainty, that we cannot know all there is out there. There is only one way to find out. Try. And if we fail, does that conclusively show that we cannot know all there is to know? No. We simply don't know that we cannot ever know all there is to know. That's all we would know. Know what I mean?