A common argument against consequentialist ethics is the empirical fact that most people give more weight to the interests of those they know than to the interests of those they don't. While not fatal, the objection has force, because most consequentialists treat as a fundamental premise the idea that all persons deserving of consideration should receive equal consideration. So when one points out that, in practice, most people rank the interests of friends and family above the interests of strangers and others outside their sphere of knowledge, the dedicated consequentialist usually has to retort that 'true ethics is not decided by popular vote', or that the human capacity to relate intensely to only about 150 other people is a mere biological and psychological limitation that should have no bearing on how one ought to act.
However, I believe a different counterargument can be raised: the consequentialist position may in fact _necessitate_ that a person give extra weight to the interests of those they know most closely. This new argument relies on a few extreme conditions that cannot yet be verified scientifically; nevertheless, I believe it will soon become clear to the reader that if these conditions are granted as premises, this way of looking at consequentialist ethics will be hard to dispute by appeal to weighted-consideration-of-interests arguments.
While not all consequentialist ethics are the same, they generally agree that in order to decide which set of consequences is preferable, we must have a way to measure those consequences against one another. Usually this is done with units of some type; for ease of understanding, I will use 'good' units in this argument, where consequences with a higher cardinality of 'good' units are strictly preferable to consequences with a lesser cardinality. (A set of higher cardinality is, roughly, one with 'more' members relative to another set.) My reason for using mathematical terminology here will soon become plain. I am avoiding the ordinary word 'maximize' for two reasons: first, there may be multiple realizations of a maximized result; second, there may be no maximally realized result at all. Again, this will become clear momentarily.
Traditionally, consequentialist ideas have been described in terms that assume a finite number of possible consequences. For example, we may consider whether to do X or Y, where X has consequence X1 and Y has consequence Y1. Solving this conundrum consists only in comparing the cardinality of 'good' units in X1 and Y1 and determining which is greater; we would then choose the action associated with that consequence. Of course, reality is far more complicated: it is nearly always extremely difficult to predict consequences accurately, and the number of possible actions is limited mostly by one's imagination. Nevertheless, the basic idea remains: were we able to predict the future and compare the cardinalities of all possible consequences, we could choose whichever actions had the highest cardinality (in a finite set of options, at least one, and possibly all, will share the highest cardinality). In this finite case, it makes sense to speak of 'maximizing' the cardinality.
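As a minimal sketch of this finite decision procedure (the action names and 'good' totals below are hypothetical, invented purely for illustration):

```python
# Finite case: given a predicted total of 'good' units for each available
# action's consequence, choose any action whose total is highest.
# All names and numbers here are hypothetical.
predicted_good = {
    "X": 40,  # 'good' units in consequence X1
    "Y": 25,  # 'good' units in consequence Y1
    "Z": 40,  # ties are possible: several actions may share the maximum
}

best = max(predicted_good.values())
best_actions = [a for a, g in predicted_good.items() if g == best]
print(best_actions)  # ['X', 'Z'] -- in a finite set, a maximum always exists
```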
But it is possible that the total set of available actions is not finite but infinite. In that case, 'maximizing' may no longer make sense at all, since there might always be some action producing a consequence of higher cardinality than whichever action you choose. In other words, in the special case where there are infinitely many possible actions, there may be no 'best' choice, just as there is no 'highest' number.
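A worked toy example of this failure, under the essay's assumptions: suppose that for every natural number $n$ there is an available action $a_n$ whose consequence contains exactly $n$ units of 'good'. Then

$$\operatorname{good}(a_{n+1}) > \operatorname{good}(a_n) \text{ for every } n, \qquad \sup_n \operatorname{good}(a_n) = \infty,$$

so whichever action you pick, another available action beats it, and 'the best choice' picks out nothing.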
(It should be noted that a highest (single or tied) cardinality can still exist even when the number of possible actions is infinite; it is simply no longer guaranteed, as it is when the number of available actions is finite.)
More alarming still is the possibility that the total cardinality of 'good' units is itself infinite. If the total sum of 'good' is infinite, then choosing a consequence in which unjust genocide occurs (normally a net negative) or one in which world peace comes about (normally a net positive) makes no difference at all! For if the total cardinality of 'good' is infinite, then adding any finite amount, no matter how large, or even adding an infinite amount, leaves the total unchanged; likewise, subtracting any finite amount makes no difference, and even removing an infinite amount need not change it. Mathematically, the cardinality remains the same.
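In the language of cardinal arithmetic (I take $\aleph_0$, the cardinality of the integers, as a stand-in for the essay's countably infinite total of 'good'):

$$\aleph_0 + n = \aleph_0 \ \text{ for every finite } n, \qquad \aleph_0 + \aleph_0 = \aleph_0, \qquad \aleph_0 - n = \aleph_0.$$

Neither a finite atrocity nor a finite triumph, nor even a countably infinite addition, registers in the total.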
(Before continuing this argument, I would like to address whether an infinite number of possible actions is possible in principle. If units of space are infinitely divisible, then an infinite number of possible actions obviously follows. But it is not clear that space is infinitely divisible; quantum physics, at least, seems to suggest that at some minimal distance further subdivision is not possible (or at least that further subdivision does not alter observable causation). But this is not the only way an infinite number of choices might arise. There may also be an infinite amount of space, a concept that cutting-edge physics seems to find much more plausible, whether in the form of a many-worlds interpretation of quantum theory or an ever-expanding universe that spawns new 'big bangs'. And even if you find both of these types of infinity hard to swallow, there is the idea that time may continue indefinitely; even pre-Einsteinian physicists were ready to believe in a steady-state universe of infinite duration, and I suspect most people would admit this kind of infinity even if they dismissed the other forms.
In the case of infinitely divisible matter, the first half of the above argument, concerning the impossibility of a maximal 'good' consequence, is most relevant; the second half, concerning an infinite total amount of 'good', cannot arise if the universe is finite in both extent and duration. In the case of an infinitely large or infinitely long-lived universe, by contrast, there may be only a finite number of actions one can take, yet the total 'good' in universal terms may well be infinite, so the second half of the argument applies perfectly well while the first half does not seem to. For the rest of this essay, I will concentrate only on universes of infinite extent or duration.)
Assuming some possibility of infinite extent or duration in the universe, and further assuming that the total cardinality of 'good' units in the universe is infinite (this is possible, I hasten to note, even if 'good' is scarce in the universe, just as the primes are scarce among the integers even though both are infinite), consequentialism seems to fall apart completely. No longer does the addition of any amount of good or ill make any difference to the total 'good'; even if you chose an action that made the universe infinitely worse off, consequentialist ethics could not warn you against that choice (because removing even an infinite quantity of 'good' can leave the total cardinality unchanged). But this seemingly insurmountable objection can be avoided in two different ways.
First, there is the psychological defense. As stated earlier, humans, for biological reasons, are capable of keeping only about 150 other people in view at one time. While this makes no difference to what ethical prescriptions may be, it does mean we are incapable of seeing the infinite extent of 'good' all at once. We instead see portions of the 'good', and we can see quantifiable results by adding to that finite portion through acts chosen on consequentialist grounds. While such acts do not affect the total 'good', they increase the amount of 'good' in a given area, making it denser in the region of our surroundings. I am not arguing that we therefore should attend to our surroundings while ignoring everything else, but this does explain why in practice most people act this way. It is rational to expect the average consequentialist to feel she is making a difference by acting locally even while ignoring more pressing issues halfway around the world. As oxymoronic as that once sounded, the concept of an infinite amount of 'good' makes it possible to be regionally minded and rationally consequentialist at the same time.
Second, there is the mathematical defense, which is not descriptive like the psychological defense but purely prescriptive: when cardinalities are equal, we should choose the class of action which, when universally applied, has the consequence of possibly bringing about the greatest density of 'good'. This overly compressed statement requires clarification.
If you were asked to choose a random integer (assuming we both understand what 'random' means when used in this sense), the probability of your chosen integer being prime is effectively zero (assuming we both understand 'zero probability' to mean what it means in this sense), because the non-primes so overwhelm the primes at the higher end of the scale. But you would have a full 50% chance of your random integer being even. This is what is meant by 'density' here: the evens are far denser among the integers than the primes are. And this is true even though all three sets, the integers, the primes, and the evens, have the same cardinality.
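The standard way to make this notion precise is natural density (the formalism is my addition, but it matches the sense intended here): for a set $A$ of positive integers,

$$d(A) = \lim_{n \to \infty} \frac{|A \cap \{1, \dots, n\}|}{n}$$

whenever the limit exists. The evens have $d = 1/2$; the primes have $d = 0$, since by the prime number theorem the count of primes up to $n$ grows only like $n / \ln n$. Yet all three sets share the cardinality $\aleph_0$.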
So if there is an infinite amount of 'good', then while no single action can change the cardinality of 'good', there may be _classes_ of actions which, when universally applied, can modify the density of that 'good'.
Classes of action make no difference in terms of cardinality: if I choose the action that brings about world peace, the cardinality of 'good' may not change, even if I chose that action on the basis of a rule saying we should always bring about world peace when possible. In other words, even if world peace came about everywhere, at all times, the cardinality would not change. In this sense, talking about classes of actions does not help. (Infinity plus infinity is still just infinity.)
But in terms of density, classes of actions can make an enormous difference. Using the same example: if world peace came about everywhere, at all times, then, depending on how constant world peace relates to the other possible actions, the class of action 'always bring about world peace' may increase the density of 'good' even while leaving the cardinality the same. Thus we would want to take that class of action, knowing that if it is always followed, the density of 'good' may increase.
You'll have noticed, I'm sure, that I keep using the modifier 'may'. This is because I cannot think of a reliable way to demonstrate that a particular class of action will increase the density of 'good' when universally applied. Nevertheless, it _may_ increase the density of 'good', and that is enough to justify adopting it, given that the alternative is definitely not increasing the density of 'good' at all. In general, if a class of action is evenly dispersed among the possible actions, that class will add to the density of 'good'. If its dispersion diminishes, the rate matters: if the gaps between its instances grow without bound (as with the primes), it adds nothing to the density, while if the gaps remain bounded, it does add to the density. As for how to tell whether a class of action is evenly dispersed, let alone how its gaps behave, I have no idea. (For example, imagine the 'good' is 0 mod 3, every third integer, making the density 1/3. Choosing a class of action that also makes 1 mod 3 'good' will increase the density to 2/3 (even 1 mod 27 will increase it to 1/3 + 1/27). But choosing a class of action that makes the primes 'good' adds exactly zero to the original 1/3, and the density remains the same.)
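To make the parenthetical example concrete, here is a short sketch (hypothetical and illustrative only; the cutoff N is arbitrary) that estimates these densities empirically:

```python
# Empirical densities of the classes in the mod-3 example, among 1..N.
# N is an arbitrary cutoff; the exact densities are the limits as N grows.
N = 10**6

# Sieve of Eratosthenes: prime[k] is True iff k is prime.
prime = [True] * (N + 1)
prime[0] = prime[1] = False
for p in range(2, int(N**0.5) + 1):
    if prime[p]:
        for m in range(p * p, N + 1, p):
            prime[m] = False

base    = sum(1 for k in range(1, N + 1) if k % 3 == 0) / N
with_27 = sum(1 for k in range(1, N + 1) if k % 3 == 0 or k % 27 == 1) / N
with_pr = sum(1 for k in range(1, N + 1) if k % 3 == 0 or prime[k]) / N

print(f"0 mod 3:        {base:.4f}")     # ~0.3333 -> 1/3
print(f"plus 1 mod 27:  {with_27:.4f}")  # ~0.3704 -> 1/3 + 1/27
print(f"plus primes:    {with_pr:.4f}")  # ~0.4118 here, but tends back to 1/3
```

The primes inflate the count at any finite cutoff, but their share shrinks toward zero as N grows, which is why they add nothing to the limiting density.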
In practice, choosing a class of action that, when universalized, may increase the density of 'good' could take the form of 'help your neighbor' or 'give preference to your family'. In this sense, the consequentialist may rationally give preference to her own while ignoring the admittedly greater needs of strangers. Helping the strangers may bring about a larger finite amount of 'good', but since the resulting cardinality equals that of helping one's own, this is not a pressing difference. On the other hand, helping strangers may increase the density of 'good', but it is not clear that the increase would be greater than that of helping one's own instead! Indeed, it seems possible that 'helping one's own' and 'helping strangers' increase the density of 'good' equally well, in which case choosing between the two ideals would be entirely arbitrary. (Of course, it may turn out that they increase the density by different amounts, in which case consequentialism would demand one choice over the other. The point is that it remains unclear how each choice affects density, so one cannot blindly say that helping whoever needs help most takes precedence.)
These, then, are the two arguments, psychological and mathematical, descriptive and prescriptive respectively, that show why a rational consequentialist might weight the interests of her own more heavily than those of strangers when faced with the possibility of a universe infinite in extent or duration in which the total 'good' is likewise infinite. With this additional reply added to their arsenal, I imagine consequentialists will have an easier time defending their doctrine against those who argue that friendship comes first.