ACT UTILITARIANISM AND DECISION PROCEDURES

Robert L. Frazier

Magdalen College, Oxford

10 August 1993
notes revised 17 November 1993

.

A standard objection to act utilitarian theories is that they are not helpful in deciding what it is morally permissible for us to do when we actually have to make a choice between alternatives.$^1\:$ That is, such theories are worthless as decision procedures. A standard reply to this objection is that act utilitarian theories can be evaluated solely as theories about right-making characteristics and, when so evaluated, their inadequacy as decision procedures is irrelevant.$^2\:$ Even if somewhat unappealing, this is an effective reply to the standard objection.

Some philosophers want the advantage of making this distinction as well as the benefit of having an acceptable decision procedure. Peter Railton, for example, makes the distinction and says that it is `an empirical question (though not an easy one) which modes of decision making should be employed and when'.$^3\:$ I shall argue that the cost of act utilitarianism's impracticability is higher than mere difficulty in deciding which decision procedure to employ. A consequence of act utilitarianism's impracticability is that, if act utilitarianism were true, we would have inadequate reason to believe that our moral decisions were correct. I now turn to this revised impracticability argument.

.

A Revised Impracticability Argument

(1) Act utilitarianism cannot have a practicable, validated ethical decision procedure.

(2) If act utilitarianism cannot have a practicable, validated ethical decision procedure, then we cannot be justified in believing of any act that it satisfies act utilitarianism's conditions for moral permissibility.

(3) If we cannot be justified in believing of any act that it satisfies act utilitarianism's conditions for moral permissibility, then either act utilitarianism is not true, or we cannot be justified in believing of any act that performing it is morally permissible.

(4) Therefore, either act utilitarianism is not true, or we cannot be justified in believing of any act that performing it is morally permissible.
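The argument's form can be set out in symbols (the abbreviations are mine and carry no weight beyond bookkeeping). Let $D$ be the claim that act utilitarianism can have a practicable, validated ethical decision procedure, $J$ the claim that we can be justified in believing of some act that it satisfies act utilitarianism's conditions for moral permissibility, $T$ the claim that act utilitarianism is true, and $M$ the claim that we can be justified in believing of some act that performing it is morally permissible. Premises (1)-(3) are then $\neg D$, $\neg D \rightarrow \neg J$ and $\neg J \rightarrow (\neg T \vee \neg M)$, and the conclusion (4), $\neg T \vee \neg M$, follows by two applications of modus ponens.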

In my defence of this argument I will focus on premises (1) and (2). Premise (3) seems reasonable and I will have nothing to say about it. After my defence of the premises, I will consider a general objection to the argument. I will start by explaining some notions that I employ. In particular, I should say what I take act utilitarianism to be, what an ethical decision procedure is, what makes one practicable, and when one is validated.

My argument is intended to apply to a suite of act utilitarian theories. All of the relevant theories connect the moral permissibility of acts with their utility. They differ, however, in their views about what determines the utility of an act. For example, some propose that the utility of an act is a function of a number of discrete features, while others say that it is a function only of the welfare or happiness it produces.$^4\:$

Another feature common to this suite of views is their proposal that objective or actual utility, not subjective or expected utility, is the relevant utility. Indeed, a way to avoid the criticism that I am levelling against act utilitarianism, though not one without its own costs, is to adopt a different utilitarian theory, one that says that agents are morally permitted to perform an action just in case it is reasonable to believe that it will have at least as good consequences as any of its alternatives.$^5\:$ The utilitarian views that I have in mind are also impersonal. What matters is the amount of utility (aggregate, average, etc.), not who acts or benefits from an act.

I characterize this suite of views, AU, as follows:

AU: For any person, $s$, and act, $a$, it is morally permissible for $s$ to do $a$ if and only if there is no act that $s$ could do instead that has greater utility than does $a$.
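In symbols (the notation is mine, not part of AU): where $A_s$ is the set of acts available to $s$ and $u(x)$ is the utility of an act $x$, AU says that it is morally permissible for $s$ to do $a$ if and only if $\neg\exists a' \in A_s \, (u(a') > u(a))$; equivalently, assuming the utilities of alternatives are fully comparable, if and only if $u(a) \ge u(a')$ for every $a' \in A_s$.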

In general, a decision procedure is a method for determining an answer to questions for which it is a decision procedure. In particular, an ethical decision procedure (EDP) gives an answer about the moral permissibility of an act. An EDP is practicable when we are able to use it. It need not be easy to use, nor must it be possible to use it in extraordinary situations. But in ordinary situations, even if it takes considerable effort, we must be able to apply the method. Applying it must be practically possible. An EDP is validated when we have good reason to believe that it gives the correct answer to questions for which it is a decision procedure. Of course, this does not mean that we have to be certain that it gives the correct answer. That would be to require too much. But we must have a reasonable amount of evidence that it is accurate.

With these preliminaries out of the way, I can turn to the defence of premise (1).

Premise (1)

The defence has two parts. The first part is to argue that if we consider AU as an EDP, we find that it is impracticable. The second part of the defence is to argue that since AU is impracticable, no other EDP for AU can be validated. If AU is impracticable and no other EDP for AU can be validated, then, clearly, AU cannot have a practicable, validated EDP.

There are at least three approaches that can be used to support the claim that AU is impracticable as an EDP. The first is to argue that, since it is impossible to determine uniquely our alternatives at a time (there is no unique set of alternatives), it is impossible to arrive at a unique determination of the relative utilities of our alternatives. The second relies on the view that it is practically impossible to determine all of the consequences of our alternatives (even if we can determine what these alternatives are). The third approach is to argue that we cannot determine the relative utilities, or values, of alternatives, even of those whose consequences we know. Although I believe that each approach could be fruitful, I will concentrate on the second.$^6\:$ Keep in mind that the relevant claim is not that it is merely very difficult to determine all of the consequences of our alternatives, but that it is practically impossible to do so.$^7\:$

The practical impossibility of ascertaining the consequences of our alternatives results, largely, from the transitivity of causation. Since causation is transitive, every event that succeeds an act and is in a causal chain with that act is a consequence of that act. Most acts are parts of a very large number of causal chains and these causal chains can go on indefinitely. Consequently, most acts have an abundance of consequences and each consequence can be relevant to the utility of the act. Jonathan Bennett seems to recognize this when he says, `I sometimes feel oppressed by the thought that so much of the world's future history will have been caused by (among other things) my conduct ...'.$^8\:$

The better to see just how difficult it is to determine the consequences of acts and, therefore, their utilities, let us consider a couple of acts, their alternatives and their consequences. Let us consider a famous act, which we would suppose to have a lot of significant consequences, and a less famous act. Considering these two examples does not, of course, establish that we cannot determine the consequences of our actions. It merely displays some difficulties encountered in trying to do so. Indeed, anyone providing a list of all of the consequences of some act would show, conclusively, that it is not practically impossible to do so.

The famous act is Socrates' drinking of the hemlock. Can we ascertain all of the consequences of this actual act? I believe not. Surely no one will claim actually to have done so. Of course we can ascertain some of them, although it may be difficult to say how much any event is a consequence of Socrates' act. For instance, it is hard to see how Plato would have produced the Crito if Socrates had not drunk the hemlock. Even if he had, it is hard to see how it would have been as important. This, however, is just the beginning of the consequences. Indeed, since Socrates' act is still being discussed, there is very little reason to believe that it will not go on having important consequences.

If we cannot ascertain all of the consequences of Socrates' drinking of the hemlock, we surely cannot ascertain what the consequences would have been had he not done so. For example, would all of philosophy consist of footnotes to Plato? The difficulties in determining the consequences of Socrates' alternatives make it impossible for us to ascertain their utilities (relative or absolute). The impossibility is not conceptual or metaphysical, merely practical.

We encounter similar problems with more prosaic acts. This morning I put some change in the right-hand pocket of my trousers. Can I ascertain the consequences of what I actually did? I do not even know when it stops having consequences. It and its alternatives may not have very significant consequences, but I do not believe that we can ascertain them, or that we can discover their relative utilities. Remember that according to AU the significance of an act does not, in itself, determine its permissibility; instead, it is the relative utility of the act that does so. Consequently, small differences between acts of little significance can make a moral difference.

It is important to note that I am not committed to saying that for all we know, any seemingly unimportant act may have titanic consequences. There may be some situations whose very nature may give us good reason to think that any act done will be insignificant in the larger scheme of things. I do not have strong views about whether this is so. However, I do believe that if we have some alternatives in such situations we cannot be justified in thinking of any of these alternatives that doing it will bring about at least as much utility as any of the others. Nor do I believe that, in situations where there may be titanic consequences, we can be justified in believing of any act that it will have at least as much utility as its alternatives.

By considering the nature of causality, we should conclude that it is practically impossible to ascertain all of the consequences of any action.$^9\:$

If AU, considered as a decision procedure, were practicable, but merely difficult to use, then it could be employed in validating EDPs that were easier to use. However, since it is not, it cannot. Is there some other way to validate an EDP for AU? We cannot test an EDP for AU directly, by considering what an act's actual consequences will be. This is because we cannot ascertain all of an act's remote consequences. So, if we are to be able to validate an EDP for AU, it must be that we can be justified in discounting the remote consequences of acts. I shall now argue that we cannot be justified in discounting remote consequences and hence that AU cannot have a validated EDP.

What reasons could one give for thinking that these remote consequences can be discounted?

As far as I can tell, there are three ways in which one might attempt to justify the total discounting of remote consequences. The first is to claim that remote consequences are insignificant. The second is to claim that the remote consequences of an act must have the same relative utility as its near consequences. And the third is to claim that remote consequences even out so that any good remote consequence is matched by an equally bad remote consequence.

These can be thought of as empirical claims, supported by an examination of a number of cases, as conceptual claims about the relationship between events, time and utility, or as assumptions. Regardless of how they are understood, we have little reason to accept them. Since they are most reasonably taken to be empirical claims, let us first decide whether we have good empirical evidence for them.

The empirical evidence needed to support any one of these claims can be obtained only by examining a number of cases to see what the relationship is between near and remote consequences. There is a major obstacle to meeting this requirement: most, if not all, acts are still having effects. Since we are attempting to justify the discounting of remote effects, we cannot assume that the effects more distant than now are not relevant. Consequently, we have a paucity of cases to examine and it is hard to see how we can test these claims.

Perhaps it would be good enough to test the claims partially. If we look at a number of acts performed long ago and find that one of the claims has held over a relatively long period of time, then we might have some justification for accepting it. G.E. Moore suggests something like this. He accepts the need for discounting future consequences, suggests that they are insignificant because `whatever action we now adopt, ``it will be all the same a hundred years hence''' and says that `this might, perhaps, be shewn to be true, by an investigation of the manner in which the effects of any particular event become neutralised by lapse of time'.$^{10}\:$ Moore does not report the results of such an investigation, which is not surprising given the difficulties in determining the consequences of actions. Unfortunately, however, Moore simply assumes that his claim is true and argues for substantive normative conclusions despite his statement that `[f]ailing such a proof, we can certainly have no rational ground for asserting that one of two alternatives is even probably right and another wrong'.$^{11}\:$ As far as I can tell, Moore is not alone and none of these claims has been established by empirical investigation.

If any of the three claims about the relationship between near and remote consequences is a conceptual claim, it is false. Consider first the claim that remote consequences are insignificant. It would seem that the remote consequences of an act could be more significant than the near ones simply by virtue of there being more of them. This is true of any act that is part of the beginning of a social, political, religious or scientific movement.

What about the claim that the relative utilities of the near and remote consequences of acts are the same? If this is a conceptual claim, it is a dubious one. It is not hard to imagine that the near consequences of using a certain pesticide to increase the food supply would be very good and the remote consequences very bad, while the near consequences of not using the pesticide would be quite bad and the remote consequences quite good.

Are we justified in believing that, after a time, the good and bad consequences of an act even out? Again, if this is a conceptual claim, it is false. Assume that it is a commonsense view that persons of one sex are mentally inferior to persons of another sex. Also, assume that this view had its roots in one act. It is not hard to imagine that there are many more bad remote consequences of this act than there are good remote consequences of it.

There is one additional strategy which might be employed by a proponent of any of these views about the relationship between near and remote consequences. It is suggested by J.J.C. Smart, who says that we should accept the claim that the remote consequences of actions are insignificant because they `approximate rapidly to zero like the furthermost ripples on a pond after a stone has been dropped into it'.$^{12}\:$ The strategy is to justify acceptance of the claim on theoretical grounds, namely that if it is not accepted, then the theory, AU, must be seen as impracticable. Smart says that this assumption is needed in order `to make utilitarianism workable in practice'.$^{13}\:$ Since I am arguing that AU's not being practicable has an interesting and, perhaps, unhappy consequence, it would be unreasonable for me simply to accept the truth of this assumption.

This concludes my defence of premise (1). I have considered whether AU can have a practicable, validated EDP and have attempted to establish that it cannot.

Premise (2)

Premise (2) relies on this assumption: we must have adequate evidence if we are to be justified in believing that a particular act satisfies AU's conditions for rightness.

Whatever method we use to acquire the relevant evidence would be a candidate for an EDP. But I have argued that, considered as an EDP, AU is impracticable, and, given this, no other EDP for AU could be validated. So there is no method that we can use for acquiring this evidence and we do not have reasonable grounds for believing that we ever have it. We cannot, therefore, be justified in believing of any particular act that it satisfies AU's conditions for moral permissibility.$^{14}\:$ If my reasons in support of this conclusion are good, then, it seems, we have to accept that we are pretty much in the dark about the relative utilities of all acts.

It might be objected that I am asking too much: where I appear to be asking for justification, I am actually demanding certainty. No one is going to argue that we can be certain about what is morally permissible. Reasonable act utilitarians will say only that we have enough evidence about the utilities of alternatives to make rational decisions about what is morally permissible. Surely this is good enough. Since my argument supposes the stronger claim, which no one makes, it is an argument against a straw man. Call this `the straw man objection'.

What evidence about utilities do proponents of this objection think allows these decisions to be rational? If the objection is to be successful, evidence about near consequences must suffice. Since evidence about near consequences can provide a preponderance of evidence that an alternative has maximal overall utility, perhaps this is adequate. So it is rational to decide that an alternative is morally permissible when there is a preponderance of evidence that it has maximal utility. If this is correct, it could be rational to choose an alternative solely because of the evidence about its near consequences.

On this view, what are we to say about remote consequences, which, for all we know, might affect the relative utilities of the alternatives? Since evidence concerning remote consequences is not available to us, it cannot be part of our evidence for or against some alternative's having maximal utility. Consequently, remote consequences can safely be ignored. They are to be ignored, not because they are irrelevant to an alternative's utility, but because they are irrelevant to rational decisions based on a desire to maximize utility.$^{15}\:$

The result of these considerations is, purportedly, that evidence about the near consequences of alternatives allows us to have beliefs about the overall utilities of alternatives that are sufficiently justified for decisions based on those beliefs to be rational. So, while the beliefs are not certain, they are justified enough to support rational decisions. It is in failing to recognize this that my argument is supposed to miss the mark.

In order to see how the straw man objection fails, we have to distinguish between rational justification for acting and epistemic justification for believing. These two sometimes come apart. We can be rationally justified in doing something, without being epistemically justified in believing that our decision is correct. My claim is not that, if AU is true, we can never be rationally justified in doing something (even on utilitarian grounds). It is that we would not be epistemically justified in believing that our action will maximize utility or is morally right.

To see how rational and epistemic justification can diverge, consider the following. We have a choice between incompatible alternatives, A, B and C. There is a .45 chance of A's being the alternative that maximizes utility, a .35 chance of B's being that alternative and a .20 chance of C's being that alternative. We know these probabilities and that we are trying to maximize utility. It seems that it would be uniquely rational to do A. Do we have good reason to believe that doing A will maximize utility? I think not. We know that the chance of A's maximizing utility is less than the chance of its not doing so: there is a .55 chance that B or C will be the alternative that maximizes utility. So we are rationally justified in doing A, but not epistemically justified in believing that A is the alternative that will maximize utility. Rational justification and epistemic justification can diverge.
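The arithmetic behind this example makes the divergence explicit (the figures are those just given, and I read `uniquely rational' here as choosing the alternative most likely to be the maximizer): the probability that A is the utility-maximizing alternative is $.45$, which exceeds both $.35$ (for B) and $.20$ (for C), so A is the choice to make; yet the probability that A is not the maximizing alternative is $.35 + .20 = .55 > .45$, so it is more likely than not that some other alternative maximizes utility.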

I have been arguing that, in general, our available evidence is woefully inadequate to justify belief about the utilities of our alternatives. My strategy has been to try to show that we do not have sufficient reason to believe that evidence about near consequences is adequate when trying to determine relative utility. So, although the evidence that we have might seem to indicate that some alternative has the most utility, this evidence is undermined by our knowledge that we do not have good evidence about the remote consequences of the alternative and that the remote consequences are relevant.

This does not undermine the claim that it is rational to decide what to do based on our knowledge of near consequences. If we have to make a decision, it is rational to act on the best information we have, even if we know that our information is completely inadequate. It is unfortunate, but true, that we often have to make decisions, perhaps rational decisions, with evidence that is insufficient to justify the belief that we have made the correct decision. Indeed, we sometimes have to make decisions knowing that we are likely to be wrong. My view is that, if AU is correct, this is true of moral decisions.

Of course there is some connection between rational and epistemic justification. If moral decisions are rationally justified, but we are not epistemically justified in believing that they will have the results we hope for, that they are rational is of little comfort.

.

It might be thought that my argument relies on a general skepticism with respect to induction and knowledge of the future. If it does, then it is boring and should be ignored. An argument about epistemic justification in moral theory that relied on such sweeping considerations would be uninteresting, even if such a general skepticism is defensible. It would be uninteresting because it would amount to pointing out a trivial consequence of accepting this kind of skepticism.

Why might my argument be thought to rely on such a general skepticism? The reasoning might go like this: If induction is justified and we can have justified beliefs about the relative utilities of past acts according to their types, then we can have justified beliefs about the relative utilities of present alternative acts of those types. Your argument rejects the consequent of this conditional, so it rejects the antecedent: either induction is not justified or we cannot have justified beliefs about the relative utilities of past acts according to their types. Since it is reasonable to believe that we can have justified beliefs about the relative utilities of past acts according to their types, you must reject the view that induction is justified. That is, you must accept skepticism with respect to induction and knowledge of the future.

I do reject the antecedent of the conditional, but not, however, its inductive conjunct. Instead, I reject the claim, understood in the proper manner, that we have justified beliefs about the relative utilities of past acts and, therefore, their relative utilities according to type.

Observed or experienced consequences of actions are supposed to provide the base for induction. As time passes, however, the consequences of an act become more and more intermingled with the consequences of other acts; it becomes more and more difficult to apportion responsibility between acts and to know whether and to what degree some event is the consequence of some act. So, considering an earlier example, it is difficult to determine the consequences and relative utilities of Socrates' drinking or not drinking the hemlock. Indeed, it is even likely that most past acts are still having consequences and will do so into the indefinite future. Unless we can discount the consequences of acts from the time that determining them becomes practically impossible, we cannot be in a position to make justified claims about the relative utilities of past acts. And I have argued that we do not have either empirical or conceptual grounds for discounting remote consequences. Consequently, we are not in a position to make justified claims about the overall relative utilities of acts, past or present.

I am not skeptical about induction; rather, I am skeptical about our ability to have justified beliefs about the overall consequences of any action. Mill's claim that for `the whole past duration of the human species ... mankind have been learning by experience the tendencies of actions' may, on one interpretation, be true.$^{16}\:$ But it is true only if it is understood as a claim about the near consequences of actions.

.

If AU characterizes the correct moral theory, then in order to be justified in believing that it is morally permissible to perform a particular act, we have to be justified in believing that that act has at least as much utility as any of its alternatives. I have argued that since we cannot determine all of the consequences of actions and cannot justify discounting their remote consequences, we cannot be justified in believing of any particular act that it has at least as much utility as any of its alternatives. If I am right, then someone who accepts AU should accept that we are in the dark about what we are morally permitted to do. Indeed, anyone who thinks that the actual consequences of acts are morally significant may have to accept this.$^{17}\:$

Notes

$^1\:$ See, for example, J.L. Mackie, Ethics: Inventing Right and Wrong, Harmondsworth, 1977, p. 129, where he says that even if problems concerning the calculation of utility were solved `there would still be a fatal objection to the resulting act utilitarian system. It would be wholly impracticable'.

$^2\:$ See R. Eugene Bales, `Act-Utilitarianism: Account of Right-Making Characteristics or Decision-Making Procedure?', American Philosophical Quarterly, viii (1971), 257-265; David O. Brink, Moral Realism and the Foundations of Ethics, Cambridge, 1989, pp. 256-261; and James Griffin, Well-Being, Oxford, 1986, pp. 195-206.

$^3\:$ Peter Railton, `Alienation, Consequentialism and the Demands of Morality', Philosophy and Public Affairs, xiii (1984), p. 117.

$^4\:$ G.E. Moore, Principia Ethica, Cambridge, 1903, is a utilitarian of the former kind, while, perhaps, Griffin is a utilitarian of the latter kind.

$^5\:$ For an excellent discussion of this strategy and the costs associated with it see John Broome, Weighing Goods, Oxford, 1991, ch. 5.

$^6\:$ For an interesting discussion of the difficulties associated with specifying a unique set of alternatives see Lars Bergström, `On the Formulation and Application of Utilitarianism', Noûs, x (1976), 121-144. For a discussion of the problem of determining utilities of acts whose consequences we know see Griffin, `Part Two: Measurement', pp. 27-124.

$^7\:$ Throughout I am taking it that there is a distinction between an alternative and its consequences. If alternatives are characterized in terms of acts and their consequences, then my claim would be that it is practically impossible to determine our alternatives.

$^8\:$ See Jonathan Bennett, Events and Their Names, Indianapolis, 1988, p. 49. For a discussion of the transitivity of causation see pages 46-49 of the same work.

$^9\:$ There is one kind of possible situation that represents an exception to my claim. It seems that there may be situations where we are justified in believing that an act and its alternatives will have no further consequences at all. For example, we might know that the end of the earth was imminent. This is not our ordinary situation.

$^{10}\:$ Moore, pp. 151-153.

$^{11}\:$ Moore, p. 153.

$^{12}\:$ J.J.C. Smart and Bernard Williams, Utilitarianism: For and Against, Cambridge, 1973, p. 33.

$^{13}\:$ Smart, p. 34. He concedes that there may be cases where the remote consequences do not diminish in significance. One such case would be Adam and Eve's production of offspring. It is not clear whether he thinks the principle actually is true, or whether he thinks that we should make our choices as if it were. He does say that he does not `know how to prove such a postulate, though it seems plausible enough' (p. 34).

$^{14}\:$ I am assuming that if you do not have good reason for believing that you are justified in believing a proposition, then you are not justified in believing the proposition. If this is not acceptable, then the argument can be modified.

$^{15}\:$ I believe that forms of this argument have been suggested to me by various philosophers, including Alan Fuchs, Heidi Malm, and Roger Crisp. Crisp also suggested that this argument can be found in Moore. In my discussion of discounting I gave what I believe to be a correct account of Moore's argument. As I see it, Moore recognizes that our being justified in ignoring remote consequences is conditional on showing that over a period of time the consequences become less significant. He gives his reasons for believing that showing this is possible, but does not claim that he has shown it or that anyone else has. However, the rest of his discussion assumes that it has been shown. This, I believe, gives the false appearance that he is claiming that we can simply ignore remote consequences.

$^{16}\:$ J.S. Mill, Utilitarianism, ed. John M. Robson, Toronto, 1965, Collected Works of John Stuart Mill, x. 224.

$^{17}\:$ Versions of this paper were presented at the 1990 meeting of the Virginia Philosophical Association, the 1991 Pacific Division meeting of the APA and the Wolfson Philosophical Society (Oxford). I thank the participants in those meetings for their comments, in particular Alan Fuchs, Tony Ellis, Roger Crisp and Heidi Malm (who served as commentator at the APA meeting). Also, I am grateful to others who have provided comments on various versions of this paper, especially Brad Hooker, Peter Vallentyne and Penelope Mackie.


