What Can Ethics Courses Achieve?

In a Houston Chronicle commentary titled “Where’s Right and Wrong in Ethics?” Donald Bates explores why required university courses in ethics fail to produce ethical business practices. Bates lists many familiar examples of unethical behavior in public life (Enron and WorldCom, for example) and conveniently blames them on the separation of church and state.

Bates claims that ethics is taught from a position of utilitarianism (the greatest good for the greatest number), egoism (what most benefits the long-term interest of the individual), rights (deontological forms of duty to others or entitlements for oneself), or abstract principles of justice. This is his first mistake. University ethics courses do teach the theories Bates lists, though his list is far from exhaustive, but ethics instructors are not wont to teach “from a perspective.” To understand the study of ethics, students must be familiar with competing theories, but universities provide education, not indoctrination.

Bates goes on to say, “Trying to teach ethics without a religious underpinning means absolutes do not exist, everything is situational.” This is his second mistake. The fact that many competing ethical theories (and religions, for that matter) have emerged over the centuries is not evidence that absolutes do not exist. It is evidence only that absolutes are extremely difficult to discover and agree upon. To teach ethics from a “religious underpinning” is to teach from a standpoint of absolute knowledge of right and wrong, good and bad, which would require professors to claim to know the mind of God, a claim that would rightly be met with suspicion.

Expecting ethics courses to make the world more ethical is a little like expecting professional athletes and pop stars to be good role models. Ethical solutions and agreement are not easy to come by. Claiming that the state should enforce morality founded in religion raises the question of which religious perspective is correct and who will decide on the proper perspective. Rational people of good will disagree on ethical practices every day, and this disagreement is a sound foundation for a pluralistic society.

If we are lucky, we might be able to teach a few students a little humility and respect for the efforts of others to discover right from wrong. Many students will claim that ethics is just a matter of common sense. Oddly enough, Bates seems to agree. In each case he presents, he believes there is a universally accepted opinion of what is right and wrong. If he is correct, students do not need to be taught what is right; they need to be prevented from being evil. It is unlikely that the leaders of Enron and WorldCom made a mistake in ethical reasoning. More likely, they decided to do something that showed no concern for the harm it caused others.

An ethical society requires skeptical humility from its leaders and educators, recognition of the humanity of others, and a desire to limit harm to all. This lesson is not easily taught, but it is easily shared through the way we live.

Future of Bioethics

Many problems of bioethics revolve around the value of life. Many bioethicists accept the Judeo-Christian view that human life, and human life only, has great intrinsic value. As a corollary, it is taken that anything that is both alive and human possesses a right to respect and continued life.

These assumptions are powerful and pervasive, but they go against the intuitions of many people. The assumption that human life has great value, and is even sacred, would seem to imply that it is proper to create as much human life as possible, but only a few people actually believe this. The prevalence of contraception and the encouragement of abstinence betray an underlying belief that not every human life is of great value simply because it is possible for it to exist.

Similarly, rights are not granted uniformly to all that are human and alive, although many pretend that they are. When consciousness ceases to exist or fails to begin in living human tissue, many people will regard the being as perhaps worthy of dignified treatment, but the idea that it is of the same value as all other human life is not reflected in the everyday actions of most people.

Concern for the “right to die” in some circumstances also implies a rejection of the view that life is sacred in all cases. Alternative views of the value of life can be useful in resolving the apparent contradiction between the actions many people take and their declared respect for life and individual rights. Not all people see life as sacred and valuable. The first noble truth of Buddhism, for example, is that life is suffering. We seek continued existence as a result of desire, which intensifies our suffering. Life becomes valuable, then, because it fulfills a desire that is itself irrational. Other views see life as the inevitable consequence of physical laws or nature. The fact that humans exist and desire life is a brute fact that is morally significant only because of the suffering generated by the desire for life.

We may recognize that life is valuable for reasons that are not metaphysical. A pre-embryonic collection of cells may be of great moral significance to a certain man who is hoping, with a bit of desperation, to become a father and see his child before he succumbs to a life-threatening disease himself. For this man, these human cells are not morally significant because they are endowed with rights and dignity at their first creation. He is not concerned with the metaphysical status of the cells. He is concerned, instead, with their ontological status. They exist and he wants them to survive because he is interested in their continued existence. In this case, we may feel morally obliged to take great measures to ensure the survival of these cells because they mean so much to this hopeful father. We are concerned for this father and he is concerned for his progeny. The moral commitment arises from concrete human relationships.

For similar reasons, non-human life may become of great moral concern to us. Police officers who have worked with service animals for many years will often refer to a deceased animal as a “partner” and such animals sometimes receive funerals and memorials. Few would claim that service animals are accorded respect and value because of the sanctity of life.

In both the cases I’ve given above, it can be claimed that the duties accorded to life are indirect duties to the ones who care about that life. While that is true, the moral commitment could also arise from a direct concern for a life. An individual may value her own life because she enjoys being alive and wants to continue her existence. Her own concern for her life makes her life something of value. Out of a concern to reduce her suffering at the thought that her life may not be preserved, medical professionals will devote themselves to preserving her life.

In cases such as those outlined above, it is compassion, sympathy, empathy, or care that creates moral demands for the preservation of life. This view of the value of life will not appease the demanding vitalist, but it may be accepted by many people from different faiths and philosophical backgrounds. It helps us reconcile the strong drive to preserve and extend life with the beliefs that some people have a right to die, that some non-human life deserves extraordinary care and respect, and that some human cells are precious while others are less so.

Cartesian Ethics: Concern for Self and Others

When examining the ethics of Descartes, it is easy to focus exclusively on his interest in virtue and his concern for self-interest. As a result, discussions of his ethics often exhibit what Cecilia Wee, in her essay “Self, Other, and Community in Cartesian Ethics,” calls “a persistent image of the Cartesian agent as a selfish or egoistical individualist” (255). According to Descartes, by using reason to understand the nature of God and the moral order of the universe, humans are able to control their passions and accept their fate in life with equanimity. We may not be happy in the sense of being exuberant; rather, the virtuous person is rewarded with a “satisfied mind.” Virtue and contentment are not the end of the story, however. In a typical description, Donald Rutherford characterizes Cartesian ethics in this manner:

In agreement with the ancients, he takes philosophy’s practical goal to be the realization of a happy life: one in which we enjoy the best existence that a human being can hope to achieve. Descartes characterizes this life in terms of a type of mental flourishing, which he calls “contentment of mind,” or “tranquility.” Here the influence of Stoic and Epicurean ethics is evident. (1)

In Descartes’s philosophy we find some echoes of previous moral views such as the virtue ethics of the ancients, but we also see that Descartes anticipated many modern moral theories, including Kant’s respect for persons and utilitarianism.

Many have lamented the fact that Descartes was never able to develop his moral theory in the formal manner of his metaphysics and epistemology. There is no cause for despair, however, as Descartes has left us plenty of grist for the mill. If we take as a given that the aim of philosophy is to enable us to live better lives, we can not only examine Descartes’s comments on morality but must also evaluate how his metaphysical and epistemological claims promote eudaimonia. By this, we do not mean to ask whether Descartes has found a way to make us all happy but whether he promotes a sense of being better off and flourishing, for it is clear that cheerfulness is not the supreme good. He writes to Princess Elizabeth, “Seeing that it is a greater perfection to know the truth than to be ignorant of it, even when it is to our disadvantage, I must conclude that it is better to be less cheerful and possess more knowledge” (CSM III, 268).

In this, we can see that epistemological concerns have a moral dimension for Descartes, so it is appropriate to evaluate his epistemology from a normative standpoint. His epistemology, of course, rests on his metaphysical assumptions. For the purposes of this paper, I will take his metaphysical assumptions as discoveries that are proven through his writings. He counsels Elizabeth that although there are many things we cannot know, we must content ourselves with the most useful truths. Most importantly, he argues that “there is a God on whom all things depend, whose perfections are infinite, whose power is immense, and whose decrees are infallible” (CSM III, 265).

If we take Descartes at his word and accept that the provisional morality expressed in the Discourse is truly provisional, then the value of examining his comments there is dubious. On one view, the provisional morality is nothing more than a literary or rhetorical device designed to heighten excitement for the epistemological project. The need to adopt a provisional morality implies that skepticism is extremely risky; with the proper preparation, however, one can be sure to make things no worse than they are. Descartes reassures his readers both that he is doing something extremely bold and that he is taking no chances with the stability of society and morality. He promises that he will not abandon all standards of behavior while completing the skeptical experiment. On the other hand, he does seem to be claiming that the provisional morality is good enough for himself until his metaphysical and epistemological foundations can be established.

Further, he suggests that others may choose to embark on a similar project, so he seems to be endorsing his provisional morality for anyone with the proper will and disposition for the project. Excluded from the endeavor, as he notes in the Discourse, are those who believe themselves “cleverer than they are,” so that they judge too hastily, and those who recognize that they do better to “follow the opinions of others rather than seek better opinions themselves” (CSM I, 118). For himself, and for anyone wishing to follow his lead, he lays out “three or four” maxims. The confusion over the number may stem from the fact that the fourth is more of a decision than a maxim.

One value of the Cartesian metaphysical project is that it gives a sense of serenity in knowing that we accept as true only what is certain. We can know that there is a good God, free will, and a moral order to the universe. Since God is perfect, we know we are never being deceived. We also know that God does not create evil, as evil is not a thing. Further, we know that we can always choose what is good and correct. Error is the result of the will, not the intellect, which derives truth from God.

Descartes’s metaphysical argument regarding evil and free will is, unfortunately, incoherent. The problems with his view of humans become apparent when humans are compared with animals, angels, and the human mind separated from the body. Animals, lacking free will or intellect, act in a perfect manner and are not capable of evil. In his early writing, he says, “The high degree of perfection displayed in some of their actions makes us suspect that animals do not have free will” (CSM I, 5). Of course, animals also lack moral agency and are of no moral concern. The suffering animals endure has no moral significance. He makes this point distressingly clear when he describes a vivisection by saying, “If you slice off the pointed end of the heart in a live dog, and insert a finger into one of the cavities, you will feel unmistakably that every time the heart gets shorter, it presses the finger” (CSM I, 317). An animal’s behavior is never to be faulted; however, it also should never be praised.

Angels, lacking bodies, are not subject to the errors that arise from preconceived opinions. Angels do not contain all the perfections of God, else they would be God, but they are immune from the confused and obscure thinking that plagues those of us burdened with sensation. Their ideas must be clear and distinct, as they are pure intellect. In a letter to Chanut, Descartes declares, “We regard the least of the angels as incomparably more perfect than human beings” (CSM III, 322). Angels also have free will. They are not part of the mechanistic material of the universe. Given that angels have free will along with an intellect that is not clouded by obscure and confused ideas, it is difficult to see why the existence of humans in the universe is of any value.

Humans are burdened with an infinite will, a finite intellect, and an unreliable body that gives rise to false opinions. Through great effort, humans are able to enumerate and simplify their ideas until they are left only with clear and distinct ideas that are true and certain. This process of elimination of error from the human mind is supposed to be of obvious value. By coming to certain knowledge, we are able to make accurate and positive judgments, and these result in proper virtue. This is the source of esteem for humans. In The Passions of the Soul, he writes, “I see only one thing in us which could give us good reason for esteeming ourselves, namely, the exercise of free will and the control we have over our volitions. For we can reasonably be blamed only for actions that depend on this free will. It renders us in a certain way like God by making us masters of ourselves, provided we do not lose the rights it gives through timidity” (CSM I, 384). Here Descartes seems to imply that angels, “incomparably more perfect” than humans, are less praiseworthy since they need not struggle against the limitations of the physical body. In this case, it is not clear why we should consider it a blessing to be praiseworthy.

Being more perfect seems to be a good alternative to being praiseworthy, as eligibility for praise brings with it a host of afflictions. Having a soul connected to a body whose actions are rendered imperfect by free will leads to no small amount of suffering. For Descartes, this is not cause for alarm or self-pity.

Descartes does not lament the afflictions and inconveniences of life; he sees things differently. In a letter to Princess Elizabeth, he says, “There is a God on whom all things depend, whose perfections are infinite, whose power is immense and whose decrees are infallible. This teaches us to accept calmly all things which happen to us as expressly sent by God” (CSM III, 265). Given that suffering results from the soul being connected to the body, it may seem that God is responsible for evil, but Descartes rejects this notion as well. God is not the author of evil, because evil is not a thing. In the Principles, Descartes claims that God is the source of all things, but he hastens to assure us, “When I say ‘everything,’ I mean all things: for God does not will the evil of sin, which is not a thing” (CSM I, 201). Sin is, rather, the result of bad judgment and movement of the will. God could have given us perfect judgment, but “we have no right to demand it of him . . . we should give him the utmost thanks for the goods which he has so lavishly bestowed upon us, instead of unjustly complaining that he did not bestow on us all the gifts which it was in his power to bestow” (CSM I, 205). Humans are the authors of and remedies for evil.

Evil is only a privation of our perfections, rather than a thing created by God. This scholastic account of evil is a rather cold comfort for the miserable wretch suffering a multitude of afflictions. Still, even contemporary theologians offer us the same reassurance. John Hick, for example, tells us that the world without suffering might be quite pleasurable, but it would be “very ill adapted for the development of the moral qualities of human personality. In relation to this purpose, it would be the worst of all possible worlds” (115). This ignores the possibility of a universe with neither pleasure nor pain—a universe with no sentient life.

Arthur Schopenhauer, under the influence of Indian religious and philosophical writings, sees this scholastic view of evil as being completely backward. He says, “I therefore know of no greater absurdity than that absurdity which characterizes almost all metaphysical systems: that of explaining evil as something negative. For evil is precisely that which is positive, that which makes itself palpable; and good, on the other hand, i.e. all happiness and all gratification, is that which is negative, the mere abolition of a desire and extinction of a pain” (42). Taking a moral framework that seems diametrically opposed to the view of Descartes, Schopenhauer sees compassion as the greatest moral good. He says:

Boundless compassion for all living beings is the firmest and surest guarantee of pure moral conduct, and needs no casuistry. Whoever is inspired with it will assuredly injure no one, will wrong no one, will encroach on no one’s rights; on the contrary, he will be lenient and patient with everyone, will forgive everyone, will help everyone as much as he can, and all his actions will bear the stamp of justice, philanthropy, and loving kindness. (229)

Further, Schopenhauer points out that it would seem illogical to claim that a person was unjust and immoral and still claim that person to be very compassionate. In this way, morality supervenes on compassion. At first look, it appears Schopenhauer has made a huge departure from the kind of morality conceived by Descartes (this is confirmed if we look at their views of animals), but Descartes makes some statements that are surprisingly similar. In the following passage, Descartes almost appears to be a precursor to Schopenhauer:

Those who are generous in this way are naturally led to do great deeds, and at the same time not to undertake anything of which they do not feel themselves capable. And because they esteem nothing more highly than doing good to others and disregarding their own self-interest, they are always perfectly courteous, gracious, and obliging to everyone. Moreover, they have complete command over their passions. (CSM I, 385)

We might object, though, that Descartes is merely advocating compassion as an appropriate emotion or virtue. He may not be arguing that we should set aside our self-interest for others. Descartes is often viewed as an egoist and virtue ethicist (Wee 255). There is plenty of textual evidence to support such a claim, but it is also clear that putting the interests of others ahead of our own out of compassion or duty is, in itself, a virtue. He makes this clear in a letter to Princess Elizabeth:

Though each of us is a person distinct from others, whose interests are accordingly in some way different from those of the rest of the world, we ought still to think that none of us could subsist alone and that each one of us is really one of the many parts of the universe, and more particularly a part of the earth, the state, the society, and the family to which we belong by our domicile, our oath of allegiance, and our birth. (CSM III, 266)

In our pursuit of virtue, which is in turn a pursuit of the good life, we must be compassionate and, at least occasionally, put the interests of others ahead of our own. In this regard, Descartes takes a step toward the utilitarian theories of Bentham, Hume, Mill, and even contemporary philosophers such as James Rachels and Peter Singer. We will not go so far as to claim Descartes is an early utilitarian, but we can see the rudiments of utilitarian thought in these passages.

Descartes’s ideas on moral agency also anticipated later ethical theories. The material universe, for Descartes, is a mechanical system governed by necessary physical laws. Humans occupy a unique position in this universe as the only beings possessing both body and mind. When we consider the interests of others, we consider only the interests of those who are worthy of esteem and blame, i.e., humans. As John Marshall puts it in his book Descartes’s Moral Theory, “Because they possess intelligence and will, others merit our esteem as beings of a certain kind, beings having the potential for a specific kind of development, both intellectual and moral” (152). Simply possessing free will gives one the potential for virtue, which deserves respect. Because all humans have intellect and will, we must treat them with a measure of respect, even if they behave badly. As mentioned above, Descartes tells us that we are worthy of praise or blame only because we have control over our volitions. This control makes us somewhat like God by “making us masters of ourselves” (CSM I, 384). Thus, all humans deserve respect, though it is only through the intellect and the will that humans choose appropriate actions. Animals, of course, are not capable of such actions, so humans occupy a special category of respect. Even animals can be trained, he says in article 50 of the Passions, to have some control over their impulses. They are not acting rationally, of course, but merely responding to training. He says, “For since we are able, with a little effort, to change the movements of the brain in animals devoid of reason, it is evident that we can do so still more effectively in the case of men. Even those who have the weakest souls could acquire absolute mastery over all their passions if we employed sufficient ingenuity in training and guiding them” (CSM I, 348).

This interplay of reason and will, available only to humans, sounds similar to more modern ethical theories, especially those related to Kant and his categorical imperative. Rutherford describes the relationship by noting Cartesian ethics “is crowned by a principle of moral universalism: in virtue of their free will, all human beings have the same moral status and deserve equal moral respect. In this we find an important anticipation of Kant’s ethics, which emerges from a similar consideration of the unconditional value of a rational and free will” (12).

For Kant, our intellectual abilities and virtues of courage, resolution, and so on will be of no value if our will is mischievous. Rather than saying a good will and rationality will make us happy, he says, “A good will appears to constitute the indispensable condition of being worthy of happiness” (445). This echoes Descartes’s claim that we are praiseworthy when our will chooses what is evident to the intellect. Kant also insists that moral thought “is only possible in a rational being, in so far as this conception, and not the expected effect, determines the will” (448). It is only humans who receive praise or blame for their actions, and concern for non-human beings is of no moral significance. Kant says, “Beings whose existence depends not on our will but on nature’s, have nevertheless, if they are irrational beings, only a relative value as means, and are therefore called things; rational beings, on the contrary, are called persons, because their very nature points them out as ends in themselves” (452). Non-human beings are of concern only with regard to what goods they can provide humans. Kant would, of course, have no objection to the vivisection described by Descartes.

On the question of suicide, Descartes and Kant take different approaches, but it seems unlikely that Descartes would object to Kant’s argument. First, Descartes tells Princess Elizabeth that suicide is to be avoided because “natural reason teaches us also that we have always more good than evil in this life, and that we should never leave what is certain for what is uncertain. Consequently, in my opinion, it teaches that though we should not seriously fear death, we should equally never seek it” (CSM III, 276). In contrast, Kant claims that anyone who seeks suicide would be acting from a self-contradiction. He says, “Now we at once see that a system of nature of which it should be a law to destroy life by means of the very feeling whose special nature it is to impel to the improvement of life would contradict itself, and, therefore, could not exist as a system of nature” (450). Kant’s rational argument would surely not contradict Descartes’s vision of will, intellect, and virtue.

Descartes’s position in the history of normative ethics is similar to his position in the history of philosophy as a whole. Although he set out to establish a new philosophy, he never fully broke with the ancients or the scholastics. Still, he broke new ground and provided fertile fields to be plowed and cultivated by thinkers such as Schopenhauer, Bentham, Mill, and Kant. Even if Descartes was not able to provide a fully developed account of how we should live, he gave considerable detail as to what the good life is and how it can be achieved. Rather than trying to determine what his ethical system might have been, philosophers might be better served by trying to determine how Descartes’s ideas can serve contemporary ethical theories. We must be committed to the idea that philosophy can make life better, and Descartes at the very least provides sufficient detail for us to ponder his larger questions.


Works Cited

Descartes, Rene. The Philosophical Writings of Descartes: Volume I. Trans. John Cottingham, Robert Stoothoff, and Dugald Murdoch. Cambridge: Cambridge UP, 1985.

—. The Philosophical Writings of Descartes: Volume II. Trans. John Cottingham, Robert Stoothoff, and Dugald Murdoch. Cambridge: Cambridge UP, 1985.

—. The Philosophical Writings of Descartes: Volume III. Trans. John Cottingham, Robert Stoothoff, Dugald Murdoch, and Anthony Kenny. Cambridge: Cambridge UP, 1991.

Hick, John. “There is a Reason Why God Allows Evil.” Philosophy of Religion. Englewood Cliffs, NJ: Prentice Hall, 1963. Rpt. in Philosophical Questions. Ed. William Lawhead. Boston: McGraw Hill, 2003. 111-16.

Kant, Immanuel. The Foundations of the Metaphysics of Morals. Trans. T. K. Abbott (1873). Rpt. in Philosophical Questions. Ed. William Lawhead. Boston: McGraw Hill, 2003. 111-16.

Marshall, John. Descartes’s Moral Theory. Ithaca: Cornell UP, 1998.

Rutherford, Donald. “Descartes’ Ethics.” The Stanford Encyclopedia of Philosophy (Fall 2003 Edition). Ed. Edward N. Zalta.

Schopenhauer, Arthur. Essays and Aphorisms. Trans. R. J. Hollingdale. New York: Penguin, 1970.

—. Philosophical Writings. Ed. Wolfgang Schirmacher. New York: Continuum, 1994.

Wee, Cecilia. “Self, Other, and Community in Cartesian Ethics.” History of Philosophy Quarterly 19.3 (July 2002): 255-73.

Nozick and the Problem of Moral Progress

In attempting to achieve a new and objective approach to ethics, Nozick eschews discussion of many of the standard elements of moral philosophy: compassion, agency, harm, and sentiment. He may have been trying to avoid the sentimental excesses of less rigorous approaches, but he perhaps reflects too strongly what Genevieve Lloyd calls “the chillingly abstract character of Reason.” The result is a provocative account of what ethics is and how it has arisen. What is missing is an account of how ethics should proceed and how one should lead one’s life, which leaves the reader intrigued but ultimately dissatisfied. Although Invariances reflects a slight retreat from Nozick’s more extreme classical liberalism of the past, his nods to libertarianism seem to prevent him from developing his moral philosophy further. He is also hampered in his project by concern over the is-ought debate, which he does not fully resolve.

Can Ethics Be Objective?

Nozick’s approach to ethics is limited, at least in part, by his attempt to gain an objective standard for discovering invariant truths. He states his approach as follows:

Unbiased and distanced choice of ethical principles leads to ones with invariance properties that, in virtue of those invariance features, are effective in achieving the goals of ethics: the protecting, fostering, or maintaining of cooperative activities for mutual benefit; the guiding of such activity (as with principles for dividing benefits); mandating behavior for response to deviations from the first two goals listed; and fostering virtues and dispositions that maintain patterns of behavior. (290)

He effectively demonstrates that competing approaches to ethics yield results that are far from invariant. Although both Rawls and utilitarians try to achieve an unbiased method that provides some distance, one arrives at very different conclusions when applying Rawls’s difference principle or a utilitarian approach. The distinction between the two, however, highlights a failing of utilitarianism, which is a focus on benefit, or pleasure. If any feature of ethics is invariant, it is more likely to be a desire to reduce harm than to promote pleasure. Although he was perhaps not the greatest philosopher in history, Arthur Schopenhauer presented a fairly compelling argument that harm, not pleasure, is the positive force in life, saying, “Evil is precisely that which is positive, that which makes itself palpable; and good, on the other hand, i.e. all happiness and all gratification, is that which is negative, the mere abolition of pain” (42).

Indeed, utilitarian philosophers often meet with widespread agreement when arguing for the reduction of suffering in discussions of world hunger and medical ethics. However, the rancor of opposing sides emerges when the discussion turns to topics such as sexual ethics or sacrificing a few for the benefit of many. In discussing a utilitarian model for sexual ethics, Alan Goldman says, “Certainly I can have no duty to pursue such pleasure myself, and while it may be nice to give pleasure of any form to others, there is no ethical requirement to do so” (97). While utilitarians are unlikely to suggest that promiscuity is a moral requirement, the discussion does highlight the problem of focusing on pleasure rather than on reducing harm. Indeed, Goldman goes on to note that sexual acts become immoral only when someone is harmed through force, deception, or exploitation. To achieve an objective explanation of the nature and function of ethics, Nozick must find features that are not theory-dependent but are invariant across theories. Looking toward evolutionary psychology and biology, he concludes that the feature best meeting this test is coordination and cooperation to mutual benefit.

Nozick anticipates the problem of describing ethics as coordination of activities to mutual benefit. He notes, “Some ethical principles do not operate to mutual benefit, and some modes of coordination (such as Thomas Schelling’s ‘coordination games’) may not strike us as ethical” (283). Simply coordinating activities to achieve some mutual benefit frequently falls outside the circle of ethical behavior, unless one reconceptualizes ethics in a radical manner. Nozick is not prepared to make such an immodest proposal; he is trying to establish connections among existing ethical theories and develop an approach that will overcome past dilemmas. In order for this to work, however, there must be some qualifications on what counts as cooperation to mutual benefit. Without a concept of harm, cooperation often seems less than ethical. Cooperation to mutual benefit might better be described as a mutual agreement not to hurt one another. When cooperation does not reduce harm, it is rarely considered a matter of morality or ethics. If two people lose their kites in a tree and cooperate to get them out, or make a mutually beneficial business transaction, this may seem desirable and good, but not a moral choice. In this example, it is assumed the kites are of little value and losing them is not considered a great harm. By changing the example slightly, getting the kites down might reduce harm, and the choice might seem more like a moral one. If the kite had been passed down to a child by a recently deceased parent, giving the child great sentimental attachment to it, and someone helped him retrieve the kite to relieve his grief, this would seem a moral choice, and most would consider it a moral obligation.
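
Schelling’s term can be made concrete with a minimal illustration; the example and payoffs below are my own, not Nozick’s or Schelling’s. Two drivers approaching one another must each choose a side of the road, and each pair of numbers gives the payoff to the first and second driver respectively:

                 Left      Right
    Left        (1, 1)    (0, 0)
    Right       (0, 0)    (1, 1)

Both drivers benefit whenever they settle on the same convention, and either convention serves equally well. The coordination is plainly to mutual benefit, yet nothing about it strikes us as distinctly ethical, which is just the difficulty Nozick concedes.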

While competing moral theories are couched in various terms, reduction of harm is the underlying value for the overwhelming majority. Ethical egoists (Objectivists included) focus on harm to the individual. Protecting one’s own self-interest generally involves cooperation with others for mutual benefit, which entails an agreement not to do harm to one another. It would be nice to say that concern for harm is an invariant feature of the human species. Unfortunately, some hold other views. Peter Singer addresses the problem in “Famine, Affluence, and Morality”:

I begin with the assumption that suffering and death from lack of food, shelter, and medical care is bad. I think most people will agree about this, although one may reach the same view by different routes. . . . People can hold all sorts of eccentric positions, and perhaps from some of them it would not follow that death by starvation is in itself bad. It is difficult, perhaps impossible, to refute such positions, and so for brevity I will henceforth take this assumption as accepted. (582)

Kant also warned against indifference to suffering. In his discussion of the categorical imperative, he describes a man who is fairly well off and who decides that the suffering of others is of no concern to him. Kant notes that the man’s decision is contradictory since “instances can often arise in which he would need the love and sympathy of others, and in which he would have robbed himself, by such a law of nature springing from his own will, of all hope of the aid he desires” (80). While humans may be greedy and selfish, it is the desire to avoid harm that generates morality and ethical theories, not a desire for benefit. It may be argued that any benefit does, in fact, reduce harm and suffering, but it is no small matter to treat benefit as the positive and quantifiable value when suffering is actually of much greater concern. It is the emphasis on suffering that gives an ethical theory both its normative force and its emotional appeal.

Given that coordination to mutual benefit might better be described as an agreement to reduce harm, is it possible to meet Nozick’s criteria for objective ethical statements? In order to do so, Nozick demands both intersubjective agreement and invariance. He notes, “Intersubjective agreement was epistemologically prior—it was our route to discovering that a truth is objective—while invariance was ontologically prior—it specified the nature of objectiveness and it underlay and explained intersubjective agreement” (291). The desire to reduce, and therefore avoid, harm will pass fairly rigorous tests for invariance and will command nearly universal intersubjective agreement. Anyone who does not wish to agree to reduce harm in exchange for being protected from harm is generally considered a sociopath. When it comes to defining harm, however, intersubjective agreement breaks down. Nozick rather deftly avoids examining too many specific issues related to coordinated effort, but when one explores the question, problems occur with disheartening frequency. Physicians tend to abide by, or at least claim to abide by, the notion that they should, above all else, do no harm. When faced with a patient who has no chance of survival, some physicians give false hope while others are brutally honest. In an effort to avoid harm, they come to opposite conclusions. The confusion is perfectly rational, though, considering the views of various patients. When interviewed, some patients say they would not want to be told there was no hope; they would rather enjoy their remaining days in blissful ignorance. Others say they would want the unvarnished truth and would feel violated and degraded if misled by a physician. The physician-patient relationship is an important example of cooperation to mutual benefit, one that has brought our species and society great advancement and advantage. Nozick would do his readers a favor by providing some guidance as to how such dilemmas might be resolved.

How Do We Account for Moral Progress?

According to Nozick, ethics is, in brief, coordination and cooperation to mutual benefit. Ethics progresses by widening the circle of agents who participate in such cooperation. The progression is limited, however, by a prohibition against enforcing any ethical rules that go against an individual’s free choices. While someone may elect to act out of a caring sentiment for others, there is no moral force requiring such actions. Nozick makes no attempt to distinguish between social coercion in the form of laws and social coercion in the form of adopting a higher moral standard. Many who oppose legal requirements to perform a given action still feel that such actions must be performed simply because they are the right thing to do. For example, many would argue that if one can save a life through minimal effort without placing one’s own life at risk, then one is morally obligated to do so. An ethical theory that did not account for at least this level of obligation to others would seem to many a failure.

This omission seems a bit strange, given that Nozick appears to value moral progress and expresses support for including women, gays, and ethnic minorities in the circle of cooperation to mutual benefit (he even mentions the possibility of including animals and fetuses in calculations of mutual benefit). Other moral philosophers who want to substantially raise the bar for ethical standards have not advocated government coercion to enforce such standards (Rachels, Unger, Singer). Nothing prevents Nozick or any other ethicist from suggesting that moving to higher levels of ethics is something one ought to do, but Nozick seems to find this a noxious proposition at best.

Still, he seems to place some value on the higher levels of ethics, and he even suggests how we may move to them. He says:

Perhaps each layer effortlessly (though not inexorably) gives rise to the next. The domain of coordination to mutual benefit is expanded ever more widely, and the basis for this is found in traits common to all human beings. We hypothesize a basis in value for our evaluations that it would be good to extend cooperation more widely. (281)

He also notes that our actual behavior is contingent on our capacity for emotional responsiveness, which results in compassion for all people or all creatures and an “adherence to nonharm to them” (281). Some individuals will become more developed emotionally and will rise to the higher levels of ethics and assume a caring role toward other people, the world, or even animals.

Nozick also gives some hint of how moral progress can occur on a wider social scale. He uses a game-theory model whereby cooperation can be gradually extended over generations. In this model, a group previously excluded from cooperation (immigrants or slaves) will benefit from any cooperation offered by the dominant group, which acts, of course, of its own accord. The dominant group has shown moral progress, presumably, by merely replacing slavery, for example, with extreme exploitation. In stark opposition to Rawls’s view of justice, Nozick appears to see it as immoral to demand anything more for the least advantaged than some participation in cooperation to mutual benefit, even if the benefit to the least advantaged is barely better than nothing. He says, “The new distribution need only surpass what each got under the old distributions for cooperation to be mutually beneficial. The principle of the first stage says that cooperation should be extended when it results in a joint distribution that is (weakly or strongly) Pareto-better than the existing one of no extended cooperation” (261-262). (If, say, a dominant group previously kept ten units of benefit while an excluded group had none, a new arrangement yielding thirty units and one unit respectively is Pareto-better, even though the excluded group gains almost nothing.) Still, in Nozick’s view, no group may work ethically to ensure the detriment of another group. The norms he has proposed “encourage the extension of such coordination and cooperation. And they also forbid one kind of interaction that is not to mutual benefit, namely, interacting with another (or with others) in a way that forces that other (or them) to be worse off” (264).

While many would agree that moral progress does follow a pattern of exclusion, partial inclusion, and finally full cooperation with subgroups, there is a nagging feeling that humans could do better. Even where it is not possible to achieve, we still desire an ethical theory that demands inclusion of subgroups, even when the dominant social groups desire no cooperation at all or when cooperation is not necessarily mutually beneficial. While Nozick holds that we must not coerce cooperation under any circumstances, others take the opposite view. In Ruling Passions, Simon Blackburn warns against trusting a natural progression to maximum cooperation:

The advice to limit our concerns might go along with the happy belief in an “invisible hand” or mechanism by which a number of independent agents, each acting on their own narrow concerns, in fact maximize the social good. . . . This mechanism is the great buttress to free markets and laissez-faire capitalism. Unfortunately. . . there are situations in which instead of an invisible hand there is an invisible boot, ensuring that the same agents do worse than they would under a more generous regime of concern for each other.

In Nozick’s system, cooperation for mutual benefit is not exactly contractarianism, but it appears to have some of the same limitations. Cooperation for mutual benefit can be extended to groups heretofore excluded, but only by free choices. The risk is great that excluded groups will be left out of such cooperation completely or will be given only limited opportunity for benefit. Those who hold the most resources (or legal rights, prestige, status, consideration) will have all the advantages in any cooperative venture. Those with nothing will be forced to “voluntarily” accept even the smallest distribution. Groups in power could exclude, and historically have excluded, the least advantaged for generations. Nozick notes, “Certain extensions should have taken place earlier, even when it did not benefit the extending group” (395). In this case, he is referring to slavery, women’s rights, gay rights, and civil rights. He does not, however, provide a basis for moral compulsion to motivate extending groups, and given the overall structure of his theory, it is difficult to understand his use of “should” in the above quotation. This echoes the contractarian approach, which holds that moral rules are established through voluntary consent, an idea quite similar to mutual cooperation. Tom Regan describes this approach by saying, “The result is that this approach to ethics could sanction the most blatant forms of social, economic, moral, and political injustice, ranging from a repressive caste system to systematic racial or sexual discrimination. Might, according to this theory, does make right. . . . Such a theory takes one’s moral breath away” (474).

Although he does not state it directly, Nozick implies that forcing cooperation against the free choices of individuals would violate the rights of adult humans. The first level mandates “respecting another (adult) person’s life and autonomy, forbidding murder and enslavement, restricting interference with a person’s domain of choice, and issuing in a more general set of (what have been termed negative) rights” (280). An explanation of how these rights arise is not provided. It might be a fairly simple matter to explain why non-humans or non-adults might be denied these rights based on their inability to cooperate for mutual benefit, but it is a more challenging task to explain how adult humans have come to have these rights. Some have argued that the rights come with conscious awareness, which is a product of evolution, but this leaves us in a precarious position. At what point in the stages of evolution are rights (or consciousness, for that matter) bestowed? Negative rights exist only because humans, in cooperation or not, want them to exist. Our desire to be protected from murder causes us to declare that murder is a great moral evil. On this count, both non-humans and non-adults might readily agree.

Extending cooperation to new groups is a matter for the higher levels of ethics. Nozick apparently feels these higher levels have something to contribute to the benefit of society. He says, “I do not say that the ethics of each higher layer is more obligatory. It just is lovelier, more elevating” (281). In Nozick’s system, ethical behavior based on compassion and genuine concern for others is nice when it occurs, but it is more a lagniappe than an obligation. Nozick hints at the importance of compassion in rising to higher levels of ethics, but he does not fully explore the topic. Given his reliance on evolutionary biology, one might expect him to examine the possibility that compassion is selected for through evolution. Indeed, when someone shows a particularly high level of concern for the well-being of others, it is popular to describe such a person as “highly evolved,” which is in turn intended to indicate a person of high moral stature. It is impossible to imagine morality without compassion, and impossible to imagine moral progress without increasing human responsiveness to the suffering of others.

Schopenhauer gives the idea clear expression:

Boundless compassion for all living beings is the firmest and surest guarantee of pure moral conduct, and needs no casuistry. Whoever is inspired with it will assuredly injure no one, will wrong no one, will encroach on no one’s rights; on the contrary, he will be lenient and patient with everyone, will forgive everyone, will help everyone as much as he can, and all his actions will bear the stamp of justice. (Philosophical Writings 229)

Schopenhauer goes on to say that it would be an obvious contradiction to call someone virtuous who knows no compassion, or to call someone unjust and malicious yet very compassionate. Ethical vegetarians are acutely aware of this feature of ethics. People who want to become vegetarians will often ask for advice on food preparation, restaurants, and so on. It is distressingly common for such people to assure the ethical vegetarian upon whom they are imposing that they care not a whit for the suffering of animals, but only want to improve their own health a little bit. They proudly declare their lack of compassion, lest someone perceive their actions as ethically motivated. It is simply impossible to imagine moral behavior in the absence of compassion.

In Nozick’s view, though, someone’s behavior may be guided only by the desire to avoid social sanctions rather than by compassion. While it is certainly the case that many people act only to avoid social sanctions, their behavior cannot be described as ethical in any satisfying sense. It is impossible to know whether one feels compassion, but it is possible to observe whether one is acting in a manner consistent with social standards. This leads to a bifurcated approach to the study of ethical behavior. Nozick, in keeping with the aim of his project, maintains an objective stance, which is at least partly empirical: we can observe and evaluate the behavior of others, and they must be sanctioned when their behavior violates the principles of coordination to mutual benefit.

Many who focus on evolution as an explanation for how ethical theories have emerged and how moral behavior can be explained tend to see the universe as unfeeling and full of competition for survival, ignoring the need for cooperation to survive. Richard Dawkins, for example, says that in “a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won’t find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.”

Some then conclude that ethics is merely the behavior that helps propagate the species. From this “is” there is great difficulty deriving any sort of “ought.” Nozick, who takes seriously Hume’s claim that one can never derive an “ought” from an “is,” struggles with this throughout his section on ethics. He states, without any supporting explanation or argument, that one is bound to pursue the widest possible cooperation to mutual benefit, apparently suggesting only that all adult humans should be considered agents in such cooperation. As much as he denies it, he seems to be saying that evolution has produced certain characteristics that should be what they are because things are as they should be: evolution has given us certain behaviors that helped us succeed in the past, so we should continue to maximize those behaviors. He also attempts to explain why an individual should be ethical. In some sense, he feels we should be ethical for the same reasons that we should believe things that are true. He acknowledges that there are some cases in which this will not have the best result, but in general we are most successful when we are ethical and attempt to hold only true beliefs.

The claim is, in part, that evolution has instilled in us the capacity for success. Our instincts and biological functioning have ensured our survival this far, so we can be fairly confident that we perceive the world in such a way that success is likely if we trust our perceptions. On these grounds, we take it that our beliefs about the world are probably true beliefs or we would not be successful in negotiating our way through the world. There are plenty of examples of false beliefs giving advantages over true beliefs (overestimating one’s ability in competition, for example), but we still do well by believing that our perceptions give us the best picture of reality that is possible.

This raises some interesting questions for both epistemology and ethics. For example, some have suggested that there is a biological basis for religious belief. When a portion of the temporal lobe in the brain is stimulated, an individual is likely to have an intense emotional response to the world, ascribing great meaning to everyday objects and experiences. This biological basis could account for religious experiences of humans around the globe. Some claim that religious experience and belief motivate people to form societies, find meaning in life, and even to coordinate effort to mutual benefit. However, since religious beliefs vary widely and are directly contradictory, it cannot be that most humans hold religious beliefs that are true. If religious beliefs have helped ensure the survival of humans, then, it cannot be that true beliefs have helped us, but it must be that false beliefs are crucial to success. Religion cannot be viewed as one of the rare occasions where false beliefs might help success—religion is a pervasive, though not universal, characteristic of humans.

If evolution is our guide to what we ought to do, then it holds that we should form some kind of religious belief, and many have argued this is the case. Others would say that religious belief is precisely that which most threatens our survival. Only by shunning the supernatural can we achieve an objective and workable picture of reality and of each other. Nozick, of course, places a high value on objective truths that are, as much as possible, invariant and value free. This would exclude religious belief. It would seem, though, that excluding religion here would also exclude evolutionary biology as our guide to the objective world.

Nozick’s objective stance prevents him from accounting for an individual’s progress to higher levels of ethics. Developments in society may make it possible for societies to advance to higher levels; in extremely poor and desperate conditions, the most ruthless forms of ethical egoism often replace coordination to mutual benefit. The struggle for survival is so fierce that concern for others only threatens one’s own life.

In stark contrast to Nozick, Kai Nielsen believes that the ability to extend cooperation, including care, to disadvantaged groups entails an obligation to meet those groups’ needs. Further, he claims that when we can, we should satisfy all wants of all groups in a society: “We should, that is, provide all people equally, as far as possible, with the resources and social conditions to satisfy their wants, as fully as possible compatible with everyone else doing likewise” (Nielsen 390). Nielsen feels it is morally imperative that we extend benefits equally to as many people in society as possible; to refuse is to be morally remiss. Rawls takes a comparatively moderate approach, claiming only that the least advantaged must not suffer needlessly when it can be prevented. Rawls is concerned primarily with reducing harm, while Nielsen wants to maximize benefit. If it were not for the coercion required to redistribute benefits, we might think Nozick would prefer Nielsen’s approach because it requires a wider circle of coordination. Of course, any forced redistribution of wealth gained through free choices will be anathema to Nozick. Interestingly, he might have more approval for Singer’s plan to reduce world poverty. Singer’s claim is that we must eliminate poverty by voluntarily giving our money away; according to Singer, governments cannot be trusted to solve the problem. As humans, we have an obligation to reduce the suffering of our fellow creatures. Of course, Singer is insisting that each individual has an obligation to rise to the higher levels of ethics (a position he shares with Peter Unger). Nozick only allows that we are able to rise to higher levels, never required to do so.

Nozick makes some references to extending concern to animals and fetuses. These are matters for only the higher levels of ethics, as neither animals nor fetuses are capable of coordinating effort to mutual benefit. This raises the question of agency and reveals a weakness in Nozick’s view of ethical progress. Animals and fetuses are similar in their inability to effect coordination of effort, but they are quite dissimilar in other respects. Agency is frequently determined by the ability of an individual or group to enter into an agreement for standards of behavior. In this respect, most recognize that while fetuses have no ability to reach any agreement, they have the potential to make such agreements at some time in the future. Unless evolution is suddenly accelerated beyond anyone’s expectations, animals are sadly bereft of any potential for reaching agreements or coordinating activities. But animals cannot be thrown out of the agency circle just yet. Others (Bentham, Singer) argue that the ability to suffer, not the ability to enter into agreements, is the basis for moral standing. On this count, fetuses are likely to have a lesser capacity than fully developed non-human animals. As cooperation with animals and fetuses is not possible, ethical choices must be guided by another principle, and reduction of harm (or suffering) suits the purpose.

The problem now is that it is difficult to quantify harm. A fetus, if permitted to survive, has the potential to experience great benefits and to benefit others. It also has the potential to cause great harm. While cooperation is not possible at the moment, the promise of future cooperation might endow the fetus with “rights” that will later impose obligations and duties. In a sense, rights are granted now for delayed obligations and duties. In the effort to reduce harm, one might argue that denying the potential adult human the opportunity to enjoy all the benefits associated with life is to do great harm to the potential human. Many see the elimination of potentiality as a matter of little concern. Reducing the number of humans on the planet may be the only way to ensure the survival of the species. Bringing the child to term could cause harm to society, harm to the parents, and harm to the child. All life is filled with suffering, and being born is hardly of any intrinsic value, or so it is argued. A child who is a hardship to its parents could cause them a great deal of grief (a child conceived through rape only further illustrates the point), and a child born into poverty could become a burden on society as a whole.

On the other hand, it is argued that animals are capable of experiencing harm right now. What’s more, millions of animals are needlessly tortured and brutalized each day. While animals can be used for the benefit of humans, they do not coordinate their activities for mutual benefit, and they will never be able to do so. The only way to reduce the suffering of animals is for humans to take a caring role toward them. While humans act out of compassion for animals, no one believes animals will be able to return the favor. Still, it is argued that, having evolved from lower animals, we must experience pain much as at least the higher orders of mammals do. Understanding the pain of animals compels us to try to reduce it, just as we would want our own pain reduced whenever possible.

These problems illustrate the importance of agency in an ethical theory. Nozick’s mandate for the widest coordination possible does not address genuine moral problems that we must face on a daily basis. If the ability to participate in cooperative activities determines whether one is deserving of consideration, then what is the status of adults with brain damage (from diseases such as Alzheimer’s Disease or from injury), children, fetuses, or animals? There must be some normative force behind the idea that we must rise above the first level of respect and show some concern for the suffering of those who are able to suffer.

How Should We Live?

Nozick makes the claim that people should be moral in order to be valuable:
Being moral instances and realizes a more general kind of value, and you should be moral because it is . . . a better way to be. The unethical person may not care about being more valuable (when he realizes what value is), but his not caring about this just reinforces his lesser value. The unethical person, then, is not getting away with anything. (283)

Nozick goes on to claim that this value sanction is an “attractive and promising theoretical route to giving normative force to ethics” (283). It is true, in general, that those who refuse to coordinate activities to mutual benefit are valued less by society, but it is hard to see that this will give normative force to ethics. Snipers, corporate marauders, and petty thieves seem to thrive on the lack of value society gives them. The value sanction itself seems to spur them to greater and greater crimes. Psychological egoism claims that people act only from their own self-interest. Both moral and immoral individuals choose the actions that benefit them in the most direct way. Those who have risen to the highest level of ethics, that of caring and responsiveness, do so only because it makes them feel good. In Nozick’s view, these people are probably those who are concerned with value and being valued. Being valued by society, according to egoists, is a selfish motivation for acting in a moral manner. The counterclaim, of course, is that wanting to be valued and desiring value is the definition of a moral person in the first place, so this cannot be a sign of selfishness, but only of moral worth. No one should apologize for being the type of person who feels good when doing the right thing. Unfortunately, the unethical person might feel just fine taking advantage of weaker people, using deceit to gain objectives, and even using violence and force for sheer pleasure. The unethical person, then, derives pleasure from being wicked. In order to give our theory moral force, we almost need to say that everyone should want to have moral value and derive pleasure from doing work that coordinates effort to mutual benefit.

Existentialists have made the claim that humans always choose “the good.” By choosing an action, one gives that action a stamp of approval. The CEO who plunders the wealth of his stockholders has consciously chosen this action as better than all competing options. Each person is responsible for his own actions, and there can be no universal standard by which actions are evaluated. There is a catch, however. Each individual lives in a world of other people, and these others can often perceive the individual just as she is. As Sartre put it in No Exit, “Hell is other people.” The person who lives with no concern for others, who violates even the most basic level of ethics, must live with the nagging fear that others will perceive him as he really is, and shame is his lot. Unfortunately, many feel neither shame nor remorse, and no one has found a way to instill these feelings (or compassion) in people who do not have them. We can explain why ethical behavior is good for humans and, therefore, should be encouraged. We cannot explain how an individual who is unethical can be compelled to become ethical.

It is possible, however, to examine how one comes to be unethical in the first place. If compassion is essential to ethical behavior (or concern for having a greater value), then we could look at people who lack this feature and try to determine why. This might lead to a deterministic view that shows that ethical behavior is only the product of genetics, environment, and training. Such a view could still be of great value, though. The discovery of genetic markers that prevent the development of compassion, or finding educational methods that inculcate a value for reducing suffering, or manipulating the environment to produce more caring individuals might be seen as an advance. Many (libertarians, for example) will see any efforts to create ethical individuals as violating the first level of respect for persons. The most likely approach will be to label unethical behavior as a symptom of a disease, and then interventions will be medical rather than political or social coercion. The gloss that ethical behavior gives life would likely disappear in such a world of manufactured behavior. Nonetheless, suffering might be greatly decreased.

On the subjective individual level, an individual is likely to develop a sense of whether moral behavior is of value by interacting with other people. A person who is honest, compassionate, and concerned for mutual benefit might have a variety of experiences. In a positive and loving community, this person will be valued and will be successful in relationships, contracts, and personal fulfillment. This person’s sense of morality is likely to grow stronger and stronger. In other environments, the compassionate person will be abused, manipulated, and exploited. This person is likely to learn the lesson quickly and become hardened and uninterested in either the approval or the suffering of others.

Nozick has provided a brilliant descriptive framework for ethics, giving plausible explanations for the function and origin of ethics. If someone were so inclined, Nozick’s ideas could probably be developed into a comprehensive theory for describing ethics. It is likely that the theory would pass most tests for objectivity and might even have a number of invariant features. The system would still lack, however, all but the weakest moral force. Using evolutionary biology as a basis for this theory and combining it with concern over the is-ought debate prevents Nozick from developing a more robust ethical system. Add to this a libertarian perspective that shuns all but the weakest ethical obligations, and the system is limited almost entirely to descriptive ethics; yet Nozick is trying to establish something more than a mere description. He is trying to establish something that is morally binding. In order to do so, it is necessary to bring in ideas such as agency, compassion, and harm. The concept of harm is essential to developing an idea of what is ethical. Compassion, or a similar sentiment, is necessary for distinguishing between ethical choices and choices that are merely beneficial. And agency is necessary for determining what individuals or groups are to be considered a part of the ethical realm. At the minimum, an ethical theory should compel individuals to reduce harm and consider the interests of all sentient beings, though this consideration does not entail that all sentient beings are in any sense equal. With or without social sanctions, efforts to compel individuals to be compassionate are doomed to failure. An ethical society, then, should examine what actions and conditions nurture compassion and work toward a more compassionate world. Advances in technology, communications, medicine, and science all create increased opportunities to extend cooperation and show compassion to a greater number of groups. Where it is possible to reduce harm, nurture compassion, and widen the circle of cooperation, we should undertake to do so. This is not to say that social sanctions should be in place to coerce individuals, but it is to say that anyone who fails to reduce harm, nurture compassion, and widen the circle of cooperation is not acting ethically. To be ethical requires this much at least.

Becoming familiar with death

Death be a Stranger No More

Although every human is ultimately successful at achieving death, most of us experience profound anxiety over the event. When pressed, some of us will claim that we do not fear death as much as the process of dying, but philosopher Thomas Nagel points out that the worst thing about dying is that it is followed by death (3). Simone de Beauvoir adds, with characteristic lucidity, “All men must die; but for every man his death is an accident and, even if he knows it and consents to it, an unjustifiable violation” (526). Of course, we can give many philosophical and spiritual reasons for fearing death and dying, but our lack of familiarity with the process must play a crucial role in our anxiety. Philippe Aries points out, “any discourse on the subject of death becomes confused and expresses one of the many forms of pervasive anxiety” (Reversal 134). He claims that we moderns have moved death into the shadows out of fear, but we’ve only intensified the anxiety. Eliminating the fear of death and dying is not an option for humans, but it is possible to stop denying the existence of death and to face death head on and in close proximity.

Albert Camus describes the desire to control one’s own death in A Happy Death. The protagonist, Mersault, wants to be conscious when he dies, to experience the last part of life and to have some will in his death. He faces death in paradoxes: “Conscious yet alienated, devoured by passion yet disinterested, Mersault realized that his life and his fate were completed here and that henceforth all his efforts would be to submit to this happiness and to confront its terrible truth” (140). This may not be the kind of happy death most of us would imagine, but it has features that seem common to what most people want: the desire to manage dying with dignity and autonomy. A change in how and where people die could help more people experience their own version of a happy death. In fact, I assert that a hands-on approach to the dying and recently dead would offer many benefits for both the dying and their caregivers.

Confronting one’s own death gives one a clearer sense of identity and purpose. It is cliché to say that we should live every day as if it is our last, but planning for our final days focuses our attention on who we want to be and how we want to be remembered. A constant recognition of the certainty of death is now seen as morbid and even psychologically harmful, but the person who is prepared to die is not rejecting life. Rather, such a person is likely enhancing an appreciation for life and experiencing a deeper connection with family, friends, and other loved ones.

In “Dying in a Technological Society,” Eric J. Cassell argues that death in the past was primarily a moral matter. When one was clearly about to die, the task at hand was to care for spiritual matters. Dying, he says, is now a technical matter of rescuing patients from the hands of death. Death has become a technological event, he says, in part because “death has moved from the home into institutions—hospitals, medical centers, chronic care facilities and nursing homes” (43). He notes also that the nature of death has changed as a result of changes to family structure. Notably, the desire of the elderly to live independent lives is part of the reason for death moving from the moral to the technological realm. This creates quite a quandary. Cassell says, “To die amidst his family he must return to them—reenter the structure in order to leave it. Reenter in denial of all the reasons he gave himself and his children for their separation, reasons equally important to them in their pursuit of privacy and individual striving and in their inherent denial of aging, death and fate” (44). On his view, the free choices of older individuals have denied them the care they desire at the end of life. Death must now be removed from the technical sphere and restored to the sphere of morality and family.

The first step to realizing the best deaths possible for patients is to recognize that dying is a natural process that does not require medical intervention. Of course, those who are dying may have medical needs such as pain management or comfort care, but in this respect they are no different from the living, as we all need pain management and comfort from time to time. To change how we die, death must not be seen by medical professionals as the dark enemy to be kept at bay for as long as possible but as the final visitor we must all meet at the end of life. By permitting families and friends to participate in the care of the dying, we may also help the living better prepare for the process of dying and the inevitability of death. In his seminal work, The Patient as Person, Paul Ramsey said, “‘The process of dying’ needs to be got out of the hospitals and back into the home and in the midst of family, neighborhood, and friends. This would be a ‘systemic change’ in our present institutions for caring for the dying as difficult to bring about as some fundamental change in foreign policy or the nation state” (Ramsey 135).

This systemic change is difficult to bring about because it must overcome profound changes in the way families are structured, the way care is provided, and the way society perceives death. Care in hospitals is often synonymous with technology. In the home, “care” implies being with a paid caregiver. Many, if not most, people would prefer to die at home with loved ones, but loved ones are rarely home, and few can afford to take off months or sometimes years to care for a dying person, no matter how strong the bonds of love. What’s more, the dying person is often caught between the medical urging to prolong life at all costs and caregivers’ discomfort with death. Jack Coulehan captures this tension well:
The term invisible death sounds rather benign, but its invisibility ultimately carries with it a lack of preparation and inability to cope with the savage beast. Savagery emerges from its lair in many guises, among which is the alluring face of medical technology. Closely bound up with the reclusion of death from social life is the embarrassment that the living feel in the presence of dying people (37).

The embarrassment could be relieved by a program of death education, support for home death, and greater acknowledgment and discussion of death in our society. Often, the natural processes of death can be shocking to those who are with a loved one at the time of death. A few short conversations with caregivers about the processes dying people experience would lessen the anxiety and shock of the caregivers when the dying person begins to gag, wheeze, cough up fluids, and so on.

Narratives are filled with evidence that death, distant and medicalized, is not what patients desire. Poet Donald Hall was married to the much younger poet Jane Kenyon. To the surprise of both, it was Kenyon who died first, of leukemia. Hall was in a position to care for her in their home. He was strong enough physically to lift her and strong enough mentally to face death with her, and he narrates the experience in excruciating detail in his book of poetry, Without. Even given his ability to care for her and her desire to die at home, she almost died in a hospital. Hall writes,

When she couldn’t stand, how could she walk?
He feared she would fall
and called for an ambulance to the hospital,
but when he told Jane,
her mouth twisted down and tears started.
‘Do we have to?’ He canceled.
Jane said, ‘Perkins, be with me when I die’ (Hall 41).

Ultimately, he had a change of heart, and she died as she stared at him with eyes full of “dread and love.” This was her desire, and he fulfilled it out of love and devotion for her; surely he benefited as well. Besides the knowledge that he honored the dying wish of his beloved wife, he also had an experience of death that was excruciating yet filled with care and of great value to him. The exquisite pain of the experience shows through his words. Death will always be unwelcome, but, oddly, we may learn that we can “survive” death in the sense that we know everyone will pass on successfully and that we move beyond pain rather than toward it.

Activists have made an effort to educate parents about the dangers of “medicalizing” birth. They claim that birth is a much richer experience when done as naturally as possible in the presence only of the family and a birth attendant, rather than a hospital room full of strangers and exotic technology. The movement for home death echoes many of the arguments for home birth. Indeed, those who favor home birth are more likely to favor home death as well. Researchers also observe a correlation between areas where home births are common and areas where home death is common. These correlations may relate to shared social values, but they may also be a result of the proximity of individuals to hospitals or other care facilities.

A study published in the American Journal of Public Health by Silviera, Copeland, and Feudtner in 2006 attempted to analyze the contribution of social values to home death as opposed to other factors such as availability of hospital beds and income. In part, the conclusion stated, “Although we found that hospital bed availability was associated with hospital death at the individual level, the relationship became insignificant at the aggregate level” (6). The study notes that about 90 percent of patients state a preference for dying at home but only about one-third are able to do so. The correlation with home birth seems to reflect some shared social values, but the analysis is extremely difficult. Still, the authors have suggestions for increasing the number of deaths that occur in the home. They say, “Reducing the proportion of people who die where they had not wanted to die is likely to require programs that address individuals, their society, and its cultural values, and the health system in which they reside” (7). Any attempt to increase the number of home deaths will require concerted effort to educate both the public and health care professionals in addition to more alternatives to hospital care.

Cultural Considerations

Those educated to be culturally competent will recognize that the desire of a family to wash and prepare the body of a deceased loved one is common to many cultures. With a few exceptions, the more faith a society places in technology, the more distant its citizens will be from the process of death. A number of obvious reasons present themselves. Many people believe death can be kept at bay longer if their loved ones are in a hospital receiving the best care modern technology can provide. This, of course, implies an unwillingness to accept the inevitability of death and a deep fear of the process of dying, which, on this view, should be left to the experts.

In an essay published in 1975, Jack Goody describes Western attitudes toward death:
Only the bare bones of death are seen today in Western societies. With smaller households and low mortality, each individual experiences a close death very infrequently, if we understand close in both a spatial and social sense. In childhood, one is often kept away from the immediate facts of death, either by parents (if it is a sibling) or by relatives and friends, if it is a parent. Grief is suppressed rather than externalized (7).

In the last century, we have become more and more distant from death, especially in the United States. Many adults have never seen a corpse. This enables denial of death for a time, but it prepares one poorly for the fact of death when it occurs. In pre-industrial societies it is impossible to avoid the reality of death. The fact that technology and affluence have enabled us in the West to isolate ourselves from death does not mean it is good to do so, for death has not been eradicated, only hidden from view. The psychological and spiritual benefits, however one defines the latter term, of experiencing the death of others in a loving and close manner will accrue to us all as a society. We will become habituated, to put it in Aristotelian terms, to care and grieve with greater immediacy and efficacy.

In the past, Americans were much more familiar with death in every aspect, though I would not want to return to the conditions of pre-Civil War America. In this era, as described by Lewis O. Saum in “Death in Pre-Civil War America,” death was so ubiquitous that almost no one had failed to be in the presence of a dying person, often a child. He describes a society in which every letter was opened with dread because it was sure to have news of more death. Letters generally contained graphic details of the effects of disease and dying, and the general populace knew well the signs of impending death. They realized, also, that no one was immune from death. All the same, death was recognized as a chance to behave morally. To die well was to accept the fate of Providence. In addition, most felt that the proximity of death gave an opportunity for spiritual growth and reflection. Saum says, “Philosophy has been referred to as the learning to die, and insofar as humble Americans philosophized they did indeed learn to die” (39). Those who are constantly aware of death tend to choose their actions more carefully than those who are denying the existence of death. They lead deeper spiritual lives. Although it may happen, it is not necessary for modern American culture to experience pandemic or massive loss of life from violence or war such as existed in pre-Civil War America to regain familiarity with death. More care for loved ones and more frank discussion about the process of dying could help us regain some of the benefits of our earlier experience with death without having to revive the horrifying conditions that produced them.

Philippe Aries describes the transition of death from the home to the hospital and its effect on the family. He notes, “Rapid advances in comfort, privacy, personal hygiene, and ideas about asepsis have made everyone more delicate. Our senses can no longer tolerate the sights and smells that in the early nineteenth century were a part of daily life, along with suffering and illness” (Hour 570). Advances in hygiene, comfort, and privacy were certainly goods freely chosen by most Americans. Again, we have no one to demonize, but few could foresee the consequences of this shift to what was thought to be life-prolonging and medically superior—indeed, the hospital was life-prolonging and medically superior. Aries notes that the burden of care had been shared by extended families and neighbors, but the circle began shrinking in the twentieth century. He says, “This little circle of participation steadily contracted until it was limited to the closest relatives or even to the couple, to the exclusion of the children. Finally, in the twentieth-century cities, the presence of a terminal patient in a small apartment made it very difficult to provide home care and carry on a job at the same time” (570). In contemporary America, the burden of care frequently falls to one person, a spouse or a single son or daughter. Perhaps society did not set out to remove itself from death; removal is merely one of the more dire consequences of choices freely made. Many think it morbid to talk openly of the need to be present at the death of a close relative or friend. Death has been placed behind the privacy curtain of a hospital. Aries notes, “The hospital is the only place where death is sure of escaping a visibility—or what remains of it—that is hereafter regarded as unsuitable and morbid. The hospital has become the place of solitary death” (571). Yet it is not bad manners to discuss death openly. My teenaged son is interested in mortuary science and funerary practices. He has read books on the subject, visited funeral museums on two continents, and become something of an expert. As a result, his school counselors called his parents into a meeting to discuss the possibility that he was psychologically disturbed or suicidal. It was incomprehensible to the mental health professionals of the twenty-first century that an interest in and knowledge of death could be expressed in a psychologically healthful manner.

Just how closeness to death enriches our lives is as difficult to define as exactly how art or the study of the humanities enriches our lives. What we take away from a death experience may be as varied as what audience members take away from a tragic drama or moving symphony. Simone de Beauvoir describes her mother’s death, saying, “Cancer, thrombosis, pneumonia: it is as violent and unforeseen as an engine stopping in the middle of the sky. My mother encouraged one to be optimistic when, crippled with arthritis and dying, she asserted the infinite value of each instant; but her vain tenaciousness also ripped and tore the reassuring curtain of everyday triviality” (526). Certainly some of us would rather stay behind the curtain of everyday triviality and enjoy a greater distance from death.

In 1987, my grandmother’s brother died of bone cancer. Within six weeks, my grandfather had succumbed to lung cancer. Only weeks later, a fire caused by lightning destroyed her home. My uncle, a country preacher, told her how wonderful it was that God had been with her throughout this horrible ordeal. He seemed desperate to regain some “everyday triviality,” but death has a way of forcing a deeper meaning on us. Indeed, the conversation we have with death throughout our lives informs all that we do, and we harm ourselves when we cough politely and dismiss ourselves at the earliest convenience. A meaningful life demands more of us. In the words of Eric Cassell, “In the care of the dying, it may give back to the living the meaning of death” (48). Although we know that death is still inevitable, we want to deny its existence. Aries says, “The tears of the bereaved have become comparable to the excretions of the diseased. Both are distasteful. Death has been banished” (580). It would be difficult not to celebrate the success of pushing death a little further away. No longer do parents withhold attachment to their children until they feel more certain they will live. No longer is calamity floating to us on every breeze, but this tide of great accomplishment separates us from our humanity and meaning. And death, still, is not vanquished but merely held at bay.

Patient Autonomy

Most patients express a wish to die at home rather than in a hospital among strangers, yet most people in America die in a hospital. Many people die in emergency situations, and in such cases dying in a hospital presents ethical qualms for virtually no one. Others, however, desire to die at home but get caught up in the fight against death rather than care for the dying. Medical interventions and efforts to prolong life take precedence over providing the care the patient has requested.
Jeffrey Stout provides a typical narrative of how death occurs in America:
My maternal grandfather, for all his traditional skill in carrying out his own dying, did not die in his bedroom at home. Like the vast majority of Americans today, he died in a hospital, which he experienced as a sprawling bureaucracy, run by managers, staffed by technical experts, and clogged with advanced technology he could neither understand nor do without . . . . After days of frustration, he finally called a couple of doctors into his room and vented his moral outrage (Stout 275).
The death of Stout’s grandfather is what Philippe Aries refers to as the bad death or ugly death. In his description of the bad death, he says,

This is always the death of a patient who knows. In some cases he is rebellious and aggressive; he screams. In other cases, which are no less feared by the medical team, he accepts his death, concentrates on it, and turns to the wall, loses interest in the world around him, cuts off communication with it. Doctors and nurses reject this rejection, which denies their existence and discourages their efforts (587).

The usual demon of bioethics, the paternalistic physician, is not the problem in this case. Any doctor would be reasonable in assuming that patients brought to the hospital were brought there for care. By pushing death into the hospital, we have created an untenable situation for medical teams. Any given professional providing care for Stout’s grandfather would probably agree that a home death would be preferable. This is true for most patients with long-term, terminal illnesses. Generally, patient autonomy is not taken away by paternalistic hospital staff; it slips into a bureaucracy created to fight death, not accommodate it. Patients themselves or their caregivers may voluntarily check into a hospital when a medical crisis occurs without foreseeing that they are in effect asking the doctors to treat their condition rather than to allow death its natural progression. Indeed, when a patient is presented to hospital staff, they must assume that the patient is seeking treatment to prolong life. To withhold treatment in this case could lead to a charge of negligence.

Patient autonomy may also be limited by external factors. To take the extreme and most obvious example, the wish to die at home can only be accommodated for those who have a home. The wish to die among family members can only be afforded those with loving family members. The patient’s wishes can be respected but cannot be fulfilled any more than the common wish to marry a billionaire with eternally youthful good looks. The mere act of wanting something does not make it possible or obligatory. Patients may, of course, refuse treatment. The easiest way to avoid treatment, however, is to stay away from treatment providers. It is difficult to break old habits, though, and most of us are in the habit of going to doctors and hospitals when we feel bad.

In some cases, patients may express a preference not only for where they will die but also for how their body will be prepared once they have passed. William May says, “While the body retains its recognizable form, even in death, it commands a certain respect. No longer a human presence, it still reminds us of that presence which once was utterly inseparable from it” (139). Does respect for autonomy extend beyond death? Philosopher Jeremy Bentham asked that his remains be preserved and kept at University College London as his “Auto Icon.” Perhaps he would not be thrilled with the results of the original efforts at preservation, but he was preserved, and his remains are still displayed at the university. Would we be violating Bentham’s autonomy by destroying or burying his remains? More to the point, can caregivers be accused of violating the autonomy of the dead, recently deceased or otherwise? The first impulse is to say any reasonable request should be respected beyond death, but often our selfish ends make us think differently. Franz Kafka asked that all his manuscripts be destroyed when he died, but his friend Max Brod never carried out the request. As a result, Kafka has become a renowned author, and we may feel Brod did him a great favor by failing to carry out his dying wish. We could also take the view that Kafka was harmed by having his wish ignored.

Thomas Nagel gives some justification for respecting the wishes of the dead. He says, “When a man dies we are left with his corpse, and while a corpse can suffer the kind of mishap that may occur to an article of furniture, it is not a suitable object of pity. The man, however, is. He has lost his life, and if he had not died, he would have continued to live it, and to possess whatever good there is in living” (7). The person who has died is still of value to us. Our obligations do not evaporate on the occasion of death. Decisions about whether a patient’s autonomy can be violated after death need clarification, but most families try to honor the free choices of their loved ones. This is more easily done when the patient is not handed over to strangers in a hospital. The exception is when the recently dead wished to donate usable organs. In cases of lingering illness, this is usually not a concern or even an option, but if donation is desired and possible, a hospital death may be recommended.

Caregiver Autonomy

Medical staff sometimes suggest (on occasion the suggestions may feel like force to the caregivers and patients) that patients be transferred to long-term critical care or hospice in spite of the preference of both the caregivers and the patients to have the death occur at home. Indeed, caring for a dying patient can be extremely traumatic and physically demanding, so the concerns of medical staff for the caregivers are understandable. One aspect of care for demented patients that is often overlooked is reduced inhibition. Having to lift and bathe an adult patient is physically demanding; cleaning feces and urine and changing diapers can be psychologically disturbing; but watching one’s parents engage in extremely inappropriate and embarrassing sexual behavior can be completely demoralizing. Given these realities, it is easy to understand why doctors and nurses would advise caregivers to consider long-term care or hospice over home care for the dying patient. Nonetheless, a beneficent denial of autonomy is still a denial of autonomy.

A 2005 study published in Palliative Medicine examined the predictors of a home death. Not surprisingly, it found that home deaths were most likely to occur when the dying person wanted to die at home, when the physician visited the home during the last month of life, and when the care recipient had a healthy caregiver. The authors note that ethicists tend to focus on individual autonomy but that the autonomy of the caregiver cannot be ignored in this context. The article says, “The emphasis on individual autonomy overlooks the communal nature of death. When considering who is responsible for meeting the needs of the dying person, the informal caregiver plays a significant role. The choice of dying at home has profound consequences on informal caregivers, typically the wife or daughter” (497). Efforts to increase the number of home deaths must consider the need for support for caregivers and acknowledgment of the autonomy of caregivers. Sometimes the autonomy of the patient or the caregiver must be compromised, so solutions should be sought that respect both as far as possible. Some examples might include in-home hospice or palliative care, caregivers’ day out programs, or even temporary out-of-home services where a patient could be cared for outside the home for short periods to provide some brief respite for the caregiver. Ideally, families could work together to provide such solutions themselves, but it is not always possible for such solutions to work, especially with smaller families. In some cases, a hospital death may be the best option. In such cases, the authors of the study suggest that we work to improve “the environmental qualities of institutions to enable them to offer the same things that people value about home deaths” (498). At worst, a dying person should have to endure a hospital death in a quiet room with familiar and consistent caregivers. Proper education and social support should help caregivers cope with the disturbing behavior of dying and demented patients. Home death is unlikely to be successful when the needs of the caregivers are ignored.

In 2004, Cindy Cooley published an essay in the International Journal of Palliative Care exploring why patients in the United Kingdom are not able to choose where they die. Among other reasons, caregivers have difficulty getting hoists and other equipment that would enable them to care for the dying in the home. Providing equipment for home care would be cheaper than admitting patients to hospitals, but, again, bureaucracy works against the care team, whether it is made up of family or palliative care nurses.

I recently interviewed a woman in Galveston, Texas, who cared for her father for three years as he died from Alzheimer’s Disease. He died in September 2006. During one medical crisis, she took him to the hospital for help. When he was to be released from the hospital, the staff told her he would need to be moved to a long-term care facility. The woman demurred, explaining that she would care for him at home and that she had power of attorney. She was told that she would not be able to provide appropriate care and that it would be too much of a strain on her. She explained that she would prefer to decide for herself when she was overtaxed, rather than leaving the decision to strangers. A few months later, her father died in her arms at home, as both he and she had wished. The medical staff members at the hospital were correct, though, that the strain was tremendous, and the process took a visible toll on her physically and mentally. Not only was the strain of caring for him sometimes surprisingly difficult, but she was not prepared for the bodily occurrences at the end. She had imagined holding him as he gently passed over into a calm, peaceful death. She was surprised by his convulsions, gasping, and expelling of mucus and other fluids. She struggled to comfort him in his last moments while also cleaning and restraining him. In spite of all this, she says she would do the same thing all over again.

One reason such situations occur is that medical emergencies such as the one described above create chaos. Giving further reasons that patients do not die in the place of their choice, Cooley writes, “Relatives may panic; they may be elderly or on their own and unsure if the end is imminent when the patient is gasping or panicking. Fear at the end is often enough to galvanize the relative into calling the emergency services . . . Doctors who are unfamiliar with the patient or unsure what to do to relieve distress may see the hospital as the safest option.” The result is that the patient dies in the hospital rather than at home. Without sufficient education, it is impossible for caregivers to distinguish between the end of life and a passing crisis that requires intervention to provide comfort to the patient. This distinction is often difficult even for professional caregivers, so it is understandable that family members would have trouble making a decision while watching a loved one in obvious discomfort or even agony.

The role of the caregiver complicates matters of patient autonomy in all cases here. Even a caregiver who wishes to honor the choice of the patient may weaken toward the end. The physical and psychological demands may be much greater than anticipated. Medical crises may be much more traumatic than anyone could imagine. Many dream of quietly holding a loved one and talking softly as the person gently slips into the comfort of death, but death is rarely so commodious. Even before such surprises, fewer caregivers than patients have a preference for the death to occur at home. They may agree to have the death in home only out of respect for a loved one. These caregivers may have enormous anxiety about watching someone die to begin with. Any sign of “emergency” that gives them reason to call an ambulance can be used to relieve them of an accepted but unwanted burden.
The authors of the Canadian study quoted above frame the competing autonomous choices in home death directly: “A central ethic in palliative care is the view that how people die should be grounded in self-control and choice.” As they caution, however, the emphasis on individual autonomy overlooks the communal nature of death and the profound consequences the choice of a home death has for informal caregivers.

In her book, Healing the Dying, Melodie Olson gives advice to caregivers. She advises caregivers to recognize their physical limitations and get help when needed. This is good advice, but it assumes that help is available. Most caregivers probably do not endure the strain and rigors of care alone by choice. It is difficult to imagine caregivers turning away genuine offers of support. Olson also advises caregivers to sleep as much as needed. Again, she assumes that patients will be cooperative and put their needs on hold long enough for caregivers to get adequate rest. She also advises, among other things, taking advantage of respite care (182). Rather than viewing her list as good advice for caregivers, it would make sense to view it as a list of caregiver needs. Both patients and caregivers sometimes opt for hospice or hospital care as the time of death approaches. One might guess this is caused by the unexpected hardship of a home death. Patients will be much more likely to die at home if caregivers are provided respite care, opportunities to sleep and exercise, and help with strenuous physical tasks. Home death is more likely to occur when home caregivers are given support, information, and alternative access to services. A home visit from a physician may not be necessary; visits from social workers, nurses, or other professionals may prove extremely useful in improving the success of home deaths in the U.S.

Death with Dignity

Jack Coulehan describes two distinct movements for death with dignity. One is rooted in a philosophical tradition defending self-determination and individual rights, specifically the right to euthanasia and assisted suicide. This movement promotes social changes that would prevent patients from being kept alive beyond their natural lifespan and promotes aid in dying. The other movement for death with dignity focuses on relationships between the dying person and others. This movement recognizes the communal nature of death and seeks to shine a light on the “invisible death” described by Aries. The relational concept of death with dignity described by Coulehan advocates a more humane way to approach the “art of dying,” even if it must be done in a hospital.

This relational movement for death with dignity does not focus solely on the dignity of the patient. It is concerned also with the dignity of caregivers, family, friends, and society. Instead of being invisible, death will be publicly recognized, and mourning will no longer be considered impolite or embarrassing. Society will have tolerance for the concerns of the dying patient and the grief and passion of the caregivers. We will come to see that leading an authentic life requires an acceptance and recognition of death.

A return to communal death and mourning will not come easily, but small steps can bring continual improvement. Already, hospice care is becoming more available and bringing families into contact with dying relatives. This is not a complete answer, but it is an improvement for a society so accustomed to hiding death. In addition to hospice care, availability of medical equipment in the home can help facilitate a dignified death without undue strain on caregivers. Respite care, provided in the home or in a medical facility, can make it possible for caregivers to keep loved ones at home when it might otherwise be necessary to put them in a hospital. Proper death education can help caregivers recognize the signs of impending death without panicking and calling an ambulance to relieve the agony of the dying person. As we learn to better prepare for death as a society, as families, and as individuals, we will live fuller and deeper lives with greater appreciation for the sensations of earthly existence.

Language and the Content of Belief

If language is a core feature of consciousness, our conscious thoughts, expressed in language, should accurately reflect our belief states, and we should be able to accurately determine the contents of at least our own beliefs. Further, we should be able to freely affect what our belief states are through rational analysis. It is this ability that creates in us a sense of moral agency and responsibility. Through rational analysis and argument, we can form beliefs that are appropriate and honorable. If we assume other humans are more or less like us, we may also be able to extend this ability to other humans through inference and analogy. Ascribing content to the beliefs of non-human animals would be riskier business, unless we found animals that could use our language. If language is a core feature of consciousness, then a machine that could use human language as a human might use language would have achieved human consciousness. On the other hand, if language is a more distal feature of consciousness, ascribing content to our own beliefs might be as risky as ascribing content to the beliefs of other humans, animals, and machines. Our moral decisions may be determined by something other than rational analysis. Our moral views may be the product of evolution, not reason. I will argue that many of our beliefs and thoughts are unconscious, and we attempt to ascribe content to our beliefs by the same inferences we make to ascribe content to others. To say we know our own minds is only to say that we are aware of our minds, not to claim that we know the specific content of our beliefs.

Human language brings clarity and understanding to human thoughts and beliefs. In fact, many have argued that without language, humans have no capacity for thought or belief. Descartes expresses a firm conviction that language is necessary for any thought:

There has never been an animal so perfect as to use a sign to make other animals understand something which bore no relation to its passions; and there is no human being so imperfect as not to do so. . . . The reason animals do not speak as we do is not that they lack the organs but that they have no thoughts. It cannot be said that they speak to each other but we cannot understand them; for since dogs and some other animals express their passions to us, they would express their thoughts also if they had them. (CSMK 575)

While the idea that language is necessary for the emergence of belief has been accepted for centuries, philosophers and others have begun to use the term “belief” more permissively, making the assertion much less obvious. While to say a cow had beliefs may once have implied the cow subscribed to some creed or doctrine, the claim has a much more mundane connotation in contemporary philosophy. For example, using the language of belief/desire psychology, we might say that a group of cows and humans gathering under a cover after hearing a thunderclap share a common belief that it is about to rain. We will also say they desire to stay out of the storm. Cows do not need the ability to express their beliefs to want to avoid a storm that appears to be imminent. In this case, it is easy to describe the cow’s behavior using the language of belief/desire psychology, but it is also easy to imagine that the humans under the cover are in a far different position from the cows; they understand their position, have plans and fears for the future, and have a sense of what it is right and wrong to do. We want to say the humans are conscious, and the cows are not. We know the humans are conscious because we assume them to be more or less like us, and we are conscious. Language expresses our thoughts and beliefs, and we assume that other humans use language and experience consciousness as we do.

Language does more than provide evidence of consciousness, though; it is the structure of consciousness. A sophisticated study of human language and behavior should produce a powerful and accurate psychological theory. If language sets humans apart from machines and animals, then language is quite likely the feature of human consciousness that produces moral agency and responsibility. If animals and machines are not capable of beliefs and thoughts, then humans are the only known creatures to have any concept of moral responsibility. However, if consciousness is not unique to humans, or if language is not the stuff that makes consciousness, then we may not be able to construct an adequate description of beliefs and desires, much less moral agency.

Language of Machines

Daniel Dennett argues that we can use language, through the “intentional stance,” to describe the beliefs of people, animals, or artifacts including a thermostat, a podium, or a tree (Brainchildren 327). It is easy to construct sentences to describe the beliefs of these objects (“The thermostat believes it is 70 degrees in this room”). If the thermostat is working properly and conditions are more or less normal, we should be able to predict the temperature based on the actions of the thermostat, or we should be able to predict the actions of the thermostat by knowing the temperature in the room. We recognize the possibility of error, however. As the thermostat may be broken, we are likely to say, “According to the thermostat, . . .” If the room does not feel warmer or cooler than the thermostat indicates, then we assume all is well. If we want to know the true nature of belief, being able to describe the beliefs of a thermostat is outrageously unsatisfying. Unless the thermostat is able to describe its own beliefs using language, we are loath to even suggest it has beliefs.

But given the capacity for human language, machines might appear to have beliefs and desires similar to human beliefs and desires. In fact, if a machine could use human language in a manner indistinguishable from human use, it is difficult to see how the consciousness of the machine could be denied with any certainty. Of course, the claim that such a machine is impossible goes back at least to Descartes, who wrote, “It is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do” (CSM II 140). Surely Descartes did not imagine 21st century computer programs when he provided this early version of the Turing Test (in which a computer is held to be conscious if it can master human conversation), but so far his challenge has not been met.

In John Searle’s Chinese room argument, we are challenged to accept that even a computer that could pass the Turing Test would not thereby be proved conscious. Although Searle does not deny that machines could someday be conscious, he holds that a language program would not be proof of it (Searle 753-64). Our best reason for believing the machine is not conscious is that it is not similar enough to a human to be considered conscious by analogy. Even if we cannot deny beliefs and desires to a machine with certainty, we are equally ill-equipped to accurately ascribe beliefs and desires to machines, or trees, or stones.

Beliefs of Non-Human Animals

We are more likely to feel confident ascribing beliefs to non-human animals for several reasons: they share at least part of an evolutionary history with humans, they share genetic material with humans, they exhibit behaviors similar to those of humans, and they have a physiological structure similar to that of humans. As a result, many humans feel comfortable making inferences about non-human animal experience and consciousness based on analogy with humans.

David Hume claims that we can make many inferences about animals based on the assumption that animals are analogous to humans in many respects. Similarly, we can make inferences about humans based on the observation of animals. For Hume, this is compelling evidence that humans are not as rational as we like to think. Animals make many of the same inferences as humans without the benefit of scientific or philosophical reasoning. Our philosophical arguments are used only to support beliefs we share with less rational animals. While we may think we are using reason, we are only providing explanations for beliefs built by habit or biology. He says,

It is custom alone, which engages animals, from every object, that strikes their senses, to infer its usual attendant, and carries their imagination, from the appearance of the one, to conceive the other, in that particular manner, which we denominate belief. No other explication can be given of this operation, in all the higher, as well as lower classes of sensitive beings, which fall under our notice and observation.

Hume clearly feels we can ascribe beliefs to non-human animals. In particular, we can assume that animals believe in cause and effect. In contemporary terms, our beliefs may be formed by evolution or experience, but our own understanding of those beliefs is expressed through rational explanation. Hume’s assumption that it is possible to infer anything at all about humans based on an analogy with animals is, of course, unproven. However, his description brilliantly illustrates the possibility that beliefs we hold to be founded in reason are merely the result of habit, while reason is only our way of expressing those beliefs. This is enough to warn us of the perils of ascribing content to beliefs based on our descriptions of our own beliefs. It is at least possible that there is a great divide between what we believe and what we think we believe.

In his paper “Do Animals Have Beliefs?,” Stephen Stich examines the difficulty of ascribing content to animal beliefs. For Stich, the problem of ascribing content to animal beliefs is serious enough that we may fear ascribing content to any beliefs at all. Stich offers two possible accounts of animal belief and belief ascription, ultimately rejecting both (Animals 15-28).

The first possibility is that animals do have belief, and we can ascribe content to those beliefs by observing animal behavior (in the manner of Hume). Stich contends, “When we attribute a belief to an animal we are presupposing that our commonsense psychological theory provides . . . a correct explanation of the animal’s behavior” (Animals 15). Indeed, desires and beliefs can provide a foundation for describing the causes of animal behavior. Assuming they are analogous to humans, animal beliefs are formed by perception and inference. Seeing, hearing, and smelling food in a dish, the dog comes to believe there is food in the dish, just as there is every morning. This belief results in a desire to gain access to the dish. Once an animal has formed beliefs, these beliefs can generate other beliefs.

For example, some dogs have a desire to chase squirrels. Upon seeing a squirrel in the back yard, such a dog will bark at the door, because this particular dog believes barking at the door will cause a human to come and open the door. (We could describe an infinite array of beliefs. For example, dogs believe squirrels should be chased. Dogs believe humans should open doors for dogs. Dogs believe barking at doors is more effective than scratching them.)

According to Stich, the appeal of a view based on beliefs and desires is that it is the most intuitive explanation for human behavior. Further, it is hard to imagine that we could explain human behavior through belief/desire psychology without being able to explain animal behavior in the same way. If folk psychology fails in one case, it appears to fail in the other.

The second possibility is that animals do not have belief. It is impossible to ascribe content to animal beliefs; therefore, it is meaningless to talk about animals having belief. If a dog has no concept of what a bone is, then it is impossible to say that the dog has any beliefs at all about bones. Without language, it is impossible to ascribe belief to animals. This begs the question of whether language actually enables us to ascribe content to beliefs accurately. Still, if we can’t ascribe content to the beliefs of animals, then we may run into trouble ascribing content to the beliefs of humans.

Stich presents the solution offered by David Armstrong. According to Armstrong, although animals lack the concepts we have, we can ascribe content to animal beliefs in a “referentially transparent” (de re) manner. A dog may respond to a bone in the same manner we would expect it to respond if it had our concept |bone|. Armstrong acknowledges that we cannot talk about animal beliefs in a way that is “referentially opaque” (de dicto). In order to do this, we would have to know that the dog had a concept analogous to our concept of |bone|, which is impossible. Armstrong claims, however, that the dog does have a concept of |bone| with determinate (de dicto) content, and enough research into animal psychology might eventually give us insight into animal concepts. For Armstrong, our de re discussions of animal concepts presuppose that there are correct de dicto beliefs on board the animal that correspond to our de re descriptions. If no correct de dicto concepts exist, then our efforts are only a way of describing animal behavior, not a way of understanding animal belief (19-21).

On Armstrong’s view, eventually we will gain enough knowledge of animals to accurately ascribe content (de dicto) to animal beliefs. Stich’s most serious objection to Armstrong’s argument is that we can only ascribe contents of beliefs to subjects that “have a broad network of related beliefs that are largely isomorphic to our own” (27). We cannot ascribe content to the beliefs of any being that does not share our concepts, and we have no way of knowing what concepts animals share. For example, even if we understand all the conditions necessary for a dog to react to a bone in front of him, it will make no sense to say, “Fido believes a bone is in front of him,” unless we assume Fido has a concept for “in front of,” among others. Following Armstrong’s suggestion, it may be possible to determine exactly how a dog would react to a bone or bone-like object in every conceivable situation. We can predict with 100 percent accuracy the behavior of the dog. We may identify all the properties of the human concept |bone| and all the properties of the dog concept |bone’|. We’re not out of the woods, though, as the concept |bone’| is not the dog’s concept but our concept of the properties of the dog’s concept. We still don’t know what concept the dog has on board.

For Stich, a larger problem may be that we do not know what concepts other humans share. If we follow the reasoning that we can only claim beings have beliefs if they have specifiable content and that content is only specifiable if they have concepts isomorphic to our own, we are in a position of implying that humans with concepts radically different from our own have no beliefs at all. Examples of such humans would include people from different times or cultures. Indeed, anyone from a different language community would be in danger of being declared to be wholly without beliefs.

Stich concludes that it is impossible to decide whether a belief without specifiable content is a belief at all, and it is impossible to verify content for either human or non-human animals. He claims, “If we are to renovate rationally our pre-theoretic concept of belief to better serve the purposes of scientific psychology, then one of the first properties to be stripped away will be content” (27). Folk psychology, based on the attribution of content to beliefs and desires, is inadequate for a scientific account of belief.

Belief and Other Minds

If there is any possibility of accurately attributing belief to any other minds, it would seem that human minds, with a capacity for human language, would be the best hope. We recognize that a human can have a mind full of desires, beliefs, and rational arguments without ever expressing them. In Kinds of Minds, Daniel Dennett points out that we sometimes have beliefs and desires that go unexpressed, and we can imagine never expressing any of them, or at least misleading people as to what they are. Actually ascribing content to the beliefs of humans is risky business, then, but at least we feel confident that humans are generally able to communicate beliefs and desires roughly isomorphic to our own. We believe humans have minds, and their use of language is the best evidence of it (Kinds 12).

Because humans use language, we show them greater moral concern than we show other animals. The closer their language is to our own, the more concern we show them. Wittgenstein famously said that if a lion could talk, we couldn’t understand it. Dennett suggests that this lion would be the first in history to have a mind and would be a unique being in the universe. We assume that any animal that can use language in the manner of humans has a mind (Kinds 18).

The problem with this assumption is that we might be easily fooled. Another human may use language in exactly the same way that I do, express all the beliefs I have, exhibit all the behavior I exhibit, and perhaps be acting deceptively or robotically. When serial killers and pedophiles are arrested, interviews with friends, family members, and coworkers generally reveal that they had made grandly mistaken ascriptions of beliefs and desires to the criminals. It is the trust we place in members of our language community that enables us to be duped in such horrendous ways. We should perhaps be less confident that members of our language community have beliefs and desires isomorphic to our own.

But even if some members of the language community are deceptive, surely they at least have minds—at least have some beliefs and desires, even if we can’t know the content. If we encounter a robot with a human appearance and the ability to use human language effectively (something like the fictional Stepford Wives), would we assume the robot to have a mind? Such robots are being developed, but none exists (see Dennett’s discussion of Cog[1] in Kinds of Minds, page 16), so the questions can’t be answered empirically. While developing such a robot, we may come to understand exactly how a mind develops and comes into being. On the other hand, it is possible to imagine such a robot existing with no mind and no human feeling at all. If we can imagine a robot as an automaton, why not imagine that at least some humans are automata? Perhaps their use of language is as unconscious as our basic reflexes. Their bodies simply produce language naturally with no self-awareness and no beliefs and desires. While we assume this is not the case, it is impossible to determine this with any certainty.

What We Know of Our Own Minds

If nothing else is certain, we must know the contents of our own minds. Descartes was unable to doubt the existence of his mind, and it seems quite impossible for me to doubt the thoughts I am thinking right now. As I produce thoughts, I am aware of them, and it is impossible for me to escape them. My thoughts, formed by language, express the contents of my beliefs and desires precisely, because that is how I have intended to express them to myself. I can’t imagine I am deceiving myself or that I am an automaton. I am a thinking being immersed in my conscious life. If the language I use in thinking expresses my beliefs accurately and rationally, then this is what enables me to develop moral principles and behave in a morally responsible manner.

But what of our “unconscious” thoughts? Hume demonstrated that our belief in cause and effect seems to exist in a precognitive state. We don’t use language and reason to develop a belief in cause and effect—in at least some cases, language merely expresses what is built into us. Our moral reasoning, though, is based on careful consideration and tediously crafted arguments. Surely our language is not expressing a precognitive instinct or intuition. In Kinds of Minds, Dennett quotes Elizabeth Marshall Thomas saying, “For reasons known to dogs but not to us, many dog mothers won’t mate with their sons” (10). Dennett rightly questions why we should assume that dogs understand this behavior any better than humans understand it. It may just be an instinct, produced by evolution. If the dog had language, it might come up with an eloquent argument on why incest is wrong, but the argument would seem superfluous—just following the instinct works well enough.

By the same token, human moral arguments may do nothing more than express or at best buttress deeply held moral convictions instilled by evolution or experience. In a Discover magazine article titled “Whose Life Would You Save?” Carl Zimmer describes the work of Princeton postdoctoral researcher Joshua Greene. Greene uses MRI brain scans to study what parts of the brain are active when people ponder moral dilemmas. He poses various dilemmas familiar to undergraduate students of utilitarianism, the categorical imperative, or other popular moral theories.

He found that different dilemmas trigger different types of brain activity. He presented people with a number of dilemmas, but two of them illustrate his findings well enough. He used a thought experiment developed by Judith Jarvis Thomson and Philippa Foot. Test subjects were asked to imagine themselves at the wheel of a trolley that will kill five people if left on course. If it is switched to another track, it will kill one person. Most people respond that they would switch to the other track, saving five lives at the cost of one, apparently invoking utilitarian principles. In the next scenario, they are asked to imagine they can save five people only if they push one person onto the tracks to certain death. Far fewer people are willing to say they would push anyone onto the tracks, apparently invoking a categorical rule against killing innocent people. From a purely logical standpoint, the two questions should have consistent answers.

Greene found that some dilemmas seem to evoke snap judgments, which may be the product of thousands of years of evolution. He notes that in experiments by Sarah Brosnan and Frans de Waal, capuchin monkeys that were given a cucumber as a treat while other monkeys were given grapes would refuse to take the cucumbers and sometimes would throw them at the researchers. Brosnan and de Waal concluded that the monkeys had a sense of fairness and the ability to make moral decisions without human reasoning. Humans may also make moral decisions without the benefit of reasoning. It appears evolution has created in us (at least in those who are morally developed) a strong aversion to deliberately killing innocent people. Evolution has not prepared us for other dilemmas, such as whether to switch trolley tracks to reduce the total number of people killed in an accident. These dilemmas result in logical analysis and problem solving. Zimmer writes, “Impersonal moral decisions . . . triggered many of the same parts of the brain as nonmoral questions do (such as whether you should take the train or the bus to work)” (63). Moral dilemmas that require one to consider actions such as killing a baby trigger parts of the brain that Greene believes may produce the emotional instincts behind our moral judgments. This explains why most people appear to have inconsistent moral beliefs, behaving as a utilitarian in one instance and as a Kantian the next.

It may turn out that Hume was correct when he claimed, “Morality is determined by sentiment. It defines virtue to be whatever mental action or quality gives to a spectator the pleasing sentiment of approbation” (Rachels 63). His claim is that we evaluate actions based on how they make us feel, and then we construct a theory to explain our choices. If the theory does not match our sentiment, however, we modify the theory—our emotional response seems to be part of our overall architecture. The work of philosophers, then, has been to construct moral theories consistent with our emotions rather than to provide guidance for our actions.

Language gives us access to our conscious thought. Language permits us to be aware of our own existence and to feel relatively assured that other minds exist as well. It is through language that we make sense of ourselves and the world. We may be deceived, though, into thinking that thought is equivalent to conscious thought. Much of what goes on in our mind is unconscious. Without our awareness, our mind attends to dangers, weighs risks, compensates for expected events, and even makes moral judgments. Evolution has provided us with a body that works largely on an unconscious level. However, humans, and perhaps some nonhuman animals, have become aware of their own thoughts, and this awareness has led to an assumption of moral responsibility. This awareness should not be taken to prove that we are aware of the biological facts that guide our moral decisions.

Stephen Stich explores the development of moral theory in his 1993 paper titled, “Moral Philosophy and Mental Representation.” In the essay, Stich claims that while most moral theories are based on establishing necessary and sufficient conditions for right and wrong actions, humans do not make mental representations based on necessary and sufficient conditions. He says, “For if the mental representation of moral concepts is similar to the mental representation of other concepts that have been studied, then the tacitly known necessary and sufficient conditions that moral philosophers are seeking do not exist” (Moral 8). As an alternative, he suggests that moral philosophers should focus on developing theories that account for how moral principles are mentally represented. He writes:

These principles along with our beliefs about the circumstances of specific cases, should entail the intuitive judgments we would be inclined to make about the cases, at least in those instances where our judgments are clear, and there are no extraneous factors likely to be influencing them. There is, of course, no reason to suppose that the principles guiding our moral judgments are fully (or even partially) available to conscious introspection. To uncover them we must collect a wide range of intuitions about specific cases (real or hypothetical) and attempt to construct a system of principles that will entail them. (8)

On this view, moral theories represent beliefs that are not only unconscious but are unavailable to the conscious mind. In order to make a determination of the content of our own moral beliefs, then, we must examine our own moral decisions and infer the content of our beliefs. In this approach, we find that humans are deciphering their own beliefs in much the same manner that Brosnan and de Waal determined the moral beliefs of capuchin monkeys. Not only does language fail to give a full accounting of our belief states, but our conscious thoughts may be an impediment to determining our actual beliefs, so that we must consider prelinguistic or nonlinguistic cues to discover what we actually believe.

Conclusion

When we ascribe content to the beliefs of other beings, including human beings, we assume those beings have mental experiences roughly isomorphic to our own. Based on our own experiences and beliefs, we make inferences about the beliefs of other beings. The more a being resembles us, the more confident we are in making such inferences. As a result, we are most comfortable ascribing contents to the beliefs of humans who speak the same language we speak. We are even more comfortable if the person is of the same gender and social class. Even in these cases, though, we may be too optimistic. Our own beliefs may be as inaccessible to us as the beliefs of our distant neighbors or monkeys or lobsters. Ascribing content to beliefs may be futile. On the other hand, we seem to survive quite well assuming that we know our own beliefs and that others have beliefs that are more or less transparent to us. We may be able to use the language of belief/desire psychology as a heuristic to help us understand, manipulate, and cope with our behavior and the behavior of others. Although language is a distal feature of consciousness and may not accurately determine the content of our beliefs, language may enable us to join a community of thinkers and form successful relationships with other beings.


Works Cited

Dennett, Daniel C. Brainchildren. Cambridge: MIT P, 1998.

—. Kinds of Minds. New York: Basic Books, 1996.

Descartes, Rene. The Philosophical Writings of Descartes: Volume II. Trans. John Cottingham, Robert Stoothoff, and Dugald Murdoch. Cambridge: Cambridge UP, 1985.

—. The Philosophical Writings of Descartes: Volume III. Trans. John Cottingham, Robert Stoothoff, Dugald Murdoch, and Anthony Kenny. Cambridge: Cambridge UP, 1991.

Hume, David. An Enquiry Concerning Human Understanding. Vol. XXXVII, Part 3. The Harvard Classics. New York: P.F. Collier, 1909–14; Bartleby.com, 2001. www.bartleby.com/37/3/. [May 11, 2004].

—. “Morality as Based on Sentiment.” The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw Hill, 2003.

Searle, John. “Is the Brain’s Mind a Computer Program?” Reason at Work. Eds. Steven Cahn, Patricia Kitcher, George Sher, and Peter Markie. Wadsworth, 1984.

Stich, Stephen P. “Moral Philosophy and Mental Representation.” The Origin of Values. Ed. Michael Hechter, Lynn Nadel, and Richard E. Michod. New York: Aldine de Gruyter, 1993. 215-28. http://ruccs.rutgers.edu/ArchiveFolder/Research%20Group/Publications/MPMR/MPAMR.html. [May 11, 2004].

—. “Do Animals Have Beliefs?” Australasian Journal of Philosophy 57.1 (1979): 15-28.

Zimmer, Carl. “Whose Life Would You Save?” Discover Apr. 2004: 60-65.


[1] Dennett is working with Rodney Brooks, Lynn Andrea Stein, and a team of roboticists at MIT to develop a humanoid robot named Cog. Dennett says, “Cog is made of metal and silicon and glass, like other robots, but the design is so different, so much more like the design of a human being, that Cog may someday become the world’s first conscious robot.” (Kinds 16)

The Morality of Tragic Pleasure

Randall Horton

Human enjoyment of intense emotional pain is a paradox that remains to be resolved or eliminated. Aristotle attempted to give an account that defended the morality of enjoying tragic pleasure. Later philosophers, such as Hume, have claimed that the pleasure evoked by fiction is pleasure in the artistry and not in the negative emotions produced. Modern philosophers have attempted to eliminate the paradox by claiming either that there is no pleasure derived from tragedy or that painful emotions themselves are pleasurable. In order to explain the paradox, or to determine whether there is an actual paradox, it is necessary to compare painful emotions evoked by fiction, non-fiction, and actual experience. I will argue that both painful and pleasurable emotions have the same physiological characteristics and that the perception of pleasure or pain is a cognitive decision contingent on one’s moral framework.

Human life, perhaps all sentient life, is an extended process of loss, grief, pain, and sorrow. Suffering is so abundant that Arthur Schopenhauer felt it must be the purpose of existence, saying, “If the immediate and direct purpose of our life is not suffering, then our existence is the most ill-adapted to its purpose in the world” (41). Many may reject this view as overly pessimistic, but few live to old age without adopting it. Pain is, of course, necessary to survival. Those who do not feel pain are unable to develop appropriate behavioral responses to the world. An absence of physical pain makes it impossible to know, for example, when one has a broken arm. An absence of emotional pain is associated with severe criminal pathology.

As a result, moral philosophies must provide an account for human suffering and its role in the development of human behavior and experience. Jeremy Bentham contends that, whether we admit it or not, we live our lives in avoidance of pain and in pursuit of pleasure; this being the way of the world, we should adopt reduction of pain and promotion of pleasure as our guiding moral principles. He said, “In words a man may pretend to abjure their empire: but in reality he will remain subject to it all the while. The principle of utility recognizes this subjection, and assumes it for the foundation of that system, the object of which is to rear the fabric of felicity by the hands of reason” (507).

Using a slightly different approach in “There Is a Reason Why God Allows Evil,” John Hick argues that life would not be worth living without the appropriate amount of suffering. A world without suffering could have no pleasure, as we could not know what it is. Similarly, without the urge to reduce our suffering, we could not develop morally or spiritually, as we would have nothing to guide our actions. In short, a universe with no suffering would be a Godless universe. He notes, “Such a world, however well it might promote pleasure, would be very ill adapted for the development of the moral qualities of human personality. In relation to this purpose, it would be the worst of all possible worlds” (115).

Indeed, individuals who have experienced great suffering tend to view those who have led lives free from loss and pain to be less developed morally. While we claim to want to avoid suffering as much as possible, we also tend to think suffering improves us in many ways. Because we suffer, we become more compassionate, we appreciate life’s pleasures more, we love more deeply, and we gain a deeper and richer understanding of beauty.

It is no surprise, then, that our art and drama reflect the great depths of human suffering that we experience. But what do we hope to gain from artistic depictions of human suffering and toil, and could we not gain the same benefits from actual suffering and toil? The great paradox, of course, is that art not only reflects suffering but creates it, and we seem to take pleasure in generating our own pain, quite contrary to Bentham’s claims. One possibility is that we take pleasure in tragic events because it is good for us. Enjoyment of tragic events may be an element of human experience that has enabled our species to survive. Most philosophers have insisted that we are only able to enjoy the suffering provoked by art because we know we can escape it; we know that it is not real. The claim is that real suffering creates no pleasure and that we desire to avoid as much actual suffering as possible. Yet it is at least conceivable that there is an element of pleasure in some actual suffering and that we are loath to acknowledge it, as doing so would indicate a callous disregard for the suffering of others. With artistic suffering, we are free to admit to our enjoyment, as no one is getting hurt.

Aristotelian Tragedy and Moral Development

For Aristotle, it seems, only in art can tragedy promote morality. However, painful events may provide an opportunity for the moral man to demonstrate his achievement of goodness. Aristotle says, “If great events turn out ill they crush and maim happiness; for they both bring pain with them and hinder many activities. Yet even in these nobility shines through, when a man bears with resignation many great misfortunes” (Introduction 325). The tragic event does not promote morality but might provide an opportunity to demonstrate it. At the same time, tragic events will not destroy the happiness of an individual of moral worth. He says, “The happy man can never become miserable—though he will not reach blessedness, if he meet with fortunes like those of Priam” (Introduction 326). Aristotle is clear on our relation to actual pain. He notes, “It is agreed that pain is bad and to be avoided; for some pain is without qualification bad, and other pain is bad because it is in some respect an impediment to us” (Introduction 467). Pain is only good when it helps to make us virtuous, for “pain is used as an instrument of punishment” (Introduction 335). But even if actual pain is not associated with pleasure and is only useful as a punishment, we take delight in seeing representations of images and events that would ordinarily cause us pain. He says, “We delight in looking at the most detailed images of things which in themselves we see with pain” (Poetics 4). He continues that we enjoy such representations because “learning is most pleasant, not only for philosophers but for others likewise (but they share in it to a small extent)” (Poetics 4). So, by watching representations of painful events, we can learn, which promotes happiness and pleasure, even though actual painful events do not have this advantage.

For Aristotle, tragedy can only provide the appropriate experience if it is plausible. It is plot, rather than character, that makes a tragedy plausible. Unlike the historian, the tragedian must “relate not things that have happened, but things that may happen” (Poetics 12). Still, the poet may refer to actual events because “things which have happened are obviously possible—they would not have happened if they were impossible” (Poetics 12). This allows room for plots to depict pain. Events that would produce pain if experienced can indirectly produce pleasure and education when properly represented through poetry.

It is important to note that for Aristotle, tragedy must not simply be a representation of terrible events that happen to passive characters. There is no education, certainly no moral lesson, to be gleaned from randomly occurring tragic events. As Cynthia Freeland notes in “Plot Imitates Action: Aesthetic Evaluation and Moral Realism in Aristotle’s Poetics,” “When he says in the Poetics that the change in fortune of a tragedy will occur owing to a frailty or mistake (hamartia) of the hero, he is emphasizing that tragic unhappiness requires the agent’s contribution” (119). It is essential for Aristotle that the agents cause, but do not deserve, the horrible events they endure. Only plots of this nature provide katharsis.

Katharsis is variously described as a purgation of negative emotion, a purification, or an education of the emotions. In his essay titled “Katharsis,” Jonathan Lear provides an intriguing argument that all these interpretations are incorrect. At the risk of oversimplifying his argument, he claims that all previously held interpretations of katharsis imply that tragedy should remedy some deficit in the audience. He sees these possibilities as unlikely, as Aristotle most likely considers the audiences of tragedies to be both educated and virtuous. Lear’s alternative account is that we all fear tragedy may befall us; even though it seems highly unlikely that we will kill one parent and marry the other without realizing it, extremely tragic events do happen to people who are quite virtuous. Further, tragedians seek to remind us that we are all subject to the possibility of tragedy. As the Chorus from King Oedipus tells us, “None can be called happy until that day when he carries his happiness down to the grave in peace” (68). As I write, Elizabeth Smart has just been returned to her parents after being abducted by a homeless man whom Elizabeth’s mother had hired and brought to their home as an act of compassion. The mother paid dearly for her efforts: her hamartia resulted in a nine-month ordeal for her daughter and family. The nation has watched with rapt attention, and we all wonder how we would handle such a situation. Lear suggests that a fictional tragedy gives us an opportunity to experience real tragedy without facing an actual loss, providing moral development that can promote happiness. Lear points out that Aristotle, in the Rhetoric, says that those who endure great calamities no longer feel fear. Viewing fictional tragedy, then, gives us the opportunity to imagine ourselves in a position of having nothing more to fear (335).

Why is it valuable to have this vicarious experience of tragedy? For Lear, fictional tragedy permits us to explore how to behave with dignity in the face of real trouble. It also teaches us that the world can remain rational and meaningful in spite of disastrous events. Being able to experience the feelings of pity and fear that lead to fearlessness without the event actually happening brings both relief and pleasure, i.e., katharsis. Of course, as Aristotle points out, we also derive pleasure from the fact that the poet is able to evoke such powerful emotions of pity and fear while presenting a rational and morally worthy plot.

The story of Elizabeth Smart will likely be badly rendered in a rush-to-completion movie for television in the near future. The simple fact that the story is true (or at least based on truth) will not be enough to permit audiences to derive pleasure from it. A fictionalized account produced by great writers and directors would likely have much greater force and provide more pleasure. In such a case, the pleasure would be derived from the artistic bravura of the artists involved in the production.

Nonetheless, the experience of Elizabeth’s family taught us all that it is possible to face such tragedy with great courage and dignity (some may disagree, but I think most felt the Smarts behaved admirably in the aftermath of the tragedy). The Smarts knew they would be blamed for Elizabeth’s abduction. They invited strangers to the house and left the house unsecured. We admire them for admitting their mistakes and asking for compassion. We do not require that they do anything so drastic as gouge out their eyes. They deserve reprieve. As we watch, we learn that the world continues to operate in a rational and moral manner in spite of great tragedy. We learn that it is possible to behave nobly in the face of tragedy. The factual event provides all the advantages of a fictional one except for the appreciation of the artistic rendering. For the Aristotelian, the experience of tragedy can reinforce a rational view of the world.

This account works to a point, but Lear and Aristotle both leave us with an incomplete view of the distinction between fictional tragedy, reported tragedy, and our own tragedy. I’m sure the Smarts would scoff at anyone who claimed to have benefited from the Smarts’ experience of tragedy. To claim that we no longer fear the abduction of our own children because we’ve vicariously lived through it with the Smarts would seem callous and naïve. We also have no admiration for the person who says, “I’ve never experienced a great tragedy such as yours, but I did watch a movie about it.” Whatever benefit fictional tragedy provides for our view of the world as rational and moral, the benefits are greatly multiplied by the experience of actual tragic events. While fictional tragedy may enable us to imagine ourselves facing tragedy with dignity, only actual tragic events test our mettle. It is a folk cliché to say, “You never know how you will act until you are in the situation,” but it is folk wisdom that is often corroborated by experience. This is not to completely negate Lear’s point, however. Seeing that tragedy can be endured with dignity can bring pleasure and reassurance, but it cannot relieve us of our fear nor provide the moral education of actual tragedy. Fiction can supplement life experience, but it is no substitute for it.

To clarify the important distinctions between fiction, non-fiction, and actual experience, it may help to explore whether non-fiction accounts of tragic events could be tragic in the Aristotelian sense. In considering whether non-fiction tragedies can meet Aristotle’s definition of tragedy, Stacie Friend claims in “The Pleasures of Documentary Tragedy” that non-fiction tragedies can provide what Aristotle would deem a proper pleasure from tragedy. She avoids the problem of determining what Aristotle meant by proper pleasure, or katharsis, by focusing instead on what features Aristotle said a tragic plot should have. Aristotle said that the unified plot necessary to tragic pleasure is not found in real life, but Friend claims that any account, fiction or non-fiction, that fits Aristotle’s definition of a proper tragedy would by default have to provide the pleasures appropriate to tragedy.

Technology such as the video camera has made it possible to create non-fiction tragedies. Friend says, “What is important for my purposes is the fact that documentary footage of this kind can be edited—this opens up the possibility of imposing, on real events, just the sort of narrative structure Aristotle counts as necessary to producing tragic pleasure” (2). Without being able to describe such proper pleasures, we can still deduce that particular tragedies must provide them. Her example of a non-fiction tragedy is the documentary Startup.com, which is probably not the best example for her purposes. Startup.com, the 2001 documentary directed by Chris Hegedus and Jehane Noujaim, covers the rise and fall of two young entrepreneurs who start an Internet company. Friend notes that the documentary is edited to create a plot structure that fulfills Aristotle’s requirements for a tragedy, but she overlooks some of the other requirements. It is true that Aristotle says tragedies must be plot driven, with characters being only supplementary, but he also insists that the protagonist be morally equal or superior to the audience. On this count, Startup.com is more of a morality play than a tragedy. Internet startups of the 1990s were characterized by overconfidence and greed, and this company was no exception. Apart from those who participated in the high-tech craze of the 1990s, people were happy to see these young and brash business people get what they deserved. While many may feel envious of the business acumen of the characters, few would feel morally inferior. Friend notes that the producers of the documentary hoped the business and the friendship of the two partners would fail. It is hard to believe the documentary was not intended to appeal to the indignation and even schadenfreude of those of us who missed out on the heavy profit-taking of that period.

Nonetheless, Friend’s general point that real life tragedies can be created by video editing may have some value. Reporting of the Elizabeth Smart abduction has already reflected many elements of tragedy, and the family has received hundreds of offers to retell their story through various means. The cases of Elizabeth Smart, Andrea Yates (who drowned her children), Clara Harris (who killed her husband), and others have demonstrated that modern news programs, especially so-called television news magazines, edit news stories to have a plot structure similar to tragedy’s, and audiences obviously find pleasure in them. Stories that provoke pity and fear are especially popular. These programs edit stories to present universals, rather than particulars. The audience is expected to identify with the protagonist and realize that the events could happen to anyone. The reports of child abductions in general, and of Elizabeth Smart in particular, have been quite successful at arousing pity and fear. On this account they may produce what Aristotle considered proper pleasure, but if Lear is correct, these news reports are severely lacking. The reports do not help us to imagine ourselves without fear. On the contrary, parents around the country nearly reached a state of mass hysteria in the summer of 2002 in response to a bombardment of news reports on child abductions. It is interesting to note that parents were reacting to reporting of abductions, as the number of actual abductions was consistent with previous years. Witnessing a non-fiction account of the undeserved loss of a child’s life does not immediately reduce fear; however, retrospective reports may help an audience experience the kind of relief that Lear describes, if the outcome is perceived to be rational and those affected are seen to be virtuous. The Smart case fulfills this purpose, but other circumstances could change that. In some cases, it may be that the public blames the abduction on the parents’ immoral behavior or even on the child. Parents and children who are perceived to be sexually active or active drug users gain little sympathy from a judgmental public. The value of such moral judgment for the public is that it enables us to feel protected from such bad events, believing such horrors only happen to bad people. This is consistent with Aristotle’s moral view of tragedy.

Aristotle insists that virtue is the result of practice. He says, “We become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts” (331). If this is the case, then viewing fictional tragic events permits us to practice virtue without experiencing actual suffering, which leads us to virtue only by showing us what to avoid. Lear also sees this advantage to tragic fiction. Actually experiencing tragic events would provide better practice, to be sure, but fictional accounts may help us strengthen our virtuous response prior to the experience of an actual tragedy. Knowing that we are virtuous and respond appropriately to tragedy will give us a sense of well-being. Facing our fear of the greatest loss imaginable, even the fear that we may cause the death of our loved ones, can help us to become fearless. Non-fiction accounts of tragedy might provide the same opportunity, but this would be using pain to promote good, which would seem contrary to Aristotle. Similarly, experiencing actual tragic events would provide the opportunity to practice virtue, but at the cost of experiencing great pain, which is to be avoided. Even if experiencing painful events can help develop a virtuous response, it is unlikely that Aristotle would consider any pleasure derived from them to be a proper pleasure. Friend emphasizes that Aristotle’s insistence that tragedy must be fiction is “not based on moral qualms about the enjoyment of actual suffering” (2). But the purpose of pain, according to Aristotle, is to guide us to a virtuous life by showing us what to avoid. For Aristotle, we take pleasure in the morally appropriate response to painful events. To derive pleasure directly from actual pain would be wholly immoral. Some people do, in fact, take pleasure from pain, either their own pain or the pain of others, but this is obviously outside the Aristotelian moral framework. It may be that audiences take pleasure, or admit to it, only from fictional tragedies because it is morally permissible to do so. After viewing a production of King Oedipus, one might proudly declare, “I thoroughly enjoyed that performance.” After viewing a documentary of the same events, any moral person would be extremely reluctant to admit to taking great enjoyment from the experience, though one might express an appropriate moral response. Still, it would seem in bad taste at best to express pleasure at having the opportunity to flex one’s moral muscle. Expressing delight in experiencing the pain provoked by an actual experience of a tragic event would be even more shocking. Aristotle and Lear give insight into how it can be moral to take pleasure from a representation of a tragic event, but the complex relationship between fiction and reality needs further exploration. Hume, in his essay “On Tragedy,” explores the paradox further and considers the effects of actual tragic events on individuals.

Hume and the Pleasure of Tragedy

Hume’s exploration of the pleasure of tragedy ostensibly ignores the moral significance of the paradox of tragedy. He asks simply how it is that audiences can take such great satisfaction from such painful experiences as those produced by the best tragedies. One possibility, he notes, is that we simply enjoy being moved to a highly excited state, whether we are experiencing pain, joy, or sadness. All these emotions are preferable to languor. He quotes Jean-Baptiste Dubos as saying, “Let it be disagreeable, afflicting, melancholy, disordered; it is still better than that insipid languor, which arises from perfect tranquility and repose” (699). Hume rejects this explanation, though, as he says actual suffering is much more efficient at producing extreme emotions, but most people avoid seeing or experiencing actual suffering.

He suggests, then, that fictional tragedy provides not only an excitement of the emotions but also an appreciation for the talent of the author or players. He says,

This extraordinary effect proceeds from that very eloquence, with which the melancholy scene is represented. The genius required to paint objects in a lively manner, the art employed in collecting all the pathetic circumstances, the judgment displayed in disposing them: the exercise, I say, of these noble talents, together with the force of expression, and beauty of oratorical numbers, diffuse the highest satisfaction on the audience, and excite the most delightful movements. By this means, the uneasiness of the melancholy passions is not only overpowered and effaced by something stronger of an opposite kind; but the whole impulse of those passions is converted into pleasure, and swells the delight which the eloquence raises in us. (701)

Here Hume echoes the Aristotelian view that we take delight in representation. When we watch a well-written and acted tragedy, our emotions are moved with great force. As we are aware that the tragedy is a fiction, we admire the talents of the author and player, and therefore we are able to transfer our excitement from suffering to pleasure. The more forcefully our emotions have been moved, the greater our pleasure will be. This would not be possible in a non-fiction account of tragedy. We appreciate the art, then, and not the negative emotions. This account seems incomplete at best, for the quality of the artistic expression is judged by its ability to produce a profound effect on the emotions. Without producing negative emotions, no tragedy would be judged to be well-written. It is the powerful emotion audiences seek, and such power requires artistic quality. A disappointed member of an audience of a tragedy will often say, “I felt nothing for the characters,” demonstrating that a powerful emotion was sought.

Hume argues to the contrary. He says, “However we may be hurried away by the spectacle, whatever dominion the senses and imagination may usurp over the reason, there still lurks at the bottom a certain idea of falsehood in the whole of what we see” (700). Knowledge that the representation isn’t true is enough to permit the audience to convert pain into pleasure. Most viewers of powerful tragedies will say they were so moved they forgot it was only a play or movie. The ability to actually feel the powerful emotion without being aware that it is fiction is what delights us most. When the emotions are too intense, some will actually repeat to themselves, “It is only a movie.” If the representation were of an actual tragedy, Hume claims, we would not be able to convert the pain into pleasure. To illustrate this point, he says we would not use our artistic skills to comfort the parents of a dead child:

Who could ever think of it as a good expedient for comforting an afflicted parent, to exaggerate, with all the force of elocution, the irreparable loss which he has met with by the death of a favorite child? The more power of imagination and expression you here employ, the more you increase his despair and affliction. (704)

Though we may not always use great artistic expression to bring pleasure to the parents of a deceased child, we certainly do express their loss in the strongest terms possible. This is the purpose of eulogy—to use all the “force of elocution” to state the great loss that has been suffered. While none would call it pleasure, afflicted parents feel entitled to feel the full force of their grief. Suffering an “irreparable loss” is far different from experiencing grief for a fictional character or even an unknown, but real, person such as Elizabeth Smart.

Hume further muddies his claim with a criticism of what he considered to be shocking British theatre of his time. He mentions the shocking image of an old man crashing into a pillar and smearing it with brains and gore. Such an image, he says, cannot convert to pleasure. In reality, though, he seems to be claiming that it should not convert into pleasure. He says:

The mere suffering of plaintive virtue, under the triumphant tyranny of vice, forms a disagreeable spectacle, and is carefully avoided by all the masters of the drama. In order to dismiss the audience with entire satisfaction and contentment, the virtue must either convert itself into a noble and courageous despair, or the vice receive its proper punishment (704).

There seems to be an implicit acknowledgment that audiences can be quite moved by spectacle such as that described above. Further, audiences might enjoy such arousal immensely. Hume finds such pleasure unsavory and, indeed, immoral. Here, Hume is in agreement with Aristotle. In order to be morally appropriate, the virtuous must endure a “noble and courageous despair.” Hume ignores the fact that some audiences thrill to the excitement of spectacle that is completely contrary to the moral framework he describes. One can only imagine how he would react to the popularity of the video series, Faces of Death, and web sites devoted to video footage of carnage and gore. It is certain that audiences take pleasure from witnessing such horrible images, and such pleasure could never be considered moral within an Aristotelian framework. Hume’s disgust with such spectacle indicates his agreement with Aristotle, but it leaves unanswered the paradox of how audiences can enjoy horrible events. In “Real Horror,” Robert Solomon claims there is no paradox—audiences do not enjoy painful emotions.

The Effects of Real Horror

Robert Solomon draws a distinction between the negative emotions produced by art and the negative emotions produced by actual events. To illustrate his point, he describes the feelings most Americans felt while watching images of the World Trade Center attacks. We felt fear, shock, grief, and probably some inexpressible emotions, but no one would describe the feeling as pleasure. It would be impossible, Solomon claims, to feel pleasure at such a spectacle. He does note that we wanted to see the images again and again, but the simple act of choosing to view something does not prove that one takes pleasure in the viewing. He says we would prefer that horrible events not occur, but if one has occurred, “we would rather see than not see” it (3). It is a way, he says, of managing grief and overcoming trauma.

However, if art-horror is pleasurable, it is “precisely because it is not horror” (3). In fact, Solomon notes (as have others before him) that when real horrors loom (during war or disaster), art-horror becomes more popular. Art-horror appears to be a way of escaping from real horror. Watching monsters on the screen seems pleasurable when trying to face actual monsters of nuclear war or terrorism. An Aristotelian, such as Lear, would probably be able to accept this view as well. Neither Aristotle nor Lear claimed that the negative emotions produced by fiction were equivalent to those of actual experience; they merely claimed that those emotions produced by fiction can aid the development of individuals in coping with actual events. Solomon rejects this notion, however, stating, “It is not the case that watching a movie about airline crashes, no matter how fictionalized, immunizes a nervous flyer before his or her upcoming trip” (14). Nor would a movie about child abductions protect us from fearing for our children. In the same vein, news coverage of Elizabeth Smart’s abduction did not lessen the fear of parents following the story.

If constant news coverage of tragic or horrible events does not prepare us for future events, why are we compelled to view repeated representations of such events? Solomon says, simply, “Trauma demands repetition” (15). He notes that the “decrease in pain and trauma should not be confused with pleasure” (15). News coverage of the WTC attacks and Elizabeth Smart’s abduction can help reduce the feelings of trauma, but provide neither pleasure nor preparation for personal tragedy or horror. As to desensitizing one to the pain of horrible events, Solomon claims, “Two instances of real horror may have such an effect on one another, but there is not much evidence that art-horror has much of an effect on the horror of real horror” (13). Since Solomon wrote his essay, actual events have supported his claim. In 1986, Americans were horrified as they watched the space shuttle Challenger explode on lift-off. The emotions we felt would surely count as horror by Solomon’s account. In 2003, we watched again as the space shuttle Columbia disintegrated. While most Americans responded to the Columbia accident with the appropriate levels of grief and concern, the trauma was not nearly so great. We can only guess that the next successful terrorist attack on American soil will produce a weaker response.

Solomon argues convincingly that the feelings we have when confronted with actual horrible events are not the same as those produced by fiction, but he dismisses too readily the appeal of the negative emotions produced by art. The giddy excitement one feels when watching a horror movie may not compare to the feelings of watching the WTC attacks, but the intense sadness one feels while watching a play such as Margaret Edson’s Wit is at least similar to the pain one feels while dealing with disease and death. The play features the final months in the life of a cancer patient, a retired professor of English. The main character, Vivian Bearing, faces dehumanization at the hands of selfish doctors while reflecting on her dehumanization of her own students. She not only faces the loss of life but a negative evaluation of the life she has lived. Although Edson’s writing is superb, such talent is probably not necessary in provoking negative emotions regarding cancer. Cancer deaths are so common that few adults have not either faced a diagnosis of cancer or lost a loved one to the disease. Even for one who is not cancer phobic, the self-evaluation at the end of life is a universal phenomenon for humans. We all go through constant reflection on life and, too often, we find our own lives lacking in some way. As we watch Vivian Bearing grapple with her past, we are reminded of the importance of our own decisions. It is extremely painful to view a quality production of Wit.

I saw the play in the company of a retired professor of classical languages who was so distressed that he could not discuss it afterwards and declined to join the group for coffee. His pain was not make-believe pain, and he was not more interested in the quality of writing than he was in his pain. Emotional pain is the central characteristic of watching a play such as Wit. Quality writing, acting, and production abound in other forms of entertainment. Comedies, documentaries, and essays all afford wonderful opportunities for artistic expression. Also, we can pretend to be in pain by taking an acting class, attending a role-playing seminar, or other activities. It is the fact that the play provokes pain that makes it appealing.

It is not only drama that produces such profound emotions. We may feel intense sadness at looking at a painting or listening to music. We may appreciate the artistic accomplishment of a work that moves us to tears, but it is actually being moved to tears that attracts us. We don’t read tragedies because they are well written and happen to move us to tears. We read tragedies because they move us to tears by virtue of the fact that they are well written.

Cognitive, Emotional, and Moral Interaction

If it is the painful emotions we seek, though, why do we not take pleasure from actual horrible events? We can only take pleasure from emotions arising in the appropriate moral context. Indeed, there are people who take pleasure from negative events such as car crashes, homicides, or even natural disasters. We will say of such people, however, that they lack moral development. Our emotions are not value neutral; emotion is determined by a complex interplay of physiology and cognition. In his 1975 book, Mind and Emotion, George Mandler describes the interplay:

Emotional experience occurs by definition in consciousness. Both the perception of arousal and the results of cognitive interpretive activities are registered and integrated prior to or in consciousness. Output from consciousness is frequently coded by our language systems into socially sanctioned and culturally determined categories. (67)

Emotion is determined first by an autonomic (pre-cognitive) response to stimuli and then by a conscious interpretation of the physiological arousal. Whether a particular arousal is perceived as pain or pleasure is determined largely by one’s moral convictions. Art-induced emotions give one the freedom to experience negative emotions in an appropriate moral framework.

Occasionally, we may be surprised by what we find pleasurable, and giving an account of it can be a challenge. Cynthia Freeland confronts this challenge in her essay “Realist Horror”:

My own strategy of reading this genre [realist horror] involves me, admittedly, in a sort of tension: ideological critique focuses on problematic ways in which realist horror films create discourses of knowledge and power, serving conservative and patriarchal interests, and it is likely to produce a critical view of realist horror. But I have also tried to foreground the horror and mass media audience’s ability to produce subversive interpretations, acknowledging that viewers do indeed have a significant power and interpretive role in reading, and resisting, realist horror. (15-16)

It is possible, at least, to enjoy realist horror, she claims, from an appropriate moral standpoint. For those who are unable to find an appropriate moral grounding, such pleasure is impossible. Some will never be able to enjoy video recordings of actual human death and suffering, as their moral grounding prevents it.

In a related issue, some philosophers have followed Hume in suggesting that pain induced by art can be converted into pleasure. Modern psychology has given at least some support for this theory in the form of excitation transfer. According to this theory, excitation produced by one stimulus can later be attributed to another stimulus; thus one arousal can be interpreted as two distinct emotions. Psychologist Mary Beth Oliver discusses excitation transfer from sexual arousal to horror in her 1994 paper titled “Contributions of Sexual Portrayals to Viewers’ Responses to Graphic Horror.” She hypothesizes that “sexual scenes in horror films should increase arousal among viewers, and this arousal should serve to enhance fear in response to suspenseful or violent portrayals that follow” (3). Her research supported this hypothesis somewhat, but only for males and to a lesser degree than expected. Nonetheless, excitation transfer theory can give an account of how art-induced pain can “convert” to pleasure. In this view, one can experience intense pain while viewing or reading a particular work and subsequently reinterpret this arousal as pleasure.

This is not necessary, however, to account for our appreciation of negative aesthetics. Pain is an essential feature of the human experience. In order to understand our lives and world, we will seek pain, and, though it may seem a paradox, we are able to enjoy great emotional intensity. Whether this intensity can be perceived as pleasure depends on our morality. While we may sometimes be seeking a moral story, as in the case of tragedy, we sometimes seek a simple emotional experience. Some forms of music and painting provide such an experience. We don’t feel sadness because it is morally appropriate, but because it is morally permissible. We seek the painful experience and permit ourselves the indulgence so long as no one is actually getting hurt. Non-fiction accounts of tragic events may create negative emotions, but we would not describe the experience as pleasure; it would seem quite unsavory to say we “enjoyed” the actual suffering of an innocent human being. We might, however, have an appreciation for our understanding and reaction to a non-fiction story. We might be glad that we experienced it, and we might feel that we benefited from it. As Lear would point out, we might take solace in realizing that we can handle tragedy if called upon to do so. To really learn life’s lessons, however, we must experience actual painful events. We will not describe the feelings we have upon losing a loved one as pleasure under any circumstance. We may feel the experience has aided our moral development, but it is impossible, as Solomon would insist, to view such events as pleasure.

In all cases, though, our experience of pain is identical in physiology. The three cases differ only as a result of a cognitive interpretation. We feel an autonomic response and make a conscious decision to identify it as either pleasure or pain. In some cases, the pain is the same as pleasure, at least to the extent that we have chosen to feel the pain, and art gives us permission to do so. It is morality that guides our response to extreme events, and we would do well to follow Aristotle’s advice to train the emotions appropriately, so that we might live in a more compassionate world.


Works Cited

Aristotle. Introduction to Aristotle. Ed. Richard McKeon. New York: Random, 1947.

Aristotle. Poetics. Trans. Richard Janko. Indianapolis: Hackett, 1987.

Bentham, Jeremy. Qtd. in Philosophy: A Text with Readings. 7th ed. Ed. Manuel Velasquez. Belmont, CA: Wadsworth, 1999. 582-84.

Freeland, Cynthia. “Plot Imitates Action: Aesthetic Evaluation and Moral Realism in Aristotle’s Poetics.” Essays on Aristotle’s Poetics. Ed. Amelie Oksenberg Rorty. Princeton: Princeton UP, 1992. 111-32.

—. “Realist Horror.” Philosophy and Film. Ed. Cynthia Freeland and Thomas Wartenberg. New York: Routledge, 1995.

Friend, Stacie. “The Pleasures of Documentary Tragedy.” Author’s manuscript. ASA Program, Miami, 2002.

Hick, John. “There Is a Reason Why God Allows Evil.” Philosophy of Religion. Englewood Cliffs, NJ: Prentice Hall, 1963. Rpt. in Philosophical Questions. Ed. William Lawhead. Boston: McGraw Hill, 2003. 111-16.

Hume, David. “On Tragedy.” Problems in Aesthetics. Ed. Morris Weitz. New York: Macmillan, 1970. 699-705.

Lear, Jonathan. “Katharsis.” Essays on Aristotle’s Poetics. Ed. Amelie Oksenberg Rorty. Princeton: Princeton UP, 1992. 315-40.

Mandler, George. Mind and Emotion. New York: Wiley, 1975.

Oliver, Mary Beth. “Contributions of Sexual Portrayals to Viewers’ Responses to Graphic Horror.” Journal of Broadcasting & Electronic Media 38.1 (Wint. 1994): 1-17. EBSCO.

Schopenhauer, Arthur. Essays and Aphorisms. Trans. R. J. Hollingdale. New York: Penguin, 1985.

Solomon, Robert. “Real Horror.” Author’s manuscript. n.d.

Sophocles. The Theban Plays. Trans. E. F. Watling. New York: Penguin, 1978.