How to Grieve for a Child: Al-Kindi’s Advice

While philosophers seem to thrive on conflict and would really have nothing to say at all without substantial disagreements, they are remarkably consistent on how to respond to death, dying, and loss. Most recently, I have turned to the work of Al-Kindi, who lived from about 801 to 866 in Baghdad, for advice on how to respond to grief. Al-Kindi gives us the example of the mother of Alexander the Great.

As his death approached, Alexander wrote to his mother to prepare her for the loss of her child. As Al-Kindi tells it, Alexander said, “Do not be content with having the character of the petty mother of kings: order the construction of a magnificent city when you receive the news [of the death] of Alexander!” Everyone in Africa, Europe, and Asia should be invited to a great celebration of his life with one proviso: anyone struck by similar misfortune should not come. After his death, his mother was mystified that no one attended the celebration until someone pointed out to her that no one had ever escaped the type of misfortune she was experiencing, and those with similar losses had been told not to come.

Al-Kindi says Alexander’s mother exclaimed, “O, Alexander! How much your end resembles your beginning! You had wanted to console me in the perfect way for the misfortune of your death.” This story of consolation is similar to the Buddhist parable of Kisa Gotami, who lost her young son and was advised by the Buddha to collect a mustard seed from every family that had not lost a close relative. Of course, she was unable to find any family that had not faced loss, so she realized her suffering was universal and took comfort in the teachings of Buddhism.

German philosopher Arthur Schopenhauer, himself influenced by Buddhist texts, also points us to the suffering of others for comfort: “The most effective consolation in every misfortune and every affliction is to observe others who are more unfortunate than we, and everyone can do this. But what does that say for the condition of the whole?” Indeed, the suffering of others may make us feel petty for our complaints, but it does little to relieve our pessimism about life. But maybe we just cling to life too tenaciously.

Al-Kindi tells us that all our possessions are only on loan to us and that “the Lender has the right to take back what He loaned and to do so by the hand of whomever he wants.” He says we should not see our loss as a sign of disgrace; rather, “the shame and disgrace for us is to feel sad whenever the loans are taken back.” He is speaking of possessions in this instance, not of children, but I’ve heard many people say that our children are only “on loan” from God, who can call them home at any moment. I personally have never found any comfort in this, and I wonder whether anyone has ever felt the brunt of loss softened by the thought of a merciful God calling in His loans.

No matter what happens, Al-Kindi tells us we should never be sad, as sadness is not necessary and “whatever is not necessary, the rational person should neither think about nor act on, especially if it is harmful or painful.” Many philosophers echo this sentiment. We should trust that God has created a world that is perfect according to God’s design; therefore, we should accept the vicissitudes of life with equanimity. This advice is almost universally dispensed and almost universally not followed for a simple reason: sadness is really an involuntary reaction to loss and pain.

Al-Kindi tells us that death is not an evil, because if there were no death, there would be no people. By extension, if what is thought to be the greatest evil, death, is not evil, then anything thought to be less evil than death is also not evil. As such, we have no evil to fear in our lives. From these assertions, Al-Kindi claims that we bring sorrow to ourselves of our own will. A rational person would not choose such a form of self-harm, so depression and mourning can be controlled through the proper exercise of reason.

Most ancient philosophers, and many contemporary ones, will tell us that letting our rational nature rule our emotional nature will ease our pain in the face of loss. Certainly, a rational examination of death, life, and loss helps us to make sense of our suffering, but it does not eliminate suffering. In fact, if you see grief as a moral failing, which many thinkers have said it is, I believe your suffering is compounded. Grief, hard enough to bear on its own, becomes a catalyst for an explosion of guilt and shame.

While it is important to examine the causes of our suffering and explore what meaning loss brings to our lives, denying the necessity of grief is as useless as denying the necessity of breathing. While I can accept that Al-Kindi’s description of death is accurate, it only helps me come to terms with the prospect of losing my own life. For each of us, our own death brings a promise of relief, but the death of our loved ones only brings relief when they are so burdened by suffering that we can no longer bear to see life oppressing them.

Death is still an evil, because it robs me of the people that make my life meaningful. It threatens to rob me of the people, indeed, who may make my life bearable. It is possible to imagine that death is not an evil, but, more importantly, we must recognize that love is certainly a good, and to lose those we love is an excellent reason to mourn. Mourn freely, I say, without guilt and without shame.

Seeking God in Silence

Painter Fang Min has a series of paintings featuring Buddhist monks seeming happy enough despite an insect perched on or near their faces (you can see examples here and here). When I saw the exhibit in China, a small explanation accompanied the paintings. I don’t remember it in detail, and I can’t seem to find it anywhere online, but the story was fairly straightforward. It was about a monk who left the hustle and bustle of the city to seek peace and tranquility in the country only to find that his meditations were still disturbed by the sounds of the country: farmers working, livestock making noises, and so on. He retreated further away, deep into the woods, but still found the sounds of nature disturbing. Eventually, he fled deep inside a cave to find absolute quiet—except for the sound of a single insect. Frustrated that he was still unable to secure tranquility, he sought out the Buddha for advice. The Buddha told him, of course, that he must seek tranquility inside himself, not demand it from the world around him.

This reminded me of my experiences with the Religious Society of Friends (or Quakers). The meetings I attended were unprogrammed, which means Friends sit in silent reflection receptive to spiritual prompting. Some people refer to this as “silent worship” or a “silent meeting.” This isn’t really accurate, as Friends are expected to speak when moved to do so. Nonetheless, some people would remark on how wonderful some meetings were when they remained especially quiet. On other occasions, some attendees would complain of being distracted by the sounds of people speaking, children, animals, neighbors mowing lawns, airplanes passing overhead, and on and on.

I always thought that if I were to sit in silent reflection, it meant that I would not make any noise, not that I wouldn’t hear any. If I understand correctly, Quakers have the idea that God is in everything and everyone. For me, listening for God is just to listen to whatever happens to be in the universe. I never took it that God could distract me from God. The work is in the contemplation I am doing, not in finding silence.

The composer John Cage said that music never stops, only listening does. To help people listen, he composed a piece that was four minutes and 33 seconds of silence. During that piece, the audience listened to ambient sounds in the environment (or even the sounds of their own bodies). Cage said that anyone can do this at any time. It just takes an aesthetic attitude. He wasn’t trying to create four and a half minutes of silence. He was trying to create four and a half minutes of attention. Some people did not like being forced into a meditative state, but some people don’t like anything.

I suppose some of this depends on what one seeks when one seeks God. Spinoza described God as being infinite and eternal. God occupies every point in space and every moment in time. What, then, is not God? Everything in the universe must be God, and God must be everything in the universe. To believe anything else is to limit God’s presence and power. I think this is why Einstein said he believed in the God of Spinoza.

Everything you hear today is the voice of God. Everything you see is the presence of God. Keep your eyes and ears open, please.

My Actual Dream About Peter Singer

The following is an actual dream (nightmare) I had. As far as I know, it doesn’t mean anything. I have no idea why Peter Singer was in it, but I only wish him good health and safe travels.

I am crossing riotous waters on a suspended steel walking bridge composed of steel cables with metal planks bolted to them on either side. As I walk, a storm moves in quickly and pelts me with blinding rain that makes footing unsure. As my feet slip on the metal planks, the planks begin to come undone and slide off the cables. I am forced to cling to the cables and pull myself up onto the loading dock on the far side of the bridge.

As I take cover under an overhang on the dock, I see Peter Singer in a white cargo van on an elevated roadway or ramp of some kind. To my horror, he drives off the ramp and crashes nose first onto the concrete dock below. The van is badly mangled and I fear he is dead. I think to call 911 but realize my phone is in the van. Just then, he pops through the broken glass of the van like a jack-in-the-box and says, “Well, that was lucky!” in a comic fashion to the sound of laugh track laughter. Before I can feel any relief, he collapses and appears dead.

I walk to the nearest person (the dock seems crowded with rubberneckers now) and ask, “Did you call 911?” She says, “Well, HE won’t call!” [More laugh track.] Finally, I am overwhelmed and start to walk away. I hear a voice call after me, “I’m sorry. Did you know him?” [More laugh track.] I say, “No, but I’ve been reading his books for decades.” [Laugh track.]

The voice replies, “I know what you mean. It takes me a long time to get through a book, too.”

Silence.

Ebola and the ethics of international drug testing

Ebola has been around for nearly 40 years now, and until recently the public was unaware of any available treatments or treatments in development for the disease. In fact, there is no market incentive for pharmaceutical companies to develop treatments as most of its victims are too poor to buy medicines. If and when Ebola spreads to more affluent parts of the world, of course, pharmaceutical companies will adjust their research and development strategies.

As market incentives for development of treatments do not exist, it falls to governments to fund research into possible treatments and vaccines. As Marie-Paule Kieny, assistant director-general of the World Health Organization (WHO), pointed out, “If it hadn’t been for the investment of a few governments in the development of these drugs, we would be nowhere.” Much of the funding for research has come from the United States, not from humanitarian concerns for Africans but from domestic concerns. According to a Globe and Mail article by Geoffrey York, “most of the research on Ebola treatments has been financed by the U.S. government, often because of fears that the Ebola virus could be used as a form of bioterrorism.” Be that as it may, it is a relief to know that someone is working on treatment and prevention.

As the disease has occurred in Africa, you might expect that research on it should also occur in Africa, with robust drug trials conducted on an ongoing basis. Bioethicist Arthur Caplan says it is unreasonable to expect the research to happen in Africa. He wrote, “Privileged humans were always going to be the first ones to try it. ZMapp requires a lot of refrigeration and careful handling, plus close monitoring by experienced doctors and scientists—better to try it at a big urban hospital than in rural West Africa, where no such infrastructure exists.” ZMapp is the drug given to the Americans who contracted Ebola in Africa before being flown back to the US for treatment.

It might be possible for pharmaceutical companies to build such infrastructure, but Caplan nicely encapsulates the real reason research does not happen in Africa: “Drugs based on monoclonal antibodies usually cost a lot—at least tens of thousands of dollars. This is obviously far more than poor people in poor nations can afford to pay; and a tiny company won’t enthusiastically give away its small supply of drug for free.” Enthusiastically give away? No, they won’t even develop the drug in the first place.

Now that an experimental treatment (ZMapp) does exist, should it be tested on Africans? Bioethicist George Annas says, “If the drugs we are currently working on have been shown to be reasonably safe, and if there is realistic and robust African review and individual informed, voluntary consent, use of American-developed drugs in Africa could be justified.” Annas is here emphasizing the protection of possible African research participants rather than explaining why only the privileged should receive the drug, and he has good reason.

It isn’t as though the lack of infrastructure in Africa has prevented drug trials from taking place there in the past, as you might imagine from the debate over Ebola drugs. In fact, testing has raised serious issues of exploitation in the past, as drugs were tested on vulnerable populations with no intention of ever providing those same populations with any treatments that might be developed. In 1994, the HIV drug AZT (zidovudine) was found, in a study known as AIDS Clinical Trials Group 076, to prevent transmission from HIV-positive mothers to their infants. The study was considered important in the development of drugs to treat AIDS, but there were no plans to provide AZT to the communities where it was tested once the clinical trials concluded. Research subjects in Africa bore the risks associated with taking experimental medications but would not see the benefits of the medications developed.

As there is no market incentive for pharmaceutical companies to develop treatments while protecting research subjects in vulnerable populations, it is up to governments to help promote treatments for unprofitable diseases. This has obviously happened to an extent, but we could, and should, do more. Philosopher Thomas Pogge has initiated a plan to help improve the situation. He has proposed a Health Impact Fund (HIF) that would provide a sort of artificial market incentive for companies to develop otherwise unprofitable treatments. Under the plan, governments would contribute to a fund that would then be distributed to pharmaceutical companies based on their ability to develop drugs that would have the greatest health impact. In order to receive payments from the HIF, companies would agree to provide treatments at cost anywhere in the world. I don’t know whether the Health Impact Fund will provide a solution to treating diseases that primarily affect the poor, but it certainly represents the kind of thinking required to address these serious issues.
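
To make the payout logic concrete, here is a minimal sketch of how a pay-for-impact pool might be divided. The proportional rule, the QALY-style impact scores, and the dollar figures are all my own assumptions for illustration; they are not the actual terms of Pogge’s proposal.

```python
# A toy sketch of a pay-for-impact fund (my illustration, not the actual
# Health Impact Fund rules): each registered drug earns a share of a fixed
# annual pool in proportion to its measured health impact.

def allocate_fund(pool: float, impacts: dict[str, float]) -> dict[str, float]:
    """Split `pool` among drugs in proportion to their impact scores
    (e.g., QALYs gained -- the metric here is an assumption)."""
    total = sum(impacts.values())
    if total == 0:
        return {drug: 0.0 for drug in impacts}
    return {drug: pool * impact / total for drug, impact in impacts.items()}

# Hypothetical numbers for illustration only.
payouts = allocate_fund(
    pool=6_000_000_000,  # a hypothetical $6B annual pool
    impacts={"ebola_treatment": 120_000, "malaria_vaccine": 480_000},
)
print(payouts)  # ebola_treatment: $1.2B, malaria_vaccine: $4.8B
```

The point of a rule like this is that revenue tracks measured health benefit rather than purchasers’ ability to pay, which is exactly the incentive the ordinary market fails to provide.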

If we are not motivated by the suffering of others in the world, and it appears many in affluent countries are not, we may do well to recognize that diseases do spread beyond all borders. Diseases that do not affect us today may well affect us tomorrow. The so-called “free” market is obviously not the solution, so we will do well to consider other options.

If correlation isn’t causation, what is?

If you get into any kind of discussion of a controversial topic these days, someone is likely to try to shut you down with a simple “correlation does not prove causation, stupid” kind of refrain. And they are correct that correlation cannot prove causation (a funny website named Spurious Correlations has gone viral). The rub, though, is that correlation cannot prove causation because causation is, shall we just say, difficult to prove. Further, if you want to find the cause of something (say X), you are really going to need to look for things that correlate with X.

For example, say a certain area of the world has begun to have earthquakes regularly when they were almost nonexistent before. If you find that a certain type of gas extraction had begun just before the increase in earthquakes, you might wonder whether this correlation might offer any hints into the cause of the earthquakes. If you could find no correlation between the earthquakes and anything else, you might begin to describe the earthquakes as mysterious and unexplained.

Why? Because correlation is the biggest hint of where causation might be found. The famous Scottish philosopher David Hume laid out some rules for judging causes and their effects. The third rule says there must be constant union between the cause and effect (and it is this correlation that “chiefly” constitutes the causal relation). In fact, it is the constant conjunction between like causes and like effects that reinforces or justifies our belief in causation itself. In other words, without correlation, we would have no reason to believe in causation at all.

Hume also points out that we can observe correlations, but we cannot observe causation. If you ask someone to describe an observation of causation, you will hear a story about a correlation. Because causation cannot be directly observed, our belief in causation cannot be verified. Our belief in causation comes from an instinct to believe in causation and not from any rational argument or proof of causation. Hume points out that even animals are born with a belief in causation, without the benefit of the rational ability to study philosophical or scientific arguments.

To be sure, once you’ve identified a correlation, you can begin the work of determining whether the supposed causes are actually responsible for the effects you’ve observed. You may not get proof, but you can get more and more evidence so that your belief in the cause is more and more justified. No matter how much evidence you get, though, you will have started with a correlation.

So, it is true that correlation does not prove causation, but a statistically significant correlation is a good place to begin your search for a cause. If you know of a way to find causation without first observing a correlation or to prove causation, please leave it in the comments.
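
For what it’s worth, here is a minimal sketch of what that starting point looks like in practice, using Python’s scipy library and made-up numbers for the gas-extraction example above; the data and the effect size are my own invention for illustration.

```python
# A minimal sketch: measuring a correlation as a *starting point* for
# causal inquiry, not as proof of causation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: monthly gas-extraction volume and earthquake counts,
# generated with a built-in association purely for illustration.
extraction = rng.normal(100, 10, size=60)
quakes = 0.05 * extraction + rng.normal(0, 1, size=60)

r, p = stats.pearsonr(extraction, quakes)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# A statistically significant r flags a candidate cause. It cannot, by
# itself, distinguish causation from coincidence or a common cause; that
# takes further work (controlled experiments, mechanisms, and so on).
```

Hume would add that even the p-value rests, in the end, on our instinctive confidence that constant conjunctions will continue to hold.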

Is there a wrong way to grieve?

Over the past few months, I’ve written of several philosophers of the ancient past who taught that grief should not overwhelm us before themselves becoming overwhelmed by grief. Stoic philosophers taught that we should understand that death is nothing to fear or mourn, if only we can have the proper understanding, but the emotion of grief trumps rational explanations every time. I would conclude, then, that we should not attempt to suppress or diminish our grief but should let it unfold naturally and grieve for as long as necessary. Criticizing the grief of others seems counterproductive at best.

But this left me wondering whether there is a wrong way to grieve. What obligations can the bereaved have to others? Obligations to the dead? Does grief suspend normal obligations?

Like the rest of the world, I don’t know what caused Spc. Ivan Lopez to go on a shooting rampage at Ft. Hood. He certainly had experienced a great deal of stress in his life and had good reason to experience problems with mental health. According to a CNN article by Ray Sanchez, Lopez’s father said the recent deaths of his mother and grandmother, medical treatment, and changes related to transfer of military installations “surely affected his condition.” Grief often becomes unmanageable when it is combined with other complications, obstacles, and challenges. We do well not to ignore the impact of grief on those around us. We are part of a community, and the health of the community depends in part on how well we respond to grief.

For an example from fiction, I’m reminded of “A Rose for Emily” by William Faulkner. Emily has much to grieve for: When she loses her father, she loses a loved one but also status, wealth, predictability, and honor. She responds by simply refusing to acknowledge her loss. In the beginning she denies that her father is even dead. Eventually, she relents and permits him to be buried, but continues her life as if nothing has changed. Her neighbors go along out of pity, not respect. As you probably remember, Emily eventually takes a lover from out of town, kills him, and sleeps with his body for the rest of her life.

Emily’s neighbors had tried to offer condolences to her when her father died, but she denied his death. After his death, the neighbors reacted to her with a mix of compassion, respect, suspicion, and disgust, but they also lacked the will to intervene as Emily continually pushed them away. They left Emily with her privacy and, as much as possible, a little dignity, which only led her to more extreme and destructive measures.

If I say that Emily grieved unethically, you may say that grieving wasn’t the core problem; rather, she was refusing to accept change. But grief is always a reaction to change, and all change is annihilation. The bereaved will often say the whole world changed, and that is exactly what has happened. Emily’s world changed, but she refused to accept either her father’s death or her change in fortune. By killing her lover, she tried to preserve a moment forever. Emily’s response to grief was understandable but not excusable. Then again, perhaps her neighbors did not respond ethically to Emily’s grief. The neighbors did reach out to Emily, even with follow-up visits, but failed to intervene more forcefully. Were they obligated to take matters into their own hands?

I recently had the opportunity to hear author Cheryl Strayed speak on her latest book, Wild, which is about Strayed’s own response to her mother’s death. Strayed is a talented and courageous writer and proficient speaker. As she talked about her grief journey, she only lost her composure once. She said that after her mother’s death she became the kind of daughter her mother would not have wanted her to be. She described her adultery, promiscuity, and substance abuse through tears that evaporated as she moved on to discuss how she began to manage her grief more positively (ethically?).

I ask whether there is an ethical way to grieve. We can see that people, overcome by grief, behave in ways that are certainly unethical in most contexts, but we may have such compassion for the bereaved that we soften our judgment of them. “What she did was wrong,” we may say, “but I can see why she did it. I might have reacted the same way.” But this may be true anytime someone acts unethically. In the exact same situation, I may have acted as Bernie Madoff acted. In fact, we have all acted in unethical ways. We had our reasons (grief, exhaustion, addiction, depression, or whatever), but our actions were unethical.

So what helps people behave more ethically? Jean-Paul Sartre, the famous Existentialist philosopher, says that with each of our actions we choose “the good.” He doesn’t mean we always make good choices, but given our options, we choose the one we thought was best, which means we write our ethical values for public view by the actions we choose. In this environment, other people become our hell. Nothing is more damaging to us than being trapped by others’ perceptions of us.

When we choose an action, we are choosing the one that seems best to us at the time. The problem is that some of us have run out of good ideas for what to do. We often explain ourselves, rightly, by saying, “I didn’t know what to do!” If we had more ideas, we would have more choices and could make better decisions. Sartre claimed we have absolute freedom, but really we can increase our freedom by increasing the number of actions we have in our consciousness. Sartre saw others as our judge, jury, and executioner, but they can also become our community.

It is Sartre’s companion and lover who had a broader vision for existentialist ethics. Simone de Beauvoir was able to see the positive importance of others in our lives. Beauvoir declares “freedom can be achieved only through the freedom of others.” If we want to be free, we must seek our freedom through the freedom of our community, and our freedom grows out of our love. Beauvoir says, “If we do not love life on our own account and through others, it is futile to seek to justify it in any way.” Without valuing others, our life truly loses meaning, and we will lose all hope.

When I was in China, I once thanked someone for helping me with a problem, and she responded, beautifully, “When we help each other, we are free.” Indeed, it is the only way for us to become free. And it is the only way for us to have more good ideas of what we can do.

Why I hate Steak and BJ Day

On March 14, I learned of a new holiday known as Steak and BJ Day. A humorous response to Valentine’s Day, it rests on the idea that women get all the attention on Valentine’s Day (men spend about twice as much as women) and that there should be a day for men to get what they enjoy, which is, according to the creators and celebrants of this day, steaks and blow jobs. It’s just a joke. It’s all in fun. If you don’t like it, don’t participate.

Many women seem to feel this is a fair way to compensate men for being so generous on Valentine’s Day, apparently having no qualms describing their romantic relationships as blatant prostitution. (“After all the trouble he went to for Valentine’s Day, I owe him something. Teehee.”) If people want to live their lives exchanging gifts for sexual favors and cooking services, I have no problem with it, so long as everyone knows what is going on and feels comfortable commodifying relationships. I have a different problem with this holiday.

Steak and BJ Day is based on a crude masculine stereotype that is inoffensive only to men who live for their next steak and treat of oral sexual gratification. All men are supposed to want this. Any man who doesn’t love and know how to prepare steak, in fact, should turn in his man card, according to this website. Again, it is just a joke. If you don’t love steak, you are just a girl. Hilarious. I mean, who would want to be a girl? It isn’t meant to offend anyone. Any man who objects to this stereotype is himself at risk of being told he is too sensitive or not a “real man” or a “typical man.” People who are less kind will tell him he is a sissy, wimp, girl, or any number of nastier anti-gay slurs.

So, men who don’t want these things should turn in their man cards (see this site for an uproariously funny rendition of this). “Turn in your man card” is the functional equivalent of “you throw like a girl.” As much as people insist this is all just a joke, the consequences of masculine stereotypes are severe. Children who fail to express their gender in expected ways are more likely to be bullied and abused and to suffer from depression and PTSD (see a study on the risk here). You may have heard what happened to a boy who liked My Little Pony. Further, anti-gay attacks are typically in reaction not to sexual activity but to perceived non-conformity to gender stereotypes (a 1982 study by Joseph Harry found that “effeminate” men are twice as likely to be victims of gay bashing as gender-conforming men), which means gay-bashing victims include many heterosexuals or children with no obvious sexual orientation or identity at all.

This bias against unmanly men is nothing new. Through an essay by Elizabeth V. Spelman, I found a passage in Plato’s Republic describing what kinds of men would be inappropriate for a decent society:

We will not then allow our charges, whom we expect to prove good men, being men, to play the parts of women and imitate a woman young or old wrangling with her husband, defying heaven, loudly boasting, fortunate in her own conceit, or involved in misfortune and possessed by grief and lamentation—still less a woman that is sick, in love, or in labor.

People sometimes want to credit Plato with an early form of feminism, because he felt women should be trained in the mode of men. Like many today, he felt it was quite admirable for women to strive to “achieve” masculine traits. Men being the highest form of human perfection, Plato thought it made sense for women to strive for the masculine ideal. The man who would follow the lead of women, however, would be lowering himself below his station and be pathetic at best. His view persists as we encourage girls in sports, mathematics, and leadership, but forbid boys from nurturing, crying, creativity, and careers related to care and empathy. It seems odd to me that eating meat is considered particularly masculine, but vegetarian men are portrayed as being the least manly of all. The hatred and devaluation of “feminine” men is an extension of the oppression of women. Feminist philosopher Jean Grimshaw points out that the conception of a feminine ideal depends on “the sort of polarization between ‘masculine’ and ‘feminine’ which has itself been so closely related to the subordination of women.”

The hatred of “effeminate” men is an extension of the devaluing of the feminine, but it leads to violence and oppression of both men and women. In order to be free, we must assign equal value to all human activities and emotional dispositions. Leadership and assertiveness have their value, but we will not last long in a society devoid of nurturing, care, and concern. Another feminist philosopher, Genevieve Lloyd, puts it this way:

If the full range of human activities–both the nurturing tasks traditionally associated with the private domain and the activities which have hitherto occupied public space–were freely available to all, the exploration of sexual difference would be less fraught with the dangers of perpetuating norms and stereotypes that mutilated men and women alike.

I added the emphasis on the word “mutilated,” because I am grateful to her for using such strong language to describe accurately what sexist stereotypes have done to us. I often hear women struggle to describe how sexism hurts men. Some say it discourages men from working hard or from caring for others, but they miss the fact that sexism destroys men from the inside out. Very few men escape childhood without having their masculinity questioned and challenged. And too many men have responded violently to a woman who has taunted them with, “If you were a real man, you’d . . . !” The constant demand that a boy or man prove his resilience, indifference to pain and fear, and lack of compassion rends men from their humanity. Those who resist are often trampled underfoot and left with depression, addiction, anxiety, and self-loathing. Too often, it ends in self-destruction through addiction, isolation, or suicide.

You may be thinking I take things a little too seriously. No one would kill himself over Steak and BJ Day. I agree, but I am asking you to consider the good of masculine stereotypes, and I tell you they serve no purpose and provide no benefit. The cumulative effect of such stereotypes is to prevent men from being whole and to destroy those who are uninterested or unable to fulfill the social expectations such stereotypes are designed to enforce.

For the love of humanity, please free us all.

See also: Why I Hate Valentine’s Day

Sunshine disinfects nothing

I seem to remember Jon Stewart once playing a clip of a politician declaring that sunshine is the best disinfectant. After the clip, Stewart warned viewers that using sunshine as a disinfectant could lead to a nasty infection. In response to the Sunshine (Open Payments) Act, bioethicist Mark Wilson sounds a similar alarm in a recent paper.

For years, many people, including me, have argued that industry payments to physicians should be disclosed to the public, so that we will all be aware of possible financial conflicts of interest (FCOI). My hope was that disclosing conflicts of interest might actually help reduce corruption or even simple bias in medical practice, but Wilson points to our experience of Wall Street before and after the 2008 financial collapse to show that knowledge of conflicts of interest does not prevent them. Rather, disclosure only shifts the burden for reducing FCOI to patients, who are least empowered to eliminate them. Wilson claims that, rather than fixing the problem, the Sunshine Act only “mythologizes transparency.”

Wilson pointed me to a paper (“Tripartite Conflicts of Interest and High Stakes Patent Extensions in the DSM-5”) in Psychotherapy and Psychosomatics that illustrates the problem. If you want the details, you can read the paper yourself, but I will skip right to the conclusion, which I admit is how I read most papers anyway:

[I]t is critical that the APA recognize that transparency alone is an insufficient response for mitigating implicit bias in diagnostic and treatment decision-making. Specifically, and in keeping with the Institute of Medicine’s most recent standards, we recommend that DSM panel members be free of FCOI.

Telling people about FCOI does not reduce bias and corruption; it only offers an opportunity for people to be aware that bias and corruption exist. I think it is valuable that the Sunshine Act is making people aware of FCOI. In response, though, I hope we will take steps to reduce FCOI. Unfortunately, the burden is indeed shifted to voters and consumers. The most disturbing and obviously true statement Wilson makes in his paper is this: “Until politicians end their own commercial COIs, the Sunshine Act will likely remain the governance order of the day.”

We can’t hope the experts will solve this problem. We must demand that FCOI be eliminated.

What scientism means to me

I’ve been reading many posts on scientism lately. Some have been from well-known academics, and some have been from lesser-known but equally astute members of my social-networking circle. Some seem to equate scientism with atheism, some equate it with a reasoned approach to the world, and some equate it with pure evil, apparently.

I don’t know what definition is correct, but I view scientism as the belief that science is not only the best way to gain information about the world but also the best way to make meaning in the world. As a humanist, I reject scientism because I believe we can and should turn to philosophy, literature, religion, art, music and other forms of human introspection and expression to make meaning in our lives. This does not mean I reject the idea that science is the best way to learn facts (disputable as they may be) about the world.

In other words, I think climate scientists are the best qualified individuals to give information about whether the climate is changing and what is causing it. I don’t think I should challenge scientists because I don’t “feel” like they are correct. Opinions are not all equal. Informed opinions are of greater value than uninformed opinions any day.

Similarly, believing that religions can help us make meaning in our lives does not mean that scientific information regarding evolution is invalid. Science as an endeavor does not encroach upon religion. It is only when religious dogma makes scientific claims that conflict arises between the two discrete domains of knowledge. Some people in science may occasionally make a claim about religion, citing their authority as a scientist, that runs into conflict with religion and creates controversy as well, but I really think that most scientists simply do their best to report the best information they can glean from available evidence with the hope of improving life for all of humanity.

I’m not sure, but I suspect this has all come to a head because of recent controversies over evolution and climate change. Folks on the left have accused those on the right of being “anti-science” because they reject the findings of scientists in these two areas. Many on the right took this as an attack on religion for some reason that I don’t understand, but there you have it. What would we call the view that religion is the only way to find information about the world? Religionism?

Anyway, in response to the left’s accusations of an anti-science bias on the right, some on the right have accused the left of being anti-science because they don’t like genetically modified foods or vaccinations or something. Never mind that many who oppose GMOs and vaccinations are either conservatives or libertarians; it is true that some people on the left do not approach the world with scientific rigor.

And somehow this has all resulted in people tossing the word “scientism” around like a new hacky-sack. If someone says you are anti-science, you can just say that they are guilty of “scientism.” And, once someone throws that label at you, it is hard to shake it off. So, you either accept the label, ignore the situation completely, or fire back a volley of counter-attacks.

In Steven Pinker‘s response to such an attack, he embraced scientism in a positive sense by simply recounting all the successes of scientific reasoning. Of course, in response to an accusation of scientism, he basically says humanists should embrace scientism and accept that only scientists can save the humanities from extinction. He said, “A consilience with science offers the humanities countless possibilities for innovation in understanding.” He then inadvertently points out the risk of doing so, saying, “In some disciplines, this consilience is a fait accompli. Archeology has grown from a branch of art history to a high-tech science.” In other words, we should all accept how the infusion of science can improve our disciplines by destroying them.

Pinker mentions that philosophy has benefited from collaborations with cognitive scientists, and interesting and productive work has certainly been done in philosophy around cognitive science, but western philosophers have been involved in scientific theory and method from the beginning. Early on, philosophers and scientists were essentially the same people, but even later philosophers sought both to influence scientific method and to apply scientific method to philosophy. In the twentieth century, the drive to conduct philosophy with the rigor of science led it to a level of obscurity that almost destroyed any hope of philosophers reaching any kind of popular audience.

In the twenty-first century, this movement continues, but with a somewhat different focus, under the banner of “experimental philosophy.” In this scientific approach to philosophy, philosophers actually gather data to analyze and test their philosophical assumptions. Kwame Anthony Appiah summarizes the problem with this approach quite succinctly: “You can conduct more research to try to clarify matters, but you’re left having to interpret the findings; they don’t interpret themselves. There always comes a point where the clipboards and questionnaires and M.R.I. scans have to be put aside.” When all is said and done, data must be interpreted, and interpretation has always been the forte of philosophers, so, as Appiah suggests, we must return to the armchair for the hard work of hard thinking.

But how do philosophers reach beyond their small circle of professional philosophers to a more popular audience? Philosophers achieve this when they write on matters that intersect with the daily lives of non-philosophers. Appiah is an excellent example of someone who is able to engage the public on matters of moral concern to anyone who happens to be alive on this planet. As a public intellectual, he comments on how we think, how we converse, and how we interact with one another. This ability has taken him out of obscurity and into the public domain.

But the least obscure living philosopher in the world must be Peter Singer. Singer writes on issues that affect our daily lives (what we eat, what we do with our money, how we preserve life), and he creates great controversy in the process. Whether you think he is skilled as a philosopher or not, you cannot deny the scope of his reach. He, like Appiah, is helping us to interpret and determine exactly what value we place on life and exactly what we consider a good life to be.

Neither Appiah nor Singer is anti-science, but both know that a philosopher’s skill lies in helping us examine what is meaningful and valuable to our personal lives. They seem also to realize that science is unable to interpret and analyze human values. No, it is the humanities that enable us to envision a meaningful and rewarding existence. Scientific advances make a constant re-examination and re-evaluation necessary, and the humanities help guide us down that path. The idea that the humanities have nothing to add to this journey toward meaning and value is what I call “scientism.” Scientists and humanists can both be guilty of scientism.

And scientists and humanists can both engage in a search for meaning that reaches beyond data.

Do all ethicists have a messiah complex?

Last May, Nathan Emmerich wrote a column warning that bioethicists must not become a “priestly caste.” He argues that giving bioethicists moral authority over all practices in medicine and healthcare will have an anti-democratic effect and hinder public discourse.

He may have overstated the authority that bioethicists generally have, but it is true that some see their job as handing down judgment on various practices in medicine and research while others, frankly, would be happier to just accept the opinion of “experts” in order to avoid having to take full responsibility for their ethical decisions. The ethical expert has arisen because of rising demand. After making a thorny decision, who would not want to be able to say, “My decision was reviewed and approved by experts in ethics”?

Ethicists will do well to resist a priestly role. If you begin to believe that something is morally correct simply because you believe or say that it is, then you should apply for sainthood, not a position as an ethics consultant. When Euthyphro is asked if he knows he is doing the right thing, he replies, “The best of Euthyphro, and that which distinguishes him, Socrates, from other men, is his exact knowledge of all such matters. What should I be good for without it?” Euthyphro considers himself an expert on matters of morality and dismisses any suggestion that his opinions might be challenged. As he attempts to explain himself, his logic breaks down. Ethicists as experts would do well to open themselves to challenges from all corners, as Emmerich suggests.

All this is further complicated, though, by Eric Schwitzgebel’s finding that ethicists are no more ethical than non-ethicists. Comparing ethicists and other professors, Schwitzgebel and his colleague, Joshua Rust, found that both ethicists and their colleagues reported that the ethicists were no more ethical than their colleagues. This is not terribly surprising. I may think I am a pretty ethical person but not be willing to say my colleagues in metaphysics are a bunch of thieves and charlatans. By the same token, they may think I am pretty ethical but have enough self-respect not to sell themselves short.

Of further harm to the reputation of ethicists, Schwitzgebel says ethics courses do not appear to have much effect on the ethical behavior of students. He notes that many of us who teach ethics do not claim that it will make our students behave more ethically. This is probably true in most philosophy departments, but ethics courses in law schools and business schools, for example, are designed to prevent unethical behavior down the road.

It isn’t likely that any type of ethics course can cause an unethical person to become more ethical, but courses can have an effect on ethical behavior. Courses in specific disciplines can provide a framework for codes of behavior in a particular field such as law, business, psychotherapy, or medicine. Through such courses, students can become well versed in expected norms as well as actual regulations from laws or professional codes of behavior. In addition, students can learn to examine cases and apply accepted principles of their fields to various situations they may encounter during their careers.

Theoretical courses give students a larger ethical toolbox to examine conflicts that arise in their careers and also in their daily lives. Few ethics professors have had students say that, thanks to the ethics class, they have stopped lying and cheating, but most of us have had students tell us that they now see questions in a new light. Rather than simply relying on instinct or prior teaching, students learn new ways to frame ethical problems and new approaches for identifying possible ethical harm. If nothing else, we give the students who are already ethical a greater vocabulary for articulating their actions and beliefs.

With any luck, ethicists, ethics instructors, and students will all leave the class with a bit of humility. The ethicist who believes his or her own hype as a moral authority has passed into dangerous territory. At best, the ethicist has the tools to examine ethical problems with greater detail and nuance. In the end, people eventually have to act, and a thorough ethical analysis can help guide them.

But ethics courses have a greater importance. Imagine a society where no one ever studied or discussed ethical theory or ethical decisions. It is impossible to imagine such a society, I think, because we do have to make decisions, and that requires thinking about them in detail. Some people would always rely on their “gut feeling,” but others would worry and ponder and ruminate. And they might seek the counsel of others who have spent time worrying and pondering and ruminating. And soon we would see the rise of a priestly caste and a separate group of committed but imperfect thinkers devoted to analyzing ethics in both theory and practice. We would make many mistakes, and many people would be hurt, but at least we would be trying.

At least we are trying.