Will industry-funded research kill you?
Last week, I wrote a blog post about the effects of financial conflicts of interest (FCOI) on doctors’ treatment decisions and whether disclosure alone will have any effect on eliminating bias and corruption. In response, I received some comments and information on FCOI in published research.
Before I say more, I would like to clarify that someone who is conducting research funded by industry is not technically, in my studied opinion, involved in an FCOI, because such a person has the single interest of generating products that will result in profit for industry. It is possible that research undertaken with the aim of commercial success will benefit humanity, but if profit is not possible, humanity be damned. (I am making an assumption, which may be naive, that most of us think medical research should be aimed at making life better for humanity.)
To help combat the problem of bias in research, John Henry Noble suggests prison time for those found guilty of scientific fraud. In my opinion, he makes two strong claims: 1. “The false claims of the perpetrators rise to the status of crime against society, insofar as they endanger public health by sullying and misdirecting the physician’s ‘standard of care.’” 2. “The due process of law is likely to uncover and judge the evidence of guilt or innocence more reliably and fairly than will the institutions of science and the professions that historically have resisted taking decisive action against the perpetrators.”
I agree that jail time is appropriate for egregious cases of scientific fraud, but I’m not sure it eliminates the problem of industry-driven research. Another person told me industry-funded research should be published for two reasons: 1. Some people are biased without the benefit of industry funding. 2. Some industry-funded research proves to be quite beneficial. Perhaps surprisingly, I agree with both of these statements as well, as far as they go. Certainly, many people carry any number of biases that do not result from corporate funding, and the history of scientific fraud is littered with examples. Further, corporate labs frequently create products I enjoy immensely.
Oddly enough, the person defending industry-funded research sent me a link to a paper to support the contention that FCOIs are not a strong predictor of bias. I say it is odd because the paper didn’t seem to support that position. The paper analyzed the association between industry funding and the likelihood that researchers would find an association between sweetened beverages and obesity. The authors of the paper found that “Those reviews with conflicts of interest were five times more likely to present a conclusion of no positive association than those without them.” It is perhaps the conclusion of the paper that gives hope to those advocating for industry funding:
They [results of the study] do not imply that industry sponsorship of nutrition research should be avoided entirely. Rather, as in other research areas, clear guidelines and principles (for example, sponsors should sign contracts that state that they will not be involved in the interpretation of results) need to be established to avoid dangerous conflicts of interest.
In other words, it would reduce bias if sponsored researchers were limited to collecting data without analyzing it. This is hardly a ringing endorsement of industry-funded research, but so be it.
So, I do not think all industry-funded research should be banned. Rather, I think we (as a society) need to ensure that we have ample researchers who are free of FCOIs. In other words, we need substantial funding for independent research centers where researchers can work for the advancement of knowledge without a constant concern for the production of profit. Forcing our public universities and research labs to turn to corporations for funding corrupts our pursuit of knowledge and the advancement of society. We must restore public funding to education and research.
For more on the possible risks of funded research, read about Dan Markingson here. Or read about Jesse Gelsinger here.
Sunshine disinfects nothing
I seem to remember Jon Stewart once playing a clip of a politician declaring that sunshine is the best disinfectant. After the clip, Stewart warned viewers that using sunshine as a disinfectant could lead to a nasty infection. In response to the Sunshine (Open Payments) Act, bioethicist Mark Wilson sounds a similar alarm in a recent paper.
For years, many people, including myself, have argued that industry payments to physicians should be disclosed to the public, so that we will all be aware of possible financial conflicts of interest (FCOI). My hope was that disclosing conflicts of interest might help actually reduce corruption or even simple bias in medical practice, but Wilson points to our experience of Wall Street before and after the 2008 financial collapse to show that knowledge of conflicts of interest does not prevent them. Rather, disclosure only shifts the burden for reducing FCOI to patients, who are least empowered to eliminate them. Rather than fixing the problem, Wilson claims the Sunshine Act only “mythologizes transparency.”
Wilson pointed me to a paper (“Tripartite Conflicts of Interest and High Stakes Patent Extensions in the DSM-5”) in Psychotherapy and Psychosomatics that illustrates the problem. If you want the details, you can read the paper yourself, but I will skip right to the conclusion, which I admit is how I read most papers anyway:
[I]t is critical that the APA recognize that transparency alone is an insufficient response for mitigating implicit bias in diagnostic and treatment decision-making. Specifically, and in keeping with the Institute of Medicine’s most recent standards, we recommend that DSM panel members be free of FCOI.
Telling people about FCOI does not reduce bias and corruption; it only offers an opportunity for people to be aware that bias and corruption exist. I think it is valuable that the Sunshine Act is making people aware of FCOI. In response, though, I hope we will take steps to reduce FCOI. Unfortunately, the burden is indeed shifted to voters and consumers. The most disturbing and obviously true statement Wilson makes in his paper is this: “Until politicians end their own commercial COIs, the Sunshine Act will likely remain the governance order of the day.”
We can’t hope the experts will solve this problem. We must demand that FCOI are eliminated.
What scientism means to me
I’ve been reading many posts on scientism lately. Some have been from well-known academics, and some have been from less-known but equally astute members of my social-networking circle. Some seem to equate scientism with atheism, some equate it with a reasoned approach to the world, and some equate it with pure evil, apparently.
I don’t know what definition is correct, but I view scientism as the belief that science is not only the best way to gain information about the world but also the best way to make meaning in the world. As a humanist, I reject scientism because I believe we can and should turn to philosophy, literature, religion, art, music and other forms of human introspection and expression to make meaning in our lives. This does not mean I reject the idea that science is the best way to learn facts (disputable as they may be) about the world.
In other words, I think climate scientists are the best qualified individuals to give information about whether the climate is changing and what is causing it. I don’t think I should challenge scientists because I don’t “feel” like they are correct. Opinions are not all equal. Informed opinions are of greater value than uninformed opinions any day.
Similarly, believing that religions can help us find or make meaning in our lives does not mean that scientific information regarding evolution is invalid. Science as an endeavor does not encroach upon religion. It is only when religious dogma makes scientific claims that conflict arises between the two discrete domains of knowledge. Some scientists may occasionally make a claim about religion, citing their authority as scientists, that runs into conflict with religious belief and creates controversy as well, but I really think that most scientists simply do their best to report the best information they can glean from available evidence with the hope of improving life for all of humanity.
I’m not sure, but I suspect this has all come to a head because of recent controversies over evolution and climate change. Folks on the left have accused those on the right of being “anti-science” because they reject the findings of scientists in these two areas. Many on the right took this as an attack on religion for some reason that I don’t understand, but there you have it. What would we call the view that religion is the only way to find information about the world? Religionism?
Anyway, in response to the left’s accusations of an anti-science bias on the right, some on the right have accused the left of being anti-science because they don’t like genetically-modified foods or vaccinations or something. Never mind that many who oppose GMOs and vaccinations are either conservatives or libertarians, it is true that some people on the left do not approach the world with scientific rigor.
And somehow this has all resulted in people tossing the word “scientism” around like a new hacky-sack. If someone says you are anti-science, you can just say that they are guilty of “scientism.” And, once someone throws that label at you, it is hard to shake it off. So, you either accept the label, ignore the situation completely, or fire back a volley of counter-attacks.
In Steven Pinker’s response to such an attack, he embraced scientism in a positive sense by simply recounting the successes of scientific reasoning. Of course, in response to an accusation of scientism, he basically says humanists should embrace scientism and accept that only scientists can save the humanities from extinction. He said, “A consilience with science offers the humanities countless possibilities for innovation in understanding.” He then inadvertently points out the risk of doing so, saying, “In some disciplines, this consilience is a fait accompli. Archeology has grown from a branch of art history to a high-tech science.” In other words, we should all accept how the infusion of science can improve our disciplines by destroying them.
Pinker mentions that philosophy has benefited from collaborations with cognitive scientists, and interesting and productive work has certainly been done in philosophy around cognitive science, but western philosophers have been involved in scientific theory and method from the beginning. Early on, philosophers and scientists were essentially the same people, but even later philosophers sought both to influence scientific method and to apply scientific method to philosophy. In the twentieth century, the drive to conduct philosophy with the rigor of science led it to a level of obscurity that almost destroyed any hope of philosophers reaching any kind of popular audience.
In the twenty-first century, this movement continues, but with a somewhat different focus, under the banner of “experimental philosophy.” In this scientific approach to philosophy, philosophers actually gather data to analyze and test their philosophical assumptions. Kwame Anthony Appiah summarizes the problem with this approach quite succinctly: “You can conduct more research to try to clarify matters, but you’re left having to interpret the findings; they don’t interpret themselves. There always comes a point where the clipboards and questionnaires and M.R.I. scans have to be put aside.” When all is said and done, data must be interpreted, and interpretation has always been the forte of philosophers, so, as Appiah suggests, we must return to the armchair for the hard work of hard thinking.
But how do philosophers reach beyond their small circle of professional philosophers to a more popular audience? Philosophers achieve this when they write on matters that intersect with the daily lives of non-philosophers. Appiah is an excellent example of someone who is able to engage the public on matters of moral concern to anyone who happens to be alive on this planet. As a public intellectual, he comments on how we think, how we converse, and how we interact with one another. This ability has taken him out of obscurity and into the public domain.
But the least obscure living philosopher in the world must be Peter Singer. Singer writes on issues that affect our daily lives (what we eat, what we do with our money, how we preserve life), and he creates great controversy in the process. Whether you think he is skilled as a philosopher or not, you cannot deny the scope of his reach. He, like Appiah, is helping us to interpret and determine exactly what value we place on life and exactly what we consider a good life to be.
Neither Appiah nor Singer is anti-science, but both know that a philosopher’s skill lies in helping us examine what is meaningful and valuable to our personal lives. They seem also to realize that science is unable to interpret and analyze human values. No, it is the humanities that enable us to envision a meaningful and rewarding existence. Scientific advances make a constant re-examination and re-evaluation necessary, and the humanities help guide us down that path. The idea that the humanities have nothing to add to this journey toward meaning and value is what I call “scientism.” Scientists and humanists can both be guilty of scientism.
And scientists and humanists can both engage in a search for meaning that reaches beyond data.
You shouldn’t have to go to jail for mental health treatment
Last week I tweeted a link to a Texas Observer article by Emily DePrang about sexual assaults in Harris County jails. DePrang had written about two Bureau of Justice Statistics studies showing that the Harris County Jail on Baker Street had rates of sexual assault higher than national averages.
One survey measured rates of sexual victimization as reported by inmates and found higher-than-average rates of victimization by other inmates. The other survey was based on official reports of sexual violence in jails and also showed higher-than-average rates for the Baker Street jail. In her short post, DePrang did not discuss all the statistical and methodological limitations of the studies in question.
To my surprise, Alan Bernstein, the director for public affairs at the sheriff’s office tweeted me, saying he hoped someone would fact-check DePrang’s article as it had many mistakes, so I asked him what the mistakes were, and he sent me a list of items he felt were misleading. Later, the Texas Observer agreed to publish his response to the article (his published response was slightly different from what he sent me).
For the most part, his response pointed out the limitations of the study. Also, he noted that only one of four jails in Harris County had a higher incidence of sexual assault, and he also noted that jail had a high percentage of inmates who are under treatment for mental illness. In his note to me, Bernstein asked, “Is touching a clothed inmate’s thigh sexual violence? Maybe so. But this is one of the actions considered sexual victimization in the study.” I will just say that I consider any unwanted touching of my upper thigh over or under clothing to be sexual assault, even if the “violence” seems minor.
In trying to separate the signal from noise, though, what interested me most was not the definition of sexual violence or even the limitations of the study but the fact that the jail had so many inmates on medications. The Houston Chronicle quoted Sheriff Adrian Garcia saying, “The Harris County Jail has been referred to as the largest psychiatric facility in the state of Texas” and “More than 2,000 inmates … are on psychotropic medications on a daily basis.” And in Bernstein’s response, posted on the Texas Observer site, he said:
That building houses the jail system’s inmates with acute mental illness. In fact the statistician who worked on the 2011 study tells us that two-thirds of the surveyed inmates in the so-called “high” rate building had “psychological stress disorders.” We don’t know how that was determined, and we would never allege that people with mental illness fabricate allegations more often than anyone else.
I’m not sure what “acute” means in this context, but I suspect anyone on medication is assumed to have an acute mental illness. Given the number of prescriptions written for antidepressants and anti-anxiety medications these days, I suspect a fairly high percentage of the general population is acutely mentally ill, according to these assumptions. Even someone being treated for mild depression, though, will experience unpleasant side-effects if doses are missed, as they are likely to be missed inside a jail. We should be concerned both about lack of treatment for mental health and the over-prescription of drugs for depression and anxiety. Withdrawal sometimes leads to aggressive behavior and could account for some problems. On the other hand, mental illness is also stigmatized, and those receiving treatment may become targets for abuse at the hands of other inmates.
Fortunately, I found more information on treatment of the mentally ill in Harris County jails in an excellent article by DePrang titled “Barred Care.” According to the article, the jail “treats more psychiatric patients than all 10 of Texas’ state-run public mental hospitals combined.” And why is that? Because no one else is treating those patients. Again from the article: “Harris County has one of the most underfunded public mental health systems in a state that consistently ranks last, or almost last, in per capita mental health spending.” Some people get so desperate for relief that they break the law just so they can go to jail and get treatment.
The program in the jail is commendable. The funding priorities of our state government are not. In 2003, the Texas legislature slashed funding for mental health services in Texas. According to DePrang’s article, “In Harris County, the number of law enforcement calls about people in psychiatric crisis jumped from fewer than 11,000 in 2003 to more than 27,000 in 2012.” So, the Harris County jail has a high number of mentally ill as a result of deliberate action of our state’s lawmakers. This should make us all angry. Cutting funding for mental health services only to force the mentally ill into jails is cruel and expensive. No matter what sends people to jail, many will never really recover from the stigma and the trauma of the experience.
What should be done? We should lobby our lawmakers to restore funding for mental health services in Texas. We should stop blaming the mentally ill for their problems. We should resist the temptation to treat even minor difficulties with powerful and addictive drugs. We should insist that Texas expand Medicaid as part of the Affordable Care Act (this would cost the state nothing) so that people can receive basic medical care and avoid crisis.
In short, we should learn to heal each other. The person with a mental health crisis tomorrow could be you.
Do all ethicists have a messiah complex?
Last May, Nathan Emmerich wrote a column warning that bioethicists must not become a “priestly caste.” In the column, he warns that giving bioethicists moral authority over all practices in medicine and healthcare will have an anti-democratic effect and hinder public discourse.
He may have overstated the authority that bioethicists generally have, but it is true that some see their job as handing down judgment on various practices in medicine and research while others, frankly, would be happier to just accept the opinion of “experts” in order to avoid having to take full responsibility for their ethical decisions. The ethical expert has arisen because of rising demand. After making a thorny decision, who would not want to be able to say, “My decision was reviewed and approved by experts in ethics”?
Ethicists will do well to resist a priestly role. If you begin to believe that something is morally correct simply because you believe or say that it is, then you should apply for sainthood, not a position as an ethics consultant. When Euthyphro is asked if he knows he is doing the right thing, he replies, “The best of Euthyphro, and that which distinguishes him, Socrates, from other men, is his exact knowledge of all such matters. What should I be good for without it?” Euthyphro considers himself an expert on matters of morality and dismisses any suggestion that his opinions might be challenged. As he attempts to explain himself, his logic breaks down. Ethicists as experts would do well to open themselves to challenges from all corners as Emmerich suggests.
All this is further complicated, though, by Eric Schwitzgebel’s finding that ethicists are no more ethical than non-ethicists. Comparing ethicists and other professors, Schwitzgebel and his colleague, Joshua Rust, found that both ethicists and their colleagues reported that the ethicists were no more ethical than their colleagues. This is not terribly surprising. I may think I am a pretty ethical person but not be willing to say my colleagues in metaphysics are a bunch of thieves and charlatans. By the same token, they may think I am pretty ethical but have enough self-respect not to sell themselves short.
Of further harm to the reputation of ethicists, Schwitzgebel says ethics courses do not appear to have much effect on the ethical behavior of students. He notes that many of us who teach ethics do not claim that it will make our students behave more ethically. This is probably true in most philosophy departments, but ethics courses in law schools and business schools, for example, are designed to prevent unethical behavior down the road.
It isn’t likely that any type of ethics course can cause an unethical person to become more ethical, but courses can have an effect on ethical behavior. Courses in specific disciplines can provide a framework for codes of behavior in a particular field such as law, business, psychotherapy, or medicine. Through such courses, students can become well versed in expected norms as well as actual regulations from laws or professional codes of behavior. In addition, students can learn to examine cases and apply accepted principles of their fields to various situations they may encounter during their careers.
Theoretical courses give students a larger ethical toolbox to examine conflicts that arise in their careers and also in their daily lives. Few ethics professors have had students say that, thanks to the ethics class, they have stopped lying and cheating, but most of us have had students tell us that they now see questions in a new light. Rather than simply relying on instinct or prior teaching, students learn new ways to frame ethical problems and new approaches for identifying possible ethical harm. If nothing else, we give the students who are already ethical a greater vocabulary for articulating their actions and beliefs.
With any luck, ethicists, ethics instructors, and students will all leave the class with a bit of humility. The ethicist who believes his or her own hype as a moral authority has passed into dangerous territory. At best, the ethicist has the tools to examine ethical problems with greater detail and nuance. In the end, people eventually have to act, and a thorough ethical analysis can help guide them.
But ethics courses have a greater importance. Imagine a society where no one ever studied or discussed ethical theory or ethical decisions. It is impossible to imagine such a society, I think, because we do have to make decisions, and that requires thinking about them in detail. Some people would always rely on their “gut feeling,” but others would worry and ponder and ruminate. And they might seek the counsel of others who have spent time worrying and pondering and ruminating. And soon we would see the rise of a priestly caste and a separate group of committed but imperfect thinkers devoted to analyzing ethics in both theory and practice. We would make many mistakes, and many people would be hurt, but at least we would be trying.
At least we are trying.
Why I Am Afraid To Die

My interest in the topic of this blog arose several years ago from a conversation with a scholar visiting from China. She had studied Christianity in China and was interested in meeting Christians in the United States and learning more about their beliefs and culture. She admitted to me that she felt some disappointment to learn that the promise of a blissful eternity did not seem to decrease the fear of death for most American Christians. If life is filled with pain and challenges, why would Christians not welcome a release into the joy of eternity?
Lucretius would not be surprised by their fear. He noted that those who boast of fearlessness in the face of death will react to death in pretty much the same way everyone else does.
Of course, we also know some turn to suicide, which may or may not reflect a loss of fear of death. It may only mean a fear of the misery of life has overtaken a fear of death, but I will return to that idea later.
On the other side, I can remember discussions with Christians describing the attitude of suicide bombers in armed conflict. I have heard at least a few people who equate a willingness to die for a cause with a lack of respect for the value of life rather than a lack of fear in the face of death. If we value our lives, must we fear death? Is there a greater moral advantage to reducing the fear of death or to emphasizing death as a loss of something of great value, life?

Epicurus, who inspired Lucretius, felt our lives would be enhanced if we could extinguish, or greatly reduce, our fear of death. Epicurus said, “Death, the most dreaded of evils, is nothing to us, because when we exist, death is not present, and when death is present, we do not exist.” Death is a harm because it robs us of the good of life, but it is a harm that is impossible to experience. Some will say that they don’t fear being dead but fear the process of dying, but Thomas Nagel points out succinctly and convincingly that we “should not really object to dying if it were not followed by death.” Both Nagel and Epicurus argue that death is bad because it deprives us of life, but no amount of life is sufficient to eliminate the harm. No matter how long we extend life expectancy, we will view death as a harm to us.

Of course, some of us face death with more equanimity than others. Scottish author James Boswell visited Scottish philosopher David Hume on his deathbed and was impressed by Hume’s serenity. Boswell mentioned Hume’s calm to Samuel Johnson, but Johnson refused to believe Hume was not covering his fear. In response, Boswell tells us, “The horror of death which I had always observed in Dr. Johnson, appeared strong tonight. I ventured to tell him, that I had been, for moments in my life, not afraid of death; therefore I could suppose another man in that state of mind for a considerable space of time.” Johnson responded, “The better a man is, the more afraid of death he is, having a clearer view of infinite purity.” Our fear of death may, indeed, aid our moral development.

While he doesn’t have much in common with Samuel Johnson, German philosopher Martin Heidegger also sees some advantages to our uneasiness with death. When we contemplate our own annihilation, he says, we are filled with dread, which forces us to confront what is authentic. When we are projected into Nothing, we are transcendent. If we were not “projected from the start into Nothing,” we could not relate to “what-is” or have any self-relationship. Only through confronting annihilation do we have any hope for authentic existence.
It may be that our dread gives both our life and our actions meaning. Suicide, which is often seen as a failure to negotiate life, is not necessarily so. Indeed, Simone de Beauvoir sees suicide as a possible way to will ourselves free, even in the most horrific situations. She says, “Freedom can always save itself, for it is realized as a disclosure of existence through its very failures, and it can again confirm itself by a death freely chosen.” If we do not fear our own death, however, this act of defiance and control has little meaning. Willing ourselves free through suicide is only meaningful if it is a triumph over something, and this is not to be taken lightly.

Fear of death propels us forward through life, even in the face of injury, disease, and extreme hardship, and as it propels us forward it also gives meaning to our struggle. By working to overcome our fear, we establish ourselves as free beings capable of making meaning of our own suffering. And if we will ourselves free and full of meaning, we will strive for others’ freedom as well. Indeed, Beauvoir says we extend our own freedom through the freedom of others.
As a final note, let me say that part of willing freedom for others is an effort to remove obstacles that make suicide seem like a triumph. It is for this reason we should work to promote human capabilities and, specifically, to relieve the pain and suffering of depression.
The Proper Way to Grieve for a Child: Cicero’s Example

In advising us on how to respond when we encounter someone who has lost a child or suffered an equally calamitous loss, the Stoic philosopher Epictetus said, “Don’t reduce yourself to his level, and certainly do not moan with him. Do not moan inwardly either.” These negative emotions are dangerous to us and to others, so we must be sure to keep them in check.
This sounds harsh, but Epictetus also advises us not to beat ourselves up when we do give over to grief. He says, “Someone who is perfectly instructed will place blame neither on others nor on himself.” Epictetus assures us that death is not to be feared, and our terror of it comes from within, but blaming ourselves for our feelings is also pointless.
Scottish philosopher David Hume, reflecting on the nature of tragedy in art, makes a comment about the best way to comfort a parent who has lost a child. Hume says, “Who could ever think of it as a good expedient for comforting an afflicted parent, to exaggerate, with all the force of elocution, the irreparable loss which was met with by the death of a favorite child?” I’m sure Hume is right that we shouldn’t exaggerate the loss, but I would also advise against minimizing the loss in any way, which is what Cicero’s friend, Servius Sulpicius Rufus, did after the death of Cicero’s daughter, Tullia.

Sulpicius said, “If you have become the poorer by the frail spirit of one poor girl, are you agitated thus violently? If she had not died now, she would yet have had to die a few years hence, for she was mortal born.” Sulpicius sounds harsh in this instance, but this is actually offered only after he introduced the topic, saying, “If I had been at home, I should not have failed to be at your side, and should have made my sorrow plain to you face to face. That kind of consolation involves much distress and pain, because the relations and friends, whose part it is to offer it, are themselves overcome by an equal sorrow.” If he had been available, he would have comforted Cicero and perhaps avoided the need for such harsh and critical words later, apparently.
Cicero expressed his gratitude for the comforting words laced with recrimination, but also acknowledged their ineffectiveness, saying, “For I think it a disgrace that I should not bear my loss as you – a man of such wisdom – think it should be borne. But at times I am taken by surprise and scarcely offer any resistance to my grief, because those consolations fail me.”
Cicero had also been writing consolations for himself, and he felt himself the inventor of this type of self-help. He said, “Why, I have done what no one has done before, tried to console myself by writing a book.” (This is quoted by Han Baltussen in the Nov. 2009 issue of Mortality in an essay titled, “A grief observed: Cicero on remembering Tullia.”) Unfortunately, Cicero’s Consolations have not survived the passage of time, so we can only infer what they may have said. In a letter to Titus Pomponius Atticus, Cicero remarked that he wrote in order to heal, but his writing also kept him out of public view, preserving the privacy of his grief and avoiding a vulgar display of emotion.
Cicero also took his turn consoling others. Baltussen notes, “In the examples where Cicero aims at consoling others, we find a subtle approach, developing, as it were, a ‘philosophy of empathy,’ in which he consciously or unconsciously takes personal and political aspects into account. He shows great sensibility in narrowing or widening the emotional gap between him and the consolee.” Cicero noted that one of his tasks as consoler was to establish that he himself needed consolation, since he grieved for his friend’s loss. I think this goes a little beyond mere empathy. Cicero actually feels his own sorrow upon hearing of the sorrow of a dear friend. He understands the friend’s pain because it is a magnified form of his own pain.
I personally feel that Cicero’s struggle with his grief highlights a social failure to deal with grief constructively. Can we not manage to express and process grief openly without fear of censure from friends and counselors? Since the time of Cicero, we have developed grief therapy, expressions of support for the bereaved, and paid lip service to the process of healing. Yet, we still criticize those who can’t “get it together” within a short time. Sadness is seen as weakness, especially for men, and we do not tolerate prolonged grieving. Cicero was lucky to have friends and the ability to spend time grieving and writing his consolations. Men with less power would have had no option but to keep working without respite.

As for me, I don’t know the best way to console others, but I’ve thought a little about what kinds of consolations have helped me in the past, and these are the things I appreciate. First, recognize that my pain is of such a magnitude that it obscures the horizon, and I can’t see beyond it. Second, acknowledge the enormous value of the life I have lost. Third, remind me that the person I lost had a life filled with wonder, love, accomplishments, and happiness. Fourth, remind me also that this person is in a state of peace with no more struggle, pain, or discontentment. Finally, and perhaps most importantly, assure me that I am not alone in the world, that my grief is justified, and that a future is possible.
The Ethics of Caring and Seasonal Depression
I don’t know if it is the changes in the weather, the length of the days, or what, but we sometimes find the world slipping away from us. As we reach, objects, people, and activities seem to recede continuously into the distance just beyond our grasp. We forget how to engage with even the most basic tasks. Seasonal changes can leave us feeling depressed and melancholy. As the poet Philip Larkin put it:
The trees are coming into leaf
Like something almost being said;
The recent buds relax and spread,
Their greenness is a kind of grief.
For reasons that aren’t completely understood, spring seems to bring a surge in depression and suicides, yet winter gets all the attention in warnings about seasonal depression. Some researchers have noticed that suicide spikes coincide with increased pollen production. Apparently, allergic reactions release cytokines, which affect appetite, activity, sex drive, and social engagement. There may be a philosophical question in there as to the difference between having “depression” and having a response to allergies that looks a heck of a lot like depression. Sufferers of either probably won’t worry about the distinction too much.
Some theorists suggest that suicide peaks in spring because of a “broken promise effect”: when spring doesn’t bring the joy and energy it seems to promise, the depressed are moved to suicide. Others have suggested that springtime brings more energy and agitation (and a corresponding drop in melatonin), especially for people with bipolar disorder, which can move them to act against their own lives. Still others speculate that springtime increases in serotonin give people the energy to kill themselves.
I don’t want us to turn away from people who are depressed during the holidays. Rather, I just hope we can remember that some of us occasionally feel depressed and hopeless throughout the year. The extra effort we make through the holidays may be worth making year round.
Still, I know it is true that many of us mourn with greater intensity during the holidays as we count all those who are no longer with us and grieve for our losses, so maybe we should be a little extra careful during December. A little care can go a long way toward avoiding a holiday crisis. But we should remember to keep caring and reaching out during the new year, into spring, and for the rest of the year. When we help each other, we are all stronger.