When I first met my wife in 2007, she told me she was about to quit her job of 27 years in the oil and gas industry to pursue a career in family therapy. Quitting her job meant giving up her employer-provided insurance, so she went on COBRA for 18 months. By that time, she was in graduate school and was able to get student insurance. When she graduated, however, she was unable to get insurance on her own as she had pre-existing conditions that precluded purchasing insurance on the open market.
I was following a similar path. When we first met, I was working on my PhD while also teaching full time. In 2011, I was beginning my dissertation and my college, facing budget cuts, was offering a payoff to anyone willing to resign. By this time, my wife was on my insurance, and I hesitated to give up my benefits, but we eventually decided I would resign and take student insurance for both of us.
From there, I was playing a delicate balancing act. I knew the healthcare exchanges mandated by the Affordable Care Act (Obamacare) were supposed to become available in January 2014. I pressed forward with my dissertation, though I did not want to graduate before the exchanges were available. I found that I could stay on the student insurance for six months past my graduation date. I defended my dissertation in March 2013 but did not turn in final paperwork in time for spring graduation, meaning that I would have to enroll in the summer. I graduated in August and was able to keep the student insurance for my wife and myself until February 2014.
Thankfully, the exchanges did go into effect by the beginning of 2014, and we were both able to purchase insurance for ourselves. The cost of the insurance was about the same as the price for the student insurance, but it is a much better insurance plan. I am extremely grateful for the Affordable Care Act (ACA), which made this possible.
But the ACA is even better than I realized. I now teach part time for two colleges. Under the ACA, I can rejoin the Teacher Retirement System of Texas and purchase health insurance along with disability insurance, accidental death and dismemberment insurance, and life insurance for myself and my wife. Further, the teaching I am now doing counts toward my years of service in the Teacher Retirement System, which means my retirement account is growing and will become available to me sooner.
I am not happy with all of President Obama’s policies by any means, and ultimately I would like to see the US adopt a single-payer model for healthcare, but Obamacare is a step in the right direction. Without Obamacare, my wife and I would have joined the millions of working Americans who have no health insurance or access to affordable healthcare.
Last week, I wrote a blog about the effects of financial conflicts of interest (FCOI) on treatment decisions of doctors and whether disclosure alone will have any effect on eliminating bias and corruption. As a result, I received some comments and information on FCOI in published research.
Before I say more, I would like to clarify that someone who is conducting research funded by industry is not technically, in my studied opinion, involved in an FCOI, because such a person has the single interest of generating products that will result in profit for industry. It is possible that research undertaken with the aim of commercial success will benefit humanity, but if profit is not possible, humanity be damned. (I am making an assumption, which may be naive, that most of us think medical research should be aimed at making life better for humanity.)
To help combat the problem of bias in research, John Henry Noble suggests prison time for those found guilty of scientific fraud. In my opinion, he makes two strong claims: 1. “The false claims of the perpetrators rise to the status of crime against society, insofar as they endanger public health by sullying and misdirecting the physician’s ‘standard of care.’” 2. “The due process of law is likely to uncover and judge the evidence of guilt or innocence more reliably and fairly than will the institutions of science and the professions that historically have resisted taking decisive action against the perpetrators.”
I agree that jail time is appropriate for egregious cases of scientific fraud, but I’m not sure it eliminates the problem of industry-driven research. Another person told me industry-funded research should be published for two reasons: 1. Some people are biased without the benefit of industry funding. 2. Some industry-funded research proves to be quite beneficial. Perhaps surprisingly, I agree with both of these statements as well, as far as they go. Certainly, many people carry any number of biases that do not result from corporate funding, and the history of scientific fraud is littered with examples. Further, corporate labs frequently create products I enjoy immensely.
Oddly enough, the person defending industry-funded research sent me a link to a paper to support the contention that FCOIs are not a strong predictor of bias. I say it is odd because the paper didn’t seem to support that position. The paper analyzed the association between industry funding and the likelihood that the researchers would find an association between sweetened beverages and obesity. The authors of the paper found that “Those reviews with conflicts of interest were five times more likely to present a conclusion of no positive association than those without them.” It is perhaps the conclusion of the paper that gives hope to those advocating for industry funding:
They [results of the study] do not imply that industry sponsorship of nutrition research should be avoided entirely. Rather, as in other research areas, clear guidelines and principles (for example, sponsors should sign contracts that state that they will not be involved in the interpretation of results) need to be established to avoid dangerous conflicts of interest.
In other words, it would reduce bias if sponsored researchers were limited to collecting data without analyzing it. This is hardly a ringing endorsement of industry-funded research, but so be it.
So, I do not think all industry-funded research should be banned. Rather, I think we (as a society) need to ensure that we have ample researchers who are free of FCOIs. In other words, we need substantial funding for independent research centers where researchers can work for the advancement of knowledge without a constant concern for the production of profit. Forcing our public universities and research labs to turn to corporations for funding corrupts our pursuit of knowledge and the advancement of society. We must restore public funding to education and research.
I seem to remember Jon Stewart once playing a clip of a politician declaring that sunshine is the best disinfectant. After the clip, Stewart warned viewers that using sunshine as a disinfectant could lead to a nasty infection. In response to the Sunshine (Open Payments) Act, bioethicist Mark Wilson sounds a similar alarm in a recent paper.
For years, many people, including myself, have argued that industry payments to physicians should be disclosed to the public, so that we will all be aware of possible financial conflicts of interest (FCOI). My hope was that disclosing conflicts of interest might help actually reduce corruption or even simple bias in medical practice, but Wilson points to our experience of Wall Street before and after the 2008 financial collapse to show that knowledge of conflicts of interest does not prevent them. Rather, disclosure only shifts the burden for reducing FCOI to patients, who are least empowered to eliminate them. Rather than fixing the problem, Wilson claims the Sunshine Act only “mythologizes transparency.”
Wilson pointed me to a paper (“Tripartite Conflicts of Interest and High Stakes Patent Extensions in the DSM-5”) in Psychotherapy and Psychosomatics that illustrates the problem. If you want the details, you can read the paper yourself, but I will skip right to the conclusion, which I admit is how I read most papers anyway:
[I]t is critical that the APA recognize that transparency alone is an insufficient response for mitigating implicit bias in diagnostic and treatment decision-making. Specifically, and in keeping with the Institute of Medicine’s most recent standards, we recommend that DSM panel members be free of FCOI.
Telling people about FCOI does not reduce bias and corruption; it only offers an opportunity for people to be aware that bias and corruption exist. I think it is valuable that the Sunshine Act is making people aware of FCOI. In response, though, I hope we will take steps to reduce FCOI. Unfortunately, the burden is indeed shifted to voters and consumers. The most disturbing and obviously true statement Wilson makes in his paper is this: “Until politicians end their own commercial COIs, the Sunshine Act will likely remain the governance order of the day.”
We can’t hope the experts will solve this problem. We must demand that FCOI are eliminated.
Last week I tweeted a link to a Texas Observer article by Emily DePrang about sexual assaults in Harris County jails. DePrang had written about two Bureau of Justice Statistics studies that showed the Harris County Jail on Baker Street had rates of sexual assault higher than national averages.
One survey measured rates of sexual victimization as reported by inmates and found higher than average rates of victimization by other inmates. The other survey was based on official reports of sexual violence in jails and also showed higher than average rates for the Baker Street jail. DePrang did not discuss, in her short post, all the statistical and methodological limitations of the studies in question.
To my surprise, Alan Bernstein, the director for public affairs at the sheriff’s office, tweeted me, saying he hoped someone would fact-check DePrang’s article as it had many mistakes. I asked him what the mistakes were, and he sent me a list of items he felt were misleading. Later, the Texas Observer agreed to publish his response to the article (his published response was slightly different from what he sent me).
For the most part, his response pointed out the limitations of the study. He also noted that only one of the four jails in Harris County had a higher incidence of sexual assault, and that this jail had a high percentage of inmates under treatment for mental illness. In his note to me, Bernstein asked, “Is touching a clothed inmate’s thigh sexual violence? Maybe so. But this is one of the actions considered sexual victimization in the study.” I will just say that I consider any unwanted touching of my upper thigh over or under clothing to be sexual assault, even if the “violence” seems minor.
In trying to separate the signal from noise, though, what interested me most was not the definition of sexual violence or even the limitations of the study but the fact that the jail had so many inmates on medications. The Houston Chronicle quoted Sheriff Adrian Garcia saying, “The Harris County Jail has been referred to as the largest psychiatric facility in the state of Texas” and “More than 2,000 inmates … are on psychotropic medications on a daily basis.” And in Bernstein’s response, posted on the Texas Observer site, he said:
That building houses the jail system’s inmates with acute mental illness. In fact the statistician who worked on the 2011 study tells us that two-thirds of the surveyed inmates in the so-called “high” rate building had “psychological stress disorders.” We don’t know how that was determined, and we would never allege that people with mental illness fabricate allegations more often than anyone else.
I’m not sure what “acute” means in this context, but I suspect anyone on medication is assumed to have an acute mental illness. Given the number of prescriptions written for antidepressants and anti-anxiety medications these days, I suspect a fairly high percentage of the general population is acutely mentally ill, according to these assumptions. Even someone being treated for mild depression, though, will experience unpleasant side-effects if doses are missed, as they are likely to be missed inside a jail. We should be concerned both about lack of treatment for mental health and the over-prescription of drugs for depression and anxiety. Withdrawal sometimes leads to aggressive behavior and could account for some problems. On the other hand, mental illness is also stigmatized, and those receiving treatment may become targets for abuse at the hands of other inmates.
Fortunately, I found more information on the treatment of the mentally ill in Harris County jails in an excellent article by DePrang titled “Barred Care.” According to the article, the jail “treats more psychiatric patients than all 10 of Texas’ state-run public mental hospitals combined.” And why is that? Because no one else is treating those patients. Again from the article: “Harris County has one of the most underfunded public mental health systems in a state that consistently ranks last, or almost last, in per capita mental health spending.” Some people get so desperate for relief that they break the law just so they can go to jail and get treatment.
The program in the jail is commendable. The funding priorities of our state government are not. In 2003, the Texas legislature slashed funding for mental health services in Texas. According to DePrang’s article, “In Harris County, the number of law enforcement calls about people in psychiatric crisis jumped from fewer than 11,000 in 2003 to more than 27,000 in 2012.” So, the Harris County jail has a high number of mentally ill as a result of deliberate action of our state’s lawmakers. This should make us all angry. Cutting funding for mental health services only to force the mentally ill into jails is cruel and expensive. No matter what sends people to jail, many will never really recover from the stigma and the trauma of the experience.
What should be done? We should lobby our lawmakers to restore funding for mental health services in Texas. We should stop blaming the mentally ill for their problems. We should resist the temptation to treat even minor difficulties with powerful and addictive drugs. We should insist that Texas expand Medicaid as part of the Affordable Care Act (this would cost the state nothing) so that people can receive basic medical care and avoid crisis.
In short, we should learn to heal each other. The person with a mental health crisis tomorrow could be you.
Last May, Nathan Emmerich wrote a column warning that bioethicists must not become a “priestly caste.” In the column, he warns that giving bioethicists moral authority over all practices in medicine and healthcare will have an anti-democratic effect and hinder public discourse.
He may have overstated the authority that bioethicists generally have, but it is true that some see their job as handing down judgment on various practices in medicine and research while others, frankly, would be happier to just accept the opinion of “experts” in order to avoid having to take full responsibility for their ethical decisions. The ethical expert has arisen because of rising demand. After making a thorny decision, who would not want to be able to say, “My decision was reviewed and approved by experts in ethics”?
Ethicists will do well to resist a priestly role. If you begin to believe that something is morally correct simply because you believe or say that it is, then you should apply for sainthood, not a position as an ethics consultant. When Euthyphro is asked if he knows he is doing the right thing, he replies, “The best of Euthyphro, and that which distinguishes him, Socrates, from other men, is his exact knowledge of all such matters. What should I be good for without it?” Euthyphro considers himself an expert on matters of morality and dismisses any suggestion that his opinions might be challenged. As he attempts to explain himself, his logic breaks down. Ethicists as experts would do well to open themselves to challenges from all corners as Emmerich suggests.
All this is further complicated, though, by Eric Schwitzgebel’s finding that ethicists are no more ethical than non-ethicists. Comparing ethicists and other professors, Schwitzgebel and his colleague, Joshua Rust, found that both ethicists and their colleagues reported that the ethicists were no more ethical than their colleagues. This is not terribly surprising. I may think I am a pretty ethical person but not be willing to say my colleagues in metaphysics are a bunch of thieves and charlatans. By the same token, they may think I am pretty ethical but have enough self-respect not to sell themselves short.
Of further harm to the reputation of ethicists, Schwitzgebel says ethics courses do not appear to have much effect on the ethical behavior of students. He notes that many of us who teach ethics do not claim that it will make our students behave more ethically. This is probably true in most philosophy departments, but ethics courses in law schools and business schools, for example, are designed to prevent unethical behavior down the road.
It isn’t likely that any type of ethics course can cause an unethical person to become more ethical, but courses can have an effect on ethical behavior. Courses in specific disciplines can provide a framework for codes of behavior in a particular field such as law, business, psychotherapy, or medicine. Through such courses, students can become well versed in expected norms as well as actual regulations from laws or professional codes of behavior. In addition, students can learn to examine cases and apply accepted principles of their fields to various situations they may encounter during their careers.
Theoretical courses give students a larger ethical toolbox to examine conflicts that arise in their careers and also in their daily lives. Few ethics professors have had students say that, thanks to the ethics class, they have stopped lying and cheating, but most of us have had students tell us that they now see questions in a new light. Rather than simply relying on instinct or prior teaching, students learn new ways to frame ethical problems and new approaches for identifying possible ethical harm. If nothing else, we give the students who are already ethical a greater vocabulary for articulating their actions and beliefs.
With any luck, ethicists, ethics instructors, and students will all leave the class with a bit of humility. The ethicist who believes his or her own hype as a moral authority has passed into dangerous territory. At best, the ethicist has the tools to examine ethical problems with greater detail and nuance. In the end, people eventually have to act, and a thorough ethical analysis can help guide them.
But ethics courses have a greater importance. Imagine a society where no one ever studied or discussed ethical theory or ethical decisions. It is impossible to imagine such a society, I think, because we do have to make decisions, and that requires thinking about them in detail. Some people would always rely on their “gut feeling,” but others would worry and ponder and ruminate. And they might seek the counsel of others who have spent time worrying and pondering and ruminating. And soon we would see the rise of a priestly caste and a separate group of committed but imperfect thinkers devoted to analyzing ethics in both theory and practice. We would make many mistakes, and many people would be hurt, but at least we would be trying.
Ben Jonson’s Lucretius (Photo credit: Catablogger)
My interest in the topic of this blog arose several years ago from a conversation with a scholar visiting from China. She had studied Christianity in China and was interested in meeting Christians in the United States and learning more about their beliefs and culture. She admitted to me that she felt some disappointment to learn that a promise of a blissful eternity did not seem to decrease the fear of death for most American Christians. If life is filled with pain and challenges, why would Christians not welcome a release to a joy of eternity?
Lucretius would not be surprised by their fear. He noted that those who boast of fearlessness in the face of death will react to death in pretty much the same way everyone else does.
Of course, we also know some turn to suicide, which may or may not reflect a loss of fear of death. It may only mean a fear of the misery of life has overtaken a fear of death, but I will return to that idea later.
On the other side, I can remember discussions with Christians describing the attitude of suicide bombers in armed conflict. I have heard at least a few people who equate a willingness to die for a cause with a lack of respect for the value of life rather than a lack of fear in the face of death. If we value our lives, must we fear death? Is there a greater moral advantage to reducing the fear of death or to emphasizing death as a loss of something of great value, life?
Brush drawing of German philospher Martin Heidegger, made by Herbert Wetterauer, after a photo by Fritz Eschen (Photo credit: Wikipedia)
While he doesn’t have much in common with Samuel Johnson, German philosopher Martin Heidegger also sees some advantages to our uneasiness with death. When we contemplate our own annihilation, he says, we are filled with dread, which forces us to confront what is authentic. When we are projected into Nothing, we are transcendent. If we were not “projected from the start into Nothing,” we could not relate to “what-is” or have any self-relationship. Only through confronting annihilation do we have any hope for authentic existence.
Simone de Beauvoir (9 January 1908 – 14 April, 1986) was a French author and philosopher. (Photo credit: Wikipedia)
Fear of death propels us forward through life, even in the face of injury, disease, and extreme hardship, and as it propels us forward it also gives meaning to our struggle. By working to overcome our fear, we establish ourselves as free beings capable of making meaning of our own suffering. And if we will ourselves free and full of meaning, we will strive for others’ freedom as well. Indeed, Beauvoir says we extend our own freedom through the freedom of others.
As a final note, let me say that part of willing freedom for others is an effort to remove obstacles that make suicide seem like a triumph. It is for this reason we should work to promote human capabilities and, specifically, to relieve the pain and suffering of depression.
I’ve been reading Kurt Vonnegut again. It is a bad habit I started as a teenager. When I began reading Vonnegut, I was a classic example of a depressed teenager, or at least that was how I saw myself.
Looking back, I realized I had many reasons to be sad. Extremely sad, even. A friend had died in a motorcycle accident when a car pulled in front of him in our own neighborhood, and then my uncle, who was 25 years old, died in a fire that consumed the mobile home he was living in. Of course, a few other bad things happened, too, and the world just seemed a little crazy to me, not fair at all.
My confusion was compounded by the fact that I would often hear family members ask one another, “Do you think someone is trying to tell you something?” They searched each devastating event for a message from God. If something bad happened, it was because we had done something wrong. At church, I learned that all the pain, all the trials, and all the trauma was part of God’s plan, even if no mortal could make heads or tails of the plan. I hadn’t read Kierkegaard yet, but I was told to take a “leap of faith,” and then I was thrown off a cliff of faith.
Søren Kierkegaard (Copenhague) (Photo credit: dalbera)
So, around that time, I read about Kurt Vonnegut’s unlucky sister. In the prologue to Slapstick, he told of how while his sister, Alice, was dying of cancer, her husband, who was to take care of their children after her death, died on “the only train in American railroading history to hurl itself off an open drawbridge.” It was bad luck—bad enough to make you feel a little depressed.
But Vonnegut always made me feel better about things. He said, “Since Alice had never received any religious instruction, and since she had led a blameless life, she never thought of her awful luck as being anything but accidents in a very busy place.” Although I have received prodigious religious instruction and led a life full of blame, that one line has gotten me through many dark moments.
Over the years, I’ve heard many people tell me that bad things were part of some tortuous plan by some deity or other, I’ve heard that children are only on earth as a “loan” from God, and I’ve heard that God won’t give us more than we can handle. It seems to me that people routinely get more than they can handle. Many people die from stress-related illness or suicide, brought about by despair and a massive inability to cope with life’s tribulations.
Ah, but the people who didn’t survive just didn’t have enough faith to get by. The message I got from this was: “Be strong—or God may kill you.” If I had no faith in the purely accidental nature of bad luck that I learned from the Vonneguts, I am not sure I could have survived my life, which really only has the normal amount of sorrow and trauma. I haven’t been spectacularly unlucky, even by first-world standards.
Thanks to some of the interpretations I have heard of the meaning of traumatic events, I get a little nervous when anyone starts talking about making meaning of suffering. I’m quite happy to believe that suffering is just one of the vagaries of an existence fraught with peril. According to a paper by psychologist Robert Neimeyer and his coauthors, people have an intense need to “make meaning” after an extreme event disrupts their life narrative. Through a process of making meaning, individuals are able to restore a coherent narrative of their lives.
Part of the problem, it seems, is that most people believe the world has a certain moral order, and that people who are good will be rewarded with positive outcomes. So, when bad things happen, we will surely ask, “Why me?” This is a question Alice Vonnegut never asked herself, according to her brother, anyway. The horrible luck she had did not interrupt her narrative because her narrative was one of randomness and accidental events.
Regardless of what narrative one tells regarding the moral order of the universe, many people do see their own moral or spiritual growth as a result of suffering. Indeed, when we meet young people who are self-satisfied and callous, we often think that they will grow as they meet with grief and loss, and that growth will bring wisdom. It is good to know that our loss can make us better people, but I can’t think of a time when I would not give up my personal growth in order to have a loved one restored.
It seems somehow wrong, ethically wrong, to look toward loss as an opportunity for growth, but we do not seem quite so bothered by looking backward to a loss as a catalyst for growth. Herein lies some of my discomfort with focusing too clumsily on making meaning—it almost implies approaching loss by asking, “What can I get out of this?” Alternatively, it invites people to celebrate what they gained from loss. This, in itself, can create moral distress.
To be sure, psychologists such as Robert Neimeyer emphasize accompanying the grief-stricken on their own journey without guiding them down any particular path. People will, naturally, have to determine what their loss means and also what meaning they assign to life after their loss. If they fail to find any meaning, they may lose their lives altogether.
In the quest for meaning, though, I hope we can accept that we live in a world full of hazards, and they do not affect us in any rational order. It turns out that some really awful people live rather charmed lives, and the purest and most compassionate people in the world suffer, though not always.
If we have the strength, we put one foot in front of another one more time. And, maybe, once again.
Most patients realize doctors receive gifts from the pharmaceutical and device-manufacturing industry. When we see industry logos on pens, clocks, and posters, we don’t assume the doctors ordered these items from the merch page of these companies, but most of us aren’t aware of how lucrative the payments to doctors can be.
A story that ran in the New York Times described the experience of Dr. Alfred J. Tria, who made $940,857 in about two years for promoting products and training doctors in Asia to use them. The article notes that Tria’s experience may be exceptional, but in two and a half years, industry paid out $76 million to doctors practicing in Massachusetts alone. I’ve been reading about this subject for a while, and even I was surprised that someone could make half a million dollars in a year on a side job.
In a slightly different type of payout, Oregon recently concluded a court case against two doctors who “put heart implants into patients without telling them that a manufacturer’s training program put a sales representative into the operating room.” The doctors would receive between $400 and $1,250 each time they completed a surgery using Biotronik defibrillators and pacemakers. The state argued that patients should know when doctors’ recommendations may not be based entirely on the needs of the patient. One of the doctors in the case earned more than $131,000 from 2007 to 2011 through implant surgeries. Doctors also received speaking fees, expensive meals, and other gifts.
As part of the Affordable Care Act (Obamacare), the Sunshine Act now requires industry to collect data on payments to doctors, and the information will be made available to the public next year on a government website. This will enable patients to learn how much their doctor receives from industry each year, though there may still be hidden incentives.
For example, doctors may be in profit-sharing arrangements with facilities, or they may actually own the facility. A recent study by Dr. Matthew Lungren found that doctors who had a financial stake in an imaging facility ordered tests with negative results at a much higher rate (33 percent) than doctors with no financial stake. In other words, it appears that doctors order unneeded tests because they are making money off of them, not because the patients need them. Lungren suggests that patients should ask whether they are being referred to a facility in which the doctor has a financial stake. I say the doctor should volunteer the information.
Beginning next year, the Sunshine Act will make it much easier for patients to discover their doctors’ financial relationships with industry, and I’m thrilled for this development. For those who think Obamacare is a complete disaster, just take a minute to relish this one positive development. Still, I think the movement should go further. I think financial disclosure should be part of the informed consent process. When your doctor is telling you all the risks and benefits of treatment, I think he or she should also say, “I get paid $1,000 to do this surgery,” “I will make $100 off this MRI,” or “I own stock in the company conducting this medical research.”
I believe patients want this information, and I don’t think they feel it is their responsibility to search for it. True informed consent is only possible in light of complete financial disclosure.
In the battle between conservatives and progressives, we are generally presented with a false dilemma. We are expected to choose between two positions: 1. Corporate greed is evil. 2. Profit is what drives innovation and improvements to our standards of living. Unfortunately, it is the progressives who are making the mistake here. Greed is only a problem because it results in human rights abuses, criminality, and grave injustice. We do better to focus on the abuses rather than the rather nebulous harm of greed itself.
When I talk to conservatives about specific instances of corporate criminality, they generally acknowledge that something should be done in such cases. For example, seven court cases from 1997 to 2008 resulted in convictions for slavery in Florida. Those convicted of slavery “threatened the immigrants, held their identification documents, created debit accounts they couldn’t repay and hooked them on alcohol to keep them working.” The workers were also beaten and forced to live in substandard conditions.
As the accounts of slavery came to light, activists organized and demanded that restaurants pay more for tomatoes in order to provide an actual wage for tomato workers. By May 2008, Burger King had joined McDonalds and Yum! Brands in meeting the demands of the Coalition of Immokalee Workers and paying more for tomatoes. It was a notable success, but the activism of the CIW continues. After progressive strides with fast-food restaurants, the coalition ran into resistance from grocery giant Publix, which refused to join any agreements to improve working conditions. In 2010, a Publix spokesperson said, “If there are some atrocities going on, it is not our business.” Publix has not budged yet, but the Coalition of Immokalee Workers continues its work and will receive the 2013 Freedom from Want Medal from the Roosevelt Institute.
When progressives argue that corporate greed leads to great evil, they may be correct, but they make it too easy for conservatives to simply point out all the innovation and convenience the profit motive has produced. When we argue against slavery, however, we force conservatives either to defend slavery or to admit that the industry must be reformed in one way or another. It is not likely that progressives and conservatives will agree on what kind of reform is necessary, but at least the conversation has begun with some possibility of tangible results, as we see in the case of the Immokalee workers. Knowing of specific abuses, such as those in the Foxconn factories in China or the sweatshops in Bangladesh, most consumers, both progressive and conservative, demand reform, and corporations do listen to them. Progress is slow and frustrating, but it is progress.
It may be possible that severe and systemic structural reforms are required to eliminate slavery and other forms of corporate abuse, but it is the abuse and the desire to eliminate abuse that must motivate the change.
As for many people, Peter Singer was the first bioethicist to occupy any space in my consciousness. He first got my attention with his concern for animal welfare and his calls for vegetarianism. I suppose he is best known for saying we should not eat animals but that it is sometimes acceptable to kill our babies, a position many people find upside down, especially if they haven’t actually read his arguments, and few of his critics seem to have.
But Singer has also spent a great deal of effort offering suggestions on relieving the problems of globalization, wealth inequality, and further destruction of the planet. One can offer reasoned objections to his suggestions, of course, but his choice of topics and concerns helped define what bioethics was for me.
Singer’s concerns fit nicely with the term “bioethics” as originally conceived by Van Rensselaer Potter in 1970. Potter said bioethics should be “a new discipline that combines biological knowledge with a knowledge of human value systems.” Potter saw bioethics as a systematic attempt to ensure the survival of the planet and all the people on it. One of Potter’s goals was to eliminate “needless suffering among humankind as a whole.”
Van Rensselaer Potter
Unfortunately, by the middle of the 1970s, the term “bioethics” had already been co-opted by the medical establishment and applied primarily to medical ethics. Concerns for ensuring the well-being of humankind were replaced by concerns for patients and doctors, with a strong emphasis on patient autonomy. Today’s bioethicists tend to ignore problems that have nothing to do with healthcare or medical research, but millions of people in the world have no access to healthcare and so escape the attention of bioethicists entirely, which is itself an injustice.
To be sure, there are still bioethicists in the world working for justice and, in some notable cases, the survival of the planet, but those working on themes outside of healthcare or medical research are outsiders at best. (For a couple of examples, see Martha Nussbaum and Thomas Pogge.)
I will continue to argue that this is the wrong approach to bioethics. Potter’s and Singer’s concern for promoting the health of the earth and all its inhabitants is the only reasonable way to think of bioethics, and those who disagree are the ones who should defend their positions.
What are some of the issues we need to address? Just to get us started, we can look at environmental justice, war, climate change, workers’ rights, wealth inequality, access to water, human rights abuses, women’s equality, and children’s welfare. Too broad? The problems that threaten life and health are vast. Medical practice requires an enormous cadre of professional ethicists to develop policy and practice guidelines, of course, but bioethicists following the vision of Potter should be welcome at the table as well.