Dangers of Anthropomorphism in Medicine (#poem)

Photo by Pixabay on Pexels.com

It is most important, he said, to never
Ascribe to your subjects the feelings,
Intentions, and desires of humans.

You must make assiduous reports
Of behavior devoid of motive or
Explanation. The maternal adult screamed

But never wailed in sorrow, for
We cannot assume she is capable
Of sorrow. We cannot assume

Her frantic clamoring expresses
Either desperation or lamentation for
The infant stolen from her hours before.

We cannot assume she feels what
Humans feel or, indeed, is capable
Of thoughts or intentions at all.

But do remember that our work
Is important, as these specimens
Are perfect subjects for the study

Of human medicine. Their biology
And neurology is so similar to human
Biology that we can safely assume

That any treatments developed
For them will have similar effects
On their human counterparts.

What is safe for your subjects
Will be safe and beneficial for
Humans. Any deleterious effects

Must be recorded, of course,
As you have an obligation to humanity.
Your aim is to improve human well-being.

Payment as Coercion: Researchers Versus Research Participants

In the world of medical research, ethicists say it is unethical to pay research participants a substantial amount of money. The worry is that a hefty sum might induce people to sign up for risky research they would otherwise avoid, so participants may receive only minimal compensation for their time. On this view, large payments exploit them and violate their autonomy by removing their ability to refuse participation. Of course, people with little money and few resources will sign up for risky experiments anyway, because they need the money, even if the sum is paltry. Poverty reduces one’s autonomy and makes one ripe for exploitation, unfortunately.

The other way to look at it, of course, is that individuals are participating in research that may yield lucrative products, may cause unpleasant or harmful side effects, and may be quite inconvenient indeed. For lending their bodies to this unpredictable, but likely profitable, enterprise, it might make sense to compensate them more generously for their time and their willingness to risk their own health. After all, workers who take on other kinds of risky work are commonly compensated above the normal pay scale. So, I say the industries should compensate their research participants in ways that are commensurate with the risk and inconvenience they are accepting.

Finally, if payment is coercive for research participants, surely it is coercive for researchers as well. Even workers with six-figure salaries can be exploited and manipulated with large sums of money and other favors. Doctors and researchers might well do the same work without large payments, but surely large payments (much larger than any research participant ever receives) must compel them to conduct their research in ways they would not in the absence of such money. We might say they have, in effect, had their autonomy stripped from them through coercive payments.

And so it goes.

Review: John Gluck’s Voracious Science and Vulnerable Animals

The entire medical research enterprise is built on a foundation of intense and immense animal suffering. Most of the effective treatments we have now were tested on non-human animals before they were ever used on humans. On the other hand, most non-human animal research does not lead to an effective treatment or even to publishable results.

In Voracious Science and Vulnerable Animals: A Primate Scientist’s Ethical Journey, John Gluck describes his glacially slow transition from primate researcher to animal welfare advocate. Early in his career, Gluck worked on the infamous monkey social-isolation experiments that provided the earth-shattering news that separating infants from their mothers and rearing them in isolation harms their emotional and intellectual development. Thanks to this ground-breaking research, mothers have learned not to raise their babies in small wire cages or occasionally perform painful surgeries on them.

In approximately the same amount of time it took for humans to evolve from other species, Gluck began to realize the great harm he was causing to his beloved monkeys. He apprehended that harm only after personally observing the excruciating suffering of the animals he was studying, seeing the shock in the eyes of non-scientists when he described his work and realizing that he could describe it only to fellow scientists, having a student present him with Peter Singer’s accurate description of his work, having his lab broken into by animal rights activists, and, finally, talking with philosophers about the rights of animals.

The brilliance of his account is that he illustrates why it was so difficult for him to acknowledge the pain he was causing and why it is next to impossible to engage animal researchers in a debate over the welfare of research animals. Typically, animal researchers say they turn to non-human animals when it would be unethical to test on humans. When pressed, they will agree that animals should be used only when their use benefits the pursuit of scientific knowledge, should be given clean living quarters, should be fed appropriately, and should be given medical treatment when needed. Unless, of course, the scientist is studying the effects of food deprivation, lack of medical treatment, and so on.

The research is further justified by the claim that non-human animals have biological and neurological structures similar enough to ours that results in non-human animals can be replicated in human animals. The human who doubts the similarity is scoffed at for being scientifically illiterate. Paradoxically, anyone who suggests that non-human animals, so similar to humans in other ways, are also similar to humans in their capacity for suffering or in their moral importance is accused of anthropomorphism. The argument is either that animals are not capable of suffering in any meaningful way or that their suffering is of no moral significance.

Gluck describes these arguments and explains that he himself held such seemingly contradictory views because, beginning with undergraduate study, they are taught and repeated ad nauseam until they become ingrained. Anyone who questions these basic beliefs is either met with laughter or denied entry to and participation in research programs. People within the system become so closed off from contrary opinions that they are often surprised when descriptions of their work shock and offend outsiders. The only explanation for the outrage many scientists will consider is that outsiders cannot understand the importance of their work.

One of the more fascinating events that led to Gluck’s change of heart concerned a human patient who was thought to be severely cognitively impaired. Staff in the patient’s room talked about the woman as if she were an object. Gluck was trying to solve a particular problem: at times, staff could feed the woman from a spoon, but at other times she could not swallow. It turned out that she could swallow but was refusing to because she did not appreciate the way certain staff treated her. It was the only form of protest she had at her disposal. When Gluck realized how robust the conscious life of this patient was despite the appearance of minimal cognitive activity, he realized also that he could not say with certainty what thoughts, beliefs, or emotions non-human animals might experience.

Gluck eventually decided to get out of animal research and began teaching courses on research ethics that covered a variety of topics but included discussions of animal welfare. (If you care about the suffering of the animals in his lab, you will be disappointed by what happened to them.) Gluck’s educational programs on research ethics were successful in the sense that they attracted students from a myriad of disciplines and engaged both students and faculty in interesting and enlightening debate on the use of both human and non-human animals in research. Looking back, he is proud of having begun these discussions but admits that animal researchers were the one group that never engaged in them.

Ethicists can attempt to change practices from inside or outside of institutions. Outsider ethicists have more freedom to make bold declarations of misconduct, express outrage, and threaten established practices. Insider ethicists have greater access and opportunity to speak directly with the people who have the power to change practices. Both kinds of ethicists are needed. Gluck is an insider whose thoughts and arguments were enhanced and supported by outsider ethicists. He says he was unable to effect a great deal of change inside research labs, but he was able to speak to researchers as an equal and engage them in ethical discussion. Sadly, insider ethicists who raise ethical alarms are often forced outside. It takes a great deal of courage to risk losing a privileged position inside the castle, and it also takes a great deal of courage to storm the castle gates.

If you are looking for a book with a detailed and comprehensive review of philosophical theory related to animals, you will be disappointed in Voracious Science and Vulnerable Animals; however, if you are looking for an insider’s perspective on the views and outlook of animal researchers, you will find Gluck’s insights and introspection fascinating, even if depressing. The book shows that it is possible for researchers to be moved, to gain compassion, and to understand the harm they are doing, but it also shows that such progress is slow and infrequent.


Book Review: The Experiment Must Continue by Melissa Graboyes

We all have a complicated relationship with medical research. We know that every effective treatment or therapy that exists was once an experimental treatment or therapy. We know that some drugs have been so effective that they eradicated various diseases completely, and we also know that someone had to be the first one to try all those new drugs. On the other hand, most new drugs don’t work out. Some are simply not effective, some are effective but have serious side effects that make them all but useless, and others turn out to be deadly.

Medical research is plagued with problems related to consent, coercion, therapeutic misconception, benefit, and access. All these problems exist in North America and Europe, with both well-educated, affluent populations and with so-called “vulnerable” populations.

Informed consent is an example. Virtually everyone agrees that patients who participate in medical research should know about and agree to their own participation. Ethics committees, lawyers, and bioethicists have gone to great pains to develop proper informed consent procedures. Sadly, too many people talk to their doctors about treatment options, hear about ongoing research, and sign consent forms without actually realizing they have agreed to participate in a medical experiment. Despite the best intentions of everyone involved, patients believe they are receiving treatment that is expected to help them (therapeutic misconception).

I sometimes use the HBO film adaptation of Margaret Edson’s play, W;t, in my classes. The main character agrees to experimental treatment, is informed of the side effects and goals of the research, and then goes on to suffer tremendously for her decision. When I have my students write about the movie, more than half of them still believe the doctors were trying to cure the main character’s cancer. Despite all the frank discussions of the research, they still don’t understand that the protagonist was never expected to benefit from the treatments she was receiving. Furthermore, the character herself never seemed to fully realize that her participation was never expected to benefit her in any way.

If these kinds of misunderstandings happen between researchers and research participants from the same culture speaking the same language, the problems are sure to be compounded by cross-cultural communication. In her book, The Experiment Must Continue: Medical Research and Ethics in East Africa, 1940–2014, Melissa Graboyes explores ethical challenges and lapses in numerous studies conducted in East Africa. Her book is a refreshing attempt to shed “conventional wisdom” about research in Africa.

For example, I think anyone who has studied research ethics has heard that African chiefs would sometimes provide consent for all the people in a village to participate in research projects. Graboyes says she could find no evidence that anyone in any of the locations under study ever recognized the right of anyone to give collective consent for a group of people. Further, many describe African research participants as “vulnerable” populations with little to no agency. In the sense that many people lack adequate medical care, they are vulnerable, but Graboyes challenges the notion that they lack agency and gives several examples of Africans responding actively and rationally to both exploitative research and beneficial research. In short, she shows that they are actually persons with wills, minds, autonomy, and awareness.

Another common theme for those studying research ethics is the use of coercion to get people to enroll in trials. Many wring their hands worrying over whether offering payment or gifts might unduly coerce potential participants whose desperate poverty might drive them to enroll. Those who did enroll, however, were more concerned about inadequate compensation than undue coercion. Participants realized that others would benefit from research carried out on their bodies or in their homes. In exchange for participating, they felt some reasonable benefit was due, whether it be in the form of cash, medicine, or health services.

One possible benefit, of course, is access to medicines; researchers commonly advertise that participants will receive a new treatment at no charge. Many African participants assumed they were trading their blood for research and in turn would receive medicines that would benefit them. In some cases, participants did receive helpful medications, but those medicines were then withheld from them at the end of the research, even when the treatment proved effective. Researchers say it isn’t their responsibility to provide the medications, which may or may not be expensive, but leaving people with the knowledge that an effective treatment exists without making it available seems to me a particularly cruel kind of harm.

In the United States, people also expect access to new medications. When people find they have a terminal illness, they will often (I want to say usually) demand to receive experimental medicines. In the 1980s, AIDS activists in the US demanded that experimental treatments be distributed to HIV-positive individuals, and demands for quick approval for experimental drugs have become routine. In this sense, medical research may be a victim of its own success. Most people in either America or Africa fail to appreciate the risk they take with unproven medicines.

Many researchers view Africa as a fertile field for research (some describe Africans as “walking pathological museums”) because of the abundance of diseases present and the relatively low costs involved compared to research conducted in Europe and North America. Graboyes describes both successes and failures in East Africa, but the failures can be depressing. In some cases the research never got off the ground, in some it never produced usable results, and in some it made conditions much worse.

Is it unethical to conduct research in Africa? Graboyes doesn’t think it is necessarily unethical to conduct research in East Africa, but she does feel some of the research has been unethical, some simply misguided, and some poorly designed. Many Africans do not trust researchers, which is frustrating to researchers who feel they are on a noble quest to end disease, but many of these researchers fail to realize how many before them told outright and deliberate lies in East Africa. People do not forget so easily.

I don’t want to give away too many details of the book, as it can become something of a page-turner. One last thing I will mention, though, is the fact that Graboyes was aware that she was another researcher visiting East Africa asking for cooperation. Although she wasn’t taking blood, spraying insecticides, or injecting treatments, she still needed to ensure that she was proceeding ethically and had the trust of the people she was interviewing. Her efforts are admirable but remind us that any reporting of facts is a matter of interpretation and may be subject to modification.

This book is admirable and compelling, especially for those interested in the ethics of international research. In addition, her insights might help to develop better ethical practices for domestic research, as many of the issues are the same.

Attribute Substitution and Public Health

Partly in response to a series of posts in the New England Journal of Medicine dealing with conflicts of interest in medical research, Austin Frakt wrote a piece for the New York Times titled “A New Way to Think About Conflicts of Interest in Medicine.” In the end, he claims that too many critics dismiss a study simply because it received industry funding, and he says this is a kind of attribute substitution (a shortcut whereby something is rejected because it is associated with something negative rather than judged on its own merits).

This is a bit of a red herring, because attribute substitution is not always such a bad thing. If I am negotiating to buy a new car, it may be that everything checks out regarding the engine, interior, paint, brakes, and so on, but I may reject the car simply because I happen to know it is stolen. The fact that it is stolen doesn’t make it a bad car, but it does make it one I would not want to buy. In the same way, the fact that a study is industry funded does not prove it is a bad study, but it is possible for a reasonable person to object to it simply on the grounds that its funding encourages unethical behavior.

Also in his essay, Frakt mentions that research is often tainted by many things other than industry funding: personal relationships, religious bias, and overweening personal ambition. On the other hand, industry-funded research often yields excellent, well-controlled studies with beneficial results.

All this is true, of course, and there may be critics out there who believe that no industry-funded research should be published, but I think that is an unusual view. What is more common is to call for disclosure of financial ties to industry. With such disclosure, readers can evaluate the data with an understanding of the possible bias of researchers. More important, in my opinion, is that disclosure helps us see whether anyone from outside of industry is working on the same problem.

Disclosure does nothing to eliminate bias. If I know that someone is working for Pharma Co. X, I know she is trying to develop profitable products for her employer. The best path to a high profit is probably through rigorously controlled research. The bias of the researcher is to develop a profitable product, and disclosure will not change that bias; it isn’t a conflict of interest, as profit is really the only interest driving the research.

The problem is that most research is now funded by industry (in 2012, industry funded about 59 percent of medical research in the US). When researchers are hired to create marketable products, they are, indeed, motivated to show bias both in how they conduct their research and in what kind of research they begin in the first place. Unethical practices can happen both within and outside of industry, but we are better off having a variety of ways to fund research, and we are better off having transparency about how research is funded and how it is conducted. We need to know how research participants were recruited. We need to know what data was and was not used. On the issue of transparency, I agree completely with Frakt: “To the extent research design and methods are not up to snuff, that’s the red flag — the door through which conflicts of interest enter and exert undue influence. More rigorous, transparent and reliable research from both industry and nonindustry sources would reduce the need to lean so heavily on mental shortcuts like attribute substitution in judging scientific merit.”

Finally, we need to know whether research was aimed at reducing human suffering or merely at generating profits. On a good day, these two goals are perfectly aligned. On a normal day, reducing human suffering is at odds with creating products. I have mentioned philosopher Thomas Pogge’s efforts to create incentives for companies to develop drugs for conditions that may not be profitable before, and I think it is worth mentioning his Health Impact Fund once again. Pogge’s solution is one that works fairly well with market-based thinking. Love it or hate it, it is a good effort. Other solutions are possible, though. Governments could pool resources to simply set up labs and hire scientists to develop cures for diseases that affect global health. Capitalist investors might also want to develop cures in order to capitalize on improved human resources, as John Rockefeller did about a hundred years ago.

Yes, I realize government funding and charitable institutes still exist (Rockefeller’s legacy continues), but research for profit (and only for profit) threatens our ability to continue advances in public health. We need greater transparency in research (of both financial ties and data), greater protection for research subjects, more variety in funding sources, and more checks to replicate and confirm findings. It may be expensive, but mistakes are expensive, too.

Will industry-funded research kill you?

Last week, I wrote a blog about the effects of financial conflicts of interest (FCOI) on treatment decisions of doctors and whether disclosure alone will have any effect on eliminating bias and corruption. As a result, I received some comments and information on FCOI in published research.

Before I say more, I would like to clarify that someone who is conducting research funded by industry is not technically, in my studied opinion, involved in an FCOI, because such a person has the single interest of generating products that will result in profit for industry. It is possible that research undertaken with the aim of commercial success will benefit humanity, but if profit is not possible, humanity be damned. (I am making an assumption, which may be naive, that most of us think medical research should be aimed at making life better for humanity.)

To help combat the problem of bias in research, John Henry Noble suggests prison time for those found guilty of scientific fraud. In my opinion, he makes two strong claims: 1. “The false claims of the perpetrators rise to the status of crime against society, insofar as they endanger public health by sullying and misdirecting the physician’s ‘standard of care.’” 2. “The due process of law is likely to uncover and judge the evidence of guilt or innocence more reliably and fairly than will the institutions of science and the professions that historically have resisted taking decisive action against the perpetrators.”

I agree that jail time is appropriate for egregious cases of scientific fraud, but I’m not sure it eliminates the problem of industry-driven research. Another person told me industry-funded research should be published for two reasons: 1. Some people are biased without the benefit of industry funding. 2. Some industry-funded research proves to be quite beneficial. Perhaps surprisingly, I agree with both of these statements as well, as far as they go. Certainly, many people carry any number of biases that do not result from corporate funding, and the history of scientific fraud is littered with examples. Further, corporate labs frequently create products I enjoy immensely.

Oddly enough, the person defending industry-funded research sent me a link to a paper to support the contention that FCOIs are not a strong predictor of bias. I say it is odd because the paper didn’t seem to support that position. The paper analyzed the association between industry funding and the likelihood that researchers would find an association between sweetened beverages and obesity. The authors of the paper found that “Those reviews with conflicts of interest were five times more likely to present a conclusion of no positive association than those without them.” It is perhaps the conclusion of the paper that gives hope to those advocating for industry funding:

They [results of the study] do not imply that industry sponsorship of nutrition research should be avoided entirely. Rather, as in other research areas, clear guidelines and principles (for example, sponsors should sign contracts that state that they will not be involved in the interpretation of results) need to be established to avoid dangerous conflicts of interest.

In other words, bias would be reduced if sponsors funded the research but stayed out of interpreting the results. This is hardly a ringing endorsement of industry-funded research, but so be it.

So, I do not think all industry-funded research should be banned. Rather, I think we (as a society) need to ensure that we have ample researchers who are free of FCOIs. In other words, we need substantial funding for independent research centers where researchers can work for the advancement of knowledge without a constant concern for the production of profit. Forcing our public universities and research labs to turn to corporations for funding corrupts our pursuit of knowledge and the advancement of society. We must restore public funding to education and research.

For more on the possible risks of funded research, read about the cases of Dan Markingson and Jesse Gelsinger.

Corporate funding of research.

Many of us are suspicious of health and safety claims based on research funded by corporations that get rich off public confidence in the health and safety of their products. I don’t really trust manufacturers of drugs or genetically modified foods to tell me that they are safe. I also would feel better hearing that an oil spill is no threat to life or environment from someone other than the company that spilled the oil. (Many people seem to have made one inexplicable exception to this rule, which I will mention in the postscript.)

Further, when corporations fund research projects or labs, they gain control over what information is published. The scientists involved may have enough integrity to conduct rigorous research, but unwanted results are likely to be suppressed, especially if they will hurt the bottom line. This may be justified by claiming that only “useful” data need be published, but negative data can also be useful and can avoid wasted money and energy. If one researcher finds that something doesn’t work, publishing that data can help others avoid the same mistakes. Of course, researchers do share data, but some studies are also suppressed. Publication of misleading data and suppression of useful data are two possible hazards of corporations funding research that will affect their bottom line.

On the other hand, if corporations are the ones to benefit from research, it seems they should bear the cost of supporting labs, scientists, and related endeavors. Of course, some research is in the public interest, and I believe the public should fund it, which may be the topic of another blog. To avoid obvious conflicts of interest in research, companies should not be permitted to hire and promote researchers directly. Funding should go into a pool and be disbursed anonymously to research labs, scientists, and universities. For-profit labs could still exist, but researchers should not be beholden to a specific entity. It was not that long ago that much university research was conducted in this manner. In that sense my proposal is regressive, not progressive.

Postscript: When people get sick, many of them demand the latest drug available, even if it hasn’t been tested thoroughly. They seem to feel that their suffering from the disease is always going to be worse than the effects of the drug. I recently had a student (not a medical student) argue vehemently with me that no one had ever died during a drug trial. For those who know anything about drug trials, this overconfidence is baffling, but I fear many share his optimism regarding the safety and effectiveness of experimental drugs. If you don’t know this already, let me tell you that drug testing is there for a reason; not every drug tested turns out to be safe and effective.