A Poem about Conflicts of Interests

They call them velvet handcuffs,
the payments from pathological profiteering. 

The philosopher, physician, and preacher
Are undone with equal ease,

Justifying and embracing sin along an invisible
Slide from righteousness to depravity.

It started with so much hard work,
Surely remuneration was appropriate.

To be honourable doesn’t mean to be impoverished,
It only means to think independently.

This won’t affect my judgment, he assured us,
And he was offended anyone thought otherwise.

Someone simply doing his job has no conflict
Of interest, only the freedom to be objective.

One day he has an epiphany, but has one last lament:
“If I quit now, I will lose everything.” 

Essay: Some Conflicts of Interest Have Little Conflict

Let’s say you make a lot of money in some industry or another, and you’re lucky enough to get an appointment to an agency that regulates that very same industry. Your regulatory decisions could affect your bottom line, so you have a conflict of interest: you should either be forced to give up your job as a regulator or divest all your financial interests in the industry, with the provision that you may never acquire financial assets in the industry again. And if you’re a doctor on the payroll of a pharma company, your employment status most definitely affects your medical decisions.

That’s a pretty simple and obvious concept to anyone who doesn’t work in industry. People who work in any given industry tend to think “outsiders” wouldn’t know enough about the industry to regulate it, so of course you’d need someone with major conflicts to understand what really needs to be done. And so it goes.

But other people are described as being conflicted when they really don’t have any conflicts at all. Let’s say you are a researcher, and you apply to a corporation for funding for your research. Congratulations, you now have a huge grant from Megacorp Inc. to fund your lab, materials, research assistants, etc. in hopes of developing new products. You are now just a handsomely rewarded employee of Megacorp Inc. Your only interest is in developing new products for them.

It’s true that some will describe you as conflicted because they think you should be looking out for the public good, but that really isn’t in your job description. You’re just developing products.

And this is why we need public funding for research: so we can demand that the researchers we are paying work for the public good and not in the interest of for-profit corporations.


Attribute Substitution and Public Health

Partly in response to a series of articles in the New England Journal of Medicine dealing with conflicts of interest in medical research, Austin Frakt wrote a piece for the New York Times titled, “A New Way to Think About Conflicts of Interest in Medicine.” In the end, he claims that too many critics dismiss a study simply because it received industry funding, and he says this is a kind of attribute substitution (a fallacy whereby something is rejected because it is associated with something negative rather than on its own merits).

This is a bit of a red herring, because attribute substitution is not always such a bad thing. If I am negotiating to buy a new car, it may be that everything checks out regarding the engine, interior, paint, brakes, and so on, but I may reject the car simply because I happen to know it is stolen. The fact that it is stolen doesn’t make it a bad car, but it does make it one I would not want to buy. In the same way, the fact that a study is industry funded does not prove it is a bad study, but it is possible for a reasonable person to object to it simply on the grounds that its funding encourages unethical behavior.

Also in his essay, Frakt mentions that research is often tainted by many things other than industry funding: personal relationships, religious bias, and overweening personal ambition. On the other hand, industry-funded research often yields excellent, well-controlled studies with beneficial results.

All this is true, of course, and there may be critics out there who believe that no industry-funded research should be published, but I think that is an unusual view. What is more common is to call for disclosure of financial ties to industry. With such disclosure, readers can evaluate the data with an understanding of the possible bias of researchers. More important, in my opinion, is that disclosure helps us see whether anyone from outside of industry is working on the same problem.

Disclosure does nothing to eliminate bias. If I know that someone is working for Pharma Co. X, I know she is trying to develop profitable products for her employer. The best path to a high profit is probably through rigorously controlled research. The bias of the researcher is to develop a profitable product, and disclosure will not change that bias; it isn’t a conflict of interest, as profit is really the only interest driving the research.

The problem is that most research is now funded by industry (in 2012, industry funded about 59 percent of medical research in the US). When researchers are hired to create marketable products, they are, indeed, motivated to show bias both in how they conduct their research and in what kind of research they begin in the first place. Unethical practices can happen both within and outside industry, but we are better off with a variety of ways to fund research, and better off with transparency about how research is funded and how it is conducted. We need to know how research participants were recruited. We need to know what data was and was not used. On the issue of transparency, I agree completely with Frakt: “To the extent research design and methods are not up to snuff, that’s the red flag — the door through which conflicts of interest enter and exert undue influence. More rigorous, transparent and reliable research from both industry and nonindustry sources would reduce the need to lean so heavily on mental shortcuts like attribute substitution in judging scientific merit.”

Finally, we need to know whether research was aimed at reducing human suffering or merely at generating profits. On a good day, these two goals are perfectly aligned. On a normal day, reducing human suffering is at odds with creating products. I’ve mentioned before philosopher Thomas Pogge’s efforts to create incentives for companies to develop drugs for conditions that may not be profitable, and I think it is worth mentioning his Health Impact Fund once again. Pogge’s solution is one that works fairly well with market-based thinking. Love it or hate it, it is a good effort. Other solutions are possible, though. Governments could pool resources to simply set up labs and hire scientists to develop cures for diseases that affect global health. Capitalist investors might also want to develop cures in order to capitalize on improved human resources, as John Rockefeller did about a hundred years ago.

Yes, I realize government funding and charitable institutes still exist (Rockefeller’s legacy continues), but research for profit (and only for profit) threatens our ability to continue advances in public health. We need greater transparency in research (both of financial ties and of data), greater protection for research subjects, more variety in funding sources, and more checks to replicate and confirm findings. It may be expensive, but mistakes are expensive, too.

Thought experiment: Financial Conflicts of Interest

Believe it or not, many people see no problem with financial conflicts of interest in health care. People who receive payments say they are only doing the same job they would do otherwise, except with more resources. This, they say, enables them to provide better health care. People who make the payments will claim that they are only trying to ensure that their beneficial products are able to improve the lives of as many consumers as possible. Even patients defend conflicts, saying they don’t mind their doctors making a few extra dollars in order to provide efficient, state-of-the-art service. Patients see these financial ties as a way to ensure groundbreaking treatments reach consumers.

[Image: a rather beautiful example of a slippery slope.]

I’m not a doctor, but there are analogies in my own line of work. If we look at financial ties in another industry, it may be easier to see the problem. In education, the stakes are lower, but some parallels to the medical industry remain. I will begin with actual practices and then ask you to imagine further practices that mirror those in medicine.

First, instructors are commonly asked to review books for publishers seeking feedback on manuscripts or new textbooks. This gives the publisher an opportunity to hear from potential customers and gives instructors a chance to shape the books they may later use. Instructors get better books, and publishers are able to improve both their products and their marketing. The instructor is, of course, paid a small honorarium for the time invested in reading and reviewing the book.

Second, once instructors have given feedback, publishers may invite them to be more involved in the production of the textbook. They may be asked to write an instructor’s manual to accompany the text or participate in developing workbooks or online supporting materials for students. (Disclosure: I know that these first two items are practiced because I have reviewed textbooks and written an instructor’s manual for pay.) Instructors, of course, know the most about what instructors need and how students may use various materials. Improving the product benefits publishers, instructors, and students.

Now, imagine that an instructor sees an improvement in students’ success rates and general aptitude. The instructor begins to collect data and may even present at a teaching and learning conference on how these materials have benefited students. A publisher might (I don’t know of this happening in real life) offer to pay the instructor to give the same presentation at additional conferences. On the surface, this does not seem harmful. After all, the students really did improve using these materials, and the presentation was not developed with the aim of getting payouts from the publisher. Certainly, no students will be harmed by these presentations.

Finally, imagine this instructor begins to accept regular invitations from the publisher to present on the benefits of the products and encourages others to adopt the same materials for their classes. The instructor notes that most of her or his students are now earning A’s and B’s when the class averages were usually a B or C before the materials were adopted. To reward the instructor for this amazing success, the publisher begins to pay the instructor $100 for each A awarded and $80 for each B awarded. Soon, this instructor is widely hailed for improving student success and completion rates at a college that struggles with generally high rates of failure and incompletion.
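To put rough numbers on this hypothetical arrangement, here is a minimal sketch in Python; the $100 and $80 per-grade payments come from the scenario above, while the section sizes and grade counts are invented purely for illustration:

```python
# Hypothetical illustration only: the per-grade amounts come from the thought
# experiment above; the section grade counts below are invented.

PAYMENT_PER_A = 100  # dollars paid to the instructor for each A awarded
PAYMENT_PER_B = 80   # dollars paid to the instructor for each B awarded

def publisher_payout(num_a: int, num_b: int) -> int:
    """Total payment to the instructor for one section's grades."""
    return num_a * PAYMENT_PER_A + num_b * PAYMENT_PER_B

# Three sections of roughly 30 students, most now earning A's and B's.
sections = [(18, 9), (20, 8), (16, 11)]  # (number of A's, number of B's)

total = sum(publisher_payout(a, b) for a, b in sections)
print(f"Publisher payments for one term: ${total:,}")  # $7,640 in this example
```

Even at these modest hypothetical rates, the payments quickly become a meaningful income stream tied directly to the grades the instructor awards, which is exactly the shape of the incentive at issue.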

Now, these payments to the instructor come to the attention of the student newspaper, which publishes the amounts paid to the instructor and the increase in high grades in the classes. The public is outraged, but enrollments in the class continue to increase. The instructor counters that no one has shown that even one student who received an A did not deserve an A. Further, the instructor says that the improvements in student success were documented even before the payments began. The publisher responds by saying that the materials it produces are of the highest quality and that it is proud of the success rates of the students using the products. Without the relationship between the publisher and the instructor, it says, fewer students would have benefited from these outstanding educational materials, and that would be a real tragedy.

Questions to consider:
1. Did students really benefit from the relationship?
2. Were cheaper alternative materials available that were equally beneficial?
3. Is it possible that students received inflated grades, even if proving it so is impossible?
4. What would it take to identify this relationship as a moral problem?
5. Are all financial relationships with industry unethical?
6. If not, when does the relationship become unethical?

I think it is extremely rare for someone to go into a job with criminal intent to capitalize on the system and take home as much money as possible regardless of possible harm. No, everyone begins with the best intentions and becomes blinded to the possible effects of their actions. And, precisely because each person has no malevolent intentions, each person feels insulted by even a hint of judgment and defends her or his practices vehemently. Because good people do X or Y, it is easy to think it is impossible that X or Y is a bad thing, especially when we can show that many people have benefited from these practices.

It is easy to be blinded by the fog of good intentions and financial influence, and ethicists are not immune. The job of the ethicist is not to be perfect but to be on guard. The job of the ethicist is to constantly strive to get a clear view through the fog and to help others stay on the paved path running alongside that slippery slope.


Will industry-funded research kill you?

Last week, I wrote a blog post about the effects of financial conflicts of interest (FCOI) on the treatment decisions of doctors and whether disclosure alone will have any effect on eliminating bias and corruption. As a result, I received some comments and information on FCOI in published research.

Before I say more, I would like to clarify that someone who is conducting research funded by industry is not technically, in my studied opinion, involved in an FCOI, because such a person has the single interest of generating products that will result in profit for industry. It is possible that research undertaken with the aim of commercial success will benefit humanity, but if profit is not possible, humanity be damned. (I am making an assumption, which may be naive, that most of us think medical research should be aimed at making life better for humanity.)

To help combat the problem of bias in research, John Henry Noble suggests prison time for those found guilty of scientific fraud. In my opinion, he makes two strong claims: 1. “The false claims of the perpetrators rise to the status of crime against society, insofar as they endanger public health by sullying and misdirecting the physician’s ‘standard of care.’” 2. “The due process of law is likely to uncover and judge the evidence of guilt or innocence more reliably and fairly than will the institutions of science and the professions that historically have resisted taking decisive action against the perpetrators.”

I agree that jail time is appropriate for egregious cases of scientific fraud, but I’m not sure it eliminates the problem of industry-driven research. Another person told me industry-funded research should be published for two reasons: 1. Some people are biased without the benefit of industry funding. 2. Some industry-funded research proves to be quite beneficial. Perhaps surprisingly, I agree with both of these statements as well, as far as they go. Certainly, many people carry any number of biases that do not result from corporate funding, and the history of scientific fraud is littered with examples. Further, corporate labs frequently create products I enjoy immensely.

Oddly enough, the person defending industry-funded research sent me a link to a paper to support the contention that FCOIs are not a strong predictor of bias. I say it is odd because the paper didn’t seem to support that position. The paper analyzed the association between industry funding and the likelihood that researchers would find a link between sweetened beverages and obesity. The authors of the paper found that “Those reviews with conflicts of interest were five times more likely to present a conclusion of no positive association than those without them.” It is perhaps the conclusion of the paper that gives hope to those advocating for industry funding:

They [results of the study] do not imply that industry sponsorship of nutrition research should be avoided entirely. Rather, as in other research areas, clear guidelines and principles (for example, sponsors should sign contracts that state that they will not be involved in the interpretation of results) need to be established to avoid dangerous conflicts of interest.

In other words, bias would be reduced only if sponsors were kept entirely out of interpreting the results. This is hardly a ringing endorsement of industry-funded research, but so be it.

So, I do not think all industry-funded research should be banned. Rather, I think we (as a society) need to ensure that we have enough researchers who are free of FCOIs. In other words, we need substantial funding for independent research centers where researchers can work for the advancement of knowledge without a constant concern for the production of profit. Forcing our public universities and research labs to turn to corporations for funding corrupts our pursuit of knowledge and the advancement of society. We must restore public funding to education and research.

For more on the possible risks of funded research, read about Dan Markingson here. Or read about Jesse Gelsinger here.

Sunshine disinfects nothing

I seem to remember Jon Stewart once playing a clip of a politician declaring that sunshine is the best disinfectant. After the clip, Stewart warned viewers that using sunshine as a disinfectant could lead to a nasty infection. In response to the Sunshine (Open Payments) Act, bioethicist Mark Wilson sounds a similar alarm in a recent paper.

For years, many people, including myself, have argued that industry payments to physicians should be disclosed to the public, so that we will all be aware of possible financial conflicts of interest (FCOI). My hope was that disclosing conflicts of interest might actually help reduce corruption or even simple bias in medical practice, but Wilson points to our experience of Wall Street before and after the 2008 financial collapse to show that knowledge of conflicts of interest does not prevent them. Rather, disclosure only shifts the burden for reducing FCOI to patients, who are least empowered to eliminate them. Wilson claims that, rather than fixing the problem, the Sunshine Act only “mythologizes transparency.”

Wilson pointed me to a paper (“Tripartite Conflicts of Interest and High Stakes Patent Extensions in the DSM-5”) in Psychotherapy and Psychosomatics that illustrates the problem. If you want the details, you can read the paper yourself, but I will skip right to the conclusion, which I admit is how I read most papers anyway:

[I]t is critical that the APA recognize that transparency alone is an insufficient response for mitigating implicit bias in diagnostic and treatment decision-making. Specifically, and in keeping with the Institute of Medicine’s most recent standards, we recommend that DSM panel members be free of FCOI.

Telling people about FCOI does not reduce bias and corruption; it only offers an opportunity for people to be aware that bias and corruption exist. I think it is valuable that the Sunshine Act is making people aware of FCOI. In response, though, I hope we will take steps to reduce FCOI. Unfortunately, the burden is indeed shifted to voters and consumers. The most disturbing and obviously true statement Wilson makes in his paper is this: “Until politicians end their own commercial COIs, the Sunshine Act will likely remain the governance order of the day.”

We can’t hope the experts will solve this problem. We must demand that FCOIs be eliminated.