Disclosing Conflicts of Interest: The Case of Tamiflu

•June 3, 2011 • 1 Comment

An interesting recent story from CBC highlights some of the difficulties with the topic of conflict of interest in medicine and biomedical research.

Here’s a link to the story:

CBC Tamiflu Probe Sparks Drug Policy Review

“In the course of the CBC investigation, Zalac also reported that three of Canada’s most prominent flu experts — Dr. Donald Low and Dr. Allison McGeer of Mount Sinai Hospital in Toronto, and Dr. Fred Aoki of the University of Manitoba — had received research funding or acted as a consultant or speaker for Roche during the period when Tamiflu was being promoted.

Their research involvement with Roche and other anti-viral drug makers was not a secret within the industry.

All three would sign the now standard conflict-of-interest declarations when speaking at professional events or publishing papers. And the Public Health Agency says it has always been aware of the drug industry affiliations of its private sector advisers and takes these into account. But these relationships were rarely reported in broader public forums, in the media or even when some of these individuals would appear in marketing videos or flu-warning commercials on television produced by Roche.”

In biomedical research, ethics review boards typically ask researchers to “declare” potential conflicts of interest to them. That’s great. It’s certainly a start. But once potential conflicts of interest have been “declared” to ethics review boards (in what is, essentially, a confidential review process), what happens then? Do research participants always know about the potential conflicts of interest that researchers have? Is this made explicit to participants in an understandable way?

This story demonstrates that conflicts of interest in health care, medicine and biomedical research are often handled in a fairly superficial or limited way, with an emphasis on disclosure and little beyond that. In this story, the three key “flu experts” who were, in 2009, promoting Tamiflu as a key defense against the H1N1 flu were also found to have received research funds or acted as a speaker or consultant for Roche, the company that makes Tamiflu. The Public Health Agency of Canada (PHAC) states that it was aware that these experts had affiliations with the drug company, but these relationships were not reported broadly to the public. The public was watching these experts and PHAC closely for advice and guidance. Should the public have known more about the relationships between these experts and Roche?

Tamiflu was seen, by much of the public, as the panacea for H1N1 and avian flu. Governments stockpiled the drug, hoping never to have to use their vast stores. Now that these stores are approaching expiry, a decision needs to be made about either replenishing them or exploring new alternatives. Over the last few years, while these stockpiles of Tamiflu have been sitting in storage, other independent researchers have been exploring whether or not Tamiflu really is all it’s cracked up to be as a first-line drug for H1N1 flu. As the story notes, researchers are challenging the “fact” that Tamiflu reduces morbidity and hospitalizations from the flu. Furthermore, the side effects (including bizarre behaviours and delusions) thought to have resulted from taking Tamiflu are viewed as serious enough to perhaps warrant exploring other options.

When we look closely at this, it may well be that the relationships these experts had with Roche do not constitute an obvious conflict. As science moves forward, recommendations and best practices change, and this may well have been the case with Tamiflu. It may also be true that Tamiflu was the first-line defense in 2009 and that research since then has shown it may not be as good as we thought. These three experts may have truly believed, while they were vigorously promoting it, that Tamiflu was the best treatment for the flu at that time. However, without full disclosure of the relationships they had with Roche, and some clarity from PHAC on how these potential conflicts would be managed, the public may well doubt their ability to be objective about Tamiflu. It now appears that PHAC may be making changes to how it manages such potential conflicts of interest with regard to its experts and advisers.

From the CBC story:

“As for the Public Health Agency of Canada, it released a statement that said it would be inappropriate at this point to release the drug company connections of its advisers without their consent.

PHAC says that its advisory committees provide advice but that the agency makes the final decisions. However, because of the questions raised in the CBC documentary, the agency said it “intends to establish a policy on the release of information relating to members of its expert or advisory groups/committees.””

Prenatal Surgery, Hope for Spina Bifida and Ethical Reflection

•February 15, 2011 • 1 Comment

Spina bifida is a serious developmental birth defect in which the spinal cord and backbone fail to develop completely and infants may be born with some of the spinal cord protruding out of their middle or lower back. Treatment typically involves surgery as soon as possible to repair the birth defect. There are different forms of the defect and depending on the presenting form and the treatment, children born with spina bifida may have varying levels of disability.

Surgery on the spina bifida deformation can occur prenatally, i.e., while the fetus is still in the womb, or it can occur soon after birth. A recent story in The New York Times notes that there is now clinical evidence that surgery done while the fetus is still in the womb results in better outcomes and far less disability for children compared to children who have the defect repaired surgically after birth.

The story highlights a seven-year study (that recently ended) in which mothers of fetuses who had been diagnosed with spina bifida were randomized into one of two treatment arms: to either have reparative surgery prenatally or to have surgery soon after birth to repair the spina bifida defect. The study was stopped after it was found that the outcomes in the prenatal surgery arm were simply better — children were less disabled, were more likely to walk and had fewer neurological complications.

Here is the story from The New York Times: Risk and Reward In Utero

“It was one of the hardest decisions I had to make to be in the study,” said Ms. Shapiro [a pregnant woman who enrolled in the study], who knew how disabling spina bifida was because her sister-in-law has it. “It was a big disappointment that we didn’t get the prenatal surgery because I knew that that was the surgery that was most likely going to help him the most, because otherwise why would they be doing the study? But at the same time, he could have died or been born prematurely from prenatal surgery. When they explained everything to us, I wanted to be in it regardless.”

This is a fascinating study. The potential for the future is pretty amazing. But, as the story notes, the study raised some interesting ethical issues.

First, the study was stopped by the local monitoring board as clinical equipoise was disturbed when the prenatal surgery was found to be so much better, in terms of outcomes, than surgery done after birth. In the 1990s and early 2000s, prenatal surgery was seen as a potential panacea for repair of the spina bifida defect (see the link to the 2003 story below), but while outcomes were generally good, the surgery did (and still does) involve significant risk to both mom and fetus — and there wasn’t enough good data on outcomes, safety risks and health complications. Thus the obvious need for a clinical trial. Did the trial start out with clinical equipoise, though? We’ve written about clinical equipoise here before, i.e. the genuine uncertainty about what the preferred treatment might be at the outset of a clinical trial. Physicians and parents, as the story demonstrates, clearly believed that the prenatal surgery option resulted in better outcomes and was worth the risk. In fact, many pregnant women who enrolled in the trial did so just to have a chance at accessing prenatal surgery.

The physicians who were involved in the study knew that pregnant women would not enroll in the study if the prenatal surgery was offered elsewhere (as it was up until the study), so they persuaded other hospitals not included in the study to stop offering the prenatal surgery. Thus, pregnant women were essentially forced to enroll in the trial for a chance to be randomized into the treatment arm that provided the prenatal surgery. However, as a result of randomization, many of these women who enrolled were randomized into the other treatment arm and received surgery after the birth of their infant.

It’s interesting, from both a practical and an ethical perspective, that a procedure (albeit a risky one) that had been offered before the trial to pregnant women carrying fetuses with diagnosed spina bifida was, for all intents and purposes, no longer available except (possibly) through enrollment in the trial. Physicians who, before the trial, could offer this treatment to their patients, with all the potential risks and benefits involved, were no longer free to do so if they were providing care in a hospital that was not part of the trial. While some bioethicists quoted in the story feel that the move to limit access to the prenatal surgery was a bold, collaborative step, others argue that it was ethically problematic to take away any real choice for pregnant women by severely limiting access to an option that many already felt to be superior and wanted despite the risks.

While many clinical trial participants enroll with altruistic objectives, others enroll in order to access treatments and procedures that might only be available through a trial. This is often seen with drugs that are not available on the market until clinical testing is complete. In this case, clearly, pregnant women would be interested and motivated to enroll out of genuine concern for the future of the child they are carrying. Altruism might well be part of the motivation — as the story notes, one gracious woman who was randomized into the after-birth surgery arm felt that while her son did not benefit as much as he might have from prenatal surgery (he has some significant disabilities), his participation in the trial contributed to knowledge about his condition. Needless to say, it’s a good thing that the trial was done, and also a good thing that it has been stopped in order to make what has been shown to be a superior treatment available, once again, to all.

If you’re interested in this really fascinating topic, here are two related stories. The first is a recent story from The New York Times on the possibilities related to the outcomes of this trial: Success of Spina Bifida Study Opens Fetal Surgery Door.

The second is a much older story from The National Review in 2003, on the emerging possibilities of fetal surgery for spina bifida. (I remember this story from when it was published because of its unforgettable image, “The Hand of Hope”): Miracles of Life: The Beltway goes inside the womb

German Anesthetist Faces Massive Retraction of Published Research Articles

•February 14, 2011 • Leave a Comment

A German anesthetist, Joachim Boldt, has published over 350 peer-reviewed papers and is currently the target of a German medical board investigation for data manipulation, fabrication of data and failure to have human studies reviewed and approved by an ethics review board. This story began when the journal Anesthesia and Analgesia retracted a 2009 article authored by Boldt, citing concerns about data manipulation. Now, after further investigation by a German medical review board, up to 90 of Boldt’s published articles are being considered for formal retraction.

Here’s more on the story from Retraction Watch: Unglaublich! Boldt investigation may lead to more than 90 retractions

Note: The Retraction Watch story refers to a joint letter, authored by the editors-in-chief of 11 major anesthesia journals and posted to their websites, regarding Boldt’s research, much of which was published in those journals. Here is an excerpt from the letter that was included in the Retraction Watch story:

LÄK-RLP [the German medical board conducting the investigation] has reviewed 74 scientific articles describing clinical trials subject to the requirements of the German Medicinal Act. This includes the article by Professor Boldt recently retracted by Anesthesia & Analgesia and an article submitted by Professor Boldt to Anaesthesia but not published. By law these studies required IRB approval. Although the articles typically stated that IRB approval had been obtained, LÄK-RLP could not find evidence of approval for 68 of these articles.

The story on Retraction Watch notes that, in addition to these retractions, Boldt faces a fine of up to 100 000 Euros (approximately 140 000 Canadian dollars) and even jail time for conducting human research without ethics review board approval, a violation of the German medical profession’s Code of Deontology (i.e., Code of Ethics).

The story notes that there is evidence that Dr. Boldt failed to obtain ethics review board approval for studies, forged signatures on copyright forms, fabricated data, fabricated clinical cases, and lied about the participation of co-authors. In fact, the claims and information suggest that Boldt’s 2009 now-retracted study never even took place at all. As a result of these claims, further investigation into Boldt’s other published research is taking place.

There is a plethora of ethical and practical problems with Boldt’s research, if even a few of the claims are found to be true. Certainly there are too many to discuss here. But what I did want to comment on is the burden placed on ethics review boards to “police” research, something most of us are loath to do. As the story notes, upon submission to a scientific journal, researchers are often required simply to check a box indicating that an institutional ethics review process took place. No one asks for confirmation of this. In this case, no one questioned the volume of research Boldt was producing. No one questioned the fact that Boldt was reporting the use of albumin in his published studies when it had not been used at that institution since 1999. It seems no one asked for evidence or confirmation of some very basic science — something one would expect as part of a peer review process. The role of peer review, or scientific review, is to some degree downplayed, with claims that scientists are too busy to do thorough reviews, can’t be expected to know everything in their discipline, or certainly shouldn’t have to monitor their colleagues. But peer review is, in fact, about monitoring colleagues in a way that ensures the rigour and quality of information provided through research and science.

I know that I have had months in which I’ve been asked to do a number of scientific reviews of articles. I also know that I don’t sign up unless I can devote the time to doing a thorough review, not just “sign on the line” after reading an article superficially or approve it for publication because it was written by a smart friend or close colleague. There’s no clear evidence that those sorts of things happened in this case, but it is clear that, alongside other systemic problems, the process of scientific peer review failed miserably here.

Ethical research and ensuring ethical conduct in research shouldn’t be the sole responsibility of an ethics review board. Nor should concern with ethical research begin and end with the submission to an ethics review board. It should be an ongoing part of the entire research process, and a clear part of the research culture, embedded in professions, disciplines and institutions that fund, sponsor and publish research.

Parents Complain Over “Opt-Out” Consent In Kindergarten Study

•February 6, 2011 • 4 Comments

According to a recent story in Burnaby Now, a study notice sent out to parents of kindergarten-aged children has triggered complaints to the University of British Columbia’s Behavioural Research Ethics Board, which had approved the study. Parents received the notice about the Early Development Instrument, a survey of kindergarten-aged children, which is part of a much larger project called “The Human Early Learning Partnership (HELP)“, a government-funded research consortium of universities based at the University of British Columbia (UBC).

The Early Development Instrument is a tool used to assess children’s development in kindergarten. It involves collecting information about a child’s physical health and well-being, social capabilities, emotional maturity, language and cognitive development, and communication skills. According to the Burnaby Now story, the study notice offers brief information on what kinds of data will be collected about the children and notes that these data will be linked to other kinds of education and health information. It also notes that if parents do not want data collected on their child, they must contact their child’s teacher to indicate this. In other words, the consent, referred to as “passive” in the story, is an opt-out consent.

Here’s the story: ‘Passive consent’ triggers complaint: Burnaby parent says survey is ‘unethical’

The letter informs parents about the survey and mentions that data collected can be linked to education and health information. The letter also states that participation is voluntary, and that parents can contact their children’s teachers if they don’t want them to participate.

Ward [a concerned parent] has issues with the use of passive consent, meaning if parents say nothing, their kids will be included in the survey, rather than signing a consent form expressly stating they want their kids involved. Ward is also complaining about what she calls a lack of information on what the data gets linked to.

“They don’t say what they are linking it to,” Ward said. “It violates parents’ rights to informed consent.”

The parents have formally complained about two issues: the use of an opt-out consent and the reported lack of information about how data are being linked to other kinds of datasets, including health information. There’s one more issue here that interests me, in addition to these concerns. I’m wondering if the study researchers have a mechanism in place to report back to parents if serious developmental, social or health issues are identified in their children.

Here are a few words on each of the issues raised by this story:

1. An opt-out consent process is discouraged across many research contexts. If an opt-out procedure is used, the burden rests with the researcher to ensure that there is thorough and explicit information about what participation really means and how consent, or in this case non-consent, is expressed. We talk about “obtaining” informed consent in research, and this does involve time and energy. It is a clear step that must be revisited and confirmed throughout the life of a research project. Here are three initial questions I would ask about this opt-out process: Have the researchers ensured that the information sent out is accessible in every parent’s home language and at their level of reading comprehension? Is it clear to everyone what opting out, or passively opting in, really means? Is it clear that a traditional opt-in informed consent would be impractical? Many potential participants will not fully understand the negative withdrawal option and will simply not make the call to the teacher, either to seek clarification or to exercise their right to opt out, thereby passively consenting – but this isn’t informed consent. An opt-out clause should never be used as a means of saving the researcher time and energy, nor should it be used simply as a way of “getting more participants”.

2. The notice states that linkages will be made between the data collected and other kinds of data, including health and educational data that are considered private information. Full and detailed information should be provided to parents about the kinds of linkages being made and the kinds of other data being accessed by researchers. Furthermore, the strategies put in place to protect privacy should be outlined explicitly and clearly. From the parents’ reactions in this story, the data linkages and privacy measures have not been fully explained.

3. While the study is low-risk, it involves collecting potentially sensitive and very important data about kids. There are very likely to be findings that suggest or demonstrate mild to serious developmental, social or health-related concerns in children. Interestingly, the researchers also note that the information can be used to predict possible future criminality – something that, as a parent, I think I’d want to know. The new second edition of the Tri-Council Policy Statement (TCPS) discusses the notion of incidental findings, defined as “unanticipated discoveries made in the course of research but that are outside the scope of the research” (Article 3.4). The TCPS is also quite clear that a plan must be in place to inform participants of expected study results and incidental findings. In this case, the goal of the survey tool is to gather information on the development of children, so concerning findings are not incidental, per se, but expected. I’m wondering what kind of information gets back to parents about their children.

The story does not include any response from the HELP project or the UBC Research Ethics Board. I found a great deal of information about The Human Early Learning Partnership on the web, and a great deal of information about the Early Development Instrument, but it took time and work to read through it all. If I were a parent getting a study notice like this, I’d be looking for information about the project and giving serious thought to whether or not to participate. However, I’m a researcher. Many parents are not. Many parents have low levels of literacy or are fluent in languages other than English. Many parents work long hours and at multiple jobs. So the burden of seeking information about a study like this should not be transferred to the parents.

The project itself looks important and laudable, with goals that are clearly in the interests of multiple stakeholders, including those funding this very large undertaking. At the end of the day, however, even with multiple powerful stakeholders, the interests and subjective experiences of participants, i.e. those providing the data, must be paramount. When participants feel uninformed, unprotected and not treated in an ethical manner, it appears, at the very least, that their interests are not being prioritized.

If Placebos Work, Why Placebo-Controlled Trials?

•January 3, 2011 • 3 Comments

Placebos have been a hot-button issue in human-subjects research ethics for years. Many regard placebo-controlled trials as the “gold standard” for clinical research. Others see the use of placebos in clinical research as typically unjustified (at least in cases where a decent treatment exists). (To get a sense of this debate, check out this article by Charles Weijer: “Placebo trials and tribulations”.)

So it has been interesting to see placebos popping up as a topic of discussion lately. Placebos are supposed not to work. That’s the whole point of a placebo-controlled trial — to compare an experimental drug against something “known” not to work, in order to make sure that any positive effects observed aren’t merely the result of the “placebo effect,” i.e., roughly the psychological result of patients feeling cared for.

But if placebos are (by whatever mechanism) more effective than previously thought, that kind of throws a monkey wrench into the equation.

As a starting point, see this Wired article: Placebos Are Getting More Effective. Drugmakers Are Desperate to Know Why.

For a more scholarly look at a complication, see this PLoS One article by Kaptchuk et al: Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome.

And for an overview of the whole issue, see this PLoS blog entry: Meet the Ethical Placebo: A Story that Heals.

Much of the latter blog entry is about clinical (not research) use of placebos. But still, it’s something those of us with an interest in Research Ethics need to think about. As knowledge grows about the size, limits, and exact mechanism behind the placebo effect, we may well need to rethink the role of placebos in randomized controlled trials.

Obama Announces Massive Review of Research Ethics in the US

•November 25, 2010 • 1 Comment

In a New York Times story yesterday, it was revealed that President Obama is ordering a massive review of bioethics in the US. Here’s the story: U.S. Orders Vast Review of Bioethics

“His action is a response to the revelation this year that American scientists intentionally infected people at a Guatemalan mental hospital with syphilis in the 1940s. In a memorandum released by the White House, Mr. Obama announced a review of federal and international standards to guard the health and well-being of research participants, known as human subjects. He also ordered a fresh inquiry into what happened in the widely condemned experiment in Guatemala….”

From this brief story, it appears that his greatest concern is, in fact, research ethics and the regulations, rules and standards that are in place in the US (and internationally) to protect human participants. While there are significant variations among national standards for the ethical conduct of research, many countries have used US standards as guidelines in creating or amending their own sets of regulations and standards. Furthermore, there are many international researchers who are involved in multi-centre projects or have links to researchers and research institutions in the US. Therefore, a major overhaul of US research ethics standards (if that is the result of this review) would have far-reaching implications.

We’ll continue to watch this story and keep you updated.

Tragedy and the Clinical Trial: Part 2

•November 21, 2010 • Leave a Comment

On October 1, I wrote about The New York Times story about two cousins in the US, Brandon and Thomas, both with a lethal form of melanoma, who were both enrolled in a Phase III clinical trial to test a new, breakthrough cancer drug. One cousin got the wonder drug, PLX4032, while the other got what the author claims oncologists consider to be the significantly inferior “standard of care” drug, dacarbazine (alongside a plaintive apology from his treating physician). Here’s a link to the original blog entry: A Tale of Two Cousins: Tragedy and the Clinical Trial

As I noted in the original blog entry, there are a number of very serious questions and issues raised by this story. The first is about clinical equipoise. What’s clinical equipoise? Well, here’s an “Equipoise 101” version. We think of clinical equipoise as genuine uncertainty over whether a treatment will be beneficial. In clinical trials, where researchers are comparing one treatment to another, it refers to the uncertainty over what the preferred treatment might be. Most of the time, researchers and clinicians have a suspicion or a hypothesis that one treatment may be superior to another in, for example, treating a particular condition or illness. Until adequate evidence exists in favor of one treatment over another, the hypothesis remains unproven and some degree of clinical equipoise still exists. Once sufficient evidence favoring one treatment over another exists, researchers have an ethical obligation to end a trial as equipoise no longer exists.

The current treatment that is provided and recommended for patients is known as the “standard of care”. In this case, it’s clear that clinical equipoise no longer exists, if it ever did. The standard of care in this case is dacarbazine, a drug that, according to the story, no oncologist wants to give. It’s ineffective at both shrinking tumours in a significant way and prolonging life. So to compare (by experimental use in seriously ill patients) a clearly inferior agent to what has been proven to be a markedly better drug seems, at first glance, unethical.

The structured phases of clinical trials, in some ways and in some cases, are antithetical to the notion of clinical equipoise as a measure of collective doubt at a particular point in time. Rigour demands that we strictly follow the phases of clinical trials to, in part, protect the best interests of patients. Yet, in the case of PLX4032, it’s difficult to see that the best interests of anyone are actually being protected. With a standard drug that is felt to do more harm than good, and an experimental “wonder” drug that has, in early phases of testing, produced dramatic responses with few serious side effects, this might seem like a case in which the highly structured and years-long process of conducting clinical trials might be re-evaluated. Many might understandably object, saying that the clinical trial structure ensures that by the time a drug is actually available on the market, it has been through enough experimentation that physicians can be confident in prescribing it without worrying about unexpected harms to individual patients. True. But it’s not surprising that, in the case of what is seemingly a “wonder” drug for seriously ill patients with few to no other options, many oncologists in this story wondered whether the demands of rigour or the demands of ethics really require continuation of a two-armed trial.

It’s certainly something to think about. It seems to me that once the physicians involved in a trial are apologizing to a participant for being enrolled in a particular arm of a trial, we are no longer in a state of clinical equipoise.

 