Research Fraud Case Raises Concerns Over Ethics of Psychology Research

•November 15, 2011 • 1 Comment

It’s a big story in research ethics and it has been reported in major newspapers all over the world. There’s even a Wikipedia page about the case already. The story centres on a fraud case against Diederik Stapel, a well-known psychologist and widely published researcher at Tilburg University. It is claimed that Dr. Stapel drew inaccurate and false conclusions from analysis of collected data, and that he falsified and “made up” entire experiments, which were written up and published in a variety of high-profile journals, including Science and the Journal of Personality and Social Psychology, a publication of the American Psychological Association. Over his career, Dr. Stapel has published more than 150 papers, many of which, as the New York Times story about this case notes, were designed to “make a splash in the media” through provocative titles and topics. Not only have his own published papers been found suspect; many of the PhD theses he supervised have also been found to be “tainted”, according to a story in Science magazine’s online news.

Here’s one account of the case, from one of my favourite science writers, Benedict Carey of the New York Times: Fraud Case Seen as a Red Flag for Psychology Research

There isn’t a lot to say about this case that differentiates it from other accounts of fraudulent conduct in science. There are far too many stories of research fraud out there describing claims of falsified, skewed or misrepresented data, even of entire experiments being “made up”. But two things stood out for me the more I read about this case.

First, the claims of fraud extend beyond Stapel’s own work and publications. It’s not unusual to read stories about fraud implying that colleagues or lab assistants turned a blind eye, or wondered about the veracity of findings or the ‘incredible luck’ of colleagues with seemingly perfect data. But this case extends well beyond inattentive, ignorant or silent colleagues. Stapel, as a supervisor, included his own students in his fraud and has quite possibly ruined their careers and their prospects. The Tilburg University commission investigating Stapel’s misconduct has suggested that criminal charges may be appropriate, based not only on the misuse of research funds but also on the harm done to these students. Many might ask, “Why would he also ruin the careers of students?” It would appear that the fraud was so extensive that he not only had no raw data to provide to students for secondary analyses or to inform their work, but that he may never have conducted a well-run psychological study himself, and so was in no position to guide someone else through the process. This “trickle-down” effect of research fraud is typically not addressed at all in stories like this, even though a key role of a senior scientist is almost always the mentorship of young scientists and researchers. The negative effects of scientific misconduct on those who are junior, who enter the field with good intentions, thoughtfulness and enthusiasm, are a serious potential fallout, and one that should be treated just as seriously as the fraud or misconduct itself.

The second part of this story that interested me most was what Carey, writing in the New York Times, had to say. According to him, this case isn’t the straw that “broke the camel’s back”; rather, it’s evidence that the camel’s back has already and most certainly been broken. Benedict Carey has been a New York Times science and medicine writer since 2004. I’ve read plenty of Carey’s writing and he’s a smart journalist who writes thoughtfully and articulately about all kinds of topics in the science and medical research world. What Carey notes, and the Science magazine story confirms, is that one reason Stapel was able to do this so easily is that he never showed anyone his raw data, nor was he ever required to do so. Incredibly, students who wrote their theses with Stapel as a supervisor were not allowed to view the raw data or datasets being used for their own papers. Some were never allowed to conduct an experiment at all. Colleagues felt the findings Stapel cited in many of his own studies were “too good to be true”, yet they attributed their failure to replicate these results to their own shortcomings. “Cutting corners” with data, “statistical sloppiness, falsifying data” and “reporting unexpected findings as predicted” are claims that Carey notes are prevalent in a discipline in which raw data are rarely shared to back up results or requested by reviewers to verify findings and conclusions. What I’ve found is that researchers in the area of psychology tend to keep datasets for long periods of time — longer than any other human-research discipline I’ve encountered. I find it interesting that no one, at any time, requested to see an anonymized dataset or anything else to back up Stapel’s (often) controversial claims. In ethics review processes, we often think about what we call “the life of the data”.
We are concerned with what happens to datasets after collection and analysis are complete, details about how data are stored, and who might be able to access data in the future. From what I’ve read in research protocols about data management, researchers are not always thinking about the data as the fundamental proof that can back up any claim they wish to make and publish. I often read graduate student research protocols, from a variety of disciplines, stating that the raw data will be destroyed as soon as the research or thesis is published. I always tell them that’s unwise — data should be stored for a reasonable amount of time after publication or dissemination of findings. It’s amazing to me that the ethics review process is very often the first time anyone has talked to new researchers about the idea that the data are there (and must be there!) as the fundamental link between a hypothesis and the claims a researcher makes.

It will be interesting to see if there is any further fall-out from this story. I’ll continue to follow the story and update it.

The University of Tilburg established a commission to investigate Stapel’s work and the claims of fraudulent scientific conduct. As of two weeks ago, the commission’s report is available in English here.

What Scorpion Bites Can Teach Us About Placebo Trials

•August 14, 2011 • Leave a Comment

A recent story in Nature highlights a few important concepts in the conduct of placebo-controlled trials. This story is about a very small placebo-controlled trial involving only 15 children who were stung by bark scorpions in Arizona. The trial’s goal was to demonstrate effectiveness of a new antivenom, Anascorp. The bark scorpion is the only scorpion in the southwestern US thought to be potentially life-threatening for humans and, according to what I’ve read, is a very frequent and very unwelcome visitor in many Arizona homes.

Here’s a link to the story, from Nature: A tiny trial of 15 people helps convince the US to approve a drug that takes the pain out of scorpion stings

“It’s a rare, potentially fatal, emergency medical disease that predominantly affects rural children, and if you put all those adjectives together, it didn’t look possible to put [a placebo-controlled trial] together at all,” says Leslie Boyer, director of the Venom Immunochemistry, Pharmacology and Emergency Response (VIPER) Institute at the University of Arizona in Tucson, who led the supporting trial reported two years ago in the New England Journal of Medicine. “We started out with a daunting task.”

For one thing, trial investigators can’t enroll participants ahead of time because there’s no way of knowing who will get stung. Then there was the problem of getting hospitals to sign up to a placebo-controlled trial….”

This story is interesting because it demonstrates some of the difficulties with conducting placebo-controlled trials, and challenges with getting clear clinical research data about yet-to-be-approved treatments in the face of a life-threatening injury. It is also pretty unheard of for a trial involving such a small number of participants to be considered as clearly definitive as this one was.

First, it was difficult to think of using a placebo in the face of a potentially life-threatening scorpion sting. It would be unethical to withhold any kind of treatment and instead give a placebo to a child with the symptoms of a scorpion sting (described in the article as including “severe pain, extreme nausea, blurred vision and breathing problems”). Children who experience such a reaction have difficulty, over time, coping with the pain and nausea, and the extreme demands on the cardiopulmonary system can lead to more serious situations, including death. Faced with a child who has been stung by a scorpion, most parents would not wish to take the time to (a) go through an informed consent process in the face of such a potential emergency, and (b) go through a consent process when the end result might well be a placebo. As a parent, you’d want the antivenom and, understandably, you’d want it fast. The researchers fully understood this dilemma and the difficulty of using a placebo in this case.

Second, there is an older antivenom that had been used between about 1965 and 2000. Produced at Arizona State University’s Antivenom Production Laboratory in Phoenix, this antivenom, while effective, was not highly purified, leading to anaphylactic reactions in many patients. Furthermore, the laboratory that produced it has stopped making it. Some doctors and emergency rooms still have stockpiles of the older antivenom and were, again understandably, very reluctant to withhold it (imperfect as it was, it did the trick) in order to enroll patients in a placebo-controlled trial.

So the researchers knew what they were up against in terms of conducting a placebo-controlled trial. They also knew that, in many cases, a placebo-controlled trial is a necessary first step toward making clear, quantitative claims about a drug, in order to move forward with larger trials investigating dosage, safety, side effects, etc. Once they got two hospitals to take part, they needed only 15 patients to demonstrate, quite definitively, that the new antivenom was, as the researchers say, “overwhelmingly effective.” The results were impressive: for all the kids treated with Anascorp, symptoms resolved completely within 4 hours, compared to longer and more complicated recoveries for the kids who got a placebo.
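To see why such a tiny trial can nonetheless be statistically definitive, here is a rough sketch using Fisher’s exact test. The counts below are illustrative assumptions, not the trial’s published data: suppose 8 of the 15 children received Anascorp and all 8 had symptoms resolve within 4 hours, while only 1 of the 7 placebo patients did.

```python
# Illustrative only: how lopsided must a 15-patient result be to rule
# out chance? These counts are assumed for the sake of the example.
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    the probability, under the null hypothesis of no treatment effect,
    of seeing at least `a` successes in the treated group."""
    row1 = a + b          # treated-group size
    col1 = a + c          # total number of successes
    n = a + b + c + d     # total participants
    p = 0.0
    # Sum the hypergeometric probabilities of tables at least as extreme.
    for x in range(a, min(row1, col1) + 1):
        p += comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    return p

# Assumed counts: treated 8 resolved / 0 not; placebo 1 resolved / 6 not.
p = fisher_exact_one_sided(8, 0, 1, 6)
print(f"one-sided p = {p:.4f}")  # prints: one-sided p = 0.0014
```

Under these assumed counts, a result this lopsided would occur by chance well under 1% of the time, which is one way to see how a 15-patient trial could be persuasive enough to support approval.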

Once word got out about the effectiveness of this new antivenom, a new ethical dilemma emerged. No one wanted to administer the old antivenom, nor did patients want to be enrolled in yet further placebo studies. So Anascorp was made available, on an “open trial” basis, to all centres that might treat patients with bark scorpion stings. Over a six-year period, they effectively conducted the largest antivenom “trial” in history: more than 1500 patients were treated with the new antivenom, with only mild side effects. They then put together an FDA proposal to approve the use of the drug in treating scorpion stings.

Good news. As of August 4, 2011, the FDA has approved the use of Anascorp for use in scorpion stings. You can read about that here, if you’re interested (or live in Arizona!): FDA approves Anascorp for Use in Scorpion Stings.

Disclosing Conflicts of Interest: The Case of Tamiflu

•June 3, 2011 • 1 Comment

An interesting story recently from CBC highlights some of the difficulties with the topic of conflict of interest in medicine and biomedical research.

Here’s a link to the story:

CBC Tamiflu Probe Sparks Drug Policy Review

“In the course of the CBC investigation, Zalac also reported that three of Canada’s most prominent flu experts — Dr. Donald Low and Dr. Allison McGeer of Mount Sinai Hospital in Toronto, and Dr. Fred Aoki of the University of Manitoba — had received research funding or acted as a consultant or speaker for Roche during the period when Tamiflu was being promoted.

Their research involvement with Roche and other anti-viral drug makers was not a secret within the industry.

All three would sign the now standard conflict-of-interest declarations when speaking at professional events or publishing papers. And the Public Health Agency says it has always been aware of the drug industry affiliations of its private sector advisers and takes these into account. But these relationships were rarely reported in broader public forums, in the media or even when some of these individuals would appear in marketing videos or flu-warning commercials on television produced by Roche.”

In biomedical research, typically ethics review boards ask researchers to “declare” potential conflicts of interest to them. That’s great. It’s certainly a start. But once potential for conflicts of interest have been “declared” to ethics review boards (in what is, essentially, a confidential review process), what happens then? Do research participants always know about potential conflicts of interest that researchers have? Is this made explicit in an understandable way to participants?

This story demonstrates that conflicts of interest in health care, medicine and biomedical research are often handled in a fairly superficial or limited way, with an emphasis on disclosure and little beyond that. In this story, the three key “flu experts” who were, in 2009, promoting Tamiflu as a key defense against the H1N1 flu were also found to have received research funds or acted as speakers or consultants for Roche, the company that makes Tamiflu. The Public Health Agency of Canada (PHAC) states that it was aware that these experts had affiliations with the drug company, but these relationships were not reported broadly to the public. The public was watching these experts and PHAC closely for advice and guidance. Should it have known more about the relationships between these experts and Roche?

Tamiflu was seen, by much of the public, as the panacea for H1N1 and avian flu. Governments stockpiled the drug, hoping never to have to use their vast stores. Now that these stores are approaching expiry, a decision needs to be made about either replenishing them or exploring new alternatives. Over the last few years, while these stockpiles of Tamiflu have been sitting in storage, independent researchers have been exploring whether or not Tamiflu really is all it’s cracked up to be as a first-line drug for H1N1 flu. As the story notes, researchers are challenging the “fact” that Tamiflu reduces morbidity and hospitalizations from the flu. Furthermore, the side effects (including bizarre behaviours and delusions) thought to have resulted from taking Tamiflu are viewed as serious enough to warrant exploring other options.

When we look closely at this, it may well be that the relationships these experts had with Roche do not constitute an obvious conflict. As science moves forward, recommendations and best practices change, and this may well have been the case with Tamiflu. It may also be true that Tamiflu was the best first-line defense in 2009 and that research since then has shown it may not be as good as we thought. These three experts may have truly felt that Tamiflu was, in fact, the best treatment for the flu at the time they were vigorously promoting it. However, without full disclosure of the relationships they had with Roche, and some clarity from PHAC on how these potential conflicts would be managed, the public may well doubt their ability to be objective about Tamiflu. It now appears that PHAC may well be making changes to how it manages such potential conflicts of interest with its experts and advisors.

From the CBC story:

“As for the Public Health Agency of Canada, it released a statement that said it would be inappropriate at this point to release the drug company connections of its advisers without their consent.

PHAC says that its advisory committees provide advice but that the agency makes the final decisions. However, because of the questions raised in the CBC documentary, the agency said it “intends to establish a policy on the release of information relating to members of its expert or advisory groups/committees.””

Prenatal Surgery, Hope for Spina Bifida and Ethical Reflection

•February 15, 2011 • 1 Comment

Spina bifida is a serious developmental birth defect in which the spinal cord and backbone fail to develop completely and infants may be born with some of the spinal cord protruding out of their middle or lower back. Treatment typically involves surgery as soon as possible to repair the birth defect. There are different forms of the defect and depending on the presenting form and the treatment, children born with spina bifida may have varying levels of disability.

Surgery on the spina bifida deformation can occur prenatally, i.e., while the fetus is still in the womb, or it can occur soon after birth. A recent story in The New York Times notes that there is now clinical evidence that surgery done while the fetus is still in the womb results in better outcomes and far less disability for children compared to children who have the defect repaired surgically after birth.

The story highlights a seven-year study (recently ended) in which mothers of fetuses diagnosed with spina bifida were randomized into one of two treatment arms: reparative surgery prenatally, or surgery soon after birth. The study was stopped after it was found that outcomes in the prenatal surgery arm were simply better — children were less disabled, were more likely to walk and had fewer neurological complications.

Here is the story from The New York Times: Risk and Reward In Utero

“It was one of the hardest decisions I had to make to be in the study,” said Ms. Shapiro [a pregnant woman who enrolled in the study], who knew how disabling spina bifida was because her sister-in-law has it. “It was a big disappointment that we didn’t get the prenatal surgery because I knew that that was the surgery that was most likely going to help him the most, because otherwise why would they be doing the study? But at the same time, he could have died or been born prematurely from prenatal surgery. When they explained everything to us, I wanted to be in it regardless.”

This is a fascinating study. The potential for the future is pretty amazing. But, as the story notes, the study raised some interesting ethical issues.

First, the study was stopped by the local monitoring board, as clinical equipoise was disturbed once prenatal surgery was found to produce so much better outcomes than surgery done after birth. Prenatal surgery was seen, in the 1990s and early 2000s, as a potential panacea for repair of the spina bifida defect (see the link to the 2003 story below), but while outcomes were generally good, the surgery did (and still does) involve significant risk to both mom and fetus — and there wasn’t enough good data on outcomes, safety risks and health complications. Thus the obvious need for a clinical trial. Did the trial start out with clinical equipoise, though? We’ve written about clinical equipoise here before, i.e., the genuine uncertainty about the preferred treatment at the outset of a clinical trial. Physicians and parents, as the story demonstrates, clearly believed that the prenatal surgery option resulted in better outcomes and was worth the risk. In fact, many pregnant women enrolled in the trial just to have a chance at accessing prenatal surgery.

The physicians who were involved in the study knew that pregnant women would not enroll in the study if the prenatal surgery was offered elsewhere (as it was up until the study), so they persuaded other hospitals not included in the study to stop offering the prenatal surgery. Thus, pregnant women were essentially forced to enroll in the trial for a chance to be randomized into the treatment arm that provided the prenatal surgery. However, as a result of randomization, many of these women who enrolled were randomized into the other treatment arm and received surgery after the birth of their infant.

It’s interesting (from both a practical and an ethical perspective) that a procedure, albeit one with risks, that had been offered to pregnant women carrying fetuses with diagnosed spina bifida right up until the start of the trial was, for all intents and purposes, no longer offered except (possibly) through enrollment in a clinical trial. Physicians who, before the trial, could offer this treatment to their patients (with all the potential risks and benefits involved) were no longer free to do so if they were providing care in a hospital that was not part of the trial. While some bioethicists quoted in the story feel that the move to limit access to the prenatal surgery was a bold, collaborative step, others state that it was ethically problematic to take away any notion of choice for pregnant women by severely limiting access to an option that many already felt to be superior, and that many desired even with the risks.

While many clinical trial participants enroll with altruistic objectives, others enroll in order to access treatments and procedures that might only be available through a trial. This is often seen with drugs that are not available on the market until clinical testing is complete. In this case, clearly, pregnant women would be interested and motivated to enroll out of genuine concern for the future of the child they are carrying. Altruism might well be part of the motivation — as the story notes, one gracious woman who was randomized into the after-birth surgery arm felt that while her son did not benefit as much as he might have from prenatal surgery (he has some significant disabilities), participation in the trial contributed to knowledge about his disease. Needless to say, it’s a good thing that the trial was done, and also a good thing that it was stopped in order to make what has been shown to be a superior treatment available, once again, to all.

If you’re interested in this really fascinating topic, here are two related stories. The first is a recent story from The New York Times on the possibilities related to the outcomes of this trial: Success of Spina Bifida Study Opens Fetal Surgery Door.

The second is a much older story from National Review in 2003, on the emerging possibilities of fetal surgery for spina bifida. (I remember this story from when it was published, for the unforgettable image called “The Hand of Hope”): Miracles of Life: The Beltway goes inside the womb

German Anesthetist Faces Massive Retraction of Published Research Articles

•February 14, 2011 • Leave a Comment

A German anesthetist, Joachim Boldt, has published over 350 peer-reviewed papers and is currently the target of a German medical board investigation into data manipulation, fabrication of data and failure to have human studies reviewed and approved by an ethics review board. This story began when the journal Anesthesia & Analgesia retracted a 2009 article authored by Boldt, citing concerns of data manipulation. Now, after further investigation by a German medical review board, up to 90 of Boldt’s published articles are being considered for formal retraction.

Here’s more on the story from Retraction Watch: Unglaublich! Boldt investigation may lead to more than 90 retractions

Note: The Retraction Watch story refers to a joint letter posted to the websites of 11 major anesthesia journals, authored by their editors-in-chief regarding Boldt’s research, much of which was published in their 11 journals. Here is an excerpt from the letter that was included in the Retraction Watch story:

LÄK-RLP [the German medical board conducting the investigation] has reviewed 74 scientific articles describing clinical trials subject to the requirements of the German Medicinal Act. This includes the article by Professor Boldt recently retracted by Anesthesia & Analgesia and an article submitted by Professor Boldt to Anaesthesia but not published. By law these studies required IRB approval. Although the articles typically stated that IRB approval had been obtained, LÄK-RLP could not find evidence of approval for 68 of these articles.

The story on Retraction Watch notes that, in addition to these retractions, Boldt faces a fine of up to 100,000 Euros (approximately 140,000 Canadian dollars) and even jail time for conducting human research without ethics review board approval, a violation of the German medical profession’s Code of Deontology (i.e., Code of Ethics).

The story notes that there is evidence that Dr. Boldt failed to obtain ethics review board approval for studies, forged signatures on copyright forms, fabricated data, fabricated clinical cases, and lied about the participation of co-authors. In fact, the claims and information suggest that Boldt’s 2009 now-retracted study never even took place at all. As a result of these claims, further investigation into Boldt’s other published research is taking place.

There is a plethora of ethical and practical problems with Boldt’s research, if even a few of the claims are found to be true. Certainly there are too many to discuss here. But what I did want to comment on is the burden placed on ethics review boards to “police” research, something most of us are loath to do. As the story notes, upon submission to a scientific journal, researchers are often required simply to check a box indicating that an institutional ethics review took place. No one asks for confirmation. In this case, no one questioned the volume of research Boldt was producing. No one questioned the fact that Boldt reported the use of albumin in his published studies when it hadn’t been used at that institution since 1999. It seems no one asked for evidence or confirmation of some very basic science — something one would expect as part of a peer review process. The role of peer review, or scientific review, is to some degree downplayed, with claims that scientists are too busy to do thorough reviews, can’t be expected to know everything in their discipline, or certainly shouldn’t have to monitor their colleagues. Well, peer review is, in fact, about monitoring colleagues in a way that ensures the rigor and quality of information provided through research and science.

I know that I have had months in which I have been asked to do a number of scientific reviews of articles. I also know that I don’t sign up unless I know I can devote the time to a thorough review, not just “sign on the line” after reading a paper superficially, or approve it for publication because it’s by a smart friend or close colleague. There’s no clear evidence that those sorts of things happened in this case, but it is clear that, alongside other systemic problems, the process of scientific peer review failed miserably here.

Ethical research and ensuring ethical conduct in research shouldn’t be the sole responsibility of an ethics review board. Nor should concern with ethical research begin and end with the submission to an ethics review board. It should be an ongoing part of the entire research process, and a clear part of the research culture, embedded in professions, disciplines and institutions that fund, sponsor and publish research.

Parents Complain Over “Opt-Out” Consent In Kindergarten Study

•February 6, 2011 • 4 Comments

According to a recent story in Burnaby Now, a study notice sent out to parents of kindergarten-aged children has triggered complaints to the University of British Columbia’s Behavioural Research Ethics Board, which had approved the study. Parents received the notice about the Early Development Instrument, a survey of kindergarten-aged children that is part of a much larger project called “The Human Early Learning Partnership (HELP)”, a government-funded research consortium of universities based at the University of British Columbia (UBC).

The Early Development Instrument is a tool used to assess children’s development in kindergarten. It involves collecting information about a child’s physical health and well-being, social capabilities, emotional maturity, language and cognitive development, and communication skills. According to the Burnaby Now story, the study notice offers brief information on what kinds of data will be collected about the children and notes that these data will be linked to other kinds of education and health information. It also notes that if parents do not want data collected on their child, they must contact their child’s teacher to indicate this. In other words, the consent, referred to as “passive” in the story, is an opt-out consent.

Here’s the story: ‘Passive consent’ triggers complaint: Burnaby parent says survey is ‘unethical’

The letter informs parents about the survey and mentions that data collected can be linked to education and health information. The letter also states that participation is voluntary, and that parents can contact their children’s teachers if they don’t want them to participate.

Ward [a concerned parent] has issues with the use of passive consent, meaning if parents say nothing, their kids will be included in the survey, rather than signing a consent form expressly stating they want their kids involved. Ward is also complaining about what she calls a lack of information on what the data gets linked to.

“They don’t say what they are linking it to,” Ward said. “It violates parents’ rights to informed consent.”

The parents have formally complained about two issues: the use of an opt-out consent and the reported lack of information about how data are being linked to other kinds of datasets, including health information. There’s one more issue here that interests me, in addition to these concerns. I’m wondering if the study researchers have a mechanism in place to report back to parents if serious developmental, social or health issues are identified in their children.

Here are a few words on each of the issues raised by this story:

1. The use of an opt-out consent process is discouraged across many research contexts. If an opt-out procedure is used, the burden rests with the researcher to ensure that there is thorough and explicit information about what participation really means and how consent, or in this case non-consent, is expressed. We talk about “obtaining” informed consent in research, and this does involve time and energy. It is a clear step that must be revisited and confirmed throughout the life of a research project. Here are three initial questions I would ask about this opt-out process: Are the researchers sending information that is accessible in every parent’s home language and level of reading comprehension? Is it clear to everyone what opting out, or passively opting in, really means? Is it clear that a traditional informed opt-in consent is impractical? Many potential participants will not fully understand the negative withdrawal option and will simply not make the call to the teacher, either for clarification or to exercise their right to opt out, thereby passively consenting – but this isn’t informed consent. An opt-out clause should never be used as a means to save the researcher time and energy, nor with the goal of simply “getting more participants”.

2. The notice states that linkages will be made between the data collected and other kinds of data, including health and educational data, considered to be private information. Full and detailed information should be provided to parents about the kinds of linkages being made and the kinds of other data being accessed by researchers. Furthermore, the kinds of strategies put in place to protect privacy should be outlined explicitly and clearly. From the parents’ reactions in this story, the data linkages and privacy measures have not been fully explained.

3. While the study is low-risk, it involves collecting potentially sensitive and very important data about kids. There are very likely to be findings that suggest or demonstrate mild to serious developmental, social or health-related concerns in children. Interestingly, the researchers also note that the information can be used to predict possible future criminality – something that, as a parent, I think I’d want to know. The new second version of the Tri-Council Policy Statement (TCPS) discusses the notion of incidental findings, defined as “unanticipated discoveries made in the course of research but that are outside the scope of the research” (Article 3.4). The TCPS is also quite clear that a plan must be in place to inform participants of expected study results and incidental findings. In this case, the goal of the survey tool is to gather information on the development of children, so concerning findings are not incidental, per se, but expected. I’m wondering what kind of information gets back to parents about their children.

The story does not include any response from the HELP project or the UBC Research Ethics Board. I found a great deal of information about The Human Early Learning Partnership on the web, and a great deal of information about the Early Development Instrument. But it took time and work to read through it all. If I were a parent getting a study notice like this, I’d be looking for information about the project and giving serious thought to whether or not to participate. However, I’m a researcher. Many parents are not. Many parents have low levels of literacy or are fluent in languages other than English. Many parents work long hours and at multiple jobs. So the burden of seeking out information about a study like this should not be transferred to the parents.

The project itself looks like an important and laudable undertaking, with goals that are clearly in the interests of multiple stakeholders, including those funding this very large project. At the end of the day, however, even with multiple powerful stakeholders, the interests and the subjective experiences of participants, i.e. those providing the data, must be paramount. When participants feel that they are uninformed, unprotected and not being treated in an ethical manner, it appears, at the very least, that their interests are not being prioritized.

If Placebos Work, Why Placebo-Controlled Trials?

•January 3, 2011 • 3 Comments

Placebos have been a hot-button issue in human-subjects research ethics for years. Many regard placebo-controlled trials as the “gold standard” for clinical research. Others see the use of placebos in clinical research as typically unjustified (at least in cases where a decent treatment exists). (To get a sense of this debate, check out this article by Charles Weijer: “Placebo trials and tribulations”.)

So it has been interesting to see placebos popping up as a topic of discussion lately. Placebos are supposed not to work. That’s the whole point of a placebo-controlled trial — to compare an experimental drug against something “known” not to work, in order to make sure that any positive effects observed aren’t merely the result of the “placebo effect,” i.e., roughly the psychological result of patients feeling cared for.

But if placebos are (by whatever mechanism) more effective than previously thought, that throws a monkey wrench into the works.

As a starting point, see this Wired article: Placebos Are Getting More Effective. Drugmakers Are Desperate to Know Why.

For a more scholarly look at a complication, see this PLoS One article by Kaptchuk et al: Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome.

And for an overview of the whole issue, see this PLoS blog entry: Meet the Ethical Placebo: A Story that Heals.

Much of the latter blog entry is about clinical (not research) use of placebos. But still, it’s something those of us with an interest in Research Ethics need to think about. As knowledge grows about the size, limits, and exact mechanism behind the placebo effect, we may well need to rethink the role of placebos in randomized controlled trials.
