HIV vaccine trial: a “shot in the arm”?
Last month, it was announced that an HIV vaccine trial carried out in Thailand showed hope for further progress towards the use of a vaccine against HIV. This was not a small trial: it was carried out on more than 16,000 Thai volunteers at a cost of US$105 million. The initial press release, which was heralded, blogged, tweeted, and retweeted, reported that the trial data showed a 31% decrease in the risk of HIV infection compared to a placebo. Characterized as a veritable “shot in the arm” for those engaged in HIV vaccine research (so called by the Principal Investigator of the study, Dr. Scott Hammer), this trial seemed to be the most positive and hopeful news in this area to date.
But is it?
New reports show that the initial data (demonstrating the “moderate” 31% decrease in infection rates) included a subgroup of participants who did not follow the trial protocol. According to The Scientist’s most recent report, if that subgroup is excluded, the analysis still shows a moderate decrease in infection rates, but the result is no longer statistically significant.
Here’s the most recent update, from The Scientist: Hubbub brews for HIV vax data
Although both sets of data were available to the researchers at the time of the original announcement, they chose not to report them alongside the initial analysis, Jerome Kim, a US Army scientist who was involved in the study, told the Wall Street Journal. “We thought very hard about how to provide the clearest, most honest message,” Kim said. “We stand by the fact that this is a vaccine with a modest protective effect.” But some AIDS researchers (who preferred to remain anonymous) have suggested that the study leaders were dishonest and put a positive spin on a study with, at best, inconclusive results.
For those of you who haven’t been in a stats class recently, statistical significance is not a trivial matter. It’s the measure by which you can state that a particular finding is unlikely to be due simply to chance, that it is a reliable finding you would expect to hold up if the study were repeated. In this case, it’s the justification that would have allowed researchers to claim that the decrease in infection rates was really due to the vaccine, and not something else. If in fact a number of participants did not follow the protocol, it’s very difficult to say that confounding factors (things other than the vaccine that might well be responsible for some of the decrease in infection rates) were adequately controlled.
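To make that concrete, here is a minimal sketch in Python (using SciPy’s Fisher’s exact test) of how a result can lose statistical significance when a subgroup is dropped. The infection counts below are purely hypothetical, chosen only to illustrate the effect of sample size; they are not the trial’s actual numbers.

```python
# Minimal sketch: why excluding a subgroup can erase statistical significance.
# The counts below are made up for illustration only; they are NOT trial data.
from scipy.stats import fisher_exact

# 2x2 tables: rows are [vaccine arm, placebo arm], columns are [infected, not infected].
full_cohort = [[50, 7950],   # vaccine arm
               [75, 7925]]   # placebo arm
per_protocol = [[35, 5565],  # roughly the same relative reduction,
                [52, 5548]]  # but measured on fewer participants

for label, table in [("full cohort", full_cohort), ("per-protocol subset", per_protocol)]:
    odds_ratio, p_value = fisher_exact(table)
    print(f"{label}: odds ratio ~ {odds_ratio:.2f}, p ~ {p_value:.3f}")

# With these hypothetical counts, the full cohort's difference falls below the
# conventional p < 0.05 threshold, while the smaller per-protocol subset, showing
# roughly the same relative reduction, does not: an effect of the same size,
# measured on fewer people, can no longer be distinguished from chance.
```

The point of the sketch is simply that significance depends on both the size of the effect and the number of participants behind it, which is why excluding a subgroup can matter so much.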
We’ve talked before about the responsibility of researchers and the media in reporting research stories to the public (here, here, and here), and this story reminds us, yet again, of that responsibility when presenting data to the public in a non-scientific, easy-to-understand way (at the time of the update, the data had not been presented at a scientific meeting or in a publication). While it’s important to be clear and honest — and the researchers have said repeatedly that they were aiming to be as clear and honest as possible — researchers also have an ethical obligation to ensure that the data, disseminated in any context, are not misleading, are scientifically sound, and are presented in a complete way, especially with an issue like this one, where many are actively looking for a glimmer of hope.
It may well be that the data support a conclusion that does provide hope: a decrease in infection rates that is really due to the vaccine. But amid the current confusion and doubt, the burden of proof now rests with the researchers to present all the data to both the public (again) and the scientific community, in ways that everyone can clearly understand.