The Facebook Emotions Study

•July 24, 2014 • 1 Comment

Over at my Business Ethics Blog, I just posted my thoughts on the recent controversy over the Facebook / Cornell study. I look at the question not of the ethics of the Cornell researchers, but of Facebook as a corporation. I basically argue that:

a) the risks involved were trivial;
b) the commercial context matters, and permits certain kinds of experimentation;
c) the study was, from Facebook’s point of view, closer to ‘program evaluation’ (i.e., closer to a kind of study that is normally exempt from research regulations anyway).

You can read the whole thing here: Facebook’s Study Did No Wrong.

Here also is the view of 33 bioethicists, published in Nature, saying that while prior ethical review would have been a good idea, the study was not fundamentally unethical:
Misjudgements will drive social trials underground.

Research ethics scandals in Canada, you ask? Sadly, yes.

•July 23, 2013 • 1 Comment

There are certainly plenty of people who think that research ethics scandals happen everywhere else, but not in Canada. Well, a recent report by food historian Ian Mosby at the University of Guelph has shown that, yes, research ethics scandals can, and do, happen in Canada.

Mosby’s report, published in Histoire Sociale/Social History, provides “a narrative record of a largely unexamined episode of exploitation and neglect by the Canadian government.” It describes ten years of nutritional experiments conducted on 1,300 Aboriginal adults and children, including those in residential schools. These funded studies were done without community or individual consent, without any assessment of potential benefits and risks, without any consideration of the extreme vulnerability of the persons involved, and without any clear humanitarian or altruistic aim or realized benefit to those involved, all while exposing them to real harms. And that is just the beginning of the problems.

The details of the report are horrific.

What is most shocking is that these researchers went into communities where they already knew there were significantly higher general and infant mortality rates (compared to anywhere else in Canada at that time), high rates of malnutrition and hunger, and high rates of TB and other diseases. Yet when they arrived — and these documented facts were clearly confirmed by what they observed — they saw a clear opportunity and a kind of living laboratory, rather than a humanitarian tragedy that required their intervention.

Many people, upon first hearing of these experiments, will wonder why the principles of research ethics were not followed here, or why the obvious potential and actual ethical problems were not addressed in any way. Well, those who are familiar with research ethics know that, during the time these experiments were being conducted in Canada, the Doctors’ Trial was taking place in Nuremberg, Germany, after the Second World War (1946–1947). Out of that trial came the Nuremberg Code, which outlined ten principles of ethically sound research, including the requirement that research subjects must provide voluntary consent. This challenged the paternalistic approach to research that, in some cases, assigned little to no inherent value to persons who were simply research subjects and nothing more. But what’s important to realize is that it isn’t the case that, as soon as the Nuremberg Code was established, everything instantly got better for research subjects and researchers stopped exploiting persons. That didn’t happen, and in many cases ethically problematic research continued despite the Nuremberg principles. It has taken many years for these principles to become the norm – to be realized, formalized, institutionalized and embedded in the culture of research. Consider the Tuskegee Syphilis Study, which was not stopped until 1972, and only then after about six years of active lobbying by a persistent whistleblower.

According to the Toronto Star:

These experiments aren’t surprising to Justice Murray Sinclair, chair of the Truth and Reconciliation Commission. The commission became aware of the experiments during their collection of documents relating to the treatment and abuse of native children at residential schools across Canada from the 1870s to the 1990s.
It’s a disturbing piece of research, he said, and the experiments are entrenched with the racism of the time.
“This discovery, it’s indicative of the attitude toward aboriginals,” Sinclair said. “They thought aboriginals shouldn’t be consulted and their consent shouldn’t be asked for. They looked at it as a right to do what they wanted then.”

Here are some links to the media coverage of the release of the report:

Hungry Canadian aboriginal children were used in government experiments during 1940s, researcher says

Canadian government withheld food from hungry aboriginal kids in 1940s nutritional experiments, researcher finds

Past abuses linger over First Nations education debate

When Canada used hunger to clear the West

Here, as well, is a link to Chapter 9 of the Canadian Tri-Council Policy Statement, on research involving the First Nations, Inuit and Métis peoples of Canada.

Ethical Design for Cluster Randomized Trials

•November 22, 2012 • Leave a Comment

A team led by our friend and colleague Charles Weijer at the University of Western Ontario has just issued guidelines for what are known as “cluster randomized trials” (CRTs).

See the story here:
Western-led team delivers world-first ethics guidelines.

CRTs are clinical trials in which randomization occurs across groups of participants, or across institutions, rather than across individual participants. In other words, each participant is not randomized into one arm of the trial or another. Rather, randomization is done at a higher level — an entire institution’s patient population is treated as a unit for purposes of randomization. This raises a number of interesting ethical issues. These new guidelines will surely help advance our understanding, as well as highlight an important range of issues for those of us not previously aware of them.
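For readers who find the distinction abstract, here is a purely illustrative sketch of how allocation works in a CRT. The site and patient names are made up; the point is simply that the coin flip happens once per site, so every patient at a site inherits the same arm:

```python
import random

def cluster_randomize(sites, seed=0):
    """Assign each site (and therefore all of its patients) to an arm.

    In a cluster randomized trial the unit of randomization is the
    cluster (e.g. a hospital), not the individual patient.
    """
    rng = random.Random(seed)
    return {site: rng.choice(["intervention", "control"]) for site in sites}

# Hypothetical sites and patients, for illustration only.
sites = {
    "Hospital A": ["p1", "p2"],
    "Hospital B": ["p3"],
    "Hospital C": ["p4", "p5"],
}

site_arms = cluster_randomize(sites)

# Each patient simply inherits the arm of their site; no patient is
# randomized individually.
patient_arms = {p: site_arms[s] for s, patients in sites.items() for p in patients}
```

This is part of what makes the ethics interesting: a patient at "Hospital A" has no individual-level randomization event at which consent to randomization could naturally be sought.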

Clinical Trials in Russia

•November 12, 2012 • Leave a Comment

Generally, when westerners think of people in foreign lands participating as human subjects in clinical trials, we think of the developing world. That image is somewhat incomplete.

This was from September, but well worth a look at this NYT piece if you missed it:
Russians Eagerly Participate in Medical Experiments, Despite Risks

As a test subject in a Russian clinical trial for an experimental weight loss drug, Galina I. Malinina had to inject herself in the stomach daily. … she threw up every day for two weeks, yet stuck to the regimen, something valued by companies, as dropouts are expensive.
“It’s wonderful,” she said of the test substance, a weight loss serum under development by the Danish biotechnology giant Novo Nordisk. In addition to losing 22 pounds in a year, she said, “I became more lively; I walk easier and I have energy.”

Why go through this? For the same reason that people sign up for clinical trials in India or rural China.

Patients, as was the case with Ms. Malinina, are eager to join trials because often it is the only way to receive modern medical care.

Is this predatory? Are drug companies testing drugs on poor Russians in order to sell drugs to wealthy Americans, Canadians, and Brits? The answer is not so simple. The Russian government, apparently, is pretty excited to provide incentives for drug companies to conduct trials there:

…under a law passed in 2010, ostensibly on health grounds, foreign drug companies must test medicine on Russians for it to be marketed in Russia.

Interpreting Canada’s TCPS2

•January 17, 2012 • Leave a Comment

Canada’s Interagency Advisory Panel on Research Ethics has begun putting online its interpretations of the second iteration of the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (a.k.a. “TCPS2”).

“The Interagency Advisory Panel on Research Ethics (the Panel) is pleased to share a growing collection of its responses to written requests for interpretation….”

The website also explains the role of the Panel in interpreting the TCPS2, as well as featuring a nifty feedback function attached to each interpretive note.

Cracking Down on Research Misconduct at a Chinese University

•January 14, 2012 • Leave a Comment

Here’s an interesting bit on one Chinese university’s efforts to crack down on research misconduct:

From Nature: Research ethics: Zero tolerance — A university cracks down on misconduct in China.

Most readers of this blog will know that research misconduct doesn’t fall under the heading of Research Ethics, as that term is normally applied to the work of Research Ethics Boards and Institutional Review Boards. But neither are the issues entirely separate.

Here’s a paragraph I found particularly interesting, about the causes of misconduct:

Cao and other experts on misconduct point to specific contributing factors. China’s research system has developed very rapidly, and universities are scrambling to train the influx of students, scientists and administrators. “As a large, newly developed system of research, China does not have the control of its research programmes that is found in the West,” says Nicholas Steneck, who studies research integrity at the University of Michigan in Ann Arbor. Some researchers are simply oblivious to the rules, says Zhong Haining, a neuroscientist who trained at Tsinghua University and is now starting a lab at Oregon Health and Science University in Portland. “The official guideline for scientific misconduct may (or may not) exist, but it’s not very well publicized, at least not emphasized so much in training,” he says.

I wonder if the causes of misconduct are so different in other places?

Research Fraud Case Raises Concerns Over Ethics of Psychology Research

•November 15, 2011 • 1 Comment

It’s a big story in research ethics and it has been reported in major newspapers all over the world. There’s even a Wikipedia page about the case already. The story centres on a fraud case against Diederik Stapel, a well-known psychologist and widely published researcher at Tilburg University. It is claimed that Dr. Stapel drew inaccurate and false conclusions from analysis of collected data, and that he falsified and “made up” entire experiments, which were written up and published in a variety of high-profile journals, including Science and The Journal of Personality and Social Psychology, a publication of the American Psychological Association. Over his career, Dr. Stapel published more than 150 papers, many of which, as the New York Times story about this case notes, were designed to “make a splash in the media” through provocative titles and topics. Not only have his own published papers been found suspect; many of the PhD theses he supervised have been found to be “tainted,” according to a story in Science’s online magazine.

Here’s one account of the case, from one of my favourite science writers, Benedict Carey of the New York Times: Fraud Case Seen as a Red Flag for Psychology Research

There isn’t a lot to say about this case that differentiates it from other accounts of fraudulent conduct in science – there are (far too many) stories of research fraud describing falsification of data, skewing of data, misrepresentation of data, even the “making up” of entire experiments. But two things stood out for me the more I read about this case.

First, the claims of fraud extend beyond Stapel’s own work and publications. It’s not unusual to read stories about fraud implying that colleagues or lab assistants turned a blind eye, or wondered about the veracity of findings or the ‘incredible luck’ of colleagues with seemingly perfect data. But this case extends well beyond inattentive, ignorant or silent colleagues. Here, Stapel, as a supervisor, included his own students in his fraud and has quite possibly ruined their careers and prospects. The Tilburg University commission investigating Stapel’s misconduct has suggested that criminal charges may be appropriate, based not only on the misuse of research funds but also on the harm done to these students. While many might ask, “Why would he also ruin the careers of students?”, it would appear that the fraud was so extensive that he not only had no raw data to provide to students for secondary analyses or to inform their work, but that he may never have conducted a well-run psychological study himself, and so could not guide someone else through the process. This “trickle-down” effect of research fraud is typically not addressed at all in stories like this, even though a key role of a senior scientist is almost always the mentorship of young scientists and researchers. The negative effects of scientific misconduct on those who are junior, who enter the field with good intentions, thoughtfulness and enthusiasm, are a serious potential fallout, and one that should be treated just as seriously as the fraud or misconduct itself.

The second part of this story that interested me most was what Carey, writing in the New York Times, had to say. According to him, this case isn’t the straw that “broke the camel’s back”; rather, it’s evidence that the camel’s back has already and most certainly been broken. Benedict Carey has been a science and medicine writer at the New York Times since 2004. I’ve read plenty of Carey’s writing and he’s a smart journalist who writes thoughtfully and articulately about all kinds of topics in the science and medical research world. What Carey notes, and the Science magazine story confirms, is that one reason Stapel was able to do this so easily is that he never showed anyone his raw data, nor was he ever required to do so. Incredibly, students who wrote their theses with Stapel as a supervisor were not allowed to view the raw data or data sets being used for their own papers. Some were never allowed to conduct an experiment at all. Colleagues felt that the findings Stapel cited in many of his own studies were “too good to be true,” and they attributed their failure to replicate those results to their own shortcomings. “Cutting corners” with data, “statistical sloppiness, falsifying data” and “reporting unexpected findings as predicted” are claims that Carey notes are prevalent in a discipline in which raw data are rarely shared to back up results, or requested by reviewers to verify findings and conclusions. What I’ve found is that researchers in the area of psychology tend to keep datasets for long periods of time — longer than any other human-research discipline I’ve encountered. I find it interesting that no one, at any time, requested to see an anonymized dataset or anything else to back up Stapel’s (often) controversial claims. In ethics review processes, we often think about what we call “the life of the data.”
We are concerned with what happens to datasets after collection and analysis are complete, how data are stored, and who might be able to access data in the future. From what I’ve read in research protocols about data management, researchers are not always thinking about the data as the fundamental proof that can back up any claim they wish to make and publish. I often read graduate student research protocols, from a variety of disciplines, stating that the raw data will be destroyed as soon as the research or thesis is published. I always tell them that’s unwise, and that data should be stored for a reasonable amount of time after publication or dissemination of findings. It’s amazing, to me, that the ethics review process is very often the first time anyone has talked to many new researchers about the idea that the data are there (and must be there!) as the fundamental link between a hypothesis and the claims made by a researcher.

It will be interesting to see if there is any further fall-out from this story. I’ll continue to follow the story and update it.

The University of Tilburg established a commission to investigate Stapel’s work and the claims of fraudulent scientific conduct. As of two weeks ago, the commission’s interim report is now available in English here.

What Scorpion Bites Can Teach Us About Placebo Trials

•August 14, 2011 • Leave a Comment

A recent story in Nature highlights a few important concepts in the conduct of placebo-controlled trials. This story is about a very small placebo-controlled trial involving only 15 children who were stung by bark scorpions in Arizona. The trial’s goal was to demonstrate effectiveness of a new antivenom, Anascorp. The bark scorpion is the only scorpion in the southwestern US thought to be potentially life-threatening for humans and, according to what I’ve read, is a very frequent and very unwelcome visitor in many Arizona homes.

Here’s a link to the story, from Nature: A tiny trial of 15 people helps convince the US to approve a drug that takes the pain out of scorpion stings

“It’s a rare, potentially fatal, emergency medical disease that predominantly affects rural children, and if you put all those adjectives together, it didn’t look possible to put [a placebo-controlled trial] together at all,” says Leslie Boyer, director of the Venom Immunochemistry, Pharmacology and Emergency Response (VIPER) Institute at the University of Arizona in Tucson, who led the supporting trial reported two years ago in the New England Journal of Medicine. “We started out with a daunting task.”

For one thing, trial investigators can’t enroll participants ahead of time because there’s no way of knowing who will get stung. Then there was the problem of getting hospitals to sign up to a placebo-controlled trial….”

This story is interesting because it demonstrates some of the difficulties with conducting placebo-controlled trials, and challenges with getting clear clinical research data about yet-to-be-approved treatments in the face of a life-threatening injury. It is also pretty unheard of for a trial involving such a small number of participants to be considered as clearly definitive as this one was.

First, it was difficult to think of using a placebo in the face of a potentially life-threatening scorpion sting. It would be unethical to withhold treatment and instead give a placebo to a child with the symptoms of a scorpion sting (described in the article as including “severe pain, extreme nausea, blurred vision and breathing problems”). Children who experience such a reaction have difficulty, over time, coping with the pain and nausea, and the extreme demands on the cardiopulmonary system can lead to more serious outcomes, including death. Faced with a child who has been stung by a scorpion, most parents would not wish to take the time to (a) go through an informed consent process in the face of such a potential emergency, and (b) go through a consent process when the end result might well be a placebo. As a parent, you’d understandably want the antivenom, and you’d want it fast. The researchers fully understood this dilemma and the difficulty of using a placebo in this case.

Second, there is another, older antivenom that was used from about 1965 to 2000. Produced at Arizona State University’s Antivenom Production Laboratory in Phoenix, this antivenom, while effective, was not highly purified, leading to anaphylactic reactions in many patients. Furthermore, the laboratory that produced it has stopped making it. Some doctors and emergency rooms have stockpiles of the old antivenom and were, again understandably, very reluctant to withhold it (which, while imperfect, did the trick) in order to enroll patients into a placebo-controlled trial.

So the researchers knew what they were up against, in terms of being able to conduct a placebo-controlled trial. And they also knew that, in many cases, a placebo-controlled trial is one of the necessary first steps in being able to make clear, quantitative claims about drugs in order to move forward to conduct larger trials investigating dosage, safety, side effects, etc. Once they got two hospitals to take part, they needed only 15 patients to demonstrate, quite definitively, that the new antivenom was, as the researchers say, “overwhelmingly effective.” The results were impressive: for all the children treated with Anascorp, symptoms resolved completely within four hours — compared with longer, more complicated recoveries for the children who got a placebo.
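It may seem surprising that 15 participants can yield a definitive result, but the arithmetic checks out when the effect is this lopsided. Here is an illustrative calculation, using hypothetical counts (not the trial’s actual data): 8 treated children all recovering quickly versus only 1 of 7 on placebo. A one-sided Fisher exact test, computed from scratch with the hypergeometric distribution, shows how such a split clears conventional significance thresholds even at this tiny sample size:

```python
from math import comb

# Hypothetical counts, for illustration only:
# treated arm: 8 children, rapid resolution in all 8
# placebo arm: 7 children, rapid resolution in 1
resolved_treated, n_treated = 8, 8
resolved_placebo, n_placebo = 1, 7

def fisher_one_sided(a, n1, c, n2):
    """One-sided Fisher exact p-value for a 2x2 table.

    a resolved out of n1 in arm 1, c resolved out of n2 in arm 2.
    Sums hypergeometric probabilities of tables at least as extreme
    (a or more resolutions in arm 1, given the fixed margins).
    """
    k = a + c                         # total resolved across both arms
    n = n1 + n2                       # total participants
    hi = min(n1, k)                   # largest possible count in arm 1
    denom = comb(n, k)
    return sum(comb(n1, x) * comb(n2, k - x) for x in range(a, hi + 1)) / denom

p = fisher_one_sided(resolved_treated, n_treated, resolved_placebo, n_placebo)
# With these counts, p = 7/5005, roughly 0.0014 -- well below 0.01.
```

The design choice here is deliberate: an exact test makes no large-sample assumptions, which is exactly what a 15-person trial requires.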

Once word got out about the effectiveness of this new antivenom, a new ethical dilemma emerged. No one wanted to administer the old antivenom, nor did anyone want to be enrolled in further placebo studies. So Anascorp was made available, on an “open trial” basis, to all centres that might treat patients with bark scorpion stings. Over a six-year period, they effectively conducted the largest antivenom “trial” in history, with more than 1,500 patients treated with the new antivenom and only mild side effects. They then put together an FDA application to approve the use of the drug in treating scorpion stings.

Good news. As of August 4, 2011, the FDA has approved the use of Anascorp for use in scorpion stings. You can read about that here, if you’re interested (or live in Arizona!): FDA approves Anascorp for Use in Scorpion Stings.

Disclosing Conflicts of Interest: The Case of Tamiflu

•June 3, 2011 • 1 Comment

An interesting story recently from CBC highlights some of the difficulties with the topic of conflict of interest in medicine and biomedical research.

Here’s a link to the story:

CBC Tamiflu Probe Sparks Drug Policy Review

“In the course of the CBC investigation, Zalac also reported that three of Canada’s most prominent flu experts — Dr. Donald Low and Dr. Allison McGeer of Mount Sinai Hospital in Toronto, and Dr. Fred Aoki of the University of Manitoba — had received research funding or acted as a consultant or speaker for Roche during the period when Tamiflu was being promoted.

Their research involvement with Roche and other anti-viral drug makers was not a secret within the industry.

All three would sign the now standard conflict-of-interest declarations when speaking at professional events or publishing papers. And the Public Health Agency says it has always been aware of the drug industry affiliations of its private sector advisers and takes these into account. But these relationships were rarely reported in broader public forums, in the media or even when some of these individuals would appear in marketing videos or flu-warning commercials on television produced by Roche.”

In biomedical research, typically ethics review boards ask researchers to “declare” potential conflicts of interest to them. That’s great. It’s certainly a start. But once potential for conflicts of interest have been “declared” to ethics review boards (in what is, essentially, a confidential review process), what happens then? Do research participants always know about potential conflicts of interest that researchers have? Is this made explicit in an understandable way to participants?

This story demonstrates that conflicts of interest in health care, medicine and biomedical research are often handled in a fairly superficial or limited way, with an emphasis on disclosure and little beyond that. In this story, the three key “flu experts” who were, in 2009, promoting Tamiflu as a key defense against the H1N1 flu were also found to have received research funds, or to have acted as a speaker or consultant, for Roche, the company that makes Tamiflu. The Public Health Agency of Canada (PHAC) states that it was aware that these experts had affiliations with the drug company, but these relationships were not reported broadly to the public. The public were watching these experts and PHAC closely for advice and guidance. Should they have known more about the relationships between these experts and Roche?

Tamiflu was seen, by much of the public, as the panacea for H1N1 and avian flu. Governments stockpiled the drug, hoping never to have to use their vast stores. Now that these stores are approaching expiry, a decision needs to be made about either replenishing them or exploring new alternatives. Over the last few years, while these stockpiles of Tamiflu have been sitting in storage, independent researchers have been exploring whether Tamiflu really is all it’s cracked up to be as a first-line drug for H1N1 flu. As the story notes, researchers are challenging the “fact” that Tamiflu reduces morbidity and hospitalizations from the flu. Furthermore, the side effects (including bizarre behaviours and delusions) thought to result from taking Tamiflu are viewed as serious enough to warrant exploring other options.

When we look closely at this, it may well be that the relationships these experts had with Roche do not constitute an obvious conflict. As science moves forward, recommendations and best practices change, and this may well have been the case with Tamiflu. It may also be true that Tamiflu was the first-line defense in 2009 and that research since then has shown it may not be as good as we thought. These three experts may have truly believed that Tamiflu was the best treatment for the flu at the time they were vigorously promoting it. However, without full disclosure of their relationships with Roche, and some clarity from PHAC on how these potential conflicts would be managed, the public may well doubt their ability to be objective about Tamiflu. It now appears that PHAC may well be making changes to how it manages such potential conflicts of interest with regard to its experts and advisors.

From the CBC story:

“As for the Public Health Agency of Canada, it released a statement that said it would be inappropriate at this point to release the drug company connections of its advisers without their consent.

PHAC says that its advisory committees provide advice but that the agency makes the final decisions. However, because of the questions raised in the CBC documentary, the agency said it “intends to establish a policy on the release of information relating to members of its expert or advisory groups/committees.””

Prenatal Surgery, Hope for Spina Bifida and Ethical Reflection

•February 15, 2011 • 1 Comment

Spina bifida is a serious developmental birth defect in which the spinal cord and backbone fail to develop completely and infants may be born with some of the spinal cord protruding out of their middle or lower back. Treatment typically involves surgery as soon as possible to repair the birth defect. There are different forms of the defect and depending on the presenting form and the treatment, children born with spina bifida may have varying levels of disability.

Surgery on the spina bifida deformation can occur prenatally, i.e., while the fetus is still in the womb, or it can occur soon after birth. A recent story in The New York Times notes that there is now clinical evidence that surgery done while the fetus is still in the womb results in better outcomes and far less disability for children compared to children who have the defect repaired surgically after birth.

The story highlights a seven-year study (which recently ended) in which mothers of fetuses diagnosed with spina bifida were randomized into one of two treatment arms: to have reparative surgery prenatally, or to have surgery soon after birth to repair the spina bifida defect. The study was stopped after it was found that outcomes in the prenatal surgery arm were simply better — children were less disabled, were more likely to walk and had fewer neurological complications.

Here is the story from The New York Times: Risk and Reward In Utero

“It was one of the hardest decisions I had to make to be in the study,” said Ms. Shapiro [a pregnant woman who enrolled in the study], who knew how disabling spina bifida was because her sister-in-law has it. “It was a big disappointment that we didn’t get the prenatal surgery because I knew that that was the surgery that was most likely going to help him the most, because otherwise why would they be doing the study? But at the same time, he could have died or been born prematurely from prenatal surgery. When they explained everything to us, I wanted to be in it regardless.”

This is a fascinating study. The potential for the future is pretty amazing. But, as the story notes, the study raised some interesting ethical issues.

First, the study was stopped by the local monitoring board as clinical equipoise was disturbed when the prenatal surgery was found to be so much better, in terms of outcomes, than surgery done after birth. Prenatal surgery was seen as a potential panacea, in the 1990s and early 2000s, for repair of the spina bifida defect (see the link to the 2003 story below) but while outcomes were generally good, the surgery did (and still does) involve significant risk to both mom and fetus — and there wasn’t enough good data on outcomes, the safety risks and health complications. Thus the obvious need for a clinical trial. Did the trial start out with clinical equipoise though? We’ve written about clinical equipoise here before, i.e. the genuine uncertainty about what the preferred treatment might be at the outset of a clinical trial. Physicians and parents, as the story demonstrates, clearly believed that the prenatal surgery option resulted in better outcomes and was worth the risk. In fact, many pregnant women who enrolled in the trial did so just to have a chance at accessing prenatal surgery.

The physicians who were involved in the study knew that pregnant women would not enroll in the study if the prenatal surgery was offered elsewhere (as it was up until the study), so they persuaded other hospitals not included in the study to stop offering the prenatal surgery. Thus, pregnant women were essentially forced to enroll in the trial for a chance to be randomized into the treatment arm that provided the prenatal surgery. However, as a result of randomization, many of these women who enrolled were randomized into the other treatment arm and received surgery after the birth of their infant.

It’s interesting (from both a practical and an ethical perspective) that a procedure, albeit one with risks, that was offered to pregnant women carrying fetuses with diagnosed spina bifida before the start of the trial was, for all intents and purposes, no longer available except (possibly) through enrollment in a clinical trial. Physicians who, before the trial, could offer this treatment to their patients (with all the potential risks and benefits involved) were no longer free to do so if they were providing care in a hospital that was not part of the clinical trial. While some bioethicists quoted in the story feel that the move to limit access to the prenatal surgery was a bold, collaborative step, others state that it was ethically problematic to take away any real choice for pregnant women by severely limiting access to an option that many already felt to be superior and desired even with the risks.

While many clinical trial participants enroll with altruistic objectives, others enroll in order to access treatments and procedures that might only be available through a trial. This is often seen with drugs that are not freely available on the market until clinical testing is complete. In this case, clearly, pregnant women would be motivated to enroll out of genuine concern for the future of the child they were carrying. Altruism might well be part of the motivation — as the story notes, one gracious woman who was randomized into the after-birth surgery arm felt that while her son did not benefit as much as he might have from prenatal surgery (he has some significant disabilities), participation in the trial contributed to knowledge about his condition. Needless to say, it’s a good thing that the trial was done, and also a good thing that it has been stopped in order to make what has been shown to be a superior treatment available, once again, to all.

If you’re interested in this really fascinating topic, here are two related stories. The first is a recent story from The New York Times on the possibilities related to the outcomes of this trial: Success of Spina Bifida Study Opens Fetal Surgery Door.

The second is a much older story from The National Review in 2003, on the emerging possibilities of fetal surgery for spina bifida. (I remember this story from when it was published for its unforgettable image, “The Hand of Hope”): Miracles of Life: The Beltway goes inside the womb