Research Volunteers Unwittingly at Risk

Last in a series of occasional articles

By Rick Weiss
Washington Post Staff Writer
Saturday, Aug. 1, 1998; Page A1

It seemed a stunning example of medical research run amok: Physicians in New York offered to give Toys R Us gift certificates to 36 healthy black and Hispanic elementary school pupils if the children agreed to enroll in a medical study that required them to take a potentially life-threatening drug.

The three-year study, recently revealed by a patient advocacy group, brought back memories of the infamous Tuskegee experiments that came to light in the 1970s, in which doctors withheld syphilis treatments from black men so they could observe the disease's progression. Congress quickly convened two hearings, and federal officials launched an investigation.

Now, as additional details have begun to emerge, officials are realizing that the New York study is far more nuanced and complicated than was initially thought. It is no longer clear whether the study endangered the children, or whether it was racist.

Indeed, more than anything, the controversy highlights the surprisingly thin line that can separate a justifiable human experiment from a research abuse. And it shows that important issues relating to the ethics of human research remain unresolved, even as U.S. medical research is poised to expand at an unprecedented rate.

Medical research is changing dramatically. Modest studies aimed at answering straightforward questions are giving way to large, complex research projects freighted with social and ethical baggage, such as those relating to reproductive technology or genetic predispositions.

Public perceptions are shifting too. "Guinea pig" was once a pejorative term, but many people with AIDS, cancer and other serious diseases are now demanding access to experimental drugs as their best hope for survival.

Those scientific and social changes have led to a sharp increase in the number of people participating in experiments, and have strained the nation's system of protections for research volunteers. With Congress now preparing to double the budget of the National Institutes of Health in the next five years, many legislators, ethicists and patient advocates believe it is time to upgrade that fragile system.

"It's a very special privilege to use human beings in research," said Vera Hassner Sharav, the activist who first rang alarms about the New York study three months ago. "It must not be handled cavalierly."

Redefining the rules will not be easy, however, because human experimentation raises profound moral questions, the answers to which have remained elusive even to philosophers and ethicists who have pondered them for years:

When, if ever, is it appropriate to conduct research on minors or the mentally incompetent, who are by definition unable to give legal consent?

If such research is inappropriate, then how can scientists develop drugs and therapies for these needy and often underserved patient groups?

Is there anything wrong with offering cash or other incentives to research volunteers, since the volunteers themselves rarely benefit from the research?

Or might incentives seduce poor or medically desperate people into taking unwise risks, and perhaps foster the creation of an underclass of professional human guinea pigs?

"Research most often requires that some people get exposed to risk for the sake of knowledge that will benefit others," said R. Alta Charo, a professor of law at the University of Wisconsin at Madison. "But there are deeply troubling questions about the distributive justice of how we decide who will bear that risk and who will not."

Evidence of Abuses
No one knows how many people participate in human research in this country. There is no central repository for such information, and no system for keeping track of how many are harmed.

But there is indirect evidence that abuses are taking place.

In June, a Department of Health and Human Services report concluded that the federal system for protecting human research subjects was breaking down. The report found that institutional review boards (IRBs), which judge the scientific and ethical merit of proposed human studies, are overburdened with work, staffed by insufficiently trained people and subject to conflicts of interest.

That report did not cite specific improprieties, but in 1995 the HHS Office of the Inspector General reviewed human tests of four experimental medical devices. In three out of four cases, physicians tested the devices in more people than they were supposed to or continued the experiments longer than they had permission to.

Moreover, in half the cases, volunteers were not adequately informed of the risks. In some cases, people were asked to sign consent forms only after the experimental devices had been surgically implanted inside them.

Additional evidence of trouble arose last year as several people testified before the National Bioethics Advisory Commission about alleged abuses they or their family members had endured in psychiatric research studies.

Robert and Gloria Aller of Los Angeles described a University of California study in which their schizophrenic son, Gregory, had enrolled. Gregory had been doing well, earning a 3.8 grade point average in college and working 15 hours a week. But the study, which started in 1989, required him to discontinue his medication. Deprived of his medicine, he became confused and violent and lost his ability to concentrate. Years later, Robert Aller said, Gregory still has not fully recovered.

Only after a federal investigation was launched did the Allers learn that more than 90 percent of previous subjects in that experiment had also relapsed, a fact they believe would have dissuaded Gregory from enrolling, had he been told about it. One participant, Tony LaMadrid, did not survive the experiment. During a portion of the study that did not include regular doctor visits, he jumped to his death from the roof of UCLA's engineering building.

"They claim that care in a research setting is better than the deficient care you'd get in the community," Aller said in an interview. "But [LaMadrid] was merely seen as a source of data."

A report by the federal Office for Protection From Research Risks (OPRR) criticized that study's consent forms for failing to properly warn participants that the research was likely to trigger a relapse. Now the Aller and LaMadrid families are suing the university.

"The current system was built for a research world that does not exist any more," said George F. Grob, an HHS deputy inspector general. "It is brittle, strained and I think even cracked. We certainly need a better one."

Children as Subjects
The New York case, which involved minority children ages 6 to 10, highlights many of the issues that ethicists and regulators will face as they consider how to fix that system.

The research, which ended in 1995, was designed to study the biology and sociology of criminal behavior. Physicians at the New York State Psychiatric Institute and three other hospitals recruited the younger brothers of adolescents who had been arrested for various crimes. The goal was to identify which younger brothers might be at increased risk for trouble themselves, and intervene with counseling or other methods before it was too late.

As part of the study, researchers gave each younger sibling a dose of a drug called fenfluramine to help measure levels of a brain hormone implicated in antisocial behavior. When the work was published last year, it caught the attention of Sharav, director of Citizens for Responsible Care in Psychiatry and Research.

Sharav objected to several aspects of the study.

First there was the issue of using minors in a study that was nontherapeutic; that is, a study intended to answer general scientific questions and not to benefit the children themselves. Since children cannot fully evaluate for themselves the risks and benefits of participating in a study, some argue they should never be subjected to a research risk if they do not stand to personally benefit. Others, however, argue that the only way to develop new treatments for children is to study them directly.

Federal regulations try to address those conflicting views by precluding, in almost all instances, the use of minors in any nontherapeutic research that poses a "more than minimal risk." That sounds reasonable, Sharav said, except that "nobody has defined 'risk' or 'minimal' or 'more than minimal.'" And as the New York case shows, it is not obvious how to distinguish between those levels of risk.

For example, fenfluramine was never approved for use in minors. And although it was approved for use as a diet drug in adults, it was pulled off the market by the Food and Drug Administration last September after it caused several deaths. In addition, as part of the fenfluramine test, the children had to fast for 18 hours and had a catheter placed in a vein from which multiple blood samples were drawn over several hours, a procedure that left some feeling nauseated and headachy.

Those details alone have convinced some people that such a test is unconscionable in children. The fact that it was approved by an IRB, said Rep. Christopher Shays (R-Conn.) at a hearing last month, is evidence that "the current system of bioethical review has failed miserably."

Not so, replied B. Timothy Walsh, co-chair of the Psychiatric Institute's institutional review board. Scientists have for many years used fenfluramine as a brain hormone stimulant in children with no apparent ill effects, Walsh told Congress. The drug's connection to heart damage was unknown when the experiments were conducted. And in any case, only a single small dose was used in the study, as compared with the much higher doses and longer periods of use that, when combined with another drug, have been shown to be dangerous.

All told, the fenfluramine test posed a risk no greater than other "routine physical tests," concluded Danny Pine, one of the researchers involved in the study, in a 1995 memo to the institute's IRB.

As it turns out, the institute's IRB deemed the study to pose "more than minimal risk" but approved it anyway, a decision that highlights another ambiguity in the regulations on human research. According to federal rules, "more than minimal" research risks are allowed in children only if the study promises to yield useful information about the children's "disorder or condition" (and even then the research must pose no more than a "minor" risk above minimal risk).

But what disorder or condition afflicted these children? The researchers noted in their study proposal that the younger siblings of juvenile delinquents have increased odds of eventually getting into trouble themselves. But it would be a stretch, critics said, to consider these young boys as having a "disorder."

Then there is the issue of whether it is appropriate to limit a study to certain racial groups.

The New York researchers initially proposed conducting their study on black and Hispanic children only. Their rationale remains unclear (and they have refused to speak to reporters), but scientists often try to work with as homogeneous a population as possible, lest racial or other variables confound their results.

The IRB rejected that aspect of the study design and demanded that whites be eligible as well. Now investigators want to know why, in the end, only blacks and Hispanics were included. Was it a reflection of the mostly minority population of the neighborhood where the study was conducted, as Walsh has suggested, or were whites systematically screened out of the study?

The New York researchers also offered the youngsters' parents $100 for their children's participation, and approached the children directly, offering them $25 gift certificates for toys and telling them that their participation would earn money for their families. That raises another issue that has dogged human research for decades: recruitment techniques.

Cash payments and other incentives are commonplace in medical research. But they are offered as compensation for a person's time, not as a fee for accepting risk, said Gary B. Ellis, head of the OPRR, and amounts are supposed to be modest enough so as not to tempt people to waive their better judgment.

The line between acceptable and exorbitant compensation is subjective, but the issue goes beyond the question of "How much?" Subtle issues of power must also be considered. When the youngsters were told that their families would be given cash, for example, might some of them have worried that their parents, mostly poor, would be angry if they refused to participate?

Moreover, the parents were first contacted by researchers who had gotten their names through the judicial system that had arraigned their older children. Might some parents have worried that declining to participate would adversely affect their children's upcoming court cases?

The June HHS report cites several examples of recruitment ads that are decidedly lopsided in their descriptions of the risks and benefits of participation in research. One newspaper ad read: "Speed or Cocaine? Need help getting clean? Free Treatment & Medication! Repeat callers welcome!!! Get Paid $$$." As is typical of such ads, it made no mention of risk.

"The kind of ads we've seen present a real problem," said Mark Yessian, a regional HHS inspector general. "When you see an emphasis on cash payments and benefits, and little or no mention of risks, that's a real concern."

Of course, ads are just the first round of recruitment; if the system is working well, then researchers will explain the risks in proper detail when a volunteer arrives. But as Gregory Aller and others learned, full and fair disclosure is not always provided.

"This is research. It is not standard treatment," Yessian said. "It does involve risk and one should fully recognize that from the start."

Glenn's Principles
The New York study is now under investigation by the OPRR, and a final judgment is not expected until fall. But the lack of easy answers to the questions raised by the case suggests it will be difficult to draft new, ethically sensitive protections for human research subjects.

The National Bioethics Advisory Commission has spent much of the past year analyzing the various ethical implications of human research. It has called for a more thorough informed consent process for all volunteers and tighter review of research, both public and private, by scientific and ethical review boards. Those are also the central principles of a human subjects protection bill sponsored by Sen. John Glenn (D-Ohio) that has languished since its introduction last year.

The commission has also expressed concern about the lack of explicit protections for the mentally ill, who frequently are subjected to "wash out" studies, in which helpful medicines are withdrawn, and "provocation" studies, in which symptoms are intentionally exacerbated for study. A commission report, due this fall and presaged already in a public draft, calls for added protections for this vulnerable population.

Meanwhile, a bill introduced in June by Rep. Edolphus Towns (D-N.Y.) would require that all biomedical or behavioral studies involving minors or people with mental disabilities be made public on a regular basis, a move that would facilitate community oversight of such research.

But it remains to be seen whether such steps will pass muster with the biomedical research lobby, or even with some patients. At a recent congressional hearing, the Association of American Medical Colleges and the Pharmaceutical Research and Manufacturers of America argued that the nation's system of research protections is working well. They cautioned against adding new restrictions that could slow the advancement of life-saving research.

Many patients too are wary of government efforts to restrict access to medical research.

"We have a society that tends to view everything that's new as better, and people are militating for access to these drugs, saying, 'Who are you to be protecting me against my own choices?'" said professor Charo.

Unfortunately, Charo and others said, while some trials are indeed testing tomorrow's miracle drugs, others are testing useless or even harmful compounds, or are so poorly designed that they won't yield any useful information at all. Lawmakers and the public eventually will have to decide to what extent the government should help people distinguish between those kinds of research, and to what extent people will have to heed a new version of an old aphorism: "Volunteer beware."

Copyright 1998 The Washington Post Company