“Two entirely different standards emerge [in interpreting the meaning of ‘minimal risk’] depending upon whether researchers consider the daily or routine risks of harm encountered by some or all children,” wrote Loretta Kopelman, a professor at the Brody School of Medicine of East Carolina University and a member of the Institute of Medicine’s Committee on Research on Children. “With the first interpretation, or relative standard, the upper limit of harm would vary according to the particular group of subjects; with the second, or absolute standard, the upper limit would be the risks of harm encountered by all children, even wealthy and healthy children.” Kopelman reminded readers of the terrible consequences and ethical quandaries of such interpretative variation. In the 1960s and 1970s, for example, mentally retarded children had been used as subjects in the infamous Willowbrook hepatitis studies in which children were given hepatitis using the rationale that the “disease was endemic to the institution [and thus] the children would eventually have gotten hepatitis.”58
As the court of appeals’s ruling sank in, its implications appeared more profound and troubling. In an article titled “Canaries in the Mines,” Merle Spriggs, a medical ethicist at the University of Melbourne’s Murdoch Children’s Research Institute, gave perhaps the most cutting critique of the Johns Hopkins research: “The argument that the [KKI] families benefitted because they were not worse off can be compared with the arguments used in the infamous, widely discussed mother-child HIV transmission prevention trials in developing countries,” Spriggs said, referring to medical trials sponsored in the previous decade by the U.S. government in which researchers, seeking an inexpensive, effective way of reducing HIV transmission between mothers and children in African countries, provided AZT treatments to some mothers while comparing them with untreated “controls” who received only a placebo. Some argued that the research was justified in part because the African women who received the placebo would not normally have received any treatment at all, though ethical concerns would have precluded the research from being conducted in the United States. Both the HIV transmission prevention trials and the KKI research, Spriggs pointed out, “involved the problematic idea of a local standard of care,” an underlying assumption that “risky research is less ethically problematic among people who are already disadvantaged.” If this “relativistic interpretation of minimal risk” was considered acceptable, it opened a Pandora’s box of deeply disturbing issues and could virtually unleash the research community on poor people. She warned that such a stance “could allow children living in hazardous environments or who faced danger on a daily basis to be the subject of high risk studies.”59
Above all, what the KKI research effort exposed was a fault line that divides poor people from the rest of Americans and extends far beyond the ethics of occasional research. No one would suggest that a middle-class family allow their children to be knowingly exposed to a toxin that could be removed from their immediate environment. But for decades, as a society we have accepted that poor children can be treated differently. We have watched for over a century as children have, in effect, been treated as research subjects in a grand experiment without purpose. How much lead is too much lead? What are the limits of our responsibility as a society to protect those without the resources to protect themselves? As we confront new information about environmental toxins like mercury, bisphenol A, phthalates, and a host of new chemicals that are introduced every year into the air, water, and soil, whose reach extends beyond the poor, the issues raised by the KKI story—and by the modern history of the lead wars more generally—are issues that, by our responses, will define us all.
The history of lead poisoning and lead research is paradigmatic of the developing controversies over a range of toxins and other health-related issues now being debated in the popular press, the courts, and among environmental activists and consumer organizations, as well as within the public health profession itself. Public health officials struggle mightily with declining budgets, a conservative political climate, and a host of new and challenging health-related problems. Today, the public health community continues to have the responsibility to prevent disease. But it has neither the resources, the political mandate, nor the authority to accomplish this task, certainly not by itself. It is an open question whether it has the vision to help lead the effort, or to inspire the efforts needed.
Whatever the limitations of the bacteriological and laboratory-based model that public health developed in the early part of the twentieth century in response to the crises of infectious disease, there is no arguing that this model provided a coherent and unifying rationale for the profession. But, as we witness the emergence of chronic illnesses linked to low levels of toxic exposures, no powerful unifying paradigm has replaced bacteriology. Some suggest that the “precautionary principle” can serve as an overall guide, arguing that it is the responsibility of companies to show that their products are safe before introducing them into the marketplace or the environment, that we as a society should err on the side of safety rather than await possible harm. By adopting this approach, public health would reestablish prevention as its primary creed. Others insist that a renewed focus on corporate power, economic inequality, low-income housing options, racism, and other social forces that shape health outcomes is most needed to counter the antiregulatory regime of early twenty-first-century America. These ideas, or a more unified alternative, however, have yet to galvanize the field or the broader public, at least in the United States.
In this book we look at the shifting politics of lead over the past half century and the implications for the future of public health and emerging controversies over the effects of other toxins. The developing science of lead’s effects, the attempts of industry to belittle that science, the struggles over lead regulation, and the court battles of lead’s victims have taken place against the backdrop of a changing disease environment and, in more recent decades, an emerging conservative political culture, both in the broader society and in the public health profession.
Over the last five decades, researchers have shown that the effects of lead, at ever-lower levels of exposure, represent a continuing threat to children, a tragedy of huge dimensions. In the coming decades, without substantial political and social change, we will be placing millions more children at risk of life-altering damage. This research, combined with declining public will and resources to remove lead from children’s environment, has left the public health community and society at large with a difficult dilemma, not unlike that which Julian Chisolm and his young colleague Mark Farfel faced: Should we insist on the complete removal of lead from the nation’s walls, through some combination of full abatement and new housing, and therefore a permanent solution to this century-old scourge? Or should we search for a “practical” way to reduce the exposure of children to “an acceptable” level?
If we choose the former, the danger is that, without strong popular and political advocacy and a public health profession rededicated to the effort, nothing will be done—complete abatement may well be judged too costly, and we may encounter an ugly unwillingness to address a problem that primarily affects poor children, many of them from ethnic and racial minority groups. If we choose the latter, and if the dominant political forces give at best only grudging support to this ameliorative effort, the danger is that the children of entire communities will continue to be exposed, albeit at gradually declining levels, to the subtle and life-altering effects of lead. Public health as an institution, in trying to define what an “acceptable” level is, could lose in the process its moral authority and its century-long commitment to prevention, yet with no viable coherent intellectual alternative. This is a conundrum that affects us all, for we console ourselves with partial victories, often framed as progress in the form of harm reduction rather than prevention. We have become willing to settle for half measures, especially when what is at issue is the health of others, not of oneself. Isn’t this, so to speak, the plague on all our houses? In this sense, we are all complicit in the “experiment” that allows certain classes of people to be subjected to possible harm in the expectation of avoiding it ourselves.
2 From Personal Tragedy to Public Health Crisis
All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time.
BRADFORD HILL, 1965
By