there are issues of nutrition and genetic susceptibility (Interview S32, emphasis added).

      In addition to identifying the challenges of toxicology testing, the consensus critique emphasizes the uncertainty that arises in extrapolating data from in vivo testing in animal model systems to risks in humans. This aspect of the consensus critique is highlighted especially by regulatory scientists. For example, one regulator stated starkly, “people . . . worry about the relevance of animal studies” (Interview P03). Another regulatory scientist expressed a similar concern, noting that the two-year rodent bioassay “gives you the answers: (1) this does cause cancer in rodents; (2) this does not cause cancer in rodents; (3) this might cause cancer in rodents. Then, you have to extrapolate to humans. This entire process is difficult, slow, and expensive” (Interview P02).

      Second, and related, environmental health scientists emphasize the consequences of the expensive and time-consuming nature of the two-year rodent cancer bioassays: specifically, the tremendous number of high production volume chemicals that have not yet been tested. In fact, scholars estimate that, of the chemicals on the EPA’s high production volume list (that is, chemicals produced by industry in high volume), 93% lack basic chemical screening tests and 43% have not been subject to basic toxicology testing (Brown, Mayer, & Linder 2002; see also Altman et al. 2008). In explaining her enthusiasm for genomics research, a toxicologist referred to the need to “break the bottleneck” in testing and “deal with the backlog of chemicals that are still waiting to be tested” (Field Notes August 2002). Another toxicologist stated:

      The idea is that we’d like to be doing it better. We’d like to be doing it cheaper. We’d like to be doing it more quickly. Because you know, at eight compounds a year, and millions and millions of dollars [per compound] we’re never even going to make a dent in everything that’s out there that probably needs to be tested (Field Notes August 2002).


      In reflecting on his years as director of the NIEHS and National Toxicology Program (NTP), Olden also invoked issues of efficiency in reference to his support of efforts to establish new, molecular techniques for use in risk assessment:

      We were just using a few very standard assays that had been in existence for years, and there are still people who tell me we should still be doing that. But I felt that we were not providing a very good product—a quality product; and the efficiency was very poor. In other words, we were spending too much money and generating very little useful information (Olden, Oral History Interview July 2004).

      Even scientists who believe that animal bioassays provide the most reliable data for risk assessment note that the current system is unable to keep pace with the demand for testing. In fact, concerns about the backlog of chemicals awaiting assessment date back to the NCI’s Cancer Bioassay Program, the precursor of the NTP (Smith 1979). However, the volume of testing has decreased over time, as the studies conducted have become more complex and costly. As this toxicologist noted, “One change that has occurred over the years is a decrease, a significant decrease in the number of chemicals that NTP studies for carcinogenicity. Whereas in the early ’80s there were about 50 per year, in the ’90s it drifted down to somewhere around 10 per year” (Interview S96).

      Third, the consensus critique positions unexamined variation in susceptibility among humans as a source of uncertainty in the risk assessment process:

      We do most of our assessments based upon the typical American. We think there is going to be so many cancers averted, so many reproduction problems and so on and so forth. We do not consider the fact that individuals are different . . . granted, we do look at subpopulations . . . but [not] . . . from a genomic point of view, whereas, in fact, that’s really what we’re talking about (Interview P03).

      Currently, risk assessors take the value that toxicology testing has determined to be an acceptable exposure limit for a standard human (e.g., the no observed effect level [NOEL]) and divide it by ten (Smith 1996). Although many regulatory scientists believe that this is a conservative practice that succeeds in protecting susceptible individuals, such ten-fold factors are seen as arbitrary and burdensome by regulated industries (Interview P03), and environmental health advocates question whether they are truly protective (Interview P06). This also raises the more general issue of how to protect particularly vulnerable individuals, who, with few exceptions, are not specifically protected under existing laws and regulations.
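      To make the arithmetic concrete (the NOEL value here is hypothetical, chosen only for illustration), applying a single ten-fold factor to an animal-derived NOEL yields the acceptable human exposure:

\[
\text{acceptable human exposure} = \frac{\text{NOEL}}{10} = \frac{1.0~\text{mg/kg/day}}{10} = 0.1~\text{mg/kg/day}
\]

      In standard regulatory practice, several ten-fold factors may be compounded, for example one for animal-to-human extrapolation and another for variation among humans; it is precisely this compounding that some regulatory scientists regard as conservative and regulated industries regard as arbitrary.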

      In their articulations of the consensus critique, environmental health scientists highlight the fact that such broad domains of uncertainty provide opportunities for expensive and time-consuming legal challenges to risk assessments (Michaels 2008). In the words of a toxicologist, “people who want to promote political uncertainty will use scientific uncertainty as a basis” (Field Notes, NIEHS July 2002).6 However, even as they acknowledge the political (and, arguably, economic) interests that motivate controversy in the environmental health arena, scientists emphasize the potential of molecular genetic and genomic techniques to address the uncertainties and limitations in current toxicological testing practices and to improve the capacity of environmental health research to contribute to risk assessment and regulation (e.g., Paules et al. 1999; Olden 2002; Simmons & Portier 2002).

      The Prognostic Promise of Molecular Techniques

      The prognostic component of the consensus critique positions gene-environment interaction, broadly construed, as a means of addressing these challenges. Advocates of molecular genetic and genomic approaches claim that, by reducing the scientific uncertainty that has previously made environmental health research particularly vulnerable to challenge in the context of risk assessment and regulation, these techniques will make it possible to assess a larger volume of chemicals more rapidly and more definitively. The specific molecular genetic and genomic technologies and methods that advocates champion vary substantially across subfields.

      Toxicologists have been particularly concerned to articulate how genomic technologies can reshape toxicology testing. For example, they point to four distinct, though not mutually exclusive, means by which gene expression profiling, the signal technology of toxicogenomics, could reduce uncertainty in the risk assessment process. First, they promote gene expression profiles as a means of elucidating mechanisms of toxicity and enhancing the knowledge base of toxicology. Second, they suggest that gene expression profiles may provide a basis for a new, molecular rationale for the classification (and reclassification) of toxicants (that is, grouping toxicants that share similar gene expression profiles). Third, and related, scientists are actively pursuing the potential of gene expression profiles to enable the prediction of the toxicity of unknown compounds and thereby provide a basis for their classification (that is, without undergoing the two-year rodent bioassay). Fourth, they point to the possibility that gene expression profiles could serve as new molecular biomarkers of genetic susceptibility. Together, scientists argue, these innovations could increase the speed, efficiency, predictive capacity, and specificity of toxicology testing, making risk assessment more comprehensive and more certain (Bartosiewicz et al. 2000; Bartosiewicz et al. 2001; Burchiel et al. 2001; Fielden & Zacharewski 2001; Hamadeh et al. 2002a; Hamadeh et al. 2002b; Paules et al. 1999; Pennie et al. 2000; Tennant 2001).
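      The second and third of these proposed uses can be sketched briefly. What follows is a minimal illustration rather than any actual toxicogenomic classifier: the compound names, expression values, and toxicity labels are invented for the example, and real applications involve thousands of genes and more sophisticated statistical models. The underlying logic, however, is the one scientists describe: an untested compound inherits a provisional classification from the reference compound whose gene expression profile it most resembles.

import numpy as np

# Hypothetical reference library: gene expression profiles (log fold-changes
# across a small panel of genes) for compounds whose toxicity class is
# already known from conventional bioassays. All values are illustrative.
reference_profiles = {
    "compound_A": (np.array([2.1, -0.3, 1.8, 0.9, -1.2]), "hepatotoxicant"),
    "compound_B": (np.array([1.9, -0.1, 2.0, 1.1, -0.9]), "hepatotoxicant"),
    "compound_C": (np.array([-0.8, 1.5, -0.2, -1.1, 0.7]), "non-toxic"),
}

def profile_similarity(x, y):
    """Pearson correlation between two expression profiles."""
    return float(np.corrcoef(x, y)[0, 1])

def classify(unknown_profile, references):
    """Assign the unknown compound the class of its most similar reference."""
    best_name, best_class, best_score = None, None, -1.0
    for name, (profile, tox_class) in references.items():
        score = profile_similarity(unknown_profile, profile)
        if score > best_score:
            best_name, best_class, best_score = name, tox_class, score
    return best_name, best_class, best_score

# An untested compound whose profile resembles the known hepatotoxicants
# receives a provisional classification without a two-year rodent bioassay.
unknown = np.array([2.0, -0.2, 1.7, 1.0, -1.0])
print(classify(unknown, reference_profiles))

      In this toy version, classification rests on a single nearest neighbor; the appeal of the approach described in the literature is that, once a sufficiently large reference library exists, such similarity-based predictions could substitute for, or at least prioritize, the far slower animal bioassays.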

      Some foci of the consensus critique have been taken up differently across specific subfields. The issue of human genetic variation in response to environmental exposures is the most prominent example; it is a central focus of initiatives in epidemiology and toxicology, as well as the defining focus of the emerging field of environmental genomics. In the context of risk assessment, research on genetic susceptibility to environmental exposures is promoted as a means of providing more precise estimates of risk for specific individuals and subpopulations, replacing a one-size-fits-all approach with one that acknowledges variation among human bodies. Testifying in support of the NIEHS budget for fiscal year 2002, then-NIEHS Director Olden told the U.S. Congress that “individuals can vary by more than two-thousand fold in their capacity to repair or prevent damage following exposure to toxic agents in the environment” (Olden, Fiscal Year 2002 Budget Statement, emphasis added). This argument has also been prominent in publications by the NIEHS leadership:

      At present, human genetic variation is not explicitly considered in estimating dose-response relationships,