Be yourself…

While other readings (Ladd 1970) focused on organizations as evil, or as putting a barrier around our individuality in order to serve their higher purposes, this reading posits that organizations are only as strong as the individuals making them up (Bender 1992). To Bender, an organization should be considered the sum of its individuals. Along this line of logic, if a majority of individuals within an organization decide to act immorally, then the ‘net’ morality of the organization will be negative. While Ladd (1970) suggested that institutional immorality would suppress individualism, Bender (1992) suggests that it simply represents a greater barrier over which individuals must climb in order to bring awareness to immoral behavior.

While individuals may be part of an organization that could be considered ‘systemically’ evil, a person has the choice to go against the flow and call out what his or her ‘internal judge’ deems morally wrong (Bender 1992). Ethics and morality are formed from an early age, and are certainly shaped by our own life experiences and organizational memberships, but individuals still have the freedom to choose their own path. To borrow from Bender’s analogy, we can choose not to put on a ‘mask.’

Throughout the semester, we have focused on a variety of cases where unethical situations may have occurred. As Bender (1992) mentioned, and as popularized by Freakonomics (Levitt and Dubner 2005), humans respond to incentives (e.g. money, power) that shape behavior. To which incentives should we respond? The answer is never clear and is always situation-specific. Although there are ethical theories that can provide meaning and context through which we can address ethical problems, in the end it is the individual who has to make that distinction. We are composites of our life experiences, so our own internal moral compass is constantly being tested, revised, and retested, just like the science and engineering tasks upon which we focus. Our lives, our experiences, and our paths in life are all individual choices. Choice is one thing that makes us human, so in the end only we, as individuals, can choose to be moral.

Works Cited:

Bender, K. 1992. The Mask: The Loss of Moral Conscience and Personal Responsibility. The Elie Wiesel Foundation for Humanity.

Ladd, J. 1970. Morality and the Ideal of Rationality in Formal Organizations. The Monist 54(4):488-516.

Levitt, S.D., and S.J. Dubner. 2005. Freakonomics. New York, NY: HarperCollins.

The Fourth Estate…

There always seems to be a tension between the media (and their obsession with black-and-white issues and succinct storylines) and the nuances of complex scientific knowledge (Miller 2009, Sismondo 2010). Miller (2009) even goes so far as to say that ‘environmental journalists’ could benefit from additional training in order to better understand the science that they are covering, especially since he claims that these journalists are essentially assigned their task. On the other hand, Sismondo (2010) differentiates ‘science journalists’ as those who have some understanding of the field they cover. I get the feeling that ‘environmental journalists’ are drawn from a general pool of available reporters, whereas ‘science journalists’ are at least scientifically trained and have a vested interest, at least based on the ideas set forth in the Miller (2009) and Sismondo (2010) articles. Even so, ‘catastrophic’ events can energize any type of journalist.

The 1984 environmental disaster in Bhopal, India, and the subsequent media storm (Hazarika 1994) make an excellent case study of the interaction between media, industry, and organizational and governmental policy. In his article, Miller (2009) notes the role of media executives in choosing what is ‘newsworthy’. As such, the stories that get the most attention are those having conflict and novelty (Miller 2009). The Bhopal disaster was rife with conflict between a major American multinational chemical company (Union Carbide), government regulators, and the affected population around Bhopal (Hazarika 1994). The fact that AMERICA was involved helped elevate the story to ‘newsworthy’ status (Hazarika 1994). Pictures of dead bodies, afflicted children, and an unrepentant multinational corporation were, it seemed, red meat for the media (Hazarika 1994, Miller 2009).

Even so, Miller (2009) noted that a major objective of the media is to balance the sides of a story. In the Bhopal case, both industry and government regulators were equally chastised for failures (Hazarika 1994). This balance does come at a cost. Like many environmental and engineering disasters, Bhopal is a complex case, and yet the media need to fill column inches or minutes of air time (Miller 2009, Sismondo 2010), which seems to fall under the dominant model. In the dominant model (Sismondo 2010), complex scientific issues are simplified to reach the broadest audience. In the case of the media, or ‘infotainment’ (Miller 2009), does the dominant model of ‘simplification’ imply the deficit model (Sismondo 2010)? In other words, is the media’s rationale that science is too complex to understand, so there is a need to ‘dumb it down’ for the public?

Another timely point is that the media are reactive, not proactive (Hazarika 1994), and lose interest in (or reduce the column inches or time slots devoted to) the long-term outcomes of a story such as Bhopal. Miller (2009) even noted that media do not think that progress is newsworthy. For example, one of the major outcomes of the Bhopal disaster was an update of the Superfund bill (known as SARA) and the addition of ‘Title III,’ which called for chemical industries to report their hazardous waste and inform the community (the ‘Right to Know’). Hazarika (1994) reported that newspapers “ignored Title III altogether when the bill passed” even though it was the most salient issue to people who may be affected by these disasters in the future.

Although the story of Union Carbide and Bhopal had no major whistleblowers, according to Davis (1998), it should have. Two of Davis’ (1998) criteria for when whistleblowing is a good idea, 1) poor management and 2) organizational trouble, were predominant at Union Carbide (Hazarika 1994). The plant in Bhopal was so mismanaged and disorganized that it was considered for closure by Union Carbide prior to the disaster (Hazarika 1994). Warning signs and small leaks were ignored until the worst happened. Although whistleblowing can act as a check on an organization’s improper practices, Davis (1998) argues that preventing the need for it, through changes to organizational procedures, education, and the structure of risk management, is preferable. This incident was unique in that it uncovered such a systemic problem in the chemical industry (Bhopal, Institute (WV), and other leaks) that regulation of the industry emerged organically (Hazarika 1994). It took the disaster, along with the spur of negative media coverage, to lead to changes in the industry as a whole. Despite the tension between policy, media, and science, in this case the media coverage was the catalyst for massive organizational change. Whistleblowing and the media seem to be important checks on systems that may otherwise falter under their own improper practices.

Works Cited:

Davis, M. 1998. “Avoiding the Tragedy of Whistleblowing.” In Thinking Like an Engineer: Studies in the Ethics of a Profession, pp. 73-82. New York, NY and Oxford, UK: Oxford University Press.

Hazarika, S. 1994. “From Bhopal to Superfund: The News Media and the Environment,” pp. 1-14.

Miller, N. 2009. “The Media Business.” In Environmental Politics: Stakeholders, Interest, and Policymaking, 2nd ed., pp. 149-165. New York and London: Routledge.

Sismondo, S. 2010. “The Public Understanding of Science.” In An Introduction to Science and Technology Studies, 2nd ed., pp. 168-179. West Sussex, UK: Wiley-Blackwell.

Just asking…

Much of the information relating to local knowledge, public participation, and environmental awareness from this week’s readings seems to be epitomized at my field site in the Coweeta Creek basin, North Carolina, a Long Term Ecological Research (LTER) site. The current LTER funding for Coweeta sets aside a portion of the allotment for socio-economic studies, and our site is currently one of the major pioneers in melding ecological science and socio-economic input. The sociologists working on this component use questionnaires (similar to those described by Corburn 2005) to assess how the local communities in NC and GA value watersheds in their livelihoods. We also use community input to help find sites that we may use to monitor changes to forest and stream ecosystems in the context of local land use and potential future use. Economists on our team work with local landowners to put a value on the services of streams and forests (e.g. Sismondo 2010), to better inform our science at the LTER. While much of the local input at Coweeta does not qualify as “environmental advocacy” (Miller 2009), and while there is still some distrust of our science and the governmental agencies funding us, the community still seems invested in what we do. There are many “Coweeta schoolyard” programs, where young children are given tours of the site and allowed to identify plants and animals and perform simple experiments. There is also a great deal of hiking, hunting, and touring that occurs at the site to help educate and get input from the community. While our model is by no means perfect, there seem to be some positive interactions occurring to bridge the gap in our knowledge and experiences.

At the complete other end of the spectrum, the agencies and ‘scientists’ involved in the DC lead-in-water crisis could have benefitted from a little public input. Similar to the activists standing up in outrage to environmental damage by industries (Miller 2009), DC citizens stood up to the agencies claiming to hold public health in high regard (Lambrinidou 2010; also see the letter from the national coalition mentioned below). The continued unwillingness of the CDC to remove its erroneous 2004 MMWR report from its website (see Edwards 2010 and the national coalition letter) speaks volumes about its trust in the public: it has none. The agency’s behavior implies that it feels its status as a public health agency confers ‘expert’ status (Sismondo 2010), such that the public can offer nothing of importance to its studies. Even worse, the CDC is only increasing ignorance in the public arena by not retracting the paper (Edwards 2010, national coalition letter). As noted by Yanna and others, the report has been used by other municipalities to justify doing nothing, which only increases the danger to public health. As Dr. Edwards’ letter (2010) to Secretary Sebelius notes, the public is very willing to provide input based on their experiences with lead in water; how was the CDC unable to get access to these people? The answer, it appears, is that they did not even try. In the CDC’s talking points attached to the unpublished letter from the national coalition below, the agency seems to suggest that it will attempt to “improve the quality of state and local surveillance data.” All it needed to do was knock on a few doors and spend a little time talking to residents. The public is smarter than we, as scientists or agency representatives, sometimes want to admit; but when there is a cause (e.g. environmental pollution or lead in water) that can affect how people live their lives, the public will come together to make a stand.

Works Cited:

Corburn, J. 2005. “Street Science: Characterizing Local Knowledge.” In Street Science: Community Knowledge and Environmental Health Justice, pp. 47-77. Cambridge, MA and London, UK: The MIT Press.

Edwards, M. 2010. Unpublished letter to the US Department of Health and Human Services (5/27), 2 pages.

Lambrinidou, Y. 2010. WAMU commentary (July 8), http://wamu.org/news/10/07/08/commentarylead_in_dcs_wateryanna_lambrinidou.

Miller, N. 2009. “The Growing Sophistication of Environmental Advocacy.” In Environmental Politics: Stakeholders, Interest, and Policymaking, 2nd ed., pp. 74-95. New York and London: Routledge.

National coalition of public health and environmental groups. 2010. Unpublished letter to the CDC requesting retraction of the 2004 MMWR publication (5/20).

Sismondo, S. 2010. “Expertise and Public Participation.” In An Introduction to Science and Technology Studies, 2nd ed., pp. 180-188. West Sussex, UK: Wiley-Blackwell.


“It’s not me…it’s you” and other questionable practices

It seems that the more an industry, person, or group is trying to hide (or lie about and cover up) vital information, the more it goes on the defensive. In the case of the DC lead-in-water crisis, WASA, the CDC, and the EPA seemed to point the finger at others; they held firm in their lies about blood lead and its link to drinking water because admitting otherwise would mean losing even more face (CDC 2010a, b). Even though work by the Washington Post, Rebecca Renner, and Marc Edwards demonstrated otherwise, these agencies spent a great deal of time calling those investigators names and otherwise behaving in an immature fashion. The lead industry behaved similarly throughout its history (Markowitz and Rosner 2002). Instead of admitting that lead was harmful, the industry went to extremes to discredit any scientists who used good science to dispute claims that lead was benign. One of the most ridiculous ploys from the lead industry was blaming industrial workers for their own elevated blood lead levels; the industry claimed that they liked their “Budweiser” too much. Additionally, the industry claimed that any issues with lead causing lower fertility in men were due not to the metal, but to the test subjects not being honest about having had sex prior to submitting a sample for testing (Markowitz and Rosner 2002). It seems that any logical human being could understand that these were nothing more than hasty attempts to hide what everyone in the public already seemed to know: lead is harmful in any amount.

Additionally, the Markowitz and Rosner (2002) chapter helped me understand the position of the CDC in terms of not wanting to link lead in drinking water to increased blood lead levels in Washington, DC. The 1969 industry-produced “Policy and Program of Childhood Lead Poisoning” seemed to be nothing more than window dressing, stating that paint was the major source of increased blood lead levels, not gasoline, exhaust, deposition, or anything else. Apparently the lead industry of the past and the DC agencies involved in the water crisis both believed that lead in liquid form is not important. It is unfortunate to see how some things do not change.

With so much risk present in today’s society from industry, it is easy to understand how the public would have a hard time trusting anything scientific or coming from industry itself (Jasanoff 2012). The fact that the lead industry had a small, insulated research group producing the data that claimed lead was harmless should have been a red flag to the public (Markowitz and Rosner 2002). As many authors have noted, including Freedman (2010), money is quite an impetus for generating ‘results’ that please funding agencies (i.e. industry, pharmaceutical companies, etc.). What is more surprising is that one group in which the public puts a great deal of faith, physicians, is not as honorable as one would expect (Freedman 2010). Even as billions of dollars in NIH grants fund medical research, as much as 90% of the work in this field is considered flawed, misleading, or wrong (Freedman 2010). This is disheartening to someone in my field, where we have to fight tooth and nail to get any amount of support. It is ironic that “NIH budgets are easier to justify” (Jasanoff 2012) compared to those for NSF proposals, while it seems that a greater proportion of biomedical research is prone to dishonesty. Even my primary care physician apparently does not read medical journals, such as the New England Journal of Medicine, because he feels that the studies are questionable. Maybe that is a medical opinion worth noting.

Works Cited:

CDC. 2010a. Notice to Readers: Examining the Effect of Previously Missing Blood Lead Surveillance Data on Results Reported in MMWR. MMWR 59(19):592, http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5919a4.htm.

CDC. 2010b. Notice to Readers: Limitations Inherent to a Cross-Sectional Assessment of Blood Lead Levels Among Persons Living in Homes with High Levels of Lead in Drinking Water. MMWR 59(24):751, http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5924a6.htm.

Freedman, D. H. 2010. Lies, Damned Lies, and Medical Science. The Atlantic (Nov.), pp. 1-12, http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/.

Jasanoff, S. 2012. “Technologies of Humility: Citizen Participation in Governing Science.” In M. Winston and R. Edelbach, eds., Society, Ethics, and Technology, pp. 102-113. Boston, MA: Wadsworth.

Markowitz, G. and R. Rosner. 2002. “Old Poisons, New Problems.” In Deceit and Denial: The Deadly Politics of Industrial Pollution, pp. 108-138. Berkeley, CA: University of California Press.


“Trust us…”??

When we were growing up, it seems that all of our parents gave advice on whom to trust: “firemen and policemen keep us safe,” “don’t take candy from strangers,” “respect your elders because they know what is best.” As we get older and begin to learn more about how the world works, we become better able to make our own judgments about who is trustworthy, or at least who should be. Ever since I was young, I have had a love of science, so I was drawn to my teachers and a scientist family member to feed my interest. I had naively believed that they represented the greater scientific community, in that they had integrity in the information they relayed to me and were interested in continuing to be a trusted source of information for a ‘new generation’ like myself. Even during my fourth year as a Ph.D. student, I still had that naïve hope that scientists would be truthful, moral, and unbiased in their data collection and conclusions. What a difference 9 weeks makes.

A recurring theme in the past few weeks has been the idea that scientists and engineers are important creators and distributors of information in the public and policy arenas. As Resnik (2011) clearly outlines, trustworthiness is a virtue toward which practitioners must strive. To me, this ‘public trust’ of scientists and engineers (Resnik 2011) suggests that there is an unwritten (and tacit) contract by which data are provided that help those in society make informed decisions about their well-being, and any attempt by professionals to do otherwise would be dishonest and unethical (Harris et al. 2009). Reading through Harris et al. (2009), and thinking back on issues involved in the DC Lead Crisis (e.g. Edwards 2010), I wonder whether the benefits of dishonesty (lying, data fabrication, etc.) outweigh the risks of getting caught. This is a purely ‘devil’s advocate’ approach, but the benefits to the various parties involved in the DC Lead Crisis cannot be ignored: Tee Guidotti made money as a WASA consultant, the CDC was able to claim ‘authoritative’ status with its erroneous MMWR report (and use it as ammunition against any claims contrary to its findings), and EPA R3 was even given awards for its despicable behavior. As Dr. David Lewis indicated during last week’s class, it is challenging to stand up (as he and Edwards did) against a monolith of bureaucracy, lies, and money that presents itself as the sole source of truth and objectivity. Given this unfortunate fact, these organizations (and the decision makers within them) seem to gain an inflated sense of invincibility, so there is no incentive for them to even attempt to be trustworthy (i.e. they can ‘lawyer up’ or resort to character assassination).

In spite of all of this, who were the losers? In the DC case, it was the public. There were potentially thousands of children who were impacted (Edwards 2010), and parents were unable to make informed choices on behalf of their children (Harris et al. 2009). Of course, Dr. Edwards had his time, finances, and character hurt as well, but the difference was that he was able to rise above it all and fight the system using his ethical judgment and scientific training. The actions of the CDC, WASA, DC DOH, and the multiple players within them were simply a ploy to discourage an informed response from those members of the public whose health was in the greatest jeopardy (and who thus had the least capability to stand up and be heard). These groups bet that no one would call them out, and mostly they won. Years after the crisis broke, one would think that these organizations would want to improve their image, given the obvious lack of trust they now have from the public, yet they continue to laud their incompetent science, retracting no reports and giving themselves ‘gold medals’ (Edwards 2010). The public trust is important, but what the CDC, WASA, DC DOH, and other government agencies seem to forget is the “public trust fund” and the tax dollars that gave them their jobs in the first place.

Works Cited:

Edwards, M. 2010. “Experiences and observations from the 2001-2004 ‘DC Lead Crisis’”: Testimony. U.S. House of Representatives’ Committee on Science and Technology, 111th Congress.

Harris, C. E., Jr., et al. 2009. “Trust and Reliability.” In Engineering Ethics: Concepts & Cases, pp. 115-134. Belmont, CA: Wadsworth.

Resnik, D.B. 2011. Scientific Research and the Public Trust. Science and Engineering Ethics 17:399-409.


Ignorance is bliss?

Even though I have been in science for less than a decade, I feel that one of the most important stumbling blocks for the field is communication. I have spent years learning about my field, new techniques, data collection and analysis, and how to publish, yet was in no way trained in how to deliver that knowledge in a public venue. Although all fields of science may be technical to those who are not ‘in the know,’ the public does in fact WANT to know how their lives are being affected (Hadden 1989, Markowitz and Rosner 2002, Bucchi and Neresini 2008). Industries and many scientific professions seem unwilling to acknowledge that the public has its own ideas and is willing to learn and participate in the process of moving science forward (Bucchi and Neresini 2008). Taken to an extreme, the public has a ‘right to know’ (Hadden 1989) about the impact of science, industry, and technology on their lives. As such, they also need a voice that will be heard by those ‘in the know’ who can integrate a public component into future science and technological advances (Markowitz and Rosner 2002, Bucchi and Neresini 2008).

In their article, Bucchi and Neresini (2008) mention a study where patients were asked medical questions to test their knowledge, while their physicians were asked to independently assess their patients’ level of understanding. Not so surprisingly, 76% of the patients were well informed about medicine, while only 50% of their doctors could “estimate their patients’ knowledge accurately” (Bucchi and Neresini 2008). Even more telling is that, given the statistics above, the doctors refused to alter their communication style, thus reinforcing stereotypes of perceived patient ignorance. On the other hand, both Markowitz and Rosner (2002) and Hadden (1989) address the idea that industry wishes to keep the public willfully ignorant and undermine the ‘right to know.’ Hadden suggests that industry simply would not reveal its environmental emissions, even as the surrounding communities were becoming sickened and pleading for information. Even more troubling is the fact that the lead industry, unwilling to acknowledge massive public health issues, chose to run a decades-long advertising campaign suggesting that lead was a good and valuable resource for consumers (Markowitz and Rosner 2002).

While it is hard to believe that such cases of unwilling communication or outright lying occur between the public and the industries (or scientists) believed to hold their health in high regard, the Washington D.C. lead-in-water crisis provides an astonishing case study. As one of the most important governmental agencies, the CDC is supposed to be a responsible source of information regarding public health. In the case of the DC lead-in-water issue, the CDC misrepresented the dangers for the public, especially children, in a 2004 MMWR report (Edwards et al. 2009). That report was based on inaccurate, missing, and badly analyzed data, from which the agency concluded that there was no significant relationship between lead concentrations in water and lead concentrations in blood (Edwards et al. 2009). These errors were pointed out in an article by Renner (2009) and a peer-reviewed publication by Edwards et al. (2009). Interestingly, the CDC (2009) immediately responded ‘within hours’ (WASAwatch 2009), flatly denying all of the claims made by those sources.

In this case, the CDC represents a scientific ‘industry’ that, according to WASAwatch (2009), was “unable or unwilling to refute the serious questions raised by Salon, [and] chose to stay on the path of deception and obfuscation in order to try and salvage its reputation, at the expense of public health.” WASAwatch (2009) continued by specifically outlining errors in the CDC statement, going so far as to suggest ways to improve its vague, misleading statements, much as a reviewer of a scientific article would. A response time of 2 hours by the CDC (WASAwatch 2009) indicates that either 1) no thought went into the response or 2) the agency had a prepared (and generalized) response to upload to the web. The bloggers at WASAwatch spent more time conferring on the Renner articles, Dr. Edwards’ reports, and other sources prior to posting, so why could the CDC not hold itself to those same standards, at the very least? The public is not a ‘mass of ignorance’ as some believe, but instead wants to know more and to participate in the process of scientific advancement and in assessing the impacts of industry. WASAwatch is just one example of participation and public awareness of science; as they say in their 2009 posting, “we have learned to tell truth from spin.”

Works Cited:

Bucchi, M. and F. Neresini. 2008. “Science and Public Participation.” In E. J. Hackett, et al., eds., The Handbook of Science and Technology Studies, pp. 449-472. Cambridge, MA: The MIT Press.

Centers for Disease Control and Prevention (CDC). 2009. CDC Responds to Salon.com Article [Media Statement] (April 10), 2 p., http://www.cdc.gov/media/pressrel/2009/s090410.htm.

Edwards, M., S. Triantafyllidou, and D. Best. 2009. Elevated Blood Lead in Young Children Due to Lead-Contaminated Drinking Water: Washington, DC, 2001-2004. Environmental Science & Technology 43:1618-1623 (with supporting information).

Hadden, S. G. 1989. “The Need for Right to Know.” In A Citizen’s Right to Know: Risk Communication and Public Policy, pp. 3-18. Boulder, CO: Westview Press.

Markowitz, G. and R. Rosner. 2002. “Introduction: Industry’s Child.” In Deceit and Denial: The Deadly Politics of Industrial Pollution, pp. 1-11. Berkeley, CA: University of California Press.

Renner, R. 2009. Health Agency Covered Up Lead Harm: The Centers for Disease Control and Prevention Withheld Evidence that Contaminated Tap Water Caused Lead Poisoning in Kids. Salon.com (April 10):1-3, http://www.salon.com/news/environment/feature/2009/04/10/cdc_lead_report.

WASAwatch. 2009. What the CDC Can Learn from the National Research Council and the Public [blog entry] (May 3), 10 p., http://dcwasawatch.blogspot.com/2009/05/what-cdc-can-learn-from-national.html.


“Who cares?”

“The care of human life and happiness and not their destruction is the first and only legitimate object of good government.” – Thomas Jefferson

“No matter how brilliant a man may be, he will never engender confidence in his subordinates and associates if he lacks simple honesty and moral courage.” – J. Lawton Collins

As the two quotes above suggest, honesty and care should be major guideposts for our ethical decision making. Of all of the ethical theories discussed (utilitarianism, Kantian ethics, virtue ethics), care ethics seems to be the most vague and open to interpretation (van de Poel and Royakkers 2011). Even so, this theory may be more realistic in terms of having a greater appreciation for how groups of people interact within a society. Complex ethical issues do not arise in isolated boxes, where simple (i.e. individual) judgments can be made (cf. Pantazidou and Nair 1999). Instead, intricate interactions across groups and organizations occur, so understanding the connections between the “individual(s) with respect to the group” (van de Poel and Royakkers 2011) is key. Is the term ‘ill-structured’ as related to ethical issues just another way of saying ‘vague’ (van de Poel and Royakkers 2011)?

The ethical cycle seems to be a way to provide structure to chaos, in terms of attacking these “ill-structured” ethical issues (van de Poel and Royakkers 2011). I think that this is a very logical thing to do, especially since we are all scientists. Even so, I wonder if it is realistic. Do people facing ethical dilemmas actually go through all of the steps? Are ethical decisions not just user-defined anyway? Although theories provide a verbal roadmap, essentially a combination of our values, beliefs, and knowledge will be the guide that gets us to an ethical decision.

The idea of care ethics seems to get murky when we think of how science and policy interact within society. This relationship can be a slippery slope, and trying to understand the motivations of scientists, politicians, and advocates can be difficult (Pielke 2007). Although Guidotti was charged with the task of using his expertise to enlighten the public on issues of lead in water, he used his position to advocate for those who were being dishonest, because of his own financial stake (Renner 2009a, 2009b, 2010). Instead of stepping back, admitting his mistakes, and correcting the record, he simply deflected any accusations of wrongdoing (Guidotti 2009), thus looking more guilty (and less trustworthy) in the end (as suggested by the Collins quote above).

On an interesting side note, Guidotti (2009) displayed all of his credentials at the top of his statement, whereas I was unaware of Renner’s Ph.D. until I noticed it mentioned in a small footnote (Renner 2010). Could this be another example of her attempting to present a balanced story, in that she does not put her credentials ahead of the facts? Do scientists (or ‘experts’) use their knowledge as a shield against honesty, as Guidotti did? In all of the Renner articles (2007, 2009a, 2009b, 2010), she simply presents the data without value judgments. Of course, she also took Guidotti to task for his dishonesty in the 2009 articles. Should this not be the job of an ‘honest broker’ (Pielke 2007)?

As a scientist, being a true ‘honest broker’ can be a difficult task. Pielke (2007) makes the case that science and policy are hard to separate; so much so that the grants we write as scientists specifically have to address the value of our work to society. This seems to be the crux of the distinction between ‘basic’ and ‘applied’ science. Is all science becoming applied? Can we do science for the sake of discovery without involving society? In my field of aquatic ecology, it is necessary to do basic research in order to understand complex ecosystem interactions before I can even apply it to societal needs, yet funding agencies have been unwilling to support this research due to its ‘narrow focus.’ It seems to me that our role as scientists puts us at the forefront of care ethics, where our individual roles in producing knowledge have the potential to affect the whole of society. Even so, if we are unable to do basic research (i.e. research unaffected by policy and the ‘who cares?’ criticisms), how can we be the ‘honest brokers’ that society trusts us to be?

Works Cited:

Guidotti, T. L. 2009. [Letter to the Editor in response to Renner’s “Troubled Waters” articles]. AAAS Professional Ethics Report XXII(3):4. (Renner’s final response to Guidotti is in PDF “W7 Renner Response.”)

Pantazidou, M. and I. Nair. 1999. Ethic of Care: Guiding Principles for Engineering Teaching & Practice. Journal of Engineering Education 88(2):205-212.

Pielke, R. A., Jr. 2007. “Four Idealized Roles of Science in Policy and Politics” and “Making Sense of Science in Policy and Politics.” In The Honest Broker: Making Sense of Science in Policy and Politics, pp. 1-7 and 135-152. Cambridge, UK: Cambridge University Press.

Renner, R. 2007. Lead Pipe Replacement Should Go All the Way. Environmental Science & Technology 41(19):6637-6638.

Renner, R. 2009a. “Troubled Waters: Controversy Over Public Health Impact of Tap Water Contaminated With Lead Takes on an Ethical Dimension.” AAAS Professional Ethics Report XXII(2):1-4.

Renner, R. 2009b. “Troubled Waters: On the Trail of the Lost Data.” AAAS Professional Ethics Report XXII(3):1-3.

Renner, R. 2010. Reaction to the Solution: Lead Exposure Following Partial Service Line Replacement. Environmental Health Perspectives 118:A202-A208.

Van de Poel, I. and L. Royakkers. 2011. “Care Ethics” and “The Ethical Cycle.” In Ethics, Technology, and Engineering: An Introduction, pp. 102-108 and pp. 133-160. West Sussex, UK: Wiley-Blackwell.

“R-E-S-P-E-C-T”

Aretha Franklin’s classic song kept running through my head while reading this past week. While the ideas of risk assessment presented in Harris et al. (2009) and Corburn (2005) have an essentially utilitarian feel (i.e. quantifying the extent to which harm, or benefit, may be done), it seems to me that the whole analysis ignores a major virtue: respect. In attempting to put a number on public happiness, the innate complexity of the world and the human experience is removed from the equation. Clearly, the readings this week outlined the lack of respect that some agencies have for the public, the scientific process, and the free dissemination of data for informed decision-making.

Van de Poel and Royakkers (2011) describe virtues as the building blocks of a moral person, who will use rational decision making to choose the most ethical path. The authors also mention that virtues can be learned and practiced. As such, ‘respect’ seems like an appropriate virtue to consider given the EPA’s response to subsistence fishermen (Corburn 2005). The prior risk assessments described by the EPA did not respect the complexity of toxins (i.e. Harris et al. 2009), socioeconomic factors, or public input (Corburn 2005). Essentially, a ‘default’ condition was used to assess the risk of only one chemical to an average (i.e. white) man, with no respect for cultural traditions or the interactive nature of chemical pollutants. The agency eventually learned the value of public input and of adding complexity to its model of risk assessment, leading to the creation of new methods for assessing the effects of multiple contaminants on affected, diverse groups of people, and thereby becoming more virtuous in the process.

On the other hand, the DC DOH did not respect the public enough to provide a clear evaluation of the risk of lead in their drinking water. By painting themselves as the experts, Guidotti et al. (2007) were able to put their own spin on risk assessment and possibly taint free informed consent (Harris et al. 2009). Interestingly enough, the authors felt the need to establish that their study was “undertaken as a public health intervention by the DC DOH rather than a research project and was therefore not subject to internal review board review” (Guidotti et al. 2007). If the goal was truly to assess the public health effects of lead in the drinking water, why was there no respect for the process of science? It seems rather convenient that their data were not reviewed, did not show a correlation between lead pipes and drinking water lead levels, and allowed them to conclude that “the lead elevation has now abated.” While the EPA in Corburn’s (2005) article also seemed to claim expert status (at least in sticking to tried-and-true methodology and ignoring the social and cultural components), once the agency began to respect community input, a more complete picture emerged of the deleterious effects of subsistence fishing in New York City.

I feel that we, as scientists, have a difficult time sharing our knowledge with others who are less informed. Personally, I know other scientists who hold the public in contempt as ‘ignorant’ and ‘ill informed.’ This attitude puts blinders on people and prevents them from attempting to reach out to the public that entrusts them with the production of new scientific knowledge. Therefore, it seems that simply respecting the public is the first step: respecting their opinions, respecting their need to understand why we do our science, and respecting the implications that our data may have for their lives. Of all the virtues that can guide our ethical decision making, I feel that respect for others (both the public and the scientific community as a whole) is the most important, and that we must continue to improve upon it.

Works Cited:

Corburn, J. 2005. “Risk Assessment, Community Knowledge, and Subsistence Anglers.” In Street Science: Community Knowledge and Environmental Health Justice, pp. 79-109. Cambridge, MA and London, UK: The MIT Press.

Guidotti, T. L., et al. 2007. Elevated Lead in Drinking Water in Washington, DC, 2003-2004: The Public Health Response. Environmental Health Perspectives 115(5):695-702.

Harris, C. E., Jr., et al. 2009. “Risk and Liability in Engineering.” In Engineering Ethics: Concepts & Cases, pp. 135-164. Belmont, CA: Wadsworth.

Van de Poel, I. and L. Royakkers. 2011. “Normative Ethics.” In Ethics, Technology, and Engineering: An Introduction, pp. 95-101. West Sussex, UK: Wiley-Blackwell.


Responsibility in Science and the Kantian ethic

In attempting to put my role as a scientist into perspective, I wonder if the research that I do could be considered ‘good will,’ in the Kantian sense (van de Poel and Royakkers 2011). Is the science that I do for the betterment of society? Do I live up to my own standards of ethical practice so that my results may contribute to new knowledge that may indeed benefit society? It is clear from the Steneck (2006) article that scientific responsibility is paramount in order that reliable data may be disseminated and trusted in the public arena. Of course, Kant also implies that human nature, and the ‘ends’ researchers wish to attain, may impinge upon the good will of research. Clearly, as Edwards (2007) describes, the CDC hoped to show no causal relationship between lead in water and effects on humans, and thus used and misused human data as a means to make its case. Similarly, the Tuskegee syphilis study (Gostin 2010) suggested that the researchers would rather have a complete, as opposed to wholly ethical, study, as shown by their not providing a basis for informed consent for participants (Bayer and Fairchild 2010). In this vein, I feel that research is the ‘means’ by which we, as scientists, can improve society (the ‘end’).

Several thoughts come to mind on how responsibility in research may come into play, not only in science in general, but in my field specifically. First and foremost is the peer-review process. Steneck (2006) suggests that peer review may catch some scientific malfeasance, and clearly the CDC’s report would have benefitted from an outside perspective (Edwards 2007). I have also wondered about the role of reviewer comments in this process. Given my past experience with publishing my work (limited though it may be), it seems that feedback from associate editors and reviewers has the potential to contribute to, and possibly amplify, questionable research practices. For example, reviewers asking me to ‘streamline’ my methods, or to cite a paper or two instead of clearly outlining my techniques, may impact the replicability of my experimental design. Similarly, reviewers telling authors to cite certain papers (i.e. obviously ones that they themselves wrote) to strengthen an argument or conclusion seems odd to me as well. Finally, for contentious issues (such as the coal mining that I have mentioned before), the biases of a particular reviewer may keep a paper from getting published, unless the authors work with the associate editor or make a strong enough case for their stand in the cover letter.

Secondly, the Steneck (2006) article brought up an interesting point about ‘salami cutting’ datasets, which struck home given that I work primarily at NSF Long Term Ecological Research (LTER) sites. The goal of this type of research is to provide long-term data on various biotic and abiotic aspects of a particular study area that may be used to examine patterns across time. In order to make progress, though, graduate degrees must be granted and papers must be published during the course of a particular experiment. The ‘salami cutting’ of data into multiple publications appears to fall under questionable research practices, and yet it is the status quo for LTER work. It seems that Ross’ idea of self-evident norms (van de Poel and Royakkers 2011) may apply in this situation, where the universal good is dependent upon other conditions (here, the progression of research and the granting of graduate degrees). In general, it appears that the usefulness of scientific data is in great part due to the responsibility of the practitioners and their objectives for carrying out a study.

Works Cited:

Bayer, R. and A. L. Fairchild. 2010. “The Genesis of Public Health Ethics.” In L. O. Gostin, ed., Public Health Law & Ethics, pp. 65-70. Berkeley, CA: University of California Press.

Edwards, M. 2007. Unpublished letter of concern to the CDC (1/17), pp. 1-27, https://www.filesanywhere.com/fs/v.aspx?v=8a6d668c606071ab72a2.

Gostin, L. O. 2010. “Public Health Ethics.” In Public Health Law & Ethics, pp. 59-65. Berkeley, CA: University of California Press.

Steneck, N. H. 2006. Fostering Integrity in Research: Definitions, Current Knowledge, and Future Directions. Science and Engineering Ethics 12:53-74.

Van de Poel, I. and L. Royakkers. 2011. “Normative Ethics.” In Ethics, Technology, and Engineering: An Introduction, pp. 89-95. West Sussex, UK: Wiley-Blackwell.