Fin…

Bender’s essay “The Mask” brings to light the mindset people adopt when they act immorally. While there are certainly other reasons, I agree that this mindset is a contributing factor in how people justify or ignore unethical acts.

“The greatest sin of our generation is being caught for a crime, not committing the crime itself.”

I believe this statement defines our society and guides our moral compass. There are numerous examples of this throughout history and even today. It might even be the case that people would rather commit additional unethical acts to cover up a primary one (risking more severe punishment) than be caught for the first. This doesn’t mean that we don’t know what is or isn’t ethical; rather, the true nature of an act is only defined when the truth is exposed. If the truth is never revealed, we consider the act acceptable (though still not ethical).

“Our only hope for ethical reform resides in these courageous individuals of integrity, these transparent heroes who remain true to the moral conscience within.”

Not to be a “doomsayer,” but our definition of, and adherence to, morality has weakened over time. Without a plan to alter this course, moral and ethical behavior will only diminish, and the use of “masks” will become more and more transparent until we can no longer recognize morality at all.

Reflecting on how my viewpoints have changed over the past few months, I don’t think I’ve become more or less ethical; instead, I think I’ve become more aware. More aware of what can happen when people, acting in their own self-interest, fail to behave responsibly and reasonably. I’ve found that ethics can’t simply be taught: you can tell someone what is or isn’t ethical, but knowing and doing are not even closely related (as exhibited by this course). There’s a risk of falling into the delusion, behind the mask, that one’s actions are justifiable. As Tarantulana just posted, “learning means admitting that we know less than we thought we did when we started out”. Part of success is knowing when to “drop back 10 and punt”.

References:

Bender, K. 2010. The Mask: The Loss of Moral Conscience and Personal Responsibility. In An Ethical Compass: Coming of Age in the 21st Century, pp. 161-173. New Haven: Yale University Press.


Week 14

ASCE Code of Ethics, Fundamental Canon 1:

Engineers shall hold paramount the safety, health and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties.

If you graduated as a civil engineer, you could probably recite this sentence in your sleep; it is the basis for all engineering decisions. However, is it reasonable, practical, achievable? NO. Engineering is never completely risk-free. [1] There is risk involved in every engineering decision, from designing a building, to launching a rocket into space, to opening your tap and drinking the water. Every design factor is chosen to allow for the least amount of risk that can practically be attained. Most regulations simply provide a level of confidence that there will be no negative outcome. But there are instances where public safety and welfare are not held paramount. Take the Hyatt Regency walkway collapse, the Challenger incident, or the DC lead crisis. In these three cases (and many others), the engineers failed to recognize severe safety and health concerns and proceeded haphazardly.
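
To make the “never completely risk-free” point concrete, here is a minimal, hypothetical sketch (the numbers are invented for illustration, not taken from any of the readings) of why even a design with a comfortable factor of safety carries a small but nonzero probability of failure once variability in loads and materials is considered:

    from scipy.stats import norm

    # Hypothetical structural member (arbitrary units); capacity and load both vary in practice.
    mean_capacity, sd_capacity = 200.0, 15.0   # materials and workmanship vary
    mean_load, sd_load = 100.0, 20.0           # actual demand varies with use

    print("Nominal factor of safety:", mean_capacity / mean_load)   # 2.0

    # Safety margin M = capacity - load; failure occurs when M < 0.
    # For independent normal variables, M is also normal:
    mean_margin = mean_capacity - mean_load
    sd_margin = (sd_capacity**2 + sd_load**2) ** 0.5
    print("Probability of failure:", norm.cdf(0, loc=mean_margin, scale=sd_margin))
    # roughly 3 in 100,000 for these made-up numbers: small, but never exactly zero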

When we consider how this relates to regulation, there is clear evidence that our regulatory system has flaws and is failing to protect the general public in several respects. [2] I’m not claiming that the options I discuss below are how things should be done, and they are open to criticism and change, but there are two extremes of regulation: (A) complete regulation, where big brother knows what’s best and makes the decisions for everyone; and (B) complete deregulation, where it’s every man, woman, child, corporation, and institution for themselves. Neither option is completely feasible or implementable. Consider the involvement of the public in these two circumstances.

Option A. Under complete regulation, there is a set of committees that designate committees that designate agencies that issue whatever regulations they feel are necessary to protect the public. The majority of the public trusts these agencies to do the right thing (and therefore neither questions their authority nor participates in policy making, whether they would be welcomed into or shunned from the process). However, the committees that designate agencies may involve politicians who need funding to get re-elected (“you scratch my back, I’ll scratch yours”). We tumble down the rabbit hole and end up in a corrupt system where the public sits by, being told by an all-knowing government what’s good for them. Ideally there is no corruption, but that’s the ideal, not the reality. Corporations can “legally” ignore public safety and welfare by merely satisfying their regulatory requirements even when current practices are hazardous.

Option B. Under complete deregulation, we embrace the laissez-faire mindset. In this option, however, public involvement is imperative. A fundamental, perhaps flawed, idea is that society sets what is acceptable (in Option A, the public would be what discourages corruption). Desirable practices are rewarded, while practices that are unwanted or unsafe are penalized through various means. Unfortunately, this option is less preventive than the first, though fear of the public’s reaction could still deter unwanted practices.

Like most extremes, neither is practical. Society has to be involved for regulation to work properly. Option A could coexist with Option B; the primary difference is public participation. Whether or not street science is welcomed, society can react to encourage involvement. “…when street science identifies hazards, highlights previously ignored questions, provides hard-to-gather data, involves difficult-to-reach populations, and expands the possibilities for intervention alternatives, science and democracy are improved” [3]. There should be some objective third party with authority over the corporations; however, some of these third-party agencies have been reduced to biased counterparts. Engineers, scientists, and policy makers have to acknowledge the benefits of street science to ensure environmental safety and justice for all parties involved.

References:

  1. Broome, T. H., Jr. 1986. The Slippery Ethics of Engineering. Washington Post (December 28):4S1.
  2. Michaels, D. 2008. “Sarbanes-Oxley for Science: A Dozen Ways to Improve Our Regulatory System.” In Doubt is Their Product: How Industry’s Assault on Science Threatens Your Health, pp. 241-265. Oxford, UK: Oxford University Press.
  3. Corburn, J. 2005. “Street Science: Toward Environmental Health Justice.” In Street Science: Community Knowledge and Environmental Health Justice, pp. 201-218. Cambridge, MA and London, UK: The MIT Press.

Week 12

Throughout our study of the DC Lead Crisis, I’ve wondered why there hasn’t been more nationwide media coverage of the high lead levels in DC water and their long-term effects. Normally people are excited by an inter-agency cover-up, but for some reason the media attention to this event has been largely confined to local news. Had I not been introduced to the topic by the many presentations Dr. Edwards has given to various classes on campus, I probably would only have read about this crisis as an ethics case study in a textbook somewhere. I’ve come up with a few ideas as to why there hasn’t been more attention.

Most of what the media presents as newsworthy involves clear, evident risk that is easy to understand. [1] In the case of lead in water, someone who drinks contaminated water is not going to curl up into a ball and have their eyes fall out. The effects are long-term and very difficult to quantify, and there is some debate over whether the risk is even that pronounced. “Just a few IQ points.” Also, while the media claims to be unbiased and nonpolitical, it may be more prone to attack private corporations involved in corruption than governmental agencies that, most of the time, would be proponents of environmental regulation and are more influential.

The media has untold power to shape public opinion. [2] Some people may consider the media to be on the side of the public, searching out those who mislead others or put the population at risk. The media also has a great ability to take information and reduce, clarify, and disseminate it to the general public. While the media is typically very viewer-oriented, it still operates under a deficit model [3]: presenting information it perceives as beneficial to the viewer and withholding information that may contradict a preferred viewpoint or outcome. These few points may explain the lack of national attention to the DC case.

Considering Davis’ outlook on whistleblowing, I think his position is a bit idealistic. Dealing with bad news is something companies should improve. [4] However, I don’t think every case can be solved with this outlook. There may be times when companies, or governmental agencies, are so invested in a position that, arguably, it seems better to them to ignore the bad news and get rid of the whistleblower. Not to say this is right, but there may be circumstances where the reward is not worth the risk, whether financial or non-monetary (such as reputation or prestige). To me, his argument could be compared to saying, “Well, we wouldn’t need police officers if no one committed crimes”.

 

References:

  1. Miller, N. 2009. “The Media Business.” In Environmental Politics: Stakeholders, Interest, and Policymaking, 2nd ed., pp. 149-165. New York and London: Routledge.
  2. Hazarika, S. 1994. “From Bhopal to Superfund: The News Media and the Environment,” pp. 1-14.
  3. Sismondo, S. 2010. “The Public Understanding of Science.” In An Introduction to Science and Technology Studies, 2nd ed., pp. 168-179. West Sussex, UK: Wiley-Blackwell.
  4. Davis, M. 1998. “Avoiding the Tragedy of Whistleblowing.” In Thinking Like an Engineer: Studies in the Ethics of a Profession, pp. 73-82. New York, NY and Oxford, UK: Oxford University Press.

“need to know (and include)”

“…when lay people offer local knowledge about problematic solutions, they might highlight expert’s inaccuracies and spontaneously disrupt the taken-for-granted trust and credibility the public confers on expert institutions.” [1]

When I read this, I was instantly reminded of multiple instances where various agencies’ credibility or knowledge was tested. For instance, the DC town council’s questioning of an “expert” witness who claimed that the lead levels one citizen reported must have been inaccurate because of flushing times. There were several other instances where this occurred, and I’m sure there are many the class hasn’t discussed.

Governmental agencies and regulators have typically played a paternalistic role toward the general public. However, in recent years, with increased cases of fraud, falsification, and corruption, among other things, the public has begun to question the ultimate authority of these agencies. Society is “demanding greater public participation in science and technology decision making, changing the traditional ‘trust us, we’re experts’ science–society relationship.” [2] Some agencies are horrified by this shift, while others have taken advantage of the movement. Much of what society can offer is incredibly beneficial to science and technology.

There are multiple methods for involving the public, such as task forces and town meetings [3]. In the political arena, “town hall meetings” have become more and more common. I would venture to say that most perceptions of politicians are not positive, driven primarily by the disconnect that has grown between lawmakers and citizens. This outlet provides citizens the opportunity to express their concerns or offer insights on issues, greatly improving efficiency and acceptance when making policy decisions. In relation to the DC case, the agencies trying to address the increased water lead levels and the ensuing increase in blood lead levels could have benefited from community-based research (CBR) [2]. CBR involves the community that is affected by the problem the research attempts to address. Through community participation in research, trust is built between the agency and the public, since both are working together. In most cases involving DC lead, the agencies kept the public at arm’s length.

Through this week’s readings (and one that I found separately), it’s apparent that public involvement is essential to good science and policy making.

 

References:

  1. Corburn, J. 2005. “Street Science: Characterizing Local Knowledge.” In Street Science: Community Knowledge and Environmental Health Justice, pp. 47-77. Cambridge, MA and London, UK: The MIT Press.
  2. Chopyak, J., & Levesque, P. (2002). Public participation in science and technology decision making: trends for the future. Technology in Society, 24(1-2), 155-166. doi: 10.1016/s0160-791x(01)00051-3
  3. Sismondo, S. 2010. “Expertise and Public Participation.” In An Introduction to Science and Technology Studies, 2nd ed., 180-188. West Sussex, UK: Wiley-Blackwell.

“Correlation is not Causation”

Do scientists, engineers, or researchers deserve trust?

This question has been ringing in my head for the past week. I think that, through our discussion, we concluded that most people initially trust others unless those others have acted in a way that forfeits that trust. I considered, “Do I deserve to be trusted?” The common misconception is that education and intellect magically endow a person with trustworthiness, but in my own experience it seems that some of those who should be most trustworthy are not. I used to think there were only a few bad apples in the orchard, but maybe there are a few bad trees in there too. This week’s reading about published research brought to light the issue of the validity of findings that other professionals, such as doctors, rely on every day.

The article about Dr. Ioannidis offered a brief glimmer of hope [1]. The unfortunate truth is that most research is under pressure to deliver results (from funding, political pressures, etc.). This leads researchers to ignore limitations of their studies, or to omit past research or data that may contradict their findings, as in the case of lead in products and, more specifically, in water. Some researchers seem very secretive about their data or methods; maybe they have something to hide. I would consider good science to include the limitations of, and concerns about, the conclusions that are drawn, and to remain open to criticism.
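
Freedman’s point about the pressure to deliver results can be illustrated with a small, purely hypothetical simulation (my own sketch, not something from the article): if many studies chase an effect that does not exist and only the “significant” ones get written up, the published record still fills with findings.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    false_positives = 0
    for _ in range(100):                          # 100 studies of a non-existent effect
        control = rng.normal(0, 1, size=30)
        treatment = rng.normal(0, 1, size=30)     # drawn from the same distribution
        _, p_value = stats.ttest_ind(control, treatment)
        if p_value < 0.05:
            false_positives += 1                  # a "publishable" result by chance alone

    print("'Significant' findings out of 100 null studies:", false_positives)
    # Roughly 5 of 100 clear the p < 0.05 bar by chance; if only those are reported
    # and the other ~95 stay in the file drawer, the literature looks far more
    # certain than the underlying data warrant.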

There is no new “good” science to suggest that lead is anything but harmful to humans; however, some have tried to hide its effects through botched research and false data [2]. Years after the original MMWR, the CDC is still trying to cover up the fact that its original findings were inaccurate and misleading. In both notices, the agency mentions some “limitations” and misleading statements but fails to address the larger flaws in its work brought forth by other researchers and scientists [3,4].

The magic of statistics is that you can manipulate the data and perform various tests to show what you want to see. By including or excluding certain information, or taking just a subset of the whole, correlation or significance can be shown. In medical research specifically, it is nearly impossible to relate the results to the true controlling variable given all the complexity of interactions and variability. Oftentimes correlation exists between two variables being tested, but that does not prove causation. Likewise, lack of correlation does not prove lack of causation. When the CDC released the 300 ppb study, they essentially said, “we see no correlation, therefore lead in water isn’t a problem”. This argument is not only flawed but was detrimental to the health of people facing elevated water lead levels. Particular care and caution should be exercised when drawing conclusions from data, and limitations and conflicting results should be noted.
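
As a purely illustrative sketch (fabricated numbers, not the CDC’s data), here is how restricting an analysis to a narrow slice of the data can make a real relationship all but vanish, which is why “we see no correlation in this subset” is weak evidence that no problem exists:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical exposure-outcome data with a genuine underlying effect
    exposure = rng.uniform(0, 100, size=500)
    outcome = 0.05 * exposure + rng.normal(0, 2.0, size=500)   # true slope = 0.05

    r_full = np.corrcoef(exposure, outcome)[0, 1]
    print("Full data set:     r =", round(r_full, 2))          # clear correlation

    subset = exposure > 90                                     # analyze only the top slice
    r_sub = np.corrcoef(exposure[subset], outcome[subset])[0, 1]
    print("Restricted subset: r =", round(r_sub, 2), "n =", int(subset.sum()))
    # The near-zero correlation in the restricted sample says little about the
    # full relationship: absence of correlation is not evidence of absence of causation.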

Also, go read this article, http://uncyclopedia.wikia.com/wiki/Random_Statistics; it brought joy to my day…

Excerpts from the article (it has quite a bit of sarcasm in it, so reader beware):

“…pie charts DO NOT have to add up to 100%. That is a lie first embedded in our modern culture by communist soviet spies in the 1930’s.”

“99% of journalists use random statistics but only one in out of every ten of the 12.5% who responded yes to the question know why 68% of the quarter second half are wearing green jerseys. This is not to say that these reporters are lying 100% of the time. Although random, the statistics serve a purpouse. Which is to explain another thing, after saying you were going to explain the first thing, which is not statistically relevant, but your editor thinks its cool.”

Now you really want to go read it, don’t you…

References:

[1] Freedman, D. H. 2010. Lies, Damned Lies, and Medical Science. The Atlantic (Nov.), pp. 1-12, http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/.

[2] Markowitz, G. and R. Rosner. 2002. “Old Poisons, New Problems.” In Deceit and Denial: The Deadly Politics of Industrial Pollution, 108-138. Berkeley, CA: University of California Press.

[3] CDC. 2010. Notice to Readers: Examining the Effect of Previously Missing Blood Lead Surveillance Data on Results Reported in MMWR. MMWR 59(19):592, http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5919a4.htm.

[4] CDC. 2010. Notice to Readers: Limitations Inherent to a Cross-Sectional Assessment of Blood Lead Levels Among Persons Living in Homes with High Levels of Lead in Drinking Water. MMWR 59(24):751, http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5924a6.htm.


“Truth be known”

Dishonesty includes the common actions of lying, withholding information, deception, and failing to seek out the truth (Harris 2009). Each of these applies to engineers and professionals in science and research, and there are great examples of each, but I’ve always tried to apply these concepts to everyone, including myself. The last concept, failure to seek the truth, is evident in the actions taken by the CDC in their 300 ppb study, but I reckon that failure to seek the truth also applies to the general public and other professionals.

Was it ethical for the town council, the media, some residents, other localities facing similar situations, etc., to take the word of the CDC as absolute authority? I think it should be part of our culture not to be skeptical or cynical, but to verify and confirm the information and directives presented to us. Unfortunately, this has become more difficult when there are some who try to mask the truth and interfere with good science. When Dr. Lewis was telling us about the various research projects he has worked on and the authorities he has fought, I was a little ashamed because I had been completely oblivious to these issues. I never sat in the dentist’s chair and wondered whether the instruments had been heat-sterilized; that is a trust I place in my dentist. I assume that when I take a drink from the faucet I won’t be exposed to harmful pathogens or heavy metals; that is a trust I place in the water company. Likewise, I assume that agencies designed to protect public health are doing just that, protecting public health, but if there is a take-home message here, it’s that this is not always the case. Not to downplay the complexity of risk assessments, which include exposure times, rates, bioavailability, etc., but the real trouble arises when these supposedly impartial agencies begin to tango with the truth because political pressures or funding make them reluctant to publish potentially damaging information (damaging not only to public health but to the trustworthiness of the agency).

Our responsibility as engineers, scientists, and researchers is to use our “knowledge and skill for the enhancement of human welfare and the environment” (ASCE Code of Ethics). Therefore, I think it is our duty to adhere to the principles of honesty: being truthful, being transparent, and, most especially, seeking the truth.

 

References:

Edwards, M. 2010. “Experiences and Observations from the 2001-2004 ‘DC Lead Crisis’”: Testimony. U.S. House of Representatives’ Committee on Science and Technology, 111th Congress.

Harris, C. E., Jr., et al. 2009. “Trust and Reliability.” In Engineering Ethics: Concepts & Cases, pp. 115-134. Belmont, CA: Wadsworth.


“Need to Know”

Living in an age of infinite information and knowledge at our fingertips, I’m always surprised at what the CDC has gotten away with in dealing with lead in DC water. The CDC has repeatedly been proven wrong (not only by decades of science and research, but by legitimate scientists and researchers today), and still the agency continues to defend its actions and research practices. I question how it can continue to deny wrongdoing, and why there is not a massive public outcry against the agency with “heads rolling” at the top. If the CDC can succeed at pulling the wool over the public’s eyes with respect to lead in water, I begin to question what else with negative health effects has been kept secret or denied over the past decades. Have the risks associated with DDT, asbestos, dioxins, PCE, TCE, etc. always been known and covered up in the interest of protecting agencies, increasing profits, or some other strategy that outweighs public health? Is it the fault of the agency for not providing accurate information, the fault of the public for not fact-checking, or something else entirely?

“Right to know” has helped inform the public about the risks associated with certain chemicals or products, particularly in communities where the use of dangerous chemicals in manufacturing processes can be hazardous to the nearby population (Hadden 1989). However, it occurred to me that just because people know, or have the right to know, doesn’t mean they will use that information to protect themselves. The first thing I thought of was cigarettes. There is no doubt that consumers can find out what chemicals are in cigarettes and how those chemicals can affect their health. Until recently, cigarette companies spent much of their revenue on campaigns to cover up the negative effects of cigarettes and market their product (and probably still do). Nevertheless, many people (some 20% of adults) smoke, knowing (or having the ability to know) what the negative health effects are, especially since there are nice images of diseased lungs, dead bodies, and tracheostomies on packages. Being informed and making informed decisions are two different things. Still, I don’t think it is the agencies’ place to govern what information the public has available when it comes to public health.

In the case of the CDC, many people questioned its reports and conclusions (Edwards et al. 2009, Renner 2009); however, the CDC stuck to its guns and refused to set the record straight. Even after multiple attacks on the original publications and rebuttals, the CDC remains scot-free. The public has to assume that most of what the government tells them is true, which makes those in power very influential and authoritative. Unfortunately, I think that the right to know, in practice, is reserved for cases where the authority was forthright and honest to begin with. Analogously, a locked door only keeps honest people honest; if a dishonest person wants through the door, the lock isn’t going to stop them. Those in authority have ways to defend themselves, and there are few checks to keep them honest (because, being at the top, they are supposed to be leaders in honesty, not in need of additional checks). Advocacy groups such as Parents for Nontoxic Alternatives are sometimes the only ones there to hold the authorities accountable, but the government’s pockets are a lot deeper than those of the people trying to get factual information.

I can recognize that some agencies may try to keep certain information as confidential as possible because of our litigation-loving society, even when the mistake was an honest one on the agency’s part. However, in the situation with DC lead, the problem isn’t that the water chemistry change had negative consequences; something like that probably could have been prevented, but regardless, the change was not made with malicious intent. The problem arose when the agencies failed to inform the public and take steps to mitigate the issue; that is where they went wrong.

 

References:

Hadden, S. G. 1989. “The Need for Right to Know.” In A Citizen’s Right to Know: Risk Communication and Public Policy, pp. 3-18. Boulder, CO: Westview Press.

Edwards, M., S. Triantafyllidou, and D. Best. 2009. Elevated Blood Lead in Young Children Due to Lead-Contaminated Drinking Water: Washington, DC, 2001-2004. Environmental Science & Technology 43:1618-1623 (with supporting information).

Renner, R. 2009. Health Agency Covered Up Lead Harm: The Centers for Disease Control and Prevention Withheld Evidence that Contaminated Tap Water Caused Lead Poisoning in Kids. Salon.com (April 10):1-3, http://www.salon.com/news/environment/feature/2009/04/10/cdc_lead_report.

Centers for Disease Control and Prevention (CDC). 2009. CDC Responds to Salon.com Article [Media Statement] (April 10), 2 p., http://www.cdc.gov/media/pressrel/2009/s090410.htm.

WASAwatch. 2009. What the CDC Can Learn from the National Research Council and the Public [blog entry] (May 3), 10 p.,  http://dcwasawatch.blogspot.com/2009/05/what-cdc-can-learn-from-national.html.


The Golden Rule(s)

Finishing up the four ethical theories we’ve discussed is the final one: care ethics. Care ethics focuses primarily on the relationships between people, organizations, or agencies. It focuses less on basic moral principles and instead looks at the parties involved and how each of them is affected by a given decision. The golden rule, “Do unto others as you would have them do unto you,” applies to this theory. As described by van de Poel, each theory (utilitarianism, Kantian ethics, virtue ethics, and care ethics) has its benefits and criticisms [1]. I believe virtue ethics is the theory that most resembles the approach I follow. Utilitarian and Kantian ethics have their purposes and uses, but in general, virtue ethics combines the best of the other three and offers a little “wiggle” room when there are grey areas.

Taking a more research-oriented and scientific perspective, Pielke offers his view of the roles a person can occupy when offering up knowledge: pure scientist, science arbiter, issue advocate, and honest broker. [2] I think each position has pros and cons, and situations in which it is more or less ethical. A pure scientist offers direct information and may not note its importance, validity, or completeness, while a science arbiter simply answers the questions posed without adding information that might sway the true meaning or usefulness of the answers. Issue advocates, on the other hand, are more subjective in the information they give and prefer a specific decision regardless of the data or unknown consequences. Lastly, an “honest broker” tries to combine all of these aspects into one, mostly impartial, opinion. It is definitely important for scientists and researchers to offer their opinions on issues, because policy makers often don’t understand the technical side the way scientists do; however, this is complicated by the conflicts of interest that can drive the opinions of some scientists. These conflicts should be fully disclosed so that policy makers receiving information from these scientists can judge whether a scientist is simply being “paid off” to voice someone else’s opinion.

This is made apparent in the articles by Renner on Guidotti’s involvement with DC WASA. [3,4] It is clear that many of the agencies hired “expert” witnesses to give the agencies’ opinion rather than that of an impartial expert. This is echoed in the restrictions placed on the hired consultants regarding their freedom to publish findings without the consent of DC WASA. In his response to the first article, Guidotti tries to salvage what he can by poking holes in it, calling it “incorrect and irresponsible,” and trying to blame the publisher for printing such an article, but Renner’s response clears up any doubts he may have raised. [5]

“Expert” witness testimony still affects policy. As the focus shifts from high lead levels caused by the change in water chemistry to concerns about the EPA-mandated lead service line replacement (LSLR) program, it is clear there is no debate that full LSLRs are preferred over partial replacements. [6,7] With extensive data supporting that there is no short-term benefit to a partial LSLR (PLSLR), and likely no long-term benefit, this regulation has to be re-evaluated, primarily between two options: mandate full LSLR, or no LSLR at all. If utilities were financially responsible for full replacements, it could discourage some utilities from properly sampling under the Lead and Copper Rule (LCR), which is probably already an issue. There has to be some shared responsibility between the homeowner and the utility. As in Madison, Wisconsin, there are options available to offset the financial burden on water utilities. In the end, there will be a decision that one side will not like; if there were no negative consequences, the decision would have been made many years ago.

It seems that anyone can be paid to say or agree to anything, which is alarming given that research and science require funding, a lot of funding. And what knowledge can you trust if it belongs to the highest bidder? The ideal golden rule no longer applies; instead, the updated version, “he who has the gold makes the rules,” tends to prevail.

References:

  1. Van de Poel, I. and L. Royakkers. 2011. “Normative Ethics” and “The Ethical Cycle.” In Ethics, Technology, and Engineering: An Introduction, pp. 102-108 and pp. 133-160. West Sussex, UK: Wiley-Blackwell.
  2. Pielke, R. A., Jr. 2007. “Four Idealized Roles of Science in Policy and Politics” and “Making Sense of Science in Policy and Politics.” In The Honest Broker: Making Sense of Science in Policy and Politics, pp. 1-7 and 135-152. Cambridge, UK: Cambridge University Press.
  3. Renner, R. 2009. “Troubled Waters: Controversy Over Public Health Impact of Tap Water Contaminated With Lead Takes on an Ethical Dimension.” AAAS Professional Ethics Report XXII(2):1-4.
  4. Renner, R. 2009. “Troubled Waters: On the Trail of the Lost Data.” AAAS Professional Ethics Report XXII(3):1-3.
  5. Guidotti, T. L. 2009. [Letter to the Editor in response to Renner’s “Troubled Waters” articles]. AAAS Professional Ethics Report XXII(3):4. (Renner’s final response to Guidotti, is in PDF “W7 Renner Response.”)
  6. Renner, R. 2007. Lead Pipe Replacement Should Go All the Way. Environmental Science & Technology 41(19):6637-6638.
  7. Renner, R. 2010. Reaction to the Solution: Lead Exposure Following Partial Service Line Replacement. Environmental Health Perspectives 118:A202-A208.

 


First Post

Test
