Weekly Critique — December 6

Corporate Masks.  I find Bender’s exploration of the concept of masks intriguing.  This article continues a subject I discussed in last week’s blog – the inevitable moral conflict between the rationality of organizations and the personal ethics of their members.  In last week’s reading, Michaels referred to corporations as “faceless” or a “shield” behind which individuals can no longer be held responsible for their actions [1].  This aligns well with the reading for today, which claims:

It is this desire to act unethically while avoiding guilt and maintaining ethical integrity that has led to the creation of masks [2].

However, Ladd’s treatment of the organization [3], which holds that moral considerations cannot be part of organizational decision-making, seems to fall into the category of what Bender calls “addressing masks,” or the view that unethical behavior and corruption are innate traits of complex systems [2].

Personal Reflection.  Although I found Bender’s discussion of masks enlightening, I found his views on the necessity of religion for morality insulting.  According to the article,

. . . we must seriously reconsider the necessity of religion for morality.  Secular ethics have not met the task [2].

I see this as an attack on secular philosophies, such as humanistic atheism, that provide a framework for moral living that is separate from religion.  I would like to see the author address the concept that religion can also be a mask.

Full Circle.  In this last blog, I thought it would be appropriate to reflect on what I said in my first weekly critique.  In particular, I said:

. . . after reading this article I think I better understand the difference between personal and professional ethics and the need for this distinction.

Sometimes, I think, learning means admitting that we know less than we thought we did when we started out.  In this last blog, I am considering the claim made in the article for today that it is the excision of our personal morality from our public and professional lives that has caused the moral crisis in our society today.  The distinction between personal and professional ethics, then, may be more complex than I originally thought, and something that requires further reflection.


Thank you for reading this final blog of the semester!  Today I am both focusing on the article assigned and trying to consider how my thinking has changed throughout the semester – please feel free to add your ideas, especially if you disagree.  Some questions you might be able to help me by answering:  Did you find any parts of “The Mask” particularly enlightening or offensive?  Re-reading your first critique of the semester, how do you feel about what you wrote? 



[1] Michaels, D. 2008. “Sarbanes-Oxley for Science: A Dozen Ways to Improve Our Regulatory System.” In Doubt is Their Product: How Industry’s Assault on Science Threatens Your Health, pp. 241-265. Oxford, UK: Oxford University Press.

[2] Bender, K. 2010. “The Mask: The Loss of Moral Conscience and Personal Responsibility.” In An Ethical Compass: Coming of Age in the 21st Century, pp. 161-173. New Haven: Yale University Press.

[3] Ladd, J. 1970. Morality and the Ideal of Rationality in Formal Organizations. The Monist 54(4):488-516.

Weekly Critique — November 29

Inevitable Moral Crisis.  I found the viewpoint presented by Ladd [1] in the readings for today fascinating.  In this article, organizations are idealized as machines, working within practical constraints to achieve specific goals.  According to the logic presented,

Morality as such must be excluded as irrelevant in organizational decision-making. . . [1].

Extending this logic, Ladd argues that a consideration of morality by an official agent of the organization is “irrational” under the ideal organizational model and violates basic rules of how an organization works.  Although this is an extreme model, and does not reflect the behavior of actual organizations, this concept may explain why, as others have noted:

We cannot rely on old-fashioned notions of conscience because they do not seem to operate when people can hide behind the shield of a faceless corporation [2].

It is intriguing that an organization, made up of individuals, each, presumably, with a sense of morality, can be amoral.  According to Ladd [1], this will inevitably lead to conflict between the individual morality of members of the organization and the rational, amoral activities of organizations.  The organization is inhuman, but made up of individual human beings.  Is the whole, then, less than the sum of its parts? 

Organizations, Professions, and Responsibility.  One powerful statement from the reading is worth discussing further:

Organizations have tremendous power, but no responsibilities [1].

By this, the author means that as amoral entities, organizations have no moral obligation to those affected by their actions.  Who, then, is responsible for these actions?  For the protection of the public?  In some ways, I wonder if this property of organizations is responsible for the necessity of professions.  Doctors, for example, must work within large organizations, such as hospitals.  Even when it would make “rational sense” from an organizational perspective, these doctors cannot violate their code of professional ethics.  In this way, they introduce this code as a constraint on the administration of the hospital, forcing ethical considerations into the decision-making process.  Is this true of all professions?  Can they be used to make ethics part of the process?  And how does the “semi-professional” status of engineering come into play?


Thank you for reading!  Today I am discussing the moral characteristics of organizations and the role of professions in organizational decision-making – please feel free to add your ideas, especially if you disagree. Some questions you might be able to help me by answering:  How do you feel about the statement that organizations have tremendous power but no responsibilities?  What role, do you think, do professionals play in organizational decision-making?



[1] Ladd, J. 1970. Morality and the Ideal of Rationality in Formal Organizations. The Monist 54(4):488-516.

[2] Michaels, D. 2008. “Sarbanes-Oxley for Science: A Dozen Ways to Improve Our Regulatory System.” In Doubt is Their Product: How Industry’s Assault on Science Threatens Your Health, pp. 241-265. Oxford, UK: Oxford University Press.

Weekly Critique — November 8

Science and Journalism – For Better . . .

According to one article from today’s reading, “the media determines what is on our minds at any time” [1].  To the extent that this is true, the media has incredible power over public thinking and, as a result, incredible potential for the distribution of knowledge.  One of the benefits of the power wielded by the media is that it can serve as a check and balance on industry and government experts, or give voice to the need for change.  For example, Hazarika argues that it was media coverage that allowed the Bhopal incident to “register on public consciousness” and drive needed changes in policy [1].  According to an EPA expert, this incident was a critical turning point:

If you wanted to inspect a chemical plant before the Bhopal incident, they’d laugh you out; they were not obligated under law to permit inspections or even report chemical spills [1].

Or Worse.

On the other hand, it is possible for the media to distort knowledge rather than enhance it, resulting in misunderstandings.  The article by Miller is very critical of the media’s treatment of environmental issues, claiming that coverage suffers from the application of “principles for newsworthiness” [2].  According to the article, environmental issues that lack a “sensational” aspect are often neglected.

Others quoted in this article imply that the view of the media as a catalyst for social change is a romantic notion, and that such ideas are beyond the scope of journalism.  For example, according to one risk communication expert:

 The reporter’s job is news, not education; events, not issues or principles [2].

Finally, Miller proposes that the very basis of journalism can lead to misunderstandings.  Journalists strive to present a balanced story, with viewpoints given from both sides of the issue [2].  As a result,

Even on issues where the weight of scientific opinion seems to be disproportionately on one side, conflicting versions of the truth are afforded virtually equal coverage [2].

This is very interesting to consider in the context of some of our earlier readings, which concluded that the public trust in science has recently been shaken by the lack of scientific consensus on important issues.  Is it possible that, in some cases, the media conjures an image of “bickering experts,” when, in reality, most scientists have reached a consensus?   This leads to another question:  how does the media influence public trust in science?

A Happy Marriage?

Although Sismondo states that scientists and science journalists are closely linked because of the journalists’ dependence on their scientific contacts [3], Miller argues that scientists and journalists are unhappy with one another [2].  According to data presented by Miller:

Four out of five find the media are more interested in “instant answers and short term results”; and three out of four feel that “sensationalism is more of a goal than scientific truth” [2].

Statistics like these give the impression that scientists view journalists as almost childlike.  As Sismondo points out, this introduces an interesting contradiction:  although scientists depend on popularization for their influence, they often resist it at the same time, condemning it as distortion [3].    

Thank you for reading!  It seems like today I’m working from the “journalist” perspective and presenting both sides of the relationship between science and the media – please feel free to add your ideas to the debate. Some questions you might be able to help me by answering:  How do you see the media with respect to science – manipulator of truth or catalyst for change?  How do you feel about the state of the “marriage” between science and journalism?

Special note to HokiePokey:  Consider the quotation from Miller below in the context of our earlier discussion about how “binary” is not exclusive to machines, but is rooted in human nature:

As a society we tend to view things from a dual perspective – liberal or conservative, guilty or innocent, right or wrong, safe or dangerous [2].



[1] Hazarika, S. 1994. “From Bhopal to Superfund: The News Media and the Environment,” pp. 1-14.

[2] Miller, N. 2009. “The Media Business.” In Environmental Politics: Stakeholders, Interest, and Policymaking, 2nd ed., pp. 149-165. New York and London: Routledge.

[3] Sismondo, S. 2010. “The Public Understanding of Science.” In An Introduction to Science and Technology Studies, 2nd ed., pp. 168-179. West Sussex, UK: Wiley-Blackwell.

Weekly Critique — November 1

Power, Expertise, and the Public.  One key idea in the readings for class today was the balance of power between “experts” and “the public.”  The ethical implications of this unequal power relationship have been discussed in class and addressed in a previous blog entry (see Weekly Critique – October 18).  In that critique, I discussed how this power stems from the trust that we, as the public, place in experts.  However, the power that experts wield is derived from more than our trust in their recommendations.  As Corburn’s article tells us:

 Power is expressed in public decision making by who gets to define problems, offer evidence, be heard, and design solutions [1]. 

In our society as it exists today, problems are often both defined and solved by experts.  The results are then evaluated using evidence that is also collected by experts, creating a climate where the non-expert often goes unheard.  Some might argue that in engineering, for example, this is only true for decisions that are technical in nature.  However, as Sismondo points out, more is involved in a large project than engineering calculations [2]; engineers, perhaps unknowingly, make choices on behalf of “the public” that are value-laden.

According to Sismondo, “inequalities in the distribution of knowledge and expertise undermine citizen rule” [2].   To shift the balance of power, this article argues for an increase in “citizen science.”  Furthermore, the article implies that if the resources needed to participate are widely available and inexpensive, citizen science will happen naturally, as it did with the open source software community [2].  Where would this end?  The end of expertise?  Although I believe it is sometimes necessary to challenge the decisions made by experts, and that citizens should be able to access the resources necessary to do so, a world without expertise seems impractical.  Clearly, “experts” can, and often do, make valuable contributions, so the key question becomes – when do we decide that the “expert opinion” is not good enough? 

When Experts Fail.  The cases discussed by Corburn and others suggest that people begin to question expertise when it conflicts with local knowledge and common sense.  In cases where this discrepancy is large, citizens are often motivated to take action on the belief that the experts have failed them.  One case in which this can occur is when experts fail to apply the science in the context of the “real world” as the public understands it.  According to Corburn, when experts failed, it was because

Residents failed to “see themselves” in the science, and the professional study failed to bring on board the very public whose health it was trying to assess [1].

In Washington, D.C., one parent, Thomas Walker, knew the 2004 CDC MMWR report must be false after his daughter’s lead poisoning was linked to drinking water [3].  Direct observations like this one, as well as the fact that the 2004 MMWR report disagrees with all previous work on lead in drinking water, served as red flags that the CDC experts were missing something.  For these reasons and others, the residents of Washington, D.C., with the help of concerned scientists and physicians, took action to disprove the “expert opinion” given by CDC.


Thank you for reading!  I’m really focusing today on the relationships between power, expertise, and the public, and I would love to hear your ideas on this issue, especially if they disagree with mine. Some questions you might be able to help me by answering:  What, do you think, is the ideal role of “experts” in society?  What are some “red flags” that experts have failed?



[1] Corburn, J. 2005. “Street Science: Characterizing Local Knowledge.” In Street Science: Community Knowledge and Environmental Health Justice, pp. 47-77. Cambridge, MA and London, UK: The MIT Press.

[2] Sismondo, S. 2010. “Expertise and Public Participation.” In An Introduction to Science and Technology Studies, 2nd ed., 180-188. West Sussex, UK: Wiley-Blackwell.

[3] Edwards, M., 2010, unpublished letter to the US Department of Health and Human Services (5/27), 2 pages.

Weekly Critique — October 25/6

Here is my blog for this week!  Better late than never, I guess.  In some ways, I am glad I waited to post this, because the presentation and discussion last night definitely changed my opinion about some things.  How about you? 


Scientific Research is Flawed.  Many of our class discussions have centered on the idea that much of scientific research is flawed; however, I was (perhaps naively) shocked to hear Ioannidis’s claim that this includes 90% of the medical research that doctors rely on when treating patients [1].  He attributes this to increasing pressures placed on researchers in the “publish or perish” environment they work in.  He also points out that results may be manipulated unintentionally to bias a study, with researchers seeing what they want to see in the data.  After the presentation and our discussion last night, I also started to wonder:  to what extent do these researchers really believe the flawed science they are publishing?  My initial assumption was that most researchers are only able to “fool themselves” in cases where the results are borderline and require a judgment call (e.g. it’s almost significant at 95% confidence, or it’s sort of correlated); however, the events of last night make it clear that I was grossly underestimating the power of self-deception.

Are These Flaws Corrected?  Among those who accept that much research is flawed, many believe that the problem will be solved naturally by the self-correcting nature of science, and that good research will eventually win through.  However, as we have discussed in class, this may not be the case.  One article argues that even after research errors are spotted, they can persist for years or decades [1].   This is definitely true of the articles regarding the health consequences of the D.C. Lead Crisis.  In one of the recently published corrections to the 2004 CDC MMWR, the CDC states:

These results should not be used to make conclusions about the contribution of water lead to blood lead levels in D.C. [2].

Never mind that these results were used for this purpose in the intervening six years.  And never mind that this study has not been retracted, and that it may continue to be cited for this purpose by others.  By failing to retract this study and others, the CDC may be perpetuating the idea that water cannot cause lead poisoning in children.  As one article describes it:

It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals [1].

We Don’t Know Everything.  I think before we can address the flaws in science we, as scientists and engineers, need to admit that we don’t know everything.  This is related to the idea of “technologies of humility” discussed in the readings for this week.  Jasanoff implies that our tendency is to ignore what we do not understand, and states that “the unknown, unspecified, and indeterminate aspects of scientific and technological development remain largely unaccounted for in policy making” [3].  Why do we do this?  Are we afraid of losing the public trust?  I propose that this is beginning to happen anyway, as the general public becomes ever more aware of the shortcomings of science as it exists today.  Isn’t it better that they hear it from us?  And as one author puts it:

How long will we be able to fool the public anyway? [1]


Thank you for reading!  Today I really question the scientific community as it exists today and the idea of self-correcting science, and I would love to hear your ideas, especially if you disagree. Some questions you might be able to help me by answering:  Do you truly believe science is self-correcting?  Do you see any “way out” of the current problems with science?



[1] Freedman, D. H. 2010. Lies, Damned Lies, and Medical Science. The Atlantic (Nov.), pp. 1-12, http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/.

[2] CDC. 2010. Notice to Readers: Limitations Inherent to a Cross-Sectional Assessment of Blood Lead Levels Among Persons Living in Homes with High Levels of Lead in Drinking Water. MMWR 59(24):751, http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5924a6.htm.

[3] Jasanoff, S. 2012. Technologies of Humility: Citizen Participation in Governing Science. In M. Winston and R. Edelbach, eds., Society, Ethics, and Technology, pp. 102-113. Boston, MA: Wadsworth.

Weekly Critique — October 18

A key theme for today’s readings involved the concept of public trust and its applicability to the D.C. Lead Crisis, particularly with respect to the CDC.  For any government organization, including the CDC, to operate effectively, it must have the trust of the public.  Without this trust, any recommendations the agency makes, such as those concerning public health, will go unheard.  The organization will have no credibility.  Perhaps this is why Dr. Edwards, in his Congressional Testimony, says “it is critically important that the CDC retain the public’s trust” [1]. 

However, trust is fragile, in large part because it involves an unbalanced relationship; according to one of the readings for today:

Trust involves risk-taking . . . Because the entrusted person could harm, manipulate, or exploit the trusting person, the trusting person is in a position of vulnerability [2].

This relationship between power and trust has at least two important consequences.  First, the unbalanced nature of the trust relationship places an obligation on the entrusted person or organization – in this case, the CDC – to protect the trusting person.  Second, it follows that in order to maintain its powerful position as the ultimate authority on health in the U.S., the CDC must be ever vigilant in maintaining this trust.

Obligations of the Entrusted.  The obligations involved in the trust relationship that “experts” have with the public are something we have often discussed in this class.  As an example, in one of the class readings, Harris explains the obligations of engineers:

The obligation of engineers to protect the health and safety of the public . . . sometimes requires that engineers aggressively do what they can to ensure that the consumers of technology are not forced to make uninformed decisions [3].

This idea of an obligation to protect public health and safety can also be applied to the EPA and the CDC, and provides a sharp contrast to the actions these organizations actually took:

[the actions of the CDC and EPA] created a public relations coup that protected the agencies’ interests at the expense of public health [1].

Consequence of Loss of Trust.  Because of the fragile nature of trust and the necessity of trust for the agencies to perform their function, it is logical that these groups would generally act in an effort to preserve this trust.  For this reason, it makes sense that, in the aftermath of the D.C. Lead Crisis, the agencies would need to make an effort to recover public trust:

I expected that the responsible agencies would work hard to redeem themselves and once again make themselves worthy of the public trust [1].

However, as we know, these agencies did not do this.  This contrast between the level of concern demonstrated by the agencies and the precarious situation they are in with respect to the public trust is itself noteworthy.

Reflection on Last Week.  Last week, I discussed the CDC’s view of the public and its seeming reluctance to provide proof to support its refutation of the Salon article.  The following definition, given by Resnik in today’s readings, seems particularly appropriate to that discussion:

TRUST is different from FAITH because it is usually based on some evidence [2].

As members of the public, we have faith in the mission of the CDC – what we now require is the evidence.


Thank you for reading!  I talked a lot today about the concept of trust and how it relates to the actions of the CDC in the D.C. Lead Crisis, and I would love to hear your ideas. Some questions you might be able to help me by answering:  How has your trust in government organizations been altered as a result of these readings?  This class in general?  Do you see anything I missed?



[1] Edwards, M. 2010. Experiences and Observations from the 2001-2004 “DC Lead Crisis” [Testimony before the US House of Representatives Committee on Science and Technology, 111th Congress] (May 20), pp. 1-40.

[2] Resnik, D. B. 2010. Scientific Research and the Public Trust. Science and Engineering Ethics (8/29), pp. 1-11.

[3] Harris, C. E., Jr., et al. 2009. “Trust and Reliability.” In Engineering Ethics: Concepts & Cases, pp. 115-134. Belmont, CA: Wadsworth.

Weekly Critique — October 11

Learning to Listen in Washington, D.C.  As I read the assignments for today’s class, I found that the topic of public participation in science is one that really resonates with me.  In particular, I was struck by the disconnect between the CDC’s perception of the public’s knowledge and the truth.  The Salon article by Ms. Renner is critical of the CDC’s actions, but, more than anything, it asks for answers.  Doctors, parents, and activists have questions that the CDC should be able to answer:

Parents wondered whether the water could have caused speech and balance problems, difficulty with learning, and hyperactivity [1].

A pediatric epidemiologist . . . says, “It is critical to investigate how and why these earlier studies failed to show any increase in children’s blood-lead levels” [1].

“Why has CDC kept quiet about these results?” asks Yanna Lambrinidou [1].

However, CDC’s response is disappointing.  Statements intended to reassure abound, but the response contains little hard evidence against the accusations in the article.  It seems to me that CDC is operating under the “deficit model” of public scientific understanding, which is characterized by “a linear, pedagogical, and paternalistic view of communication” [2] and emphasizes the idea that the public cannot understand and appreciate the achievements of science [2].  This view of the public is implied by the lack of concrete scientific evidence in the statement.  In addition, some actions are explained away using an argument that sounds very much like “This is how we always do things in science.  Not that you would understand.”  For example, consider the section that begins:

It is common practice in scientific circles to present preliminary findings at scientific meetings . . . [3].

Contrary to CDC’s view of the public, the WASAwatch blog demonstrates unequivocally that this insult to the intelligence of D.C. residents will not go unnoticed.  The blog demands scientific evidence to refute the accusations of misconduct, challenging stereotypes of “the public” as scientifically uninformed [4].  This particular phrase summarizes this perfectly:

We have learned to tell truth from spin.  And our patience with spin has ended [4].

As a final thought, it is interesting to consider how this disconnect between the perception and the reality of the public’s understanding of science affects the balance of power between the “expert” and the “public.”  In particular, one article noted:

 The growing public perception of scientific expertise as interest-laden . . . is damaging the credibility of traditional decision-making arrangements that involve only experts and policy makers [2].

To me, this passage means that the public is “on to” the fact that experts can and do have agendas, making a claim to be an “expert” ever more suspect.  Already, this is changing the way decisions are made; it will be interesting to see how much more the decision-making process will change in the future.



Thank you for reading!  I really focused on the D.C. case today, but I recognize that there are many possible examples, and I would love to hear your ideas. Some questions you might be able to help me by answering:  Do you also see a disconnect between how CDC treats the public’s level of knowledge and what it really is?  In what ways do you think the public perception of “expertise” has changed and is changing?  Do you see anything I missed?




[1] Renner, R. 2009. Health Agency Covered Up Lead Harm: The Centers for Disease Control and Prevention Withheld Evidence that Contaminated Tap Water Caused Lead Poisoning in Kids. Salon.com (April 10):1-3, http://www.salon.com/news/environment/feature/2009/04/10/cdc_lead_report.

[2] Bucchi, M. and F. Neresini. 2008. “Science and Public Participation.” In E. J. Hackett, et al., eds., The Handbook of Science and Technology Studies, pp. 449-472. Cambridge, MA: The MIT Press.

[3] Centers for Disease Control and Prevention (CDC). 2009. CDC Responds to Salon.com Article [Media Statement] (April 10), 2 p., http://www.cdc.gov/media/pressrel/2009/s090410.htm.

[4] WASAwatch. 2009. What the CDC Can Learn from the National Research Council and the Public [blog entry] (May 3), 10 p.,  http://dcwasawatch.blogspot.com/2009/05/what-cdc-can-learn-from-national.html.

It’s Called the “Freedom of Information Office.” So Why Isn’t the Information Free?

I’ll be posting my usual Weekly Critique later today, but, for now, here’s an amusing clip (around 40 seconds) that I uploaded to YouTube today.  It’s from the TV show “The Lone Gunmen,” and it relates to the characters’ frustration with FOIA.  Our recent discussions in class have touched on the FOIA system and how it doesn’t always work, so I thought this clip was at least somewhat appropriate to the topic.  The title of this post is taken from another part of the same episode.



*********************** Additional Information ***********************

Link to the complete episode, in case you’re interested:  http://www.youtube.com/watch?v=zwhuVw_HhQc
More about this TV show:  http://en.wikipedia.org/wiki/The_Lone_Gunmen_(TV_series)

Weekly Critique — October 4

D.C. Revisited – Paternalistic versus Kantian Perspectives.  While reading the “Troubled Waters” articles and the subsequent exchange of letters by Renner and Guidotti, I was intrigued by the two differing ethical frameworks used by the authors in an attempt to justify their irreconcilable views.  Guidotti seems to be arguing from an almost paternalistic perspective, given the definition of paternalism in our readings from two weeks ago: “the interference with a person’s liberty of action justified by reasons referring exclusively to the welfare, good, happiness, needs, interests, or values of the person being coerced” [1].  A particularly striking example of this is his statement in a letter to the Washington Post about the erratum he submitted at the request of the EHP panel:

The suggestion that our conclusion was published by mistake . . . risks creating panic in the community where none is warranted [2].

My interpretation of this statement is that, to Guidotti, avoiding a public “panic” is more important than getting to the truth concerning the D.C. Lead Crisis.

On the opposing side, Renner is arguing from a position that is unmistakably Kantian.  According to our earlier readings, “Kant . . . stresses the rational nature of humans as free, intelligent, self-directing beings . . . a person – as a rational being – should have the right to make up her or his own mind” [3].  Ms. Renner echoes this in the closing statement to her final letter to Guidotti, in which she says:

The people who lived through the crisis, the public health community, and parents everywhere deserve to know the full health consequences and circumstances surrounding this event [2].

By making this statement, she stresses that the truth about this situation is something that we, as self-directing beings, have the right to demand.  Although it is likely, given his history of misconduct, that Guidotti is taking this position for reasons that are less than pure, it nonetheless presents an interesting case of paternalism versus the Kantian perspective and the different conclusions that can result.

Personal Reflection on Care Ethics and the Role of Ethics in Society.  Two weeks ago, I wrote a section in my weekly critique that was highly critical of the concept of ethical theory.  In that critique, I said “in my opinion, it is unrealistic, even arrogant, to assume that we can construct a theory that is independent of place, time, and culture.” For this reason, it really resonated with me when one of our readings for today pointed out that one of the requirements for these theories is the supposition that moral decisions are made “in a vacuum” [4].  The introduction of care ethics, for me, alleviates some of my earlier concerns about moral theories and their practical application.  The foundation of care ethics is something I can definitely agree with:

In care ethics the connectedness of people is key [4].

After reading this, I started thinking about ethics and its role in society.  I propose that, perhaps, ethics is only relevant in the context of the connections we have to our fellow human beings.  Thinking in extremes, the concept of ethics would have little relevance in a “society” of one person.  There is no one to lie to, steal from, etc.  There are no potential victims here; no connections to other human beings.  It is when we begin forming societies with norms of behavior that these issues begin to arise.


Thank you for reading!  I expressed a lot of personal opinions today, and I would love to hear your opinions on these issues, *especially* if they disagree with mine.  Some questions you might be able to help me by answering:  How do you feel about my attribution of Mr. Guidotti’s behavior to a paternalistic ethical view?  Is it too generous/harsh?  Do you agree/disagree with the statement I made about the role of ethics in society?  Do you see anything I missed?


[1] Gostin, L. O. 2010. “Public Health Ethics.” In Public Health Law & Ethics, pp. 59-65. Berkeley, CA: University of California Press.

 [2] Renner, R. 2009. [Letter to the Editor – Response to letter by Tee L. Guidotti] AAAS Professional Ethics Report XXII(4):3-4.

 [3] Van de Poel, I. and L. Royakkers. 2011. “Normative Ethics.” In Ethics, Technology, and Engineering: An Introduction, pp. 89-95. West Sussex, UK: Wiley-Blackwell.

 [4] Van de Poel, I. and L. Royakkers. 2011. “Normative Ethics”  In Ethics, Technology, and Engineering: An Introduction, pp. 102-108. West Sussex, UK: Wiley-Blackwell.