On Kantian theory – a response to SciTechCyber

Dear SciTechCyber,

  • If the first formulation of Kant’s categorical imperative (i.e., obligatory rule) is the universality principle (i.e., “Act only on that maxim which you can at the same time will that it should become a universal law”), and
  • If Kant’s categorical imperative also “implies a postulate of equal and universal human worth,”

do you think that adoption of Kant’s categorical imperative by any of the key participants in the DC Lead Crisis might have changed the history of the case and prevented or reduced DC residents’ exposure to lead?

I wonder what you think!

On in-depth interviewing – a response to Signalinda

Dear Signalinda, excellent list of highlights from Liamputtong’s (2009) chapter on in-depth interviewing. In your comments you, very appropriately I think, emphasize the importance of “insider” perspectives. Indeed, I think that the ethnographic approach to knowledge-making is unique in requiring accurate and complete portrayals of interviewees’ views, and in expecting that interviewees recognize themselves and their words in researchers’ representations of them. Why, do you think, is it important to spend time hearing and understanding “insider” perspectives, especially when the insiders are marginalized? What, do you think, might ethnographic listening skills offer to professionals in positions of power?

On utilitarianism and engineering ethics – a response to B-coming Future Engineers

Dear B-coming Future Engineers,

Thanks for your reflection. I was interested to read your observation that utilitarianism is inconsistent with the engineer’s code of ethics. I don’t have a position on this (mostly because I haven’t thought about it this way), but I do wonder what led you to your assessment. The Ford Pinto case you mention seems, indeed, to be a clear example of utilitarianism’s inconsistency with the engineer’s duty to “Hold paramount the safety, health, and welfare of the public.”

At the same time I wonder if utilitarianism is used routinely in engineering decisions about what products to make or public policies to support, defend, or promote, and is taken for granted as an appropriate moral framework for making technoscientific judgments. Is it possible that under certain circumstances, utilitarianism is consistent with engineering ethics? Or that it can be simultaneously consistent and inconsistent?

The creation of the Lead and Copper Rule (LCR) was based on a utilitarian calculation.

Thanks to the LCR, today all large water utilities (and all small and medium water utilities that exceed the Lead Action Level) are required to implement corrosion control treatment, which is believed to have significantly lowered lead levels in US tap water. This is an undeniable victory. However, on the basis of another utilitarian calculation involving estimates of public health harm from lead in water, EPA decided to make 15 ppb lead (instead of zero, which is the health-based standard) the LCR’s enforceable level that triggers remedial requirements.
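
In case it helps to see the shape of such a calculation, here is a toy sketch of a utilitarian decision rule. The numbers, option names, and weighting are entirely hypothetical illustrations of the general logic (maximize aggregate net benefit), not EPA’s actual estimates or method:

```python
# Toy illustration of a utilitarian decision rule: choose the option with the
# greatest total (benefits - costs) summed over everyone affected.
# All figures below are hypothetical and are NOT EPA's actual estimates.

options = {
    "strict action level (near-zero lead)": {"health_benefits": 100, "treatment_costs": 120},
    "moderate action level (e.g., 15 ppb)": {"health_benefits": 80, "treatment_costs": 40},
}

def net_benefit(option):
    """Aggregate welfare score for one regulatory option."""
    return option["health_benefits"] - option["treatment_costs"]

best = max(options, key=lambda name: net_benefit(options[name]))
print(best)  # the "moderate" option wins on aggregate net benefit,
             # even though some individuals remain exposed to lead
```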

EPA also decided that the LCR would allow every high-risk home to dispense up to 15 ppb lead and up to 10% of high-risk homes to dispense any concentration of lead whatsoever.
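
To make this sampling logic concrete, here is a minimal sketch in Python. The function names, sample values, and the exact percentile convention are my own illustration, not language from the rule: a utility’s 90th-percentile lead result from sampled high-risk homes is compared to the 15 ppb action level, which is why up to 10% of sampled homes can dispense any concentration of lead without triggering remedial requirements.

```python
# Illustrative sketch (not regulatory text): under the LCR, compliance with the
# 15 ppb lead action level hinges on the 90th percentile of lead results from
# sampled high-risk homes. Function names and sample values are hypothetical.

ACTION_LEVEL_PPB = 15.0


def ninetieth_percentile(samples_ppb):
    """Value at the 90th-percentile position of the sorted results, so that
    no more than 10% of samples lie above the returned value."""
    ordered = sorted(samples_ppb)
    index = -(-len(ordered) * 9 // 10) - 1  # ceil(0.9 * n) - 1, zero-based
    return ordered[index]


def exceeds_action_level(samples_ppb):
    """True if the sampled system exceeds the lead action level."""
    return ninetieth_percentile(samples_ppb) > ACTION_LEVEL_PPB


# Hypothetical results (ppb) from ten high-risk homes: nine at or below 15 ppb
# and one far above it. The system still "meets" the action level.
samples = [2, 3, 4, 5, 6, 8, 10, 12, 15, 900]
print(ninetieth_percentile(samples))   # 15
print(exceeds_action_level(samples))   # False -> no remedial requirements triggered
```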

Given that the LCR was created to protect the public’s health from lead in drinking water, is it a morally sound regulation? Does it conform with utilitarianism? Does it conform with the moral standard of many parents who want to know that when their water utility declares their water “safe,” there is no lead coming out of their taps and no possibility of health harm?

I was intrigued by your question about the potential “genderedness” of utilitarianism. I think it’s a valid question. I have often wondered about the “genderedness” of the LCR. How would this regulation look if it were written by pregnant women and moms of young children? Two articles related to this issue that I’ve found interesting are:

Speaking While Female, and at a Disadvantage

What Happens When Women Legislate?

I hope you find them useful too.

On Lacey’s proposal about “impartiality” – a response to Ethical Frameworks

Dear Ethical Frameworks,

I was interested to see in your key highlights from Hugh Lacey’s “The Idea that Science Is Value Free” the suggestion that there might exist alternative pathways for achieving impartiality in science. Lacey describes impartiality as:

Involving the development and use of “proper grounds for accepting scientific posits or making scientific judgments.”

Impartiality in the dominant scientific worldview is connected to the belief that “unambiguous choices about which theories to accept, reject or deem as requiring further investigation” are achievable through the application of formal rules that guide scientists on how to collect, process, and turn empirical data into evidence. When scientists develop theories impartially, the claim goes, these theories are valid regardless of the scientists’ values.

Scientific education trains scientists to develop theories impartially by teaching them the “scientific ethos” – i.e., “the practice of such virtues as honesty, disinterestedness, forthrightness in recognizing the contributions (and opening one’s own contribution publicly to the rigorous scrutiny) of others, humility and courage to follow the evidence where it leads.” “Clearly,” asserts Lacey, “this is the stuff of which myths are made.”

Lacey is professor of philosophy at Swarthmore College. He opens his examination of the idea that science is value free with a point of tension: that this idea is contested by voices from “an eclectic variety of viewpoints: feminism, social constructivism, pragmatism, deep ecology, fundamentalist religions, and a number of third world and indigenous people’s outlooks.” It seems to me that, by extension, Lacey suggests that these voices also contest the foundations of the idea that science is value free: namely impartiality, neutrality, and autonomy. These foundations are interconnected – impartiality implies the “neutrality” of scientific theories (i.e., that they embed no values) and requires professional “autonomy” (i.e., that science is made by scientists alone, that criteria for membership in the scientific community are developed by scientists alone, and that the content of scientific education is created by scientists alone).

All of this suggests that:

  • The construct of value-free science embeds (and, in fact, necessitates) the exclusion of non-scientists from scientific knowledge making (what might the implications of such a construct/vision/value be for cases of environmental contamination like the ones in DC and Flint?)
  • Membership in the scientific community is value based.

In this context, I see Lacey’s proposal that…

Perhaps impartiality can be regularly achieved only if there is a diversity (with respect to values and interests) of practitioners in critical interaction and some diffusion of cognitive authority. “Method” may require clashing value perspectives rather than the activities of practitioners who act individually out of the scientific ethos. Scientific appraisal may be communal or social: the product of interaction rather than the sum of individual acts of following the method (Longino 1990; Solomon 1992; 1994).

…as a possible opening for the inclusion in scientific knowledge-making of alternative viewpoints (e.g., those from feminism and indigenous communities across the globe) as well as of non-scientists.

Wondering what you think.

Reflection on Porter’s category of the “technical” – a response to Anonengineer

Dear Anonengineer, thanks for your reading reflection. Theodore M. Porter is a historian of science. He wrote the article “How Science Became Technical” to place “the category of the technical into historical perspective.” Porter associates the “technical” with difficulty, inaccessibility, esoteric concepts and vocabulary, and knowledge that resides in the domain of experts and away from the public sphere.

I share your observation about Porter’s argument that science became fundamentally and explicitly “technical” in the 20th century. I also appreciated reading what struck me as Porter’s complex account of the history that came before the 1900s, which included different gradations of the “technical” (as well as debates about it) across different scientific disciplines (in math, for example, inaccessibility ran “through the whole history of science”).

For the purposes of our class, I am especially interested in the question of whether science is (or must be) fundamentally inaccessible to the public, or if science’s inaccessibility is a 20th century development that does more to secure science’s authority than ensure the creation of robust science. Porter tells us that in the 1700s, some scientists made concerted efforts to render science accessible to non-scientists because they viewed scientific education as necessary for enlightenment and for the advancement of justice and morals. He also submits that, “…the expansion of the technical [in the 1900s] encouraged [scientists] to adopt a stance of neutral, self-effacing objectivity. Ironically, the pose of disengagement has become one of the key supports for the authority of science in regard to practical, contested decisions about public investment, medicine, public health, and environmental questions. And this objectivity works most effectively not at times of open political contestation, but when the experts act as cogs in the machinery of bureaucratic action, advising administrators rather than appealing to an engaged public.”

Does this observation remind you of DC and Flint?

If the category of the “technical” has enhanced science’s status and granted scientists “absolute authority” over their specific scientific domain but has limited scientists’ authority over other domains, as Porter suggests, might this category involve important risks? Namely, that when it comes to their technical domain, scientists can:

  • Operate in a disengaged/insular fashion and grow resistant to “outside” evidence contradicting their established mindsets (e.g., the case of Jill Viles; the cases of DC and Flint, which involve multiple occasions of experts ignoring and discounting valid information from affected publics)
  • Convince themselves that their only professional responsibility is their given “scope of work,” and anything outside this scope can be ignored and even downplayed (e.g., in the case of DC DOH’s Lynette Stokes, the news that Washington, DC was in the middle of the most severe and extensive lead-in-water contamination event in modern US history)
  • Reinforce, perpetuate, and embrace conceptions of “the public” that are based on the deficit model and lack understanding of non-scientists’ capacity to learn and master science
  • Exclude publics from participating in investigations and contributing their knowledge to technical issues that affect them directly (e.g., the case of the bacterial controversy in Flint).

Porter suggests that under the guise of the “technical,” scientists can exercise significant non-technical influence over both policy and politics. I wonder if you or others think that under this guise scientists can also exercise significant influence over affected publics’ access to critical information (e.g., information they would need to protect themselves from harm) as well as over their ability (right?) to learn the relevant science and contribute meaningfully (and as equal partners) to scientific investigations about matters affecting them directly.

Would love to hear your thoughts.

On Mulkay’s “vocabularies of justification” – a response to the Skeptical Ethic

Dear Skeptical Ethic, thank you for your reflection. I am intrigued by Michael J. Mulkay’s concept “vocabularies of justification” and was glad to see that you mentioned it.

Mulkay is a British sociologist of science. He argues that the visible – and highly advertised – normative structure of science (i.e., that science is guided by principles of “rationality, emotional neutrality, universalism, individualism, disinterestedness, impartiality, communality, humility and organized skepticism”) ought to be understood as:

  • One of at least two normative structures (the other one being a less visible structure of “counter-norms” that offers a counter-principle for every principle in the visible structure, and that allows/justifies actions in direct opposition to the actions that the visible structure would prescribe)
  • An occupational ideology developed from idealized principles of leading/renowned scientists.

According to Mulkay, this ideology is articulated and disseminated through a specific vocabulary that scientists employ to a) portray and justify their actions to non-scientists and b) support their interests. The vocabulary is skillfully crafted to be compelling and persuasive but paints a picture of science that is incomplete and, even worse, misleading. Mulkay calls it a “vocabulary of justification,” claiming that a) it has been widely accepted, b) it is used to justify science’s claim for special political status, and c) it exposes science as an interest group with a dominating elite.

When you say that the visible normative structure of science ignores class, gender, and race, do you mean that:

  • It doesn’t explicitly consider or include class/gender/race in the principles that underlie it, or
  • It ignores the class/gender/race of its creators and, therefore, the possibility that science might actually be – to borrow from Held’s 1990 analysis of dominant moral theory – a class/gender/race-biased enterprise?

I would consider both observations accurate and would love to know more of your thoughts on this.

A question I have for everyone is: do you see any of the principles underlying the visible normative structure of science operating among scientists involved in the DC crisis? What about in the Flint crisis?

Also, what do you think about Mulkay’s concept of “vocabulary of justification”? If, as Mulkay suggests, this vocabulary is not only incomplete but also misleading, what specific interests does it serve? What is the special political status it promotes? How might it shape scientists’ relationship with the public? Where does it leave the public?

Given the interconnectedness between values and science, it does seem that values prevalent in local settings in which science is developed/expressed/applied ought to be considered as a potential influencing factor on the specific science that is developed/expressed/applied by scientists in different cultural/institutional settings. At the same time, I think Mulkay argues that the vocabulary of justification operating in the US operates in the UK as well. I would not be surprised if scholars of science have seen this vocabulary operating in many parts of the world (which could explain calls from grassroots movements in non-western countries for alternatives to the “materialist strategies” described by Lacey 1999).

Would love to hear your thoughts!

STS 6234: Welcome all

This is our class blog — the space that will bring together your posts and will allow us to read and respond to your thoughts. Looking forward to exciting, creative, risk-taking, enlightening, and thought-deepening conversations! Yanna