The Effect of Power on the Brain

Jerry Useem, “Power Causes Brain Damage,” The Atlantic, July/August 2017, pp. 24–26

“Absolute power corrupts absolutely,” said the 19th-century British politician Lord Acton.  The corruption of power often manifests as hubris among those who possess it.  The degree to which power corrupts not only the bad but also the otherwise good, moral person differs among individuals.  You can find countless books and articles on the subject of power and hubris in business journals and military leadership books.  The hubris of powerful people has been attributed to many things, from cold-heartedness or greed, to weakness of character, to personality defects or personal insecurity.  Wherever it comes from, it has led to numerous disasters throughout history (for example, Napoleon’s ill-fated invasion of Russia).  Last week I found an article in The Atlantic that explores another source for disorders of the powerful – brain damage.

Jerry Useem’s article highlights research on what seems observable and obvious to many of us – that people in positions of power seem to lose their ability to relate to their subordinates, and in some cases lose touch with reality in general.  Useem points to what neurologist Lord David Owen and co-author Jonathan Davidson call “hubris syndrome,” a disorder of people who hold positions of power for extended periods of time, characterized by contempt for others, loss of contact with reality, restless or reckless action, and displays of incompetence.  This is the first time I’ve seen these traits treated as a “disorder” with “clinical features” rather than a personality flaw or leadership defect.

Useem also mentions several studies that demonstrate impairment of certain neural processes, including “mirroring.”  Mirroring, as used here, is a subconscious form of mimicry in which watching someone do something causes the part of the brain we would use to perform that same action to “light up in sympathetic response.”  Research shows that among study participants who were considered powerful, the mirroring response worked less well than it did among those in the nonpowerful group.  Even when the powerful group was asked to make a conscious effort to increase the mirroring response, the results did not change.

Fortunately there are techniques for avoiding “hubris syndrome” and other disorders that come with the possession of power.  Extremely powerful people such as Franklin D. Roosevelt and Winston Churchill had confidants who kept them grounded and even humbled, simply by treating them as if they had the same obligations as the rest of us.  Unfortunately, as Useem points out, there is not a lot of appetite in the business world (nor, I would add, on the military or government side) for research on hubris.  I would recommend more of this research for use in leadership training.  Most of the toxic leaders I have come across are the last to realize or admit that they are responsible for the toxic atmosphere in their organizations.  In some cases it would be more effective to treat hubris as an unconscious response of the brain to the experience of sustained power rather than as a personality defect that most would deny having.

The research described in this article is also useful for ethnographic researchers who seek to understand the perspectives and insights of members of groups that are disempowered, silenced, and victimized.  It helps explain in part why “good people do bad things.”  Indifference toward victims, loss of touch with reality, and acts of blatant incompetence can result at least in part from “hubris syndrome,” especially when leaders do not make a conscious effort to remain grounded and in touch with their subordinates and clients.  Also, researchers themselves must be aware of how they come across to the people they are interviewing.  As Yanna pointed out in class, how one listens can be a source of either empowerment or annihilation to the person being interviewed.  The ability to see yourself as others see you is critical in ethnographic studies, and the researcher must adopt strategies to keep that ability from becoming “anesthetized.”

On Kantian theory – a response to SciTechCyber

Dear SciTechCyber,

  • If the first formulation of Kant’s categorical imperative (i.e., obligatory rule) is the universality principle (i.e., “Act only on that maxim which you can at the same time will that it should become a universal law”), and
  • If Kant’s categorical imperative also “implies a postulate of equal and universal human worth,”

do you think that adoption of Kant’s categorical imperative by any of the key participants in the DC Lead Crisis may have changed the history of the case and prevented or reduced DC residents’ exposure to lead?

I wonder what you think!

Science Experts and Ethics

When discussing science experts, a great example comes from Ibo van de Poel and Lamber Royakkers’s description of the Ford Pinto. As a political form of science and technology, the automobile industry is powerful! In the Ford Pinto case, the science experts at Ford neglected the safety of the public in their design and placement of the car’s fuel tank. The design made the Ford Pinto susceptible to combustion during a rear-end impact. The case demonstrates the ethical issues that arise when a self-governing science community releases products into the public domain.

When self-governing, science experts hold power within their own ethical boundaries. The ethics of science experts become visible as the scientific artifact intersects with the public. That intersection of scientific expertise with the public domain exposes ethical decisions about public safety. To protect public safety, independent or third-party organizations began to intervene.

Beyond our reading assignment, the outcome of the Ford Pinto case generated the need for governance outside the science experts. For example, both government and private entities now help ensure that the expertise and politics of car manufacturers do not supersede the ethics of public safety.

At a closer look at the governing bodies for crash testing, the contributors overseeing the process of “crash testing” cars come from three primary organizations: the National Highway Traffic Safety Administration (NHTSA), the Department of Transportation, and the Insurance Institute for Highway Safety (IIHS), an independent organization funded by auto insurers. In comparison to the DC and Flint water crises, there are three distinct differences:

  1. Part of the testing of car safety comes from an independent contributor, the IIHS. The role of the IIHS removes the power of car manufacturers to oversee the safety of their own technology.
  2. There is a link between car manufacturers and auto insurers. Much of the influence on car safety comes from auto insurers, whose economic interests establish a third actor connected to the safety of car owners.
  3. Today, the governance of car safety has the authority to force car manufacturers to comply. For example, some states require annual vehicle safety inspections that operate independently from car manufacturers, checking a vehicle for hazards such as exhaust emissions, failing brakes, and problems with the frame and suspension. Since the late 1990s, several states have also required a machine-generated emissions test.

In relating this to the water crises of DC and Flint, both federal and state governments have generated policies for public safety pertaining to automobiles. However, the governance of residents’ drinking water lacks the same level of granularity. Why? In the DC and Flint water crises, the governance of public water lacked the authority and influence of an independent contributor such as the IIHS. The EPA can only report issues without enforcing mitigation.

By comparison, the science of the IIHS helps govern the products of the automobile industry. The relationship between the IIHS and the auto industry helps balance the ethics in the design and production of car safety. Yet the EPA remains without influential power. Theoretically, the EPA’s science generates policies and procedures, but without the power of enforcement. Without any “power for enforcement,” the EPA lacks the ability to ensure ethical conduct by scientific experts.

On in-depth interviewing – a response to Signalinda

Dear Signalinda, excellent list of highlights from Liamputtong’s (2009) chapter on in-depth interviewing. In your comments you, very appropriately I think, emphasize the importance of “insider” perspectives. Indeed, I think that the ethnographic approach to knowledge-making is unique in requiring accurate and complete portrayals of interviewees’ views, and in expecting that interviewees recognize themselves and their words in researchers’ representations of them. Why, do you think, is spending time to hear and understand “insider” perspectives important, especially when the insiders are marginalized? What, do you think, might ethnographic listening skills offer to professionals in positions of power?

On utilitarianism and engineering ethics – a response to B-coming Future Engineers

Dear B-coming Future Engineers,

Thanks for your reflection. I was interested to read your observation that utilitarianism is inconsistent with the engineer’s code of ethics. I don’t have a position on this (mostly because I haven’t thought about it this way), but I do wonder what led you to your assessment. The Ford Pinto case you mention seems, indeed, to be a clear example of utilitarianism’s inconsistency with the engineer’s duty to “Hold paramount the safety, health, and welfare of the public.”

At the same time I wonder if utilitarianism is used routinely in engineering decisions about what products to make or public policies to support, defend, or promote, and is taken for granted as an appropriate moral framework for making technoscientific judgments. Is it possible that under certain circumstances, utilitarianism is consistent with engineering ethics? Or that it can be simultaneously consistent and inconsistent?

The creation of the Lead and Copper Rule (LCR) was based on a utilitarian calculation.
Thanks to the LCR, today all large water utilities (and all small and medium water utilities that exceed the Lead Action Level) are required to implement corrosion control treatment, which is believed to have significantly lowered lead levels in US tap water. This is an undeniable victory. However, on the basis of another utilitarian calculation involving estimates of public health harm from lead in water, EPA decided to make 15 ppb lead (instead of zero, which is the health-based standard) the LCR’s enforceable level that triggers remedial requirements.
EPA also decided that the LCR would allow every high-risk home to dispense up to 15 ppb lead and up to 10% of high-risk homes to dispense any concentration of lead whatsoever.
Given that the LCR was created to protect the public’s health from lead in drinking water, is it a morally sound regulation? Does it conform with utilitarianism? Does it conform with the moral standard of many parents who want to know that when their water utility declares their water “safe,” there is no lead coming out of their taps and no possibility of health harm?

I was intrigued by your question about the potential “genderedness” of utilitarianism. I think it’s a valid question. I have often wondered about the “genderedness” of the LCR. How would this regulation look if it were written by pregnant women and moms of young children? Two articles related to this issue that I’ve found interesting are:

Speaking While Female, and at a Disadvantage

What Happens When Women Legislate?

I hope you find them useful too.

Parity of Participation

In her book Scales of Justice, Nancy Fraser draws an analogy between the evolution of justice and Thomas Kuhn’s scientific revolutions. As you are already familiar with Kuhn, I won’t describe his theory in detail except to emphasize the similarities Fraser draws. As with Kuhn’s scientific revolutions, a build-up of knowledge that challenges the existing paradigm (normal periods of science – and, in Fraser’s case, of justice) results in a shift to new, more truthful knowledge and a co-produced evolution of society (Kuhn, 1962). Fraser ties the periodization of justice to the idea of paradigm shifts, emphasizing the current transition from Keynesian-Westphalian justice, with its emphasis on nation-state boundaries, to post-Westphalian justice, characterized by globalization (Fraser, 2009, 16). Fraser divides justice into normal and abnormal situations, where abnormal indicates a conflict of shared views or a lack of consensus on the what, the who, and the how of justice, and portends a challenge to, and therefore an evolution of, the current paradigm (ibid.). Significant to this idea is the understanding that, although periods of normal justice may suggest consensus, it is likely that injustice is occurring but has not yet been challenged (ibid.).

Kuhn argues that scientific paradigm shifts rely on empirical data. Fraser, however, cautions that, when examining framing disputes, abnormal justice is not “reducible to simple questions of empirical fact” (ibid., 68). In Fraser’s argument, justice in abnormal situations cannot be resolved by the authority of power or science. When deciding questions of justice, society must reject this authority, which she terms “monological,” since it is not “accountable to the discursive give-and-take of political debate” (ibid.). Instead, society must approach these abnormal justice paradigms with “unconstrained, inclusive public discussion” (ibid.). The result of this “dialogical” discussion would be “a new paradigm of normal discourse about justice, premised on new interpretations of the what, the who, and the how” (ibid., 72).

A further point of discussion is the foundation of these new normals (scientific and justice). Scientific research requires a peer-reviewed foundation of knowledge that can be experimented against until a build-up of anomalies generates another paradigm challenge. What is the foundation for justice – can it be measured, and is it empirical? Fraser’s dialogical theory of justice for abnormal times is a theory for transition. For normal times, what would be our methodology for identifying injustice – analogous to Kuhn’s scientific anomalies? I propose that Fraser addressed this question in her 2001 article “Recognition without Ethics?” The central idea in Fraser’s theory of justice is the principle of parity of participation. This standard claims that justice requires “social arrangements that permit all (adult) members of society to interact with one another as peers” (Fraser, 2001, 29). She states that “only those claims that promote parity of participation are morally justified” (ibid., 31). Fraser also says that parity of participation can act as the standard for evaluating demands for change where injustice is purported to occur. The parity of participation principle can provide the foundation upon which to build new knowledge and challenge current paradigms in a normal period of justice.


Kuhn, Thomas S. (1962). The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.

Fraser, Nancy. (2001). Recognition without Ethics? Theory, Culture & Society, 18, 21–42.

Fraser, Nancy. (2009). Scales of Justice. New York, NY: Columbia University Press.

On Lacey’s proposal about “impartiality” – a response to Ethical Frameworks

Dear Ethical Frameworks,

I was interested to see in your key highlights from Hugh Lacey’s “The Idea that Science Is Value Free” the suggestion that there might exist alternative pathways for achieving impartiality in science. Lacey describes impartiality as:

Involving the development and use of “proper grounds for accepting scientific posits or making scientific judgments.”

Impartiality in the dominant scientific worldview is connected to the belief that “unambiguous choices about which theories to accept, reject or deem as requiring further investigation” are achievable through the application of formal rules that guide scientists on how to collect, process, and turn empirical data into evidence. When scientists develop theories impartially, the claim goes, these theories are valid regardless of the scientists’ values.

Scientific education trains scientists to develop theories impartially by teaching them the “scientific ethos” – i.e., “the practice of such virtues as honesty, disinterestedness, forthrightness in recognizing the contributions (and opening one’s own contribution publicly to the rigorous scrutiny) of others, humility and courage to follow the evidence where it leads.” “Clearly,” asserts Lacey, “this is the stuff of which myths are made.”

Lacey is professor of philosophy at Swarthmore College. He opens his examination of the idea that science is value free with a point of tension: that this idea is contested by voices from “an eclectic variety of viewpoints: feminism, social constructivism, pragmatism, deep ecology, fundamentalist religions, and a number of third world and indigenous people’s outlooks.” It seems to me that, by extension, Lacey suggests that these voices also contest the foundations of the idea that science is value free: namely impartiality, neutrality, and autonomy. These foundations are interconnected – impartiality implies the “neutrality” of scientific theories (i.e., that they embed no values) and requires professional “autonomy” (i.e., that science is made by scientists alone, that criteria for membership in the scientific community are developed by scientists alone, and that the content of scientific education is created by scientists alone).

All of this suggests that:

  • The construct of value-free science embeds (and, in fact, necessitates) the exclusion of non-scientists from scientific knowledge making (what might the implications of such a construct/vision/value be for cases of environmental contamination like the ones in DC and Flint?)
  • Membership in the scientific community is value based.

In this context, I see Lacey’s proposal that…

Perhaps impartiality can be regularly achieved only if there is a diversity (with respect to values and interests) of practitioners in critical interaction and some diffusion of cognitive authority. “Method” may require clashing value perspectives rather than the activities of practitioners who act individually out of the scientific ethos. Scientific appraisal may be communal or social: the product of interaction rather than the sum of individual acts of following the method (Longino 1990; Solomon 1992; 1994).

…as a possible opening for the inclusion in scientific knowledge-making of alternative viewpoints (e.g., those from feminism and indigenous communities across the globe) as well as of non-scientists.

Wondering what you think.

Reflection on Porter’s category of the “technical” – a response to Anonengineer

Dear Anonengineer, thanks for your reading reflection. Theodore M. Porter is a historian of science. He wrote the article “How Science Became Technical” to place “the category of the technical into historical perspective.” Porter associates the “technical” with difficulty, inaccessibility, esoteric concepts and vocabulary, and knowledge that resides in the domain of experts and away from the public sphere.

I share your observation about Porter’s argument that science became fundamentally and explicitly “technical” in the 20th century. I also appreciated reading what struck me as Porter’s complex account of the history that came before the 1900s, which included different gradations of the “technical” (as well as debates about it) and in different scientific disciplines (in math, for example, inaccessibility ran “through the whole history of science”).

For the purposes of our class, I am especially interested in the question of whether science is (or must be) fundamentally inaccessible to the public, or if science’s inaccessibility is a 20th century development that does more to secure science’s authority than ensure the creation of robust science. Porter tells us that in the 1700s, some scientists made concerted efforts to render science accessible to non-scientists because they viewed scientific education as necessary for enlightenment and for the advancement of justice and morals. He also submits that, “…the expansion of the technical [in the 1900s] encouraged [scientists] to adopt a stance of neutral, self-effacing objectivity. Ironically, the pose of disengagement has become one of the key supports for the authority of science in regard to practical, contested decisions about public investment, medicine, public health, and environmental questions. And this objectivity works most effectively not at times of open political contestation, but when the experts act as cogs in the machinery of bureaucratic action, advising administrators rather than appealing to an engaged public.”

Does this observation remind you of DC and Flint?

If the category of the “technical” has enhanced science’s status and granted scientists “absolute authority” over their specific scientific domain but has limited scientists’ authority over other domains, as Porter suggests, might this category involve important risks? Namely, that when it comes to their technical domain, scientists can:

  • Operate in a disengaged/insular fashion and grow resistant to “outside” evidence contradicting their established mindsets (e.g., the case of Jill Viles; the cases of DC and Flint, which involve multiple occasions of experts ignoring and discounting valid information from affected publics)
  • Convince themselves that their only professional responsibility is their given “scope of work,” and anything outside this scope can be ignored and even downplayed (e.g., in the case of DC DOH’s Lynette Stokes, the news that Washington, DC was in the middle of the most severe and extensive lead-in-water contamination event in modern US history)
  • Reinforce, perpetuate, and embrace conceptions of “the public” that are based on the deficit model, lacking understanding about non-scientists’ capacity to learn and master science
  • Exclude publics from participating in investigations and contributing their knowledge to technical issues that affect them directly (e.g., the case of the bacterial controversy in Flint).

Porter suggests that under the guise of the “technical,” scientists can exercise significant non-technical influence over both policy and politics. I wonder if you or others think that under this guise scientists can also exercise significant influence over affected publics’ access to critical information (e.g., information they would need to protect themselves from harm) as well as their ability (right?) to learn the relevant science and contribute meaningfully (and as equal partners) to scientific investigations about matters affecting them directly.

Would love to hear your thoughts.

On Mulkay’s “vocabularies of justification” – a response to the Skeptical Ethic

Dear Skeptical Ethic, thank you for your reflection. I am intrigued by Michael J. Mulkay’s concept “vocabularies of justification” and was glad to see that you mentioned it.

Mulkay is a British sociologist of science. He argues that the visible – and highly advertised – normative structure of science (i.e., that science is guided by principles of “rationality, emotional neutrality, universalism, individualism, disinterestedness, impartiality, communality, humility and organized skepticism”) ought to be understood as:

  • One of at least two normative structures (the other one being a less visible structure of “counter-norms” that offers a counter-principle for every principle in the visible structure, and that allows/justifies actions in direct opposition to the actions that the visible structure would prescribe)
  • An occupational ideology developed from idealized principles of leading/renowned scientists.

According to Mulkay, this ideology is articulated and disseminated through a specific vocabulary that scientists employ to a) portray and justify their actions to non-scientists and b) support their interests. The vocabulary is skillfully crafted to be compelling and persuasive but paints a picture of science that is incomplete and, even worse, misleading. Mulkay calls it a “vocabulary of justification,” claiming that a) it has been widely accepted, b) it is used to justify science’s claim for special political status, and c) it exposes science as an interest group with a dominating elite.

When you say that the visible normative structure of science ignores class, gender, and race, do you mean that:

  • It doesn’t explicitly consider or include class/gender/race in the principles that underlie it, or
  • It ignores the class/gender/race of its creators and, therefore, the possibility that science might actually be – to borrow from Held’s 1990 analysis of dominant moral theory – a class/gender/race-biased enterprise?

I would consider both observations accurate and would love to know more of your thoughts on this.

A question I have for everyone is: do you see any of the principles underlying the visible normative structure of science operating among scientists involved in the DC crisis? What about in the Flint crisis?

Also, what do you think about Mulkay’s concept of “vocabulary of justification”? If, as Mulkay suggests, this vocabulary is not only incomplete but also misleading, what specific interests does it serve? What is the special political status it promotes? How might it shape scientists’ relationship with the public? Where does it leave the public?

Given the interconnectedness between values and science, it does seem that values prevalent in local settings in which science is developed/expressed/applied ought to be considered as a potential influencing factor on the specific science that is developed/expressed/applied by scientists in different cultural/institutional settings. At the same time, I think Mulkay argues that the vocabulary of justification operating in the US operates in the UK as well. I would not be surprised if scholars of science have seen this vocabulary operating in many parts of the world (which could explain calls from grassroots movements in non-western countries for alternatives to the “materialist strategies” described by Lacey 1999).

Would love to hear your thoughts!

Scientist as Expert, Superman and Superwoman

Robert Merton was on a quest.  Science had been very successful in his lifetime: it split the atom and cured polio, among other things.  Science had delivered the “goods.”  Merton’s quest is a functionalist one.  He looks at the scientific community and tries to identify the “special” things at play in science that do not occur in society at large.  What does he find?  Merton tells us that the scientific community plays by different social rules.  Specifically, Merton argues that scientific communities are ruled by universalism (impersonal criteria for truth), communism (common ownership of ideas), disinterestedness (rational, objective motivation), and organized skepticism (show me!).  When scientists follow all of these rules, they act as supermen and superwomen, putting aside all the normal human motivations of success, greed, power, and other special interests.

There are those who disagree with Merton.  An easy violation of scientific norms is fraud, such as the Piltdown Man.  Mitroff looked at the Apollo space project and found that the scientists who worked on it were anything but disinterested – they had an extreme emotional commitment to the project.

Another to disagree is Michael Mulkay.  Mulkay shows that scientists routinely violate these norms.  But Mulkay goes further than rule-breaking and points out that scientific norms are sometimes at odds with one another, so that one norm is broken in order to follow another.  Mulkay gives us the example of the book Worlds in Collision by Immanuel Velikovsky, which proposed that historical catastrophes on Earth were the result of near collisions with large celestial bodies.  Other scientists saw this as pseudo-science and would not even read it.  The norms of organized skepticism and disinterestedness were being ignored because scientists thought the claim was inconsistent with the laws of mechanics.  Here, protection of established truths was held as more important than other scientific norms.  Of course, this raises the ideas of Kuhn.

Kuhn argues that it is not the rules of the community that make science; rather, it is the agreement of ideas that creates the scientific paradigm.  Scientific behavior is the problem solving done within the paradigm.  If Mitroff, Mulkay, and Kuhn are correct, then the norms of science that Merton describes seem to be more like cognitive norms than social ones.  When looking at actual scientific behavior, the norms of science are always in negotiation.  What does this say about the role of scientists out in the community?  There seem to be some social assets available to scientists when they are at large in the lay community.  But violating these superman and superwoman rules could cost – or add to – their credibility.  Emotional attachment may bring greater acceptance by a lay community.  But taking the spotlight and appearing on TV all the time as the “expert” may squander good feelings.  The bottom line is that the expert status of the scientist in the lay community is always in negotiation and tied to place.  What works in one community may not work in another.  An expert today may be forgotten tomorrow.



Kuhn, Thomas. (1962). The Structure of Scientific Revolutions.

Merton, Robert. (1973). The Sociology of Science.

Mitroff, Ian. (1974). American Sociological Review.

Mulkay, Michael. (1969). “Some Aspects of Cultural Growth in Science.” Social Research.