Experts and the Public – a camera obscura?


In The German Ideology (1845), Marx makes the case that men exist under a “false conception” of their political and social circumstances (Calhoun, 142). The circumstances within which a man lives and works are a fabrication of his brain – created both from within and from the dominant relationships of other men. Men go through their lives seeing as through a “camera obscura” – an illusion (ibid., 143). This illusion comes mostly from the ideas of the ruling class, which, when taken up by the working class, are called “false consciousness” (Calhoun). Marx’s intent is to teach men to “exchange these imaginations for thoughts which correspond to the essence of man” (ibid., 142). Why did Marx use the metaphor of a camera obscura? Why does he extend the metaphor in his critique of idealism and his development of historical materialism? And can this metaphor and the methodology of historical materialism be extended as a lens through which to examine STS concepts – specifically the relationship between experts and the public?

Why does Marx use the metaphor of the camera obscura? In the Economic and Philosophic Manuscripts of 1844, Marx details his observations of the inverse relationships of the worker and the capitalist.

“Labor produces wonderful things for the rich – but for the worker it produces privation. It produces palaces – but for the worker, hovels. It produces beauty – but for the worker deformity. It produces intelligence – but for the worker, stupidity, cretinism” (ibid., 149).

One can understand from this repeating, rhythmic conception how Marx began to view the differences between the worker and the capitalist as opposites. Marx saw this disparity and, in order to express it as a phenomenon which workers could identify and better understand, crafted his metaphor of the camera obscura. He extends the metaphor from the empirical observation of the extreme differences in circumstances between workers and capitalists to a more esoteric critique of Hegel and idealism (Giddens, 18). “If in all ideology men and their circumstances appear upside-down as in a camera obscura, this phenomenon arises just as much from their historical life-processes as the inversion of objects on the retina does from their physical life-process” (ibid., 144).

Marx extends the metaphor as a way to articulate and differentiate materialism from the contrasting idealism of Hegel. Marx argues that ideology creates a distorted, alienated consciousness in man. Man’s imagination of his circumstances arises from being taught history as the fulfillment of idealistic and universalistic goals. But these goals are socially constructed by the ruling class for the purpose of maintaining the normative power structure. Ideology may make it appear that man’s understanding of his circumstances is true, but what it actually does is flip reality (the materialist understanding) on its head. According to Giddens, Marx flips his camera obscura back using the methodology of historical materialism, “as a perspective for the analysis of social development” (Giddens, 20). In short, historical materialism is a bottom-up methodology which examines actual worker circumstances as empirical data. It purports to identify the root causes of conflict and does not allow ideological or normative assumptions to bias results.

Marx’s camera obscura metaphor and his methodology of historical materialism would be an effective lens and framework through which to examine power relations between scientific experts and the public. Marx developed his methodology to help us empirically understand our world without the bias of ideology – to make reasoned, observed assessments of our circumstances without a road map created by the ruling class (experts) for the workers (public). As Marx’s workers did in 1845, does the public now also embrace an ideology in which experts have the power to determine their circumstances (facts) as they relate to science and technology? Has the public been taught through ideology/idealism to accept the judgements of experts as facts and to alienate themselves from their role in helping to create knowledge? How can the public and experts flip their perceptions and approach using the framework of historical materialism?

Shared Responsibility for Risk?

Based on our discussion yesterday, I see a strong need for a health-based standard focused not on water quality itself, but on how water quality impacts people medically. The Lead and Copper Rule (LCR) standard is necessary for water utilities to measure the content of water coming out of their plant and potentially through the distribution system. More discussion needs to happen around who is responsible for the water distribution once it leaves the plant. Every city is different, and every city has citizens with different opinions on who bears the burden and the cost of pipe replacement. So how can we prevent lead poisoning of people through the water supply? What would be an effective health-based standard that citizens impacted by the risk and those responsible for resourcing the fix could agree to? Blood lead level (BLL) does not seem like a good metric to resolve this problem. As Dr. Simoni Triantafyllidou noted, BLL is contingent on so many different variables. How can governments or their contracted utilities agree to a standard with so many variables out of their control? Is it the government’s responsibility in total to provide clean water with zero medical impacts? How would any institution be able to agree to that standard? How would a city or state budget for medical impacts on people? Where does the responsibility manifest today, and what has resulted? I think we can see that the attempt or assumption of putting responsibility for medical impacts upon the government creates an environment where “risk” studies do not identify risk.

How can we bring those responsible for governing and those governed together in shared responsibility? I don’t think this idea can even be discussed in an honest and open way unless the conversation can take place with full transparency and without retribution. Science could be the mechanism where this takes place. A shared responsibility for science – data collection, research question development, monitoring, testing, loosening of definitive causality requirements for resourcing, analysis of risk – may create an environment where understanding of the science (trust of the information) becomes owned by all impacted. Could effective decisions on avoiding or mitigating risk, and on resourcing against that risk, be prioritized or even accomplished if government and citizens, through the shared mechanism of science, shared the responsibility for identifying and mitigating risks, including lead in drinking water?


The visual display of scientific information – is it accessible?

During the Washington D.C. water crisis, the Centers for Disease Control and Prevention (CDC) conducted a study purportedly to examine whether any harm was done to the public between 2001 and 2004. The study concluded that no harm occurred. After listening and talking to Dr. Simoni Triantafyllidou about her work on the same study objective as the CDC study, it becomes readily apparent that data manipulation is a key factor in the visualization of study outcomes. The CDC study lumped all blood lead levels (BLL) into a single data category, whereas Dr. Triantafyllidou isolated the BLLs by high-risk and low-risk populations. This simple adjustment to the data set yielded significantly different visual results. By combining the data set, the CDC softened the curve and generated results resembling those of previous years. Their conclusion that this resemblance demonstrated no risk is simply bad analysis, but that wasn’t obvious from just looking at their graphic. In fact, the CDC study itself demonstrated an increased risk, because it showed the Washington D.C. data as a shift above the national trend line. Dr. Triantafyllidou adjusted the picture to amplify that shift and wrote a different story line.
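The masking effect that Dr. Triantafyllidou’s re-analysis exposed can be illustrated with a minimal sketch. The numbers below are invented purely for illustration – they are not CDC or Washington D.C. data – but they show how pooling a small high-risk subgroup into a large low-risk one drags the combined average down and hides the elevated subgroup:

```python
# Illustrative sketch of subgroup masking through pooling.
# All values are hypothetical (ug/dL); they are NOT CDC or D.C. data.

low_risk = [2.0, 2.5, 3.0, 2.2, 2.8, 2.4, 2.6, 3.1, 2.3, 2.7]  # large group
high_risk = [9.5, 11.0, 12.5]                                   # small group

def mean(xs):
    """Arithmetic mean of a non-empty list."""
    return sum(xs) / len(xs)

pooled = low_risk + high_risk  # the single category the CDC-style view uses

print(f"low-risk mean:  {mean(low_risk):.2f}")   # 2.56
print(f"high-risk mean: {mean(high_risk):.2f}")  # 11.00
print(f"pooled mean:    {mean(pooled):.2f}")     # 4.51

# Because the high-risk group is small, the pooled mean lands far below
# the high-risk mean: a single-category graphic "softens the curve" and
# the elevated subgroup effectively disappears from the picture.
```

A stratified display (two curves, or side-by-side box charts per population) preserves exactly the signal that the pooled view erases.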

Edward Tufte has written several books on the visual display of quantitative information. Graphics reveal data and are instruments of communication, and in this way “data graphics are no different from words, [and] any means of communication can be used to deceive” (Tufte, 2001, 53). Tufte points out that when people are presented with visual information, they “quickly and naturally” direct their attention “toward exploring the substantive content of the data rather than toward questions of methodology and technique” (ibid., 20). “Our visual impression of the data is entangled” in the ideologies of the producer and consumer of the graphic (ibid.).

In the same way a scientist or an engineer can harness authority simply by virtue of their stated profession, a scientific graphic can harness authority because it looks like a fact. In the spirit of increased community participation in science, a sharedness of meaning must occur: complexity must become more accessible. Most scientific or engineering graphics do not do this well. “Imagine if graphics were replaced by paragraphs of words and those paragraphs scattered over the pages out of sequence with the rest of the text [or meaning] – that is how graphical and tabular information is now treated in the layout of many published pages, particularly in scientific journals and professional books” (ibid., 181).

Tufte recommends some techniques of graphic display which might help. Words, graphics, and pictures should be combined in the display of information, and segregation of meaning between the words and the graphics should always be avoided. Specifically, for graphics in exploratory data analysis (such as box charts of BLL over time), “words should tell the viewer how to read the design and not what to read in terms of content” (ibid., 182). The art and creativity of science lies in taking the data or facts and determining a finding. And citizens impacted by the risk a scientist or engineer may be studying have a right to see the data and draw their own conclusions.


Tufte, Edward R. (2001). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

A case against the ideal of universal citizenship and just science

Iris Young, in her chapter “Polity and Group Difference” from her book Throwing Like a Girl, examines the baseline assumption of “modern political thought generally” that the universality of “citizenship status transcends particularity and difference” (Young, 1990, 114). Given that modern society has (over time) awarded full citizenship status (here read as equal political and civil rights) to all groups, why do we still see inequality and, consequently, oppression (ibid., 114)? Young postulates that this inequality persists because the specificity of groups cannot be reconciled with the established, assimilated norms of citizenship. And, unless you are a member of the group which created those assimilated norms, you cannot, metaphorically speaking, cross the river. To cross over, different types of groups (women, blacks, American Indians, Hispanics, the elderly) must change holistically to create the homogeneity (historically, the white bourgeois male) required for assimilation.

And how does this idea of citizenship and assimilated norms impact science and the development of new knowledge? In Flint, we can see that the maldistribution of clean water was exacerbated by an extreme injustice of misrecognition – particularly among the poor and minority population of the city. These people should be heard by those in power (government) and those with authority to influence government decisions (scientists/engineers). They can and should identify data and create knowledge which has a direct impact on mitigating any negative risk brought upon them. In her essay “Five Faces of Oppression”, Young says that “social justice requires not the melting away of differences, but institutions that promote reproduction of and respect for group difference without oppression” (Young, 1998, 94). So what are these institutions, and how can they help bring participatory parity?
As an STS community, we need to ask ourselves: how should scientists and engineers engage effectively with a community such as Flint? How can they educate and inform a community, and what metrics would we use to know whether that community was indeed informed? What is the approach for these two entities (science and society) to achieve a shared understanding?

Young, Iris. (1990). Polity and Group Difference: A Critique of the Ideal of Universal Citizenship. In Throwing Like a Girl and Other Essays in Feminist Philosophy and Social Theory (pp. 114-137). Bloomington and Indianapolis: Indiana University Press.

Young, Iris. (1998). Five Faces of Oppression. In A.E. Cudd and R.O. Andreasen, eds., Feminist Theory: A Philosophical Anthology. Malden, MA: Blackwell Publishing.

Using FWC as a case study for uniting environmental and ecological justice movements

Is the Flint Water Crisis a case study for environmental justice or ecological justice? How can these two movements unite around a specific definition of injustice as it relates to Flint? If activists and movements employ the term in the same way, will they leverage more focus and more power? And if these movements were able to critique themselves and address the gaps laid out by Schlosberg, could they create a more powerful and effective resolution for future environmental and ecological justice concerns?

David Schlosberg asks these questions in his book Defining Environmental Justice (2007):

  • “What is the relationship between environmental justice, which addresses environmental risks within human communities, and ecological justice, focused on the relationship between those human communities and the rest of the natural world” (Schlosberg, 2007, 3)? 
  • Can we, “apply the same conceptions of justice, and the same broad discourse of justice, to both sets of issues – environmental risks in human populations and the relationship between human communities and nonhuman nature” (ibid., 6)? 
  • “Do those who speak of environmental justice, and those who call for ecological justice, understand the concept of ‘justice’ in similar ways” (ibid., 3)?

 Schlosberg identified two major gaps:

First Gap: the distance and relationship between justice theory (and theorists) and the environmental movement (and its activists). They are not considering, much less integrating, each other’s contributions effectively.

The Flint Water Crisis (and many other soon-to-be crises in other American cities) is a case study in justice theory. A maldistribution of clean water occurred as a function of a maldistribution of lead pipes and resources. In addition, misrecognition of those impacted by the polluted water occurred. Those without a ‘voice of authority’ could not get their concerns heard or respected. As a consequence, they were poisoned by water that they knew was bad but had been assured was clean. Without the ‘credentials’ to make their voices heard, how can they overcome procedural injustice with political and decision-making power? The Flint case study is ongoing, but how can the environmental movement learn from this and position itself to support not only Flint, but other communities in the future? What should the environmental movement consider and integrate from a justice-theory assessment of the Flint Water Crisis?

  • “The problem that I see is not that distributive theories of justice cannot be applied to environmental justice. Rather, the issue is that justice theory has developed a number of additional ways of understanding the processes of justice and injustice – and these developments have rarely appeared in the literature on the environmental justice movement” (ibid., 4).
  • “The environmental justice movement supplies ample evidence that all of these conceptions of justice are used in practice, and that, in fact, a comprehensive understanding of the way that movements define the ‘justice’ of environmental justice must include all of these discourses” (ibid., 5).

In addition to Distributive, the environmental movement should consider the following unapplied justice concepts:

  • Recognition – addressing the processes that construct maldistribution (Young, Fraser, Honneth)
  • Capability – capacities necessary for individuals to fully function in their chosen lives (Sen, Nussbaum)
  • Participation – necessary for individuals to ensure functioning (Fraser, Sen, Nussbaum)

Second Gap: disconnect between environmental justice and ecological justice

We should extend the organizing framework of environmental justice “to include the conception of ecological justice as well” (ibid., 7) – think Rachel Carson’s Silent Spring.

Lead pipes are poisonous – period. Every lead pipe in the country carrying water that touches humans should be removed. But why isn’t anyone talking about the initial problem: the severity of the pollution of the Flint River? Why can we not incorporate environmental justice efforts with ecological justice efforts? They are completely intertwined.

  • “The vast majority of work on environmental justice does not concern itself with the natural world outside human impacts, and most work on ecological justice does not pay attention to issues raised by movements for environmental justice.” (ibid., 6).
  • “We can draw parallels between the application of notions of justice as distribution, recognition, capability, and participation in both the human and nonhuman realms. A broad set of theoretical concerns, notions, and tools can be applied to both environmental and ecological justice” (ibid.).
  • There is “potential of using the same language(s) of justice in addressing both sorts of issues and relationships” in the environmental and ecological justice movements (ibid.).

The Flint Water Crisis could be used as a case study of how to expand justice in general by connecting the environmental justice movement with the ecological justice movement.

“Issues of inequality, recognition, participation, and the larger question of the capabilities and functioning of individuals and communities – human and nonhuman – can come together in a broad and inclusive discourse that can strengthen the explanatory (and mobilizing) power of the movements that use the language of environmental and ecological justice” (ibid., 8).

Schlosberg, David. (2007). Defining Environmental Justice. New York: Oxford University Press.

Poisoned Water

Documentary – Poisoned Water: What exactly went wrong in Flint—and what does it mean for the rest of the country?    Airing on PBS 31 May 2017, 7-9pm

Should we equalize the authority of citizens in regulatory science?

Richard Sclove asserts in his book, Democracy and Technology, that, given equal opportunity, non-scientists can not only understand technical information but also contribute to new scientific knowledge (Sclove, 1995, 50). He further argues that whether a non-scientist can grasp complex technical information is not the real problem. He contends that the real problem is the reverse argument – scientists making technical decisions without understanding the societal or political implications:

“Those who argue against lay involvement in socio-technological decision making often seem to be alluding to horrendous decisions and social consequences that they know will ensue. Yet review of actual experience with lay participation does not yield a bumper crop of disastrous decisions. After all, it was not panels of laypeople who designed the Three Mile Island and Chernobyl nuclear plants; who created the conditions culminating in tragedy at Union Carbide’s Bhopal, India, pesticide factory; who bear responsibility for the explosion of the U.S. space shuttle Challenger; or who enabled the Exxon Valdez oil spill” (ibid., 50).

Sclove argues that new scientific knowledge and technology are, “ineradicably value-laden and specific to particular cultural contexts, and thus should never be a candidate for monopolized production by supposedly impartial experts” (ibid., 50).

Sclove, Richard E. (1995). Democracy and Technology. New York, NY: Guilford Press.

Federal Register Notice

The lead modeling public peer review meeting will be held on June 27 and 28, 2017 in Washington DC. The registration deadline to attend the meeting in-person or via teleconference, and to request to make a brief oral statement at the meeting, is June 22, 2017.

What is Regulatory Science?

Regulatory science is created “through evolving systems of framing, classification, calculation, and control,” with the goal of “producing a new state of technologically improved, or designed, nature” (Jasanoff, 2005, 95). According to Jasanoff, regulatory science is the mechanism for providing legitimacy to basic science when it encounters society. She calls this effort normalization. It signals the starting point of the process whereby public trust institutions (such as the EPA or OSHA) make agreements with society about risk. Jasanoff discusses how regulatory science can be used as a mechanism to normalize scientific innovations that have a tremendous impact on the public. This attempt at normalization, typically driven by industry, is an attempt to create legitimacy by turning “vague, unnamed, and unbounded fears” into “specific and tractable” ideas (Jasanoff, 2005, 95).

Jasanoff posits that the normalization of regulatory science typically would bring “public perception back in line with the rational risk calculations made by experts” (Jasanoff, 2005, 99). In most events the normalized regulatory science would not be challenged. The authority of that regulatory science and its risk standards would stand as the measure of what society could consider safe. She argues that this “apparent closure of controversy was achieved in a period of American deregulation that reduced the type and intensity of scrutiny” established by regulatory science (Jasanoff, 2005, 99). Experts and other regulatory agencies would interpret this “lowering of skeptical oversight as evidence that the research was acceptably safe, but the stability of this conclusion strongly depended on the credibility of the U.S. regulatory process as a whole” (Jasanoff, 2005, 100).

So what do we as a society think about that credibility? Can we find examples where designated experts (EPA, OSHA, utilities, etc.) have used this normalized “legitimacy” to defend themselves from liability? Do experts claim the authority of science to support their conclusions and test results regarding the safety of – for example – drinking water? How does society challenge this regulatory science? Or better yet – how does society get involved in its creation?

Jasanoff, Sheila. (2005). Designs on Nature. Princeton NJ: Princeton University Press.