Monthly Archives: September 2013

Is there a professional code of conduct for philosophy?

Very recently[1], I was asked to find a professional code of conduct or ethics statement for my academic discipline, which is currently philosophy. For a professional code of conduct, one of the more trusted sources is the American Philosophical Association (APA). Thus, it seemed most logical to consult the “Statements and Policies” page on the APA website, which links to 22[2] distinct statements. I took some time to examine that page, but it seems quite disorderly to me in the following ways: (1) there is a large number of disparate statements; (2) the statements themselves vary widely in length (i.e., from one sentence to several pages); and (3) many of the statements give recommendations or suggestions on what philosophers may do, as opposed to setting rules or clear guidelines on what is ethically (un)acceptable in actual practice (e.g., regarding sexual harassment or discrimination).

Overall, the APA Statements and Policies page provides plenty of useful information on the importance of studying philosophy well and the nature of philosophical inquiry. However, such information about the nature of studying the discipline itself is not what I think about when I encounter the phrase “professional code of conduct” or “ethics statement.” Instead, what comes to my mind in looking for a professional code of conduct is something similar to a code of conduct for engineers, or a code of ethics for engineering education, or a code of ethics for educational research: a single document that clearly lays out rules of practice, ethical standards (e.g., on plagiarism, on avoiding harm), or principles/guidelines that aid in establishing ethical courses of action in different contexts. Moreover, I see such principles or guidelines for philosophical practice – if they exist – as consisting of normative statements of appropriate ethical behavior for philosophers, as well as providing direction on the types of issues that philosophers are likely to encounter in their professional work. After all, I do not think that philosophical research is done in a vacuum (i.e., without interacting with other people), even though philosophers may not necessarily conduct experiments in the same way as researchers in disciplines that emphasize empirical approaches (e.g., science, engineering, business). Besides, I think that a professional code of conduct for philosophy could be used to highlight how similar philosophy is to other disciplines, since core ethical standards (e.g., honesty, integrity, respect) do not differ drastically across fields.

Having described what I look for in a professional code of conduct or ethics statement, I must report that such a code of conduct does not seem to exist for philosophy. Or, if such a code of conduct does exist, then it is safe to infer that not all philosophers are aware of its existence. It seems hidden, because most participants[3] in my program are not aware of any such code of conduct or ethics statement, whereas graduate students in other disciplines (especially engineering) are exposed to professional codes of conduct earlier in their training. Nonetheless, I will end by asking[4] other philosophers (or philosophically-minded people) the following questions: Do you know of a professional code of conduct or ethics statement for philosophy that meets my expectations (stated in the previous paragraph)? Do you think the existence of a code of conduct is beneficial (or not) for philosophy as a profession?


[1]A week and a half ago to be exact.

[2]Well, by my count there are 22.

[3]both current students and alumni.

[4]I encourage posting answers to my questions as comments.

Office of Research Integrity Case: Karnik, Pratima

The Office of Research Integrity (ORI) lists many cases of research misconduct every year. In choosing a case to focus on, I wanted one that is very recent. (The shorter the time elapsed between the case update and my comments, the more proactive my remarks can (potentially) be!) Thus, the case I will be commenting on was last updated on August 8, 2013: “Dr. Pratima Karnik, Assistant Professor, Department of Dermatology, Case Western Reserve University (CWRU), engaged in research misconduct in research submitted to the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), National Institutes of Health (NIH), in grant application R01 AR062378.”

In this case, Dr. Karnik plagiarized in two major instances: the first was in her NIH grant application; the second was in her research reports. In her grant application, she inserted text from a NIAMS grant application that she had reviewed. The second instance is slightly more complicated; specifically, she copied “significant portions of text”[1] from eight articles (listed in the case summary) into her own research reports! She also plagiarized from “one U.S. patent application available on the internet”[2].

For all this plagiarism, Dr. Karnik’s research will be scrutinized for only two years! That is quite a short time, in my opinion, considering the extent of her plagiarism. Nevertheless, the “Voluntary Settlement Agreement” that she entered into for those two years points toward how research ought to be performed. There are three parts to the Voluntary Settlement Agreement: (1) a plan for supervision of Dr. Karnik’s duties must be submitted to ORI for approval; (2) any institution employing Dr. Karnik must submit a certification to ORI – with each application for funding from the U.S. Public Health Service (PHS), or with each report involving PHS-supported research – that the content of the application or report is free of plagiarized material; and (3) Dr. Karnik must voluntarily exclude herself from serving in any advisory capacity to the PHS.

Out of the three parts, the one that stood out most to me was part (2). In addition to affirming the absence of plagiarism, the certification(s) that each institution employing Dr. Karnik will submit for the next two years must affirm that the data she provides are derived from a legitimate source (e.g., actual experiments), and that she accurately reports the “data, procedures, and methodology” sections in her applications (e.g., to PHS for funding) or in research reports based on PHS-supported research. For this reason, the more institutions Dr. Karnik is affiliated with, the more cumbersome[3] the process of obtaining such certification(s) will be! These requirements (i.e., accurate reporting in Dr. Karnik’s manuscripts and obtaining data from a legitimate source) highlight that there is an ethical component to doing empirical research that cannot be overlooked. After all, it is very important to be able to communicate and present one’s research effectively to audiences outside of the investigator’s research field (e.g., dermatology in Dr. Karnik’s case). And presenting empirical research[4] well to people outside one’s specialization often requires being prepared to answer questions about the data collection, as well as being able to explain lucidly which methodology was used and why it was chosen for the particular inquiry at hand.


[1]This phrase is subject to equivocation, and it raises the question of how much text counts as “a significant portion.” In any case, it is never acceptable to copy text verbatim without citing the original source.

[2]The Internet is never a safe place to store things, and should be treated with more care!

[3]notably with the increasing numbers of non-tenure-track or affiliated faculty.

[4]especially in cases where the research area makes some use of statistical methods!

Data-based decisions in Higher Education

The question posed in this article is:

how is it possible to simultaneously base decisions on data and innovate?

Succinctly, the author believes that data-based decisions and innovation are theoretically opposed[1] but not opposed in practice. Behind this question is a strong assumption, and a habit that Matt Reed apparently had to “unlearn”[2]: that we must look for the unassailable position – that things must be certain or determinate. As Reed puts it, “Grad school teaches … that if you meet a theory on the road, you try to kill it. The idea is to spot flawed arguments, so you can build strong ones.” Of course, I do not dispute that a prominent goal in various research areas (including Administration in Higher Education) is to build strong arguments, yet I have not encountered any (meaningful) arguments that are utterly irrefutable. I am not sure what exactly Reed studied while he was in graduate school, but I do not know of any discipline that insists on arguments being absolutely irrefutable or certain.

On the other hand, it is fairly uncontroversial that some disciplines make use of statistical concepts (e.g., variability, uncertainty, sampling, modeling) in one way or another! I have seen a variety of mathematical and statistical models applied to an assortment of research spanning from biology to engineering to the social sciences. Administration (i.e., Matt Reed’s current area of focus) is no different[3] in that it, too, uses statistical concepts. After all, it is widely accepted that the common role of statistical methods across academic disciplines is to provide a framework for learning accurate information about the world from limited data. The pervasiveness with which researchers accept uncertainty in their models reflects this extensive use of statistical methods; it is also a reminder that scientific reasoning can never be entirely certain.
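To make this concrete, here is a minimal sketch of what I mean by learning from limited data: estimating a mean from a small sample and reporting the uncertainty as a confidence interval. The numbers are entirely hypothetical and are my own illustration, not anything from Reed’s article.

# A minimal sketch of learning from limited data: estimate a population mean
# from a small sample and report the uncertainty as a 95% confidence interval.
# The sample values below are hypothetical and purely for illustration.
import math
from statistics import mean, stdev

sample = [4.1, 3.8, 5.0, 4.6, 4.4, 3.9, 4.8, 4.2]  # hypothetical measurements
n = len(sample)
xbar = mean(sample)
se = stdev(sample) / math.sqrt(n)  # standard error of the mean

t_crit = 2.365  # t critical value for 95% confidence with n - 1 = 7 degrees of freedom

lower, upper = xbar - t_crit * se, xbar + t_crit * se
print(f"point estimate: {xbar:.2f}")
print(f"95% confidence interval: ({lower:.2f}, {upper:.2f})")

The interval makes the uncertainty explicit: a smaller or noisier sample yields a wider interval, and that acknowledged uncertainty is exactly what data-based decisions have to live with.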

Another thing that stood out to me is how early the author acknowledged that there is a philosophical issue around causality – something I was pleased to see. However, what startled me just as much as this early acknowledgement is how little attention and detail Reed gives to explicating the philosophical issue(s) pertinent to causality. (In fact, causal inference is a topic quite heavily discussed both in statistics and in the philosophy of science.) Reed even admits that “the problem of inference and causality is real” (emphasis mine) in administration, but he says very little about it and about how it affects research in administration. He also states, “When something new comes along, it’s easy to object that the idea is ‘unproven.’” I find this statement unqualified, because there could well be evidence for a new idea, especially with the large amount of data available on the Internet. Hence, it is not necessarily a question of whether there is evidence or not – rather, it is a question of how one uses the available evidence, as well as the nontrivial[4] process of evaluating how strong (or weak) that evidence is for the research question at hand.
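As a small illustration of that last point (with my own hypothetical numbers, not Reed’s example and not the full Mayo and Cox methodology cited below), a simple one-sided test shows that evidence for or against a claim comes in degrees rather than as a binary “proven/unproven” verdict.

# A minimal sketch of graded evidence: a one-sided z-test of whether a new
# method's mean outcome exceeds a benchmark mu0. All numbers are hypothetical.
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function (standard library only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu0 = 50.0   # benchmark value under the null hypothesis
xbar = 52.3  # observed sample mean for the "new idea"
sigma = 8.0  # standard deviation, assumed known for simplicity
n = 40       # sample size

z = (xbar - mu0) / (sigma / math.sqrt(n))
p_value = 1.0 - normal_cdf(z)  # smaller p-value -> stronger evidence against mu0

print(f"z statistic: {z:.2f}")
print(f"one-sided p-value: {p_value:.3f}")

The point is not the particular numbers but the shape of the reasoning: the question is never simply “is there evidence?” but “how well has the claim been probed by the data we actually have?”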

References

Mayo, Deborah G., and David R. Cox. 2010. “Frequentist Statistics as a Theory of Inductive Inference.” In Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, edited by Deborah G. Mayo and Aris Spanos, 247–274. Cambridge: Cambridge University Press.


[1]I do not find the conflict obvious – how exactly are data-based decisions and innovation opposed? Matt Reed unfortunately does not go into more detail, and takes it for granted. However, I think that insight from an experiment (i.e., a data-based decision) can be a starting point for innovation.

[2]The fact that Reed had to unlearn such a habit surprised me greatly, as reasoning in a probabilistic manner is second nature to me, especially with my copious training in probability theory and statistics.

[3]Some disciplines rely more heavily on statistical methods, while others make less direct use of statistical concepts – the reliance on statistical methods may not be equal, but it is present in various research areas!

[4]My current research advocates a methodology (described in (Mayo and Cox 2010)) that provides a systematic way to evaluate how strong the evidence is for a particular (statistical) hypothesis.

Inside Higher Ed: Making Sense of the Higher Ed Debate

Today, higher education is under scrutiny to explain what it does and why, while reformers from the White House to Wall Street are eager to provide alternatives.

What Johann Neem calls “the Higher Ed Debate” in this very recent article is a verbose way of stating the perhaps unsurprising fact that disagreements continue to exist – and do not disappear easily – among (1) the Obama administration, (2) higher education administrators and policy makers, and (3) faculty members at universities and colleges. Plainly put, such disagreements among the 3 groups arise from the different assumptions each group holds about the nature and purpose of higher education in America. More troubling than the mere disagreements, however, is the deplorable fact that members of each group may not sufficiently understand the perspectives of the other groups, and may not even have a good grasp of the perspective their own group adheres to.

Yesterday’s article on “Understanding the different perspectives in the higher ed debate” from Inside Higher Ed (titled Making Sense of the Higher Ed Debate) seized my attention because of the illuminating way Johann Neem framed the Higher Ed Debate: in terms of 3 schools of thought (or “languages,” as Neem calls them) that the groups listed above are thought to represent, respectively: Pragmatism, Utilitarianism, and Virtue Ethics. As a philosophy major at the Master’s level, I think this way of framing the Higher Ed Debate rightfully brings out the importance of learning and practicing philosophy! My philosophical training thus far has taught me to be open-minded, to learn about various perspectives before taking on a specific one, and to provide arguments for or against a specific perspective. Thus, grasping the 3 schools of thought (mentioned previously) and how they relate to each other in the context of higher education will undoubtedly help everyone involved to fathom this momentous[1] debate in higher education.

However, what stood out to me most were the expositions of Pragmatism, Utilitarianism, and Virtue Ethics, which I found far too simplified, especially since Neem associates each “language” with only one philosopher: John Dewey for the Pragmatists, Jeremy Bentham for the Utilitarians, and Aristotle for the Virtue Ethicists. Granted, it is fairly uncontroversial that these 3 particular philosophers were highly influential in their corresponding schools of thought. To clarify, I do not claim to be an expert in any of these 3 schools of thought, but the little exposure I have had to all 3 perspectives (in separate courses, of course) is more than enough to recognize and appreciate the danger of associating each perspective with only one philosopher. At this point, I encourage and invite comments from philosophers, or other philosophically-minded individuals, on Neem’s description of Pragmatism, Utilitarianism, or Virtue Ethics (or all 3). Do you think these 3 schools of thought accurately capture how people are thinking about the nature and purpose of higher education in America?


[1]I would also welcome comments about the importance (or lack thereof) of this debate.