Data-based decisions in Higher Education

The question posed in this article is:

How is it possible to simultaneously base decisions on data and innovate?

Succinctly, Reed believes that data-based decisions and innovation are opposed in theory[1] but not in practice. Behind this question lie a strong assumption and a habit that Reed apparently had to “unlearn”[2]: that we must look for the unassailable position, that things must be certain or determinate. As Reed puts it, “Grad school teaches … that if you meet a theory on the road, you try to kill it. The idea is to spot flawed arguments, so you can build strong ones.” Of course, I do not dispute that a prominent goal in various research areas (including Administration in Higher Education) is to build strong arguments, yet I have never encountered a (meaningful) argument that is utterly irrefutable. I am not sure what exactly Reed studied in graduate school, but I know of no discipline that insists on arguments being absolutely irrefutable or certain.

On the other hand, it is fairly uncontroversial that some disciplines make use of statistical concepts (e.g., variability, uncertainty, sampling, modeling) in one way or another! I have seen a variety of mathematical and statistical models applied to research spanning from biology to engineering to the social sciences. Administration (i.e., Matt Reed’s current area of focus) is no different[3] in that it, too, uses statistical concepts. After all, it is widely accepted that the common role of statistical methods across academic disciplines is to provide a framework for learning accurate information about the world from limited data. The pervasiveness with which researchers accept uncertainty in their models speaks both to the extensive use of statistical methods and to the fact that scientific reasoning can never be entirely certain.
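To make the “limited data” point concrete, here is a minimal sketch, in Python, of the kind of calculation such a framework supports: estimating an institution-wide quantity (say, a course completion rate) from a small sample, together with an interval that makes the uncertainty explicit. The scenario and every number below are invented for illustration.

```python
import math

# Hypothetical scenario: estimate an institution-wide course completion
# rate from a limited sample of n = 200 student records.
n = 200            # sample size (invented)
completions = 154  # completions observed in the sample (invented)

p_hat = completions / n                  # point estimate of the rate
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the estimate
z = 1.96                                 # approximate 95% normal critical value

lower, upper = p_hat - z * se, p_hat + z * se
print(f"estimated completion rate: {p_hat:.3f}")
print(f"approximate 95% confidence interval: ({lower:.3f}, {upper:.3f})")
```

The interval, not the point estimate alone, is what supports a decision under uncertainty: the sample is informative about the whole institution without pretending to certainty.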

Another thing that stood out to me is how early the author acknowledges that there is a philosophical issue around causality, which I found refreshing. What startled me just as much as this early acknowledgement, though, is how little attention and detail Reed devotes to explicating the philosophical issue(s) pertinent to causality. (In fact, causal inference is a topic heavily discussed in both statistics and the philosophy of science.) Reed even admits that “the problem of inference and causality is real” (emphasis mine) in administration but says very little about what it is and how it affects research in administration. He also states, “When something new comes along, it’s easy to object that the idea is ‘unproven.’” I find this statement too sweeping, because there could well be evidence for a new idea, especially with the large amount of data available on the Internet. Hence, the question is not necessarily whether evidence exists; rather, it is how one uses the available evidence, as well as the nontrivial[4] process of evaluating how strong (or weak) that evidence is for the research question at hand.
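As one concrete, deliberately simplified illustration of what “evaluating how strong the evidence is” can mean, the sketch below runs a standard one-sided test of whether a new program’s completion rate exceeds a historical benchmark; error-statistical methods of the kind discussed in Mayo and Cox (2010) build on and refine this sort of calculation. All numbers are invented for illustration.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical question: did a new advising program raise the course
# completion rate above a historical benchmark of 70%? (Numbers invented.)
p0 = 0.70      # benchmark rate under the null hypothesis
n = 200        # sample size
p_hat = 0.77   # completion rate observed under the new program

se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null
z_stat = (p_hat - p0) / se          # standardized test statistic
p_value = 1.0 - normal_cdf(z_stat)  # one-sided p-value

print(f"z = {z_stat:.2f}, one-sided p-value = {p_value:.4f}")
```

A small p-value here is only one crude measure of evidential strength, which is precisely the point: having data on hand is not the same as having evaluated the evidence.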

References

Mayo, Deborah G., and David R. Cox. 2010. “Frequentist Statistics as a Theory of Inductive Inference.” In Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, edited by Deborah G. Mayo and Aris Spanos, 247–274. Cambridge: Cambridge University Press.


[1] I do not find the conflict obvious: how exactly are data-based decisions and innovation opposed? Matt Reed unfortunately does not go into more detail and takes the conflict for granted. However, I think that insight from an experiment (i.e., a data-based decision) can be a starting point for innovation.

[2] The fact that Reed had to unlearn such a habit surprised me greatly, as reasoning in a probabilistic manner is second nature to me, especially given my extensive training in probability theory and statistics.

[3] Some disciplines rely more heavily on statistical methods, while others make less direct use of statistical concepts; the reliance on statistical methods may not be equal across fields, but it is present in a wide variety of research areas!

[4] My current research advocates a methodology (described in Mayo and Cox 2010) that provides a systematic way to evaluate how strong the evidence is for a particular (statistical) hypothesis.
