Napoleon had finally been defeated, and once more Britain sat relatively unchallenged on the imperial throne. Between Napoleon’s final exile in 1815 and the onset of WW1, Britain subsumed over 10 million square miles of territory and 400 million people into its already distended empire. New technologies, notably the steamship and the telegraph, were chiefly responsible for this feat, and epitomized the prevailing attitudes of the century. Cue Red Brick Universities. Non-collegiate institutions that favored practical skills over traditional academia sprang up in the major industrial cities of England, fueling tremendous advances in civic science and engineering. And, you guessed it, they were constructed from burgundy building blocks. The label “Red Brick University” (RBU) began as a derogatory one, but the name stuck and was later adopted by their proponents, much like the Big Bang or the Suffragettes.
Straight from the off, RBUs were reviled by the likes of Oxford and Cambridge. In contrast to these long-standing, prestigious institutions, which required subscription to the stringent Thirty-nine Articles of the Church of England for admittance, not to mention a Baron or Marquis stapled to the front of your name, RBUs admitted people of all backgrounds; how dare they! Luckily, the snobs at Oxbridge were ignored (the best course of action FYI, it really annoys them), and RBUs now stand as a pillar of higher education in the British Isles. At present, eight of the nine recognized RBUs are members of the Russell Group, which receives over two thirds of all research grant money in the UK. Among the alumni of RBUs are Hans Krebs (Sheffield), Peter Medawar (Birmingham), and Ernest Rutherford (Manchester), all titans in their respective fields.
RBUs have overcome the elitism of other institutions, have proven the distinction between pure and applied research to be artificial, and have helped Britain cement its position as the dominant global power. This list of achievements is even more impressive given that these institutions emerged simply to provide a practical education to the working classes. From humble beginnings, as they say.
Ecology is one of the most quantitative fields in the biological sciences. Despite this, the vast majority of ecologists exhibit an abject fear of statistics of any kind; they are in it for the animals, not the numbers. Nevertheless, as Galileo knew, “The laws of Nature are written in the language of mathematics”. One must remember that ecologists are trying to understand the most complicated process in the known universe (i.e. life), and so the mathematics can be pretty hard-core. As modelling techniques and analytical tools become ever more advanced, we run the risk of dividing the research community into those with and those without statistical backgrounds. This disconnect is most noticeable, and most worrying, in the peer review process. Far too many “statistically questionable” papers make it to print seemingly unchallenged. I believe this happens for two main reasons:
- Reviewers are embarrassed to admit to their editors that they do not understand the methodologies of a paper and are therefore unqualified to adjudicate on its merit.
- Reviewers see a reputable name in the list of co-authors and proceed to skim over the analysis, assuming that the methods are sound.
The first is simply unacceptable. Given that the most common utterance of a scientist is “I don’t know”, no reviewer should be ashamed to admit ignorance and ask for something to be explained in clearer terms. Indeed, this would be beneficial to both the reviewer and the paper. It is extremely likely that if the reviewer cannot fathom the intricacies of the statistical analysis, the bulk of the readership is also going to have a hard time.
On the second point, reputation should never precede content; if the methods are flawed, the methods are flawed. As scientists, we are purely concerned with how nature really is, not how some smart person thinks nature is. The reputation of a researcher should have no influence on how their future work is perceived. In fact, it should be scrutinized all the more closely. After all, “Give a man a reputation as an early riser and he can sleep ’til noon.”
To alleviate these two problems, I propose two recommendations:
- Each ecological journal should appoint a resident statistician to review submissions; someone who will not be swayed by co-authors or conclusions, but will purely evaluate the robustness and justification of the methods.
- Introduce quantitative skills much earlier in the training of ecologists. Statistics needs to be taught to undergraduates from day one, to convey the importance of numbers in nature.
With these, hopefully we can raise the standards of publishing and instill a greater love of numbers in future generations. What a wonderful world.