This morning, my wife gave birth to a lovely (and massive) baby boy and so, until I’m back in the saddle, I’m going to post some “oldies but goodies.” Blessings!
“Oh Timothy, guard the deposit entrusted to you. Avoid the irreverent babble and contradictions of what is falsely called ‘knowledge,’ for by professing it some have swerved from the faith.” – I Timothy 6:20
Some things don’t seem to have changed. Despite Paul’s best efforts, it’s undeniable that faith in “knowledge” and scientific research rules the roost these days. Materialistic determinism still seems to be the predominant philosophical worldview of academics, whereby the world is here by chance and is merely the product of matter interacting with other bits of matter. For the laity, “peer-reviewed” is synonymous with indisputable fact and the professional suffix of PhD is tantamount to unquestionable brilliance. But what if the scientific literature, as revealing and fascinating as it can sometimes be, isn’t worthy of such idolatry, but is often contrived, biased, of mediocre quality, and, thus, of little value to anyone? Moreover, what if whatever redeeming qualities that can be found in scientific research only exist because of God’s consistent nature (Mal. 3:6) and the fact that man, every unbelieving academic included, is made in His image?
Ultimately, God makes scientific research possible, which means non-believing researchers are unwittingly borrowing from the Christian worldview when they assume the constancy of certain physical laws and the predictability of the future. According to the late theologian and apologist, Greg Bahnsen,
The espoused or consciously used presuppositions of the non-Christian cannot account for or theoretically ground factuality or knowledge, explain the amenability of unifying logical principles and diversifying factual particulars, or do anything but inevitably destroy reason and human personality and the very possibility of meaningful prediction. On the other hand, the presuppositions of the Christian (taken from the Word of God) salvage the scientific endeavor, account for the fruitful connection of logic and facts, ground human personality, provide a theoretical basis for knowledge, and guard the meaningfulness of prediction and the usefulness of reason.
Of course, this is not how godless men see things. Such men place scientists on a pedestal under the pretense that they are somehow devoid of, or at least less prone to, the crass biases of ordinary folks and are better equipped to assess and interpret reality. But that doesn't seem to be the case.
First of all, a lot of research is not replicable. For example, a former researcher at Amgen Inc., a prestigious biotech firm, and his team attempted to replicate the results of 53 landmark studies in the field of oncology, most of which came from university labs, but they managed to do so for only 6 of the 53 studies. Mind you, these were important, high-impact studies. In another attempt to replicate a large number of studies, all of which claimed to find sex differences, Dr. John Ioannidis and his colleagues were able to replicate the results of only one out of 432 studies. That's a whopping 0.2%. Dr. Ioannidis's comment on the matter was simple: "There is an increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. A new claim about a research finding is more likely to be false than true." Ouch.
To be fair, results may not be replicable for a variety of reasons. Sometimes variables are assumed to be held constant, when, in fact, they are not. Sometimes poor experimental design is to blame, and sometimes it’s something more subtle like bias. Occasionally, even fraud is to blame. One practice that seems to be fairly common is that of torturing the data until they confess. As Robert Lee Hotz, a writer for the Wall Street Journal, put it, “Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets.”
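To make Hotz's point concrete, here is a minimal simulation (my own sketch, not from the article; the variable names and numbers are illustrative). Even when there is no real effect at all, roughly 1 in 20 comparisons will cross the conventional p < 0.05 threshold by chance alone; a researcher who quietly runs enough comparisons is all but guaranteed to find "significance" somewhere.

```python
import random
import statistics

random.seed(42)

def t_statistic(a, b):
    """Welch's t-statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

trials = 2000
n = 50  # samples per group
false_positives = 0
for _ in range(trials):
    # Both groups are drawn from the SAME distribution,
    # so any apparent "effect" is pure noise.
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    # For samples this large, |t| > 1.96 corresponds roughly to p < 0.05.
    if abs(t_statistic(group_a, group_b)) > 1.96:
        false_positives += 1

rate = false_positives / trials
print(f"'Significant' results with no real effect: {rate:.1%}")
```

The false-positive rate hovers around 5%, exactly as the significance threshold promises; the trouble begins when only the "hits" get written up and submitted.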
Secondly, the peer review process, though held in high esteem for its alleged rigor, is prone to bias and serious error. Richard Smith writes in the Journal of the Royal Society of Medicine that “People have a great many fantasies about peer review, and one of the most powerful is that it is a highly objective, reliable, and consistent process.” Smith goes on to describe a notable study by Peters and Ceci that looked at bias among reviewers. According to Smith, Peters and Ceci “took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors’ names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality.” That sounds like bias against “less prestigious sounding” institutions, if you ask me.
Some biases make academics’ sensibilities out to be a lot more crass than the awestruck laity is aware of or would like to believe. Publication bias against negative studies is certainly one example of such base sensibilities. There’s nothing sexy about publishing studies that find interventions to be ineffective, so they usually don’t make it into the journals. What’s sad for those holding out hope that academia will change and improve is that this particular problem was addressed in the literature some 35 years ago, when psychologist Michael Mahoney pondered possible courses of action against what he described as the “apparent prejudice against ‘negative’ or disconfirming results.” Mahoney continued, “I have argued elsewhere that this bias may be one of the most pernicious and counterproductive elements in the social sciences” (Mahoney, 1976). But, hey, maybe academia will set things right in the next 35 years.
And despite the perception of scientists being rational and thorough, researchers are about as vain and cliquish as a bunch of high school girls. If one submits good work in a field that is nascent, underdeveloped, or currently unpopular, it’s likely to be rejected. A paper with noticeable design flaws that fits the current paradigm held by the reviewers is more likely to get a pass than a paper that challenges their paradigm but possesses sound analysis and robust results.
What’s particularly odd is that peer review’s exalted, high-quality status seems to be unsubstantiated and based, ironically, upon faith. In 2002, Drs. Jefferson, Wager, and Davidoff attempted to measure the quality of editorial peer review and found that “Despite its wide acceptance, peer review has been subjected to a variety of criticisms, and, indeed, surprisingly little is known about its effects on the quality and utility of published information, much less about its beneficial or adverse social, psychological, or financial effects.” Simply put, scientists haven’t a clue how deleterious or beneficial the peer review process is or has been, but cling to it anyway.
Finally, there is the problem of just outright mediocrity. Academia is simply brimming with average scholars and is awash in unimportant research. As the number of researchers has grown over the years, the number of refereed/scholarly publications has grown with it at a pace of 3.26% per year. As wonderful as that may sound in theory, it’s seemingly leading to poorer research quality. Bauerlein et al. found that while innovative research continues in some areas, “the amount of redundant, inconsequential, and outright poor research has swelled in recent decades.” Twenty years ago, Science found that only 45 percent of the articles published in the top 4,500 scientific journals were cited within the first five years after publication. In 2009, Péter Jacsó published an article finding that only 40.6 percent of the articles published in the top science and social-science journals were cited between 2002 and 2006. If most research ends up being of little value even to fellow academics, either because of study design flaws or inconsequential findings, then wouldn’t it be fair to say that most research has been a waste of precious time and resources?
A good example of just how shoddy research might actually be comes from just a few years ago. A British study assessed the experimental design, analyses, and the reporting of 271 recent studies using animals as test subjects. The results were embarrassing. Kilkenny et al. found “problems both with the transparency of reporting and the robustness of the statistical analysis of almost 60% of the publications surveyed.” Only 8% of the studies surveyed provided the raw data necessary for others to reproduce the results. Only 59% of the studies stated a hypothesis or the objective of the study and the characteristics of the animals used. Only 70% of the publications that used statistical analyses described their methods with any measure of error or variability. Only 13% of the studies were randomized, and only 14% used blind assessments, indicating that bias probably distorted the results in most of the studies.
What makes this pervasive mediocrity even more frustrating is that retractions are almost nonexistent. According to the aforementioned Wall Street Journal article, “informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers at the U.S. National Library of Medicine published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.” There is no way that such a minuscule number of retractions is reflective of the quality of the research, especially considering the copious flaws discovered in the research of late. If anything, it not only hints at how overworked many reviewers are, but also sheds light on the academic culture at large. Namely, there’s no glory for those who want to double-check the work of others and set the record straight. Furthermore, picking apart the work of one’s colleagues is a show of poor decorum in the academic world. Quite frankly, I can’t help but believe that academia thrives on the laity’s presumption that men in lab coats almost always know better.
Now, I’m not trying to claim that scientific experiments are pointless intellectual exercises. I’m sure some atheistic liberal will read this and simply, albeit fallaciously, dismiss it as another “anti-science” screed from a loopy Christian, but I can assure you that I’m not opposed to science. Rather, I believe that the God of the Bible must be presupposed in order for scientific research to be possible. In my mind, it’s God’s world that scientists scrutinize, not some inert ball of matter that is the product of chance. Lord willing, I hope, at the very least, this post challenges some deeply held convictions about the primacy and supposed excellence of scientific research. Ineptitude, hubris, fallacious reasoning, and ignorance are common to all men, even our most venerated scientists. After all, science is only ever performed by biased and flawed humans, which will necessarily result in biased and flawed research. In other posts, I’ve talked about how laymen, even well-educated laymen, are often guided by gross misconceptions, like the belief that gays constitute 25% of the population when the number is actually 1-3%. I wonder—if people actually knew that the majority of research out there is irreplicable and irreparably burdened by false claims, would academia still hold such a hallowed status in the minds of so many? I think not.