Does science require unbiased truth seeking?
Reflections on Andrew Gelman's complaints about biased researchers and junk science.
Update, April 20th: You can read Andrew Gelman’s response to this here.
It's now a common complaint that many empirical sciences are an epistemic nightmare. Psychology, medicine, nutrition science, cancer research, economics, and other disciplines have dealt with embarrassing replication crises over the past decade. All have massive, conflicting literatures, suggesting that much of the published research is wrong.
One reason for the unreliability of this research is the clumsy application of statistical tests. Journal reviewers require that empirical claims come with evidence, and this evidence is quantified using statistics. This, of course, incentivizes researchers to engage in shoddy practices (consciously or not) to pass the required tests and get their papers published.
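To see how little effort this takes, consider a toy simulation (a minimal sketch of my own, not anything from Gelman): suppose a researcher measures twenty unrelated outcomes where nothing is actually going on, and reports whichever comparison clears p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(n_outcomes=20, n_per_group=30):
    # Simulate one study in which no outcome has any true effect.
    # Return True if at least one test clears p < 0.05, i.e. the study
    # yields a "publishable" result by cherry-picking.
    for _ in range(n_outcomes):
        treatment = rng.normal(size=n_per_group)  # both groups drawn from
        control = rng.normal(size=n_per_group)    # the same null distribution
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:
            return True
    return False

n_studies = 1000
hits = sum(run_study() for _ in range(n_studies))
print(f"{hits / n_studies:.0%} of null studies 'found an effect'")
```

With twenty chances, well over half of these null studies produce at least one "significant" result, versus the nominal 5% for a single pre-specified test. No fraud required; just a little flexibility.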
Statistician Andrew Gelman thinks many researchers are doing this consciously:
I think that many or most researchers think of statistical tests as a kind of annoying paperwork, a set of forms they need to fill out in order to get their work published. That’s an impression I’ve had of researchers for a long time: they feel they already know the truth–they conducted the damn experiment already!–so then they find whatever p-values are necessary to satisfy the reviewers.
This has been a grievance for years, and statisticians have been standing on the sidelines watching in horror as their favorite statistical tools get hijacked. Gelman goes on to argue that researchers aren't just manipulating the statistics to get what they want; they are manipulating their entire research agenda:
My new thought (or, new to me, at least) is that many researchers also think of research itself as a kind of annoying paperwork. They already know what they want to say about disparities, or climate change, or evolutionary psychology, or the 2020 election, and, yeah, they’ll do an experiment or a data analysis or whatever, but that’s just a means to an existing end. They’re not doing science. My thought in the above-linked post is that there maybe is so much of this hackwork that it overwhelms the system.
In other words, we're swimming in an ocean of junk science because scientists are biased. Researchers come to the table with preconceived notions of how the world does and ought to work. They set out to prove their ideas, not to critically assess the evidence and uncover the truth.
This raises the question: Does science only work if scientists are unbiased? Does each scientist need to be engaged in a fearless pursuit of objectivity for the system to be reliable? Must we count on every individual scientist being a beacon of light, always ready to sacrifice their personal beliefs at the altar of truth?
It would of course be nice if scientists were such saintly figures. But scientists are humans, and given everything we know about human nature, this is an unreasonable expectation. We now have extensive evidence that humans are biased and stubborn, not easily swayed by evidence that contradicts our preexisting beliefs, especially when accepting that evidence would put us at odds with our social group.
But surely we can't pin our hopes for scientific progress on the idea of unbiased scientists? No: the whole point of scientific institutions is to create a system whereby progress is made despite the flaws of the individual researcher.
In fact, the bias of individual scientists is often a good thing. The system works better when scientists are adamant, even dogmatic, about their hypotheses. It means that the scientific community as a whole is exposed to the strongest version of each idea. (You might call this the paradox of dogmatism.) If people had given up on their ideas at the first sign of trouble, many correct ideas would have been lost. For example:
Barry Marshall and Robin Warren proposed that stomach ulcers were caused by bacteria rather than stress or spicy food. Doctors were in disbelief, but the pair persisted. Marshall famously drank a broth containing H. pylori, the bacterium in question, to prove the point, and the work eventually earned them a Nobel Prize. As The Lancet documents:
Warren recalls: “Every time I spoke to a clinician they would say, ‘Robin, if these bacteria are causing it as you say, why hasn't it been described before?'.” Orthodox medical teaching at the time was that bacteria did not grow in a normal stomach. However, as Warren wrote in the 2002 book Helicobacter Pioneers, “I preferred to believe my eyes, not the medical textbooks or the medical fraternity.” Marshall also believed what he saw through Warren's microscope. “The first time I sat down with him he didn't really have any trouble convincing me there were these organisms in the stomach”, he told The Lancet. “As far as I was concerned he was right, and I thought this was a unique observation.”

Lynn Margulis introduced symbiogenesis, which is now the leading account of the origin of eukaryotic cells. After facing near-universal opposition for many years, her ideas finally gained traction. In 1995, Richard Dawkins wrote about her tenacity:
I greatly admire Lynn Margulis's sheer courage and stamina in sticking by the endosymbiosis theory, and carrying it through from being an unorthodoxy to an orthodoxy. ... This is one of the great achievements of twentieth-century evolutionary biology, and I greatly admire her for it.
Ignaz Semmelweis observed that when doctors washed their hands before delivering babies, maternal mortality dropped dramatically. His findings contradicted miasma theory (the prevailing belief that "bad air" caused disease). He was committed to an asylum because of his ideas, but was ultimately vindicated by the germ theory of disease.
There are many others: Alfred Wegener's theory of continental drift, Barbara McClintock's "jumping genes," Stanley Prusiner's prions, and, of course, Galileo.

Even if it were always a bug and not a feature, the dogged commitment of scientists to their own theories is not going to stop. Our goal should be to design scientific institutions, and cultivate cultural norms, which constrain and channel the fallibility of individual scientists. That is, instead of asking the question "how do we make scientists less biased?" we should ask the question "what kinds of scientific practices, both formal and informal, result in scientific progress despite dogmatic researchers?"
There is, undoubtedly, a lot of progress to make on this front. Practices like peer review, preregistration, and meta-analysis are good but could be improved. Other things to try: overlay journals, incentivizing replications, incentivizing the publication of null results, harshly penalizing scientific fraud, and using more sophisticated statistical tools that make p-hacking harder. We could require higher standards for publication, demand more adversarial collaborations, and incentivize the sharing of all raw data and code. There is a lot of innovation to be had; considering how long Homo sapiens has been around, we haven't been doing science for very long.
I agree with Gelman that it's a mess out there—many empirical sciences are in crisis, and much “research” of the past 20 years should be heavily discounted. The amount of junk science out there is horrifying, and the system feels broken in all sorts of ways. But our solution can't be to yell at individual scientists until they're robotic truth seekers. It has to be to design the system in a way that incrementally approaches truth despite the inevitable bias of the researchers.