Most psychological research doesn't replicate

Unfortunately, the Replication Crisis has revealed that most published psychology research doesn't replicate.
As such, the existing decades of literature in experimental psychology should be consumed with extreme caution and the findings should largely be considered invalid. Most published results that were re-tested failed to replicate, and, without re-running a study ourselves, it is practically impossible to know whether the original was trustworthy. Better to err on the side of skepticism.

Replication problems are not limited to history: dubious research still gets published today.
Given the mix of valid and invalid research that has been and still gets published, skeptical readers cannot take any particular study at face value. Readers need to become scientifically literate in order to evaluate the quality of each particular study. The highest mark of validity is when a study follows Open Science and has a pre-registration. Even then, findings should be understood as preliminary until consistent independent replications are published, sometimes in the form of meta-analyses or systematic reviews.

While the quality of research in experimental psychology is slowly improving, new research can never improve what has already been published. Decades of historically published research will always be dubious at best. Future high-quality studies do not redeem the poor quality of already-published studies, even though future research can be sound if it is done properly and independently replicated.

News and social media are not even dubious...

"Science journalism" tends to oversimplify and sensationalize.
So, as a blanket generalization, never trust "science journalism".
If understanding the results matters to you, read the original paper.

The Replication Crisis is not merely science working as it should.
It isn't that we are using more advanced methods now so we are overturning limited theories from a prior age of limited measurement tools. The problems in the literature stem from deep methodological errors that cannot be retroactively corrected for or untangled. Issues such as poor designs, small sample sizes, low power, p-hacking, HARKing, selective reporting of results, and biased publication practices have resulted in a literature full of dubious research that is beyond saving. There have even been famous cases of outright research fraud in which researchers fabricated data and published findings for studies they never ran. Just as one cannot remove a rotten vegetable once it has been added to a stew, the problems in the psychological research literature cannot be removed or corrected after the fact.
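One of those errors, peeking at the data and stopping as soon as p dips below .05, can be made concrete with a small simulation. The sketch below is illustrative only (the numbers, sample sizes, and z-test are my own assumptions, not from any study discussed here): it runs many experiments where the null hypothesis is true and compares the false positive rate of an honest fixed-sample test against one that checks for significance after every small batch of participants.

```python
import math
import random

def p_value(sample):
    """Two-sided z-test against a true mean of 0, assuming known sd = 1."""
    n = len(sample)
    z = abs(sum(sample) / n) * math.sqrt(n)
    return 1 - math.erf(z / math.sqrt(2))

def run_experiment(peek, rng, n_max=100, n_start=10, step=5):
    """Return True if a 'significant' (p < .05) result is reported."""
    data = [rng.gauss(0, 1) for _ in range(n_start)]
    while True:
        if peek and p_value(data) < 0.05:
            return True  # optional stopping: report as soon as p < .05
        if len(data) >= n_max:
            return p_value(data) < 0.05
        data.extend(rng.gauss(0, 1) for _ in range(step))

# The null hypothesis is true in every simulated experiment, so any
# "significant" result is a false positive.
rng = random.Random(42)
honest = sum(run_experiment(False, rng) for _ in range(2000)) / 2000
rng = random.Random(42)
peeking = sum(run_experiment(True, rng) for _ in range(2000)) / 2000
print(f"honest false-positive rate:  {honest:.3f}")   # close to the nominal .05
print(f"peeking false-positive rate: {peeking:.3f}")  # well above .05
```

The honest procedure stays near its advertised 5% error rate, while repeated peeking roughly triples it, without the researcher ever fabricating a single data point. This is why such problems cannot be detected or corrected from the published result alone.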

We are not doomed, though.
We just have to run new studies.

Info

The Reproducibility Project tried to replicate 100 canonical psychology findings and only 39 were replicated. Even when replication was successful, effect sizes were usually smaller in the replications.

In the following review, the authors say:

Quote

“[...] we conclude that more than 50% of published findings deemed to be statistically significant are likely to be false. We also observed that cognitive neuroscience studies had higher false report probability than psychology studies, due to smaller sample sizes in cognitive neuroscience.”

Szucs, D., & Ioannidis, J. P. A. (2017). Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLOS Biology, 15(3), e2000797. https://doi.org/10.1371/journal.pbio.2000797
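The false report probability the authors refer to follows from Bayes' rule: among significant results, the share that are false depends on the significance threshold, statistical power, and the prior odds that a tested hypothesis is true. A minimal sketch of that arithmetic, with illustrative input values of my own choosing (not figures taken from the paper):

```python
def false_report_probability(alpha, power, prior):
    """P(hypothesis is false | result is significant), via Bayes' rule."""
    false_pos = alpha * (1 - prior)   # significant, but the hypothesis is false
    true_pos = power * prior          # significant, and the hypothesis is true
    return false_pos / (false_pos + true_pos)

# Illustrative assumptions: alpha = .05, power = .20 (typical of
# underpowered literatures), and 1 in 4 tested hypotheses actually true.
frp = false_report_probability(alpha=0.05, power=0.20, prior=0.25)
print(f"{frp:.2f}")  # 0.43
```

Under these assumptions, over 40% of significant findings are false, and lowering either power or the prior pushes the figure past 50%, which is how chronically small samples alone can produce a majority-false literature.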

The past does not define us.
While new studies cannot fix the issues of the past, neither do the issues of the past necessitate repetition. We can do trustworthy research starting now and, with each new paper we publish, overturn invalid old research. So long as we don't constrain ourselves to following up on previous unreliable findings, we won't have a problem. We can start fresh: a new beginning.

The solution to bad science is more and better science

Though shameful for the field, we should not hide the past.
Dishonesty is what got us here in the first place. We should be honest about how bad research has been. We scientists should be held to higher standards: by ourselves, by our funding agencies, and by the public that relies upon us. Given the amount of research funded by taxpayers, I assert that it is our professional responsibility to steward that money by spending it on high-quality replicable research, something our scientific elders failed to do. Scientists have done a great wrong by wasting public resources.

Honesty can result in upset; let this be our penance.
We should not pretend that nothing bad happened, sweeping the replication crisis under the rug or subtly diminishing its impact by implying that science always works this way. The public would be right to scorn the current state of modern psychology, and those of us who inherit this field have the burden of building new trust on new foundations. We can be honest and build trust in the future of the field. We can earn the public's trust by doing good work and by acknowledging the errors of our scientific elders.

What should we do with the extant research?

Treat existing research as exploratory.
It is normal to feel frustrated that our scientific elders wasted so much time and money publishing dubious research. I would have loved to come into a mature field where I could test theories and overturn them as we edge ever closer to lasting truths. That bright vision is not the field we have, though. Research can still be read as a source for ideas, but, as a scientist, I have to start from scratch or by replicating studies. The responsibility falls to us to make psychology a stronger science by doing good research.

Tip

If you want to do research based on any existing finding, it behooves you to replicate that finding first so you don't waste your time.
A replication is a great Bachelor's or Master's level project.

We have to pick our battles.
Funding is often a limiting factor, and one cannot necessarily expect to replicate a large fMRI study as an undergraduate or graduate student. That said, even an undergraduate honours thesis could run replications of studies with less expensive methods and seek to publish the results. Many studies use undergraduate participant pools or volunteer populations, so the financial cost of the study is zero or close to zero. Rather than money, such research costs time: your time. You decide whether that cost is worth it to you. If your own work depends on a finding replicating, attempting a replication first is well worth your time.
