
Global Water Security Center


The Science Credibility Questions that Keep Us Up at Night

This opinion article was written by GWSC Environmental Data Scientist Dr. Kaitlin Kimmel-Hass.

I recently gave a talk for the University of Colorado’s Center for Social and Ecological Futures about my most recent publication, which examines exaggeration bias and selective reporting in ecology. During the question-and-answer portion of the talk, several people asked about the things that keep me up at night. In this blog post, I want to explore three of those questions in more depth. 

TL;DR: Empirical evidence of widespread exaggeration bias and selective reporting in ecology

In our paper, Meghan Avolio, Paul Ferraro, and I look for empirical ‘fingerprints’ of practices that can reduce the credibility and transparency of science. Our two main findings are that (1) over 60% of the effect estimates (measures of how large an effect something has) reported in ecology studies are likely twice as large as they should be, and (2) there is evidence of selective reporting of statistically significant results [if you want to know more, watch my talk, where I go through these and other findings in more depth]. 

From these results, we can conclude that our scientific evidence base is biased against small effects and against studies that show no statistically significant effect (it is just as important to know when something doesn’t matter as when it does, right?). Therefore, we are not seeing the whole truth, because some studies are excluded from publication simply because their results are not ‘exciting’ enough. 

These findings have led me to a series of questions that I cannot get out of my mind. 

How should people interpret scientific papers?

Science is likely producing a biased evidence base – even with the peer review process and scientists generally wanting to remain impartial to their results. Does that mean every publication is grossly misleading? I don’t think so, but we should be more careful about how we talk about results and which papers we value. 

I have started to change what types of papers I value: papers with clear and detailed methods sections are now my priority over those with exciting results. When interpreting results, people may fixate on p-values (values from statistical tests that are compared against a threshold to decide whether a result is statistically significant) and look only at the statistical significance of a result. 

However, we can gain more information by looking at the precision of the result. Results with less variability are likely more accurate representations of reality. I am less worried about whether an error estimate overlaps zero (another way people judge whether something is statistically significant) and more worried about how big the error is compared to the estimated effect. 
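To make this concrete, here is a minimal sketch (with purely hypothetical numbers) of comparing an estimated effect to its uncertainty, rather than only checking whether the confidence interval overlaps zero:

```python
def summarize_estimate(effect, std_error):
    """Return a 95% confidence interval and a precision ratio for an estimate."""
    half_width = 1.96 * std_error            # normal-approximation interval
    ci = (effect - half_width, effect + half_width)
    ratio = abs(effect) / std_error          # signal-to-noise: bigger means more precise
    return ci, ratio

# Two hypothetical studies reporting the same effect with different precision:
ci_a, ratio_a = summarize_estimate(effect=0.5, std_error=0.1)   # precise estimate
ci_b, ratio_b = summarize_estimate(effect=0.5, std_error=0.4)   # noisy estimate
```

The first interval excludes zero while the second does not, yet both report the same effect size – the precision ratio tells us how much weight each estimate deserves.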

Of course, scientists and others who consume scientific publications usually synthesize information from several related publications to better understand a topic – and this synthesis may be subject to the same issues as interpreting the results of any single publication. 

How do we interpret meta-analyses?

A meta-analysis is a type of study that takes many studies on a similar topic and quantitatively synthesizes them to estimate an effect more accurately. For example, a researcher may be interested in the impact of species loss on ecosystem function. They then take all the papers from the last 20 years that examine this topic and perform an analysis to calculate a combined effect from all of those studies together. 
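One common way to combine per-study effects is inverse-variance weighting, where more precise studies count for more. This is only a sketch of that general idea – the numbers and function name are hypothetical, not from our paper:

```python
def pooled_effect(effects, std_errors):
    """Fixed-effect meta-analytic mean: weight each study by 1 / variance."""
    weights = [1.0 / se**2 for se in std_errors]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    pooled_se = (1.0 / total) ** 0.5         # standard error of the pooled mean
    return mean, pooled_se

# Three hypothetical studies; the most precise one dominates the average:
mean, se = pooled_effect(effects=[0.2, 0.8, 0.5], std_errors=[0.1, 0.4, 0.2])
```

Note that the weighting rewards precision, not correctness: if the precise studies carry exaggerated effects, the pooled estimate inherits that bias.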

I used to take meta-analyses at face value and did not think much about what methods were used to make these calculations. When I was talking to a colleague about this issue, she said “garbage in, garbage out” – meaning that if only biased results go into a meta-analysis, we will certainly calculate a biased effect. 

To be honest, when this question was posed to me during my talk, I did not have a great answer for how we should interpret these types of results. After some digging, I found this article, which describes methods scientists can use to quantify biases in a meta-analysis and account for them. Hopefully more people will start including these types of methods in meta-analyses so that we can judge the credibility of the results more clearly. 

What do I tell my family about the credibility of science?

I do not want to sow distrust in the scientific community – especially at a time when a large faction of society is already pushing back against scientific evidence. Science is important, and it is continually improving and correcting itself. 

All of the scientists I know value credibility and transparency in their work. None of them are intentionally misleading people or making up data to support their claims. They are conducting studies with rigor, analyzing data in the best ways they know how, and trying to remain impartial to the outcomes of their research. Most scientists are incredibly curious people who are simply trying to understand the world better through their work. 

I would tell my family that I am hopeful about science. There are many dedicated organizations and people pushing forward initiatives to create systematic change in how science is produced and consumed and what is valued. I would also tell my family that not all science is misleading – there are trends that we have observed for decades or more that help us understand the world better. When new evidence suggests that new conclusions need to be drawn, they are. 

Lastly, I would tell my family to remember that scientists are people just like the rest of us. We will make mistakes, and we will drop the ball, but I am confident that these mistakes will be corrected and the ball will be picked up again in time.