There’s a common belief that if reputable researchers design and implement a study, then the results must be true and immune to challenge.

But if that’s the case, then why do conclusions change so often? Why was butter bad, but is now considered by various experts to be better than some alternatives? Why were avocados bad, but now their kind of fat is good? And what about the reversed thinking by some investigators on moderate alcohol or coffee consumption?

I won’t go into the constant changes in prophylactic COVID-19 thinking over the last six months. News sources are filled with reports of controversial conclusions and “expert” opinions, which leave the public scratching their heads and longing for reliable answers.

Just a few years ago, surgery was a mainstream procedure and within the standard of care for stomach ulcers. Stress was thought to be the primary cause. But in 1982, two researchers discovered that in most cases, the H. pylori bacterium was the culprit. Now the vast majority of patients are treated with pharmaceuticals.

There are basically two reasons why scientific conclusions can become obsolete. The first is discovery of additional knowledge that challenges contemporary thinking. The other is erroneous methodology — a situation more common than most people think.

One illustration of the latter is known as “reverse causality.” This happens when researchers get the direction of cause and effect backward: the outcome they are measuring actually drives the factor they assume is the cause.

An example is a study of Peruvian toddlers, which associated breastfeeding with stunted growth. The conclusion seemed logical until doctors realized that undersized infants were more likely to be breastfed than normal-sized infants.

Another instance of reverse causality is a recent paper on cannabis. The researchers concluded that chronic marijuana users who started as teenagers end up with less gray matter in the orbitofrontal cortex and lower IQs. What’s not clear is whether these conditions were preexisting and led to marijuana use in the first place.

Another problem related to methodology is the phenomenon of “false positives.” Quality science needs replication to be valid: if an experiment is repeated under the same conditions, the results should be the same. However, current funding priorities usually do not support replication of previous research.

Stanford professor Dr. John Ioannidis reviewed 45 papers found in major medical journals. Slightly less than half of these were authentically replicated, leaving possibly false-positive conclusions standing as fact, unchallenged.

And what about bias in science? Can this affect methodology as well?

Perception and observation are subject to personal experience and belief systems. There are several types of biases that can appear in scientific research — some accidental and some deliberate. Here are three common examples:

1. Design: This is where research has been designed to support a conclusion already believed to be true. Design bias (closely related to confirmation bias) may lead investigators to ignore evidence that does not validate the expectations of those funding the project. Examples can be seen in studies connected to politically motivated agendas.

2. Sampling: A study might be created to find a correlation between factors A and B in the general population. But the sample may omit certain ethnic, age, cultural or gender groups, invalidating the conclusions.

3. Biased questionnaires: These are sometimes found in surveys that rely on forced choices or compound questions. A question that allows only one of two possibilities, such as “Do you drink coffee early in the morning or in the late afternoon?” omits all other answers, including both morning and afternoon, or neither.

Still another problem with inadequate methodology involves faulty data analysis. Without going into mathematical and statistical details, this is where results can go astray. The causes range from human error to researchers simply not knowing how to evaluate their own data.

So with all its flaws, why do we spend billions on research that too often leads to incorrect interpretations? I suppose it’s because imperfect science is better than none at all.

A final factor to consider: Evolving conclusions and misguided methodologies are two important reasons why skepticism must always be a part of scientific discovery. Without it, the following would have been inarguable certainties: Earth is the only planet in the Milky Way galaxy that can support life, bloodletting is a cure for most diseases, and phrenology is the only true science of the mind.

Steve Hansen is a Lodi writer and former histological research technician at the Armed Forces Institute of Pathology in Washington, D.C.
