Why is science so often wrong?
There’s a common belief that if reputable researchers design and implement a study, then the results must be true and immune to challenge. But if that’s the case, why do conclusions change so often? Why was butter bad, but now considered better than certain types of margarine? Why were avocados bad, but now their kind of fat is considered good? And what about the reversed thinking of some investigators on moderate coffee or alcohol consumption?
Just a few years ago, surgery was a mainstream procedure and within the standard of care for stomach ulcers. Stress was thought to be the primary cause. But in 1982, two researchers discovered that in most cases, the H. pylori bacterium was the culprit. Now the vast majority of patients are treated with pharmaceuticals.
There are basically two reasons why scientific conclusions can become obsolete. The first is discovery of additional knowledge that challenges contemporary thinking. The other is erroneous methodology — a situation more common than most people think.
One illustration of the latter is known as “reverse causality.” This happens when researchers get the direction of cause and effect backward: the supposed outcome is actually driving the supposed cause, rather than the other way around.
An example is a study of Peruvian toddlers that associated breastfeeding with stunted growth. The conclusion seemed logical until doctors realized that undersized children were more likely to be breastfed than normal-sized children. The stunting influenced the feeding, not the reverse.
A more recent example of possible reverse causality is a study published by researchers at the University of Texas’ Center for BrainHealth and the Mind Research Network in Albuquerque. They concluded that chronic marijuana users who started as teenagers have less gray matter in the orbital frontal cortex and lower IQs. What’s not clear is whether these conditions led to marijuana use in the first place.
Another problem related to methodology is the phenomenon of “false positives.” Quality science requires replication to be valid: if an experiment or observational study is repeated under the same conditions, the results should be the same. However, the present funding system usually does not support replication of previous research.
This is, indeed, unfortunate. Stanford professor Dr. John Ioannidis reviewed 45 papers found in major medical journals. Slightly less than half of these were successfully replicated, leaving possibly false-positive conclusions to stand in many minds as factual and unchallenged.
And what about bias in science? Can this affect methodology as well?
Whenever humans are involved, bias is always a factor and is difficult to control. All perception and observation are based on experience and personal belief systems. There are several types of bias that can appear in scientific research — some accidental and some deliberate. Here are three common examples:
• Design: This occurs when research has been designed to support a conclusion already believed to be true. Design bias (closely related to confirmation bias) may lead researchers to ignore evidence that does not validate the expectations of those funding the project. Examples can be seen in studies connected to politically motivated agendas.
• Sampling: A study might be created to find a correlation between factors A and B in the general population. But the sample may omit certain ethnic, age, cultural or gender groups — thus invalidating the conclusions.
• Biased questionnaires: These are sometimes found in surveys that use forced choices or compound questions. For example, an inquiry that allows only one of two answers, such as “Do you drink coffee early in the morning or in the late afternoon?” omits all other possibilities, including both times of day and neither.
Still another problem with inadequate methodology involves faulty data analysis. Without going into mathematical and statistical detail, this is where results can go astray. Causes range from human error to researchers simply not knowing how to properly evaluate their own data. Incomplete evidence can also produce distorted conclusions.
So with all its flaws and continuing changes, why do we spend billions on research that too often leads to incorrect interpretations? I think it is safe to assume that imperfect science is better than none at all.
A final factor to consider: Evolving conclusions and misguided methodologies are two important reasons why skepticism must always be a part of scientific discovery. Without it, we would still count an undisputed nine planets in our solar system, surgeons would still use bloodletting to cure most diseases, and phrenology would still be called “the only true science of mind.”
Steve Hansen is a Lodi writer.