When I was in high school and college, I thought science classes were pretty boring. Why did I have to learn this stuff? Who cared about the phases of cell mitosis, the difference between ionic and covalent bonds, and Newton’s laws of motion? What practical value would science ever hold for me?
I wasn’t alone in my opinion that science was boring and had no relevance to my daily life. The US is currently facing a crisis of far too few students entering STEM fields, and I think basic science education is a major contributing factor.
I’m not saying I had bad science teachers. I had great science teachers! The problem isn’t with teachers themselves—it’s with how our education system demands that science be taught.
In most of the western world, science is presented as an endless litany of facts. Students memorize enough information to pass their tests, but too few students leave the classroom with a true understanding of what science actually is, and why it’s so important.
And because we see science as a meaningless encyclopedia of boring facts, a wide variety of misconceptions about science creeps into our worldviews as we age and form our identities. This makes us susceptible to all sorts of agenda-driven groups who manipulate our opinions and perceptions of science, its practitioners, and its findings.
It’s hard for me to describe how deeply this saddens me, because today, I love science, and I value the clear arc of progress that science has gifted to our world.
Not only is science awesomely useful, it’s also unfathomably cool. If you understand what it is…
The 16th and 17th centuries were an exciting time to be alive. It was the time when modern science began, mostly in Europe. The Protestant Reformation was well underway, fueled by the invention of the printing press and the rapid spread of information that came with it.
A new continent had recently been discovered across the Atlantic. For the first time in a long time, people openly criticized the monarchies that governed them. It was a time when centuries-old traditions could be safely questioned, and the world seemed a strange and intriguing place that we knew far less about than we once thought.
The money ordinary Europeans saved on food, they spent on education, specifically on learning to read. Then, with so many new readers, the popularity of the printing press exploded. (Who would have thought that peasants actually wanted to learn things, too?)
For centuries, education had been the responsibility of the Church, so most of the smartest people in Europe at the time (and most people in Europe, period) were Christians. They, too, were eager for the theological and academic discourse that the printing press let them have, not only with their peers, but with other smart people in other academic fields.
People started meeting each other in coffeehouses to talk about each other’s work, the news of the day, and the fresh ideas that were spreading like wildfire across Europe. To be allowed into a coffeehouse, all you had to do was buy a cup of coffee. This led to peasants, newly literate and hungry for knowledge, starting to visit the coffeehouses, too.
They would eavesdrop on the conversations being had by the academic elites, and before long they’d learned enough to participate in those conversations themselves. Academics were happy and excited to share knowledge of the advances they were making.
Coffeehouses became a great equalizer, exposing both rich and poor to the new ideas of the day.
Eventually this new dialogue fed a growing disillusionment with the way society operated. The wealthy academics asked themselves why the advances they were making now hadn’t been discovered centuries before. The peasants asked themselves why they’d never before wanted to (or been allowed to) participate in their countries’ economies, or in their intellectual spheres.
The answers to these questions had a political element, but the thinkers of the time were also concerned with their philosophical roots. And a consensus grew that the core problem was simple: up until that point in history, humans had only unreliable ways of knowing what was true about the world.
For most of human history, they surmised, most of what people knew as “true” was determined by four ways of knowing:
1) Tradition: “We’ve always believed this, so it must be true.”
2) Authority: “The king says it’s true, so it must be true.”
3) Emotion: “I feel like it’s true, so it must be true.”
4) Popularity: “Everyone else thinks it’s true, so it must be true.”
Pretty familiar, huh? All four of these fallacies are still hugely influential today. But just read The Emperor’s New Clothes for an illustration of why these are terrible ways to discern what’s true.
No single person invented science (although this guy does come pretty close). But as the thinkers of the 17th century came together to discuss everything they knew, they asked themselves if there was a better way to know what was true about the world. The academics of the day, nearly all of whom were men of God, thought, “We can do better.”
“What if we start from a place of humility?” they thought. “A place of not knowing anything, assuming that everything we think we know may not actually be true? If we start from that point, how can we then learn what the truth is?”
They decided they could at least trust the five senses. Since they could see, hear, touch, smell, and taste, they could make accurate observations of the world around them. Most excitingly, they decided to base what they knew was true on those observations. For them, every single truth claim was required to have evidence to support it. That way they could be confident that it was actually true!
In order to collect this evidence, they designed experiments. They assumed as little as possible, and then tested and measured to see what the truth turned out to be. They didn’t start with answers and then determine questions that would lead to those answers. That would have been bias. Instead, they started with questions, and then they got answers.
This way of thinking, called the scientific method, was nothing short of a paradigm shift in the history of civilization.
It led to better farming techniques that could feed millions more people, it led to the Industrial Revolution, to telescopes and cameras, to cars, lasers, airplanes, x-ray machines, computers, satellites, pacemakers, solar power, and cures for countless diseases.
We would have none of these, or any of the advantages of living in the modern world, without the scientific method.
But now, as then, there are an awful lot of powerful organizations competing for our opinions, our money, our votes, and our hearts and minds. Occasionally scientific truth gets in the way of these organizations.
When that happens, they’ll often launch all-out attacks on the very concept of science, trying to convince us that it isn’t to be trusted. They’ll frame the issue as a “team” conflict, warning us that scientists are on the “other side” of the debate in question.
I won’t name specific offenders here because attacks on science come from such a broad range of special interests. These organizations are both left wing and right wing, for profit and not for profit, religious and nonreligious.
What they have in common is that they see our allegiance to them as being more important than us learning what’s true about the world.
I’d like to offer a defense of science against some of the most common anti-science arguments. We owe so much to the ideas of those initial thinkers who started the spark that led to the Scientific Revolution. We owe it to ourselves to kindle the flames of intellectual humility, of openness to new evidence. We owe it to that hunger for knowledge that started centuries ago but is now under assault as much as it’s ever been.
(The list of anti-science arguments below was partially stolen from this article, which does a great job of summing up many of the most common attacks on science but unfortunately doesn’t try very hard to debunk them, which is what I’ll attempt to do here—and in the process, I’ll explore how science works.)
Anti-science Argument #1: Scientists have been wrong in the past and therefore should not be trusted now.
Science has several built-in mechanisms to catch and correct its mistakes. Results have to be replicable, meaning other scientists must be able to repeat the study with the same results; falsifiable, meaning that if they’re false, it’s possible to prove them false; and predictive, meaning they can predict future conditions in the field in question.
Every study goes through a lengthy yet imperfect process of peer review, where other experts pick the paper apart with criticism.
But despite all this, it’s true that science often gets things wrong. Sometimes scientists use flawed experiments, or a limitation in their technology or methodology creates a false result. The history of science reads like one giant “oops” after another.
The world’s most intelligent academics used to believe that Earth was the center of the universe, that diseases were caused by “foul vapors,” and that a mysterious element called phlogiston was what created fire. With such faulty ideas widely believed by yesterday’s scientists, you can bet that many of today’s scientists will be proven wrong in the future as well!
But do you know what it was that proved science wrong in all these cases? It wasn’t politics, religion, or personal intuition. It was more science.
The very fact that we now know science used to be wrong about some things shows not weakness, but science’s greatest strength: that it adapts itself to new evidence.
Science is not a worldview, clinging stubbornly to its beliefs no matter what. Science changes. Science gets refined. What we know tomorrow will be more true than what we know today.
And when science gets something wrong, it admits it (sometimes grudgingly) and moves on in a new direction. This is a nearly unique feature of science: I can’t think of another societal institution that’s so willing to change course and correct its own mistakes.
Of course, science does have entrenched biases, and it’s inherently conservative and overly cautious when considering any major change to itself. But the consensus does change. The consensus isn’t dogma, and many of the people we remember as history’s greatest scientists were people who challenged the consensus with new tests and evidence and successfully changed it.
Even the most established of insiders will rock the boat if they discover something cool. Egos, funding conflicts, and biases do motivate some individual scientists, but collectively, the thing that the scientific community cares the most about is simply discovering what’s true.
Anti-science Argument #2: Scientists are biased by personal prejudices, financial incentives, and the desire for personal or professional success, and therefore their conclusions are suspect.
There certainly are a few disreputable scientists who fake their evidence for profit and prestige. But scientists can get even more profit and prestige by proving those people wrong. If you think that a prominent hypothesis is false, you have incentive to prove it false, and other scientists will celebrate you for doing so!
Scientists know this, so they try to remain aware of their personal prejudices and hunger for success in order to keep those things from seeping into their work. Because if they do seep in, their reputations are at stake.
Scientific systems are also continually audited by outsiders: non-science academics (especially philosophers), the press, governments and businesses without a direct stake in the research, and even, if imperfectly, the general public. So if a scientist has motivations other than finding the truth, plenty of barriers stand in their way.
Most worrying are financial incentives, which are a major problem. There’s a dark history of companies paying scientists to doctor evidence to support conclusions that favor the company, which then turns around and launches publicity campaigns to cloud the issue in the public’s mind.
This is why the global consensus is so important. With governments and companies constantly competing with each other worldwide, they have every reason to reject science that’s created to give only one of them a competitive edge and every reason not to collude to hide the truth.
The result is that the consensus is always more reliable than any individual study. Individual studies can be bought by special interests; the consensus can’t be bought.
We all have biases that cloud our thinking, but this is the exact reason why science was created in the first place. Science is a method for limiting bias and discovering the truth.
Anti-science Argument #3: Scientific results are not certain, and therefore they can be discounted.
The idea that lack of certainty equates to weakness is a common misconception about science.
The first practitioners of science recognized that no matter how true something is, it can always be more true. For example, it’s true that the Earth is round. It’s more true to say that the Earth is an oblate spheroid.
It’s even more true to say that the Earth is generally round but so unevenly shaped both inside and out that the surface presents different levels of gravity depending on where you stand on it.
And if we keep going, we can get even more true than that. To infinity! Science actively works to either refine or falsify its theories.
Now imagine scientists stopped way back at the Earth being round and said they were certain that this was the final, ultimate truth on the matter. Case closed. We never would have learned further interesting and useful facts that were even more true. (And we might never have been able to get GPS to work.)
Even worse, imagine ancient protoscientists stopping all the way back at the Earth being flat, determining that this was the final, ultimate truth on the matter. Case closed? Certainty would have been a serious weakness in this case.
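The progression from “round” to “oblate spheroid” is easy to make concrete. Here’s a quick sketch, using the widely published WGS 84 reference ellipsoid constants, of just how oblate the spheroid actually is:

```python
# WGS 84 reference ellipsoid: a standard, "more true" model of Earth's shape
EQUATORIAL_RADIUS_KM = 6378.137          # semi-major axis
FLATTENING = 1 / 298.257223563           # how "squashed" the ellipsoid is

# The polar radius is shorter than the equatorial radius by the flattening factor
polar_radius_km = EQUATORIAL_RADIUS_KM * (1 - FLATTENING)
bulge_km = EQUATORIAL_RADIUS_KM - polar_radius_km

print(f"Polar radius:     {polar_radius_km:.3f} km")
print(f"Equatorial bulge: {bulge_km:.1f} km")  # roughly 21 km
```

That roughly 21-kilometer equatorial bulge is tiny compared to Earth’s size, which is why “round” remains a decent approximation, but it’s large enough that systems like GPS have to account for it.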
If we’re always certain that we’re right, it’s very easy for us to not realize when we’re wrong. This is why science avoids absolute certainty. Holding literally everything as being only tentatively true allows science to turn on a dime when it encounters new evidence.
The answers science gives us aren’t always perfect or all-encompassing, but because we can test whether or not they’re true, those ideas get either falsified or refined over time. Science is self-correcting largely because science itself isn’t certain.
There are degrees of certainty, though. Scientists report a “confidence level” with many of their results, indicating how confident they are that the result is real rather than a fluke, with statistics to back that confidence up.
Any major result, such as the discovery of the Higgs Boson, requires an extremely high confidence level. Still, nothing in science is ever 100% certain. And that’s a strength.
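To make “confidence level” concrete: the Higgs boson discovery had to meet particle physics’ “five sigma” threshold. Assuming random noise follows a normal distribution, a short sketch of what different sigma levels mean as probabilities:

```python
import math

def one_sided_p_value(sigma: float) -> float:
    """Chance that random noise alone produces a result at least
    `sigma` standard deviations above the mean (normal distribution)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Higher sigma = lower chance the result is a statistical fluke
for s in (1, 3, 5):
    print(f"{s} sigma: p = {one_sided_p_value(s):.2e}")
# At 5 sigma, the odds of a fluke are about 1 in 3.5 million
```

Even at five sigma, that chance isn’t zero; science never claims 100% certainty, only overwhelming confidence.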
Anti-science Argument #4: Science is just another belief system, or way of knowing, that should not be given primacy over other ways, such as intuitive knowledge or personal experience.
Science has numerous safeguards against faulty logic and personal bias that other ways of knowing don’t have.
Neither intuitive knowledge nor personal experience is as predictive as science. Neither adapts well to new evidence, and neither is particularly falsifiable.
Deciding what’s true through loyalty to your group’s ideas is just as riddled with pitfalls. Trusting in these ways of knowing, there’s no way to know if you’re wrong—only ways to feel very strongly that you’re right.
Of course, culture, art, and emotions can be good ways to know about ourselves and about other people… if we remember their limits. Truth found through these methods is not necessarily replicable, objective, falsifiable, or even verifiable. And their conclusions apply to humans rather than to the natural world. Saying your feelings about your relationship with your partner are true is quite different from saying your feelings about quantum mechanics lead to accurate conclusions because you just feel them to be so. Even personal feelings can be heavily flawed, because the human brain is subject to numerous cognitive biases that distort our perceptions. This is why eyewitness recollection, for example, is considered the least reliable evidence in a courtroom. The supposed divide between emotion and reason is a false dichotomy, though; emotions hold an important place in public discourse, and they can hold valuable information about the person who feels them. But they can really warp reality, too.
Some critics of science chastise it for claiming that it leads to objective knowledge when in fact it is subject to the cultural biases of its practitioners. In many cases, this criticism is accurate, and scientists themselves readily admit they have more work to do to get bias out of science. But what better tool do we have for fighting bad science (and oppressive public policies that can result from it), than good science?
Science’s basic approach to knowledge makes it unique. Other ways of knowing usually start with, “X is true, so how does the evidence point to X?” Science, on the other hand, asks, “What is true, and how can we learn it?” If this question leads scientists to Y or Z, science demands that they accept it just as readily as they’d accept X. Among all the safeguards against bias described in this blog post, this one is key.
It’s important to realize that science is not a belief system. It’s a way of determining what’s true about the world, and it’s the body of knowledge that’s been drawn from that method. Scientists come from a diverse array of belief systems, and they all trust in science.
That trust is not blind trust. At its most fundamental level, it’s based in the five senses. You can see, hear, touch, taste, and smell science. At any moment you can go look up the results of any given study, see them with your own eyes, and if they’re solid results, replicate them yourself.
Science is true not because scientists feel deeply that it’s true, but because it’s been rigorously and quantifiably tested, verified, and quadruple-checked if the finding is important enough. Science is true because it can be used to accurately predict future events. This is how all science works. It’s deeply transparent.
But how can laypeople like us be expected to put science, which we encounter only if we look for it, ahead of the intuitive knowledge and personal experience that we encounter constantly, every moment of every day? After all, we can’t possibly check the work of every single scientist to verify its accuracy. We don’t have that kind of time.
The answer lies in realizing that since we can’t all know everything, we need to defer to those who have expertise in specific fields. Appeal to expertise is not the same as appeal to authority, and when we understand how and why science works, and know that even the experts are subject to the rules of science, we can rest much more comfortably in our assumptions that the experts know what they’re talking about.
If we can’t trust people who’ve dedicated their entire lives to studying a particular subject through the transparent and humble lens of science, then who else can we reliably trust about that subject? Certainly not political pundits. Certainly not a friend who feels very strongly about a particular scientific issue. Certainly not our own intuition, tempting as that may be.
This isn’t to say that experts should not be criticized, or that we should blindly take them at their word. And experts need to get a lot better at communicating science in ways that let laypeople engage with it. But there are many important factors that make someone an expert in the first place, and we shouldn’t ignore those factors when deciding who to trust.
Also, it should go without saying that intuitive knowledge and personal experience don’t give us longer lifespans, higher qualities of life, thousands of diseases cured, and all the modern technology that benefits our lives. Science is not just another way of knowing; it’s the most demonstrably reliable way of knowing that we’ve ever discovered.
Anti-science Argument #5: Some scientists disagree with the consensus view so there is no way to assess who is right.
There are plenty of ways to assess who is right, such as asking:
-What are the motives of the organization that is paying the scientist(s) to do the research?
-Does the scientist who disagrees with the consensus have qualifications in the field under debate? Most reputable scientists won’t debate publicly in a field outside their own.
-Which side has more evidence?
-Which side’s experiments are set up the most soundly?
-Which side’s claims explain the evidence and the experimental results the best?
-Which side’s experimental evidence can be reproduced?
-Which side can make accurate, testable predictions?
-Which side do I want to be true? Once I’ve identified that side, I should give a little more leeway to the other side to account for my own bias.
-Do the sides actually exist in the first place, or does one side represent a special-interest group while the other side represents the actual scientific consensus?
There are also plenty of red flags that can warn us when a scientific claim is probably wrong.
Anti-science Argument #6: Science is the cause of the problems resulting from technology and therefore suspect.
Science and technology can be used for nefarious purposes, and scientists themselves will be the first to warn about these purposes and try to stop them.
Physicist J. Robert Oppenheimer, father of the atomic bomb, later spoke out about scientists’ ethical responsibilities and lobbied fiercely for arms control of the very weapons he’d helped to create.
It’s usually governments and corporations who are the ones to actually develop and use the unethical technology that science can provide, but I won’t sugarcoat this issue: Sometimes scientists can and do create unethical technology.
And yet, does this problem justify scaling back the scientific enterprise as a whole? I find it suspect how selective opponents of science can be, often targeting politically charged issues where the science has been settled while ignoring more legitimate and unnerving problems in scientific ethics.
Such critics also ignore the bountiful benefits science has gifted to us. Why aren’t the internal combustion engine, the television, and penicillin controversial inventions when they came from the exact same method as the more controversial discoveries?
Demeaning science or doubling down on ideological arguments is not the way to fix problems with scientific ethics. Those approaches will only stunt actual progress and suppress discussion of real ethical issues.
The answer is to accept science’s conclusions, then have an open, honest, and unbiased discussion regarding what we should ethically do about those conclusions.
Anti-science Argument #7: Scientists are arrogant, cold, and selfish. They think they know everything.
If you hold this negative view of scientists, please go out and actually talk with some scientists! Most scientists are humble and are constantly questioning their own findings. In fact, no one wants science to be wrong more than science itself.
The scientific method is built not to prove facts, but to disprove facts. All scientific facts are only deemed true because they haven’t been disproved yet. Scientists take this to heart, and are constantly aware that they could be wrong.
I’m sure some scientists actually are arrogant and cold, but they’re hardly representative of scientists as a group. On the contrary, this is a stereotype usually perpetuated by movies and TV. Ask real-life scientists for their motivation, and the nearly unanimous answer will be some form of, “My goal is to help people. I’m working for the betterment of humankind.”
History is replete with stories of scientists acting selflessly. Take Alexander Stchukin, a botanist who, along with eight of his peers, died of starvation during the Siege of Leningrad while guarding a vault of seeds and food. Surrounded by things they could eat, these scientists chose to starve to death so others could eat after the siege ended.
Or take Clair Patterson, a geochemist best known for making the first accurate measurement of the age of the Earth, who, instead of resting on his laurels, dedicated the last thirty years of his life to fighting industrial lead contamination in the environment, likely saving tens of thousands of lives.
And of course there’s the famous case of Jonas Salk, a virologist who declined to patent his polio vaccine, forgoing an estimated $7 billion in royalties, so he could give it away for free. Salk was described as “a person of great warmth and tremendous enthusiasm.”
Stories like these are common in science. Just watch the TV series Cosmos and you’ll discover several other stories of scientists’ love for others, of heartbreak and perseverance, of standing up for what’s right in the face of great opposition.
Lastly, consider all the arrogant and selfish people in our society who we do like: politicians, celebrities, etc. It seems strange to single out scientists for having these traits, not only because they mostly don’t have them, but also because these traits aren’t seen as a bad thing in many other fields where people obviously do have them.
Anti-science Argument #8: There’s so much that science doesn’t know.
Anti-science Argument #9: Science can’t answer every question I have.
It’s frustrating, isn’t it? I feel your pain, and so does every one of the world’s scientists. The burning desire to know is extremely powerful, and it’s the engine that drives the scientific endeavor.
But no matter how frustrating it is to not know all the answers, it’s better that the answers we do know are based in evidence.
This goes back to the concept of uncertainty: the idea that it’s okay not to know. Just because science hasn’t explained something yet doesn’t mean we get to choose any explanation for it that we want.
Not understanding something—yet—makes scientists want to learn more about it, not grasp for the nearest bias-confirming source that presents itself as reliable knowledge.
Anti-science Argument #10: I don’t like the answers that science provides.
It’s fine to disagree with science. Science encourages you to disagree with it, as long as you can back up your contravening claim with evidence. That’s how scientific knowledge grows: skeptical people challenging the status quo with new information.
But there’s a difference between skepticism, which is necessary, and denialism, which is counterproductive. Skeptics demand evidence before they’ll commit to a belief, but for denialists, no amount of evidence will ever be enough to convince them.
For example, no one in their right mind would describe flat-earthers as brave skeptics speaking truth to power, continuing to spread real science in the face of a dominant culture that dismisses their ideas. No, these people aren’t skeptics—they’re denialists. Their beliefs aren’t based on evidence, so no amount of evidence will ever convince them they’re wrong.
In fact, this is a great litmus test for whether your own beliefs are based in evidence. Ask yourself: “What evidence could others provide that would convince me I’m wrong?” If your answer is, “There is no evidence that would convince me I’m wrong,” then your beliefs aren’t based in evidence.
“Okay, enough!” you might be thinking. “I’ve been through all that. I don’t want to succumb to denialism, but I still can’t provide solid evidence against a scientific claim I don’t like.” If you’re thinking this now, then you may find yourself very uncomfortable with the thought that the claim you don’t like… may be true.
And you know what? That’s understandable. It can be scary to realize you might be wrong about something, especially something major and foundational to your worldview. A lot of people have been through this process before and there are several supportive communities online for people who suddenly become unsure that what they thought they knew is actually true.
If, on the other hand, you don’t think you’re wrong and you really want to keep fighting that claim you don’t like, more power to you. Seriously.
But please don’t go slandering the whole scientific enterprise just so you can hold on to your treasured idea. Instead, dig deeply into the evidence and counterevidence. Science asks that we question not only our opponents’ ideas, but our own ideas, too. Groupthink is the antithesis of this, so try to avoid believing something just because one group or another wants you to conform to its views—even your own group.
Also, read your opponents’ actual arguments, rather than straw man arguments created by people you agree with.
But mainly, if you don’t like one of science’s answers, don’t fight it with arguments from tradition, authority, popularity, or emotion. Fight it with more science.
Talking louder and being more confident than your opponent may score you points with your in-group, but if you want to actually get anywhere with your argument, you have to make sure it has a solid foundation. And that means digging deeper into science, not attacking science or running away from it.
So what is science?
Science is a method that organizes and tests knowledge in order to learn what’s true about the world. It requires the collection of evidence, the competition of ideas, the testing of those ideas, the formulation of ideas that can be falsified if they’re untrue, and a humble willingness to abandon, change, or refine incorrect beliefs.
Science is valuable because it can predict future events—flipping a light switch will turn on a light—and because it allows us to build things to better our own lives and the lives of others. It has a long track record of reliability.
Science thrives on curiosity. It promotes both personal and societal growth by leading us ever closer to the truth. And science is humble, following evidence where it leads instead of manipulating evidence to achieve desired results.
If you enjoyed this discussion of science and the ideas at its foundation, you’ll find more where this came from in Thorn, my epic thriller novel about a demon who realizes he has a conscience, and decides to try to become good. Give it a read!
Many thanks to Crystal Watanabe and Reid Nicewonder for their feedback on this blog post.