The Nonsense Detection Kit provides guidelines for separating sense from nonsense. There is no single criterion for distinguishing the two, but it is possible to identify indicators, or warning signs. The more warning signs that appear, the more likely it is that a claim is nonsense.
Below is a brief description of indicators that should be useful when separating sense from nonsense, whether evaluating claims made by the media, on the Internet, in peer-reviewed publications, in lectures, by friends, or in everyday conversations with colleagues.
Nonsense indicator: claims haven’t been verified by an independent source
Nonsense perpetuators often claim special knowledge: they have made specific discoveries that only they know about. Others lack the know-how, or do not have the proper equipment, to make the finding. Such claims are often signaled by phrases like “revolutionary breakthrough,” “what scientists don’t want you to know,” “what only a limited few have discovered,” and so on. These findings are not subject to criticism or replication. That is not how science works.
When conducting studies, it is imperative that researchers operationalize their variables (provide operational definitions: the precise, observable operations used to manipulate or measure a variable) so that the specifics can be criticized and replicated. Non-scientists are not concerned with others being able to replicate their findings, because they know attempted replications will probably be unsuccessful. If a finding cannot be replicated, this is a big problem, and it is unreasonable to consider a single finding as evidence. It is also problematic when only those who made the original finding have replicated it successfully. When independent researchers using the same methods as the original study are unable to replicate the finding, this is a sign that something was faulty with the original research.
Nonsense indicator: claimant has only searched for confirmatory evidence
Confirmation bias is a cognitive error (cognitive bias) defined as the tendency to seek out confirmatory evidence while rejecting or ignoring non-confirming evidence (Gilovich, 1991). It is pervasive, and may be the most common cognitive bias. Most people tend to look for supporting evidence while ignoring, or not looking very hard for, disconfirmatory evidence. This is displayed when people cherry-pick the evidence. Of course, if you are a lawyer, this is what you need to do: you don’t want any evidence entering the case that is incongruent with the evidence you present. As a scientist, however, it is important to look for disconfirming evidence; indeed, it has been suggested that a good scientist goes out of their way to look for it. Why? Because when discovering reality is the objective, it is necessary to look at all the available data, not just the data supporting one’s own assertions. Confirmation bias occurs when the only good evidence, according to the claimant, is the evidence that supports their claim. Often, perpetuators of nonsense are not even aware of disconfirmatory evidence; they have no interest in looking at it.
A study by Frey and Stahlberg (1986) examined how people cherry-pick the evidence. Participants took an IQ test and were given feedback indicating that their IQ was either high or low. After receiving the feedback, participants had a chance to read magazine articles about IQ tests. Participants who were told they had low IQ scores spent more time looking at articles that criticized the validity of IQ tests, while those who were told they had high IQ scores spent more time looking at articles supporting the claim that IQ tests are valid measures of intelligence.
Scientific thinking is structured to minimize confirmation bias. The late Richard Feynman (Nobel Laureate, Physics) suggested that science is a set of processes that detects self-deception (Feynman, 1999). That is, science makes sure we don’t fool ourselves.
Nonsense indicator: claimant does not adhere to the standard rules of reason and research
Many nonsense advocates do not even know what the standard rules of reason and research are, let alone adhere to them. They often lack any training in research methodology and are ignorant of the accepted rules of scholarly work (Shermer, 2001). Consider the following example provided by Shermer (2001, p. 21).
Creationists (mainly young-earth creationists) do not study the history of life. In fact, they have no interest in the history of life whatsoever, since they already know the history as it is laid down in the book of Genesis. No one fossil, no one piece of biological or paleontological evidence has “evolution” written on it; instead there is a convergence of evidence. To deny evolution, they have to abandon the rules of science, which isn’t difficult for them since most of them, in fact, are not practicing scientists. The only reason creationists read scientific journals at all is either to find flaws in the theory of evolution or to find ways to fit scientific ideas into their religious doctrines.
…
Other nonsense indicators featured in the Nonsense Detection Kit: personal beliefs and biases drive the conclusion, excessive reliance on authorities, use of logical fallacies, claims that cannot be falsified, avoidance of peer review, overreliance on anecdotes, extraordinary claims, and use of excessive “science-sounding” words or concepts.
The complete Nonsense Detection Kit is featured in the book In Evidence We Trust: The Need for Science, Rationality and Statistics.
References are available upon request.