When discussing evidence it is important to point out that Common Evidence (evidence in the context of everyday discussion) is drastically different from Scientific Evidence (evidence derived from scientific processes). Common Evidence generally consists of proof or testimony. Webster’s New Dictionary of The English Language (2006) provides the following definitions for evidence: “1: outward sign 2: proof or testimony.” An outward sign, proof, or testimony is ambiguous and can mean almost anything. From a scientific perspective, testimonials, anecdotes, “they say” claims, wishful thinking, and so on do not count as evidence.
Testimonials exist for almost any claim you can imagine. That does not mean that claims of this sort have no value. However, they have little value in the context of science. Experiences are confounded (confused by alternative explanations). Experiences may be important in some contexts, and they may serve as meaningful research questions. However, a meaningful question or a possible future finding is not synonymous with evidence, although either could become evidence in the future (Hale, 2013). Scientific evidence is derived from scientific studies. Not all scientific evidence is created equal: many bad studies get published, and many good studies do not get published.
The contents of this article address scientific evidence. I address Common Evidence in a different article:
Testimonials Aren't Real Evidence
Understanding research methods & statistics
Reading and understanding research methods and statistics is not easy. For most people, formal training may be necessary to gain a firm understanding of these relatively difficult subjects. There are people who lack formal training in these areas yet have exceptional knowledge of these topics.
Scientific methods are the most powerful methods we have for discovering reality. Statistics allows us to organize, summarize, and interpret research data collected from samples. In order to fully appreciate and apply the knowledge that has been acquired through the scientific process, it is imperative to have a basic understanding of scientific research methodology: the scientific techniques used to collect and evaluate data.
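As a minimal sketch of what organizing, summarizing, and interpreting sample data looks like in practice, consider the short Python example below. The numbers are invented for illustration, and the standard library statistics module is just one of many ways to compute these summaries.

import statistics

# Hypothetical sample: resting heart rates (beats per minute) from a small study.
# The numbers are invented for illustration only.
sample = [72, 68, 75, 80, 66, 71, 74, 69, 77, 73]

n = len(sample)
mean = statistics.mean(sample)      # central tendency
sd = statistics.stdev(sample)       # variability (sample standard deviation)
sem = sd / n ** 0.5                 # standard error of the mean

# A rough 95% confidence interval for the mean (using 1.96; a t critical value
# would be more appropriate for a sample this small).
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"n = {n}, mean = {mean:.1f}, SD = {sd:.1f}")
print(f"Approximate 95% CI for the mean: [{ci_low:.1f}, {ci_high:.1f}]")

Summaries like these are the raw material of interpretation; they do not by themselves tell you whether a study was well designed.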
It is important to understand that all research methods play an important role in leading us to tentative conclusions concerning how things work in the observable universe. But it is also important to realize that different types of research should be interpreted and applied in different ways. As an example, the primary goal of correlational research is prediction, while the primary goal of experimental research is explanation and understanding (determining cause-and-effect relationships).
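To illustrate the prediction side of that distinction, here is a small sketch using made-up paired data and Python 3.10+ functions from the statistics module; the variables and values are assumptions for the example, not real study results.

import statistics

# Hypothetical paired observations (invented for illustration): weekly training
# hours and a fitness test score for ten people.
hours = [2, 3, 4, 5, 5, 6, 7, 8, 9, 10]
score = [55, 58, 60, 63, 61, 66, 70, 72, 74, 79]

# Strength and direction of the linear relationship (Python 3.10+).
r = statistics.correlation(hours, score)

# A correlation supports prediction: fit a least-squares line and use it to
# predict the score for an unobserved value of the predictor.
slope, intercept = statistics.linear_regression(hours, score)
predicted = slope * 6.5 + intercept

print(f"r = {r:.2f}")
print(f"Predicted score at 6.5 training hours: {predicted:.1f}")

# Note: a correlation of this kind says nothing about cause and effect;
# only a controlled experiment can support a causal claim.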
Quantitative research is different from qualitative research. With quantitative research the results are presented as numbers or quantities; qualitative research presents the results in words (Patten, 2004). Knowledge of statistics is required if one is interested in understanding quantitative research. For a detailed discussion of quantitative vs. qualitative research, refer to Understanding Research Methods by M.L. Patten and Health Psychology by L. Brannon and J. Feist.
An understanding of research methods and statistics is attainable by most people. However, for most it requires a lot of effort. Understanding research requires more than reading an abstract, glossing over the Discussion section of a paper, or repeating what your favorite guru said about the results of a study.
Bad Evidence
When considering the value of evidence, reliability and validity must be considered, along with the type of study and other factors that bear on how a study should be interpreted. A concise discussion regarding reliability and validity is available here: Reliability & Validity. Refer to In Evidence We Trust (Hale, 2013) to learn more about evaluating research.
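As one small, hypothetical illustration of a reliability check (test-retest reliability, one of several forms covered in the pieces linked above), the sketch below correlates scores from two administrations of the same measure. The data are invented and the correlation threshold one would accept depends on the measure.

import statistics

# Hypothetical scores from the same ten people measured twice, one week apart.
# The data are invented for illustration only.
time_1 = [12, 15, 14, 18, 20, 11, 16, 19, 13, 17]
time_2 = [13, 14, 15, 17, 21, 12, 15, 18, 14, 18]

# Test-retest reliability is commonly summarized with a correlation between
# the two administrations: high, positive values suggest consistent measurement.
r = statistics.correlation(time_1, time_2)   # Python 3.10+
print(f"Test-retest correlation: r = {r:.2f}")

# Reliability is necessary but not sufficient: a measure can be highly
# consistent yet still fail to capture the construct it claims to (validity).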
It is imperative to recognize that not all scientific journal articles are quality articles. Journals often publish poor studies, and good studies are sometimes not published. Students in research methods and statistics courses know there are a lot of bad studies published. Students are often required to critique bad journal articles. I hated doing this in graduate school. However, it was a great learning experience, and my ability to spot bad evidence was enhanced.
Bad evidence can be thought of as no evidence. Deciding the value of evidence is an intense intellectual activity, and it becomes increasingly difficult with complex studies.
Experimental Research Fallacy
It is a fallacy that experimental research is always good research. This fallacy is not generally stated explicitly, but it may be implied when only experimental research seems to count in the discussion at hand. As with other research methods, reliability and validity must be considered, along with additional factors that may affect the outcome or the inferences drawn from it. Considering internal validity (in addition to external, construct, and statistical conclusion validity) is important when evaluating experimental results. Research methods other than experiments can provide valuable information, contrary to what some appear to think. As an example, epidemiological studies were the first to detect a relationship between smoking and heart disease (Brannon & Feist, 2010).
If the goal is determining causation, true experimentation is required, although some researchers suggest that some level of causal inference can be drawn using methods other than experiments (Stanovich, 2007; Gore, 2013). True experiments require tedious work and high levels of control. However, experiments are not always practical or ethical, which is one of the reasons other types of methods are needed. If we didn’t have other research methods in addition to experiments, many questions couldn’t be examined.
Evidence Based Practice
Operationism (using operational definitions) removes a concept from the feelings and intuitions of an individual and allows it to be tested by anyone with the resources to carry out the measures (Stanovich, 2007).
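As a toy illustration of what an operational definition buys you, the sketch below pins a vague concept ("improved cardiovascular fitness") to a concrete, repeatable measurement rule. The run distance, the 30-second threshold, and the function name are arbitrary choices made up for this example, not an established standard.

# Toy illustration of an operational definition. The concept "improved
# cardiovascular fitness" is vague; the function below replaces it with a
# concrete, repeatable rule anyone can apply. The 1.5 km distance and the
# 30-second threshold are invented for this example.

def improved_cardio_fitness(baseline_run_s: float, followup_run_s: float,
                            threshold_s: float = 30.0) -> bool:
    """Operational definition: fitness 'improved' if the 1.5 km run time
    dropped by at least threshold_s seconds between measurements."""
    return (baseline_run_s - followup_run_s) >= threshold_s

# Because the definition is stated as a measurement procedure, two observers
# with the same data must reach the same verdict.
print(improved_cardio_fitness(540.0, 500.0))   # True: 40-second improvement
print(improved_cardio_fitness(540.0, 525.0))   # False: only 15 seconds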
When clinical recommendations are incongruent with statistical recommendations, one should generally prefer the statistical recommendations. Refer to the following article to learn more about clinical vs. statistical prediction: When Experts Are Wrong.
Procedures for enhancing medical adherence may involve more than scientific findings. Adherence to the program or treatment plan should be considered, and the treatment provider and patient relationship should also be considered in terms of efficacy. Research indicates a positive relationship between clinician and patient is often associated with a positive outcome (Benedetti, 2011). Does one practicing medicine need to be well read in science to perform well as a clinician? Of course, in order to understand and explain how things work a scientific understanding is needed. But recipe knowledge may work fine in regard to successful clinical practice.
I have asked many of the self-proclaimed evidence based fitness crowd to provide a definition, or at least an approximate definition, of evidence based fitness, but I haven’t received an answer. If there are no guidelines, criteria, or approximations of the concept of Evidence Based Fitness (EBF), the concept is weak. If this concept is hard to test, or as some have suggested non-testable, then it is a non-scientific matter. If what is implied by EBF is that some elements of the training program adhere to scientific findings, then it is reasonable to suggest most successful programs are evidence based, just as successful diet programs succeed because of scientific principles, whether or not their proponents are aware of what those principles are. In addition to lacking a definition, other problems exist regarding EBF: logical inconsistency, and no evidence indicating that clients of evidence based fitness practitioners have better outcomes.
The following questions need to be addressed:
Do trainers need to be well read in science and statistics to be good trainers?
Isn’t a large part of fitness training artistic in nature?
Should evidence based practitioners ridicule people for making non-evidence based fitness claims, and then proceed to make irrational claims associated with other domains of knowledge?
What is the ultimate objective of the EBF crowd? To be well read in science, to design quality training programs, or both?
How can EBF be operationalized?
Do those who call themselves evidence based fitness trainers understand research methods and statistics?
The evidence based fitness movement can have some positive implications: highlighting the importance of science, encouraging people to learn more about science, and encouraging thorough evaluation of popular fitness claims.
Conclusion
Bad evidence is often published in science journals. Understanding that not all scientific journal articles are created equal is imperative. To reiterate, experimental research is one of many scientific methods; other methods can also contribute to an understanding of the universe.