Jamie Hale

Thursday, October 28, 2010

Dysrationalia: Intelligent people behaving irrationally

by Jamie Hale

The following interview features the Stanovich, West, Toplak Research Lab.

Your research shows that intelligence does not imply rationality. Could you please briefly explain your general findings?

Those findings are easy to summarize briefly. They are simply that the correlations between measures of intelligence and various tasks from the cognitive psychology literature that measure aspects of rationality are surprisingly low. We use the term “surprisingly” because for many years it has been known that virtually all cognitive ability tasks correlate with each other; indeed, many show quite high correlations. So, as psychologists, we find the surprise in the context of this vast cognitive ability literature, which has the technical name “Spearman’s positive manifold.” This positive manifold--the finding that performance on cognitive tasks tends to correlate, and often quite highly--is more than 100 years old.

Thus, it was in this particular context, when we started observing fairly modest or low correlations between measures of intelligence and rational thought, that we found the results quite startling. Indeed, in restricted samples of educated adults this correlation can be virtually zero on certain tasks in the literature. Most often the correlation is positive, but, again, in light of 100 years of correlations between cognitive ability tasks, the correlations are often surprisingly low.

Of course one of the implications of this is that it will not be uncommon to find people whose intelligence and rationality are dissociated. That is, it will not be uncommon to find people with high levels of intelligence and low levels of rationality, and, to some extent, the converse. Or, another way to put it is that we should not necessarily expect the two mental characteristics to go together. The correlations are low enough--or moderate enough--that discrepancies between intelligence and rationality should not be uncommon. For one type of discrepancy, that is for people whose rationality is markedly below their intelligence, we have coined the term dysrationalia by analogy to many of the disabilities identified in the learning disability literature:

http://en.wikipedia.org/wiki/Dysrationalia

What is the definition of rationality?

Dictionary definitions of rationality tend to be of a weak sort—often seeming quite lame and unspecific. For example, a typical dictionary definition of rationality is: “the state or quality of being in accord with reason”. The meaning of rationality in modern cognitive science has a much stronger sense; it is much more specific and prescriptive than typical dictionary definitions. The weak definitions of rationality derive from a categorical notion of rationality tracing to Aristotle, who defined man as “the rational animal”. As de Sousa (2007) has pointed out, such a notion of rationality as “based on reason” has as its opposite not irrationality but arationality. Aristotle’s characterization is categorical—the behavior of entities is either based on thought or it is not. Animals are either rational or arational.

In its stronger sense, the sense employed in cognitive science and in this book by de Sousa (2007), rational thought is a normative notion. Its opposite is irrationality, and irrationality comes in degrees. Normative models of optimal judgment and decision making define perfect rationality in the noncategorical view employed in cognitive science. Rationality and irrationality come in degrees defined by the distance of the thought or behavior from the optimum defined by a normative model. This stronger sense is consistent with what recent cognitive science studies have been demonstrating about rational thought in humans.

We would also warn that some critics who wish to downplay the importance of rationality have been perpetuating a caricature of rationality that involves restricting its definition to the ability to do the syllogistic reasoning problems that are encountered in Philosophy 101. The meaning of rationality in modern cognitive science is, in contrast, much more robust and important. Syllogistic reasoning and logic problems are one small part of rational thinking.

Cognitive scientists recognize two types of rationality: instrumental and epistemic. The simplest definition of instrumental rationality, the one that is strongly grounded in the practical world, is: behaving in the world so that you get exactly what you most want, given the resources (physical and mental) available to you. Somewhat more technically, we could characterize instrumental rationality as the optimization of the individual’s goal fulfillment.

The other aspect of rationality studied by cognitive scientists is termed epistemic rationality. This aspect of rationality concerns how well beliefs map onto the actual structure of the world. The two types of rationality are related. In order to take actions that fulfill our goals, we need to base those actions on beliefs that are properly calibrated to the world.

Although many people feel that they could do without the ability to solve textbook logic problems, virtually no person wishes to eschew epistemic rationality and instrumental rationality, when properly defined. Virtually all people want their beliefs to be in some correspondence with reality, and they also want to act to maximize the achievement of their goals. Psychologist Ken Manktelow (2004) has emphasized the practicality of both types of rationality by noting that they concern two critical things: What is true and what to do.

Epistemic rationality is about what is true and instrumental rationality is about what to do. For our beliefs to be rational they must correspond to the way the world is—they must be true. For our actions to be rational they must be the best means toward our goals—they must be the best things to do.

De Sousa, R. (2007). Why think? Evolution and the rational mind. Oxford: Oxford University Press.

Manktelow, K. I. (2004). Reasoning and rationality: The pure and the practical. In K. I. Manktelow & M. C. Chung (Eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 157-177). Hove, England: Psychology Press.

What are some of the rational thinking skills that are positively associated with intelligence? How about rational thinking skills that are not associated with intelligence?

Various probabilistic reasoning tasks have moderate correlations with intelligence. However, myside bias (the tendency to view evidence from one’s own side) is pretty much independent of intelligence in university samples. There are many, many domains of rational thinking measures and they each have important characteristics that will impact whether they are associated with intelligence. Stanovich’s Yale book contains a theoretical explanation of why some rational thinking tasks correlate with intelligence and others do not:

Stanovich, K. E. (2009). What intelligence tests miss: The psychology of rational thought. New Haven, CT: Yale University Press.

In a TV interview you (Toplak) mentioned the need for RQ testing. Do you think we can expect to see RQ testing within the public domain, in the near future?

Yes, this would be a great thing, but it is not likely to happen in the near future. The development of such an instrument would be a logistically daunting task, partly because rational thinking is such a big construct with so many parts. We use the term “multifarious” to describe this, and a metaphor we use is that it is like going to your family doctor for a check-up: there is not one test that will tell you that your health is good, rather the doctor checks multiple things to make this assessment.

The purpose of our work, and many of our recent publications, has been to speed the development of an RQ test along. We have done this by showing that there is no theoretical impediment to designing such a measure. The tasks that would be on such a measure have been introduced into the recent literature, and in several recent publications we have been working on bringing them together into a coherent structure. Of course there are many, many more steps that are needed before one has an actual standardized test. Standardization samples would need to be run and items would need to be piloted. For the corporations that produce mental tests, it is an endeavor that would cost millions of dollars.

Again, the purpose of some of our recent work has been to sketch out what such an endeavor would look like, to show that there is no theoretical or empirical impediment to such a thing, and to recruit others into this endeavor of working on such an instrument. We would like to include others in this endeavor, because we believe that it is way beyond the capabilities of a single laboratory. Our hope is that such an instrument might someday stand in parallel to the intelligence tests. This has been one of the motivations in our recent books and chapters, such as the following:

Stanovich, K. E., West, R. F., & Toplak, M. E. (in press). Intelligence and rationality. In R. J. Sternberg & S. B. Kaufman (Eds.), Cambridge handbook of intelligence (3rd Edition). Cambridge, UK: Cambridge University Press.

We need to emphasize, however, that there is no reason for this to be an all-or-nothing, rather than an incremental, process. There clearly would be immediate practical uses for less all-encompassing instruments that focused on important components of rational thinking (e.g., economic thinking, probabilistic thinking, scientific thinking, avoidance of myside bias).

Is rationality more important than intelligence?

No, we would never make such a blanket statement. We would only say that the magnitude of its importance at least approaches that of intelligence. Differences in rational thought have real-world consequences that cash out in terms of important outcomes in people's lives. We don’t want to get into a contest of which is more important. We acknowledge that intelligence, as assessed by standardized tests, is one of the most important psychological constructs that has ever been discovered. But outlining the nature of rational thought, how to conceive of it theoretically, and how to measure it empirically is certainly up there with intelligence among the most important five or six mental constructs that psychologists have investigated.

Can a person be highly rational, but rank low in intelligence?

Yes. As we noted in our response to the first question, the whole point of our research showing that the correlation between the two is not excessively high is that you can have discrepancies: one can be high on one and low on the other.

Tell our readers how they can improve their rational thinking skills.

A good first start is education, which readers have already started here by reading this blog entry. Having an understanding of how cognitive scientists have expanded what is meant by rationality is important, namely that rationality is about two critical things: What is true and what to do.

There are numerous books that deal with rational thinking. Some of the chapters and books in our own research lab have contributed to this, and we will list them at the bottom of this entry.

Do you think a good starting point would be becoming educated on basic logic?

Basic logic would be part of a rational thinking skills curriculum, but not necessarily the first part. Again, rational thinking in cognitive science encompasses decision theory, epistemic rationality, and many areas beyond simply the study of basic logic in Philosophy 101. It is very important to understand that rational thinking in cognitive science is rooted in good decision making. Good decision-making skills and good skills of knowledge acquisition do have logical thinking as one subcomponent, but there are many subskills that are even more important than logic--the subskills of scientific thinking, statistical thinking, and probabilistic reasoning, for example. Many of these are covered in the books that we recommend here.

Baron, J. (2008). Thinking and deciding (Fourth Edition). Cambridge, MA: Cambridge University Press.

Hastie, R., & Dawes, R. M. (2001). Rational choice in an uncertain world. Thousand Oaks, CA: Sage. (a new 2010 edition is just out)

A recent chapter of ours contains a large number of citations to successful attempts to teach the skills of rational thought:

Toplak, M. E., West, R. F., & Stanovich, K. E. (2011). Education for rational thought. In M. J. Lawson & J. R. Kirby (Eds.), The quality of learning. New York: Cambridge University Press.

Is there a particular book that you recommend- for people interested in increasing their rationality- for the lay public?

Yes, some of the books that we have already mentioned. We will be so immodest as to recommend a small textbook of our own.

Stanovich, K. E. (2010). Decision making and rationality in the modern world. New York: Oxford University Press.

Weblinks with bios and further information:

http://web.mac.com/kstanovich/iWeb/Site/Home.html
http://www.yorku.ca/mtoplak/
http://web.me.com/westrf1/Site_2/Welcome.html
Video- Stanovich Grawemeyer Lecture- Third link from the top of the page
http://web.mac.com/kstanovich/iWeb/Site/Audio_Visual.html

Friday, October 22, 2010

Good Thinking: More Than Just Intelligence

by Jamie Hale

Are intelligent people good thinkers? Some are, some are not. Society is replete with examples of intelligent people doing foolish things. There is a plethora of scientific data showing that intelligence does not necessarily predict rationality. Intelligence shows a low to moderate association with some critical thinking / rational thinking skills, while showing little to no association with others. A study published in Thinking & Reasoning (Stanovich & West, 2008) investigated two key critical thinking skills- avoidance of myside bias and avoidance of one-side bias.

THINKING & REASONING
2008, 14 (2), 129 – 167

On the failure of cognitive ability to predict myside and one-sided thinking biases
Keith E. Stanovich
University of Toronto, Canada
Richard F. West
James Madison University, Harrisonburg, VA, USA

Two critical thinking skills—the tendency to avoid myside bias and to avoid one-sided thinking—were examined in three different experiments involving over 1200 participants and across two different paradigms. Robust indications of myside bias were observed in all three experiments. Participants gave higher evaluations to arguments that supported their opinions than those that refuted their prior positions. Likewise, substantial one-side bias was observed—participants were more likely to prefer a one-sided to a balanced argument. There was substantial variation in both types of bias, but we failed to find that participants of higher cognitive ability displayed less myside bias or less one-side bias. Although cognitive ability failed to associate with the magnitude of the myside bias, the strength and content of the prior opinion did predict the degree of myside bias shown. Our results indicate that cognitive ability—as defined by traditional psychometric indicators—turns out to be surprisingly independent of two of the most important critical thinking tendencies discussed in the literature.

Key cognitive skills required for critical thinking are the ability to evaluate evidence in an objective manner and the ability to consider multiple points of view when solving a problem or coming to a conclusion. Most people fail to demonstrate these critical thinking tendencies. Myside bias is displayed when people evaluate evidence and come to conclusions that are biased towards their own beliefs and opinions. One-side bias is demonstrated when people prefer one-sided arguments over arguments presenting multiple perspectives. Intelligent people are just as likely as less intelligent people to demonstrate these thinking biases. Before going further, it is important to mention that intelligence in this context refers to cognitive abilities measured by popular intelligence tests and their proxies. These tests do a good job of assessing computational power and certain types of declarative knowledge, but they do not adequately assess critical thinking skills. Avoidance of myside bias and one-side bias is not measured on intelligence tests. It seems that intelligence tests are missing an important element of good thinking- evaluating evidence in an unbiased manner and considering a multitude of perspectives when problem solving. I don't think any sane person would argue that these skills are not important.

In a series of experiments Stanovich and West examined the association between cognitive ability and two cardinal critical thinking skills- avoidance of myside bias and avoidance of one-side bias. In Experiment 1 natural myside bias was investigated across 15 different propositions. In Experiment 2 myside bias and one-side bias were studied. In Experiment 3 associations between thinking dispositions- in addition to cognitive ability- and one-side and myside bias were investigated.

In Experiment 1, the researchers concluded, there was "no evidence at all that myside bias effects are smaller for students of higher cognitive ability" (p.140). The main purpose of Experiment 2 was to investigate the association of cognitive abilities with myside and one-side bias. "The results... were quite clear cut. SAT total scores displayed a nonsignificant -.03 correlation with the degree of myside bias and a correlation of .09 with the degree of one-side bias (onebias1), which just missed significance on a two-tailed test but in any case was in the unexpected direction" (p.147). It was also revealed that stronger beliefs usually imply heavier myside bias. In Experiment 3 "the degree of myside bias was uncorrelated with SAT scores", and "[t]he degree of one-side bias was uncorrelated with SAT scores" (p.156). Myside bias was weakly correlated with thinking dispositions. One-side bias showed no correlation with thinking dispositions.

The final two sentences of the research report read: "Our results thus indicate that intelligence—as defined by traditional psychometric indicators—turns out to be surprisingly independent of critical thinking tendencies. Cognitive ability measures such as the SAT thus miss entirely an important quality of good thinking" (p.161). The good news is that critical thinking abilities are malleable- in fact, probably more malleable than intelligence.

Reference
Stanovich, K. & West, R. (2008). On the failure of cognitive ability to predict myside and one-sided thinking biases. Thinking & Reasoning, 14 (2), 129-167.

Additional Sources
What IQ Tests Miss- Dr. Toplak Interview
http://www.youtube.com/watch?v=mGka5bQIgS4

 
Stanovich, K. E. (2009, Nov/Dec). The thinking that IQ tests miss. Scientific American Mind, 20(6), 34-39. First Link at the top of this page -
http://web.mac.com/kstanovich/iWeb/Site/Research%20on%20Reasoning.html

Next week I will be publishing an interview with the Stanovich, West, Toplak Research Lab.

Wednesday, September 8, 2010

How We Know What Isn't So: Interview w/ Thomas Gilovich

by Jamie Hale

Recently, I had a chance to conduct an interview with one of my favorite writers- Thomas Gilovich. Gilovich is the author of How We Know What Isn’t So, one of the most celebrated books in the skeptic and critical thinking community. I first learned about the book when reading Shermer’s Why People Believe Weird Things. In addition, Gilovich has co-authored three other books. He is also a prominent primary researcher and professor & chairperson at Cornell University.

If you have any interest in critical thinking or in everyday judgments and beliefs, you will love the following interview.

Does the "hot hand" or "streak shooting"- as defined by basketball enthusiasts- really exist? What are the origins of this idea? How prevalent is this idea among basketball enthusiasts?

Despite what everyone "knows" to be the case, basketball players do not shoot in streaks. That is, their streakiness does not exceed the level of streakiness one observes when, say, flipping coins. The widespread belief in streak shooting seems to stem from a common misconception about what chance outcomes look like. Statisticians refer to this as the clustering illusion. Purely randomly arranged stimuli "clump" together more than one would expect, and so when we see instances of randomness, it doesn't look random to us. Applied to basketball, when we see a player make four, five, or six shots in a row, we think the player is hot. But careful statistical analyses reveal that the frequency of such sequences does not exceed what one would expect if the outcomes of prior shots had no influence on the outcomes of subsequent shots.
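The clustering illusion is easy to demonstrate for yourself. Here is a minimal simulation sketch in Python (the function names and parameters are my own illustrative choices, not from Gilovich's analyses): every shot is an independent 50/50 flip- there is no hot hand in the model by construction- yet long streaks of makes still appear routinely.

```python
import random

def longest_streak(shots):
    """Length of the longest run of consecutive made shots (True values)."""
    best = run = 0
    for made in shots:
        run = run + 1 if made else 0
        best = max(best, run)
    return best

def simulate_season(n_shots=500, p_make=0.5, seed=0):
    """One player's season modeled as independent coin-flip shots."""
    rng = random.Random(seed)
    return [rng.random() < p_make for _ in range(n_shots)]

# No shot depends on any prior shot, yet "hot" streaks show up anyway.
streaks = [longest_streak(simulate_season(seed=s)) for s in range(100)]
print("Longest streaks across 100 simulated seasons ranged from",
      min(streaks), "to", max(streaks))
```

Runs of five or more consecutive makes occur in virtually every simulated season- exactly the kind of "clump" that looks like a hot hand but is pure chance.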

Does money buy happiness? Explain.

Money is associated with happiness. People in rich countries are happier, on average, than people in poor countries and, within a country, people with more money are happier, on average, than people with less. As you might imagine, it's not a huge effect, but it's there. Whether money "buys" happiness depends in part on how one spends it, and a great deal of recent research in psychology and economics has been devoted to figuring out what type of expenditures yield the most happiness, and the most enduring happiness.

What do you think is the most common, or maybe a short list of a few of the most common errors in thinking that lead to bad decision-making?

At the top of the list, what my colleague Scott Lilienfeld refers to as "the mother of all biases," is what is known as the confirmation bias, or the tendency to examine whether an idea is true—to test a hypothesis—by looking disproportionately for evidence consistent with that idea. Someone testing whether professors tend to be pompous will search their memories (or the outside world) for pompous professors; someone testing whether professors tend to be modest will search their memories for modest Profs. Because there is SOME evidence for nearly any idea (there are many pompous profs and many modest ones), this bias leads to an excess of credulity.

What advice can you give people who are interested in increasing their critical thinking skills?

Take a course in statistics and one in psychology (social psychology in particular).

What is your favorite book? Favorite website?

I'm not a big fan of favorites because there's so much great stuff at the top end of almost any category, and who wants to assign one member of the upper echelon to a lower rung? (Is Citizen Kane really better than The Godfather, as the American Film Institute would have us believe?) But Guns, Germs, and Steel and The Omnivore's Dilemma are certainly favorites, as is the book I just finished, Nicholson Baker's fabulous The Anthologist. As for a favorite website, it's hard to beat The New York Times—and I can't resist a plug for my brother's website, www.surfline.com.

Of the books you have written, which one is your favorite?

Again, I don't much like the idea of favorites but my first, How We Know What Isn't So, will always be special to me.

What are your current research interests?

I remain interested in trying to understand how people can become convinced of things that dispassionate analysis and careful inquiry indicate are not true. False beliefs, superstitions, faulty judgments in all walks of life—in politics and government, in economics and personal finance, in sports, and in personal relationships.


Recommended Sources

Gilovich faculty page
http://www.psych.cornell.edu/people/Faculty/tdg1.html


Gilovich Books
http://www.psych.cornell.edu/tdg1/Books.html

Tuesday, August 24, 2010

Sports Illustrated Jinx: Is it really a jinx?

By Jamie Hale

There are many coaches, athletes, sports commentators, and sports fans who believe being featured on the cover of Sports Illustrated is not a good thing for an athlete. Supporters of the Sports Illustrated Jinx (SIJ) claim being featured on the cover leads to bad luck. SIJ proponents can cite numerous cases to support their belief.

Victims of the Sports Illustrated Cover Jinx (Wikipedia excerpts):

“May 26, 1958: Race car driver Pat O’Connor appears on the cover. He dies four days later on the first lap of the Indianapolis 500.

August 7, 1978: Pete Rose appears on the cover the same week that his 44-game hitting streak ended.

May 8, 1989: Jon Peters, of Brenham High School in Texas, sets the national high school record for games won by a pitcher, with a 51-0 record. The next game after the cover, he loses for the first (and only) time of his high school career.

In November 2007, Kerry Meier of the Kansas Jayhawks appeared on the cover, which stated "Dream Season (So Far)" after the Jayhawks were 11-0. In their next game they lost to their archrivals, the Missouri Tigers, 36-28, ending the Jayhawks perfect season.

November 9, 2009: Iowa's Derell Johnson Koulianos appears on the front cover with the words "Still Perfect." The Hawkeyes lost to Northwestern two days before the issue date, ending the longest winning streak in school history.”

Maybe the SIJ is a real phenomenon- or maybe, almost certainly, it is an erroneous belief produced by the regression fallacy. Gilovich (1991) explains how the regression fallacy applies to the SIJ myth:

It does not take much statistical sophistication to see how regression effects may be responsible for the belief in the Sports Illustrated jinx. Athletes' performances at different times are imperfectly correlated. Thus, due to regression alone, we can expect an extraordinarily good performance to be followed, on the average, by a somewhat less extraordinary performance. Athletes appear on the cover of Sports Illustrated when they are newsworthy- i.e., when their performance is extraordinary. Thus, an athlete’s superior performance in the weeks preceding a cover story is very likely to be followed by somewhat poorer performance in the weeks after. Those who believe in the jinx, like those who believe in the hot hand, are mistaken, not in what they observe, but in how they interpret what they see. Many athletes do suffer a deterioration in their performance after being pictured on the cover of Sports Illustrated, and the mistake lies in citing a jinx, rather than regression, as the proper interpretation of this phenomenon.
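Gilovich's regression argument can be illustrated with a small simulation. This is only a sketch under a toy assumption of my own- performance equals stable talent plus independent week-to-week luck- and all names and numbers here are illustrative, not from Gilovich:

```python
import random

def simulate_regression(n_athletes=100_000, seed=1):
    """Performance = stable talent + independent week-to-week luck."""
    rng = random.Random(seed)
    athletes = []
    for _ in range(n_athletes):
        talent = rng.gauss(0, 1)
        week1 = talent + rng.gauss(0, 1)   # the week that earns the cover
        week2 = talent + rng.gauss(0, 1)   # the week after the cover
        athletes.append((week1, week2))
    # "Cover athletes": the top 1% of week-1 performances
    athletes.sort(key=lambda a: a[0], reverse=True)
    covers = athletes[: n_athletes // 100]
    mean_w1 = sum(w1 for w1, _ in covers) / len(covers)
    mean_w2 = sum(w2 for _, w2 in covers) / len(covers)
    return mean_w1, mean_w2

w1, w2 = simulate_regression()
# The cover athletes' week-2 average falls well below their extraordinary
# week-1 average (the "jinx"), yet stays above the population mean of 0,
# because their talent is real -- only their luck regressed.
```

No jinx is built into this model- week 2 is generated exactly like week 1- yet the selected athletes decline on average, purely because they were selected at a lucky peak.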


I wonder what SIJ supporters think of Michael Jordan's 57 appearances on the cover (Greenfield, 2010), or of Vince Young, who appeared on the cover of Sports Illustrated twice during Texas's National Championship season (Zahn, 2002)?

References

Gilovich, T. (1991). How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. New York: Free Press.

Greenfield, J. (2010). Michael Jordan: The Sports Illustrated Covers. http://www.chicagonow.com/blogs/sports/2010/01/michael-jordan-the-sports-illustrated-covers-1.html (accessed August 23, 2010)

Wikipedia. Sports Illustrated Cover Jinx. http://en.wikipedia.org/wiki/Sports_Illustrated_Cover_Jinx (accessed August 23, 2010)

Zahn, P. (2002). Is Their (sic) a “Sports Illustrated Cover Jinx”? CNN. http://transcripts.cnn.com/TRANSCRIPTS/0201/25/ltm.01.html. (accessed August 23, 2010).

Monday, August 9, 2010

Common Sense Doesn't Matter

by Jamie Hale

“Albert Einstein said common sense is the collection of prejudices acquired by the age of 18. It is also a result of some pervasive and extremely stupid logical fallacies that have become embedded in the human brain over generations, for one reason or another. These malfunctioning thoughts--several of which you've had already today--are a major cause of everything that's wrong with the world” (Shakespeare, 2009).

Webster’s New World Dictionary (2003) defines common sense as: “good sense or practical judgement.” This is probably the most commonly accepted definition of the word.

Wikipedia says:

“Common sense, based on a strict construction of the term, consists of what people in common would agree on: that which they "sense" as their common natural understanding.

Some people (such as the authors of Merriam-Webster Online) use the phrase to refer to beliefs or propositions that — in their opinion — most people would consider prudent and of sound judgment, without reliance on esoteric knowledge or study or research, but based upon what they see as knowledge held by people "in common".

The most common meaning to the phrase is good sense and sound judgement in practical matters.”

A better definition of Common Sense is: commonly held belief, regardless of its truth value.

It doesn’t matter which definition you prefer to use when discussing Common Sense; citing Common Sense as the reason for a particular claim is fallacious- it provides no actual support for the claim. Yesterday’s Common Sense is often today’s Common Nonsense. Once upon a time it was common sense that the world was flat. History is replete with examples of Common Sense failure.

The list below was contributed by Frank Lovell, Kentucky Association of Science Educators and Skeptics member.

Common Sense Counterfactuals

“The sun orbits Earth once a day. FALSE -- Earth rotates under the (relative to Earth, essentially) stationary sun once a day, and orbits the stationary sun once a year.

Velocities are simply additive (1mph+1mph=2mph, and 100,000mps+100,000mps=200,000mps). FALSE -- special relativity.

Time is absolute. FALSE -- Special Relativity.

Space is absolute. FALSE -- special relativity (what IS absolute is "space-time").

Earth's continents do not move. FALSE -- plate tectonics.

Everything that happens is rigorously mechanically determined. FALSE -- quantum mechanics.”

From Lilienfeld et al. (2010, p.6):

“…French writer Voltaire (1764) pointed out, ‘Common sense is not so common.’ Indeed, one of our primary goals in this book is to encourage you to mistrust your common sense when evaluating psychological claims. As a general rule, you should consult research evidence, not your intuitions, when deciding whether a scientific claim is correct.

As several science writers, including Lewis Wolpert (1992) and Alan Cromer (1993), have observed, science is uncommon sense. In other words, science requires us to put aside our common sense when evaluating evidence (Flagel & Gendreau, 2008; Gendreau et al., 2002).”

When engaging in argument, avoid using the Common Sense fallacy; it gives the impression that you have no evidence to support your claim. It may persuade some people, but it will fail when arguing with someone who has a firm understanding of logic.


References

Lilienfeld, S. et al. (2010). 50 Great Myths of Popular Psychology. Wiley-Blackwell.

Shakespeare, G. (2009). 5 Ways “Common Sense” lies to you Everyday. http://www.cracked.com/article_17142_5-ways-common-sense-lies-to-you-everyday.html. (Accessed August 8, 2010).

Webster’s New World Dictionary. (2004). Wiley Publishing Inc.

Wikipedia. Common sense. http://en.wikipedia.org/wiki/Common_sense. (Accessed August 8, 2010).

Wednesday, July 28, 2010

Science Might be Wrong

by Jamie Hale

Recently, a friend and I were discussing my article Sham Psychology or Scientific Psychology when he asked, “Are there any definites in Psychology?” I answered by telling him there are no definites in psychology, in any other branch of science, or in any other method of knowledge acquisition. Some people have the idea that science claims certainty, when in fact scientific knowledge is tentative. The tentative nature of science is one of its strong points. Science, unlike faith-based belief, accepts the preponderance of evidence and changes its stance if the evidence warrants.

The scientist has the attitude that there are no absolute certainties. R. A. Lyttleton suggests using the bead model of truth (Duncan & Weston-Smith, 1977). This model depicts a bead on a horizontal wire that can move left or right. A 0 appears on the far left end, and a 1 appears on the far right end. The 0 corresponds with total disbelief and the 1 corresponds with total belief (absolute certainty). Lyttleton suggests that the bead should never reach the far left or right end. The more the evidence suggests the belief is true, the closer the bead should be to 1. The less likely the belief is to be true, the closer the bead should be to 0.
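The bead model can be read as an informal picture of Bayesian belief updating. Here is a minimal Python sketch of that reading (my own illustrative code, not from Lyttleton; the 4:1 likelihood ratio is an assumption): each piece of evidence pushes the bead toward one end of the wire, but the bead never actually arrives at 0 or 1.

```python
def update_belief(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

belief = 0.5  # the bead starts mid-wire
for _ in range(10):
    # Each observation favors the claim 4:1 (an illustrative ratio).
    belief = update_belief(belief, 4.0)

# After ten favorable observations belief is very close to 1, but...
assert 0 < belief < 1  # ...the bead never reaches either end of the wire
```

A likelihood ratio below 1 would push the bead toward 0 instead; in both directions the probability only approaches the endpoint asymptotically, which is exactly Lyttleton's point about absolute certainty.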

Non-scientists are ready to accept explanations based on insufficient evidence, or sometimes no evidence at all. They heard it on CNN, or their teacher said it, so it must be true (the logical fallacy of appeal to authority). They reject notions because they can’t understand them or because they don’t respect the person making the claim. The scientist investigates the claim and critically evaluates the evidence.

The scientific method is the best method we have for acquiring knowledge. Sometimes science is wrong, but science does not claim absolutism, nor does it claim to have all the answers. I have heard some people say, “Science doesn’t matter; what matters is the real world.” News flash: the scientific method is the very best tool we have for understanding the real world. Of course, no one complains about science while watching TV, driving a car, or taking medication, all luxuries given to us by science.

References

Duncan, R., & Weston-Smith, M. (Eds.). (1977). The Encyclopaedia of Ignorance. Pergamon Press. (Contains R. A. Lyttleton, “The Nature of Knowledge.”)

Hale, J. (2009). Scientific and Nonscientific Approaches to Knowledge. http://www.maxcondition.com/page.php?126 (Accessed July 28, 2010).

Sunday, June 6, 2010

Correlational Studies & Science

As anyone who reads scientific research knows, correlation does not necessarily imply causation. Two variables may be associated without having a causal relationship. However, the fact that a correlation has limited value for causal inference does not mean that correlational studies are unimportant to science.

Why are correlation studies important? Stanovich (2007) points out the following:

“First, many scientific hypotheses are stated in terms of correlation or lack of correlation, so that such studies are directly relevant to these hypotheses.”

“Second, although correlation does not imply causation, causation does imply correlation. That is, although a correlational study cannot definitely prove a causal hypothesis, it may rule one out.”

“Third, correlational studies are more useful than they may seem, because some of the recently developed complex correlational designs allow for some very limited causal inferences.”

“…some variables simply cannot be manipulated for ethical reasons (for instance, human malnutrition or physical disabilities). Other variables, such as birth order, sex, and age are inherently correlational because they cannot be manipulated, and, therefore, the scientific knowledge concerning them must be based on correlation evidence.”


When practical, evidence from correlational studies can lead to follow-up testing under controlled experimental conditions.

In conclusion, it is true that correlation does not necessarily imply causation; however, causation does imply correlation. Correlational studies are a stepping-stone to the more powerful experimental method.

Notes:

There are two major problems when attempting to infer causation from a simple correlation:

1) Directionality problem: before concluding that a correlation between variables 1 and 2 is due to changes in 1 causing changes in 2, it is important to realize that the direction of causation may be the opposite, that is, from 2 to 1.

2) Third-variable problem: the correlation between the variables may occur because both are related to a third variable.

Complex correlational statistics such as path analysis, multiple regression, and partial correlation “allow the correlation between two variables to be recalculated after the influence of other variables is removed, or ‘factored out’ or ‘partialed out’” (Stanovich, 2007, p. 77).
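Partialing out can be made concrete with the standard first-order partial correlation formula (the formula itself is not given in the post):

```python
from math import sqrt

def partial_correlation(r_xy, r_xz, r_yz):
    """Correlation between X and Y after Z is partialed out."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# If Z correlates with X and with Y at 0.8 each, and X and Y
# correlate at exactly 0.8 * 0.8 = 0.64, then Z fully accounts
# for the X-Y association: the partial correlation is ~0.
print(partial_correlation(0.64, 0.8, 0.8))
```

When the partial correlation drops to zero after Z is factored out, the original X-Y correlation was spurious in Kenny’s sense; when it survives, Z alone cannot explain the relationship.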

Conditions Necessary to Infer Causation (Kenny, 1979):

Time precedence: For 1 to cause 2, 1 must precede 2. The cause must precede the effect.

Relationship: The variables must correlate. To determine the relationship of two variables, it must be determined if the relationship could occur due to chance. Lay observers are often not good judges of the presence of relationships, thus, statistical methods are used to measure and test the existence and strength of relationships.

Nonspuriousness (spurious: not genuine): “The third and final condition for a causal relationship is nonspuriousness (Suppes, 1970). For a relationship between X and Y to be nonspurious, there must not be a Z that causes both X and Y such that the relationship between X and Y vanishes once Z is controlled” (Kenny, 1979, pp. 4-5).

References

Kenny, D. A. (1979). Correlation and Causality. New York: Wiley.

Stanovich, K. (2007). How to Think Straight About Psychology. Boston, MA: Pearson.