This is the first in a series of short articles that will discuss basic stats. Learning about stats will help you think in terms of probabilities, and allow you to gain a better understanding of research data.
Statistic: a single number that summarizes a property of a set of numbers (Osbaldiston, 2011)
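To make the definition concrete, here is a minimal Python sketch; the data set is invented, and each computed value is one number summarizing the whole set:

```python
# Each value computed below is a statistic: one number that summarizes
# a property of the whole set. The data are invented for illustration.
data = [4, 8, 15, 16, 23, 42]

mean = sum(data) / len(data)        # average value
spread = max(data) - min(data)      # range of the values

print(f"mean = {mean}")     # 18.0
print(f"range = {spread}")  # 38
```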
One of the key reasons we need statistics is to conduct research effectively. Without statistics it would be very difficult to analyze collected data and make decisions based on it. Statistics give us an overview of the data and allow us to make sense of what is going on; without them, in many cases, it would be extremely difficult to find meaning in the data. Statistics provide us with a tool for making educated inferences.
Most scientific and technical journals contain some form of statistics. Without an understanding of statistics, that statistical information is meaningless. An understanding of basic statistics will give you the fundamental skills necessary to read and evaluate most results sections. The ability to extract meaning from journal articles, and to evaluate research from a statistical perspective, will increase your knowledge and understanding of the articles that interest you.
Gaining knowledge in the area of statistics will also help you become a better-informed consumer. Of course, statistics can be used or misused, and some individuals do mislead with statistics. If you understand basic statistical concepts, you will be in a better position to evaluate the information you are given.
Future articles will discuss the mean, median, mode, range, standard deviation, t-tests, the correlation coefficient, and many other statistical concepts.
Monday, December 19, 2011
Saturday, December 17, 2011
"Person-Who" Statistics
Results of scientific studies are stated in probabilistic terms. Science is not in the business of making claims of absolute certainty (refer to the bead model of truth). When science describes, predicts, or explains something, it is understood that the conclusion is tentative. This willingness to admit fallibility is probably one of science's biggest strengths; in virtually every other area of knowledge acquisition, admitting fallibility is treated not as a virtue but as a severe weakness.
Person-who statistics: situations in which well-established statistical trends are questioned because someone knows a “person who” went against the trend (Stanovich, 2007). For example: “Look at my grandpa; he is ninety years old, has been smoking since he was thirteen, and is still healthy,” implying that smoking is not bad for health. Learning to think probabilistically can increase one's ability to think accurately. Person-who statistics are a ubiquitous phenomenon.
Research shows people have a difficult time thinking probabilistically. People like things stated in absolute terms. However, many things cannot be explained in those terms, and when reasoning about causation in everyday life we are often wrong. Determining what causes something is not as simple as we would like to think.
The conclusions drawn from scientific research are probabilistic: generalizations that are correct most of the time, but not every time. People often weight anecdotal evidence more heavily than probabilistic information. This is an error in thinking that leads to bad decisions and, often, irrational thinking.
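A small simulation can show why a “person who” is guaranteed to exist even when a trend is real. This is a hedged Python sketch: the survival probabilities below are invented purely for illustration and are not taken from any study.

```python
import random

random.seed(1)

# Hypothetical probabilities, invented for this sketch: suppose smokers
# reach age ninety with probability 0.10 and non-smokers with 0.25.
P_SMOKER, P_NONSMOKER = 0.10, 0.25
N = 100_000  # simulated people per group

smokers_90 = sum(random.random() < P_SMOKER for _ in range(N))
nonsmokers_90 = sum(random.random() < P_NONSMOKER for _ in range(N))

print(f"smokers reaching 90:     {smokers_90}")     # roughly 10,000
print(f"non-smokers reaching 90: {nonsmokers_90}")  # roughly 25,000
```

Even with non-smokers reaching ninety at more than twice the rate, roughly ten thousand simulated smokers still make it. Finding a “grandpa who” is therefore expected, and it tells us nothing about the trend, which shows up only in the aggregate.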
References
Stanovich, K. (2007). How To Think Straight About Psychology. Boston, MA: Pearson.
Monday, December 5, 2011
How I became Interested In Science
Kevin Akers
It took a good deal of thinking to remember when and why I became interested in science; it is hard to narrow it down to just one or two things. I was always a pretty good student in high school and college, and I would guess that some of my teachers instilled a basic interest in various science topics in me. I became interested in science primarily when I began reading on my own after I graduated, instead of reading for a particular class. I found it frustrating that each week, according to the news, the same food seemed to alternate between being horrible for you and being great for you. I was also frustrated when a reporter would spend thirty seconds or so attempting to describe a scientific discovery but obviously had little idea what it meant or how it was discovered. Along with some other factors, I basically wanted to find out for myself how a variety of things worked and how scientists knew what they knew.
The first non-fiction books I read after college were history books. I was interested in ancient history, and in particular alternative theories about ancient history. Most of the books I read were essentially pseudoscience in historical form; they presented interesting questions that clashed with traditional history and on the surface seemed quite plausible, but in reality there was little evidence in support of their theories and a great deal against. A common idea was this: historians, archaeologists, and others refuse to consider any evidence that goes against any long-held theory simply because the theory has existed so long, and, after all, they would have to rewrite a lot of textbooks. This certainly can happen in science; every now and again an idea comes along that seems so outrageous and counter-intuitive that it is not even considered until a wealth of evidence supports it. I think I liked these kinds of books because they did sometimes raise good questions that sometimes 'serious' historians simply wouldn't consider, and it originally got me thinking of the question, how do they know what they know? It's important to not dismiss a claim automatically because it sounds outrageous, but it is equally important to not waste time fully investigating claims that have been refuted over and over again and bring no new evidence to the table. I stopped reading some of my favorite 'history' authors when I realized that their claims had been refuted by experts in the fields they were discussing, and instead of replying to the refutation the original author simply pretended that the experts had never replied.
After reading a lot of history I think the first science books I read were about astronomy. I was curious about how much ancient civilizations knew about the motion of the stars and how very little I knew. History books already had me thinking about how people knew what they knew, and I just applied the same thing to science. I think I read A Brief History of Time and then some introductory books on Einstein and relativity after that. I didn't grasp some of the concepts in them but I kept reading anyhow until I felt I got something out of them, and I think they were beneficial to my overall understanding of science topics. From there I read a lot of books on science topics that kept popping up in the news, and like I mentioned earlier, I always got the impression the reporters had very little idea what they were talking about. I read about the Big Bang Theory, stem cell research, psychology, space travel, extraterrestrial life, research methods, evolution, and other topics. Some of the topics I read about I had discussed in college, and some of them I discussed with people that were also science readers. As a part of reading a lot of different books I figured out the importance of finding authors that were knowledgeable about the subject matter and not just speculating. I think a lot of my interest in science stems from wanting to know more about something that is commonly discussed (in the news or our culture) but not really all that well-known by the majority.
Finally, I think I would be fooling myself if I didn't admit that it is simply exciting and entertaining to know something that most people don't know, or to find out that something that seems like common sense isn't true. I wouldn't say that I have an interest in science simply so I can go around correcting people all the time. The fact is, though, that people everywhere are constantly doing this exact thing only without any evidence to back them up. People are constantly telling their friends (and non-friends) that they heard this or that science related thing and the sad thing is the listener will often pass that information on, citing not evidence but their friend that heard it from a friend, etc. With a lot of topics now it is much easier for me to decide the merit of the ideas passed along to me by others. It was really exciting for me when I realized that knowing something about how scientific research is done really has a huge impact on my life. It is now much easier to tell which products at the store do absolutely nothing, which ones can be purchased for a much cheaper price that do the same thing, and what it means when I am told on the news that I am a certain percentage more likely to have this happen if I take this action. The interesting thing to me is, and I don't mean to brag too much here, that if the average person had an interest in science our legal and political systems would be a great deal improved. In our court system it seems to me that often the jurors are confused and argue about ideas that are not really controversial in the scientific community. Politicians on all sides seem to constantly find that correlation is always causation when it supports what they are saying. Wouldn't it be great if the average person could see through such nonsense? A lot of myths about a variety of topics were long ago dispelled by scientific research, and I think that the more I read about them, and about science topics in general, the more I am interested to find out what else I don't know.
Sunday, November 20, 2011
Food Perception & RET
Irving Kirsch (a lecturer at Harvard Medical School and Associate Director of the Program in Placebo Studies at Harvard) developed Response Expectancy Theory, which is based on the idea that what people experience depends partly on what they expect to experience. This process can at least partly explain the placebo effect and hypnosis. The theory is supported by research showing that changing people's expectancies can alter physiological responses, and it has been applied to understanding pain, depression, anxiety disorders, asthma, addictions, psychogenic illnesses, and food hedonics.
Food perception and Response Expectancy Theory
How we perceive taste and flavor can be influenced by suggestions and expectations. Yeomans et al. (2008) looked at expectations about food flavor by using an unusual flavor of ice cream: smoked-salmon ice cream. One group ate the ice cream from a dish labeled “ice cream” and another group ate it from a dish labeled “frozen savory mousse.” The food generated strong dislike when labeled as ice cream, but acceptance when labeled as frozen savory mousse. Labeling the food as ice cream also resulted in stronger ratings of how salty and savory the food was than when it was labeled as a savory food; the individuals who ate the “frozen savory mousse” found it less salty and bitter, and its overall flavor more pleasant. In another study, thirty-nine patrons attending a prix fixe dinner at a university-affiliated restaurant were given a glass of either North Dakota-labeled or California-labeled wine with their meal, and the amount of leftover food and wine was measured. Those whose wine was labeled as from California consumed 12% more of their entrée, and a greater weight of wine and entrée combined, than those served the North Dakota-labeled wine. The researchers concluded that taste expectation influences not only one's taste ratings of accompanying foods but also how much of them one consumes (Wansink et al., 2007).
At a cafeteria in Urbana, Illinois, 175 people were given a free brownie dusted with powdered sugar (Wansink, 2006). They were told the brownie was a new dessert that might be added to the menu, and they were asked how well they liked the flavor and how much they would pay for it. All of the brownies were the same size and had the same ingredients; however, they were served on a china plate, on a paper plate, or on a paper napkin. Those who received the brownie on a china plate said the brownie was excellent. Those eating from the paper plate rated it as good. Those served the brownie on a napkin said it was okay but nothing special.
Individuals eating from the china plate said they would pay $1.27 for the brownie, those eating from the paper plate said 76 cents, and those eating from the napkin said 53 cents. In a classic study by Allison and Uhl (1964), college students who claimed to be “brand loyal” beer drinkers were asked to rate a number of unlabeled beers. Once the labels were removed and the beer was poured into a glass, the “brand loyal” participants did not do very well at picking out their favorite beer. Quite often we taste what we expect to taste, good or bad.
References available upon request
Monday, October 10, 2011
Testing Hypotheses
When testing scientific hypotheses (a hypothesis is a predicted outcome of a study involving a potential relationship between at least two variables), scientists are not attempting to prove their hypotheses but to falsify them. Offering proof for a hypothesis is logically impossible: there are too many alternative possibilities that could explain an outcome, and proving something true would mean showing it is true every time.
Scientists set up hypotheses that they attempt to falsify, or disprove. Two mutually exclusive hypotheses are formed, with the intent of falsifying one while gaining support for the other. The null hypothesis (no relationship, no difference) predicts that when different groups are compared, there will be no difference. The alternative hypothesis (there is a difference) predicts that when groups are compared, there will be a difference. An alternative hypothesis can be one-tailed (directional), predicting the direction of the relationship, or two-tailed, not predicting the direction. In hypothesis testing, we test hypotheses in a research study to determine whether the data support them, and we attempt to falsify the null hypothesis. After forming the hypotheses and choosing a significance level (the criterion for rejecting the null hypothesis), data are collected that either support or do not support the null hypothesis. The significance level, or alpha level, is usually set at .05. This means we reject the null hypothesis when there is less than a 5% chance of observing results at least this extreme if the null hypothesis were true. By default, when we reject the null hypothesis we infer that the alternative hypothesis is correct. The p-value is the probability of obtaining the observed effect, or a larger one, by chance alone if the null hypothesis were true; it is often misstated as the probability that the null hypothesis is true, which it is not. The confidence level can be expressed as 1 minus the significance level: when the significance level is .05, we can say we are 95% confident in the procedure that led to our conclusion.
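As a concrete sketch of this procedure, the snippet below (Python with SciPy; the group measurements are invented for illustration) runs a two-tailed independent-samples t-test and applies the .05 decision rule:

```python
from scipy import stats

# Invented measurements for two hypothetical groups.
group_a = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0, 5.3, 5.7]
group_b = [4.2, 4.8, 4.5, 5.0, 4.1, 4.6, 4.4, 4.9]

alpha = 0.05  # significance level, chosen before looking at the data

# Two-tailed independent-samples t-test (SciPy's default is two-tailed).
t_stat, p_value = stats.ttest_ind(group_a, group_b)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```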
If the null hypothesis is rejected, we can say there is evidence for a relationship. If we fail to reject the null hypothesis, we can say there is no evidence of a relationship. It is important to be cognizant of the wording used in null hypothesis significance testing (NHST): we say the data support or fail to support a hypothesis, rather than prove it. Proof, as pointed out earlier, is a logical impossibility. It is also important to point out that if a hypothesis is unfalsifiable, it is untestable and thus unscientific.
There is always a chance that our inferences are incorrect. When testing the null hypothesis there are four possible outcomes:
Type 1 error- rejecting the null hypothesis when it is true
Correct- rejecting the null hypothesis when it is false
Type 2 error- failing to reject the null hypothesis when it is false
Correct- failing to reject the null hypothesis when it is true
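The Type 1 error rate is exactly what the alpha level controls, and a short simulation makes that concrete. In this sketch the null hypothesis is true by construction (both samples are drawn from the same population), so with alpha = .05 we should falsely reject in roughly 5% of simulated experiments; the critical value 2.048 assumes two samples of 15 and a two-tailed test.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(42)

def t_statistic(a, b):
    """Two-sample t statistic for equal-sized samples."""
    n = len(a)
    se = sqrt((stdev(a) ** 2 + stdev(b) ** 2) / n)
    return (mean(a) - mean(b)) / se

CRITICAL_T = 2.048  # two-tailed critical value for alpha = .05, df = 28
TRIALS = 2000

rejections = 0
for _ in range(TRIALS):
    # The null hypothesis is true: both samples come from N(0, 1).
    a = [random.gauss(0, 1) for _ in range(15)]
    b = [random.gauss(0, 1) for _ in range(15)]
    if abs(t_statistic(a, b)) > CRITICAL_T:
        rejections += 1  # a Type 1 error

print(f"Type 1 error rate: {rejections / TRIALS:.3f}")  # close to 0.05
```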
Saturday, June 18, 2011
Improving Your Cognitive Toolbox
The Edge Question 2011 (suggested by Steven Pinker): WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT?
"The term 'scientific"is to be understood in a broad sense as the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great people in history, or the structure of DNA. A "scientific concept" may come from philosophy, logic, economics, jurisprudence, or other analytic enterprises, as long as it is a rigorous conceptual tool that may be summed up succinctly (or "in a phrase") but has broad application to understanding the world." Read Full entries
One hundred sixty-four individuals answered the question. A few of my favorites:
From Richard Dawkins:
"If all schools taught their pupils how to do a double-blind control experiment, our cognitive toolkits would be improved in the following ways:
1. We would learn not to generalise from anecdotes.
2. We would learn how to assess the likelihood that an apparently important effect might have happened by chance alone.
3. We would learn how extremely difficult it is to eliminate subjective bias, and that subjective bias does not imply dishonesty or venality of any kind. This lesson goes deeper. It has the salutary effect of undermining respect for authority, and respect for personal opinion.
4. We would learn not to be seduced by homeopaths and other quacks and charlatans, who would consequently be put out of business.
5. We would learn critical and sceptical habits of thought more generally, which not only would improve our cognitive toolkit but might save the world."
From Paul Bloom:
"Reason
We are powerfully influenced by irrational processes such as unconscious priming, conformity, groupthink, and self-serving biases. These affect the most trivial aspects of our lives, such as how quickly we walk down a city street, and the most important, such as who we choose to marry. The political and moral realms are particularly vulnerable to such influences. While many of us would like to think that our views on climate change or torture or foreign policy are the result of rational deliberation, we are more affected than we would like to admit by considerations that have nothing to do with reason." more
From PZ Myers:
"I'm going to recommend the mediocrity principle. It's fundamental to science, and it's also one of the most contentious, difficult concepts for many people to grasp — and opposition to the mediocrity principle is one of the major linchpins of religion and creationism and jingoism and failed social policies. There are a lot of cognitive ills that would be neatly wrapped up and easily disposed of if only everyone understood this one simple idea.
The mediocrity principle simply states that you aren't special. The universe does not revolve around you, this planet isn't privileged in any unique way, your country is not the perfect product of divine destiny, your existence isn't the product of directed, intentional fate, and that tuna sandwich you had for lunch was not plotting to give you indigestion. Most of what happens in the world is just a consequence of natural, universal laws — laws that apply everywhere and to everything, with no special exemptions or amplifications for your benefit — given variety by the input of chance."
From Sue Blackmore:
"Correlation is not a cause
The phrase "correlation is not a cause" (CINAC) may be familiar to every scientist but has not found its way into everyday language, even though critical thinking and scientific understanding would improve if more people had this simple reminder in their mental toolkit.
One reason for this lack is that CINAC can be surprisingly difficult to grasp. I learned just how difficult when teaching experimental design to nurses, physiotherapists and other assorted groups. They usually understood my favourite example: imagine you are watching at a railway station. More and more people arrive until the platform is crowded, and then — hey presto — along comes a train. Did the people cause the train to arrive (A causes B)? Did the train cause the people to arrive (B causes A)? No, they both depended on a railway timetable (C caused both A and B)."
My answer to this question: accept the idea that all beliefs, claims, doctrines, and ideas should be subject to critical analysis. Why should some ideas be put under the analytical microscope while others are not? Why should we espouse scientific inquiry in so many important areas of life, yet turn away when scientific evidence refutes our cherished beliefs? Faith-based beliefs, dogma, "they say" claims, over-reliance on experts, and other non-evidence-based claims are dangerous, as they promote the dissemination of contaminated mindware. I recently wrote an article, Identifying and Avoiding Contaminated Mindware, that sheds light on what contaminated mindware is, how it spreads, and how it contributes to irrationality. Lose the idea that some beliefs have a special privilege and are immune to critical analysis, and you will radically improve your cognitive toolbox.
"The term 'scientific"is to be understood in a broad sense as the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great people in history, or the structure of DNA. A "scientific concept" may come from philosophy, logic, economics, jurisprudence, or other analytic enterprises, as long as it is a rigorous conceptual tool that may be summed up succinctly (or "in a phrase") but has broad application to understanding the world." Read Full entries
One hundred and sixty four individuals commented on the question. A few of my favorites:
From Richard Dawkins:
"If all schools taught their pupils how to do a double-blind control experiment, our cognitive toolkits would be improved in the following ways:
1. We would learn not to generalise from anecdotes.
2. We would learn how to assess the likelihood that an apparently important effect might have happened by chance alone.
3. We would learn how extremely difficult it is to eliminate subjective bias, and that subjective bias does not imply dishonesty or venality of any kind. This lesson goes deeper. It has the salutary effect of undermining respect for authority, and respect for personal opinion.
4. We would learn not to be seduced by homeopaths and other quacks and charlatans, who would consequently be put out of business.
5. We would learn critical and sceptical habits of thought more generally, which not only would improve our cognitive toolkit but might save the world." more
From Paul Bloom:
"Reason
We are powerfully influenced by irrational processes such as unconscious priming, conformity, groupthink, and self-serving biases. These affect the most trivial aspects of our lives, such as how quickly we walk down a city street, and the most important, such as who we choose to marry. The political and moral realms are particularly vulnerable to such influences. While many of us would like to think that our views on climate change or torture or foreign policy are the result of rational deliberation, we are more affected than we would like to admit by considerations that have nothing to do with reason." more
From PZ Meyers:
"I'm going to recommend the mediocrity principle. It's fundamental to science, and it's also one of the most contentious, difficult concepts for many people to grasp — and opposition to the mediocrity principle is one of the major linchpins of religion and creationism and jingoism and failed social policies. There are a lot of cognitive ills that would be neatly wrapped up and easily disposed of if only everyone understood this one simple idea.
The mediocrity principle simply states that you aren't special. The universe does not revolve around you, this planet isn't privileged in any unique way, your country is not the perfect product of divine destiny, your existence isn't the product of directed, intentional fate, and that tuna sandwich you had for lunch was not plotting to give you indigestion. Most of what happens in the world is just a consequence of natural, universal laws — laws that apply everywhere and to everything, with no special exemptions or amplifications for your benefit — given variety by the input of chance." more
From Sue Blackmore:
"Correlation is not a cause
The phrase "correlation is not a cause" (CINAC) may be familiar to every scientist but has not found its way into everyday language, even though critical thinking and scientific understanding would improve if more people had this simple reminder in their mental toolkit.
One reason for this lack is that CINAC can be surprisingly difficult to grasp. I learned just how difficult when teaching experimental design to nurses, physiotherapists and other assorted groups. They usually understood my favourite example: imagine you are watching at a railway station. More and more people arrive until the platform is crowded, and then — hey presto — along comes a train. Did the people cause the train to arrive (A causes B)? Did the train cause the people to arrive (B causes A)? No, they both depended on a railway timetable (C caused both A and B)." more
My answer to this question is- accepting the idea that all beliefs, claims, doctrines, and ideas should be subject to critical analysis. Why should some ideas be put under the analytical microscope while others shouldn't? Why should we espouse scientific inquiry in so many important areas of life, yet turn away when scientific evidence refutes our cherished beliefs? Faith based beliefs, dogma, they-say, over-reliance on experts, and other non-evidence based claims are dangerous, as they promote the dissemination of contaminated mindware. I recently wrote an article-Identifying and Avoiding Contaminated Mindware - that sheds light on contaminated mindware, and how it is spread and how it contributes to irrationality. Lose the idea that some beliefs have a special privilege- are immune to critical analysis- and you will radically improve your cognitive toolbox.
Wednesday, June 8, 2011
Let's Talk Psychology
Interview with Psych Central Publisher John Grohol
Dr. Grohol currently publishes the 16-year-old Psych Central (www.psychcentral.com), one of the leading mental health social networks online, offering consumers professionally reviewed mental health information, resources, news, information related to the health sciences, research briefs, the popular World of Psychology blog (and many other popular blogs), social networking tools, and dozens of safe, secure support communities.
Dr. Grohol regularly writes and blogs on Psych Central, reporting on the latest science in mental health and psychology, dissecting bad research, and adding his personal thoughts on the world of psychology.
Psych Central is not the typical pseudoscientific self-help psychology site. Psych Central promotes evidence-based information.
Let’s talk psychology with John Grohol.
Briefly, can you take me through a day in the life of John Grohol?
What is helpful to understand first is that I see my professional role as an important guide and filter to what's going on in the world of mental health and psychology. I do that through reading original research, filtered research (other people's news stories), writing, editing, and publishing. So a lot of what my day consists of are those kinds of activities, often in no particular order or priority once I get through my morning.
Every day starts pretty much the same way, whether it's a weekday or weekend. After a quick morning check of email for any outstanding site issues, I walk through the articles needing review and publication first. Usually this includes the news, which was written and put into our system the night before, and edited overnight. My review includes reading every article we publish, checking grammar and editing the article for clarity and understanding. I may also change or tweak the headline.
After hitting the news, I go to our largest blog, World of Psychology, and publish an entry for the morning there. Then I'll check the news headlines and work on my blog entry for the day. Alternatively, I may look for another blog article to publish from one of our regular contributors if my day is going to be busy with other projects or what-not.
Those projects range from things like getting a new Psych Central Blog or quiz online, to working on a particularly lengthy or in-depth piece that requires doing a fair amount of PsycINFO research and reading. It may be surprising, but to get to the heart of an issue often means digesting and summarizing a great deal of research into something that can be written under 1,200 words. Less is more, and getting to that point sometimes takes a fair amount of work.
Being your own boss also means the day never really officially ends. I regularly check the news throughout the day to ensure we're covering breaking news and research findings too. I want to ensure our readers are always getting up-to-date information and that they can rely on us for that objective, independent reporting.
Psych Central publishes new articles daily. I have often tried to guess how many new articles you publish per day. Approximately, how many new articles are published daily at the site?
I can't give you a specific average, but including everything that gets published on PsychCentral.com, you're probably looking at anywhere from 10 to 20 articles a day. Once you dive into our self-help support communities, however, you're looking at anywhere from 1,200 to 2,000 new posts a day.
Where do you see Psych Central in five years?
Right now, we've hit a milestone of over 2 million international unique visitors. In 5 years, I'd really like to see us breaking the 5 million mark, because that would mean we're reaching more people with our mental health information. And if we're reaching that many people, I've got to believe that stigma will also be reduced and treatment rates will increase.
You have to keep in mind that when I was the first person to publish the symptom criteria for the major mental disorders online on a single website, in 1995, that information simply wasn't available to most consumers. It was previously available largely only to professionals, and I think that kind of transparency did a lot to help break down the barriers of stigma associated with mental health concerns.
In 5 years, I hope we've made even more strides, so talking about your depression or bipolar disorder is as simple and easy as talking about your diabetes or other disease diagnosis. I see Psych Central doing that through reaching people wherever they are -- on their smartphone or iPad -- in whatever stage of treatment they're at.
Other than Psych Central, what are some of your other current projects?
As an entrepreneur, Psych Central is my primary love and project. Everything I do revolves around helping to build Psych Central, to ensure we're doing the best job possible, and to find ways to help get our information in the hands of more people.
So a couple of years ago, I devised the Sanity Score to help people understand mental health issues in a way that didn't invoke mental health and all the baggage of that term. People throw around the phrases, "You're crazy" or "You're insane," so I thought, "Hey, we can test for that." With all of the interactive quizzes we've designed over the years, it seemed natural to pull them all together and create a single, simple mental health-screening tool. So that's what we did, and it has helped get the word out to a different audience than Psych Central reaches.
I'm also very interested in online mental health interventions, like the Australian MoodGYM program (http://moodgym.anu.edu.au/welcome), as well as person-based mental health treatments online, such as e-therapy. People-based stuff doesn't scale very well when it's one-on-one with a trained mental health professional, so something needs to be done to address that problem if we have more and more people seeking mental health treatment.
Last, I want to do more for suicide interventions online. I believe there's a lot of good stuff being done online for people who are suicidal, including more suicide chat services, but the need is so great and so much more could and needs to be done.
Favorite book? Favorite writer?
I like to read, but I prefer fiction to psychology and similar nonfiction books. My reading tastes are, quite frankly, all over the map. I prefer older John Grisham and Stephen King, as well as more classical authors like Flannery O'Connor, Henry James, and Charles Bukowski. I just got done reading The Memory Palace by Mira Bartók, which was okay. I'm also working my way through Sherry Turkle's "Alone Together."
What is the most common psychology myth you encounter on a regular basis? I know it is hard to pick one, but, assuming you can name one, what would it be?
The myth that there's often a simple explanation or set of characteristics that explains someone's behavior. I look at things like the Myers-Briggs Type Indicator personality test, which helps popularize this myth -- that by putting people into one of 16 fairly arbitrary categories, we then better understand that person. It's such a simple but ultimately simplistic, hollow idea.
And that's true of so much of what passes for science today. Health news stories so often confuse a correlational finding with something that has some causal meaning. I find that infuriating, because rarely is it explained in the article and it contributes to the dumbing down of research. And rarely is a single research finding put into any kind of context about what the broader research shows in that area. It's lazy journalism and it's what passes for a lot of news writing today.
But it's a chicken and egg problem. We're becoming a society of lazy information consumers, looking for quick, easy-to-digest pieces of information that fit into our common wisdom schemas (which are often wrong). Which do you think would get more traffic, a news article with the headline, "Scientists Create a Computer with Schizophrenia" or "Schizophrenia simulated on a computer"? Of course computers can't have schizophrenia, but you miss that fact in the first headline, because it's sexier to suggest that a computer can be "given" schizophrenia. But it's just wrong.
Wednesday, May 18, 2011
When Experts are Wrong
by Jamie Hale & Brooke Hale
We often consult with experts for advice. Their judgments and predictions are often accepted without question. After all, they are experts, shouldn’t we take their word?
Clinical vs. Statistical Methods
Experts rely on one of two contrasting approaches to decision making: clinical versus statistical (actuarial) methods. Research shows that the statistical method is superior (Dawes et al., 1989). Clinical methods rely on personal experience and intuition; when making predictions, those using clinical methods claim to be able to use their personal experience to go beyond the group relationships found in research. Statistical methods rely on group (aggregate) trends derived from statistical records. “A simple actuarial prediction is one that predicts the same outcome for all individuals sharing a certain characteristic” (Stanovich, 2007, p. 176). Predictions become more accurate when more group characteristics are taken into account. Actuarial predictions are common in various fields: economics, human resources, criminology, business, marketing, the medical sciences, the military, sociology, horse racing, psychology, and education.
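To make "the same prediction for everyone sharing a characteristic" concrete, here is a toy actuarial rule in Python; the characteristics and base rates are hypothetical, invented only for illustration:

```python
# Hypothetical base rates, invented for this sketch. A real actuarial
# table would be built from empirically established relations.
base_rates = {
    ("prior_offense", "young"): 0.40,
    ("prior_offense", "older"): 0.25,
    ("no_prior", "young"): 0.15,
    ("no_prior", "older"): 0.05,
}

def actuarial_prediction(history: str, age_group: str) -> float:
    """Every individual in the same group gets the same predicted rate;
    no case-by-case intuition is applied."""
    return base_rates[(history, age_group)]

# Two different people with the same characteristics: identical prediction.
print(actuarial_prediction("prior_offense", "young"))  # 0.4
print(actuarial_prediction("prior_offense", "young"))  # 0.4 again
```

Adding the second characteristic (age group) to the table is a small example of what the text means by taking more group characteristics into account.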
It is important to note that clinical judgment does not equate to judgments made only by clinicians. Clinical judgment is used in various fields, basically any field where humans make decisions. It is also important to realize that “[a] clinician in psychiatry or medicine may use the clinical or actuarial method. Conversely, the actuarial method should not be equated with automated decision rules alone. For example, computers can automate clinical judgments. The computer can be programmed to yield the description 'dependency traits' just as the clinical judge would, whenever a certain response appears on a psychological test. To be truly actuarial, interpretations must be both automatic (that is, prespecified or routinized) and based on empirically established relations” (Dawes et al., 1989, p. 1668).
Decades of research investigating clinical versus statistical prediction have shown consistent results- statistical prediction is more accurate than clinical prediction Dawes et al., 1989; Stanovich, 2007; Tetlock, 2005).
While investigating the ability of clinical and statistical variables to predict criminal behavior in 342 sexual offenders, Hall (1988) found that making use of statistical variables was significantly predictive of sexual re-offenses against adults and of nonsexual re-offending. Clinical judgment did not significantly predict re-offenses.
From Predicting Criminal Behavior (Hale, 2011):
In a statistical analysis of 136 studies Grove and Meehl (1996) found that only 8 of those studies favored clincial prediction over statistical prediction. However, none of those 8 studies were replicated (repeated) studies. In the realm of scientific research studies need to be successfully repeated before they are referred to as sufficient evidence.
In regards to the research showing that actuarial prediction is more accurate than clinical Paul Meehl (1986) stated “There is no controversy in social science which shows such a large body of qualitativley diverse studies coming out so uniformly in the same directions as this one” That is, when considering statistical versus clinical, statistical wins hands down. Yet, experts from various domains still claim their “special knowledge” or intuition overrides statistical data derived from research.
The supremacy of statistical prediction
Statistical data is knowledge consisting of cases drawn from research literature, which is often a larger and more representative sample than is available to any expert. Experts are subject to a host of biases when observing, interpreting, analyzing, storing and retrieving events and information. Professionals tend to give weight their personal experience heavily, while assigning less weight to the experience of other professionals or research findings. Consequently, statistical predictions usually weight new data more heavily than clinical predictions.
The human brain is at the disadvantage in computing and weighing in comparison to mechanical computing. Predictions based on statistics are perfectly consistent and reliable, while clinical predictions are not. Experts don’t always agree with each other, or even with themselves when they review the same case the second time around. Even as clinicians acquire experience, the shortcoming of human judgment can help explain why the accuracy of their prediction lacks improvement. (Lilienfield, Lynn, Ruscio, & Beyerstein, 2010).
When a clinician is given information about a client and asked to make a prediction, and the same information is quantified and processed by a statistical equation the statistical equation wins. Even when the clinician has more information in addition to the same information the statistical equation wins. The statistical equation accurately and consistently integrates information according to an optimal criterion. Optimality and consistency supersedes any informational advantage that the clinician gains through informal methods (Stanovich, 2007).
Another type of investigation mentioned in the clinical-actuarial prediction literature discusses giving the clinician predictions from the actuarial prediction, and then asking them to make any necessary changes based on their personal experience with clients. When the clinician makes changes to the actuarial judgments, the adjustments lead to a decrease in the accuracy of the predictions (Dawes, 1994).
A common criticism of the statistical prediction model is that statistics do not apply to single individuals. This line of thinking contradicts basic principles of probability. Consider the following example (Dawes, et al., 1989):
“An advocate of this anti-actuarial position would have to maintain, for the sake of logical consistency, that if one is forced to play Russian roulette a single time and is allowed to select a gun with one or five bullets in the chamber, the uniqueness of the event makes the choice arbitrary.” (p.1672)
The erroneous assumption statistics don’t apply to the single case is often held by compulsive gamblers (Wagenaar, 1988). This faulty sense of prediction often leads them to believe they can accurately predict the next outcome.
“Even as clinicians acquire experience, the shortcomings of human judgment help to explain why the accuracy of their predictions doesn’t improve much, if at all, beyond what they achieved during graduate school” (Stanovich, 2007; Dawes, 1994; Garb, 1999).
Application of statistical methods
Research demonstrating the general superiority of statistical approaches should be calibrated to recognition of its limitations and need for control. Albeit, surpassing clinical methods actuarial procedures are not infallible, often achieving only moderate results. A procedure that proves successful in one setting should be periodically reevaluated within that context and shouldn’t be applied to new settings mindlessly (Dawes, et al., 1989).
In Meehl’s classic book- Clinical versus statistical prediction(1996)- he thoroughly analyzed limitations of actuarial prediction. Paul illustrated a possible limitation by using what became known as the “broken-leg case.” Consider the following:
However, this example does not lend support to the idea that avoiding error in such cases will greatly increase clinicians accuracy as compared with statistical prediction. For a more detailed discussion on this matter refer to Grove, W.M., & Lloyd, M., 2006.
From Clinical versus actuarial judgment (Dawes, et al., 1989):
The use of clinical prediction relies on authority whose assessments-precisely because these judgments are claimed to be singular and idiosyncratic-are not subject to public criticism. Thus, clinical predictions cannot be scrutinized and evaluated at the same level as statistical predictions. (Stanovich, K., 2007)
Conclusion
The intent of this article is not to imply that experts are not important or do not have a role in predicting outcomes. Expert advice and information is useful in observation, gathering data and sometimes making predictions (when predictions are commensurate with available evidence). However, once relevant variables have been determined and we want to use them to make decisions, “measuring them and using a statistical equation to determine the predictions constitute the best procedure.” (Stanovich, 2007, p.181)
The problem is not so much in experts making decisions (that’s what they are supposed to do), but in experts making decisions that run counter to actuarial predictions.
Decades of research indicate statistical prediction is superior to clinical prediction. Statistical data should never be overlooked when making decisions (assuming there is statistical data in the area of interest- sometimes there is not).
I will leave you with these words (Meehl, 2003):
References
Dawes, R., Faust, D., & Meehl, P. (1989). Science, New series, Vol. 243, 4899, 1668-1674.
Dawes, R. (1994). House of Cards: psychology and psychotherapy built on myth. New York: Free Press.
Dawes, R. (1996). House of Cards: psychology and psychotherapy built on myth. Simon and Schuster.
Garb, H.N. (1998). Studying the Clinician: Judgment research and psychological assessment. Washingotn, DC: American Psychological Association.
Grove, W.M., & Meehl, P. (1996). Comparatvie efficiencey of informal and formal prediction procedures: The clinical-statisical controversy. Psychology, Public Policy and Law, 2, 293-323.
Grove, W.M., & Lloyd, M. (2006). Meehl’s Contribution to Clinical Versus Statistical Prediction. Journal of Abnormal Psychology, Vol. 115, No. 2, 192–194.
Hale, B. (2011). Predicting Criminal Behavior. College term paper.
Hall, G.C. Nagayama. (1988). Criminal Behavior as a Function of Clinical and Actuarial Variables in a Sexual Offender. Journal of Consulting and Clinical Psychology, v56 n5 (1988): 773-775.
Lilienfeld, S., Lynn, S. J., Ruscio, J., & Beyerstein, B.L. (2010). Great Myths of Popular Psychology: Shattering Widespread Misconceptons about Human Behaivor. Malden, MA: Wiley-Blackwell.
Meehl, P.E. (1986). Causes and effects of my disturbing little book. Journal of Personality Assessment, 50, 370-375.
Meehl, P. E. (1996). Clinical versus statistical prediction: A theoretical
analysis and a review of the evidence. Northvale, NJ: Jason Aronson. (Original work published 1954)
Meehl, P.E. (2003). Clinical versus statistical prediction: A theoretical
analysis and a review of the evidence. Copyright 2003 Leslie J. Yonce. (Copyright 1954 University of Minnesota)
Stanovich, K. (2007). How to Think Straight About Psychology. 8th Edition. Boston, MA: Pearson.
Tetlock, P.E. (2005). Expert Political Judgment. Princeton, NJ: Princeton University Press.
Wagenaar, W.A. (1988). Paradoxes of Gambling Behavior. Hove, England: Erlbaum.
Copyright 2011 Jamie Hale
We often consult with experts for advice. Their judgments and predictions are often accepted without question. After all, they are experts, shouldn’t we take their word?
Clinical vs. Statistical Methods
Experts rely on one of two contrasting approaches to decision making- clinical versus statistical (actuarial) methods. Research shows that the statistical method is superior (Dawes et al., 1989). Clinical methods rely on personal experience and intuition. When making predictions, those using clinical methods claim to be able to use their personal experience to go beyond the group relationships found in research. Statistical methods rely on group (aggregate) trends derived from statistical records. "A simple actuarial prediction is one that predicts the same outcome for all individuals sharing a certain characteristic" (Stanovich, 2007, p.176). Predictions become more accurate when more group characteristics are taken into account. Actuarial predictions are common in various fields- economics, human resources, criminology, business, marketing, the medical sciences, the military, sociology, horse racing, psychology, and education.
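To make the distinction concrete, here is a minimal sketch of a simple actuarial rule. The predictors and weights are hypothetical illustrations, not taken from any real risk-assessment instrument; the point is only that every individual sharing the same characteristics receives the same prediction, and that adding characteristics refines it.

```python
# Minimal sketch of an actuarial prediction rule.
# All predictors and weights are hypothetical, for illustration only;
# a real instrument derives them from statistical records on large samples.

def actuarial_risk(prior_offenses: int, age_at_first_offense: int) -> float:
    """Predict re-offense probability from group-level characteristics.

    The rule is actuarial because it is prespecified: every individual
    with the same characteristics gets the same prediction.
    """
    score = 0.15 * prior_offenses - 0.02 * (age_at_first_offense - 18)
    return max(0.0, min(1.0, 0.2 + score))  # clamp to a valid probability

# Two offenders with identical characteristics receive identical predictions.
print(actuarial_risk(prior_offenses=3, age_at_first_offense=17))  # 0.67
print(actuarial_risk(prior_offenses=3, age_at_first_offense=17))  # 0.67
```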
It is important to note that clinical judgment does not mean judgments made only by clinicians. Clinical judgment is used in various fields- basically any field where humans make decisions. It is also important to realize that "[a] clinician in psychiatry or medicine may use the clinical or actuarial method. Conversely, the actuarial method should not be equated with automated decision rules alone. For example, computers can automate clinical judgments. The computer can be programmed to yield the description 'dependency traits' just as the clinical judge would, whenever a certain response appears on a psychological test. To be truly actuarial, interpretations must be both automatic (that is, prespecified or routinized) and based on empirically established relations" (Dawes, et al., 1989, p.1668).
Decades of research investigating clinical versus statistical prediction have shown consistent results- statistical prediction is more accurate than clinical prediction (Dawes et al., 1989; Stanovich, 2007; Tetlock, 2005).
While investigating the ability of clinical and statistical variables to predict criminal behavior in 342 sexual offenders, Hall (1988) found that making use of statistical variables was significantly predictive of sexual re-offenses against adults and of nonsexual re-offending. Clinical judgment did not significantly predict re-offenses.
From Predicting Criminal Behavior (Hale, 2011):
Within the field of dangerousness risk assessment (as it applies to violent offenders), it has been recommended that clinical assessments be replaced by actuarial assessments. In a 1999 book from the American Psychological Association, Violent Offenders: Appraising and Managing Risk, Quinsey, Harris, Rice and Cormier argued explicitly and strongly for the "complete replacement" of clinical assessments of dangerousness with actuarial methods: "What we are advising is not the addition of actuarial methods to existing practice, but rather the complete replacement of existing practice with actuarial methods" (p. 171).
When considering the accuracy of clinical versus statistical methods in predicting criminal recidivism, it is quite clear that statistical predictions are superior to clinical predictions. "The studies show that judgments about who is more likely to repeat are much better on an actuarial basis than a clinical one," says Robyn Dawes (Dawes, 1996).
In a statistical analysis of 136 studies, Grove and Meehl (1996) found that only 8 of those studies favored clinical prediction over statistical prediction. However, none of those 8 studies were replicated. In scientific research, studies need to be successfully replicated before they are considered sufficient evidence.
Regarding the research showing that actuarial prediction is more accurate than clinical prediction, Paul Meehl (1986) stated, "There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one." That is, when considering statistical versus clinical, statistical wins hands down. Yet experts from various domains still claim their "special knowledge" or intuition overrides statistical data derived from research.
The supremacy of statistical prediction
Statistical data consist of cases drawn from the research literature, which is often a larger and more representative sample than is available to any single expert. Experts are subject to a host of biases when observing, interpreting, analyzing, storing and retrieving events and information. Professionals tend to weight their personal experience heavily, while assigning less weight to the experience of other professionals or to research findings. Consequently, statistical predictions usually weight new data more heavily than clinical predictions.
The human brain is at a disadvantage in computing and weighing information in comparison to mechanical computation. Predictions based on statistics are perfectly consistent and reliable, while clinical predictions are not. Experts don't always agree with each other, or even with themselves when they review the same case a second time. Even as clinicians acquire experience, the shortcomings of human judgment help explain why the accuracy of their predictions does not improve (Lilienfeld, Lynn, Ruscio, & Beyerstein, 2010).
When a clinician is given information about a client and asked to make a prediction, and the same information is quantified and processed by a statistical equation, the statistical equation wins. Even when the clinician has additional information beyond what the equation receives, the statistical equation wins. The statistical equation accurately and consistently integrates information according to an optimal criterion. Optimality and consistency supersede any informational advantage the clinician gains through informal methods (Stanovich, 2007).
Another type of investigation in the clinical-actuarial prediction literature gives the clinician the actuarial predictions and then asks them to make any changes they deem necessary based on their personal experience with clients. When the clinician makes changes to the actuarial judgments, the adjustments lead to a decrease in the accuracy of the predictions (Dawes, 1994).
A common criticism of the statistical prediction model is that statistics do not apply to single individuals. This line of thinking contradicts basic principles of probability. Consider the following example (Dawes, et al., 1989):
“An advocate of this anti-actuarial position would have to maintain, for the sake of logical consistency, that if one is forced to play Russian roulette a single time and is allowed to select a gun with one or five bullets in the chamber, the uniqueness of the event makes the choice arbitrary.” (p.1672)
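A quick calculation makes the point of the example explicit. Even for a single, unrepeatable event, the two choices carry very different probabilities:

```python
# Russian roulette, one pull of the trigger, six-chamber revolver.
p_death_one_bullet = 1 / 6    # about 0.17
p_death_five_bullets = 5 / 6  # about 0.83

print(round(p_death_one_bullet, 2), round(p_death_five_bullets, 2))
# The event is unique, yet the choice is anything but arbitrary:
# the one-bullet gun is five times safer.
```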
The erroneous assumption that statistics don't apply to the single case is common among compulsive gamblers (Wagenaar, 1988). This faulty sense of prediction often leads them to believe they can accurately predict the next outcome.
“Even as clinicians acquire experience, the shortcomings of human judgment help to explain why the accuracy of their predictions doesn’t improve much, if at all, beyond what they achieved during graduate school” (Stanovich, 2007; Dawes, 1994; Garb, 1998).
Application of statistical methods
Research demonstrating the general superiority of statistical approaches should be balanced against recognition of their limitations and the need for oversight. Although they surpass clinical methods, actuarial procedures are not infallible, often achieving only moderate accuracy. A procedure that proves successful in one setting should be periodically reevaluated within that context and should not be applied to new settings mindlessly (Dawes, et al., 1989).
In Meehl's classic book, Clinical versus Statistical Prediction (1996; originally published 1954), he thoroughly analyzed the limitations of actuarial prediction. Meehl illustrated one possible limitation using what became known as the "broken-leg case." Consider the following:
We have observed that Professor A quite regularly goes to the movies on Tuesday nights. Our actuarial data support the inference "If it's a Tuesday night, then Pr {Professor A goes to movies} = .9." However, suppose we learn that Professor A broke his leg Tuesday morning; he's in a hip cast that won't fit in a theater seat. Any neurologically intact clinician will not say that Pr {goes to movies} = .9; they'll predict that he won't go. This is a "special power of the clinician" that cannot, in principle, be completely duplicated by even the most sophisticated computer program. That's because there are too many distinct, unanticipated factors affecting Professor A's behavior; the researcher cannot gather good actuarial data on all of them so the program can take them into account (Grove & Lloyd, 2006).
However, this example does not support the idea that avoiding error in such cases will greatly increase clinicians' accuracy compared with statistical prediction. For a more detailed discussion of this matter, refer to Grove and Lloyd (2006).
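For illustration, here is a minimal sketch of the broken-leg logic, with hypothetical numbers. The actuarial rule can use only the variables recorded in its table, while the clinician can act on a rare, decisive fact the table never anticipated:

```python
# Hypothetical sketch of the "broken-leg case"; all numbers are illustrative.

def actuarial_movie_prediction(is_tuesday: bool) -> float:
    # Base rate estimated from statistical records of past Tuesdays.
    return 0.9 if is_tuesday else 0.1

def clinical_override(p_actuarial: float, broken_leg: bool) -> float:
    # The clinician can use a decisive fact the table never recorded.
    return 0.0 if broken_leg else p_actuarial

print(actuarial_movie_prediction(is_tuesday=True))   # 0.9
print(clinical_override(0.9, broken_leg=True))       # 0.0
```

Such cases are real but rare, which is one reason allowing routine clinical overrides tends to lower overall accuracy, as noted above.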
From Clinical versus actuarial judgment (Dawes, et al., 1989):
When actuarial methods prove more accurate than clinical judgment the benefits to individuals and society are apparent…Even when actuarial methods merely equal the accuracy of clinical methods, they may save considerable time and expense. For example, each year millions of dollars and many hours of clinicians’ valuable time are spent attempting to predict violent behavior. Actuarial prediction of violence is far less expensive and would free time for more productive activities, such as meeting unfulfilled therapeutic needs.
Actuarial methods are explicit, in contrast to clinical judgment, which rests on mental processes that are often difficult to specify. Explicit procedures facilitate informed criticism and are freely available to other members of the scientific community who might wish to replicate or extend research.
The use of clinical prediction relies on authority, whose assessments- precisely because these judgments are claimed to be singular and idiosyncratic- are not subject to public criticism. Thus, clinical predictions cannot be scrutinized and evaluated at the same level as statistical predictions (Stanovich, 2007).
Conclusion
The intent of this article is not to imply that experts are unimportant or have no role in predicting outcomes. Expert advice and information are useful in observation, in gathering data, and sometimes in making predictions (when those predictions are commensurate with available evidence). However, once relevant variables have been determined and we want to use them to make decisions, "measuring them and using a statistical equation to determine the predictions constitute the best procedure" (Stanovich, 2007, p.181).
The problem is not so much in experts making decisions (that’s what they are supposed to do), but in experts making decisions that run counter to actuarial predictions.
Decades of research indicate that statistical prediction is superior to clinical prediction. Statistical data should never be overlooked when making decisions (assuming statistical data exist in the area of interest- sometimes they do not).
I will leave you with these words (Meehl, 2003):
If a clinician says “This one is different” or “It’s not like the ones in your table,” “This time I’m surer,” the obvious question is, “Why should we care whether you think this one is different or whether you are surer?” Again, there is only one rational reply to such a question. We have now to study the success frequency of the clinician’s guesses when he asserts that he feels this way. If we have already done so, and found him still behind the hit frequency of the table, we would be well advised to ignore him. Always, we might as well face it, the shadow of the statistician hovers in the background; always the actuary will have the final word (p.138).
References
Dawes, R., Faust, D., & Meehl, P. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668-1674.
Dawes, R. (1994). House of Cards: Psychology and Psychotherapy Built on Myth. New York: Free Press.
Dawes, R. (1996). House of Cards: Psychology and Psychotherapy Built on Myth. New York: Simon and Schuster.
Garb, H.N. (1998). Studying the Clinician: Judgment Research and Psychological Assessment. Washington, DC: American Psychological Association.
Grove, W.M., & Meehl, P. (1996). Comparative efficiency of informal and formal prediction procedures: The clinical-statistical controversy. Psychology, Public Policy, and Law, 2, 293-323.
Grove, W.M., & Lloyd, M. (2006). Meehl’s Contribution to Clinical Versus Statistical Prediction. Journal of Abnormal Psychology, Vol. 115, No. 2, 192–194.
Hale, B. (2011). Predicting Criminal Behavior. College term paper.
Hall, G.C. Nagayama. (1988). Criminal behavior as a function of clinical and actuarial variables in a sexual offender. Journal of Consulting and Clinical Psychology, 56(5), 773-775.
Lilienfeld, S., Lynn, S.J., Ruscio, J., & Beyerstein, B.L. (2010). Great Myths of Popular Psychology: Shattering Widespread Misconceptions about Human Behavior. Malden, MA: Wiley-Blackwell.
Meehl, P.E. (1986). Causes and effects of my disturbing little book. Journal of Personality Assessment, 50, 370-375.
Meehl, P.E. (1996). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Northvale, NJ: Jason Aronson. (Original work published 1954)
Meehl, P.E. (2003). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Copyright 2003 Leslie J. Yonce. (Copyright 1954 University of Minnesota)
Stanovich, K. (2007). How to Think Straight About Psychology. 8th Edition. Boston, MA: Pearson.
Tetlock, P.E. (2005). Expert Political Judgment. Princeton, NJ: Princeton University Press.
Wagenaar, W.A. (1988). Paradoxes of Gambling Behavior. Hove, England: Erlbaum.
Copyright 2011 Jamie Hale
Wednesday, May 4, 2011
Everyday Illusions
I just got the word from Chris Chabris that the paperback edition of The Invisible Gorilla is scheduled for release on June 7th.
The Invisible Gorilla draws on a wide variety of stories and counterintuitive scientific findings to reveal that our minds don't work the way we think they do. We think we know our own minds, but this isn't so. Chabris and Simons combine the work of other researchers with their own findings on attention, perception, memory, and reasoning to reveal how faulty intuitions often lead us astray.
Often we think we experience and understand the world as it is, but our perceptions are frequently nothing more than illusions.
The Invisible Gorilla provides detailed explanations of why people experience these everyday illusions and what we can do to protect ourselves against their effects. The ultimate goal of the book is to help you notice the invisible gorillas in your own life.
Everyday Illusions
The book provides a detailed discussion of six everyday illusions:
illusions of attention
illusions of memory
illusions of confidence
illusions of knowledge
illusions of cause
illusions of potential
These illusions are referred to as everyday illusions because they influence our lives on a daily basis.
The official website of The Invisible Gorilla
Simons and Chabris videos
Coming soon! An interview with Simons and Chabris
Wednesday, February 16, 2011
How We Think and Do Not Think About Food: Behavioral & Cognitive Nutrition
In an effort to understand the complexity of nutrition, eating behaviors and the role of food in society, it is important to draw on information from various fields- exercise science, nutrition, biology, chemistry, psychology, marketing, economics, sociology, cognitive science and so on. In the past I have written extensively about nutrition, and most of my references have been to the fields of exercise science, nutrition, chemistry, and biology. In the new book I am co-authoring- How We Think and Do Not Think About Food: Behavioral and Cognitive Nutrition (tentative title)- research from the other fields mentioned in the opening sentence will be discussed as well.
How We Think and Do Not Think About Food will provide little to no coverage of the following:
The calorie theory
Macronutrient composition
High Glycemic vs Low Glycemic Diets
Organic vs conventional foods (however this subject will be discussed as it relates to ideational motives)
Low carb diet myths
High carb myths
Meal frequency
and many, many other myths that I have already addressed in Knowledge and Nonsense
Excerpts from How We Think and Do Not Think About Food
Suppositions that genetic change is responsible for the increase in obesity over the past three decades are implausible, given the lack of evidence for mutations over this short period of time. What has changed drastically, however, is the environment in which we now live (Cohen, 2008).
Food advertising is not new, but greater sophistication in marketing- including the development of branding, expanded use of vending machines and other mechanisms for self-service, technologies like eye movement tracking, and the application of social psychology- is widely used to increase impulse buying and sales of high-calorie indulgent foods. Eating decisions are often made unconsciously.
Food variety, obtained by adding condiments, can increase food intake in the short term. The mechanism by which food consumption increases after the addition of condiments is at least partly related to the attenuation of sensory-specific satiety for a given food (Brondel et al., 2009). Sensory-specific satiety is the decrease in pleasure when consuming a specific food, and the consequent renewal of pleasure when switching to a different food or flavor. The senses become sated when continually exposed to the same stimulus: as you eat more of a specific food, it becomes less pleasant. The more dissimilar foods' sensory characteristics (taste, flavor, color, texture, shape, temperature), the longer it takes to reach sensory-specific satiety.
Having a variety of foods presented in succession during a meal enhances intake, and the more different the foods are, the greater the enhancement is likely to be (Rolls, 1981). However, if the sensory characteristics of the foods presented in a meal are too similar, increased consumption may not occur.
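As a rough illustration only, sensory-specific satiety can be sketched as pleasure that declines with each successive serving of the same food; all numbers below are hypothetical.

```python
# Toy model of sensory-specific satiety; the decay rate is hypothetical.
DECAY = 0.7  # per-serving decline in pleasure for the same food

def pleasure(servings_already_eaten: int, initial: float = 10.0) -> float:
    return initial * DECAY ** servings_already_eaten

for n in range(4):
    print(n, round(pleasure(n), 1))  # 10.0, 7.0, 4.9, 3.4
# Switching to a sensorially different food starts near its own initial
# pleasure, which is one reason variety within a meal increases intake.
```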
Food is often consumed when not hungry. When and how often we eat is determined by a myriad of factors.
Gustation- sense of taste
The four basic tastes are salty, sour, bitter and sweet (Wolfe et al., 2006). However, some sources list a fifth basic taste- umami (Beauchamp & Mennella, 2009).
"[C]omponents of flavor, detected by the olfactory system, are strongly influenced by early exposure and learning beginning in utero and continuing during early milk (breast milk or formula) feedings. These experiences set the stage for later food choices and are important in establishing life-long food habits" (Beauchamp & Mennella, 2009). The foods we like are shaped by both learning and innate factors.
The pleasure or displeasure associated with tastes is seen in infants. With no experience, infants like sweet and dislike bitter and sour. Some of the most impressive work concerning hardwired taste preferences comes from Jacob Steiner (Steiner, 1973)
A chemist named A.L. Fox discovered that we do not all experience taste in the same manner. While he was synthesizing the compound phenylthiocarbamide, some of it spilled and blew into the air. One of Fox's colleagues noticed a bitter taste, while Fox tasted nothing. With further testing, some of Fox's other colleagues could not taste the compound, but most found it bitter (Fox, 1931).
Flavor is a combination of true taste and smell. Flavor and taste are not synonymous.
Natural preferences for sweet-tasting compounds change developmentally (infants and children have stronger preferences than adults) and can be modified by experience (Cowart et al., 2004). Bitter-tasting substances are innately disliked, probably because most bitter compounds are toxic: plants evolved systems to protect themselves from being eaten, and plant-eating organisms evolved sensory systems to avoid being poisoned (Glendinning, 1994; Beauchamp, 2009).
Marketers have capitalized on the tendency of humans to be physical misers (expending minimal physical energy) by developing products that make eating quick and easy, including packaging that allows people to eat on the run, eat in their cars, and eat fast (Morrison, 2007).
Research has shown that images, sounds, smells, and lighting, affect eating behaviors.
Are fast food restaurants conspiring to make society obese? No. They are conspiring to sell food and make huge profits. If humans preferred eating fruits and vegetables to eating burgers and fries, fast food restaurants would sell fruits and vegetables.
Environmental cues to aid in eating less
Use smaller plates and dishes. Use tall skinny glasses.
Eat at the table, and avoid eating in the TV room
Minimize eating from a package
Keep tempting high-calorie foods out of sight; don't leave them on the counter in plain view
Avoid eating too many different foods in one meal. If you like variety in meals, include a variety of nutrient-dense, low-calorie foods that are similar in sensory characteristics
Those are just a few of the many topics that will be explored. Stay tuned for further updates. Any other suggestions for a book title are appreciated.
Wednesday, January 26, 2011
What Intelligence IS and IS NOT
Intelligence, as defined by narrow theories- the mental abilities measured by IQ tests and their proxies (the SAT, etc.)- does not provide a comprehensive assessment of cognitive skills. These theories provide a scientific concept of intelligence generally symbolized as g, or "in some cases where the fluid/crystallized theory is adopted, fluid intelligence (Gf) and crystallized intelligence (Gc)" (Stanovich, 2009, p. 13). Fluid intelligence reflects reasoning abilities (and to a degree processing speed) across a variety of domains, particularly novel ones. Crystallized intelligence reflects declarative knowledge acquired through acculturated learning- general knowledge, vocabulary, verbal comprehension, etc. The mental abilities assessed by intelligence tests are important, but a variety of other important mental abilities are missed by them.
It is important to point out that the research I have reviewed, and the research I propose, does not suggest there are multiple intelligences or that intelligence is unimportant. Critics of intelligence routinely point out that intelligence does not encompass many domains of important psychological functioning. "However, these standard critiques of intelligence tests often contain the unstated assumption that although intelligence tests miss certain key noncognitive areas, they encompass most of what is important cognitively" (Stanovich, 2009, p. 5). That assumption has been thoroughly refuted: intelligence tests fail to assess many important cognitive skills, and they are radically incomplete measurements of good thinking. It is also commonplace for critics, writers and the lay public to suggest that intelligence has nothing to do with real life- that it's not important. Decades of research have shown otherwise- intelligence tests do measure important cognitive skills. "[S]cientific evidence does converge on the conclusion that MAMBIT [mental abilities measured by intelligence tests] picks out a class of mental operations of considerable importance. The problem is just that folk psychology values those mental operations- and the tests used to measure them- too much" (Stanovich, 2009, p.54).
Cognitive abilities assessed on intelligence tests are not about:
- personal goals and their regulation
- tendency to change beliefs when faced with contrary evidence
- argument & evidence evaluation
Intelligence tests do not measure important thinking dispositions, such as: openness to experience, belief perseverance, level of confirmation bias, reliance on intuition, impulsiveness, myside bias, one-sided bias, need for cognition, need for closure, alternative hypothesis testing, thought flexibility, fully disjunctive reasoning etc.
In short, the cognitive abilities assessed on intelligence tests are not measurements of rationality, but measurements of algorithmic-level cognitive capacity. Good thinking is more than just intelligence.
References
Stanovich, K. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven, CT: Yale University Press.
Wednesday, January 5, 2011
Science @ Psych Central
Psych Central provides readers with a wide array of content from various writers. The site is updated numerous times daily and produces informative, quality scientific information (emphasizing the health sciences). The site is easy to navigate and well organized, and it presents scientific information in a way that is relatively easy for the layman to comprehend, while remaining useful for readers with moderate and advanced levels of scientific knowledge. Psych Central is different from many of the popular psych sites: it promotes REAL PSYCHOLOGICAL INFORMATION- PSYCHOLOGICAL SCIENCE.
I am a regular contributor to the site. Some of my recent articles include:
Testimonials Aren’t Real Evidence
Does GRE measure anything related to Grad School?
Why Intelligent People Do Foolish things?
Here is a short list of some excellent articles featured at Psych Central:
Is Science Dead? In a Word: No
Why doctors oversell benefits undersell risks and side effects
2011: The Power of Positive Thinking
Five relationship benefits in knowing how your brain works
There are many, many other excellent articles at Psych Central.