Jamie Hale

Sunday, October 31, 2021

I am Not Biased- You Are!?

You are biased and so is everyone else. "I am not biased," says the uninformed consumer, researcher, policy maker, minister, spiritual guru, coach, therapist, and so on.

Myside bias is the tendency for people to evaluate evidence, generate evidence, and test hypotheses in a manner biased toward their own opinions. When myside bias is at work, the weight of the evidence does not determine decisions or beliefs. It is reasonable to suggest that everyone is influenced by bias at some level, both conscious and unconscious (below awareness).

Bias Blind Spot

Research involving the assessment of one's own biases indicates people often feel that they are less biased than others. Bias blind spot is conceptualized as a tendency to recognize bias in others while not recognizing bias in ourselves (Pronin et al., 2002). Emily Pronin and colleagues conducted a study that asked participants to rate themselves and others on their susceptibility to a variety of biases. The results indicated that, across eight biases, people felt they were less biased than their peers. In summary, people acknowledge the value of scientific findings on biased processing, but they don't believe those findings apply to them.

A key factor involved with bias blind spot is placing too much emphasis on introspective evidence (monitoring one's own conscious processes), despite the tendency for biases to occur unconsciously (below our awareness). Another factor driving bias blind spot is the tendency for people to assume their perceptions directly reflect reality (naive realism), and that those who don't agree are biased. Indeed, "People's tendency to deny their own bias, even while recognizing bias in others, reveals a profound shortcoming in self-awareness, with important consequences for interpersonal and intergroup conflict" (Pronin, 2007). Full article

Intelligence & Myside Processing

Toplak & Stanovich (2003) presented 112 undergraduate university students with an informal reasoning test in which they were asked to generate arguments both for and against the position they endorsed on three separate issues. Performance on the task was evaluated by comparing the number of arguments generated that endorsed their own position on an issue (myside arguments) with the number that refuted it (otherside arguments). Participants generated more myside arguments than otherside arguments on all three issues, thus consistently showing a myside bias effect on each issue. Differences in cognitive ability were not associated with individual differences in myside bias. However, year in university was a significant predictor of myside bias: the degree of myside bias decreased systematically with year in university. Year in university remained a significant predictor of myside bias even when both cognitive ability and age were statistically partialled out.
Myside bias was displayed on all three issues, but the level of myside bias shown was not correlated across the different issues. Read more

Proxies of Intelligence Do Not Predict Avoidance of Myside Bias

In Experiment 1, the researchers concluded, there was "no evidence at all that myside bias effects are smaller for students of higher cognitive ability" (p. 140). The main purpose of Experiment 2 was to investigate the association of cognitive abilities with myside and one-side bias. "The results... were quite clear cut. SAT total scores displayed a nonsignificant .03 correlation with the degree of myside bias and a correlation of .09 with the degree of one-side bias (onebias1), which just missed significance on a two-tailed test but in any case was in the unexpected direction" (p. 147). It was also revealed that stronger beliefs usually imply heavier myside bias. In Experiment 3, "the degree of myside bias was uncorrelated with SAT scores", and "[t]he degree of one-side bias was uncorrelated with SAT scores" (p. 156). Myside bias was weakly correlated with thinking dispositions. One-side bias showed no correlation with thinking dispositions. From In Evidence We Trust, 2nd Edition

Sunday, October 10, 2021

Measures in Science

An instrument can provide accuracy and precision but lack value if the measurement is not valid. When determining the validity of a measurement, one must ask: does the measurement really measure the concept in question?

The key aspects concerning the quality of scientific measures are reliability and validity (Hale, 2011). Reliability is a measure of the internal consistency and stability of a measuring device. Validity gives us an indication of whether the measuring device measures what it claims to.

Internal consistency is the degree to which the items or questions on a measure consistently assess the same construct. With an internally consistent measure, items are positively correlated with each other. This aspect of reliability is particularly important for self-report measures; it is less important for performance-based measures, tests, or surveys. Each question should be aimed at measuring the same thing. Stability is often measured by test-retest reliability: the same person takes the same test twice, and the scores from the two administrations are compared. Interrater reliability is sometimes used in assessing reliability. With interrater reliability, different judges or raters (two or more) make observations, record their findings, and then compare their observations. If the raters are reliable, the percentage of agreement should be high.

When asking if a measure is valid, we are asking if it measures what it is supposed to. Validity is a judgment based on collected data; it is not a statistical test. Two primary ways to determine validity are existing measures and known group differences.

The existing-measures test determines whether the new measure correlates with existing, relevant, valid measures. The new measure should produce results similar to those obtained with already-established valid measuring devices. The known-group-differences test determines whether the new measure distinguishes between groups known to differ. An illustration of known group differences is seen when different groups are given the same measure and are expected to score differently. As an example, if you were to give Democrats and Republicans a test assessing the strength of certain political views, you would expect them to score differently. Various sub-categories of validity (external, internal, statistical, and construct) are also important in some contexts. Rating validity is not an entirely objective exercise; in fact, it is relatively subjective in some areas. No measure is perfectly valid.

It is possible to have a reliable but not valid measure. However, a valid measure is always a reliable measure.

Often, when unsystematic (non-scientific) approaches to knowledge are used, measures are neither reliable nor valid. That is, they do not measure the trait or characteristic of interest consistently, nor do they measure what they are intended to measure. Quality scientific approaches generally make great efforts to ensure reliability and validity.

What about Replication in Science??

Replicable (reproducible) findings are important to science; they are a sub-component of converging evidence. When referring to the replication crisis, it is important to understand what is meant: a failure to replicate statistically significant findings. It would be more precise to say there is a "statistically significant replication crisis." Consider replication from another perspective: the original study failed to detect statistical significance (using the null hypothesis significance testing criteria prevalent in frequentist statistics), but additional studies do detect statistical significance. What would the implications be? College instructors should make an effort to address this scenario, in which non-significant findings precede significant ones. Students are often advised that there is no need to try to replicate non-significant findings, but that significant findings should be replicated. This implies that the non-significant findings must be accurate (if they occurred first), even though all studies are susceptible to flaws. Read more

Learn more about the need for science, rationality and statistics  - In Evidence We Trust