Jamie Hale

Wednesday, August 29, 2018

Reconceptualizing Science


Reconceptualizing science involves thinking about science and relevant concepts differently. It involves concept change, or at least concept modification. There may be some disagreement about the point at which change produces a new concept, or about how much a concept needs to change in order to be considered a new concept rather than a modified one. Concepts change and are modified over time. Water, earth and air were once classified as elements; now they are known to be combinations of elements rather than elements themselves. Science is replete with examples of concept change. Scientific information is tentative; it changes according to evidence. The change, ideally, is congruent with converging evidence and demonstrates a high level of explanatory coherence. Science, its conceptualizations and operationalizations, is concerned with good epistemic values. Good epistemic values, as described by Paul Thagard (2012), are those of evidential quality: knowledge values that are in line with logic and evidence (essentially characteristics of epistemic rationality).

Defining concept: There is a large range of definitions for the term concept. Concepts have been defined as abstract entities, as replicas of sense impressions, as mental representations of units or categories, as distributed neural representations, and so on. Terminological confusion can lead to misconception; of course, this isn't unique to the term concept. Three of the main interpretations of concepts, as studied in cognitive science, are those of prototypes, exemplars and explanatory theories. Another interpretation, the semantic pointer, was put forth originally by Chris Eliasmith. "A semantic pointer is a kind of neural representation whose nature and function is highly compatible with what is currently known about how brains process information" (Thagard, 2012, p. 304). Thagard asserts that the exemplar, prototype and explanatory elements of concepts can be understood in terms of semantic pointers. More on semantic pointers:

Abstract (Blouw, et al., 2015)
"The reconciliation of theories of concepts based on prototypes, exemplars, and theory-like
structures is a longstanding problem in cognitive science. In response to this problem, researchers have recently tended to adopt either hybrid theories that combine various kinds of representational structure, or eliminative theories that replace concepts with a more finely grained taxonomy of mental representations. In this paper, we describe an alternative approach involving a single class of mental representations called “semantic pointers.” Semantic pointers are symbol-like representations that result from the compression and recursive binding of perceptual, lexical, and motor representations, effectively integrating traditional connectionist and symbolic approaches. We present a computational model using semantic pointers that replicates experimental data from categorization studies involving each prior paradigm. We argue that a framework involving semantic pointers can provide a unified account of conceptual phenomena, and we compare our framework to existing alternatives in accounting for the scope, content, recursive combination, and neural implementation of concepts."


Full paper - http://scholar.google.com/scholar_url?url=https://pdfs.semanticscholar.org/51fa/7ddfd385d451e5f17cd21cf551896688057b.pdf&hl=en&sa=X&scisig=AAGBfm3_WSTiqx_kvIo8vvxSOipwLKtGiw&nossl=1&oi=scholarr
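As a rough illustration only (not the authors' implementation), the "compression and recursive binding" described in the abstract can be sketched with circular convolution of high-dimensional vectors, the operation commonly used for this kind of binding; the vector names below are made up for the example:

import numpy as np

def bind(x, y):
    # Circular convolution compresses two vectors into one of the same length.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def unbind(z, y):
    # Approximate inverse: bind with the involution of y to recover x (noisily).
    y_inv = np.concatenate(([y[0]], y[1:][::-1]))
    return bind(z, y_inv)

d = 512                                           # dimensionality of the vectors
rng = np.random.default_rng(0)
perceptual = rng.normal(0, 1 / np.sqrt(d), d)     # stand-in perceptual representation
lexical = rng.normal(0, 1 / np.sqrt(d), d)        # stand-in lexical representation

pointer = bind(perceptual, lexical)               # compressed, symbol-like "pointer"
recovered = unbind(pointer, lexical)              # noisy reconstruction of the perceptual part

similarity = np.dot(recovered, perceptual) / (
    np.linalg.norm(recovered) * np.linalg.norm(perceptual))
print(round(similarity, 2))                       # well above chance for random vectors

The point of the sketch is only that a single fixed-size vector can carry bound-together information that remains approximately decompressible, which is the property the semantic pointer account relies on.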

When talking about concepts with students I define the term as follows: a mental representation of a unit, reflected as patterns of synaptic activity. Read more about the study of concepts in The Cognitive Science of Science (2012) by Paul Thagard and Cognition (2013) by Daniel Reisberg.
         
Discussions on science are often short-circuited when science is oversimplified. Science is concept-complex; it consists of multiple components (represented conceptually, as in converging evidence reflected in complex neural circuits; refer to semantic pointers above). When discussing the implications and value of science it is important that its complexity is appreciated. That doesn't mean all discussions related to science have to be complex; what it means is that the concept or concepts being discussed or analyzed should be clearly stated. As an example, when talking about research methodology, a component of scientific thinking is being discussed, but only a portion: scientific thinking is much broader than research methodology. When speaking of scientific literacy (derived scientific literacy), it is important not to confuse this with domain-specific scientific knowledge. As an example, exercise literacy (knowledge of the science of exercise) is not synonymous with scientific literacy. There are those who rate high in both exercise literacy and scientific literacy; the point is, the two are not interchangeable. Proponents of science and science educators do a disservice when they misrepresent science.
 
Scientific Literacy and Scientific Cognition

Discussions involving scientific literacy are ubiquitous. Scientific literacy is conceptualized and operationalized in various ways. Examples used in defining scientific literacy include: understanding science and its applications, knowledge of what counts as science, general scientific knowledge, knowledge of the risks and benefits of science, making informed decisions regarding science and technology, etcetera (DeBoer 2000; Brennan 1992). A precise, standard conceptualization of scientific literacy has not been demonstrated since the origin of the concept (DeBoer 2000). In the context of this article, scientific literacy is synonymous with general scientific knowledge. Scientific literacy in this form involves remembering scientific facts, theories, principles, and so on—products of scientific inquiry. This form of literacy is sometimes referred to as derived scientific literacy. Scientific literacy is important; however, other science-related concepts are just as important. Scientific cognition is not the same as scientific literacy. Scientific cognition (thinking) involves complex cognitive mechanisms. It involves much more than general scientific knowledge, procedural skills to conduct research, attaching "science says" to your statements, a science degree, perpetuating the views of popular science figures, identifying yourself as evidence based, asking for evidence, being skeptical, etc. Scientific thinking involves an array of components and can be used in everyday, out-of-the-lab thinking. Scientific thinking is broad and should be used in an array of contexts. Deanna Kuhn asserts that the essence of scientific thinking is coordinating belief with evidence (2011). At the very least, scientific cognition involves philosophy of science, scientific methodology, quantitative reasoning, probabilistic reasoning, and elements of logic. Various scales have been developed to measure scientific thinking. Kahan developed the Ordinary Science Intelligence Scale (OSI 2.0, Kahan 2014), and Drummond and Fischhoff (2015) developed the Scientific Reasoning Scale (SRS). Drummond and Fischhoff found that measures of scientific reasoning were distinct from measures of scientific literacy, even though the two were positively associated... Read more- Science: The Vast Enterprise  https://www.csicop.org/specialarticles/show/science_the_vast_enterprise


But, it's only a theory

“It’s only a theory” is a phrase often used to suggest that the theory in question is weak. The phrase is often used as a response to a theory that one doesn’t agree with or understand. It is imperative to recognize that theory in science is drastically different from the type of theory discussed in everyday conversation. In science, a theory represents a body of knowledge that offers an explanation for converging lines of evidence. Science needs theory! Lay person theory (everyday theory) reflects speculation or a guess directed at explaining phenomena.
  
“Theory: In science, a well-substantiated explanation of some aspect of the natural world that can incorporate facts, laws, inferences, and tested hypotheses.”  National Center for Science Education
“The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence.” National Academy of Sciences  Read more- It's Only A Theory
https://jamiehalesblog.blogspot.com/2015/02/its-only-theory.html

Limitations of Peer Review

In the Peer Review Process a paper is submitted to a journal and evaluated by several reviewers (often individuals with an impressive history of work in the area of interest, that is, the specific area the article addresses). After critiquing the paper the reviewers submit their thoughts to the editor. Then, based on the commentaries from the reviewers, the editor decides whether to publish the paper, suggest additional changes that could lead to publication, or reject the paper.
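As a loose illustration only (journal policies differ, and the rule below is hypothetical, not taken from any publisher), the editorial decision step described above can be sketched as a simple function over reviewer recommendations:

from collections import Counter

def editorial_decision(recommendations):
    # Hypothetical rule: recommendations are 'accept', 'revise' or 'reject'.
    counts = Counter(recommendations)
    if counts['reject'] > len(recommendations) / 2:
        return 'reject'
    if counts['accept'] == len(recommendations):
        return 'publish'
    return 'request revisions'

print(editorial_decision(['accept', 'revise', 'revise']))   # request revisions

In practice the editor weighs written commentaries rather than counting votes, so this is only a schematic of the possible outcomes (publish, revise, reject), not a description of how any particular journal operates.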

Single Blind and Double Blind Reviews

In Single Blind Reviews authors do not know who the reviewers are.  In Double Blind Reviews authors do not know who the reviewers are, nor do reviewers know the identity of the authors.  In many fields Single Blind Reviews are the norm, while in others Double Blind Reviews are preferred.

“Peer review is one way (replication is another) science institutionalizes the attitudes of objectivity and public criticism.  Ideas and experimentation undergo a honing process in which they are submitted to other critical minds for evaluation.  Ideas that survive this critical process have begun to meet the criterion of public verifiability” (Stanovich, 2007, p. 12).

Peer review doesn't guarantee that only quality information will be published. The Peer Review Process is not perfect, but some researchers suggest it is one of the best safeguards we have against junk science (Stanovich, 2007). When evaluating the worth of scientific data, in addition to whether it is published in a peer reviewed journal, it is important to take into consideration: funding sources, study replication, study design, sample size, conflicts of interest, sampling error, different measures of reliability and validity, the reporting of limitations, and other possible criticisms of the study. There are good studies that never get published in peer reviewed publications, and low quality studies that are published by peer reviewed publishers. It is erroneous to label a study, review, commentary, meta-analysis or any other scholarly paper as high quality based solely on peer review status. This over-glorification of peer review pervades academia and pop science. Read more- Peer Review is not the antidote https://jamiehalesblog.blogspot.com/2018/01/peer-review-is-not-antidote.html

Stay tuned for part 2

References available upon request


Wednesday, July 11, 2018

A New Understanding of the Human Brain: The Human Advantage


by Jamie Hale

College students are taught that the human brain consists of 100 billion neurons.  This claim can be found in numerous textbooks.  College instructors often promote the 100 billion neuron claim.  This claim is also promoted by widespread media sources. If you have read much about the brain or engaged in dialogue regarding the human brain there is a good chance you have encountered this statement: seemingly, this is general neuroscience- basic stuff.    When I was a graduate student this number was accepted without question.  What is the original source for this number?

Another claim often made regarding brain science is that there are "ten times more glia than neurons in the human brain" (glia are often referred to as neuron support cells).  Is there an original source for this number?  Is there evidence for 1 trillion glia in the human brain?
 
Herculano-Houzel addresses both of these topics in her book The Human Advantage: A New Understanding of How Our Brain Became Remarkable (2016).  She provides evidence to refute both the 100 billion neuron claim and the 1 trillion glia claim. Her research has led to changes in how neuroscience is taught and has driven popular and scholarly publications to make corrections.

The Human Advantage: In Review

Suzana Herculano-Houzel is the author of The Human Advantage.  She is a former associate professor and head of the Laboratory of Comparative Anatomy at the Federal University of Rio de Janeiro.  She is the author of six books on the neuroscience of everyday life, and a former writer and presenter of the TV series Neurologica.  She is currently at Vanderbilt University.
How do humans have such tremendous cognitive abilities?  Herculano-Houzel argues that humans are remarkable, but they are not special in light of evolution.  Human brains follow the rules of primate evolution.  Primates have an advantage over other mammals regarding brain structure; primate brains have evolved in a way that allows neurons to be added without the large increases in average cell size seen in other mammals.  Primate brains have evolved differently from the brains of other animals.  As an example, cows and chimpanzees have brains that are similar in mass, but the chimpanzee can be expected to have at least twice as many neurons as the cow.  Human brains are scaled-up primate brains.  Contrary to the popular claim that the human brain is larger than can be expected for our body type (expressed as the encephalization quotient), the author argues that the number of brain neurons as a function of body mass is what would be expected for a non-great-ape primate.

Neuroscientists in the past thought that the human brain is large relative to the size of the body that contains it, when directly compared to the brain and body sizes of great apes.  If our body is smaller, then our brain should be smaller, and yet it is three times larger in terms of mass.  However, Herculano-Houzel's data show that when great apes are excluded, humans show the same relationship between body mass and number of brain neurons as other primates.  In the first decade of the twenty-first century, systematic comparisons of cognitive abilities among nonhuman primates, and of self-control abilities among birds and mammals, started being made relative to the encephalization quotient.  The general finding was that "simple absolute brain size was a much better correlate of cognitive capabilities than the encephalization quotient.  It was back to square one.  If the human brain is not the largest, then how can it be the most capable of them all?" (Herculano-Houzel, 2016, pp. 16-17).  The human brain is just what can be expected for a primate brain that has evolved to adapt to human conditions.  The primary mechanism responsible for human cognitive abilities is the number of neurons in the cerebral cortex.  The human brain has more neurons in the cerebral cortex (16 billion) than any other animal, even the African elephant, whose brain contains 257 billion neurons in total.
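The encephalization quotient mentioned above is a ratio of actual brain mass to the brain mass expected for an animal of a given body mass. A minimal sketch using one common convention (Jerison's expected-mass formula for mammals); the constants and the sample masses below are illustrative, not taken from the book:

# Encephalization quotient: actual brain mass / expected brain mass.
# Jerison's convention for mammals: expected = 0.12 * body_mass_g ** (2/3).
def encephalization_quotient(brain_mass_g, body_mass_g):
    expected = 0.12 * body_mass_g ** (2.0 / 3.0)
    return brain_mass_g / expected

# Illustrative, rounded masses in grams, not measurements from the book.
print(round(encephalization_quotient(1400, 70000), 1))   # human: prints 6.9
print(round(encephalization_quotient(400, 45000), 1))    # chimpanzee: prints 2.6

Herculano-Houzel's point, as quoted above, is that this kind of ratio turned out to predict cognitive capability less well than simple absolute measures.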

What is the original source for the claim of 100 billion neurons in the human brain?  Herculano-Houzel asked senior neuroscientists and no one was able to point her to the original source. After an extensive search through the scientific literature she wasn't able to find a single source supporting the 100 billion neuron claim.  According to Herculano-Houzel, Eric Kandel (Nobel Laureate), co-author of Principles of Neural Science, couldn't provide an original source for the claim, even though the claim appears in that book.  When asked about it, Kandel responded that he wasn't responsible for the chapter containing the 100 billion neuron claim.
           
Is there evidence for 1 trillion glia in the human brain?  Herculano-Houzel reports she couldn't locate any research to support the claim of 1 trillion glia in the human brain.  Both of these claims (100 billion neurons and ten times more glia) are often taken as fact.  Accepting information as fact even though it is not supported by evidence is problematic, specifically because it is at odds with a central tenet of science: scientific claims are based on evidence.  A paper published by Herculano-Houzel and colleagues titled "Equal Numbers of Neuronal and Non-Neuronal Cells Make the Human Brain an Isometrically Scaled-Up Primate Brain," which is now heavily cited, was rejected by high-ranking journals including Nature, Proceedings of the National Academy of Sciences of the U.S.A., Neuron and the Journal of Neuroscience. The paper was eventually published in the Journal of Comparative Neurology.

Herculano-Houzel developed a method called the "isotropic fractionator" that allowed her to create what she calls brain soup.  The method dissolves cell membranes but not nuclear membranes (each neuron contains a single nucleus), producing a brain soup of free-floating nuclei.  These nuclei are relatively easy to count by sampling tiny amounts of the soup: all the nuclei are stained blue, collected and counted.  In the book she describes what went into developing the technique.  The first attempts to use the method led to the destruction of some of the nuclei. Early attempts involved testing the preparation after a few hours of fixation; for all of the nuclei to remain intact, longer preparation times were required. It was finally established that after approximately two weeks of fixation the nuclei would all stay in place during testing.  Other researchers have used this method.  Christopher von Bartheld, from the University of Nevada, Reno, and Jon Kaas, from Vanderbilt University, have shown this method to be faster, more reliable and easier to apply than stereology, which was commonly used in the past.
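The counting arithmetic behind the method is straightforward: count nuclei in a few tiny aliquots of the well-mixed suspension, convert to a density, and scale up to the full volume. A minimal sketch, with made-up numbers purely for illustration (not Herculano-Houzel's data):

# Hypothetical numbers, for illustration only.
total_suspension_ml = 40.0                        # volume of the whole "brain soup"
aliquot_ml = 0.00001                              # tiny sampled volume (10 nanoliters)
nuclei_counted_per_aliquot = [22, 19, 21, 20]     # counts under the microscope

mean_count = sum(nuclei_counted_per_aliquot) / len(nuclei_counted_per_aliquot)
density = mean_count / aliquot_ml                 # nuclei per ml of suspension
estimated_total_nuclei = density * total_suspension_ml

print(f"{estimated_total_nuclei:.2e} nuclei")     # about 8.2e+07 for these numbers

Because every cell contributes exactly one nucleus, the estimated number of nuclei is an estimate of the number of cells; distinguishing neuronal from non-neuronal nuclei requires an additional neuron-specific stain.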
   
Results using the isotropic fractionator indicate the human brain has an average of 86 billion neurons and 85 billion non-neuronal cells (glia and endothelial cells, the cells composing blood vessels).  For people who like to point out that "86 is close to 100" and who claim 100 billion is reasonable as an order-of-magnitude estimate, Herculano-Houzel notes that an entire baboon brain contains 11 billion neurons; the 14 billion neuron difference is not a small number of neurons.

The author concludes that the human brain is remarkable, first, due to the number of neurons in the cerebral cortex (approximately 16 billion) and, second, thanks to cooking, which allowed humans to escape the energetic limitations of a raw food diet, limitations that restrict other animals to fewer cortical neurons.  Chapter 11 provides detailed information on how cooking contributed to the human brain.
The book appeals to a large audience.  Even though sections of the book might be difficult for some to read, with the appropriate effort the information is accessible to most people.  The author points out that some of her earlier work was met with resistance.  It shouldn't be surprising that some may have a problem accepting views that challenge what they thought to be neuroscience fact for so many years.  I highly recommend this book. Herculano-Houzel is a major player in neuroscience.

Title: The human advantage: a new understanding of how our brain became remarkable / Suzana Herculano-Houzel.
Description: Cambridge, MA: The MIT Press, 2016
ISBN 9780262034258

Monday, June 25, 2018

Why Science Matters


By James Randi
Visit Randi’s site at www.randi.org 

Science is not the mysterious, distant, smoking-test-tube sort of a priesthood that many imagine it to be. Rather, it is simply an organized, formal method of “finding out.” Science works. We’re all much better off for having vaccines, rapid international travel, fast access to information, instant communication, and improved, safer nutrition—all direct results of what scientists have discovered about how our real world works. And have no doubt about it: we’re living in a real world, one that doesn’t really care about our comfort or even our survival. We have to see to these matters, and we’ve gotten to be very good at this.

That’s due to what we call “science.” 

There are those who try to disparage efforts by science to discover the secrets of the universe, preferring to depend on mythology like faith healing, charms, incantations/prayers, and various other magical motions. Science looks at the evidence, evaluates it, proposes a likely scenario that can accommodate it—a theory—and then tests that idea for validity.

But science doesn’t really discover many cold, hard facts. Rather, it discovers statements that appear to explain certain observed phenomena or problems. These statements—s = ut + ½at², for example—are tested endlessly. Should they fail, they are either re-written or scrapped.
 
You just may have recognized that formula above. It’s a discovery made by Sir Isaac Newton, and expresses the variables of the situation in which a cannonball is dropped from a convenient Leaning Tower in Italy. The formula works quite well, except when the cannonball is replaced by something the size of an electron or a galaxy. Then, it fails. 
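As a concrete, idealized check of the formula (the numbers below are chosen purely for illustration, and air resistance is ignored), a cannonball dropped from rest has u = 0, so after t seconds it has fallen s = ½at²:

# Idealized illustration of s = u*t + 0.5*a*t**2 (air resistance ignored).
u = 0.0        # initial velocity in m/s: dropped from rest
a = 9.8        # acceleration due to gravity in m/s^2
t = 3.0        # time in seconds

s = u * t + 0.5 * a * t**2
print(f"Distance fallen after {t} s: {s:.1f} m")   # about 44.1 m

Within everyday scales the prediction matches observation closely, which is exactly the sense in which the formula "works quite well."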

Does that mean that the eminent scientist Newton was wrong all these years? Did science fail? No. Within the parameters in which Sir Isaac worked, he was right; outside of those limits, quantum physics takes over, and all’s right with the world once more.
 
This self-correcting feature of science is not a weakness. It’s one of the most important advantages of the discipline. Scientists learn something new when they’re wrong. And they correct their findings, and we get closer to the truth. Science has no dogmas… The bottom line: Science works, we need it, and it improves our lives and the lives of those dear to us. What more can we ask?  - This article is an excerpt from In Evidence We Trust: The Need for Science, Rationality and Statistics

Reviews of In Evidence We Trust
Jamie Hale: In Evidence We Trust
Recommended Resources - In Evidence We Trust