Table 1.1 (Source: Adapted from Shermer, 1997, pp. 44–61; table body not reproduced here)
Problems in Pseudoscientific Thinking: Scientific Language Does Not Make a Science
Pseudoscientific thinking involves reference to a theory or method that lacks scientific support. What we are thinking about may be called science, but it may have no scientific basis and may not rest on the scientific method. Shermer notes, “Scientific language does not make a science” (p. 49). It is tempting to use words that sound impressive and appear in a discipline, even when no convincing explanation of their meaning or importance is provided. What’s better than coining a new term, especially with your name linked to it? Social science is replete with such terms. Using scientific terms is not necessarily incorrect; the problem is using terms without explaining their meaning in everyday language.
Pseudoscientific thinking: Involves reference to a theory or method that is without scientific support.
Furthermore, using such words without supporting evidence and confirmation is an example of pseudoscientific thinking. In the area of health care research, for example, many architects now use the term evidence-based design to describe their work. Without a clear understanding of what that term means and of what qualifies as credible evidence (e.g., subjective measures such as patients’ self-reports, or objective measures such as vital signs, levels of pain medication, and recovery time), simply using the phrase makes designers sound more authoritative than they actually are. Using a term in a discipline without explaining its meaning, or without clearly indicating how the term is operationalized (i.e., how it is being measured), creates misunderstanding.
Coincidence (Gambler’s Fallacy) and Representativeness (Base Rate)
Two other important aspects of this category, “Problems in Pseudoscientific Thinking,” according to Shermer (1997), are coincidence (gambler’s fallacy) (pp. 53–54) and representativeness (base rate) (pp. 54–55). These two aspects frequently appear when we make assumptions in the research process. In the gambler’s fallacy, we lose sight of the facts of probability: we think an event is less likely to occur if it has just occurred, or more likely to occur if it has not occurred for a while. When we toss a coin, we have a 50–50 chance of heads. Each toss of the coin is an independent event. Yet if we have seen three heads in a row, we may be very likely to predict that the next toss will yield tails when, in fact, the chance of a tail (or a head) on the next coin toss is still 50–50 (the brief simulation sketch following the definitions below illustrates this independence).
Coincidence (gambler’s fallacy): Thinking that an event is less likely to occur if it has just occurred, or more likely to occur if it hasn’t occurred for some time (e.g., assuming a slot machine will pay off because it hasn’t for the past few hours).
Representativeness (base rate): One of the heuristics described by Kahneman and Tversky (1972), in which we make decisions based on how representative or characteristic the data are of a particular pattern of events (e.g., people think the birth order BGBBGG is more representative of a sequence of births than BBBGGG).
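To make the independence point in the coin-toss example concrete, here is a minimal simulation sketch. It is illustrative only and not part of Shermer’s text; the use of Python and the number of simulated tosses are assumptions made for the example. The sketch estimates how often heads follows a run of three heads, and the estimate comes out very close to 50–50, just as probability theory predicts.

```python
# Illustrative sketch (not from the source text): estimate the chance of
# heads on the toss that immediately follows a run of three heads, to show
# that each toss is an independent event with a 50-50 chance.
import random

random.seed(42)                     # fixed seed so the sketch is reproducible
n_tosses = 1_000_000                # assumed number of simulated tosses
tosses = [random.random() < 0.5 for _ in range(n_tosses)]  # True = heads

# Collect the outcome of the toss that follows each run of three heads.
next_after_three_heads = [
    tosses[i + 3]
    for i in range(n_tosses - 3)
    if tosses[i] and tosses[i + 1] and tosses[i + 2]
]

p_heads = sum(next_after_three_heads) / len(next_after_three_heads)
print(f"P(heads | previous three tosses were heads) = {p_heads:.3f}")  # about 0.5
```

The estimate hovers around 0.5 no matter how long the preceding run of heads is, which is exactly what the gambler’s fallacy leads people to doubt.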
A related idea is the mistaken belief that correlation (for example, of two co-occurring events) is causation. Superstitions are an example of this erroneous thought process. Athletes are notorious for superstitions (Vyse, 1997). For example, if you win two games in a row after tying your left shoelace first while preparing for each game, you may come to believe that tying the left shoe first influenced the victories. These two events (left-shoe tying and game victory) are correlated; that is, when one event happened, the other also happened, but the shoe tying did not cause the victory. Humans are pattern seekers because they are limited information processors. Humans look for causal relationships that may not exist; they see patterns (a series of coins coming up heads) and predict that the next coin toss will produce a tail. Humans make this prediction because such an outcome would be more representative of the occurrence of events as they know them. This is an aspect of representativeness (discussed earlier in the chapter).
In representativeness, we are on the lookout for events in the world that match or resemble the frequency of occurrence of those events in our experience. When we encounter a situation that does not look representative, we are likely to ignore, disregard, or mistrust it. As we have already discussed in this chapter, Kahneman and Tversky’s work is full of examples of problems in thinking related to representativeness. The base rate is the frequency with which an event (e.g., twins, a hole in one, or perfect SATs) occurs in a population. We may have little knowledge of the actual base rate of events in a population, and we often overestimate the occurrence of events (e.g., likelihood of a plane crash or likelihood of winning the lottery). Our overestimation of the base rate may be influenced by the availability heuristic (discussed earlier in the chapter). If we have read or heard about a recent plane crash, for example, we are more likely to overestimate the occurrence of a plane crash for our upcoming trip.
The odds of dying in a motor vehicle accident are far greater than the odds of dying in a commercial airline accident. Likewise, we are far, far more likely to die from heart disease than from homicide (Kluger, 2006). In other words, we are not logic machines, and we don’t carry around statistics in our heads; instead we carry estimates of events based on the frequency with which we have encountered them, and exposure to media coverage typically inflates our estimates of the base rate, that is, the frequency with which events actually occur.
These errors in understanding the base rate underscore the importance in research of assessing the degree to which participants in your study may have familiarity with the topics under investigation. For example, if you were evaluating patients’ reactions to hospitalization, you would certainly want to ask a question about the number of prior hospitalizations. You want to ask yourself what aspects of a participant’s background might have relevance and possibly influence your research. As another example, if you were investigating students’ satisfaction with their educational institution, it might be helpful to know if the college they attend was their first choice.
Try This Now 1.2
What kinds of background variables and experiences might influence students’ satisfaction with their educational institution, aside from qualities of the institution itself?
Logical Problems in Thinking: Hasty Generalization and Overreliance on Authorities
Among the logical problems in thinking that Shermer lists is “hasty generalization” (p. 56), also called faulty induction: reaching conclusions before the evidence warrants them. Induction is reasoning from premises to a probable conclusion. In faulty induction, the conclusion is not warranted. People also describe this kind of thinking as stereotyping. As but one example, when we take a limited range of evidence about an individual and ascribe those qualities to the group of which the person is a member, we are stereotyping. A popular television show,1 The Big Bang Theory, had characters who embodied stereotypes, whether Sheldon Cooper, the brilliant but interpersonally less skilled theoretical physicist, or Amy Farrah Fowler, who for some time was his “girlfriend” before becoming his wife. Those who faithfully watched the show will recall that initially Sheldon described Amy as a “girl” and his “friend” but not his “girlfriend” in the traditional meaning of the term. For long-term watchers of the show, the staying power of the series came through the evolution of these characters over time as they became less true to the stereotypes they represented. But many individuals argue that the portrayal of these characters reinforced unfortunate and hurtful stereotypes about scientists and gender (Egan, 2015).
1Ranked