Koriat, A., S. Lichtenstein, and B. Fischhoff. 1980. “Reasons for Confidence.” Journal of Experimental Psychology: Human Learning and Memory 6(2), 107–18.
Lerner, J., and A. Lomi. 2018. “Diverse Teams Tend to Do Good Work in Wikipedia (But Jacks of All Trades Don’t).” In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 214–21.
Lorenz, J., H. Rauhut, F. Schweitzer, and D. Helbing. 2011. “How Social Influence Can Undermine the Wisdom of Crowd Effect.” Proceedings of the National Academy of Sciences 108(22), 9020–25.
Luker, K. 1985. Abortion and the Politics of Motherhood. Berkeley: University of California Press.
Mercier, H. 2011a. “On the Universality of Argumentative Reasoning.” Journal of Cognition and Culture 11, 85–113.
Mercier, H. 2011b. “Reasoning Serves Argumentation in Children.” Cognitive Development 26(3), 177–91.
Mercier, H., and D. Sperber. 2011. “Why Do Humans Reason? Arguments for an Argumentative Theory.” Behavioral and Brain Sciences 34(2), 57–111.
Mercier, H., and D. Sperber. 2017. The Enigma of Reason. Cambridge, MA: Harvard University Press.
Milgram, S. 1963. “Behavioral Study of Obedience.” Journal of Abnormal and Social Psychology 67(4), 371–78.
Nickerson, R. S. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.” Review of General Psychology 2(2), 175–220.
Perry, G. 2013. Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. Brunswick: Scribe Publications.
Polavieja, G. G. de, and G. Madirolas. 2014. “Wisdom of the Confident: Using Social Interactions to Eliminate the Bias in Wisdom of the Crowds.” arXiv:1406.7578.
Rollwage, M., R. J. Dolan, and S. M. Fleming. 2018. “Metacognitive Failure as a Feature of Those Holding Radical Beliefs.” Current Biology 28(24), 4014–21.
Rudas, C., O. Surányi, T. Yasseri, and J. Török. 2017. “Understanding and Coping with Extremism in an Online Collaborative Environment: A Data-Driven Modeling.” PLoS ONE 12(3), 1–16.
Sears, D. O., and R. E. Whitney. 1973. Political Persuasion. Morristown, NJ: General Learning Press.
Sniezek, J. A., and R. A. Henry. 1989. “Accuracy and Confidence in Group Judgment.” Organizational Behavior and Human Decision Processes 43(1), 1–28.
Stanovich, K. E., R. F. West, and M. E. Toplak. 2013. “Myside Bias, Rational Thinking, and Intelligence.” Current Directions in Psychological Science 22(4), 259–64.
Surowiecki, J. 2005. The Wisdom of Crowds. New York: Anchor Books.
Taber, C. S., and M. Lodge. 2006. “Motivated Skepticism in the Evaluation of Political Beliefs.” American Journal of Political Science 50(3), 755–69.
Tiedens, L. Z., and S. Linton. 2001. “Judgment under Emotional Certainty and Uncertainty: The Effects of Specific Emotions on Information Processing.” Journal of Personality and Social Psychology 81(6), 973–88.
Witte, K., and M. Allen. 2000. “A Meta-Analysis of Fear Appeals: Implications for Effective Public Health Campaigns.” Health Education & Behavior 27(5), 591–615.
Yaniv, I. 2011. “Group Diversity and Decision Quality: Amplification and Attenuation of the Framing Effect.” International Journal of Forecasting 27(1), 41–49.
We are fallible, but we are not incompetent. Certainly not as incompetent as we might have thought based on the initial experimental results about our cognition. Instead, we were asking the wrong questions about our reasoning. We were expecting our cognition to have a very different goal from the one it seems to have. Our cognition evolved to make us better at surviving. As social beings, our survival depended heavily on fitting into our societies. Leading those societies would be even better, of course. It makes sense that evolution might treat social influence as far more important than truth. Having reasonable ideas about the world still matters, of course. Jumping from the top of a precipice due to social pressure would not be a good adaptation. That is trivial and, indeed, such trivial matters are rarely the subject of disagreement. But most ideas we can hold about the world are less damaging than jumping from precipices. For many ideas, fitting into the group might have been much better for our ancestors’ survival than looking for the best explanations. Convince your peers and, failing that, agree with them.
Convincing requires being well adjusted to what our group considers a solid argument. Obvious falsehoods will be rejected easily by our opponents, and we always have rivals inside our groups, since competition for the most prestigious positions might be unavoidable. Less obvious mistakes, though, the ones other people might not notice, can make for good strategies. Combine that with the observation that we tend to accept arguments in favor of conclusions we like without much critical thinking, and it suggests we might carry acceptable but wrong forms of argumentation and discourse that work inside groups. Those forms don’t need a logical basis. As long as they work, they will be useful.
We see differences in the criteria for what makes a solid argument all the time. Some groups act as if testimonial evidence should be taken at face value. Others hold that there is one source of knowledge, be it a person or a book, so authoritative that we can trust it to always tell the truth. Arguments based on those assumptions often work well inside restricted cultural groups. The same arguments, however, fail to convince any outsiders. Outsiders, with good reason, see those arguments as mere strategies of persuasion, with too little actual logical strength.
If we want to correct our observed tendency to stick to our own ideas, we need very good tools. We need solid standards that can help us avoid our own cognitive pitfalls, but we must be wary of our tendency to choose standards that would benefit our points of view. Our own standards might be good and solid. However, if we want to be safe, we should not trust our own ability to evaluate them. After all, even very intelligent people seem to fall prey to distorting arguments to defend their views. It seems that, to be safe, we need to check our reasoning against those who disagree with our conclusions. If they disagree with our opinion but can’t find a flaw in our methods, that lends much more credence to those methods than if they had been checked by someone who agrees with the conclusions.
In general terms, that means we’d better look for standards that are universal. Those standards should be so clear and obvious that no sane individual could challenge them. That does not rule out the possibility that every human might be insane and agree on a wrong logic. If that were the case, however, there would be nothing we could do anyway. So, we assume we are not that insane, hope that works, and move on.
The aim of this search, therefore, is to identify and avoid mistakes. If only some people see an argumentation strategy as valid, there is a good chance they might be wrong in their assessment. That is especially true when those who consider a reasoning tool valid are the same ones who use that tool to defend their points of view. Guarding against such mistakes requires us to avoid those one-sided strategies. To do better, we should remember that we tend to excuse those who agree with us far too easily (Claassen and Ensley 2015). But that is counterproductive. What we want is to point out errors whenever they happen, even when they are committed by our group. If we care about learning the best explanations, we should probably be even more offended by wrong behavior on our favorite side, as that is the type of behavior that can discredit even good theories.
An exception must be made for competence, of course. Criticism of a method by people who do not know how to use it is not something we need to worry about. Quite the opposite: such criticism is easy to understand as a way of defending one’s own arguments. Take the example of those who do not trust mathematics.