Arguments, Cognition, and Science. André C. R. Martins.

Author: André C. R. Martins
Publisher: Ingram
Genre: Philosophy
ISBN: 9781786615084
information, and we get two pieces of information, the base rate and the result of some test (or similar information). In that case, it makes sense to use the base rate as a prior estimate. That is exactly how I obtained the 4.7 percent figure.

      A complete use of the Bayesian tools, as we will discuss further ahead, would however demand that I have some initial estimate about the problem before getting any data, base rates included. The initial estimate can be non-informative. That is, I might look for the best way to represent the fact that I know nothing about the problem. That would be the correct theoretical starting point. From there, the problem gives me two pieces of information: the base rate and the chances that the test might fail. Both must be used to get the final chance that the patient is sick. What happens in this case is not that people disregard their initial opinions; they are simply not paying attention to part of the data. That could happen if they are only looking at the problem and trying to locate the number that makes the most sense as an answer. That would be a solution in the spirit of finding a fast heuristic: pick the data that looks most relevant and use only that. If that is what we do, we still need to learn what happens when humans actually have an opinion before they get the data.
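      The arithmetic this kind of problem asks for can be sketched as a direct application of Bayes' rule. The numbers below are purely illustrative (they are not the ones behind the 4.7 percent figure); what matters is the structure: the base rate serves as the prior, and the test's error rates supply the likelihood. A minimal sketch in Python:

```python
def posterior_sick(base_rate, sensitivity, false_positive_rate):
    """P(sick | positive test) by Bayes' rule.

    base_rate: prior probability of being sick, P(sick)
    sensitivity: P(positive | sick)
    false_positive_rate: P(positive | not sick)
    """
    # Total probability of a positive result, sick or not.
    p_positive = (sensitivity * base_rate
                  + false_positive_rate * (1 - base_rate))
    return sensitivity * base_rate / p_positive

# Illustrative numbers only, chosen for the example:
p = posterior_sick(base_rate=0.01, sensitivity=0.95, false_positive_rate=0.10)
print(round(p, 3))  # prints 0.088
```

      With these invented rates, a positive test still leaves the chance of sickness below 10 percent, far from the test's 95 percent sensitivity. That gap between the accuracy of the test and the actual posterior is exactly what base-rate neglect makes people miss.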

      

      Both memory and sensory perception provide more evidence that we reason in ways that mimic a Bayesian analysis. Almost everyone has seen pictures that deceive our eyes in some way. Some of those pictures have two possible interpretations. Others cause mistakes in our estimates of the size or alignment of geometric figures. More complex figures can induce the illusion of movement when no actual movement exists. There are a great many visual illusions. Attempts to systematize them into a few types or a single theoretical framework have proved surprisingly hard (Changizi et al. 2008), but it does seem that the way our brain interprets the information it receives from our eyes is similar to the way we reason (Helmholtz 1867). The task is indeed similar: given what we know about the world and what we observe, the brain tries to arrive at the best possible conclusion. The reasoning, in this case, is not conscious. Our brains just provide us with their best guesses as if they were true (Eagleman 2001). Most of the time, those guesses are remarkably good, but not always.

      If we sometimes get what we see wrong, that is not necessarily a bad thing. Pattern recognition is a very useful skill, whether the patterns emerge in the financial market or in the behavior of the game one is hunting. If you are the first to identify a pattern, there is more to gain, and that advantage can be enough to compensate for the cost of false detections. Indeed, both in general reasoning and in interpreting visual information, we identify patterns very fast, but we also too often identify random, meaningless noise as something meaningful. This general phenomenon is called apophenia (Fyfe et al. 2008). An interesting example of how we see more than what is there is our tendency to identify faces everywhere. We do it with simple typographical juxtapositions of characters such as “:)” or “;-(”, and we do it when seeing faces on rocks, on toast, or in shadowy, blurred images from Mars. Quickly identifying other people and inferring their emotional state is a very useful trait for a social animal (Sagan 1995). The cost of thinking you see a face where there is none is far smaller than the cost you can incur by failing to notice an actual person in your visual field.
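      The cost asymmetry in that last sentence can be made concrete with a small expected-loss calculation. All the numbers here are invented for illustration: when a miss is, say, twenty times as costly as a false alarm, a detector that minimizes expected loss will cry "face" on quite weak evidence.

```python
def report_face(p_face, cost_false_alarm=1.0, cost_miss=20.0):
    """Declare a face whenever the expected loss of saying 'face'
    is lower than the expected loss of saying 'no face'."""
    loss_say_face = (1 - p_face) * cost_false_alarm  # wrong only if no face
    loss_say_none = p_face * cost_miss               # wrong only if face
    return loss_say_face < loss_say_none

# With a 20:1 cost ratio, even weak evidence triggers detection.
print(report_face(0.10))  # prints True: 0.9 * 1.0 < 0.1 * 20.0
```

      The break-even point for these hypothetical costs is at a face probability of 1/21, roughly 5 percent, so anything even mildly face-like gets reported. Seen this way, apophenia is not a flaw but a rational trade-off under asymmetric costs.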

      This over-interpretation allows us to create new ways to communicate. It can also give extra meaning to art. But the reality is that much of what we see as faces is only the hard-wired conclusion of our brains. Evidence from MRI scans shows that the specific areas of the cortex that become more active when we see faces (Kanwisher et al. 1997) show the same type of activity when we perceive a non-face image as a face (Hadjikhani et al. 2009). The timing of the activity suggests we don’t see an image and then conclude it looks like a face. Our brains seem to see an actual face, rather than giving us a later reinterpretation of the image.

      

      We tend to think of our memory as if it were stored in boxes. We might not find a specific box when we want it, and then we would say we forgot something, but we trust the content of each box. If we manage to find it, whatever is inside should correspond to the situation as we experienced it. At best, we may acknowledge that our senses might have been deceived. We might have perceived something wrongly, but we recall accurately what we lived.

      Unless some delusional state is involved, we trust memories, ours or other people’s. Others may lie, but we assume their memories themselves are dependable. We trust memories so completely that we send people to jail every day based only on witness testimony, on what those witnesses remember having seen or heard. We assume a healthy person will remember things as she perceived them, and that she will not create false memories or alter the original ones.

      This picture was accepted not only by laymen but, until recently, also by psychologists. Many practitioners believed in the concept of “repressed memory” (Loftus 1993); indeed, some psychologists still seem to believe in it (Patihis et al. 2014). A “repressed memory” would be an event that a person experienced in the past and committed to memory but is unable to retrieve in the present moment; in other words, only the conscious memory would be missing, presumably because the event was too traumatic. Many therapists worked on the idea that these memories could be recovered through treatment, and that when memories were thus “recovered,” they would correspond to actual events in the life of the patient.

      The first sign that something was wrong with this picture appeared in the 1990s. Therapists observed an unexpectedly large number of cases where people claimed to have recovered memories of abuses they had suffered. Two things were suspicious about these supposedly recovered memories: they often included elements that should be very rare, such as satanic practices, and they were all recovered under particular types of psychotherapy. And, as should happen if those memories were real, arrests and convictions followed. The sheer number of those cases, however, worried some researchers. Maybe those memories, as vivid and real as they seemed, were only an artifact of the therapy.

      Research followed. In a series of interesting experiments, Elizabeth Loftus (1997) observed that she could create false memories in the minds of her subjects. Cases involving innocent people who had been wrongly found guilty were later discovered. Those wrong convictions were not only related to “repressed memories.” Many times the evidence of guilt had consisted only of witness reports based on distorted memories. Simple things like showing pictures of innocent people to a victim could cause the error. At a later point, the victim wrongly identified a man from those pictures as the man who had raped her. It is not clear how many innocent lives were destroyed due to our lack of understanding of how our minds work, or how many real culprits were not identified because investigations did not follow other leads once wrongly accused culprits had been found.

      Our memory seems to be much more fluid than any of us would have thought. It changes to accommodate new information. Its final content is a function of what we perceived, our initial estimate, but also of what we learn later, the new data. Memory combines both and tries to keep a record of whatever our brains conclude to be the most plausible scenario. Our brains change the record of events so that it fits our new beliefs. Missing pieces of information can be obtained from sources as unrelated to the event as a picture one observes later. What we carry in our minds is actually a mixture of what we observed, what we expected to see, and things we have experienced or thought since. As Steven Novella said in his blog, “When someone looks at me and earnestly says, ‘I know what I saw,’ I am fond of replying, ‘No, you don’t. You have a distorted and constructed memory of a distorted and constructed perception, both of which are subservient to whatever narrative your brain is operating under’” (Novella 2014).
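      One simple way to picture this blending, under the Bayesian reading suggested here, is the textbook fusion of two Gaussian estimates: the original perception plays the role of the prior, later information plays the role of new data, and the stored “memory” is pulled toward whichever source is more precise. The quantities below are hypothetical, chosen only to show the mechanism:

```python
def fuse(perceived, var_perceived, later_info, var_later):
    """Precision-weighted average of two Gaussian estimates:
    the stored memory lands between the original perception and
    the later information, closer to the more reliable source."""
    w = (1 / var_perceived) / (1 / var_perceived + 1 / var_later)
    return w * perceived + (1 - w) * later_info

# A hazy perception (variance 4) fused with a confident later account
# (variance 1): the result lies between 10 and 16, nearer to 16.
memory = fuse(perceived=10.0, var_perceived=4.0, later_info=16.0, var_later=1.0)
print(memory)
```

      With these invented variances, the fused value works out to 14.8: the “memory” has drifted most of the way toward the later, more confident account, even though that is not what was originally perceived.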

      Unfortunately, we have no uncertainty associated with our memories. A completely correct description of events would try to keep different possible scenarios, with chances associated with each one; the most likely memory would be the first one we remember, followed by other possible alternatives. But that is not how our brains work. While correct, that method would require extra mental effort and a larger memory capacity. Our brains simply choose the most probable alternative and keep that. It is an approximation, but not as absurd as it might seem at first. Storing memories this way might also be a useful heuristic, but we need to be aware, to