Scatterbrain. Henning Beck.

Author: Henning Beck
Publisher: Ingram
Genre: Biology
ISBN: 9781771644020
incorporate breaks into our learning process, we fear we might forget things that could be important. But our brain is not interested in the sheer mass of information so much as it is interested in our ability to connect the information.

      To research this, one study asked participants to identify the painting style of various artists. The subjects were divided into two groups. The first group was shown a series of six images, all of them works by one artist, followed by another series of six images, all by the next artist, and so on for the next four artists. The second group was shown all of the images mixed up in no particular sequence, so that the artistic styles alternated from image to image. The results were clear. The group that viewed the images in alternating order was better able to attribute a new image to the correct artist's style. Those in the first group, who had viewed the images in sequential blocks, were less able to recognize the underlying painting concept (the artistic style). Despite the results, most of the test subjects indicated that they preferred learning in blocks ("massed learning"), as they believed it to be the more successful strategy.6

      This result has been reaffirmed over and over again in studies. Taking breaks is what makes learning successful. Not only for learning about various artistic styles, but also vocabulary at school, movement patterns, biological correlations, or lists of words. The reason for this has to do with the way in which our nerve cells interact. An initial information impulse triggers a stimulus for structural change in the cells. These changes must first be processed to prepare the cells for the next informational push. Only after they have taken a short break are they optimally prepared to react to the recurrent stimulus. If it comes too early, it will not be able to fully realize its effect.7 It is only by alternating information that the brain is able to embed it in a context of related bits of knowledge. It’s not too different from making lasagna. You could of course choose to pour the sauce into the pan all at once and then pile the lasagna noodles and the cheese on top. That would be something like “massed cooking,” but it wouldn’t result in authentic lasagna. Only when you alternate the components do you get the desired, delicious dish—or, when it comes to the brain, a meaningful thought concept. This kind of conceptual thought is the brain’s great strength because it enables us to get away from pure rote learning. Only then is it possible for us to organize the world into categories and meaningful correlations and, thereby, to begin to understand it.

       Don’t learn—understand!

      ANYONE WHO CAN learn something can also unlearn it. But once you have understood something, you cannot de-understand it. Learning is nothing special. Most animals and even computers can learn. But developing an understanding of the things in the world is the great art of the brain, which it masters precisely because it does not consume data and extract correlations from them the way a robot would. A brain creates knowledge out of data, not correlations. These are two vastly different concepts, though they are often equated in the modern, digitalized world. The strings :-) and R%@ contain the same amount of data, but the information they convey is completely different, to say nothing of the concept behind the first one: a smiling face. To a computer, the strings :-) and :-( are only 33 percent different (one character out of three). But to us, they are 100 percent different.
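The "33 percent" in the paragraph above is just character-by-character comparison. A minimal sketch (the function name is illustrative, not from the book):

```python
def char_difference(a: str, b: str) -> float:
    """Fraction of positions at which two equal-length strings differ."""
    assert len(a) == len(b)
    mismatches = sum(1 for x, y in zip(a, b) if x != y)
    return mismatches / len(a)

print(char_difference(":-)", ":-("))  # 1 of 3 characters differs -> ~0.33
print(char_difference(":-)", "R%@"))  # all 3 characters differ -> 1.0
```

By this purely data-level measure, the smiley and its sad twin are nearly identical, while the meaning a human reads into them is opposite. That gap between data and concept is exactly the author's point.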

      How do we acquire such knowledge, such thought concepts? How do we understand the world? We can see how we don't do it by marveling at computer algorithms: specifically, at the most modern algorithms in existence, the "deep neural networks." These are computer systems that are no longer programmed to follow classic "if A, then B" logic. Rather, they "borrow" from the brain and copy its network structure. The software simulates digital neurons that adapt their points of contact to one another depending on the data they need to process. Because the cells and their contacts can adjust themselves, the system is able to learn over time. For example, if the software is supposed to identify a penguin, it is presented with hundreds of thousands of random images, among them a few hundred images of penguins. The program independently extracts the characteristics specific to penguins until it can recognize what a penguin might look like.

      The advances that have been made in artificial neural networks are huge. Merely by regularly viewing images, such a system is independently capable of identifying animals, objects, or humans in arbitrary pictures. Facial recognition capabilities have even surpassed human ability (Google pixelates not only human faces in its Street View maps but also the faces of cows).8 But to put it all into perspective: a computer system like this is to the brain what a local amateur athlete is to an Olympic decathlon champion. In fact, even that comparison falls short, because computers do something very different from neurons, in spite of the pithy appropriation of neuroscience terms by IT companies that claim to be building "artificial neural networks." In reality, computers replicate neither real neural networks nor a brain; the term is a marketing trick by computer companies. For a deep learning network to learn to identify a penguin, it must first process thousands of images of one, following the maxim "practice makes perfect." But this is not necessarily how the brain works.

       Deep understanding

      I WAS RECENTLY standing in the hallway with my two-and-a-half-year-old neighbor. He pointed to the ceiling and said, "Smoke detector." I was amazed and had to ask myself what kind of parents this little boy had. Did they perhaps subject him for weeks and weeks to thousands of pictures of smoke detectors, always repeating the series of images until he was finally able to identify the similarities and characteristic features of smoke detectors and to recognize the object? His father is, admittedly, a fireman, so my neighbor already has a certain predisposition toward fire safety tools. But still, had this little human really been bombarded with thousands of pictures of smoke detectors, fire extinguishers, and fire axes that then enabled him to quickly identify the required implement for the next possible crisis? And did they then send him down the hall in my direction once he had finally passed the test with flying colors? No way! That's not how it works. But the question still remains: How was my little neighbor able to identify a smoke detector in a completely new context after seeing one maybe two or three times in his short life?

      The answer is that my neighbor did not learn about smoke detectors in the same way that a computer does; rather, he understood the idea of smoke detectors. This is something which humans are very good at and which science calls "fast mapping." If, for example, you were to give a three-year-old child never-before-seen artifacts and explain that one very special artifact is named "Koba" or comes from the land of "Koba," the child will remember the Koba object one month later.9 After seeing it only one time! It gets even better if the child is learning to understand new actions and not only new words. Children who are only two and a half years old require only fifteen minutes of playing with an object before they can transfer its properties to other objects. For example, a child who realizes that they can balance a plastic clip named "Koba" on their arm later realizes that a similar clip, but with a slightly different shape, is also called a "Koba" and can be balanced on one's arm.10 The whole exchange takes only a few minutes. How would two-year-olds possibly be able to learn an average of ten new words a day if they had to practice each word hundreds of times? No brain has that much time on its hands.
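In contrast to the thousands-of-examples training loop above, fast mapping can be sketched as one-shot category learning: a single labeled encounter is stored as a prototype, and similar novel objects are matched to it. The feature names and the distance threshold here are illustrative assumptions, not a model from the book:

```python
def distance(a, b):
    """Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

known = {}  # category name -> single stored prototype (one-shot memory)

# One exposure is enough to create the category:
known["Koba"] = (0.7, 0.3)   # assumed features of the clip, e.g. (curvature, size)

def identify(features, threshold=0.25):
    """Return the nearest known category, or None if nothing is close enough."""
    best = min(known, key=lambda name: distance(known[name], features))
    return best if distance(known[best], features) <= threshold else None

print(identify((0.75, 0.35)))  # a slightly different clip -> "Koba"
print(identify((0.1, 0.9)))    # something unlike the clip -> None
```

One stored example generalizes immediately to similar objects, which is the crucial contrast with the perceptron sketch: no repetition, no gradual weight adjustment.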

      Of course, the brain cannot simply learn something from nothing. From what we currently know, we assume that learning by "fast mapping" allows new information to be rapidly incorporated into existing categories (presumably without even bothering the hippocampus, the memory trainer that you learned about in the previous chapter).11 But we are even able to create these categories very rapidly—whenever we give ourselves time for some mental digestion. If you present a three-year-old with three variations of a new toy (e.g., rattles with different colors and surfaces) one right after the other and give each of them the artificial designation of "wug," the child will not easily be able to identify a fourth rattle as a "wug." If, however, the child is allowed half a minute between the presentation of each new rattle to play with the item, he or she will grasp the concept of the wug and be able to identify a new, differently shaped and differently colored rattle as a wug. This seemingly inefficient break, this unrelated waste of time that we would love nothing more than to rationalize away in our