Survival of the Sickest: The Surprising Connections Between Disease and Longevity. Jonathan Prince. HarperCollins.
everywhere throughout the body. And while most cells end up with too much iron, one particular type of cell ends up with much less iron than normal. The cells that hemochromatosis is stingy with when it comes to iron are a type of white blood cell called macrophages. Macrophages are the police wagons of the immune system. They circle our systems looking for trouble; when they find it, they surround it, try to subdue or kill it, and bring it back to the station in our lymph nodes.

      In a nonhemochromatic person, macrophages have plenty of iron. Many infectious agents, like the bacterium that causes tuberculosis, can use the iron within the macrophage to feed and multiply (which is exactly what the body is trying to prevent through the iron-locking response). So when a normal macrophage gathers up certain infectious agents to protect the body, it inadvertently gives those agents Trojan-horse access to the iron they need to grow stronger. By the time those macrophages get to the lymph node, the invaders in the wagon are armed and dangerous and can use the lymphatic system to travel throughout the body. That’s exactly what happens with bubonic plague: the swollen and bursting lymph nodes that characterize it are the direct result of the bacteria’s subversion of the body’s immune system for its own purposes.

      Ultimately, the ability to access iron within our macrophages is what makes some intracellular infections deadly and others benign. The longer our immune system can contain an infection and keep it from spreading, the more time it has to develop other means, like antibodies, to overwhelm it. If your macrophages lack iron, as they do in people who have hemochromatosis, they have an additional advantage – not only do they isolate infectious agents and cordon them off from the rest of the body, they also starve those agents to death.

      New research has demonstrated that iron-deficient macrophages are indeed the Bruce Lees of the immune system. In one set of experiments, macrophages from people who had hemochromatosis and macrophages from people who did not were matched against bacteria in separate dishes to test their killing ability. The hemochromatic macrophages crushed the bacteria; by limiting the availability of iron, they are thought to be significantly better at combating bacteria than nonhemochromatic macrophages.

      Which brings us full circle. Why would you take a pill that was guaranteed to kill you in forty years? Because it will save you tomorrow. Why would we select a gene that will kill us through iron loading by the time we reach what is now middle age? Because it will protect us from a disease that is killing everyone else long before that.

      Hemochromatosis is caused by a genetic mutation. It predates the plague, of course. Recent research has suggested that it originated with the Vikings and was spread throughout Northern Europe as the Vikings colonized the European coastline. It may originally have evolved as a mechanism to minimize iron deficiency in poorly nourished populations living in harsh environments. (If this were the case, though, you’d expect to find hemochromatosis in all populations living in iron-deficient environments, and you don’t.) Some researchers have speculated that women who had hemochromatosis might have benefited from the additional iron absorbed through their diet because it prevented the anemia caused by menstruation. This, in turn, led them to have more children, who also carried the hemochromatosis mutation. Even more speculative theories have suggested that Viking men may have offset the negative effects of hemochromatosis because their warrior culture resulted in frequent blood loss.

      As the Vikings settled the European coast, the mutation may have grown in frequency through what geneticists call the founder effect. When small populations establish colonies in unpopulated or secluded areas, there is significant inbreeding for generations. This inbreeding virtually guarantees that any mutations that aren’t fatal at a very early age will be maintained in large portions of the population.

      Then, in 1347, the plague begins its march across Europe. People who have the hemochromatosis mutation are especially resistant to infection because of their iron-starved macrophages. So, though it will kill them decades later, they are much more likely than people without hemochromatosis to survive the plague, reproduce, and pass the mutation on to their children. In a population where most people don’t survive until middle age, a genetic trait that will kill you when you get there but increases your chance of arriving is – well, something to ask for.

      The pandemic known as the Black Death is the most famous – and deadly – outbreak of bubonic plague, but historians and scientists believe there were recurring outbreaks in Europe virtually every generation until the eighteenth or nineteenth century. If hemochromatosis helped that first generation of carriers to survive the plague, multiplying its frequency across the population as a result, it’s likely that these successive outbreaks compounded that effect, further breeding the mutation into the Northern and Western European populations every time the disease resurfaced over the ensuing three hundred years. The growing percentage of hemochromatosis carriers – potentially able to fend off the plague – may also explain why no subsequent epidemic was as deadly as the pandemic of 1347 to 1350.

      This new understanding of hemochromatosis, infection, and iron has provoked a reevaluation of two long-established medical treatments – one very old and all but discredited, the other more recent and all but dogma. The first, bleeding, is back; the second, iron dosing, especially for anemics, is being reconsidered in many circumstances.

      Bloodletting is one of the oldest medical practices in history, and few treatments have a longer or more complicated record. First recorded in Egypt three thousand years ago, it reached its peak in the nineteenth century, only to be roundly discredited as almost savage over the last hundred years. There are records of Syrian doctors using leeches for bloodletting more than two thousand years ago and accounts of the great Jewish scholar Maimonides employing bloodletting as physician to the royal court of Saladin, sultan of Egypt, in the twelfth century. Doctors and shamans from Asia to Europe to the Americas used instruments as varied as sharpened sticks, sharks’ teeth, and miniature bows and arrows to bleed their patients.

      In Western medicine, the practice derived from the thinking of the Greek physician Galen, who championed the theory of the four humours – blood, black bile, yellow bile, and phlegm. According to Galen and his intellectual descendants, all illness resulted from an imbalance of the four humours, and it was the doctor’s job to restore that balance through fasting, purging, and bloodletting.

      Volumes of old medical texts are devoted to how and how much blood should be drawn. An illustration from a 1506 book on medicine points to forty-three different places on the human body that should be used for bleeding – fourteen on the head alone.

      For centuries in the West, the place to go for bloodletting was the barber shop. In fact, the barber’s pole originated as a symbol for bloodletting – the brass bowl at the top represented the bowl where leeches were kept; the one at the bottom represented the bowl for collecting blood. And the red and white spirals have their origins in the medieval practice of hanging bandages on a pole to dry them after they were washed. The bandages would twist in the wind and wrap themselves in spirals around the pole. As to why barbers were the surgeons of the day? Well, they were the ones with the razors.

      Bloodletting reached its peak in the eighteenth and nineteenth centuries. According to medical texts of the time, if you presented to your doctor with a fever, hypertension, or dropsy, you would be bled. If you had an inflammation, apoplexy, or a nervous disorder, you would be bled. If you suffered from a cough, dizziness, headache, drunkenness, palsy, rheumatism, or shortness of breath, you would be bled. As crazy as it sounds, even if you were hemorrhaging blood you would be bled.

      Modern medical science has been skeptical of bloodletting for many reasons – and at least some of that skepticism is deserved. First of all, the eighteenth- and nineteenth-century reliance on bleeding as a treatment for just about everything is reasonably suspect.

      When George Washington was ill with a throat infection, doctors treating him conducted at least four bleedings in just twenty-four hours. It’s unclear today whether Washington actually died from the infection or from shock caused by blood loss. Doctors in the nineteenth century routinely bled patients until they fainted; they took that as a sign they’d removed just the right amount of blood.

      After millennia of practice, bloodletting fell into extreme disfavor at the beginning of the twentieth century. The medical community – even