How medicine missed that all chronic disease is one condition, and how we can heal.
v1.0 - January 2026
Chapter One
In 2022, the world conducted an inadvertent experiment on millions of people. As GLP-1s exploded in popularity for weight loss, a pattern started to reveal itself: patients' disease markers were improving across many seemingly unrelated conditions.
Within days of starting treatment, long before patients experienced any weight loss, their diabetes was improving. Within weeks, their blood pressure was dropping. Fatty liver disease, kidney function, cognition in Alzheimer's, Parkinson's tremors - all began to improve.
And it was happening in thin people too. Parkinson's patients at healthy weights saw their tremors improve. Alzheimer's patients showed slower cognitive decline. People struggling with alcohol addiction had far fewer cravings. By 2024, GLP-1s were in clinical trials for over a dozen diseases across nearly every medical specialty: heart disease, liver disease, kidney disease, Parkinson's, Alzheimer's, addiction, even cancer. Like dominoes, one disease after another was responding to this single molecule, hinting at the common upstream cause medicine’s siloed approach couldn’t see.
These were supposed to be completely different diseases with different causes. According to medicine's framework, kidney disease has nothing to do with Parkinson's. Liver disease with addiction. Alzheimer's with heart disease. Yet one peptide was improving them all, calling into question everything we thought we knew about disease itself, and providing a clue as to what disease actually is.
To understand why this went against our current understanding, let's go back to 1847, to a Vienna maternity ward where women were dying at alarming rates. They would come to the hospital healthy, and then within days they would develop what was at the time called childbed fever, which presented initially as fever and pain, and then would invariably, and quite quickly, lead to death. Everyone knew this was happening, but no one knew why or what to do about it. A young doctor named Ignaz Semmelweis was working at a hospital with two maternity wards, one staffed by doctors and one by midwives, and he noticed that in the doctors' ward, 1 in 10 of the women who came in for care was dying, whereas in the ward staffed by midwives only about 1 in 30 was dying. The doctors' ward was far better resourced - more training, superior equipment - so why were its patients dying at three times the rate? Semmelweis became obsessed with figuring out the difference. What were the midwives doing that the doctors weren't? Or - and this was likely a harder question to ask - what were the doctors doing that was killing their patients?
Then one day one of the doctors from the ward, a good friend of Semmelweis's, got sick and died. He had been performing an autopsy with one of his students when the student's hand slipped, cutting him with the scalpel. The doctor then developed the same symptoms as the women with "childbed fever", and the same rapid decline leading to his death. These doctors were performing autopsies in the morning and delivering babies in the afternoon. They must have been carrying something from the dead to the living.
Semmelweis came up with the idea of having the doctors wash their hands with chlorinated lime before attending patients, to see if that would stop them from transferring childbed fever. The doctors on his ward implemented his protocol, and almost overnight, the death rate fell from 10% to 1%. But he didn't know why it worked, he couldn't explain it scientifically, and so despite these clear and documented outcomes, the medical establishment wouldn't take him seriously. Without institutional backing, individual doctors had no reason to believe him over their training. They were outraged at his suggestion that doctors could be unclean. Semmelweis grew increasingly desperate, writing angry letters accusing doctors of killing their patients. Which, to be fair, they were. He couldn't understand why they couldn't see what he saw, why they wouldn't believe what his data clearly showed. Eventually, sick of hearing his wild rants, they drove him from Vienna. He died in an asylum at the age of 47.
By this time, in the mid-1800s, physicians had catalogued a huge number of diseases, each with its own set of symptoms and perceived causes, but their theories couldn't explain the patterns they were seeing. A surgeon could perform the same operation on two patients, with identical techniques and pre- and post-op care, and one would heal perfectly while the other developed angry red streaks, fever, and delirium, and then died. They called this outcome "hospital gangrene" or "surgical fever" and had elaborate theories to explain it: bad hospital air, the patient's constitution, even the weather.
Cholera outbreaks would come out of nowhere and kill thousands of people. During the 1854 London outbreak, entire neighborhoods were devastated by the disease while others were left untouched. The belief at the time was that miasma was causing it - you could smell the foul air rising from the cesspits and the Thames, so of course that was what was killing people. Except some people living in the worst air stayed healthy, and removing patients from the miasma didn't always help.
Medicine had unique theories for every disease, each with its own experts and treatments. Wound infection experts debated whether pus was healthy or dangerous. Cholera specialists mapped outbreaks and argued about miasma versus contagion. For tuberculosis, doctors prescribed rest cures in various climates, endlessly debating which location worked best. Medical knowledge was fragmenting, with everyone focused on their specific illness, because specialization seemed like the path to effective treatments. No one thought to connect them all. Why would they? A wound infection had nothing to do with cholera. Tuberculosis with childbed fever. They were different problems requiring different explanations.
Then in 1857 a chemist named Louis Pasteur was hired by the French wine industry. Their wine kept spoiling, unpredictably turning to vinegar and ruining entire batches at once. Pasteur looked at the wine under a microscope and saw what at the time were called animalcules - tiny organisms living in the wine. He noticed one type of organism in the good wine and a different type in the spoiled wine. Maybe, he thought, specific types of microorganisms were causing specific types of fermentation.
Now microorganisms weren't a new concept - people had been observing them under microscopes for almost two centuries - but the thinking at the time was that these tiny life forms spontaneously generated from decay. When mold suddenly appeared on your food, or maggots in your meat, they seemed to arise from nothing; one day they weren't there, and the next they were. Pasteur was the first to show that life doesn't just come from nothing. Through what is famously known as the swan-neck flask experiment, he showed that broth only spoiled when airborne particles could reach it - proving these organisms were deposited from the air, not spontaneously generated, and just needed the right conditions to multiply. Once he understood this, he could explain why some batches of wine were going bad and others weren't, and he devised a way to prevent it: if the winemakers heated the wine, that would kill off the bad organisms before they could multiply and ruin the batch.
Pasteur was so successful with the wine problem that in 1865 he was asked to help another French industry experiencing a similarly mysterious issue. Entire populations of silkworms were dying off in large numbers, again seemingly at random, causing massive losses for the industry. And as you can probably guess, it turned out to be exactly the same story. Pasteur found that microorganisms were infecting the silkworms, causing disease, and that the disease passed from sick silkworms to healthy ones through the spread of these microorganisms, wiping out entire colonies almost at once. The same thing that was spoiling wine was killing living creatures.
This is when Pasteur started connecting the dots. If the same thing - microorganisms - was spoiling wine and killing silkworms, passing from one host to the next through the air or physical contact, what else might it be doing that no one was seeing? Was it possible that this was affecting humans too? That all those various medical conditions like cholera and tuberculosis and childbed fever were also caused by these same microorganisms?
Pasteur had stumbled onto a discovery with the potential to explain all of those disparate illnesses, and possibly even to create the unifying theory of disease that medicine had been lacking. But the concept of 'proof' as we know it today didn't yet exist in medicine, and without a way to prove his ideas, there was no way to convince doctors to incorporate his thinking. They had rejected Semmelweis even with clear evidence, even after he showed that his new protocol dramatically reduced deaths. These doctors had been building and debating their protocols and theories for decades, and changing people's thinking once they've invested their lives in a specific way of seeing things is not easy. For his idea to overturn centuries of thinking, Pasteur would need some way to prove it to them. The challenge was, what kind of proof would convince them? Luckily for medicine, it was around this same time, in the mid-1800s, that the way we thought about proof and what counted as evidence was about to change.
For most of human history, "knowledge" came from a combination of observation and, even more importantly (then and now), authority. If Aristotle said something, it was automatically considered true. If you had enough authority, you could observe something, and if you had a good enough explanation for what you were seeing, your educated guess could become accepted truth. The idea of taking those observations and testing them to prove what you thought you were seeing just didn't really exist. But in the 1600s this system where guesses became truth started to change. Francis Bacon famously argued that you shouldn't be able to simply reason your way to 'truth' - you should have to test it: observe, conduct an experiment, gather data, and only then draw conclusions. And he wasn't the only one. René Descartes contributed the idea of systematic doubt: question everything, break each problem down into its component parts, and analyze each part carefully. By the late 1600s and into the 1700s, scientists had largely adopted this framework, and Newton's work on gravity and motion showed the world how powerful the new approach could be.
This new method of gaining scientific validation, what is now known as the scientific method, worked beautifully for things like physics and math - things that behaved predictably, didn't change or adapt, and could easily be broken down into component parts and measured or tested in a lab. Planets reliably orbit the sun on the same path, chemical elements combine in consistent ratios, and gases follow precise laws relating pressure, volume, and temperature. Through this new method of testing, fields like physics and chemistry were transformed from philosophical speculation into rigorous sciences. And through this transformation, a new era of scientific discovery had begun.
Medicine, however, lagged behind. By the mid-1800s, physicians were still relying largely on the old 'scientific' methods of tradition, authority, and clinical observation. They would watch patients, catalogue their symptoms, and try treatments to see how they worked, but they weren't designing controlled experiments to test their theories about what caused disease. Medicine at this time was still mostly guesswork, with very few effective treatments and even fewer clear answers.
Which brings us to 1876, to a makeshift laboratory in a small German town, where a doctor named Robert Koch was trying to figure out what was killing farmers' livestock. Anthrax, a bacterial disease that had plagued livestock for centuries, was devastating cattle and sheep across the region. Some animals would get sick and die while others in the same field remained healthy, and as we have now seen over and over again, no one had any idea why. Koch, who was extremely methodical and driven, was determined to figure this out. Building on the work of Pasteur, he knew where to look for the answer, and he spent months isolating bacteria from the diseased animals, growing them in pure cultures, and studying them under a microscope. And what he did during these months of meticulous work was something no one in medicine had ever done before: he created a systematic way to prove that a specific microorganism caused a specific disease.
His system of proof, which became known as Koch's postulates, is still taught in medical schools today, and consists of four steps: First, find the organism in sick patients - by examining their blood, tissues, or other samples under a microscope. Second, isolate it and grow it in pure culture in the lab. Third, introduce it to a healthy animal and reproduce the disease. Fourth, isolate the same organism again from the newly diseased animal and see if it matches the one you originally found. So in short: find the unique organism, isolate and grow it, put it into a new animal, then pull it out and isolate again. If you could do all four steps, you had proof - not just speculation or observation or theory, but proof - that one specific microorganism caused one disease. It was repeatable and testable. Koch had figured out how to apply the scientific method of the hard sciences to the messy field of biology, and now medicine finally had a way to prove causation.
Once Koch had successfully used his new system to find the specific bacterium causing anthrax in the diseased animals, he moved on to studying disease in humans. He used his system to find the bacterium that caused tuberculosis, the leading killer of the time. Then he found the bacterium that caused cholera. For every new discovery he used his postulates, and his method proved effective over and over again. This became the first domino in a series of changes that would lead to what we know as "evidence-based" medicine today. Koch had systematized medical research, and this framework - one cause, one disease, isolated in a laboratory, tested systematically - became THE way to think about disease.
And it worked. Spectacularly. Within decades, scientists had identified the bacteria causing diphtheria, typhoid, tetanus, pneumonia, and the plague. Diseases that had killed millions for millennia suddenly had identifiable causes, and even cures. Antibiotics arrived. Vaccines proliferated. Infant mortality plummeted and life expectancy soared. Medical authority and prestige were on the rise, and they were hard earned; the results from this new way of thinking about disease were incredible. Medicine had officially grown up, from bloodletting and humors to saving lives on a massive scale. The framework was touted and celebrated, and it deserved to be.
The framework was succeeding wildly, but with each victory, it also grew more rigid. Through these successes and saved lives, it became the only way people were allowed to think about disease. If you couldn't isolate a pathogen in a laboratory, culture it, and reproduce the disease in a controlled experiment, then what you were describing wasn't considered a real disease. Medical education was transformed around this model, and research funding started flowing only to studies that fit it. Pharmaceutical companies built empires around it. The infrastructure of modern medicine, now a nearly 5 trillion dollar behemoth, was built on Koch's postulates.
And all the while, as medicine celebrated these victories and organized itself around this new system, chronic disease was becoming the dominant health problem of the developed world. And the framework that had worked so brilliantly for infectious disease just didn't quite work for these new diseases.
Heart disease. Cancer. Diabetes. Autoimmune disease. Alzheimer's. These conditions didn't seem to fit the mold we had trapped ourselves in. Because in an interconnected body, with a condition that affects multiple systems, there is no single pathogen to isolate, no simple cause-and-effect. Researchers were able to find associations, risk factors, and genetic predispositions, but nothing that could cleanly explain what they were seeing. There was nothing in these diseases that worked the way anthrax or tuberculosis had worked. And medicine, instead of questioning whether the framework fit these diseases, or whether perhaps these diseases were something else entirely, doubled down. There MUST be a specific cause: a gene, a protein, a pathway. It had worked so well before, we just haven't found it yet. We must just need more funding, more studies, and more decades of searching.
Medicine threw money and resources at the problem for decades, making little forward progress but sticking rigidly to the model. While researchers kept searching for the single cause, they ran into two main problems. For the big diseases - cancer, Parkinson's, Alzheimer's, heart disease, the high-profile diseases that brought in the funding - they did find things: genetic mutations, protein aggregations, plaques, inflammatory markers. With billions of dollars and decades of research, they could finally point to specific mechanisms. But knowing what was broken wasn't leading to cures. Cancer treatments remained brutal, often killing healthy cells along with diseased ones. Parkinson's and Alzheimer's medications might slow progression slightly, sometimes helping with symptoms for a while, but they couldn't stop or reverse the damage. Heart disease treatments managed symptoms but didn't address why the disease developed in the first place, and didn't actually reverse anything either. The framework had delivered on its promise - it identified specific mechanisms of damage. But it couldn't deliver what patients actually needed: healing.
And then there were all the other patients. The ones with Chronic Fatigue Syndrome. Fibromyalgia. IBS. Chronic pain. These conditions received little funding due to the stigma that they weren't 'real'. The framework's definition of 'real disease' had become completely untethered from patient experience and even from biomarkers - the objective physiological measurements medicine uses to confirm disease and develop treatments. As an example, when the Rome Foundation, the organization that sets IBS diagnostic criteria, updated them in 2016, it didn't use the biomarkers that had been found, which by then had an Area Under the Curve (AUC) of 0.89. AUC is a standard metric for evaluating diagnostic tests, and 0.89 puts these biomarkers in the good to excellent range, rivaling the diagnostic accuracy of any of the 'big' diseases. Instead, they surveyed about 1,665 people from the general population, excluding anyone diagnosed with IBS, found the 90th percentile of symptom frequency, and declared that the threshold for disease. The top ten percent of bowel issues in this swath of healthy people became the new definition of IBS (a sketch of how this works follows below). And subjective pain became the primary marker, despite clear biomarker evidence. So anyone with severe, constant, debilitating symptoms that weren't specifically pain - daily bloating, distention, abnormal bowel function - no longer qualified for diagnosis or treatment. The change cut IBS prevalence in half overnight - not because fewer people were sick, but because they no longer met the criteria. Medicine had defined disease by population statistics rather than by whether people were actually sick, and the change was ratified by the same committee that proposed it. No real research necessary. The pattern holds across the other dismissed disorders too: Fibromyalgia has biomarkers with around 85% diagnostic accuracy, and Chronic Fatigue Syndrome's biomarkers diagnose the disease with a whopping 96% accuracy. But without the legitimacy that would come from people paying attention to these biomarkers, funding dried up, and without funding, treatments were never developed. Millions of patients suffered for decades with conditions that had documented biomarkers but no available treatments.
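To see why a percentile cutoff defines disease by population rank rather than by sickness, here is a minimal sketch in Python. The numbers are invented for illustration - the Rome survey's actual data and scoring aren't reproduced here - but the mechanics are the point: whatever the population looks like, the top 10% becomes 'diseased' by definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symptom-frequency scores (e.g. symptomatic days per month)
# for 1,665 people drawn from the general population. Purely illustrative.
scores = rng.gamma(shape=2.0, scale=2.0, size=1665)

# Rome-style approach: find the 90th percentile of the general population
# and declare it the diagnostic threshold.
threshold = np.percentile(scores, 90)

# By construction, roughly 10% of any population clears the cutoff,
# regardless of how sick (or healthy) that population actually is.
prevalence = np.mean(scores >= threshold)
print(f"threshold: {threshold:.1f} days/month")
print(f"'disease' prevalence: {prevalence:.1%}")
```

Run it on a healthier population and the threshold drops; run it on a sicker one and the threshold rises - either way, roughly ten percent qualify. The cutoff tracks the distribution, not the disease.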
The framework had a label for these cases: "functional disorders." Not real disease, just the body malfunctioning 'without a clear structural cause'. Because no one was looking at these biomarkers, if you couldn't culture it, image it, biopsy it, or see it under a microscope, then by definition it wasn't a disease the framework could recognize. Medical training even has a term for patients with these conditions: 'heartsink patients' - the patients who make doctors' hearts sink when they see their name on the schedule. Medical students are taught about this subset of patients who keep coming back with symptoms but no findings, and who ultimately become a drain on doctors' time and energy. These patients were shuffled between specialists, each finding nothing wrong in their domain, until their labs - ordered based on outdated knowledge - came back unremarkable and they were eventually referred to psychiatry, told their symptoms were psychosomatic and that they needed to manage their stress.
But the patients in both groups were still sick. And medicine, having built itself entirely around finding specific, visible causes, had no framework for either group. Neither the “real” diseases where finding the mechanism didn't lead to cures, nor the “functional” ones, with biomarkers medicine wasn’t looking for. By the 21st century, chronic diseases were causing 75% of all deaths worldwide, and the framework had failed the majority of medicine's patients.
Yet despite this clear gap in our ability to help patients, medicine was becoming MORE dogmatic, not less. The failures should have prompted questions; the pattern was clear in the data. The framework could identify mechanisms but not produce cures. Millions of patients suffered with no identifiable pathology. Medicine was missing something fundamental, the way it had missed microorganisms for centuries, but this time by being too focused on the microscopic to see the pattern at a different scale.
You would think it would be clear the same pattern was repeating again, but we don't seem to learn from history the way we think we do. In the 1830s, "heroic medicine" was the dominant medical framework. Physicians believed that illness stemmed from imbalances in bodily fluids that needed to be expelled, so they would bleed patients repeatedly, or give them mercury to induce violent diarrhea, all in an attempt to purge the disease. And it was actively killing patients. People could tell the official approach was wrong, so alternatives proliferated: herbal remedies, water cures, dietary reform - the 1800s version of today's wellness industrial complex. Some of these helped somewhat. Some were nonsense. None were based on the actual mechanism; people just knew the establishment was failing them. It took germ theory to finally explain what was actually happening, and once it did, medicine gained the authority to dismiss the alternatives. But that same victory created a new blindspot: the assumption that all disease must work like infectious disease. Find the pathogen, kill the pathogen. Heroic medicine had the wrong framework for infectious disease. Germ theory has the wrong framework for chronic disease. And medicine, so buoyed by its victory in figuring out germ theory, can't see that it's repeating the same pattern.
The success had been too spectacular. Germ theory and Koch's postulates hadn't just cured diseases - they had transformed medicine from educated guessing into a respected science. For the first time in human history, doctors could reliably save lives. That success became medicine's identity, its claim to authority. And instead of questioning whether the framework fit chronic disease, medicine closed ranks. To question the framework was to question whether medicine deserved its hard-won prestige.
And by the mid-20th century, enormous infrastructure had been built on that framework. Medical schools restructured their entire curricula around it - going from two-year apprenticeships to up to fifteen years of rigorous training in identifying pathogens, understanding molecular mechanisms, and finding specific causes. Licensing boards tested whether doctors could think within the framework, and research institutions organized themselves into departments studying specific diseases with specific mechanisms. The money followed the framework too. Pharmaceutical companies built trillion-dollar empires on the principle: one drug, one target, one disease. A pill for every ill, each designed to hit a specific molecular pathway. Research grants flowed to projects that fit the model - isolate a mechanism, develop an intervention, then test it in controlled trials. Entire careers were built on expertise within the framework. Questioning it didn't just threaten ideas; it threatened livelihoods, institutions, and industries.
Then there was the public health imperative. Vaccines work. Antibiotics work. When medicine needed people to trust doctors - to get their shots, take their medications, and follow medical advice - "trust the science" became essential messaging. Any hint of uncertainty could be exploited by quacks peddling snake oil, or people spreading dangerous misinformation. Medicine had to project confidence, certainty, and authority. To say "we don't actually understand chronic disease very well" would undermine the credibility needed to get people to accept the treatments that DID work. It became propaganda of sorts, but propaganda with a virtuous goal: make sure people keep taking these life-saving drugs.
So medicine and the industry that surrounded it circled the wagons. The framework that had succeeded spectacularly for infectious disease became the ONLY legitimate way to think about ALL disease. If your illness didn't fit - if it couldn't be cultured, imaged, biopsied, or measured with the tools designed to find what we already knew existed - then either you weren't really sick, or we just hadn't found the mechanism yet. Keep looking. Keep testing. Keep believing the framework will deliver. And anyone who suggested otherwise faced dismissal. Researchers who proposed methods that didn't follow the strict guidelines struggled to get funding. Doctors who acknowledged the limits of the framework risked being labeled unscientific. Patients who insisted their suffering was real despite normal test results were told it was psychosomatic, that they needed to see a psychiatrist, that perhaps they were catastrophizing their symptoms.
By the 1980s the dogma was well cemented into our society, from the institutional level all the way down to how we talked about our own illnesses. An outbreak in a small Nevada town would soon test just how far medicine was willing to go to defend it. In 1984 in the affluent resort town of Incline Village, Nevada, on the shores of Lake Tahoe, dozens of residents - teachers, professionals, active community members - suddenly fell ill with debilitating fatigue. They experienced cognitive problems and flu-like symptoms that wouldn't resolve. Two local physicians, Daniel Peterson and Paul Cheney, meticulously documented the outbreak. They ran tests and tracked patterns, following every medical protocol they knew.
When Peterson and Cheney reported their findings to the CDC, investigators were sent to examine the outbreak. The investigators found that none of what had been documented fit their framework, and so concluded that this was a case of mass hysteria. They even started calling it the "Yuppie flu." The patients - many of whom were successful, educated, and previously healthy - were dismissed as stressed, anxious, and perhaps subconsciously seeking attention. And the doctors, Peterson and Cheney, were ridiculed for taking it seriously and for continuing to research what would later be recognized as Chronic Fatigue Syndrome.
And Incline Village wasn't an isolated incident. For decades prior, similar cluster outbreaks had been documented - at hospitals, schools, even an entire symphony orchestra. From 1984 to 1992, an unprecedented wave of these clusters was reported across North America. And the CDC systematically dismissed them as mass hysteria. Because of this systematic dismissal, around 1992, the reporting just...stopped. The illness didn't disappear - Chronic Fatigue Syndrome (ME/CFS) continued in sporadic individual cases - but the cluster outbreak pattern that had been documented for over 50 years suddenly vanished from the medical literature. When institutions declare something illegitimate, people stop reporting it. These patients' suffering was real, but because the CDC declared it hysteria, they lacked the funding needed to find the markers the framework would take seriously. And as you now know, without a pathogen to culture or a lesion to biopsy, the framework left them with no category for their illness except: not real.
But the dogmatism went even deeper than dismissing what didn't fit. Even discoveries that followed the framework's rules perfectly could be rejected if they challenged the wrong beliefs. In 1982, two Australian researchers, Barry Marshall and Robin Warren, discovered that stomach ulcers - long attributed to stress and excess acid - were actually caused by the bacterium Helicobacter pylori (H. pylori). They followed Koch's postulates and the framework to the letter. They isolated the organism from diseased tissue, they cultured it, and they even demonstrated causation when Marshall, in an act of desperation to prove his point, drank a culture of H. pylori and gave himself acute gastritis, then cured it with antibiotics. This was Koch's postulates executed flawlessly: a clear bacterial cause for a disease. Exactly what the framework was designed to identify.
But the medical establishment rejected it anyway. For years. In 1983, their abstract was one of only 11 rejected out of 67 submissions to the Australian Gastroenterology Society meeting. Their paper linking the bacteria to ulcers faced extensive delays at The Lancet, one of medicine's most esteemed journals, ostensibly due to "difficulties finding reviewers." At conferences, the microbiologists in attendance were intrigued enough to start research projects, but gastroenterologists routinely dismissed the theory as "preposterous." Then Marshall discovered something that revealed just how deep the resistance went: when he submitted his first paper to a US journal, it was rejected - not because the science was flawed, but because senior gastroenterologists had made a policy decision that the theory was "too new and radical" and they wouldn't accept papers on it. They made this decision despite the solid science, because it seemed too…new.
The prevailing wisdom at the time was that bacteria couldn't survive in stomach acid, that ulcers were a stress disease, and that the real treatment was antacids and acid-suppressing drugs. It's important to note that Tagamet and Zantac were among the world's biggest-selling prescription drugs at the time, generating billions in pharmaceutical sales. Marshall called it fighting the "Acid Mafia." It would take a decade before the medical community integrated his findings into treatment protocols, and twenty-three years before he received his Nobel Prize for the discovery.
Even when researchers follow the rules perfectly, even when individual doctors or researchers see the truth clearly, if it challenges the framework, they're dismissed until the evidence becomes undeniable - and sometimes not even then.
Chronic fatigue syndrome didn't fit the framework, so it was dismissed. H. pylori didn't fit with established knowledge, so it was rejected. The spiral into dogma was complete. The "science" of medicine had become unfalsifiable. If it found a mechanism, that proved the framework worked. If it didn't find a mechanism, that just meant we needed to look harder, develop better tests, search at a different molecular level, or dismiss the condition as not real. If you look through the medical literature, it's full of researchers documenting findings that don't fit, calling things "paradoxes," noting things are "surprising" or "novel," saying "mechanisms remain unclear" - and then the papers just... stop. They don't follow the implications. They add the anomaly to the list and move on. There is no observation, no pattern of failure, that can challenge the fundamental assumption: all disease must work the way infectious disease works. When a new framework explains existing anomalies better than the current model, it faces resistance proportional not to the weakness of its evidence, but to how fundamentally it threatens established thinking. The establishment voraciously searches for anything that could cast doubt on the new idea, demanding the new framework be perfect in every scenario while giving the old one infinite grace. Semmelweis had the data. Marshall followed Koch's postulates perfectly. The evidence didn't matter - the framework was more powerful than proof.
But if you know anything about science, you know that it’s supposed to be provisional. "Here's what we know now, subject to revision when we learn more." That's the whole point. We celebrate scientific breakthroughs precisely because they overturn what we thought we knew. But medical science has become something else. It’s become: "This IS how disease works. Questioning it isn't scientific curiosity - it's dangerous anti-science thinking." This system, the very framework that was supposed to help us understand disease, had become the thing preventing us from seeing what we were missing. And this dogmatic institutionalized ‘truth’ became the operating system of modern medicine. For decades, it determined what counted as real disease, what deserved funding, and even which patients deserved belief. An entire generation of physicians was trained to believe the framework was reality itself, not a tool for understanding it.
And then, in 2022, came the drug the framework couldn't explain.
Trial after trial showed GLP-1s improving a dozen different diseases. The results were being published rapid-fire in medical journals, and new uses were getting approved by the FDA. But nobody could explain why one drug was helping across so many conditions.
And the research money was pouring in. Each new trial revealed another disease responding to the same drug, and each time, the response was: publish the results and move on to the next trial. Never stop to ask the obvious question.
If one peptide improves diabetes, heart disease, liver disease, kidney disease, Parkinson's, Alzheimer's, and addiction, what does that mean about what these "different" diseases actually have in common?
The framework had no answer. Because answering that question would mean admitting the framework was wrong about what chronic disease actually is.
But it turns out there is an answer, and this answer has been clinically and academically documented across all of medicine. Every single piece of data points to one overarching mechanism, a physiological state that precedes all disease expression, that medicine has been documenting and then overlooking, because it's looking at the pieces instead of the whole.
You've probably heard the parable of the blind men touching different parts of an elephant, each convinced he understands what he's touching. One feels the trunk and thinks it's a snake. One feels the leg and thinks it's a tree. One feels the tail and thinks it's a rope. They're all right about some details of their piece, but they're all wrong about the whole. Medicine has been meticulously documenting every piece of the elephant for over a century without ever stepping back to see what it's actually looking at.
So let's do what medicine can't. Let's take a look at the pieces across all of medical research. Every medical system, every specialty field. And then we can step back and see the entire elephant that's been hiding in plain sight.
We'll start with one clue in particular - something that's been measured in hundreds of thousands of clinical trials, documented since the 1950s, and dismissed every single time as noise to be subtracted away.
Chapter Two
In 1950 a physician named Stewart Wolf conducted a (morally questionable) experiment. A woman had come into his office because she couldn't stop vomiting. She was pregnant, had lost a lot of weight, couldn't keep any food down, and was becoming dangerously dehydrated. The condition she presented with is called hyperemesis gravidarum, and Wolf told her he had just the thing for her: a powerful new drug that would make all of her symptoms go away.
Then he handed her ipecac.
If you don't know what ipecac is, it's not an anti-vomiting medication. In fact, it's the exact opposite: ipecac is what you give someone who has ingested poison and needs to immediately and violently clear the contents of their stomach. It was so trusted at the time that parents were encouraged to keep a bottle in the medicine cabinet in case their children ate something they shouldn't. And this is what Dr. Wolf handed to a woman who couldn't stop vomiting - a drug that should have made her symptoms even worse. He gave ipecac to his vomiting patient, told her it would cure her, and then waited, watched, and documented what happened.
And what actually happened? Her nausea disappeared, and her vomiting…stopped. Completely. Wolf measured her body's response - her gastric contractions and the other physical mechanisms of nausea and vomiting - and he saw all of it reverse. The waves of contractions that had been forcing her stomach contents up through her esophagus were replaced by the normal, downward peristaltic waves of healthy digestion. This was clearly not just perception; her body's measurable physical responses completely reversed, and Wolf meticulously documented it all. He had given her a widely used drug that predictably caused violent vomiting, and its pharmacological mechanism had been not just overridden but reversed by her expectation.
Wolf published his findings in the Journal of Clinical Investigation. He had demonstrated that belief could override pharmacology; that the body's expectation could reverse a very well proven chemical mechanism. And this wasn't the first time Wolf had documented remarkable observations from a single patient, and it wasn't the first time his colleagues dismissed him for it. When he'd presented earlier findings on stress and gastric function to the Gastroenterological Society? They…laughed. One case, they said. How could anyone draw conclusions from a single patient?
They had a point; one case truly isn't proof of anything. But a lack of proof is not the same as a lack of evidence. It was just one case, but it was a case with measurable, documented, physiological changes. This should have at least sparked curiosity. It should have prompted the questions: has anyone else seen this? Does this happen with other patients, other conditions? Instead, it was dismissed as an anomaly. An interesting outlier at best. Nothing worth investigating further.
To understand how unusual this response is for a science, we need to look at what was going on in physics at around the same time. In 1956 physicists noticed something weird about particle decay; the math didn't seem to be working the way they expected it to. Most physicists at the time assumed it was a measurement error, as anomalies typically are, but they investigated further anyway, just to be sure. Through this investigation two physicists, Lee and Yang, proposed something radical: maybe one of physics' fundamental laws was wrong - maybe parity wasn't actually conserved in weak interactions. Within months, experiments confirmed their hypothesis, and the following year they won the Nobel Prize for the discovery.
Now let's look at what happened in 2015, when physicists at CERN noticed a bump in their data at 750 GeV, which could have meant the discovery of a new particle. They were excited, but also cautious. Not because they'd proven anything yet, but because they'd found something that didn't fit. They were so energized by the possibility of something new that over 250 papers were published analyzing this single anomalous signal. Physics chases anomalies because occasionally one rewrites the textbooks. And physicists want to rewrite the textbooks. The scientific spirit is captured in statistician George Box's observation: 'All models are wrong, but some are useful.' Physics embodies this - the curiosity, the willingness to be wrong, the excitement when something doesn't fit.
Don't get me wrong, medicine is different, because human lives are at stake. You can't just rush new findings into treatment, and testing things on humans and animals carries ethical considerations that other sciences don't have to contend with. Caution is not only justified but necessary. But caution about treatment is not the same as incuriosity about novel findings. Wolf wasn't proposing that a new therapy be implemented immediately; he had documented something that had already happened: expectation-mediated physiological changes that reversed a drug's known pharmacology. How and when belief could override chemistry could have been tested ethically and safely. If anything, the stakes should have made investigating the claims more urgent, not less - human lives were involved, and the finding, if true, had far-reaching implications.
Five years later, in 1955, Henry Beecher, an anesthesiologist at Massachusetts General Hospital, looked at 15 clinical trials spanning different conditions and treatments to assess, at scale, the placebo response Wolf had demonstrated. Wolf had one patient's worth of data; Beecher was looking across 1,082 patients, and he found that 35% of them experienced "satisfactory relief" from placebo alone. This was no longer an isolated case or an anomaly - it was a pattern across hundreds of patients and multiple conditions - and Beecher published his findings in a paper titled "The Powerful Placebo" in The Journal of the American Medical Association (JAMA), one of the most prestigious journals in medicine. So now Wolf had shown it could happen: belief reversing pharmacology in one dramatic case. And Beecher had quantified it: 35% of patients showing satisfactory relief across different diseases. The phenomenon was clearly real, measurable, and reproducible. So what did medicine do with this discovery?
Beecher himself framed it as a problem. Ted Kaptchuk, the director of Harvard's Program in Placebo Studies, has described Beecher's framing as treating placebo like 'the devil' in clinical trials - something to be controlled for and eliminated. His paper argued for placebo-controlled trials as a way to subtract the placebo response away and isolate 'real' drug effects. His focus wasn't on understanding how belief could produce physiological changes - it was on eliminating it from the data so that the 'real' science could show through.
Then in 1962, thalidomide, a drug that was being prescribed to pregnant women for morning sickness, was found to have caused over 10,000 birth defects worldwide. In the US alone, roughly 20,000 patients had received the drug in unregulated clinical trials, and it had seemed safe because no one had tested it rigorously enough to catch its effect on developing fetuses. There was enough public outrage that Congress responded with new requirements for drug testing before a drug could enter the market. Pharmaceutical companies would now have to prove both safety and efficacy before approval - not through observation or doctor testimonials, but through standardized controlled trials. This was the beginning of the modern randomized controlled trial, and through it, placebo controls became law.
By the 1970s, beating placebo became the gold standard of proving efficacy. Every drug seeking approval had to demonstrate that it worked better than placebo alone. Which meant that every study was actively measuring placebo effects, and rigorously recording their outcomes. The irony was perfect. Medicine needed placebo controls to prove drugs worked, and in doing so, it accidentally documented - with expert precision, over seventy years, in hundreds of thousands of trials - the very mechanism it was trying to eliminate. For seventy years, medicine measured placebo in nearly every clinical trial ever conducted, with millions of patients all showing the same thing: placebo produces measurable physiological changes. And for seventy years, medicine asked only one question: how do we make it disappear?
In 2001, the Cochrane Collaboration appeared to have the answer. Hróbjartsson and Gøtzsche published a systematic review analyzing 130 placebo-controlled trials across 40 different clinical conditions, and their conclusion was definitive: placebo effects were weak, clinically insignificant, and possibly even nonexistent. The paper appeared in the New England Journal of Medicine - another of medicine's most prestigious journals - and medicine ate it up. Finally, rigorous analysis had confirmed what many suspected: placebo was mostly myth. The emperor had no clothes. The paper became the most cited work on placebo ever published, functioning as the definitive reference that shut down conversations about whether placebo effects were real. Researchers could now focus on real mechanisms instead of the troublesome variable that had complicated seventy years of drug trials.
Now to anyone examining the analysis with the same rigor physics applies to anomalous findings, it was pretty clear that Hróbjartsson and Gøtzsche's methodology wasn't scientifically sound. They had 130 trials showing placebo effects across different conditions, and some conditions showed consistently strong effects. But Hróbjartsson and Gøtzsche didn't report it that way - they averaged everything together. Parkinson's with smoking cessation trials. Nausea with schizophrenia. Asthma with infertility. All 40 conditions pooled and averaged into a single analysis, measured against an arbitrary marker of "significance" decided on by the authors themselves. And through this pooling and averaging, the real effects disappeared into the noise.
This would be like averaging the effectiveness of antibiotics across bacterial infections, viral infections, broken bones, and depression, then concluding antibiotics don't work because the average effect was small. No drug would survive that analysis. No physicist would average the behavior of electrons across completely different experimental conditions and call it rigorous.
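A toy simulation makes the problem concrete. The per-condition effect sizes below are invented (they are not Hróbjartsson and Gøtzsche's numbers); the point is what happens when a few genuinely strong effects are pooled with dozens of near-zero ones:

```python
from statistics import mean

# Hypothetical standardized effect sizes for 40 conditions: a handful
# respond strongly to placebo, the rest barely respond at all.
effects = {
    "pain": 0.60,
    "nausea": 0.55,
    "parkinsons_motor": 0.50,
}
effects.update({f"condition_{i}": 0.02 for i in range(37)})  # 37 near-zero responders

pooled = mean(effects.values())
print(f"pooled effect across all 40 conditions: {pooled:.2f}")  # ~0.06
print(f"effect in pain trials alone:            {effects['pain']:.2f}")
```

The pooled average comes out near zero even though three conditions show effects larger than many approved treatments - exactly the disappearing act described above.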
When they wrote up their results, Hróbjartsson and Gøtzsche emphasized outcomes where placebo showed no effect while minimizing the outcomes where it did show effects. They did acknowledge that placebo might have "possible small benefits" for pain and subjective outcomes, but then spent most of the paper explaining why even those effects probably weren't real - that they were probably just reporting bias, or possibly regression to the mean. Their conclusion wasn't that placebo works for some conditions but not others; it was that placebo effects don't exist in any clinically meaningful way at all.
And then there was the circular exclusion criterion buried in their methodology. They brazenly excluded trials where "the alleged placebo had a clinical effect not associated with placebo" - which meant they excluded cases where placebo worked too well, reasoning that these placebos must have contained some active ingredient or otherwise not been true placebos. Any particularly strong placebo effect was, by their definition, not a placebo effect, and so they just…quietly removed it from their data set. When other researchers reanalyzed the same data, they reached opposite conclusions. Wampold and colleagues found the effect sizes were essentially identical to what Hróbjartsson and Gøtzsche had calculated - the difference wasn't in the numbers but in the interpretation. An effect size of 0.28 could be called "small and clinically insignificant" or "robust and meaningful" depending on your framing. For context, that effect size - the one dismissed as meaningless - exceeds that of many accepted medical interventions.
But the reanalysis came too late. For two decades, the Cochrane paper shut down investigation. Placebo was myth. The data had spoken.
Then in 2005, Harald Walach and his colleagues decided to look at placebo data differently. They analyzed over a hundred clinical trials and found something medicine had been documenting but never noticed: drug response and placebo response moved together, trial after trial. When patients improved significantly on a drug, patients on placebo improved by similar amounts. When the drug didn't work well, placebo didn't work well either. The correlation coefficient was 0.78 - meaning roughly 60% of the variation in treatment outcomes could be explained by what was happening in both groups. Think about what that means: in a trial where 40% of patients improve on the drug and 35% improve on placebo, medicine celebrates the 5-point difference. The drug works! But 87.5% of what patients actually experienced - the relief they felt, the symptoms that resolved - was happening in both groups. Something other than the drug's pharmacology was producing most of the improvement. This pattern held across over a hundred trials. The thing that determined whether a trial would show strong effects or weak effects wasn't primarily the drug - it was something happening in both arms.
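The arithmetic behind those two numbers is worth spelling out. Here is a minimal sketch using the figures above (the 0.78 correlation is Walach's reported number; the 40%/35% trial is this chapter's illustration, not data from a specific study):

```python
# Correlation between drug-arm and placebo-arm improvement across trials
# (Walach and colleagues' reported figure).
r = 0.78
shared_variance = r ** 2
print(f"R^2 = {shared_variance:.2f}")  # ~0.61: ~60% of outcome variation is common to both arms

# Illustrative trial from the text: 40% improve on drug, 35% on placebo.
drug_improved, placebo_improved = 0.40, 0.35

drug_specific = drug_improved - placebo_improved  # what the drug adds: 5 points
shared = placebo_improved / drug_improved         # fraction of drug-arm improvement also seen on placebo

print(f"drug-specific improvement: {drug_specific:.0%}")  # 5%
print(f"improvement present in both arms: {shared:.1%}")  # 87.5%
```

The 5-point gap is what gets a drug approved; the 87.5% of improvement shared by both arms is what goes uninvestigated.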
Decades of data showing that most improvement in clinical trials occurs in both groups, pointing to something massive and obvious, and the response was…pretty much nonexistent. Nothing much was said, and nothing much changed.
A small number of researchers did keep looking at placebo effects. The world's first and only research center dedicated to understanding placebo mechanisms was established at Harvard in 2011. And in 2014, some researchers formed the Society for Interdisciplinary Placebo Studies, an international association that holds a tiny conference every two years. These teams made, and are still making, real discoveries: they've documented endogenous opioid release in response to placebo, shown dopamine production in Parkinson's patients given fake treatments, and found that conditioning and expectation can produce measurable physiological changes. They proved Wolf and Beecher had been documenting something real. Yet despite seventy years of consistent findings and growing evidence of physiological mechanisms, medicine couldn't answer basic questions: When do placebo effects occur? Why do some conditions show massive placebo responses while others show none? What determines whether a patient will respond? The research was fragmentary, studying individual pieces without a unifying framework.
To understand why progress around placebo response remained so limited, let’s look at the numbers. As of the time of publication the NIH - which funds the majority of publicly funded medical research in the United States - is spending roughly $47 billion annually across all research. Cancer research gets $7.3 billion. Alzheimer's gets $3.8 billion. Diabetes gets $1.1 billion. The NIH tracks spending across more than 300 research categories, from rare genetic disorders to common chronic diseases, publishing detailed breakdowns of where every dollar goes. So where does placebo research rank on that list?
It doesn't.
Placebo isn't a category. Rare genetic disorders affecting a few thousand people have dedicated funding streams. Diseases with clear treatments and established protocols get billions. But the mechanism documented in hundreds of thousands of trials, affecting millions of patients, explaining 60% of treatment outcomes? Medicine doesn't even track what it spends trying to understand it, indicating it receives essentially no dedicated funding.
Because of this lack of funding, and the lack of investigation into findings that don’t match what we think we know, no one has thought to ask certain questions - to step back and look at the whole of the data and think: what if there IS something here. Not just does or doesn't this mean something, not just why might this be happening, but even more importantly - is it possible placebo is telling us something about our bodies we didn't even realize we were measuring?
The data is there if you step back and look at it - clues are everywhere, documented but virtually unexplained - and it is telling us something vital. Take blood pressure. When doctors measure blood pressure in the office, there is a very large documented placebo effect. But when they strap a blood pressure monitor on you and send you home for the day, there is virtually none. Why, when they are measuring blood pressure in the same patient in both cases, does placebo present so differently? Cardiology literature has little to say about this mystery. Look for cardiology's explanation and mostly you'll find only that home monitoring is more accurate than office measurement, so it's the gold standard when precision is needed. When they do try to explain it, they most often invoke "white coat syndrome" - the nervousness of being in the office, which causes blood pressure to rise. But this doesn't explain it at all, because the placebo effects seen in the office cause blood pressure to FALL, not rise. Despite decades of research and billions in funding, their best explanation for why these two vital measurements vary so wildly doesn't actually account for what they're seeing. And to show how wild this is: cardiology receives $3 billion annually from the NIH alone - and NIH money is only about a third of all medical research and development funding, so the real total is probably around $9 billion once pharmaceutical money is factored in. And the field can't explain this basic discrepancy.
And blood pressure isn't the only paradox. When measuring airflow out of the lungs, doctors regularly conduct two main tests: FEV1 (forced expiratory volume in one second) and peak expiratory flow (PEF). They test both in the same patients at the same time, and the placebo effects are opposites: one improves under placebo, while the other actually goes down. And what's interesting is that respiratory medicine hasn't even tried to explain this variation; it just does what cardiology does and recommends one test as more reliable than the other. But unlike cardiology, which recommends the more reliable and accurate test, respiratory medicine has unknowingly done the opposite.
Respiratory medicine chose FEV1 as the "gold standard" because it seemed more sensitive, more responsive to treatment. It interpreted that greater responsiveness as superiority. But the greater responsiveness includes the placebo effect. Medicine preferred the measurement that changed more, without realizing that its greater sensitivity was partly placebo responsiveness. Peak flow, dismissed as "too variable" and "effort-dependent," was actually capturing something different: voluntary muscle force, which doesn't respond to placebo. In other words, the measurement chosen as superior was chosen partly because it had the higher placebo response.
And blood pressure and lung function aren't isolated cases. Medicine has been documenting discrepancies like these for decades - measurements that should show the same thing but don't, conditions that respond differently to placebo for no clear reason, patterns that are noted but never explained.
So now is a good time for us to take a step back and look at what we have in front of us; really spread it all out and see what the data actually says. And there is a lot of it. Seventy years of placebo-controlled trials. Hundreds of thousands of studies documenting placebo effects across every area of medicine. Mountains of data showing which conditions respond to placebo and which don't, which measurements show effects and which show none. And when you actually look at all this data together - not trial by trial, but the whole picture - a pattern emerges that seems, in retrospect, almost obvious.
The prevailing assumption in medicine is that placebo only works on subjective measures - pain, nausea, things patients report feeling. Despite seventy years of data showing otherwise, that is what medicine, and with it researchers, physicians, and the general public, believes: placebo is psychological. It's about perception, not physiology. So let's test that assumption, pull some levers, and see what we find. Let's look at measurements that can't be explained by belief or perception alone - objective physiological changes that happen whether a patient is aware of them or not.
Coming back to blood pressure, let's look at what the data is actually showing us. When patients are given sugar pills and told they will decrease their blood pressure, study after study shows their blood pressure drops. And not just a little: an average of 5-7 mmHg systolic, sometimes more. Real, measurable changes recorded by automatic cuffs - measurements that can't be influenced by perception in the way we have been taught to think of placebo. These are the same measurements doctors use to diagnose hypertension and prescribe medications. And though this is pretty astonishing, blood pressure can be hard to conceptualize: you can't see it, and what does 5-7 mmHg really mean? Could it be measurement error?
So let’s look at something a little different, something a little more visible. When you think of Parkinson’s you probably think of the shaking that defines it in our perception of the disease. It’s measurable and even visible to the naked eye. If you’re not familiar, Parkinson’s rest tremor is called a “rest” tremor because it happens when you’re at rest, and stops happening when you voluntarily engage your muscles. And the way doctors have found to treat this tremor is by increasing the dopamine in your brain using a drug called Levodopa, which gets converted to dopamine once it’s in your body. This drug reduces the shaking in rest tremor and is the first line of defense against this tremor in patients.
So look at the clinical trials for drugs that reduce shaking by increasing dopamine in the brain, and then look at what happens in the placebo arm. When these patients are told they're taking a drug to reduce their shaking, their shaking measurably decreases. Not a little, but a LOT: by 25-45%, with some patients showing over 70% improvement. That is substantial evidence on its own, but what will strike you more is that what's happening in the brain to produce these changes is exactly what happens in the active drug arm: the patients' brains are producing more dopamine. The studies show measurable increases in dopamine production in patients receiving placebo, producing improvements that, for some patients, rival any drug currently on the market.
We could go through more examples, but I think you get the picture. Placebo is not just perceptual; it acts on real physiological and biochemical activity in the body. And since a placebo arm is required for nearly every clinical trial, there is a lot of data showing exactly this.
Now, to be crystal clear about what we're talking about, we also need to look at that same dataset and ask whether there are things that don't show placebo effects. Because surely, if placebo acted on every single thing, the prevailing assumption wouldn't be that it was merely psychological. And this is what the data shows as well: not everything responds. Take the two most studied functions from each of the major body systems and assess the placebo effect on each one, and only 10 of the 22 show high placebo effects. Bone mineral density and fracture healing, ovulation and wound healing - these show low to no placebo effects, reliably, across the medical testing landscape. So what do the high responders have in common? What do the low responders have in common? What could this data be showing us that medicine has failed to see across the decades?
Let’s think about what the high-responders have in common. Heart rate. Blood pressure in the clinic. Breathing patterns. Gastric motility. Pain perception. Tremor at rest. These aren't things you consciously control, they happen automatically. So what about the non-responders: bone density. Wound healing. Blood pressure over 24 hours. Red blood cell production. Ovulation. These are also automatic, so it's not about voluntary versus involuntary. The difference is something else.
Let's look at heart rate again and see if it has anything to show us. Your heartbeat can change quickly: if you were to stand up right now, your heart rate would increase within seconds, and if you then take a few deep breaths, it will slow down momentarily. Compare that to bone density: changes in your bones happen over weeks or months; there's no moment-to-moment adjustment. So speed seems like it could be part of it, but maybe not quite the whole picture.
Let’s look at pain perception; that responds to placebo. But so does nausea. Yet pain signals travel through nerves at hundreds of miles per hour, while nausea involves chemical signals in the gut that move much slower. Different speeds, but both show placebo effects. So it's not just speed…what else? Maybe if we look at how these systems work: heart rate changes through direct nerve signals but bone density changes through hormones. Different types of control; direct neural signaling versus hormonal messengers. And look - pain and nausea both work through nerves. Heart rate, breathing, digestion - all controlled by nerve fibers. The things that respond to placebo all share this: direct neural control, real-time regulation, moment-to-moment adjustments based on your body's needs and your brain's perception.
But wait - muscle strength is also neural. When you flex your bicep, nerve signals travel from your brain to your muscle. That's direct neural control too. Yet muscle strength shows essentially no placebo response. So it's not just about having neural control - maybe it's about what KIND of neural control?
Voluntary muscle movement, the kind you control consciously when you decide to lift your arm, is mediated by your somatic nervous system. Heart rate, breathing rhythm, digestion, blood pressure regulation: these are all mediated by your autonomic nervous system (ANS). Somatic functions you control consciously; autonomic functions are automatic, keeping things running in the background and adjusting based on your body's needs and your brain's perception of threat or safety.
So let's look back at those apparent paradoxes from earlier and see if this helps explain what we were seeing. Clinical blood pressure, like when your doctor puts the cuff on you in the office, shows large placebo effects, while ambulatory blood pressure, the kind you wear home for a day, shows virtually none. So are these being mediated by different systems? It turns out yes. Clinical (short-term) blood pressure is controlled by baroreceptor reflexes: moment-to-moment adjustments your body makes in response to context and perception. ANS regulated. Ambulatory blood pressure, by contrast, is dominated by renal mechanisms that operate over hours and days, and these aren't under ANS control.
What about the lung function example? FEV1 measures how much air you can expel in one second - think of it like a balloon that's really filled up versus one that's only half filled up. If you unplug the end and let air come out, which one will expel more air? The fuller one, right? Well it turns out that the smooth muscles involved in this involuntary process of pulling more or less air into your lungs are ANS mediated, which tracks with our theory. Whereas peak flow measures the maximum speed of airflow - how hard you can push air out using your voluntary muscles. That's not ANS mediated, it's somatic. So yeah, this explains them.
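To keep score before we draw the conclusion, here is the tally so far in code form. This is a minimal sketch: the control-type labels simply restate the examples we've just walked through, not data pulled from any trial.

```python
# A sketch of the framework as described above. The control-type labels and
# placebo observations paraphrase the text's examples; illustration only.
functions = {
    "heart rate":                ("autonomic",      True),
    "clinic blood pressure":     ("autonomic",      True),
    "gastric motility":          ("autonomic",      True),
    "rest tremor":               ("autonomic",      True),
    "FEV1":                      ("autonomic",      True),
    "peak expiratory flow":      ("somatic",        False),
    "muscle strength":           ("somatic",        False),
    "ambulatory blood pressure": ("renal/hormonal", False),
    "bone density":              ("hormonal",       False),
    "wound healing":             ("local/hormonal", False),
}

# The pattern: ANS mediation lines up with placebo responsiveness, one for one.
for name, (control, shows_placebo) in functions.items():
    predicted = control == "autonomic"
    flag = "match" if predicted == shows_placebo else "MISMATCH"
    print(f"{name:28s} {control:16s} placebo={shows_placebo!s:5s} {flag}")
```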
So what we are seeing is that the functions that respond to placebo all seem to be controlled by the ANS. And when you look at the data, it bears this out with remarkable accuracy. Those 22 functions we talked about earlier? They span every major body system, and every one of them matches this framework. The likelihood of all 22 lining up by chance is less than one in four million.
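For the curious, here is the back-of-the-envelope arithmetic behind a figure like that, under the simplifying assumption that each function independently had a coin-flip chance of landing on the predicted side:

```python
# Where "less than one in four million" comes from, assuming each of the 22
# functions independently had a 50/50 chance of matching the prediction.
p_all_match_by_luck = 0.5 ** 22
print(p_all_match_by_luck)             # ~2.4e-07
print(round(1 / p_all_match_by_luck))  # 4194304, i.e. about 1 in 4.2 million
```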
For seventy years, medicine had been measuring placebo in hundreds of thousands of trials, documenting which conditions responded and which didn't. The pattern was there the entire time; medicine just never asked the right question. In fact, medicine barely asked any questions at all. It just corrected for placebo as nuisance noise, while systematically documenting something fundamental about how our bodies work, how disease actually works, and, more importantly, what we can do about it. The ANS mediation we just uncovered is one piece of the elephant - an admittedly large piece - sitting in their data, largely unexamined. And on its own it almost raises more questions than it answers. So next, let's look at some of the diseases they have been spending time and money on, billions of dollars and decades of research, and see what that part of the elephant will uncover for us. Let's start with cancer, arguably the most investigated disease in history, and see what it can teach us about disease that we have been missing.
Chapter Three
Your body produces new cancer cells every day. Hundreds of billions of them. And every day, your body finds them and eliminates them.
Now, to be more specific, we're talking about cells that could become cancer if left unchecked: cells that are damaged, that divided incorrectly, or that got infected by pathogens. These could-become-cancer cells are produced by the billions every day of our lives, and every day our bodies clear them. But when our bodies stop clearing them efficiently, they start to accumulate. Now there are far more cells than there should be in a given area, many of them carrying mutations that mean they shouldn't still be there, and these accumulated cells start competing with each other for what are now limited resources and space. When that happens, natural selection begins. Survival of the fittest: cell edition. The cells that reproduce fastest, grab the limited resources most efficiently, or create environments the other cells can't tolerate - those are the ones that survive.
Because our cells reproduce so quickly, this natural selection process plays out at evolutionarily eye-watering speeds. Each time a cell mutates and doesn't get cleared, it’s made it past another round of selection, which means it's slightly better at surviving than the last version. Think about how antibiotics work. I’m sure you know you’re supposed to finish the entire course of antibiotics even if you start feeling better. Because if you stop early, the bacteria that survived the first doses are the strongest and most resistant ones, which means they’re the ones most likely to survive future rounds of antibiotics. The same thing happens with these cells. Each round of mutations that survives clearing is a little more resistant to your body's defenses. String enough of those mutations together, and that becomes what we call cancer. This is why the places with the highest cell turnover - the gut, the skin, the blood - have the highest cancer rates. More divisions means more mutations means more raw material for selection. Places with little turnover, like the heart, see almost no cancer.
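If you want to watch this dynamic play out, here is a toy simulation of the process just described. Every number in it is made up for illustration; the only point is the direction of the drift.

```python
import random

# Toy model: imperfect clearing plus division. Each cell carries a
# "resistance" trait in [0, 1]; clearing misses a cell with probability
# (0.4 + resistance), so harder-to-clear cells survive each round more often.
def simulate_selection(rounds=25, capacity=5000):
    cells = [random.uniform(0.0, 0.1) for _ in range(capacity)]
    print(f"mean resistance at start: {sum(cells) / len(cells):.2f}")
    for _ in range(rounds):
        cells = [c for c in cells if random.random() < 0.4 + c]  # clearing
        # survivors divide; daughters inherit the trait with small mutations
        cells = [min(1.0, max(0.0, c + random.gauss(0.0, 0.02)))
                 for c in cells for _ in range(2)]
        random.shuffle(cells)
        cells = cells[:capacity]  # limited space and resources
    if cells:
        print(f"mean resistance after {rounds} rounds: "
              f"{sum(cells) / len(cells):.2f}")

random.seed(0)
simulate_selection()
```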
This is all just basic natural selection, and mapped out this way it seems pretty obvious. But then why does anyone get cancer in the first place, if our bodies are so good at clearing these aberrant, mutated cells? And since this is clearly a clearing failure leading to natural selection at the cellular level, why is medicine treating the cells themselves as the culprits? Why isn't medicine asking instead what caused this normally efficient clearing system to stop working?
Let's look back through the history of cancer research to see if we can figure out why it isn't being viewed through this lens.
In 1923, a researcher named Otto Warburg was studying cancer in his lab in Berlin, and he saw that cancer cells were using a process called glycolysis, which produced 2 units of energy (ATP) instead of the usual 36 from the same amount of fuel. The understanding at the time was that cells only behaved this way to compensate when there wasn't enough oxygen, but the cancer cells he was observing were doing it with more than enough oxygen to go around. Why would they do something so seemingly wasteful?
Seeing this, he thought he'd found THE answer to what cancer was. Here were cells that had switched to a completely different way of making energy, something normal cells never did unless they were oxygen-starved, and he reasoned that this must be the defining change that transformed normal cells into cancer cells. And he wasn't alone in this belief. His discovery was seen as so momentous, and he was so celebrated for it, that the Rockefeller Foundation built him his own building designed to his requirements: forty rooms, no offices or conference rooms, just labs and a library. This is where Warburg was doing his work, in Berlin, as a Jewish and almost certainly gay man (though this was never publicly discussed), when in 1941 the Nazi regime dismissed him from his position. Within weeks, Hitler personally intervened and reinstated him. Hitler's mother had died of cancer, and he was so convinced Warburg was going to cure it that he personally allowed this gay Jewish man to continue his work.
But unfortunately Warburg never did end up delivering the cure. Because while he'd found something real - many cancer cells are shifted to this emergency metabolism - he had the causation backwards. This wasn't how cells become cancer; it was what most cells do under certain conditions. The Warburg effect wasn't the cause but a consequence.
Then in 1992, a pediatric geneticist named Gregg Semenza discovered something called HIF-1, a master regulator in the cell. One of its jobs is to direct cells to shift into glycolysis. So now we knew cancer cells were switching to glycolysis, and we knew how cells got directed to make that switch. But the assumption at the time was still that, generally speaking, HIF-1 sent that signal only when oxygen was low.
Then throughout the 1990s and 2000s, researchers started finding that sometimes HIF-1 was activating even when oxygen levels were perfectly normal. Inflammation could trigger it. Stress hormones could trigger it. Reactive oxygen species from overworked mitochondria could trigger it. By the mid-2000s, multiple studies showed that metabolic stress, inflammatory signaling, and oxidative stress could all activate HIF-1 and drive cells into Warburg metabolism without any oxygen deprivation at all. So Warburg had found that cancer cells were using glycolysis, and Semenza had found how cells got switched to glycolysis, and now we knew this could happen when the body was under various types of physiological stress.
Now let's jump back to the 1970s, when Macfarlane Burnet and Lewis Thomas proposed that the immune system must be patrolling the body and clearing the genetic damage that accumulates over a lifetime. They posited that this was an evolutionary necessity.
But then in 1974, Osias Stutman at Memorial Sloan Kettering seemingly disproved the theory. He was actually trying to prove that immune surveillance prevents cancer, by testing whether mice without immunity would develop more cancer than mice with it. Instead, he found that his immune-deficient mice got cancer at the same rate as normal mice. The finding became a landmark paper, and the pace of research into tumor/immune interactions slowed significantly in its wake. For many scientists, it closed the case: immune surveillance was wrong. Stutman went on to chair the Department of Immunology at MSKCC and became an esteemed figure in the field.
But decades later, a researcher named Robert Schreiber saw a problem with Stutman's experiment. He figured out that Stutman's "immune-deficient" mice weren't actually immune-deficient: they still had NK cells and some T cells, both of which contribute significantly to immunity. So Schreiber redid the experiment with truly immune-deficient mice. The result? His immune-deficient mice developed cancer much more readily than mice with normal immunity. In the early 2000s, Schreiber was invited to present his findings at Memorial Sloan Kettering. "I'm sitting there getting ready for my talk and this elder gentleman walks in and sits in the front row," he recalls. The elderly man was Stutman, the very researcher whose work he was disproving. "The first hand up was Stutman's," Schreiber recalled. "I'm just holding onto the lectern going, 'Oh, here it comes, he's gonna just lay into me.' And Stutman got up in front of everyone and said, 'You know, it is remarkable what you can do now at the turn of the century that we couldn't do in the 1970s.'"
Immune surveillance was hypothesized in the early '70s, seemingly disproven in '74, and then definitively reproven in the 2000s. And once they figured out that immune surveillance was in fact real, they took it a step further and documented that healthy people's immune systems could and were successfully clearing precancerous cells all the time. This was when the concept of your immune system patrolling your body, looking for damaged or abnormal cells, and eliminating them, became established science. They knew it worked. They could even see it working. But even with this knowledge, they didn't seem to be asking why it sometimes... stopped.
In 2002, in the wake of immune surveillance being re-proven, researchers discovered that tumor cells were expressing a protein called PD-L1 on their surface. PD-L1 is a signal to the immune system's patrollers - think of it as a flag you hang outside your door that says "don't kill me, I'm one of you!" Healthy cells hang these flags so the immune system doesn't accidentally kill them while mounting a response; when infection or inflammation is underway and the immune system is on the attack, cells are more likely to hang them. Tumor cells, it turned out, use these flags too, which is why they survive even with the immune system patrolling. This makes sense given what we talked about earlier: natural selection favors cells that avoid being killed. If you are flying these flags, your odds of surviving immune clearance go way up, so of course the tumor cells that survived are the ones still using the signal. But the medical literature framed the discovery differently: tumors were actively suppressing the immune system, evading detection through the use of these flags. The literature was full of this kind of framing: tumors 'co-opt' immune checkpoints, tumors 'evade' immunosurveillance, tumors use 'key mechanisms' to escape immune destruction.
Once they discovered this, the solution seemed obvious: block those signals. If tumors were 'actively' suppressing immunity, just remove the suppression and unleash the immune system. So they tried it, and it worked. When researchers developed drugs that blocked these checkpoints - first CTLA-4, then PD-1, the receptor the immune system uses to read the PD-L1 signal - tumors shrank, and some patients with previously fatal cancers who would otherwise have died, lived. In 2010, a clinical trial of a checkpoint-blocking treatment in patients with advanced skin cancer (melanoma) produced results at a level medicine had never seen before. In several of the trial's patients, signs of cancer disappeared completely after treatment. These were people with metastatic disease, meaning their cancer was spreading to other body systems, who without treatment would have died within months. The drug that emerged from this research, ipilimumab (which blocks the CTLA-4 checkpoint), was approved by the FDA with unusual speed, in 2011. It was the first new treatment for advanced melanoma in over a decade, and in its wake, the field exploded. Allison and Honjo, who pioneered checkpoint blockade, won the Nobel Prize in 2018. The word "revolution" appeared in paper after paper.
But there was a big side effect of this treatment that wasn't getting as much airtime as the breakthrough: autoimmunity. And not just a little but a whole lot of it. Patients on these new drugs developed inflammatory colitis, hepatitis, thyroiditis, and dermatitis. Their immune systems, without the usual signals telling them not to, had started attacking healthy tissue throughout their bodies. Some of these autoimmune reactions were severe enough that patients had to stop the cancer treatment. This confirms what we just discussed: PD-L1 isn't a tumor trick. It's how ALL cells protect themselves during immune activity. Block it everywhere, and healthy tissue loses its protection too. When your immune system is attacking an infection or responding to injury, ordinary inflamed tissue puts out the PD-L1 flags to prevent collateral damage - to say "yes, I know there's activity here, but don't attack ME, I'm just responding to the problem." Macrophages do it. Epithelial cells do it. Normal inflamed tissue does it all the time.
Tumor cells hanging the PD-L1 flag in response to attacking T cells isn't "adaptive immune resistance" or a "clever evasion strategy." It's just... what cells do when they're in an inflammatory environment. It's the normal cellular response.
So this 'revolutionary' discovery is more evidence that tumor growth is just natural selection playing out. When immune clearance fails, cells accumulate and compete. The ones that happen to retain normal protective signals survive. Medicine was seeing the winners of a selection process and calling it a strategy. The discovery was: some tumor cells happened to retain or upregulate this normal protective signal, and the cells that did had survived immune surveillance. The ones that lost it got cleared. This proved that the immune system CAN clear tumors - the capacity exists, it's just being held back. When they removed the safety mechanisms with checkpoint inhibitors, tumors disappeared. But patients developed autoimmunity. Medicine had accidentally proven, again, that immune capacity matters. They just weren't looking at it that way.
Then in 2015, researchers found that epithelial cells, which make up most of the cells that become cancer, have a built-in migration program called 'unjamming.' During wound healing, these cells shift from a solid-like 'jammed' state to a fluid-like 'unjammed' state, allowing them to migrate collectively to repair tissue. More specifically, cells unjam when a neighbor goes missing. To conceptualize this, picture a sheet of honeycomb. If you imagine all those little pockets as cells, you can see how each one has neighbors on every side. What triggers unjamming is edge creation: cut a line in that sheet, and every cell along the cut that had a neighbor a second ago and suddenly doesn't gets the signal, "I lost my neighbor; we need to repair this gap." And what is particularly important here is that these cells move in sheets or clusters to repair these gaps, not individually. That matters because we know from the research that single rogue cancer cells rarely cause cancer to spread in the body; it's almost always clusters of traveling cancer cells that catalyze spread.
So let’s look at this one a bit more closely. Researchers know that cancer metastasis (when cancer spreads from its original location to other places in the body) involves cell clusters spreading to places they shouldn't be. They know most deadly cancers are epithelial. And a decade ago researchers figured out that epithelial cells naturally migrated using this unjamming program, and that this program creates moving clusters or sheets. But even now, ten years later, no one seems to have connected the dots.
And what this means is that cancer metastasis isn't some sophisticated evolutionary strategy that tumor cells developed. It's epithelial cells doing epithelial cell things: activating their normal wound-healing migration program. If there's tissue damage around a growing tumor, that creates wound signals. Normal cells in the area respond appropriately: they migrate a bit, repair, and stop. But tumor cells, which are just epithelial cells that survived selection for rapid growth when clearing failed, get these wound signals too. They unjam and migrate as well, but having lost the normal controls through mutation, they lack the stopping signals and go places they shouldn't.
So if unjamming is how epithelial cells migrate (it is), and cancer cells are primarily epithelial cells (they are), and unjamming is triggered by edge creation (it is)... then what would we expect to see? And what would that mean for how we do things? The huge implication is surgery, which creates massive wound signals; it is literally creating wounds. And the data bears this out: modern studies show a sharp peak of metastasis in the 6-12 months right after surgery, which suggests a triggering event, not gradual progression. Radiation damages tissue, and even chemotherapy causes the kind of tissue damage that creates the edges unjamming needs. So our front-line treatments for cancer are triggering the migration program in cells that have already been selected for proliferation. We've known about unjamming for nearly ten years, and we're still using treatments that create the exact signals that trigger metastasis, then wondering why cancer spreads after treatment.
But maybe the biggest potential implication of all is biopsy. Biopsy is a front-line diagnostic procedure: see anything out of the ordinary on a scan? Biopsy the potential mass to see if it's cancer. And interestingly, when someone actually looked, they found exactly what this framework predicts - more lymph node involvement, and specifically macrometastases (clusters), not individual displaced cells. In 2004, Hansen conducted a study of over 600 women with breast cancer to see whether there was a correlation between biopsy and metastasis, and she found that women who had needle biopsies were 50% more likely to have cancer in their sentinel lymph nodes than women who had their tumors surgically removed without prior biopsy. And not just cancer cells but macrometastases, meaning groups of cells clumped together, which is exactly what unjamming produces. And cancer cells in your lymph nodes are categorically bad. Lymph node involvement is one of the primary criteria that determines how far along your cancer is and how aggressive your treatment should be, which means biopsy may be increasing not just spread but also the intensity of subsequent treatment.
A larger study by Peters-Engl and colleagues found the same association Hansen did - a 37% increased risk - but after statistical adjustments concluded the finding wasn't significant. They corrected away what they saw, just as we watched researchers do in the placebo Cochrane review. A third study, by Moore in 2004, conducted at Memorial Sloan Kettering, showed a direct dose-response: no biopsy (1.2%) → FNA (small needle) (3.0%) → core needle (bigger needle) (3.8%) → surgical biopsy (4.6%), with p=0.002, meaning a trend this consistent would be extremely unlikely to appear by chance. So what we are clearly seeing is: more manipulation = more metastasis.
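To unpack what a p-value like that is doing, here is a sketch of the kind of trend test that sits behind a result like Moore's. The rates match the ones quoted above; the group sizes are invented purely for illustration, since the study's actual counts aren't reproduced here.

```python
from math import sqrt
from statistics import NormalDist

# Cochran-Armitage trend test: do event rates rise in step with an ordered
# exposure? Rates below match the text; the n's are hypothetical.
groups = [
    ("no biopsy",       500,  6),   # 1.2%
    ("FNA",             500, 15),   # 3.0%
    ("core needle",     500, 19),   # 3.8%
    ("surgical biopsy", 500, 23),   # 4.6%
]
scores = [0, 1, 2, 3]  # ordered degree of manipulation

N = sum(n for _, n, _ in groups)
R = sum(r for _, _, r in groups)
p_bar = R / N  # overall event rate under "no trend"

T = sum(s * (r - n * p_bar) for s, (_, n, r) in zip(scores, groups))
var_T = p_bar * (1 - p_bar) * (
    sum(s * s * n for s, (_, n, _) in zip(scores, groups))
    - sum(s * n for s, (_, n, _) in zip(scores, groups)) ** 2 / N
)
z = T / sqrt(var_T)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.4f}")  # ~0.002 with these made-up sizes
```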
The medical response has been to say those cancers must have already spread. But now we have the mechanism. Biopsy creates wound edges. Wound edges trigger unjamming. And tumor cells that unjam go into the lymphatics and bloodstream through normal routes, and this happens because our front line diagnostic procedures are telling them to migrate.
So while individual doctors, or medicine as a whole, may know this, sort of, if they truly understood how these facts connect, if they had integrated them into their framework, everything would look different. If they understood and acknowledged the implications, biopsy wouldn't be the reflexive first step every time a scan shows something suspicious. We would be running more studies asking whether all this cancer screening is really helping anyone. Miller et al. ran a 25-year randomized controlled trial of nearly 90,000 women and found that women randomized to breast cancer screening had exactly the same death rates as women randomized to no screening. So it's not a given that all this screening provides more benefit than harm. And to be clear, none of this means we shouldn't look for cancer at all, or that we shouldn't treat it when we find it. It means we should know what we're doing when we do it: know the risks and make calculated judgment calls, instead of looking for cancer, cutting it, triggering spread, and then acting surprised when it moves around the body. But they aren't thinking about it this way. Which means that while medicine may 'know' the individual facts, it clearly hasn't put them together into a framework that actually changes how it thinks about, researches, or treats cancer.
There was one researcher who did see it differently. Robert Gatenby, a radiologist at Moffitt Cancer Center, had been thinking about cancer through an evolutionary lens since the 1990s. He remembered being taught in medical school that "cancers grow because cancer cells grow faster than normal cells", which he knew was wrong on multiple levels. Cancer cells don't grow faster because a mutation flipped a switch. They grow faster because, in a competitive environment where damaged cells are accumulating instead of being cleared, fast proliferation is a winning strategy. We're seeing the result of selection, not the cause of the problem.
In 2009, Gatenby published a paper proposing that we stop trying to eradicate every cancer cell the way we always have, and instead use treatments strategically, turning evolution against itself. He called it "adaptive therapy." He reasoned that maximum-dose chemotherapy kills the sensitive cells, creating space and resources for resistant cells to proliferate. That's why cancer so often comes back resistant: we're selecting for resistance with each treatment. But what if we cycled treatment on and off, maintaining a population of sensitive cells to suppress the resistant ones?
In 2017, Gatenby published results from an 11-patient prostate cancer trial, demonstrating that this approach could double patient survival time while cutting drug use in half. The paper appeared in Nature Communications, one of the most prestigious journals in science. Other centers started testing variations of his protocols, and he got follow-up funding for larger trials. By 2022, five years after those initial results, multiple adaptive therapy trials were ongoing or planned in different cancers. But in 2025, sixteen years after his original concept paper and eight years after proving he could double survival time, adaptive therapy was still being described as "promising" and "gaining traction." Sixteen years. Dramatic results. Proper channels. Published in a prestigious journal. And it's still "promising."
And even Gatenby, who understands cancer cells are responding to selection pressures and is using that understanding to save lives, even he is still asking: "How do we manage the tumor population better?" rather than looking upstream to: "Why did the clearing system fail in the first place?" His innovation is still, ultimately: how can we best kill cancer cells directly. He's figured out how to get really good at bailing water out of the boat. But he's not asking why the boat is sinking.
So what we’re seeing here is that cancer isn't that mysterious, it’s really just normal epithelial cells doing two normal epithelial cell things - proliferating and migrating - without normal supervision. When normal immune clearance fails or depletes, natural selection favors the fastest replicators. When wound signals trigger migration, these selected cells spread chaotically. Every 'hallmark of cancer' is just normal cell behavior without normal regulation. And we didn't need to spend decades in laboratories or billions in research funding to understand this. We didn't need to become oncologists or molecular biologists. We just needed to recognize the pattern: when immune clearance fails, natural selection happens at the cellular level. The winners become what we call cancer.
This understanding predicts cancer should develop specifically in epithelial tissues (which can unjam and migrate) where clearance has failed. Cancer distribution data confirms this: malignancies cluster in epithelial tissues (lung, GI tract, breast), while benign tumors that don't metastasize occur in tissues without this migration program.
The research community has meticulously documented every piece of this puzzle - the Warburg effect, immune surveillance, epithelial unjamming. They've published it all, proven it all. But without understanding that it’s ultimately a clearing issue, they've studied each piece in isolation, missing the breathtakingly simple whole. And this isn't a criticism of researchers - they've done extraordinary work mapping these mechanisms. But when you're deep in the molecular details of your specific pathway, it's hard to step back and see that it's all the same pattern.
And to be clear, there are undoubtedly individual physicians and researchers who see pieces of this pattern. Some may even see all of it. Gatenby clearly understands the evolutionary dynamics. But as we talked about above, this understanding isn't reflected in how medicine teaches, researches, or treats cancer. Medical students still learn that cancer is caused by genetic mutations. Research funding still flows toward finding new ways to kill cancer cells rather than asking why clearing failed.
So even though there are likely many who see pieces or the whole, a radiation oncologist who suspects treatment triggers metastasis still has to follow standard protocols. A researcher who understands clearing failure can't get funding to study it when grant committees expect molecular pathway research. The system's inertia overwhelms individual insight.
And this pattern we are seeing here, of medicine finding the downstream mechanism but never asking about the upstream cause, is not unique to cancer. So now we have another piece of the elephant. Medicine is missing that clearing failures cause cancer, and that all the "mysterious" cancer findings throughout history are just what you would expect when regular cells keep running regular cellular programs while natural selection plays out. Which makes the next question obvious: why is the clearing failing? Let's look at that next.
Chapter Four
Medicine sees autoimmunity as the immune system gone rogue. Why is this person's immune system attacking their own body? The research focuses on understanding the mechanisms of attack: which tissues are targeted, which inflammatory pathways are activated, what's different about these attacking immune cells. And the treatment follows this research, attempting to turn the immune response off in targeted ways so it stops attacking the body.
But every healthy body has immune cells that can attack its own cells, called autoreactive immune cells, and they serve useful purposes, like clearing damaged cells and debris. Which means what happens in autoimmunity is similar to what we saw in cancer: a normal cellular process running without its normal controls. In healthy people, regulatory T cells (Tregs) keep autoreactive cells in check, suppressing them before they can damage healthy tissue. But in autoimmunity, a few things happen simultaneously. Treg production drops, so there are fewer Tregs overall, and the remaining Tregs function less efficiently, so they regulate less well than they're meant to. Meanwhile, there is a signaling molecule called IL-2 that feeds both Tregs and autoreactive cells. When Tregs are diminished, there is more IL-2 to go around, the autoreactive cells get far more of it, and it effectively ramps them up. You end up with something like removing the police while simultaneously arming the crowd: the autoreactive cells that are normally suppressed can now attack freely, and they have more fuel to do it with. Which means the question medicine should be asking is the same one we asked about cancer: why does the monitoring fail?
So this is what's happening in autoimmune diseases, and medicine is genuinely mystified about why autoimmune diseases cluster together. About 25% of people with one autoimmune condition develop at least one more - and for some conditions, the risk of developing a second is 4 to 10+ times higher than in the general population. But once you understand what we just laid out, that autoimmune disease comes from systemic monitoring failure plus autoreactive activation, of course you'd see multiple autoimmune conditions develop. If your entire immune monitoring system is compromised while autoreactive cells are ramped up throughout your body, why would it affect only one tissue type, or one isolated spot in the body? Ibuprofen doesn't target just the inflammation in your hand or your gut; it works system-wide, because that's how the body works. The mystery isn't why they cluster - it's why medicine expected them not to.
Medicine has identified between 80 and 100 "different" autoimmune diseases, which as a whole affect 1 in 10 people. They keep discovering "new" ones every year, adding to the list like they're cataloging species. Hashimoto's thyroiditis! Graves' disease! Type 1 diabetes! Rheumatoid arthritis! Lupus! Sjögren's syndrome! Each one gets its own specialist, its own treatment protocol, its own research funding. Medicine acts like it has discovered 100 different mysteries. "Oh look, the immune system can attack the thyroid!" "Oh wow, it can also attack the joints!" "Can you believe it? It attacks the pancreas too!" As if each one is a separate phenomenon requiring separate explanation. It's like "discovering" that water can make cotton wet, and wool wet, and silk wet, and declaring you've found 100 different wetting diseases instead of recognizing that water makes things wet. At this rate, they'll 'discover' 200 autoimmune diseases by 2050, each one treated like a breakthrough instead of being recognized as a variation of the same failure.
And autoimmune clustering is just the beginning of a much bigger pattern; disease clustering extends far beyond autoimmune disease. Medicine uses the term "comorbidity" when diseases occur together - for example if you have both diabetes and heart disease, or depression and Alzheimer's. They think of "comorbidity" as separate diseases that just happen to coincide. But the clustering is far too consistent and predictable to be explained by chance alone.
In fact, over half of US adults have multiple diagnosed health conditions. It's not unusual to see patients with 5-7 conditions, and older adults with 5+ conditions average 50 prescriptions, 14 different doctors, and 37 medical visits per year. And these conditions aren't accumulating randomly; they cluster in predictable patterns. Depression with stroke, chronic pain, and Alzheimer's. Diabetes with heart disease, depression, and kidney disease. Asthma with cardiovascular disease, COPD, and depression. The more conditions you have, the more likely you are to develop more. When you look at the actual patterns, certain diseases cluster in specific constellations. Alzheimer's, congestive heart failure, COPD, diabetes, and cancer cluster together at rates far beyond what chance would predict. About 80% of people with Parkinson's disease will develop dementia during the course of the disease. If you have depression, you are about 40% more likely to experience a stroke. The list goes on and on, across every body system you could name.
And neurodegeneration clusters the same way. Alzheimer's and Parkinson's share so much pathology that at autopsy, about 60% of Alzheimer's patients have Lewy bodies - the supposed "hallmark" of Parkinson's. MS patients have about twice the risk of developing Alzheimer's disease. The boundaries between these "different diseases" blur the moment you look closely. Medicine draws boundaries between them, but the bodies keep showing all of them bleeding into each other.
When researchers analyzed 10 million patients to look at the pattern of comorbidities, they identified well-defined disease clusters. Metabolic syndrome clusters together: obesity, high cholesterol, hypertension, diabetes. Autoimmune and inflammatory conditions cluster together. Mental health disorders cluster with chronic pain. Cardiovascular diseases cluster together. The pattern is there, clear and consistent, across millions of people. And medicine sees these clusters and says they share "risk factors." Which, translated from medical jargon into common parlance, means: we see the pattern, but since we treat each disease as its own discrete entity with its own specialist, we never step back far enough to see what's creating the cluster.
So let's do that now. If we look across all of medical research, and then step back so we can see the full picture, it becomes clear that there's a specific physiological state that precedes all chronic disease. Medicine has identified some of its markers: high blood sugar, hypertension, inflammation, and obesity, and noticed these appear upstream of hundreds of conditions. But they stopped there, treating these markers as causes rather than symptoms. They never asked what was upstream of those markers. What could explain why they were happening in the first place?
When we look at the full picture to see where disease actually begins, maintains itself, and progresses, a clear physiological state emerges. The chemicals maintained in this state cause Tregs to downregulate and become less effective, for example, which explains not only autoimmune disease but the clearance failures in cancer too. In cancer, this same chemical state suppresses the NK cells and cytotoxic T cells that normally eliminate aberrant cells, permitting the accumulation of cells that kicks off the natural selection process that produces cancer. And when we look below at the rest of medicine's chronic disease categories, we will see it cleanly explains them as well. From here forward we'll call this upstream disease state pathostasis: patho meaning disease, stasis meaning steady state. Pathostasis is the disease state medicine has been meticulously documenting piece by piece but has never assembled whole. And it explains…everything.
So what exactly is pathostasis? It's what happens when the body's adaptive and useful acute stress response, which is designed to activate briefly then resolve once a threat passes, gets stuck on. Here's what that looks like chemically:
| Chemical | Baseline | Acute Stress | Pathostasis |
|---|---|---|---|
| Cortisol | Circadian rhythm (peak AM, low midnight) | ↑ Elevated | ↑ Elevated - rhythm gets smoothed out due to chronic elevation |
| Norepinephrine | 70-1700 pg/mL | ↑ Increased | ↑ Elevated (2-3x normal) |
| Epinephrine | 0-140 pg/mL | ↑ Increased | ↑ Elevated (2-3x normal) |
| Dopamine | 0-30 pg/mL | ↑ Increased | ↓ Decreased |
| Glucagon | <20 pmol/L | ↑ Increased | ↑ Elevated |
| Insulin | 2-10 μIU/mL | ↑ Increased initially | ↑ Elevated → tolerance/resistance → ↑ increased elevation needed → pancreatic exhaustion → ↓ depletion/suppression |
| Vasopressin | 0-5.9 pg/mL | ↑ Increased | ↑ Elevated |
| Glutamate | 0.5-2.5 μmol/L | ↑ Increased | ↑ Elevated |
| GABA | 10-500 nmol/L | ↓ Decreased | ↓ Decreased |
| Serotonin | Normal levels | ↑ Increased (most regions) | ↓ Decreased globally |
| Oxytocin | Normal function | ↑ Increased | ↓ Decreased/dysregulated |
| Beta-Endorphin | Circadian rhythm (peak AM) | ↑ Increased | ↔ Dysregulated rhythm/receptor dysfunction |
| Histamine | Very low (<617 pg/mL) | ↔ No acute increase | ↑ Increased (chronic mast cell activation) |
| Cytokines | Very low/undetectable | ↔ No acute increase | ↑ Increased (chronic inflammation) |
This isn't controversial; every value in this table comes from mainstream medical research. What's never been recognized is that this chemical configuration IS the chronic disease state, not a risk factor for it or a contributor to it, but the actual mechanism itself.
Before we go further, let's clarify what we mean by the acute stress response and how pathostasis relates to it. When we say 'stress' here, we're not talking about missing your kid's recital or having too many emails. Medicine and the wellness industry both saw the correlation between disease and stress, and started recommending things like "self care" and destressing. The correlation was real, but the cause was not what they thought. That's why taking a bath, going on vacation, or walking daily isn't actually curing anyone of anything. It might make you feel less stressed in the moment, but it doesn't touch the engine keeping the pathostatic chemicals turned on. So you can't address pathostasis by 'reducing stress' in the colloquial sense. For long-term change, you have to address the conditioning that creates this engine, which we will get to in part five. This also means you don't need to feel "stressed" to be sick. And conversely, you can feel stressed and be perfectly healthy. Acute stress is a natural, normal body response.
So pathostasis is the state of these chemicals being stuck on. The cumulative burden, meaning how long you spend in pathostasis and how severe the chemical dysregulation is while you're there, we will call pathostatic load. When medical research documents 'chronic stress,' it is actually measuring load, even if it doesn't realize it.
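As a minimal formalization (our own sketch, not a published metric), you can think of load as degree of dysregulation summed over time:

```python
# Pathostatic load as dysregulation accumulated over time. Each daily score
# stands for how far the chemicals in the table above sit from their
# baseline ranges (0 = fully at baseline). Purely illustrative.
def pathostatic_load(daily_dysregulation):
    return sum(daily_dysregulation)

# Duration matters as much as severity: a mild but unbroken state can
# accumulate more load than a severe episode that resolves.
print(round(pathostatic_load([0.2] * 365), 1))  # 73.0 - mild, for a year
print(round(pathostatic_load([2.0] * 14), 1))   # 28.0 - severe, two weeks
```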
While in this state, under prolonged exposure to chemical changes our bodies never evolved to maintain, your clearing systems fail, your immune system dysregulates, inflammatory processes go haywire, and disease develops. And though it seems random (why did person A get cancer while person B got depression?), the data tells us it isn't as random as it seems. Think about it mechanistically. Imagine a rusty, weathered chain. If someone drops something heavy on it, does it break only where the weight hit? Or does it break at its weakest point, where there's the most rust, where the latch is damaged, where it's been rubbing against something? It can be either, or both: where the system took the blow, and/or where there was already damage or vulnerability. This is what we see in the development of disease. It shows up where there's existing vulnerability (genetic predisposition, prior damage, existing strain), or where it received the blow, like lung cancer in smokers. And then, once that system starts failing, those failures put more stress on adjacent systems in the same cluster, and the next most vulnerable point breaks.
This is why cardiovascular diseases cluster with kidney disease and stroke risk, for example. Hypertension develops because your blood vessels are constricted due to the elevation of norepinephrine, epinephrine, vasopressin and cortisol from pathostasis, and then the damage from that creates more load on your heart, and heart disease develops. Then that heart damage affects your kidney’s ability to function optimally, and kidney disease develops. The vascular damage affects your brain, and stroke risk increases. It's not four separate diseases. It's progressive failure in an already-compromised vascular system, each failure creating conditions for the next.
The same pattern plays out in metabolic clusters. The chemical cocktail of pathostasis creates the exact conditions for diabetes: increased glucose production and insulin tolerance/resistance. Once expressed, this metabolic dysfunction strains the cardiovascular system, and heart disease develops. The inflammatory state from both disrupts still more metabolic processes. The pattern cascades through related systems, all connected through the same metabolic infrastructure that was vulnerable to begin with.
Autoimmune conditions cluster together, which makes sense because if your immunity is already a weak point, then it’s easier for further autoimmune diseases to tack on to the first. Again, these are not separate diseases randomly occurring in the same person; they're progressive failure in an interconnected body system.
This explains everything medicine finds puzzling about comorbidity. It explains why "comorbidity" rates are so absurdly high - they're not actually CO-morbidity, they're the same morbidity expressing in related systems. It explains why treating each disease separately downstream doesn't stop new ones from developing - because you're not addressing the upstream state that's causing the failures. It explains why lifestyle interventions often fail; because telling someone to eat better and exercise more doesn't address why they got sick in the first place. They don’t address the pathostasis that’s driving it all.
So let's look at how this plays out across the spectrum of diseases. To read the tree, look at the letters on the chemicals in Level 0 and the numbers on the Level 1 and Level 2 disorder/disease mechanisms, and see how those map to the example diseases below. The tree shows chronic diseases cascading down from Level 0 chemistry. Levels 1 and 2 map commonly studied cascade pathways medicine has documented but never unified. The disease list is a representative snapshot, not an exhaustive list.
Here are some of the most talked about "real" diseases mapped out from the tree above:
And here are a few "functional" disorders mapped out from the same tree:
And "psychiatric":
Notice how 'real' diseases, 'functional' disorders, and 'psychiatric' conditions all trace back to the same Level 0 chemicals? That's because these divisions never existed in biology, only in medical specialization. An emergency room doctor, a psychiatrist, and a rheumatologist could all be treating the same patient for what they think are three different diseases (chest pain, depression, and arthritis), never realizing they're all looking at different expressions of the same Level 0 chemistry.
So let’s walk through what we're looking at here, because this is quite a lot. It’s all of medical research finally assembled into one picture. And let’s also be really clear that this is a conceptual framework, not a biochemical flowchart. We are using this table to present the unifying cascade that happens in pathostasis, not to perfectly map every single mechanistic change.
At the very top, Level 0 is the chemical picture of the body in pathostasis like we showed in the first chart, and it explains in a clean domino effect, every single chronic disease we know about. Notice how cortisol (A) appears in almost every disease pathway? Or how immune dysfunction (mechanisms 8, 9, 10) shows up everywhere? These aren't coincidences - they're the common threads medicine missed by studying diseases separately.
In Levels 1 and 2 we are showing byproducts of Level 0: downstream cascades that medicine discusses extensively and credits with causing much else. For example, notice that obesity sits on the Level 2 list, the secondary cascades. That means it is a symptom of disease, not the cause. In fact, let's look at obesity a little more closely, as a case study in how this tree shakes out truths medicine has been missing, since obesity gets blamed for so much. Medicine sees that a lot of disease happens in people who are 'overweight' and concludes that weight gain causes disease. It has even noticed that belly fat specifically correlates with certain diseases, and so declares belly fat especially dangerous. But rather than asking WHY the body would preferentially store fat around the organs under certain conditions, it simply declares the fat itself the problem. Lose the weight, they say, and the disease will improve. But JUST losing weight doesn't actually help. In 2004, when Klein et al. removed 28-44% of patients' abdominal fat via liposuction (about 12% of their total body fat), they found no change in insulin sensitivity, no change in inflammatory markers, and no change in blood pressure, glucose, insulin, or lipid concentrations. They then tracked the same people for 1.5 to 4 years after the liposuction, and the results held. Multiple meta-analyses of similar studies show either no effect at all or inconclusive results. If fat were really causing disease, wouldn't you expect losing it to provide some consistent benefit?
Given that losing fat alone doesn't seem to change much, let's think about what's actually happening at Level 0 and Level 1 when weight gain occurs in pathostasis. You have elevated glucagon (E) and cortisol (A) driving chronic hyperglycemia (5). Your body is producing more glucose than it needs, because your stress response thinks you need immediate energy available. That glucose has to go somewhere. Insulin's job is to get cells to open the door and let glucose in, so when glucose rises like this, insulin initially rises with it, to shuttle the glucose into cells. But just as your body needs more and more alcohol to get drunk the more you drink, the more insulin the system is exposed to, the more insulin it takes to shuttle the same amount of glucose. The cells become tolerant to insulin's signal (medicine calls this resistance), and now you have both high glucose and high insulin circulating in your bloodstream. Insulin does two other things as well: it promotes fat creation, and it stops fat from breaking down. So chronically high insulin makes it very easy to gain weight and very hard to lose it.
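Here is that feedback loop as a toy model. None of the numbers are physiological; it only shows the shape of the problem: steady glucose pressure plus blunting sensitivity means ever more insulin to do the same job.

```python
# Toy model of insulin tolerance: a constant glucose load, cells whose
# sensitivity is blunted a little by each day of insulin exposure, and the
# insulin needed to shuttle the same glucose climbing as a result.
def simulate_tolerance(days=365, glucose_load=1.0):
    sensitivity = 1.0  # how strongly cells respond to insulin's signal
    for day in range(days):
        insulin = glucose_load / sensitivity  # insulin needed to clear load
        if day % 90 == 0:
            print(f"day {day:3d}: insulin needed = {insulin:.2f}")
        sensitivity *= 1 - 0.001 * insulin  # chronic exposure blunts response

simulate_tolerance()
# day   0: 1.00 ... day 360: 1.56 - same glucose, ever more insulin
```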
And cortisol which is also elevated in pathostasis specifically promotes visceral fat accumulation - fat around your organs. In acute danger having readily accessible energy stores near your vital organs makes evolutionary sense. You might need that energy fast. The problem is that when you’re in pathostasis and these chemicals persist, your body just keeps packing fat into your midsection month after month, year after year. You're stuck in storage mode without an efficient release mechanism.
So where did the idea come from, then, if there isn't really any medical basis for it? It turns out the idea of body fat as the cause of disease came from the same man who proposed that eating fatty foods was bad. In the 1950s, a physiologist named Ancel Keys started pushing what he called the diet-heart hypothesis: eating saturated fat raises cholesterol, which he believed caused heart disease. He had access to data from 22 countries, and in an effort to prove his idea, he used data from only the 6 that supported his hypothesis. Rather than re-examine the idea once he saw the data didn't cleanly fit, he simply ignored the findings that didn't match. He was ridiculed in the scientific community for his antiscientific methods. Two epidemiologists published a devastating critique pointing out that Keys had data from 22 countries but used only 6, and that he was studying a "tenuous association" rather than proof of causality. They showed that including all 22 countries made the correlation essentially disappear, and even that picking a different set of 6 countries could show the exact opposite of what he was trying to prove. Stung by the ridicule, Keys set out to prove his detractors wrong with a study of his own. But he used the same cherry-picking methods as before: he chose 7 countries where the data best fit what he wanted to see, used cohorts that weren't representative of their countries, and used inconsistent measurement methods, and through this research he "proved" his hypothesis. Somehow this worked, landing him a seat on the board of the American Heart Association and eventually the cover of Time magazine, where he promoted "the evils of fat," treating body fat and dietary fat as one unified evil.
Keys was also the person who gave BMI its name, the Body Mass Index, which the NIH later adopted and the World Health Organization then used to declare obesity an epidemic. The formula itself came from a Belgian statistician in the 1830s, Adolphe Quetelet, who was trying to define "the average man." He gathered height and weight data from Scottish and French military men and came up with a formula he felt represented them: weight/height^2. It was based purely on his observation that these European military men didn't tend to get wider as they got taller. No medical or even physiological basis, and it was never intended for medical purposes - it was just one man working out the mathematical correlations between human physical characteristics. But when Keys renamed it the Body Mass Index, it somehow took hold, and now every doctor's office in America uses it to tell people they're diseased. The "normal" cutoffs were arbitrary numbers derived from European averages and mortality statistics, and they became the health bar for the entire world - with no consideration for how height and weight might differ for women, or Asian people, or African people, or literally anyone else.
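The formula really is that trivial, which is part of the point. A minimal sketch, using the WHO's standard category cutoffs - the arbitrary thresholds discussed above:

```python
# Quetelet's 1830s formula, later renamed BMI: weight (kg) over
# height (m) squared. Category labels are the WHO's standard cutoffs.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def who_category(b: float) -> str:
    if b < 18.5: return "underweight"
    if b < 25:   return "normal"
    if b < 30:   return "overweight"
    return "obese"

b = bmi(weight_kg=80, height_m=1.75)
print(f"BMI {b:.1f} -> {who_category(b)}")  # BMI 26.1 -> overweight
```

One ratio and four hard-coded thresholds: that is the entire diagnostic apparatus behind a label that changes how millions of people are treated.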
Which means many people labeled 'overweight' by these arbitrary standards aren't sick at all. They're just larger than a formula designed around 19th century European military men said they should be. Their blood markers are fine, their energy is fine, their bodies are functioning exactly as they should. The medical system has pathologized normal human variation in body size, then spent decades trying to 'fix' people who were never broken. Meanwhile, the people who ARE sick, the ones actually in pathostasis, aren't sick because they're storing fat. They're storing fat more than healthy people because they're sick.
So this one man single-handedly vilified body fat and dietary fat, coined BMI and turned it into a clinical measurement tool, and convinced everyone cholesterol was bad - all without any actual evidence. And the question isn't really why this one man believed these wrong things; people believe wrong things all the time. The question is how and why the entire system bought in, amplified his wrongness, and protected it for seven decades. And why it's all still largely believed and implemented today across the globe.
Keys was the co-principal investigator on a study meant to prove his hypothesis: the Minnesota Coronary Experiment. From 1968 to 1973, it followed over 9,000 people in controlled settings where dietary compliance could actually be measured, and it found two key things. One, lowering saturated fat did in fact reduce cholesterol. And two, reducing cholesterol didn't reduce deaths from heart disease. In fact, the participants with greater reductions in cholesterol had a higher risk of death - the inverse of what his hypothesis predicted. So what happened with this study? They just…never published it. It sat in lead investigator Ivan Frantz's basement until his son found it in 2011. When asked why they never published it, Frantz said, "We were just disappointed in the way it came out." The most rigorous study ever done to test the hypothesis proved it wrong, and because they didn't like the results, they sat on them. Frantz's son helped get the data published in 2016, but by then the hypothesis was so institutionalized there was no reversing course. It was published and completely ignored.
The truth is that cholesterol is actually good for you: it's necessary for immune function and brain function, every cell membrane requires it, and it supports many other vital processes. It is so vital that 80% of your cholesterol is produced "in house" by your body. When you are in pathostasis your body produces more of it because it's so beneficial, and, in the pattern we keep seeing over and over, medicine saw elevated cholesterol in sick people and assumed correlation meant causation. So we have spent decades trying to lower people's cholesterol. Statins are a front-line drug for people with heart disease, and statins do provide benefits. Medicine thinks this is because of their cholesterol-lowering function, but statins also reduce inflammation and improve endothelial function. When drugs that ONLY reduce cholesterol are used, no cardiac benefit appears. When ezetimibe (a non-statin cholesterol-lowering drug) was added to statins, it lowered cholesterol even further but showed no additional mortality benefit. Same with PCSK9 inhibitors - they lower cholesterol dramatically but don't reduce death rates. The drugs that lower cholesterol without anti-inflammatory effects don't save lives. Statins, which have anti-inflammatory effects, do. The mechanism medicine attributes the benefit to isn't actually the mechanism producing the benefit.
And here's what makes the obesity-as-cause narrative particularly insidious: when people lose weight through extreme caloric restriction without addressing Level 0, they often don't get healthier. They might see some temporary improvement in certain markers, but the underlying chemicals are still there. Their body is still in pathostasis, still producing the same hormonal environment, still generating the same inflammatory state. And because their body is still trying to store energy in response to that stress signal, the weight almost always comes back. Medicine sees this and concludes people lack willpower, that they couldn't maintain the weight loss. What actually happened is their body was fighting to return to the state Level 0 was commanding it to maintain.
But when weight loss happens as a result of turning off Level 0 - which we'll talk about in detail later - the weight comes off and stays off because you've addressed the cause. The body is no longer getting the chronic stress signal to store energy. Insulin sensitivity improves because insulin is no longer chronically elevated. Fat cells begin functioning normally again.
The weight loss isn't the treatment, it's the side effect of treating the actual disease state.
One of the most common criticisms leveled at new theories is that "correlation doesn't equal causation," and yet here medicine has built an entire treatment paradigm around mistaking a symptom for a cause. They saw a correlation - obesity occurs alongside disease - and declared it causal, with no mechanistic explanation for why having more adipose tissue would cause such diverse conditions. And they did this while ignoring all the data that suggests otherwise. Having more fat cells doesn't cause diabetes any more than having a runny nose causes a cold; both are observable symptoms of an upstream process. Yet we've built entire industries, treatment protocols, and social stigmas around treating the weight as if it were the disease itself.
It's yet another example of bailing water out of the boat without trying to fix why it was sinking in the first place.
Now that we know all of this, let's come back to the GLP-1 "mystery." All of the diseases GLP-1s act on are downstream of the same upstream dysfunction: pathostasis, chronic stress chemistry that never turns off. GLP-1 isn't on the list of pathostatic chemicals above because it hasn't been thought of as a key player in the stress response, but the medical research shows that GLP-1 elevates in acute stress as part of the acute stress chemical cascade, and that in pathostasis it is depleted. So it makes sense that if you have been stuck in pathostasis for a long time, artificially adding back one of these depleted chemicals would affect everything downstream of it - which, as we have figured out here, is every chronic disease. The reason one drug works on "many diseases" is that they're actually one disease with many manifestations. And just like many other chemicals we artificially inflate, whether alcohol or the dopamine given to Parkinson's patients, the body eventually builds up both a tolerance and a dependence. Which is exactly what early users of these drugs are reporting: after 12-15 months people stop losing weight, and if they go off the drug they quickly regain most of the weight. Doctors are starting to recommend that patients stay on the drug indefinitely, in an attempt to create a new set point for the body's weight, but we don't yet know what will happen when people come off it after that long. The body is now used to having this chemical in its system, and removing it could be no big deal, or it could leave everything worse than baseline. This needs long-term testing.
Alright, let's walk through another example from the mapping above. Take diabetes, which medicine sees as a distinct metabolic disease, treats with insulin and glucose management, and considers largely irreversible. What do we see when we trace it through the tree? Elevated cortisol (A), epinephrine (C), and glucagon (E) at Level 0 drive chronic hyperglycemia (5). Your body keeps producing glucose as if you need immediate energy, even when you're sitting still. That excess glucose forces insulin production higher and higher, trying to shuttle all that glucose into cells. But over time, those cells build a tolerance (resistance) to insulin's signal. When you expose receptors to more of a substance than they were designed to handle, they downregulate to protect themselves.
So now that the cells have built up a tolerance to insulin, the pancreas has to work overtime, producing even more insulin to overcome it. This chronic overstimulation eventually exhausts the pancreatic beta cells that produce insulin. When they fail, insulin production drops - that's F at Level 0, depleted insulin - which also leads to weight loss, which you would think would have been a hint that weight was downstream. Combine this insulin drop with the ongoing glucose elevation from A, C, and E, plus the inflammation from mechanism 10, and you get the full picture of Type 2 diabetes at position 13 in Level 2. It's not a separate disease entity, but pathostasis hitting a vulnerable metabolic system. The tree shows exactly how A, C, E, F → 5, 10 → 13. Every step is documented in the medical literature. They just never connected it all back to Level 0.
We already talked about cancer, so let's see how it maps on the tree. Remember how we discovered that cancer happens when immune clearance fails and cells accumulate, leading to natural selection at the cellular level? Look at the tree: immune suppression and dysfunction (8), which feeds chronic inflammation (10), where selection pressure intensifies, and impaired clearance systems (17). All of it traces back to the chemicals at Level 0 - elevated cortisol (A), elevated cytokines (N), and elevated histamine (M). The Warburg metabolism we talked about? It shows up at mechanism 7, driven by cortisol (A), epinephrine (C), glutamate (H), and cytokines (N). But it's not causing cancer - it's just what stressed, proliferating cells do. The real drivers are the failed clearance and immune dysfunction. Based on the mapping above, cancer = A, C, N, H → 7, 8, 10, 17 - though mechanism 7 (Warburg) is more of a marker than a driver; the key pathogenic mechanisms are 8 (immune suppression), 10 (inflammation), and 17 (impaired clearance). Chapter three explained how cancer isn't a random mutation lottery but a predictable outcome of failed clearing in bodies stuck in pathostasis. Now we can see it mapped perfectly - the immune failure, the inflammatory environment, the clearance breakdown - all one cascade from Level 0.
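If it helps to see the tracing exercise concretely, here is a minimal sketch of the two traces above expressed as data. The letter and number labels are the ones assumed from the tree in this chapter; the code is just a lookup table for the reader, not new evidence:

```python
# Letter labels = Level 0 chemicals; numbers = mechanisms/positions in
# the Level 1/2 cascades, as labeled in the tree in this chapter.
LEVEL0 = {"A": "cortisol", "C": "epinephrine", "E": "glucagon",
          "F": "insulin (depleted)", "H": "glutamate",
          "M": "histamine", "N": "cytokines"}

# disease -> (Level 0 drivers, downstream mechanism positions)
TRACES = {
    "type 2 diabetes": (["A", "C", "E", "F"], [5, 10, 13]),
    "cancer":          (["A", "C", "N", "H"], [7, 8, 10, 17]),
}

def trace(disease):
    drivers, mechanisms = TRACES[disease]
    chems = ", ".join(f"{k} ({LEVEL0[k]})" for k in drivers)
    path = " -> ".join(str(m) for m in mechanisms)
    print(f"{disease}: {chems} -> {path}")

for disease in TRACES:
    trace(disease)
```

The point of writing it this way is that the falsification test later in this chapter is mechanical: pick a chronic disease, attempt to fill in its row, and see whether the documented mechanisms trace back to Level 0 or not.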
Remember how we saw neurodegeneration clustering earlier - Alzheimer's and Parkinson's sharing so much pathology that half of Alzheimer's patients have Lewy bodies at autopsy, MS patients developing Alzheimer's at 2-4x the normal rate? The boundaries blur because they're not actually separate diseases. They're the same upstream process hitting different brain regions.
If we look at what the research shows across all of these conditions, the same thing comes first. Before the protein aggregates and plaques that medicine points to as causes, every single neurodegenerative condition shows the same precursor: reduced blood flow (hypoperfusion). Hypoperfusion precedes neurodegeneration in Alzheimer's, Parkinson's, ALS, Huntington's, MS, and frontotemporal dementia.
| Disease | Hypoperfusion? | Precedes Neurodegeneration? |
|---|---|---|
| Alzheimer's | ✓ | ✓ |
| Parkinson's | ✓ | ✓ |
| ALS | ✓ | ✓ |
| Huntington's | ✓ | ✓ |
| MS | ✓ | ✓ |
| Frontotemporal Dementia | ✓ | ✓ |
The evidence is robust across all of them. And this makes perfect sense when you trace it through the tree. The pathostatic chemicals we've been tracking - norepinephrine, epinephrine, vasopressin, cortisol - all cause vasoconstriction at mechanism 1. That's their job during acute stress: shunt blood toward muscles and away from non-essential functions. But when these chemicals stay elevated chronically, the vasoconstriction becomes chronic too. And the brain regions that get hit first are the ones supplied by small terminal vessels, the last stops on the vascular supply chain, or the areas with the highest metabolic demands and the least backup blood supply.
And medicine already knows hypoperfusion causes dementia. That's what vascular dementia IS - cognitive decline from reduced blood flow. They diagnose it all the time. They even have a category called 'mixed dementia' for when they can't cleanly separate what's vascular from what's Alzheimer's, because the boundaries blur so often. So they know hypoperfusion causes cognitive decline. They know Alzheimer's shows hypoperfusion before plaques appear. And yet nobody thought to check whether the same mechanism was driving Parkinson's, ALS, Huntington's, MS. Each specialty documented hypoperfusion in their own disease, published it in their own journals, and never looked across.
This explains everything that otherwise makes no sense about these diseases. Lucid periods in Alzheimer's patients, where someone who hasn't recognized their family in months suddenly knows everyone's name and recalls old memories perfectly. Good days and bad days across all these conditions. Stress making symptoms worse, because vasoconstriction tightens further and more neurons go dormant. Exercise helping, because improved blood flow brings neurons back online. The dramatic placebo responses in Parkinson's we talked about in chapter 2, where patients produced real dopamine and reduced their tremors by 70%, none of that is possible if the neurons are dead. Dead cells don't start working because you believed in a sugar pill. They don't come back online because you had a good night's sleep or went for a walk. But dormant cells can, and do, whenever perfusion improves enough to support their metabolic needs.
Medicine has spent decades and billions of dollars trying to clear amyloid plaques and Lewy bodies, treating these protein aggregates as the cause of neurodegeneration. But patients can have extensive plaques with no cognitive symptoms, and severe dementia with minimal plaques. The proteins aren't causing the disease. They're accumulating because the clearing systems we talked about in chapter 3 are failing, for the same reason they fail everywhere else in pathostasis: the body's resources are being redirected toward survival, not maintenance.
But if hypoperfusion is the driver rather than the plaques or the proteins, that raises a question: what's actually happening to these neurons? Are they dying, as medicine assumes? Or is something else going on? The answer has significant implications, and we'll examine it in the next chapter when we look at the predictions pathostasis generates.
And medicine has been meticulously documenting all of these things for over a century. Every mechanism in this tree is documented in their literature. The elevated hormones, the physiological effects, the cascading failures, all of it measured and published in thousands of studies across every medical specialty. The pieces are all there, but what they haven't done is step back far enough to see it's all connected.
It seems almost unthinkable that something so simple could have been sitting there in plain sight. If this were true, wouldn't someone have noticed by now?
But history shows that the most important scientific discoveries often hide in plain sight, waiting for someone to connect pieces that everyone else kept in separate boxes. And history also shows us that the simpler and more far-reaching the discovery, the fiercer the resistance. Not because it's wrong, but because accepting it means admitting we've been looking at the problem backwards.
So before we go any further, let's pressure test this the way any scientific theory should be pressure tested: by looking at the rules for what makes a theory good. What separates revolutionary science from pseudoscience? When we apply those criteria - the same ones that validated germ theory, evolution, and plate tectonics - which explanation actually holds up?
Medicine's framework, with its hundreds of separate diseases and unclear mechanisms?
Or this one?
Chapter Five
In 1831 Charles Darwin was 22 years old. He tried going to medical school, but he had to drop out when he figured out that the sight of blood made him physically ill. So his father shipped him off to Cambridge to study theology so he could become a clergyman instead. He got his degree, but while in Cambridge he spent most of his time collecting beetles and taking long walks with his botany professor John Stevens Henslow, rather than actually studying theology. Henslow suggested that he join a surveying voyage to South America. The ship's captain needed a companion and a naturalist for the trip. Someone educated who could document the natural world while the crew charted coastlines. The position was unpaid, the voyage was slated to last two years, and Darwin would be sharing a cabin the size of a closet with several other men while sailing around the world on a ship that was, by all accounts, quite small and prone to violent rocking. Darwin said yes immediately.
The HMS Beagle set sail in December 1831. The plan was to spend two years surveying South America's coastline, and then head home. But the work kept expanding and what was supposed to be two years turned into five. And those five years, Darwin later wrote, determined his whole career.
During the voyage, Darwin did what naturalists did: he collected things. Birds, plants, rocks, fossils, insects. If it existed, Darwin wanted a specimen of it. He shipped crate after crate back to England, each one meticulously labeled, each one representing hours of work that he performed while fighting off constant, debilitating seasickness. He kept meticulous notes, drew detailed sketches, and documented everything he saw. He found fossils that he noticed looked like enormous versions of animals still living on the continent. Giant armadillo-like creatures. Huge sloths. Why would extinct creatures resemble the living ones in the same place and also be so wildly different in size? In Argentina, traveling south, he kept finding slightly different versions of the same bird species. Not dramatically different, just...variations. Why would there be so many species that were almost identical but not quite? Then, in September 1835, the Beagle reached the Galápagos Islands.
The Galápagos are young islands, geologically speaking, volcanic and isolated. And Darwin started noticing that each island had animals that were seemingly slight variations of the animals on the other islands. The mockingbirds were different sizes and had different markings, the tortoises had differently shaped shells, and then there were the finches.
The finches looked similar enough that Darwin initially thought they were all one species with some variation. It wasn't until he got back to England and examined the drawings he had made more carefully, that he realized these weren't variants at all, they were distinct species. And they were distinct in ways that allowed each species to survive on their specific island more effectively. Birds on islands with hard seeds had thick, strong beaks for cracking them open. Birds on islands with insects had thin, pointed beaks for catching them. Birds on islands with cacti had sharp beaks perfectly suited for accessing the fruit. It was as if each island's finches had been custom-designed for their environment. If God had created each species separately, why would he create nearly identical birds but give them slightly different beaks for different islands? Why would extinct South American animals look like giant versions of the living South American ones? Why would species vary slightly as you traveled from place to place? The patterns suggested something that would have been heretical to say out loud in 1836: species weren't fixed. They changed over time. They adapted.
But having the thought and proving it were very different things. Darwin knew that claiming species could change would be professional suicide without ironclad evidence. Maybe even with it. So when he got home in October 1836, he didn't publish anything. He got married, moved to the countryside, and started working. For twenty years.
He spent those two decades obsessively studying everything he could get his hands on. He studied barnacles for eight years straight, cataloging every species he could find, establishing himself as a serious naturalist who did rigorous, careful work. He bred pigeons in his backyard, documenting how artificial selection could produce wildly different varieties from a single ancestral species. He corresponded with breeders, farmers, gardeners, and anyone else he could find who worked with living things and watched them change. And he collected data on everything he could think of: orchids, worms, coral reefs. He was building an argument so comprehensive that when he finally did publish, no one would be able to dismiss it as speculation.
But it wasn't just that he worried he didn't have enough evidence; he was also scared. His wife Emma was deeply religious, his friends were religious, Victorian England was religious. And the church held that God created every creature just as it existed today. The idea that life instead evolved and adapted went directly against everything the church believed and taught, which made it both scientifically controversial and socially and professionally dangerous. The last book suggesting species could change, Robert Chambers's Vestiges of the Natural History of Creation, had come out in 1844 - published anonymously because Chambers feared the backlash. And as he feared, critics tore it apart. It was mocked and dismissed as unscientific. Darwin's former mentor Adam Sedgwick called it a foul book and wrote a scathing critique attacking it in The Edinburgh Review. The book was called ignorant and reckless, accused of corrupting morals, and Chambers kept his authorship secret for the rest of his life. Darwin watched all of that happen and decided he wasn't ready. Not yet.
He wrote out his theory first in a 35-page sketch and then in an expanded 231-page paper, showed these initial drafts to his close friend Joseph Hooker under an oath of secrecy, and told his wife that if he died, she should publish it. And then he just…kept working. Collecting more evidence, anticipating objections, writing and rewriting, on and on. His friends kept urging him to publish; Lyell and Hooker both insisted he had enough, that he needed to get it out before someone else did. But Darwin kept saying not yet. Not yet. I need more evidence.
By 1858, he was finally writing what he called his "big book" on natural selection, thousands of pages to make his argument as comprehensive as possible. And he was maybe two years away from finishing, when on June 18, 1858, a letter arrived from halfway around the world.
The letter was from Alfred Russel Wallace, a naturalist collecting specimens in the Malay Archipelago. Wallace had enclosed a short essay and was asking Darwin to review it and, if Darwin thought it was good enough, pass it along to Charles Lyell for possible publication. When Darwin read the essay, his stomach dropped. Wallace had figured it out. Not just evolution, which Darwin knew other people were starting to circle around, but natural selection. The exact mechanism Darwin had been sitting on for twenty years. Wallace had arrived at the same conclusion independently, and he'd done it in a twenty-page essay that Darwin said later could not have been a better short abstract of his own work. Even the terminology matched the chapter headings Darwin had been working on. After two decades of careful preparation, someone else had gotten there too.
Darwin sent the essay to Lyell immediately. "Your words have come true with a vengeance," he wrote. Lyell had been warning him this would happen if he kept waiting, and now, here it was. Darwin was beside himself, and not just about priority. Wallace had sent the essay to him in confidence, trusting Darwin to help him get it published. What was he supposed to do? Sit on it while he rushed out his own book? He couldn't do that. But if he forwarded it for publication as Wallace requested, he'd lose credit for twenty years of work.
Darwin left the decision to Lyell and Hooker. "I would far rather burn my whole book than that he or any man should think that I had behaved in a paltry spirit," he wrote. They came up with what seemed like a fair solution: a joint presentation to the Linnean Society, which was and is one of London's most prestigious scientific organizations for naturalists. They would present Wallace's essay alongside excerpts from Darwin's sketch and an 1857 letter Darwin had written to the botanist Asa Gray outlining his theory. Both documents predated Wallace's essay, which Lyell and Hooker could verify. This way both men would get credit for the independent discovery, Darwin's priority would be established, and Wallace's work wouldn't be suppressed.
On July 1, 1858, the papers were read before the Linnean Society with neither Darwin nor Wallace in attendance. Darwin's infant son had just died, and Wallace was in the Malay Archipelago, completely unaware the presentation was even happening. But when Wallace eventually learned about it months later, he was genuinely delighted. He'd been included in a major scientific presentation alongside Darwin, and that was backed by the most respected naturalists in England. He never expressed any bitterness about the arrangement, not then or later. He was just happy to have contributed.
So natural selection, now one of the most important discoveries of all time, was presented publicly, finally, after twenty years of sitting on it. And what happened after? Pretty much nothing. The Linnean Society's president would later remark that 1858 had been a year in which nothing particularly revolutionary had occurred. The papers were published in the society's journal in August, and the scientific world essentially yawned. One of the most important scientific papers ever published, and it was met with near-total indifference.
What got noticed was what came next. Wallace's letter had finally forced Darwin's hand. He couldn't sit on his theory any longer. Someone else had gotten there independently, which meant the idea was in the air, which meant if he didn't publish soon, someone else might beat him to it entirely. So Darwin abandoned his massive encyclopedic "big book" and started writing what he called an "abstract." A condensed version that would get the essential ideas out quickly. He worked frantically for thirteen months, and on November 24, 1859, On the Origin of Species was published.
The first printing of 1,250 copies sold out immediately. The book was readable, comprehensive, and packed with evidence from every field Darwin had been quietly studying for two decades. He made a carefully argued case built on thousands of observations, his own and others', that all pointed to one unifying mechanism: natural selection. Organisms vary, and some of those variations make them more likely to survive and reproduce, and over time, this process produces all the diversity of life on Earth. Not separate acts of creation, as was the prevailing belief, but one continuous process of descent with modification. And the response was explosive. Religious leaders were outraged; the idea that humans descended from animals was blasphemy. Cartoons appeared in newspapers showing Darwin with the body of an ape. Public debates erupted, the most famous of which was the 1860 confrontation at Oxford between Thomas Huxley, who became known as "Darwin's bulldog" for his fierce defense of the theory, and Bishop Samuel Wilberforce. Darwin himself stayed home, citing illness. Some historians think his convenient illnesses during controversy were genuinely stress-related. Others suspect he preferred to let his defenders take the heat while he kept working.
But the scientific community's response was more complicated than simple rejection. Many scientists accepted that evolution had occurred, because the evidence for change over time was overwhelming once you looked at it. What they resisted was natural selection as the mechanism, which seemed to them too random and undirected. There was even a period called "the eclipse of Darwinism" when alternative mechanisms were seriously explored: Lamarckian inheritance, where traits acquired during your life, like large muscles, can be passed down to your children; orthogenesis, where evolution is predetermined and organisms are "programmed" to evolve toward increasing complexity or a specific endpoint; and saltationism, where evolution happens through sudden large jumps rather than gradual change. It wasn't until the 1930s and 1940s, when Mendel's work on heredity was integrated with Darwin's natural selection, that natural selection finally became widely accepted as the primary driver of evolution. That was seventy years after On the Origin of Species was published. Seventy years of debate, resistance, and alternative theories before Darwin was fully vindicated.
Darwin had everything he needed to publish by 1844. All the observations were there. Breeders had known for centuries that selection could change organisms. Fossils showed species had changed over time. Everyone could see that organisms varied. The pieces were sitting in plain sight across different fields. Darwin's contribution was connecting them, seeing that artificial selection, natural variation, the struggle for existence, and the fossil record were all part of one process. And even with mountains of evidence, even with twenty years of meticulous preparation, even after demonstrating that his theory explained everything from species distribution to orchid structure to ancient extinctions, it still took seventy years for the scientific establishment to fully accept it. Not because the evidence was weak, but because the implications were too disruptive. And because it overturned fundamental assumptions about how life worked, humanity's place in nature, and the role of divine creation. At its most basic, it took so long because accepting it meant rethinking everything.
So here we are, looking at pathostasis. And it may seem almost too simple. Too neat. One mechanism explaining hundreds, thousands, of diseases? Surely if this were true, someone would have noticed by now. Surely this would have been figured out already. The same thing people said about evolution when they first encountered it - if species really changed over time, wouldn't we already know that?
But history shows us that the most important scientific discoveries often hide in plain sight, waiting for someone to connect pieces that everyone else keeps in separate boxes. And history also shows us that when those connections finally get made, the resistance can be fierce, and it can last for years. Not because the new theory is wrong, but because accepting it means admitting we've been looking at the problem wrong all along.
What's unique about pathostasis is that it doesn't propose anything that isn't already in thousands of peer-reviewed studies. We aren't claiming to have discovered a mechanism medicine hasn't documented. Every arrow in the pathostasis tree - every hormone, every cascade, every downstream effect - comes from peer-reviewed medical research. The Level 0 chemicals are measured in standard labs. The Level 1 and 2 mechanisms are in medical textbooks. Thousands of researchers across decades documented these pieces. What's new isn't any individual piece. What's new is seeing that they fit together.
This matters for how you evaluate the theory. We don't need to evaluate whether cortisol suppresses Tregs - it's in the immunology literature. We don't need to evaluate whether chronic catecholamine elevation causes vasoconstriction - it's in every physiology textbook. The question isn't whether the pieces are real. The question is whether the assembly is correct.
So before we go any further, let's look at this theory the way science is supposed to work, by asking: what makes a theory good? What separates revolutionary science from wishful thinking or speculation? And when we apply the same criteria that eventually validated germ theory, evolution, and plate tectonics to both medicine's current framework and to pathostasis, which explanation actually holds up? Remember, in applying these criteria we're not asking whether the pieces are correct - they have all been meticulously documented over decades by medicine itself. We're asking whether this synthesis of medicine's own data makes more sense than the way medicine is currently interpreting it.
Karl Popper established the criterion that separates science from non-science: falsifiability. His framework is taught in scientific methodology courses and has even been invoked in courtrooms when someone needs to decide whether a claim is actually science or just speculation. Combine falsifiability with the standard virtues used to judge competing theories, and you get five tests: falsifiability, parsimony, predictive power, integrative power, and mechanistic rigor. So let's apply them and see what pathostasis offers compared to medicine's current framework.
An overly simplified example of something that is unfalsifiable would be the idea that “everything happens for a reason.” This can never be disproven because any outcome can be explained as "this happened because it was supposed to happen.” Whereas for gravity, if you dropped a ball and it fell upward, that would falsify the theory. So in order to be considered falsifiable, which is the gold standard for any scientific theory, there must be things that could make the theory wrong. Let’s look at medicine first.
Medicine's framework:
Medicine has no concept of healing chronic disease. When it happens, they call it "remission" and assume the disease is still waiting in the wings. And there is no outcome that can prove them wrong. If someone heals, they either say the person could get sick again at any time, or they say it must have been a misdiagnosis - the person must never have been sick. They wave away anything that doesn't fit their narrative of chronic diseases being incurable. This is the definition of "unfalsifiable," one of the worst offenses in the sciences.
And then there are the claims that are falsifiable, and that the data actually proves wrong, which medicine steadfastly continues believing anyway. Medicine claims being overweight causes disease, but weight loss alone doesn't change disease outcomes, thin people still get sick, and some 'overweight' people are perfectly healthy. None of that prompts a revisit of the theory; they just keep saying fat is bad. Medicine believes Alzheimer's patients' neurons are dying, and yet those patients have lucid periods where all of their lost function returns for a few hours at a time. Instead of revisiting the theory to see if it actually holds up, they call it a paradox. Medicine becomes unfalsifiable in the way it hand-waves away anything that doesn't fit. They say things like "it's multifactorial," meaning it's complex, which allows anything and nothing to fit anywhere. Anything that fits their data they tout; anything that doesn't they ignore, deny, or label an anomaly, a paradox, multifactorial, complex. But if you have the right theory, there aren't anomalies. There are no fossils in the fossil record that don't fit Darwin's evolution. Gravity works the way we expect it to all of the time; we don't occasionally see things fall upward or sideways. That's what you see when a theory is right: things following the rules.
There's no observation that can prove medicine's framework wrong because it doesn't actually predict anything. It just describes and names, becoming more and more complex the more they find.
Pathostasis:
For pathostasis, if you can find a single chronic disease where the documented physiological mechanisms in medicine's own literature don't trace back to the Level 0 stress hormones, then the theory fails.
It has a specific, testable prediction, as science demands.
When trying to disprove a theory, people sometimes do so without engaging with the actual theory. So to be clear: scientifically testing whether pathostasis holds up requires actually using the equation. You can't falsify the Pythagorean theorem without applying it as intended; you can't "disprove" it by saying "I thought of a triangle where it doesn't work" without actually measuring the sides and doing the math. So if someone tries to falsify pathostasis without tracing the chemicals to the disease in question, that's not falsifying, that's hand waving. But if you use the chemical tree, trace it down, and it doesn't fit - that breaks the framework.
And people will argue that you can be stressed and not get a chronic illness or perfectly calm and get a chronic illness, which we will address in the next section. But that’s not the theory. The theory is that all chronic diseases trace back to the chemicals at Level 0. That is falsifiable.
Next, parsimony. Medicine treats chronic diseases as separate entities. It catalogs over 100 distinct autoimmune diseases, multiple types of cardiovascular disease, various cancers, different neurodegenerative conditions - each with its own specialty, its own research journals, its own treatment protocols. Each disease has well-documented downstream mechanisms but murky upstream causes. When the origin is unclear, medicine labels it 'multifactorial' - a sophisticated way of saying 'we see associations but don't understand causation.' It's a framework for categorizing disease, not explaining it. Explaining less with more.
Pathostasis: All chronic diseases trace back to one stable physiological state. That state produces direct physiological chemicals, which directly and clearly cascade into many secondary disease mechanisms, which express as specific diseases depending on individual vulnerabilities. Different diseases aren't different problems, they're different expressions of the same upstream process hitting different vulnerable systems. One mechanism, unlimited expression. Explaining more with less.
This is what parsimony demands - when you have two theories that both explain the observations, choose the simpler one. Medicine requires hundreds of separate etiologies. Pathostasis requires one.
Evolution produces staggering complexity, but the mechanism is dead simple. People see the complexity of the output of these processes and assume the explanation must match in complexity. But it never does. The explanation is always a simple engine that runs for a long time and produces elaborate results.
The history of science isn't "we slowly accumulated more and more complexity until we understood everything." It's the opposite. We accumulated complexity, complexity, complexity - and then someone came along and said "what if it's just this one thing" and the whole edifice collapsed into elegance.
Heliocentrism. Evolution. Germ theory. Plate tectonics. Every single one replaced a complicated mess of special cases with a single unifying principle.
Every generation thinks they're the ones who finally got it right. The doctors who believed in humors weren't stupid. They were working within the best framework they had, and that framework had internal logic and institutional support and centuries of tradition. They would have found the idea that tiny invisible organisms cause disease laughable. Ridiculous. Obviously wrong.
And we tell that story now like "of course germs are real, how silly they were" - without ever turning the lens on ourselves and asking: what are we absolutely certain about right now that will look exactly that silly in 100 years?
But what makes paradigm shifts difficult to take hold is that they require giving something up. Accepting germ theory meant giving up miasma and humoral medicine. Accepting pathostasis means giving up the idea that each disease is its own special puzzle to solve. People don't like giving things up, especially when they've built an identity or a career around them.
So we end up in this weird place where complexity signals sophistication, when historically it's been a signal that you're missing something. When we thought the universe revolved around the earth, every new finding forced us to add another epicycle - another specific, complex explanation, a band-aid on an incorrect theory. Once we figured out we were actually revolving around the sun, all the band-aids were stripped away, no longer needed once we had the right answer.
Which version is doing that here?
Next, predictive power. A good theory should tell you what you'll observe before you observe it, which is what allows scientific tests to be constructed and run. For example, pathostasis says we should see pathostatic chemicals elevated before we see disease onset.
Pathostasis predicts disease clustering patterns that medicine observes but can't explain. It predicts which diseases will cluster based on shared mechanisms. It predicts that addressing the root cause will produce improvements across multiple "separate" conditions simultaneously. It predicts that load reduction will improve outcomes across the board.
It predicts genetic expression patterns - why some people with "disease genes" never get sick, why others get sick without the genes, and why genetic diseases only express under certain conditions.
These aren't post-hoc explanations, they're predictions the theory generates that can be tested.
Medicine's framework mostly describes associations without predicting them. Its overarching theory seems to be a vestige of germ theory: still looking at disease through the pathogen model, assuming that if you can remove or kill the thing you can see, you cure the disease. For example, medicine predicted that clearing amyloid plaques would cure Alzheimer's. They spent billions of dollars over decades of research and created drugs with minimal to no benefit; they found patients with extensive plaques and no Alzheimer's, and patients with severe Alzheimer's and no plaques. The prediction was clearly wrong. Did that lead them to revise it? No. The majority of Alzheimer's research and treatment still revolves around amyloid plaques.
Medicine predicted lowering cholesterol would prevent heart disease. As we saw, lowered cholesterol not only doesn’t prevent heart disease, it leads to worse death outcomes. Did this incentivize them to come up with a new hypothesis? No. Cholesterol levels are still being medically managed.
If you have a correct theory, and information comes to light that doesn’t fit what you thought, when you dig in further it should further clarify the theory, not muddy the waters. Sometimes things are wrong, and that can lead you to a better truth. But medicine doesn’t have a correct theory, so they just keep adding more and more complexity without adding any more truth.
Neurodegenerative diseases are clustered together by one overarching disease marker: neuronal death. Medicine thinks they all cause neurons to die, thus the grouping. But the way that medicine measures neuronal death is by staining certain markers within the cells to see how many are left. And it turns out what this is actually measuring is whether cells are metabolically active, not whether they exist at all. If a cell was dormant, rather than dead, it wouldn't show up on their tests.
Pathostasis predicts that the neurons are dormant due to hypoperfusion, not dead. If you starve a cell of blood and oxygen, it can shut down and wait for the resources to come back.
Lucid periods in Alzheimer's patients, where someone who hasn't recognized their family in months suddenly knows everyone's name and recalls old memories perfectly, would be seemingly impossible if the neurons encoding those memories are dead, but could make a lot of sense if it was caused by dormant cells being turned back on for a while because the blood flow returned.
Here are some other things that don’t make sense if neurons are dead: Good days and bad days across all these conditions. Stress making symptoms worse, and relaxation making them better. Exercise helping across all neurodegenerative conditions. The dramatic placebo responses in Parkinson's we talked about in chapter 2, where patients produced real dopamine and reduced their tremors by 70%. Dead dopaminergic neurons don't start producing dopamine because you believed in a sugar pill. Dormant ones might.
And there are documented cases of ALS reversal, where patients who met full diagnostic criteria — including EMG-confirmed denervation — recovered completely. Their motor function returned. Follow-up EMGs were normal. But because medicine doesn’t have a template for reversing or healing from chronic disease, they assumed these patients must have been misdiagnosed. Which as we discussed above, means the current medical model is unfalsifiable, which is the cardinal sin in science.
So a testable hypothesis that came out of the pathostasis framework is this: neurons that appear dead by current measures may be recoverable if perfusion is restored before true cell death occurs. Which if true would mean that these diseases could be reversible under the right conditions. Which is what those ALS reversals indicate.
This is a testable prediction. And the anomalies medicine keeps dismissing as paradoxes or misdiagnoses are exactly what you'd expect if it's true.
Pathostasis does this repeatedly - not by generating new data, but by correctly interpreting medicine's own research.
Disease clustering: Medicine observes that autoimmune diseases cluster but can't explain why. Pathostasis predicts this exactly - they're not separate diseases, they're the same upstream cascade hitting different tissues based on individual vulnerability.
Genetic penetrance: Medicine can't explain why identical mutations express so differently - why one person with the Huntington's gene gets sick at 40, another at 80, another never (we will cover the documented reality that not everyone with the Huntington's gene gets the disease in the next chapter). They call this "incomplete penetrance" and "variable expressivity" - naming the mystery without solving it. Pathostasis explains it: genes load the gun, pathostatic load pulls the trigger.
Cancer metastasis: Medicine knows surgery triggers metastasis. Medicine knows wound healing involves cell migration. Medicine knows cancer cells become migratory. But they treat these as separate phenomena. Pathostasis connects them: surgery triggers wound healing signals → epithelial unjamming → dormant cancer cells become fluid and migratory → metastasis. One mechanism, documented at every step.
Neurodegeneration paradoxes: Medicine defines Alzheimer's and Parkinson's by neuronal death - progressive, irreversible, incurable. But as we saw in chapter 4, the neurons aren't dead. They're dormant from hypoperfusion, and hypoperfusion precedes degeneration in every single neurodegenerative disease. This explains why lucid periods happen, why stress worsens symptoms, why exercise helps, why placebo can trigger dopamine activity in Parkinson's patients. Dead cells don't come back online because you believed in a sugar pill. Dormant cells do, whenever perfusion improves.
Treatment paradoxes: Why do GLP-1s improve outcomes across dozens of "separate" diseases simultaneously? Why does reducing inflammation help both diabetes and depression? Why do stress reduction techniques show benefits across multiple conditions? Medicine has no framework to explain this. Pathostasis predicts it: reduce the root cause, improve all downstream effects.
These clear explanations emerged because once you have the universal law, it organizes the truth. If you don't understand gravity, then you may have one explanation for why we don't float off the earth and another for why two objects of different weight fall at the same speed. Once you have the equation, you can start asking more relevant questions, and all the pieces fall into place.
And luckily for us, medicine has so clearly documented every single bodily process, that now that we have the right equation, the right fundamental law, when we ask the right questions, the answers just present themselves cleanly. One after the other.
Medicine has no equivalent. Each specialty documents its own findings in its own journals, and no one steps back to see how they connect. Cardiology doesn't talk to rheumatology. Oncology doesn't talk to neurology. The same upstream pattern shows up across every field, documented in thousands of papers, and no one puts it together because the framework assumes these are separate diseases requiring separate explanations.
Every step in the pathostasis cascade uses documented mechanisms. The hormones at Level 0 are measured in standard medical labs. The Level 1 and 2 cascades are recognized disease mechanisms. This isn't speculation, it's integration of existing knowledge into a coherent framework. Think about what we walked through with diabetes. Through the pathostasis framework, it isn't mysterious, a moral failing, or even dietary. Pathostatic chemicals signal the body to do exactly what we see in diabetes: sustained high glucose leading to high insulin, leading to insulin tolerance/resistance, leading to weight gain, and eventually, as we saw in late-stage diabetes, to insulin depletion and weight loss. It maps perfectly.
Medicine sees high glucose and thinks it might be diet, or weight issues, or other vague lifestyle issues. They treat it by continuing to flood the system with more insulin which makes the tolerance/resistance even more acute. They have theories about what causes it, and when things don’t fit they just hand wave them away.
Medicine talks about "risk factors" and "associations" and "mechanisms unclear." They find what's broken without explaining what broke it. Pathostasis shows the breaking in progress, mechanism by mechanism.
| Criterion | Medicine | Pathostasis |
|---|---|---|
| Falsifiability | ✗ Unfalsifiable | ✓ Falsifiable |
| Parsimony | ✗ Explains less with more | ✓ Explains more with less |
| Predictive Power | ✗ Predictions fail, no revision | ✓ Testable predictions |
| Integrative Power | ✗ Names paradoxes | ✓ Resolves paradoxes |
| Mechanistic Rigor | ✗ Associations only | ✓ Full mechanism |
Of all the criteria for a good theory, parsimony is often considered the most important - the simplest explanation that accounts for all the data is usually correct. So stepping back and looking at them as a whole, what requires fewer assumptions?
Option A: One unified mechanism - time spent in pathostasis - that manifests differently based on genetics, duration, intensity, and which system fails first.
Option B: Hundreds of separate diseases with mostly unknown causes that just happen to cluster together in predictable patterns and all correlate with stress through unclear mechanisms and respond to similar interventions for unexplained reasons.
The answer seems obvious. But if pathostasis is so clearly supported by medicine's own data, why hasn't it been recognized? Why are we still treating hundreds of separate diseases instead of one underlying mechanism?
The objections will come, just as they came for Darwin. Some will be scientific, some institutional, some purely reactive. But there's one objection that towers above all others - the one that seems so obvious it might have occurred to you already:
Of course sick people show the same dysregulated hormones we see in pathostasis, being ill IS stressful to the body. How do we know pathostasis causes disease rather than disease causing pathostasis?
It's a fair question. Maybe THE question. Because if we can't prove the direction of causation - if we can't show that pathostasis comes first - then this entire framework collapses.
So let's look at what happens when we track people from childhood, before any disease develops, and see what actually comes first...
Chapter Six
In the 1990s, researchers accidentally discovered that asking ten questions about a patient's childhood could predict disease better than anything else medicine routinely measures. It costs next to nothing to administer, takes about three minutes, and a patient would only ever need to take it once, because after age 18 the answers never change. This simple assessment predicts lifelong disease probability as well as or better than medicine's standard screening tools combined - blood pressure, cholesterol, smoking history, family history of disease - giving a more complete picture of future health outcomes than all of those expensive tests together. We've known about it for over thirty years, and I bet at this point you can guess what medicine is doing with it. Pretty much nothing.
The assessment in question is the Adverse Childhood Experiences survey, or ACEs for short. You may have even heard of it. It is so predictive of disease that between a patient who scores six or more out of ten and a patient who scores zero, it accurately predicts a difference in life expectancy of twenty years. Twenty years.
The ACEs survey consists of ten questions about your childhood before the age of 18, and you answer yes that happened to me, or no it didn’t happen to me. That’s it. So if you answered “yes that happened to me” to 6 or more of the questions, your life expectancy is on average 20 years less than your peers who answered no to all of the questions.
The original study that established how predictive this is tracked 17,421 people over decades of their lives. For contrast, most clinical trials that dictate care across the medical model we think of as healthcare today enroll an average of 65 patients and last 6-12 weeks, with larger phase 3 trials capping out around 3,000. The original study focused on a low-risk population: mostly white, middle class, college educated, and insured. Researchers asked participants these ten simple questions about their childhood and then tracked their health outcomes for decades. They found that about 60% of these people reported at least one yes on the assessment, and one out of eight reported 4 or more. And there was a direct dose-response: for each additional question answered yes, health outcomes were predictably worse. To be sure this wasn't only applicable to this homogeneous population, hundreds of studies have been run in the 30 years since, and that expanded research has replicated the same results across many different populations.
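For concreteness, the scoring really is this simple: a count of yes answers. A minimal sketch - the ten item names below are paraphrased category labels, not the survey's actual wording:

```python
# ACE scoring sketch: the score is a plain count of "yes" answers
# across ten childhood-adversity items (paraphrased labels, not the
# survey's actual question text).
ACE_ITEMS = [
    "abuse_physical", "abuse_emotional", "abuse_sexual",
    "neglect_physical", "neglect_emotional",
    "household_substance_use", "household_mental_illness",
    "mother_treated_violently", "household_member_incarcerated",
    "parents_separated_or_divorced",
]

def ace_score(answers: dict) -> int:
    """Count of 'yes' (True) answers across the ten items."""
    return sum(1 for item in ACE_ITEMS if answers.get(item, False))

answers = {"abuse_emotional": True, "parents_separated_or_divorced": True}
print(f"ACE score: {ace_score(answers)}/10")
# The study found a dose-response: each additional 'yes' predicted
# measurably worse lifelong health outcomes.
```

That's the entire instrument: ten booleans and a sum, administered once, predicting outcomes that batteries of expensive lifelong testing struggle to match.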
So here we have hundreds of studies across a 30-year timespan, covering hundreds of thousands of patients, tracking a direct correlation between childhood distress and disease. Our first, pretty robust, data point that pathostasis comes before disease. And what is medicine doing with this information? Not much. The researchers who discovered this thought it would be a landmark finding, something that would rewrite the textbooks. But as we are well aware by now, medicine isn't wont to do that, especially when the finding doesn't fit its model of find the pathogen or mutation, kill the pathogen or mutation. But we also know that medicine wants biomarkers, and that it has trouble acting on things it can't physiologically see with its own eyes. So let's look at a parallel line of research conducted over this same 30-year timespan that gave them exactly that: biomarkers showing these same findings.
The term 'allostasis' was coined in 1988 by Sterling and Eyer. They saw that black Americans had far higher rates of hypertension than genetically similar populations in West Africa: basically the same genetics, different environments, vastly different disease rates. The belief up through the 1980s was that your genes and constitution (whether you were weak or hardy) determined whether you got disease and which diseases you got. The allostasis work proved that our physiological responses weren't fixed but adaptive to context, which was a revolutionary finding at the time. Of course, we know that what they were seeing was pathostasis in action - the same genetics producing disease or health depending on environmental load.
Then in 1993 researchers Bruce McEwen and Eliot Stellar took the concept a bit further and came up with the term 'allostatic load', which they defined as multi-systemic "wear and tear" on the brain and body, experienced when repeated allostatic responses enact 'stress' on the body. McEwen saw the brain as the central coordinator of the body's stress response, meaning that the brain interprets the demands on the body and coordinates system-wide responses. Their research approach was to isolate biological parameters representing the functioning of the hypothalamic-pituitary-adrenal (HPA) axis, the sympathetic nervous system, the cardiovascular system, and metabolic processes. These biomarkers comprised what they called primary mediators, aka the stress hormones cortisol and catecholamines (part of the level 0 pathostasis chemicals), and what they called secondary mediators (which fall into our level 1 and 2 categories), which included metabolic markers (blood pressure, cholesterol, glucose, waist-hip ratio) and immune/inflammatory markers (IL-6, CRP). They found that under conditions of cumulative strain these hormones become dysregulated, and that this dysregulation then starts an interconnected 'domino effect' on biological systems that collectively collapse as individual biomarkers topple towards disease. So just to summarize: they were measuring these chemicals and their physiological effects on the body through a set of measurable biomarkers, and the findings were clear and impactful.
In one study they tracked 738 adults over 5 years, and tested 12 allostatic biomarkers - cortisol, DHEA-S, CRP, cholesterol markers, glucose metabolism, kidney function, and body composition. In this study they found that for every single additional biomarker that was out of range at baseline, people had 35% higher odds of developing type 2 diabetes, 21% higher odds of cardiovascular disease, and 15-24% higher odds of physical impairment - even though they seemed healthy by regular medical measures when originally tested.
Once they had this data they divided participants into three groups: those with 0-3 biomarkers out of range (healthy), 4-5 out of range (moderate load), and 6 or more out of range (high load). Compared to the healthy group, people with just 4-5 dysregulated markers had nearly triple the odds of developing diabetes over the next 5 years (2.78 times higher). Those with 6 or more dysregulated markers had more than double the odds of cardiovascular disease (2.32 times), and double to triple the odds of physical impairment. They weren't measuring people who were already sick - they were measuring these markers in people who seemed fine by medical standards, and then accurately forecasting their disease trajectory years before symptoms appeared. And what's remarkable is how consistent these findings are across populations. Many studies have been conducted globally showing the same patterns. The relationship held across age groups and cultural contexts. Higher allostatic load meant worse health outcomes, period. And they documented this through a specific biological pathway with biomarkers and physiological documentation.
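To make the counting logic concrete, here is a minimal sketch of an allostatic load index. Note that the reference ranges below are illustrative placeholders and this uses six markers instead of the study's twelve; real studies typically derive their cutoffs from the cohort's own distributions (often the highest-risk quartile per marker) rather than fixed clinical thresholds:

```python
# A minimal sketch of an allostatic load index: count how many biomarkers
# fall outside a reference range, then bucket into the study's risk groups.
# Ranges are placeholders, and only 6 of the study's 12 markers are shown.

REFERENCE_RANGES = {  # marker: (low, high), illustrative values only
    "cortisol_ug_dl": (5.0, 23.0),
    "crp_mg_l": (0.0, 3.0),
    "hdl_mg_dl": (40.0, 100.0),
    "fasting_glucose_mg_dl": (70.0, 100.0),
    "systolic_bp_mmHg": (90.0, 130.0),
    "waist_hip_ratio": (0.0, 0.9),
}

def allostatic_load(markers: dict[str, float]) -> tuple[int, str]:
    """Return (count of out-of-range markers, risk bucket from the study)."""
    load = sum(
        not (lo <= markers[name] <= hi)
        for name, (lo, hi) in REFERENCE_RANGES.items()
        if name in markers
    )
    if load <= 3:
        bucket = "healthy (0-3 out of range)"
    elif load <= 5:
        bucket = "moderate load (4-5)"
    else:
        bucket = "high load (6+)"
    return load, bucket

print(allostatic_load({
    "cortisol_ug_dl": 26.0, "crp_mg_l": 4.1, "hdl_mg_dl": 38.0,
    "fasting_glucose_mg_dl": 104.0, "systolic_bp_mmHg": 138.0,
    "waist_hip_ratio": 0.95,
}))  # -> (6, 'high load (6+)')
```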
To date, 2,465 articles (as of this publication) informed by the allostatic load model have expanded stress science theory, research, and clinical perspectives. But they only took it so far. They can see that cumulative stress 'contributes' to disease broadly. What they don't do is connect the dots and see that it's actually the upstream driver of all of it. But in all fairness, they got really close. The problem is that the way even allostatic load researchers frame it is: "reduce external stressors, improve coping skills, practice stress management". Exactly what we know doesn't actually work most of the time. But despite this limited interpretation of the data, this is still monumentally important, and it predicts and explains disease far better than anything medicine is currently pointing to as a reason or a cause. And yet this whole body of research is mostly not taught in med school. If you happen to be studying one of a handful of fields you may get some education in it (public health and epidemiology, health disparities and social determinants, psychoneuroimmunology, or some specialized psychiatry training), but barring a specialty in those fields, which most doctors actually treating patients do not have, you can go through all 11-15 years of medical training hearing very little or nothing at all about allostatic load research and what it means about disease and health. So medicine has thirty years of research showing that allostatic load levels predict who gets sick and who stays healthy better than the biomarkers doctors actually use. And it's treated as a niche topic for public health researchers.
So there's our answer to the central objection we mentioned at the end of the last chapter as to whether disease comes first, or pathostasis does. ACEs research shows childhood adversity predicts adult disease decades later - before any disease develops. Allostatic load research shows the biomarkers are elevated years before symptoms appear. These are huge bodies of research and they have very clearly shown that pathostasis comes first, documented and measurable, with disease following predictably behind.
Of course this won't be the only objection. The implications of pathostasis are too far-reaching, too disruptive, to accept without scrutiny. So let's take a minute and address some of the other most likely objections and see how they hold up. And let’s look at how those same objections would hold up if the mirror were turned on medicine itself as well, and see which theory has better answers.
The first objection is the obvious one: hundreds of distinct diseases are surely too complex to share a single cause. But Darwin didn't need a separate explanation for each individual species, because he had the unifying mechanism. Natural selection acting on variation explained all of it. The complexity wasn't in having multiple causes, it was in how one principle expressed across different contexts, different environments, different timescales.
Pathostasis: One upstream cause expressing through interconnected systems in context-dependent ways. Individual diseases emerge based on genetic vulnerabilities, prior damage, environmental factors, which systems are under most strain. The complexity is in the expression, not the cause.
Medicine: Medicine doesn't have "complex explanations" - they have hundreds of incomplete explanations that don't connect to each other. When they see patterns they can't explain, they call it "comorbidity" or a “paradox” or “multifactorial” and treat each disease separately anyway. That's not accounting for complexity - that's missing the pattern.
The word "stress" has become so vague it means everything and nothing. Humans are built to experience a lot of stress on a daily basis. We evolved over millions of years to be hunter gatherers, and that includes dealing with a lot of very stressful things like finding food that may or may not be available, watching for and avoiding predators, and existing without consistent shelter. That kind of stress, acute stress, causes our body to go into sympathetic activation, and then once our brain signals to our autonomic nervous system that the threat has passed, it shifts us back into our normal homeostatic physiological state. The confusion comes from conflating "stress" - the vague cultural term we use for feeling overwhelmed - with the specific physiological state pathostasis describes.
Pathostasis isn't "feeling stressed." It's a precisely defined physiological state with measurable chemical changes, documented cascades, and predictable disease outcomes. It's the state your body enters when the stress response activates and then fails to deactivate, remaining locked in a configuration designed for short-term survival but destructive when sustained.
Medicine acknowledges stress is "a factor" in disease, along with diet, lifestyle, and genetics, but they treat it as a vague psychosocial factor rather than the primary physiological driver, because they are thinking about it in terms of having too much work to do, or not taking enough breaks. In fact they have no clear causal explanation for any chronic illnesses; just risk factors and associations.
Pathostasis isn't making claims about your emotional state or life events. It’s describing a measurable physiological configuration. We’ll discuss in Section Four what actually creates and maintains pathostasis.
The point here is simple: medicine can't explain what causes most (any?) chronic diseases. Pathostasis points to a distinct, measurable physiological state that maps directly and causally onto every chronic disease we have documented.
The next objection: if stress caused disease, everyone would be sick. This is like saying "if smoking caused cancer, every smoker would have cancer." Causation doesn't require 100% occurrence; it requires that the mechanism reliably produces the outcome at the population level, which pathostasis cleanly does.
As we mentioned above, being “stressed” in the colloquial sense, and being in pathostasis are not the same thing. We were meant to be able to handle everyday stress. Not everyone who is stressed gets into pathostasis, what matters is degree and duration. If your spouse dies and you're already carrying years of pathostatic load, that additional stress might be when cancer develops. If you were healthy with low load and your spouse dies and you're deeply stressed for six months, you likely won't get sick - your system can handle acute stress, even prolonged acute stress, because that's what it was designed for. Pathostasis isn't about having a hard year. It's about a system that got stuck and stayed stuck.
Medicine also can't explain why some people get diseases and others don't. They invoke "genetics," "environment," "lifestyle," "bad luck" - a constellation of factors with no unifying principle for why disease develops when it does.
We just walked through ACEs research tracking childhood adversity predicting disease decades later, and allostatic load research showing elevated biomarkers years before symptoms appear. The direction of causation has been documented across hundreds of studies and hundreds of thousands of people. This isn't correlation - it's tracking cause before effect.
Medicine on the other hand uses correlation and treats it as causation all the time. They find associations between genes and disease, between biomarkers and disease, between lifestyle factors and disease. They catalog what's broken without explaining what broke it. That’s like looking at a runny nose and claiming it’s the cause of colds. When they claim obesity causes diabetes or that amyloid plaques cause Alzheimer's, they're pointing to correlation and calling it causation - exactly what this framework will be accused of doing.
This is worth examining closely because it reveals something important about how medicine conducts research and draws conclusions.
In 1993 they discovered that Huntington's disease is caused by a specific genetic mutation - a CAG repeat expansion in the huntingtin gene. Medicine identified this by studying families where Huntington's ran across generations, found the mutation in affected individuals, and concluded the mutation causes the disease with near-complete penetrance, meaning they were saying that pretty much everyone with the gene will get the disease. But the thing is, they only tested people who already had Huntington's disease, or who had family members with Huntington's disease. They found the mutation in people with the disease, confirmed it was inherited, and declared it fully penetrant. Which is, for the record, the definition of selection bias.
And when researchers finally did screen the general population 23 years later in a 2016 study of over 7,000 people, they found that approximately 1 in 400 individuals carry the expanded CAG repeat associated with Huntington's disease. That's 250 per 100,000 people. But Huntington's prevalence is only about 5-10 per 100,000. Which means roughly 96-98% of people with 'the Huntington's gene' never develop Huntington's disease. For context, Huntington's expression depends on how many times the CAG sequence repeats - anywhere from 36 to 41+ repeats is considered the "disease range," with higher numbers associated with more severe symptoms and earlier onset. When they finally looked at the general population instead of just symptomatic families, they found that having the mutation doesn't actually mean you'll develop the disease. Up to 86% of people with 36 repeats - the low end of the range - never got the disease in their lifetime. They found 10 people aged 67-95 with 36-39 repeats who showed no signs of Huntington's. Even at 40-41 repeats, traditionally called the "full penetrance" range where the disease should be inevitable, they documented asymptomatic carriers. The mutation is far more common than the disease. Most carriers never get sick.
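The penetrance arithmetic is worth writing out. As a rough lifetime-risk approximation (it ignores age structure, which doesn't change the punchline), using the carrier and prevalence figures above:

$$P(\text{disease} \mid \text{carrier}) \approx \frac{\text{disease prevalence}}{\text{carrier prevalence}} = \frac{5\text{--}10 \text{ per } 100{,}000}{250 \text{ per } 100{,}000} = 2\text{--}4\%$$

Which is exactly where the 96-98% figure comes from: if only 2-4% of carriers ever express the disease, then 96-98% never do.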
So as rigorous research standards make very clear, if you only test people who have the disease or are in families where it appears, and you find a genetic mutation in those people, you can't conclude that everyone with that mutation gets the disease. That is correlation, not causation, which means you can only conclude that everyone with the disease has the mutation - which is a very different claim. But despite this, the belief that genetics are determinative still holds strong, even though we know it's not true. And this same flaw runs through genetic research on most chronic diseases. They study sick people, find genetic variants that are more common in sick populations than healthy ones, and declare those variants "cause" disease. But they're not screening everyone with those variants to see how many never get sick. They're not investigating what determines expression versus non-expression. They're looking only where they expect to find disease, then concluding what they find is universal.
Pathostasis acknowledges genetic variants matter - they determine your vulnerabilities, where disease shows up first when the cascade runs. But genes aren't destiny. They're weak points that only matter when the system comes under the kind of sustained stress that pathostasis describes. The genetic research medicine loves to cite as proof of biological causation is riddled with this same selective sampling problem. Millions of people carry the same "disease genes" and never get sick, and millions more get sick without the genes, and medicine just shrugs and says disease is "multifactorial and complex."
The objections we’ve just gone through were probably the biggest ones, but let’s run through just a few more really quickly:
What about remission and recovery? Pathostasis: Remission and recovery happen when you turn Level 0 down or off, and address the pathostatic conditioning and structural remodeling that develop during the course of disease - we'll discuss both in detail in section four.
Medicine: Views chronic disease as permanent and progressive. They use the word 'remission' rather than 'recovery' or 'cure' because they view chronic disease as something you manage, not something you reverse. Even when someone's diabetes completely resolves or their autoimmune markers normalize, medicine frames it as 'remission' - implying the disease is still lurking and could return at any time. They have no framework for true recovery because they don't understand what caused it in the first place.
What about health disparities? Pathostasis: Load is higher in environments with poverty, discrimination, instability, pollution, and food insecurity. Same mechanism, different exposure levels.
Medicine: Attributes health disparities to access to care, environmental toxins, diet quality, healthcare literacy. Each disease studied separately for social determinants.
What about diseases that run in families? Pathostasis: Two things run in families: (1) genetic vulnerabilities determining which systems fail first, and (2) pathostatic conditioning transmitted through co-regulation, modeling, and shared environment. Both create familial clustering, and together they account for it without requiring specific disease genes.
Medicine: There are two distinct ways medicine explains this. One is through genetic mutations like we talked about in Huntington’s disease, where they have actually found a genetic marker and (eventually) studied how often that marker led to the disease expressing (causing disease). And the other is what they call heritability. For example they say schizophrenia has 80% heritability. It’s important we lay out what that actually means, because if you don’t dive into the details it sounds like a pretty strong correlation, as if 80% of the disease is determined by genes, or 80% of people with family history will develop it.
To come up with that number, researchers found twins where at least one had been hospitalized for schizophrenia, and asked them to participate in their survey. Then they determined whether the other twin also had schizophrenia, calculated concordance rates (how many sets of twins BOTH had schizophrenia vs how many had only one twin with it), and compared identical twins (100% genetic similarity) to fraternal twins (50% genetic similarity). They found that only 15% of identical twins were both affected in the largest study - with other studies ranging up to 28% - not the near-100% you'd expect if genes were truly determinative.
Then they used a formula that I will walk through here: 1.) Take the difference between identical and fraternal twin concordance. 2.) Double that number (we'll come back to why). 3.) Apply an adjustment for all the twins who don't have the disease yet, on the assumption that the same percentage will eventually get it but just haven't yet. 4.) Call the resulting number heritability. By the time you get from "15% of identical twins both have it" - which is what the largest study, covering over 30,000 twin pairs, actually found - to "80% heritable," you've passed through so many layers of statistical transformation and assumption-baking that the final number has almost no intuitive relationship to the original observation.
Actually let’s look a bit more at why they double the number because it’s…well…just see for yourself. So the thinking is that since identical twins share 100% of genes, and fraternal twins share 50%, the difference in concordance must represent that extra 50% of genetic sharing. Double it to get the "full" genetic effect. Except…the difference is already represented here. That's the entire reason you're comparing these two populations in the first place. The whole point of the twin study design is that you're directly observing what happens at 100% genetic sharing versus 50%, that’s what you were measuring. Adding it again is literally doubling the difference.
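For reference, the doubling step is the classical Falconer estimate. The textbook rationale is that identical twins share all additive genetic variance while fraternal twins share half, so the concordance difference is read as half the genetic effect. Taking the 15% identical-twin figure from the largest study and a hypothetical fraternal concordance of 3% (the published values vary by study), the raw arithmetic looks like this:

$$h^2 = 2\,(c_{MZ} - c_{DZ}) = 2\,(0.15 - 0.03) = 0.24$$

Even granting the doubling, the raw numbers land around 24%, nowhere near 80%. The remaining distance is covered by the additional modeling layers described above - transformations onto an assumed underlying 'liability' scale and adjustments for twins presumed to get sick later - which is exactly the assumption-baking problem.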
And the reason they did this strange calculation in the first place was because they couldn't find any genetic markers at all to explain schizophrenia diagnosis. So if you have 100% of the same genes as someone else, you have at best roughly a 1 in 4 chance of sharing their diagnosis - and possibly as low as 1 in 7. And since anyone in your direct or extended family who isn't your identical twin shares far fewer of your genes, those odds drop even lower. So having schizophrenia in the family means…not a whole lot, other than indicating which system might be vulnerable if you experience sustained pathostatic load - not that you're genetically destined to develop the disease.
So we’re saying that pathostasis explains everything: cardiovascular disease, metabolic disease, autoimmune conditions, neurodegenerative diseases, cancer. The mechanism is the same, the cascade is documented, the pattern is clear.
And depression, anxiety, schizophrenia - they're right there alongside heart disease and diabetes. According to this framework, Parkinson's and panic disorder are the same mechanism hitting different vulnerable systems. Alzheimer's and depression are both just pathostasis affecting the brain.
Which means the entire structure of modern medicine - the fundamental division between medicine and psychiatry, between "physical" and "mental," between diseases treated by cardiologists and diseases treated by therapists - is built on a false premise.
So how did we end up here? How did medicine decide that symptoms in the heart are fundamentally different from symptoms in the brain, when both are just organs responding to the same upstream cascade? Why is depression sent to psychiatry while chronic pain goes to rheumatology, when they cluster together following the exact same pattern?
It turns out the answer isn't medical, and it’s not the least bit scientific. It's historical and institutional. And understanding how this divide happened, and why it's persisted despite all evidence to the contrary, will help clarify a lot of things.
Chapter Seven
Back in the 1600s, in the time of Descartes, the prevailing belief was that the body and soul were one thing. Diseases were considered spiritual or moral failings. And because of this, the body couldn't be studied as just a body, and diseases weren't something to be scientifically understood - they were to be understood spiritually as well. And remember that back in those days theology and law were far more intertwined than they're supposed to be today. The church dictated what was and wasn't allowed. Which meant that people weren't allowed to study disease and physiology outside of the orthodox framework of the time.
To fully illustrate the authoritative reach of the church, and what happened when you spoke out against their beliefs, let's look quickly at Galileo. In 1633, Galileo claimed that the earth moved around the sun, which was contrary to the belief at the time that the earth was at the center of everything. Because of this (astute) assertion that went against their beliefs, he was condemned by the Catholic Church and was sentenced to house arrest for the rest of his life. He died there nine years later at the age of 77. And René Descartes, who had just finished writing a book defending the very same idea, seeing the church's reaction, immediately withdrew his book from publication.
So when Descartes published his next book in 1641, Meditations on First Philosophy, he was very careful to draw a line. The soul, he argued, was immaterial - and at that time, 'soul' meant everything we would now call 'mind': thought, consciousness, emotion, the inner life. That domain belonged to the Church and religion. But the body was something else entirely: a physical machine that operated according to mechanical laws, and could be studied by science without threatening religious authority. This let him carve out space for scientific inquiry while appearing to leave the Church's domain intact. He even sought the Church's approval by dedicating the book to theologians. While some theologians were suspicious, the conceptual divide had been introduced: body and mind were separate substances, operating by different rules, belonging to different authorities. When they eventually did ban his works in 1663, it was too late; the separation had already started formalizing on the institutional level, and was too hard to reverse. And the split he created to protect science from the Church became the framework that would blind science for the next four hundred years. At that time medicine, finally free to conduct business as they saw fit without church oversight, understandably ran with it. Over the next two centuries, what started as a philosophical framework to enable scientific study, transformed into a complete institutional separation between "diseases of the mind" and "diseases of the body," as though they were fundamentally different categories requiring entirely different systems of care.
Of course, this separation didn't happen all at once; it evolved gradually through a series of institutional decisions that seemed perfectly practical and reasonable at the time. In the early 1800s, as cities grew and industrialization accelerated, European societies had to figure out what to do with people who were acting strangely, or who couldn't work, or were disruptive or distressed. The answer, arrived at more or less independently across multiple countries, was to create specialized institutions to house them. Britain passed the County Asylums Act in 1808, leading to the first public asylum opening in Nottinghamshire in 1812, and similar institutions began sprouting up across Europe and America. These weren't hospitals in the medical sense - they were basically just places to keep people with weird anti-social seeming behaviors separate from the rest of society, essentially functioning as custodial facilities.
So now there were these asylums filled with sick patients who needed care, including needing doctors who could attend to them, but regular physicians didn't want to work there. The work didn’t feel 'medical' in the way medicine was coming to understand itself in the germ theory era. There were no clear diseases to identify, or pathogens to kill, and there were no surgical procedures to perfect. So to fill this clear gap, a new specialty emerged, initially called "alienists" - doctors who specialized in treating people whose minds had become "alienated" from reason. And by the late 1800s, this specialty had evolved into what we now call psychiatry. Because these asylums were pretty much completely separate from regular medical hospitals, the psychiatric doctors who worked in them were free to develop their own theories, and treatments, and even their own professional organizations, deepening the structural divide.
In 1871, the Association of Medical Superintendents of American Institutions for the Insane, essentially the early psychiatric professional organization, was invited by the American Medical Association to merge into one umbrella organization, but they declined. Their reasoning was straightforward and, given the context of the time, completely reasonable: their meetings were focussed solely on the care of the insane, and they didn't feel like regular doctors were very interested in that. So why merge and muddy the waters with people whose focus lay elsewhere? It seemed better to stay separate so each could focus on their own domain.
By 1894, the separation had become so complete that American neurologist Silas Weir Mitchell stood before a gathering of asylum physicians and delivered a scathing critique - they had isolated themselves from the rest of medicine, their medical records were inadequate, and their research was nonexistent. Asylum psychiatry had developed in complete isolation from the rest of medicine. They weren't attending the same conferences, reading the same journals, or training in the same institutions.
And once these institutional structures were in place, they became self-reinforcing. Medical schools developed separate training tracks - you either studied to be a psychiatrist or you studied to be a regular doctor. Insurance systems created separate coverage categories and different reimbursement structures. Research funding got divided into mental health research and medical research. Hospitals were either psychiatric hospitals or general hospitals, but rarely both. Professional journals specialized in either psychiatry or medicine. Even the language diverged - psychiatrists talked about mental illness, affect, and behavior, while other doctors talked about physiology, organs, and biomarkers.
So given the divide, psychiatry had to figure out how to create diagnoses without medicine's standard toolbox. The solution, eventually, was the Diagnostic and Statistical Manual of Mental Disorders - the DSM. First published in 1952, and now in its fifth edition, the DSM has become psychiatry's defining document, the authoritative guide to what counts as a mental disorder and what doesn't.
If you crack open the tome that is the DSM and really look at the diagnoses laid out inside, you will see that it's not made up of clearly defined diseases, but rather a collection of different expressions of distress or dysfunction that have been grouped together and labeled. Imagine a deck of cards spread out on your table. What the DSM did is take the cards of disease expression - sleep issues, trouble concentrating, avoiding public places, and so on - and notice that some of them clustered together in people often enough to deserve an overarching name. They basically picked up 5 cards and called it a disorder. Then they threw those cards back in the pile so they'd be available to the next diagnosis they were creating that might include them as well. Not because the biology suggested these clusters had clear boundaries, but because the system they were working within required concrete definitions. You need discrete diagnoses to bill insurance. You need specific disease categories to get FDA approval for drugs. You need defined patient populations to design research studies. You need clear diagnostic criteria to train psychiatrists in a standardized way. The DSM gave psychiatry all of that. It made the field look scientific, systematic, and organized. But in doing so, it also created categories that ended up obscuring more than they revealed.
Look at Major Depressive Disorder and Generalized Anxiety Disorder, two of the most commonly diagnosed conditions in psychiatry. To meet criteria for Major Depressive Disorder, you need five of their nine symptom cards; for Generalized Anxiety Disorder, three of six. But both lists include fatigue, sleep disturbances, and difficulty concentrating. So you could be diagnosed with Generalized Anxiety Disorder with just those three cards, but not with depression, because there you'd need two more cards to get to play. So where exactly does one disorder end and the other begin? If someone has four symptoms from the Major Depressive Disorder list, they have nothing - no diagnosis, no treatment, no insurance coverage. But if they have five symptoms, suddenly they have a disease. The cutoff is arbitrary.
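Here's the card game in code - a toy model only, with abbreviated symptom labels, and with the core gating criteria (like persistent excessive worry for Generalized Anxiety Disorder) omitted for simplicity:

```python
# A toy model of DSM-style categorical diagnosis: two disorders built from
# overlapping symptom "cards". Labels are abbreviated paraphrases, and core
# gating criteria (e.g., persistent excessive worry for GAD) are omitted.

MDD_CARDS = {  # Major Depressive Disorder: need 5 of 9 cards
    "depressed mood", "loss of interest", "weight change", "sleep disturbance",
    "psychomotor change", "fatigue", "worthlessness", "poor concentration",
    "thoughts of death",
}
GAD_CARDS = {  # Generalized Anxiety Disorder: need 3 of 6 cards
    "restlessness", "fatigue", "poor concentration", "irritability",
    "muscle tension", "sleep disturbance",
}

def diagnoses(symptoms: set[str]) -> list[str]:
    """Return every label whose arbitrary cutoff the symptom set clears."""
    labels = []
    if len(symptoms & MDD_CARDS) >= 5:
        labels.append("Major Depressive Disorder")
    if len(symptoms & GAD_CARDS) >= 3:
        labels.append("Generalized Anxiety Disorder")
    return labels

# The same three shared cards clear GAD's cutoff but not MDD's:
print(diagnoses({"fatigue", "poor concentration", "sleep disturbance"}))
# -> ['Generalized Anxiety Disorder']

# Four cards from the MDD list: no diagnosis at all. Add a fifth, and a
# "disease" appears.
four = {"depressed mood", "loss of interest", "worthlessness", "thoughts of death"}
print(diagnoses(four))                # -> []
print(diagnoses(four | {"fatigue"}))  # -> ['Major Depressive Disorder']
```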
PTSD symptoms overlap massively with panic disorder, depression, and generalized anxiety disorder. Fibromyalgia, classified as a physical condition (sort of) has symptoms that overlap substantially with depression and anxiety. Chronic fatigue syndrome has a lot of overlapping symptoms with depression. Parkinson's patients commonly experience depression, anxiety, and fatigue - often years before the tremor appears. Alzheimer's patients frequently develop depression and sleep disturbances before memory problems. Heart disease patients show high rates of depression and anxiety. These aren't separate diseases that just happen to co-occur together. "Comorbidity" rates in psychiatry are so absurdly high (it's not unusual for someone to meet criteria for three, four, or five different DSM diagnoses) because we've taken one continuous phenomenon and sliced it into multiple overlapping categories.
And the way psychiatry has organized these diagnoses makes it nearly impossible to see the mechanism. If depression, anxiety, PTSD, panic disorder, and OCD are all separate diseases with separate causes requiring separate treatments, then you're looking for what's unique about each one. You're trying to find the "depression pathology" or the "anxiety pathology" or the "OCD pathology." But if we were to understand that they're all just different manifestations of the same thing - the body stuck in stress physiology - then we could finally start looking for what they have in common instead.
The DSM served important institutional purposes. It gave psychiatry credibility as a medical specialty. It created a common language for clinicians and researchers. It enabled insurance billing and drug development. But in creating discrete diagnostic categories, it reified the idea that these are separate diseases rather than different expressions of the same underlying dysregulation. It made the pattern harder to see by drawing artificial boundaries that don't exist in biology or symptom expression. And it further obscured the connection between "mental" and "physical" disease.
Eventually, psychiatry did try to find biological explanations for their DSM categories. This gave us the 'chemical imbalance' theory of mental illness - the idea that depression was caused by low serotonin, that anxiety was a GABA problem, that schizophrenia was about dopamine. Finally, psychiatry thought, we have our pathogens: imbalanced neurotransmitters. We have our treatment: drugs to fix the imbalance.
Except it didn't work. Decades of research have now shown that SSRIs don't outperform placebo in mild to moderate depression, and that they only look slightly better in severe depression because severely depressed patients respond poorly to placebo - the placebo arm drops, making the active arm look more effective by comparison. A landmark meta-analysis by Kirsch in 2008, which included unpublished trial data, found that the difference between antidepressants and placebo was clinically negligible. The simple chemical imbalance theory was wrong.
What happened was they noticed serotonin seemed to improve people’s moods, so they started testing depressed people to see what their serotonin levels looked like. What they actually saw was that in some depressed people serotonin was elevated and in some people it was depleted, which baffled them, but if you look back at Table 1 in chapter 4, you will see that initially when we enter into acute stress our serotonin levels elevate. Over time as we get stuck in pathostasis, the system depletes and that’s when you see chronically low levels. So researchers were testing people at different stages of pathostasis, not understanding what they were seeing, and still thought the answer was to further disrupt an already dysregulated system.
What SSRIs actually do is inhibit reabsorption of serotonin. Normally serotonin is released, floats across the gap between neurons (the synapse), and binds to receptors on the receiving neuron, and that binding becomes a signal. Then it gets pulled back into the original neuron to be reused.
By blocking reuptake, the serotonin stays in the synapse longer, so it has more opportunities to keep binding to those receptors, and keep signaling. By staying in the gap longer it keeps hitting the receptor over and over instead of being cleared away.
Think of it like a text message. Normally you send it and it gets read one time. What SSRIs do is more like the message keeps re-appearing in someone's inbox, so they keep reading it again and again. More signal events from the same serotonin. But just like every other time we try to artificially adjust the body's chemistry over the long term, the brain adapts. If the receiving neuron is getting bombarded with serotonin signaling, it goes "okay, this is too much" and starts downregulating its receptors. Pulling them back. Becoming less sensitive. This is the same tolerance we talked about with insulin in diabetes. So you end up with more serotonin floating around than you started with, but that amount becomes the new normal - the receptors adapt, and now you have fewer of them, or less responsive ones, to receive it.
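Here's that adaptation loop as a toy model - a minimal sketch of the logic just described, not a pharmacological simulation; every number is an arbitrary illustrative value:

```python
# A toy model of reuptake inhibition and receptor downregulation. This only
# illustrates the adaptation logic described above; all values are arbitrary.

def simulate(weeks: int, reuptake_block: float) -> list[float]:
    """Track perceived signal = synaptic serotonin x receptor sensitivity."""
    serotonin = 1.0     # baseline synaptic serotonin (arbitrary units)
    sensitivity = 1.0   # baseline receptor sensitivity
    baseline_signal = serotonin * sensitivity
    signals = []
    for _ in range(weeks):
        # Blocking reuptake leaves more serotonin sitting in the synapse.
        level = serotonin * (1.0 + reuptake_block)
        signal = level * sensitivity
        signals.append(signal)
        # Homeostatic adaptation: if signaling runs above baseline, the
        # receiving neuron slowly downregulates its receptors.
        sensitivity -= 0.1 * (signal - baseline_signal)
    return signals

for week, s in enumerate(simulate(weeks=12, reuptake_block=0.5), start=1):
    print(f"week {week:2d}: signal {s:.2f}")
# The signal jumps to ~1.5x at first, then drifts back toward 1.0 as the
# receptors adapt - more serotonin in the synapse, same old signal.
```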
This is the same pattern we see across medicine when they try to artificially replace depleted chemicals. It works for a while - then tolerance develops. The body adapts to the artificial supply, downregulates its own production or sensitivity, and you need more to get the same effect. Eventually the drug stops working, or you become dependent on it just to maintain a baseline. We saw this with insulin in diabetes. We're seeing it with GLP-1s. Psychiatry saw it with SSRIs. When you add back a dysregulated chemical without addressing why it's depleted in the first place, you're not treating the disease - you're creating dependence while the upstream cause keeps running.
And this keeps happening despite this clear issue because drug trials are typically conducted over 6-12 weeks, and SSRIs are no exception. The average amount of time people stay on antidepressants is 5 years, with 25% staying on them for 10 years or more - yet we really only tested them as a short-term bandaid. The fact that drug testing for chronic diseases is so short is pretty ludicrous if you think about it. Think about alcohol tolerance. If you needed to test how drunk someone gets from a specific daily amount of alcohol, and you only tested for 6 weeks, they'd probably get pretty drunk off of, let's say, 6 beers a day for those 6 weeks. But someone who's been drinking a 6-pack every day for 5 years? We all intuitively know that at some point those beers likely have no effect on that person at all. This is how most long-term medication works too.
Psychiatry was looking in the brain for something specific to 'mental' illness because that's the organ they'd been assigned. They were trying to find the 'depression chemical' or the 'anxiety receptor' because the institutional framework demanded that mental illnesses be brain diseases, separate from the body. The framework itself guaranteed they'd miss what was actually happening.
So psychiatry hasn't been any more successful than the rest of medicine. Just like with diabetes, heart disease, and IBS, psychiatry found some mechanisms, but none of them led to cures. But despite the gaps in their understanding and ability to effectively help these patients, like medicine, psychiatry kept diagnosing. Millions of people were given labels - Major Depressive Disorder, Generalized Anxiety Disorder, Panic Disorder - complete with narratives about chronicity, prognosis, and what to expect. And this raises a question we haven't addressed yet: if the treatments don't work better than placebo, what exactly are these diagnoses for? What happens when we give someone one of these diagnoses - medical or psychiatric? Medicine treats diagnosis as though it's a neutral act of observation - we're simply naming what's already there, documenting reality. But what if it's not neutral at all? What if the act of giving someone a diagnosis is itself an intervention that changes what happens next?
In Chapter 2, we saw that expectation produces biology. When patients believe treatment will help, their bodies respond - blood pressure drops, tremors reduce, pain decreases. Real, measurable physiological changes driven by belief. This expectation effect is so strong it can even override and reverse the expected effects of a drug like ipecac. So given that we have 70 years of data proving just that in trial after trial after trial, what about the other side of that equation? If expectation can produce healing, can it also produce harm? If believing you'll improve creates improvement, does believing you're disordered create disorder? What happens when we hand someone a psychiatric diagnosis - when we tell them 'you have this condition, here's what it means, here's how it will affect you'? Are we describing a pre-existing reality, or are we teaching their nervous system how to organize itself around a new identity, creating the very patterns we claim to be documenting?
Chapter Eight
In 1979, at just 18 years old, Michael J. Fox dropped out of high school in Canada and moved to Los Angeles to pursue acting. He was so broke that he was literally dumpster diving for food, taking packets of jam from the local IHOP, hiding from his landlord because he couldn't pay rent, and selling his sectional sofa off piece by piece to avoid admitting defeat and going home. He lived like this for three years before, in 1982, he got a call from his agent saying he'd landed a supporting role on a show called Family Ties. The show was originally supposed to focus on the parents' lives with the children as side characters, but viewers fell so hard for Michael J. Fox's character Alex P. Keaton that after just four episodes, they restructured the entire show around him. And pretty much overnight, he became one of Hollywood's sweethearts.
Two years later, when Robert Zemeckis and Steven Spielberg were casting Marty McFly for Back to the Future, they knew immediately they wanted Fox for the starring role. But the producer of Family Ties, Gary David Goldberg, was worried that if Michael J. Fox got the role in Back to the Future he would leave Family Ties, and that the show wouldn't survive if that happened. So he hid the Back to the Future script from him completely. This left Zemeckis and Spielberg with no option but to cast someone else, which they did, but after only a month of shooting with this other actor they realized it wasn't working. The kid they'd gotten to play the part wasn't fitting the role the way they had envisioned, and they couldn't seem to find a way to make it work, so they doubled down on trying to get Fox. On January 3, 1985, Goldberg finally told him about the script he'd been hiding, and Fox agreed to join on the spot - without even reading it. And by January 15th, less than two weeks later, they were already filming.
The agreement was that he could do the role as long as it didn’t stop him from also shooting Family Ties. So for three months straight, he worked on Family Ties from nine in the morning until six at night, then got driven to the Back to the Future set where he'd shoot until 3 in the morning. His Teamster drivers would literally carry him to bed, where he'd get maybe four hours of sleep before starting the whole cycle again. Twenty-hour days, every day, for months. He later wrote that during this period he was "Alex, Marty, and Mike" - and that was two too many. Mike, the actual person, had to disappear.
Back to the Future exploded that July, staying number one for eleven weeks. Fox went from TV star to movie star, and the offers started pouring in: Teen Wolf, The Secret of My Success, Bright Lights Big City, Casualties of War. He said yes to all of it. Between 1985 and 1991, while still filming Family Ties, he made ten feature films. He had become Hollywood's golden boy. He was working around the clock, and he never took a break.
Then in 1991, while filming Doc Hollywood, he noticed his left pinkie twitching. He went to see a neurologist who assured him he'd probably just injured his funny bone. But six months later his entire left hand was trembling, his shoulder was stiff and achy, and another doctor ran tests and diagnosed him with Parkinson's disease. He was 29 years old.
The doctor told him he had "about ten good working years left.” He used words like progressive, degenerative, and incurable. And then he told him "You don't win this, you lose".
After his diagnosis and being cautioned he had "ten good working years left", he hastily signed a three-film contract, appearing in For Love or Money (1993), Life with Mikey (1993), and Greedy (1994). And he also started drinking (a lot) as he grew increasingly depressed (I mean...who wouldn’t?), using the alcohol to "disassociate, to escape my situation". And he spent the next seven years in a sort of denial, hiding his condition, popping dopamine pills "as if they were Smarties candies" and washing them down with alcohol, working to time everything perfectly to keep his acting intact.
And…of course he was falling apart. Think about what getting a Parkinson's diagnosis actually looks like in practice. You go into the doctor's office with a small symptom, probably some shaking. They run some tests, and when they come back they tell you: you have an incurable, progressive disease. That means you will have this for the rest of your life, it will keep getting worse, and there's not a whole lot we can do for you. We have medications to help manage the symptoms, so we will write a prescription for you today, which should help with the shaking at least for a while. Here's a list of symptoms you can expect.
When you notice these symptoms, please let us know, and we can run more tests and decide if we need to adjust your medication. You should talk to your loved ones about this. Here are some resources for you.
We all take the way this is handled medically as a given, but let’s actually think about what just happened. You walked into the doctor’s office with a seemingly small symptom, a tremor, and walked out with a life sentence, from someone you've been conditioned to trust as an authority on your health. And you're already in pathostasis - you have to be, the symptoms prove it. Your body has been stuck in pathostasis long enough that it's starting to break down in visible ways. So at the exact moment when reducing your pathostatic load would be most critical, medicine just dumped a massive stressor on top of an already overloaded system.
Which raises the question: what does it mean to receive a diagnosis? Medicine treats it as a benign thing. Someone has a condition; you educate them about that condition so they can manage it more effectively. Within their model, this makes perfect sense. You can't work on something you don't know exists.
But pathostasis changes the calculation entirely. Yes, it's reversible - but it's also progressive. Not progressive the way medicine uses the term, not the way Fox's doctor meant it when he said the disease would inevitably worsen until Fox ‘lost’. Pathostasis being progressive just means the longer you spend in it, the more entrenched the dysfunction becomes. The deeper you dig the hole, the more work it takes to climb back out.
Which means early intervention matters a lot. Which for the record is WHY medicine is providing these diagnoses. They want to act fast and try to manage the condition before it gets worse. Within their framework and with their current understanding, that makes perfect sense. But when you're wrong about what you're diagnosing, instead of helping, these diagnoses can and do actually deepen the pathostatic load.
Let’s take a minute to look at what medicine knows about this diagnostic effect.
In January 2022, researchers wanted to know more about what kinds of side effects people were experiencing from the COVID-19 vaccination, so they conducted a big systematic review. They compared results across multiple clinical trials: what side effects people who received the actual COVID vaccine experienced, versus those who only received a placebo injection (just saline, no active ingredients). They found that the placebo patients were experiencing many of the same side effects as the active vaccine group. A whopping 76% of the adverse effects people reported after their first vaccine dose were nocebo effects - essentially placebo's evil twin: instead of our ANS creating positive drug effects from expectation, it creates side effects and symptoms, also from expectation. So in this study, over three-quarters of the side effects people experienced - the fatigue, headaches, arm soreness, etc. - occurred in both groups. After the second dose, 52% of reported side effects were nocebo. The symptoms were real. People genuinely felt terrible. But most of what they experienced wasn't caused by the vaccine itself. It was caused by expecting to feel terrible.
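To see roughly where a number like 76% comes from, take illustrative rates (these are not the review's exact figures): if, say, 35% of placebo recipients and 46% of vaccine recipients report a systemic side effect, then

$$\text{nocebo share} \approx \frac{\text{placebo-arm rate}}{\text{vaccine-arm rate}} = \frac{0.35}{0.46} \approx 76\%$$

In other words, about three-quarters of what the vaccine arm reported would have been reported on expectation alone.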
Remember from Chapter 2 what we learned about how expectation works: it creates measurable physiological changes through ANS-mediated systems. Real, documented biological changes driven by what the brain expects to happen, like the dopamine production we saw in Parkinson’s patients. That mechanism doesn't only work in the positive direction. Expectation produces positive effects by shifting ANS function, and negative expectation can produce symptoms and side effects through the exact same pathway.
So outside of these trials, millions of people received COVID vaccines, and many of those millions experienced side effects like exhaustion, headaches, and body aches. And most of those symptoms - three-quarters - weren't produced by the vaccine's biological mechanism; they were produced by the information people had received about what to expect. News articles told them to plan sick days, and they heard about side effects from friends and family and social media. By the time people rolled up their sleeves, they knew exactly what symptoms they were supposed to develop. And their bodies obliged.
This isn't a new discovery. In 1961, a physician named Walter Kennedy coined the term "nocebo" - from the Latin "I shall harm" - to describe the negative counterpart of placebo effects. If positive expectations could produce healing, he reasoned, then negative expectations should be able to produce harm. Kennedy emphasized that nocebo referred to effects "inherent in the patient rather than in the remedy" - that is, the harm wasn't coming from the treatment itself, but from the patient's expectations about the treatment. He documented cases where inert substances produced side effects simply because patients expected them. He warned that this phenomenon was likely contributing to adverse effects across medicine, and that conflating these expectation-driven symptoms with true pharmacological side effects could lead doctors to discard useful treatments.
Then, as with so much research that doesn't fit medicine's framework, the finding was largely ignored. According to databases tracking placebo and nocebo research, only a few articles on nocebo were published in the twenty-five years after Kennedy's paper. For decades, the phenomenon he'd identified, that negative expectations could produce real, measurable harm, languished in obscurity.
Which brings us back to the question we raised earlier: what happens when we give someone a diagnosis? Is it really a neutral act of observation - simply naming what's already there, documenting reality - or is the act of diagnosis itself an intervention that changes what happens next?
When we hand someone a diagnosis - whether ‘psychiatric’ like depression or ‘medical’ like Parkinson’s, we're not just describing their current symptoms. We're teaching them what to expect. The patient learns: these are the symptoms I should notice, this is how my disorder will present, this is what my future will look like. And just like the COVID vaccine recipients who knew to expect fatigue and headaches, the brain has now been given a script.
And what’s especially problematic about this effect, given the framework we have just uncovered, is that when it comes to chronic diseases, medicine’s understanding of what they are diagnosing is limited at best. Take Alzheimer's disease for example. To expand a bit on what we touched on in chapter 5, let’s look at how they’ve been studying this disease. For forty years, medicine insisted it was caused by amyloid plaques in the brain. They thought these plaques were destroying neurons, so their solution was to clear the plaques and cure the disease. Tens of billions of dollars were spent chasing this hypothesis, covering hundreds of trials, and decades of research - all focused on removing amyloid. And what came of all this money and research? Every. Single. Drug. Failed. Not one produced noticeable improvement in the symptoms that mattered to patients. And when they eventually looked, they found brains full of plaques in the autopsies of people without dementia or Alzheimer's, and brains with minimal plaques in those with severe Alzheimer's. Also telling is that Alzheimer's patients have lucid periods where they suddenly recognize loved ones, access old memories, and become themselves again, if only for a short while. If the brain tissue was destroyed by plaques, or the synapses were dead, these moments would be impossible. The entire neural network has to be connected and firing off all at once for these lucid periods to happen the way that they do. There is just no logical way that the highways could be destroyed at 12pm, functional at 2pm, and back to being destroyed just a few hours later. And yet, reducing amyloid plaques is still the prevailing treatment approach, and medicine is still spending the majority of its money chasing this idea.
So let’s talk about Parkinson’s again for a minute. As we talked about in chapter 5, medicine says it's caused by the death of dopamine-producing neurons in the substantia nigra. And because dead neurons can't come back, they consider the disease progressive, and irreversible. Yet as we know, Parkinson's patients show dramatic improvements with placebos - producing real dopamine, improving real motor function. One study showed half of the people receiving placebo had reductions in their tremor of at least 70%, and another trial performing fake surgery showed significant and sustained improvements. And it's not just placebos either. When patients practice relaxation techniques, their tremors can completely disappear - 15 out of 20 patients in one study showed no tremors at all for minutes after guided imagery, with some effects lasting hours. After mindfulness training, motor scores improve significantly. Dead cells don't suddenly start working because you meditated. They don't stop shaking because you relaxed. Which means that for both Alzheimer's and Parkinson's - the two most feared 'neurodegenerative' diseases - medicine has the basic facts wrong.
So given that, let's go back to Michael J. Fox for just a minute. Imagine if, instead of being told this is it forever, your life as you know it is over, you'll be dependent on the medical institution to manage your symptoms insofar as we are able from now on, and expect precipitous decline - his doctor had said something to him like this:
This shaking you are experiencing tells us that your body has been stuck in pathostasis for a while. These chemicals in your system are meant to be a short term survival strategy, and when we stay in them for a long time, it does a few things. One, these chemicals actually damage the parts of the brain responsible for turning off this stress response, making it easier and easier to stay stuck. Two, these chemicals are responsible for the cascade that happens upstream of all diseases, so staying there is not good for your body. Three, this is something we can manage. We caught it early, you’re really young, we know how to help.
You can finish working on the projects you are in the middle of if that’s important to you, all the choices you make will be yours to make, but we strongly recommend taking a break or at minimum hugely reducing your workload. At least for six months to a year so we can try this new protocol to get this under control, and see how your body is responding. You are in a critical window where we can give you these drugs to help your shaking, and they will work for a while, but eventually they will stop working as well, you’ll need more, or different drugs, or they can stop working entirely. The rehabilitation work you need to do to reverse this is easier now before you develop associations in your brain with these symptoms and before the cascade from these chemicals progresses too far. Here is a referral to a rehabilitation program that can teach you how to turn your prefrontal cortical control over the off switch back on, and help you recondition any associations in your brain that have already been made.
If he had been told this instead, he would have had hope, agency, and a way out. Instead he got helplessness, hopelessness, and a life sentence.
And this is not because medicine is evil or incompetent - they're doing the best they can with what they understand. But understanding matters. Getting the mechanism wrong doesn't just mean failed treatments, it means stolen futures. Fox has spent over thirty years managing what he was told was an irreversible death sentence, when the neurons medicine declared dead were actually just dormant. That's the cost of misdiagnosis.
This is why we need to do better. Medicine tells patients to "reduce stress" but when stress means everything from finishing your algebra homework to receiving a terminal diagnosis to our bodies being stuck in pathostasis, that advice is meaningless. We need precision. We need to understand what pathostasis actually is, how it works, and most importantly - how to reverse it.
Luckily, when you pull together all the relevant pieces, the science on how to work with this is pretty solid. You just have to be willing to step back and look at the full picture.
Chapter Nine
Back in the time of Aristotle, people thought the heart was the seat of intelligence. They figured that the brain was there to function as a cooling mechanism for the heart, and that this cooling ability was what separated us from cold-blooded animals; humans were more rational than animals because we had larger brains to cool our hot-bloodedness. In ancient Egypt, when they prepared a body for mummification, the brain was regularly removed, extracted with an iron hook and discarded, because since the heart was assumed to be the seat of intelligence, the vital organ of humanity, discarding the brain was no different than discarding the stomach or the liver.
The heart theory held for a very long time. As early as the 5th century BC, a Greek physician named Alcmaeon of Croton was dissecting an animal and noticed that the animal’s eyes were connected to the brain, not the heart. He started looking at the other senses, hearing, smell, and taste, and realized they too were all located on the head, with passages that seemed to lead inward toward the brain. Even with this discovery, nearly two centuries passed before the physicians Herophilus and Erasistratus in Alexandria actually dissected human bodies and provided clear evidence for what Alcmaeon had asserted: that the brain, not the heart, was the center of intelligence. They were even able to map out some of the brain, distinguishing between the cerebrum and cerebellum, identifying the ventricles, and documenting the dura mater.
Around 170 AD, Galen, a Greek physician working in Rome, was observing what happened to people with brain injuries, and he noticed how damage to specific parts of the brain affected mental activity, movement, and sensation. He too concluded definitively that mental activity occurred in the brain rather than the heart. But even then people weren't convinced. They were too anchored to their previous beliefs, unable to let go of what they’d been taught for generations. It wasn't until the 1600s, another 1,400 years later, that this truth was finally accepted by medicine.
Once medicine finally accepted that mental activity occurred in the brain, researchers could start asking how it actually worked. By the 1900s, scientists were studying how signals traveled through the body, how injuries affected function, and whether damaged nerves could heal. In 1913, Santiago Ramón y Cajal was studying nerves' ability to regenerate, and he found that some nerves could regenerate and some could not. Your peripheral nervous system, the network of nerves that runs from your brain out to the rest of your body, to your arms, legs, and heart, can regenerate. When you sever a nerve in your arm, for example, it can grow back and restore its function, at least somewhat. But in your central nervous system, which is your brain and your spinal cord, he found that damaged neurons, the cells that make up those systems, aren't able to physically regenerate. The connections that were severed stayed severed. Cajal could see this clearly under his microscope: axons physically regrowing in specimen after specimen in peripheral nerves, but never in the brain or spinal cord. Yet he also understood, and even publicly stated, that the brain must be able to reorganize somehow; otherwise we wouldn't be able to change our minds about anything. That reorganization just wasn't happening through physical neuronal regrowth the way it does in our arms and legs.
Cajal didn’t actually like what he’d proven. He didn’t like the idea that our brains couldn’t regenerate into adulthood the way the rest of our nervous system could, so he issued a challenge of sorts: he said "In adult centers the nerve paths are something fixed, ended, immutable. Everything may die, nothing may be regenerated. It is for the science of the future to change, if possible, this harsh decree." Meaning basically he had found that in adult brains, the nerves are fixed, and can die but not be regenerated, and that it was the challenge of future scientists to try to prove him wrong if they could. He was literally challenging the future of medicine to figure out how to overturn his observations, hoping humanity's fate wasn’t that of a fixed brain.
He’d done other research in his career as well that spoke to the parts of the brain that are changeable. In the 1890s Cajal had introduced a concept he called "neuronal plasticity." He'd documented cases where damaged brains formed new circuits to work around injury. As an example, he'd seen how long axon cells could convert into short axon cells with new collateral branches after trauma, creating alternate pathways that could restore function even when the original connections were destroyed. And beyond these observable changes he saw happening in the brain, a completely fixed, rigid brain made no logical sense to him. How could learning happen? How could we adapt to new situations? How could therapy or practice improve anything? The brain, he argued, had to have the capacity to reorganize its existing connections even if it couldn't grow entirely new neurons. He'd seen it happen, and he’d documented it. The structure might be fixed, but the function was dynamic.
But medicine didn't hear that part. Or maybe it heard it and chose to ignore it. Because what stuck, what got taught in medical schools for the next three generations, was the first part: the brain is fixed, ended, immutable. Everything may die, nothing may be regenerated. That became the operating assumption of neurology, the foundational principle that shaped how doctors thought about brain injury, stroke, developmental disorders, mental illness, and aging. If the brain couldn't change, then damage was permanent. If someone had a stroke, whatever function they lost in the first few months was gone forever. If a child's brain developed abnormally, there was a narrow window for intervention and after that, nothing could be done. If someone's mental faculties declined with age, that was simply the inevitable march toward death. The textbooks said so. Cajal had said so. And Cajal was the father of modern neuroscience, the man who'd won the Nobel Prize for describing the structure of neurons. His authority was absolute.
The irony is that Cajal was right about both things. We now know that you can't grow new neurons in most of the adult brain. Adult neurogenesis does happen, but only in specific regions like the hippocampus and the olfactory bulb. For the vast majority of your brain, the neurons you have are the neurons you'll die with. Cajal was also right that damaged neurons in the central nervous system don't regenerate their axons the way peripheral nerves do. Cut the connection and it stays cut. But he was equally right about the other part, the part that got discarded. The brain reorganizes its existing connections constantly, throughout our entire lives. Every time you learn something new, every time you practice a skill, every time you change your mind about something, your brain is physically rewiring itself. Not by growing new neurons, but by strengthening some synaptic connections, weakening others, and forming new pathways. The structure Cajal described so beautifully was always in motion, always adapting, always capable of significant change.
It took until the 1980s for someone to actually prove what Cajal had observed about plasticity. Michael Merzenich, a neuroscientist at UC San Francisco, started doing experiments that would finally prove the brain could reorganize itself, though that wasn’t what he set out to do. The prevailing assumption at the time was that brain maps, the patches of cortex devoted to processing each part of the body, were fixed. You were born with them, they developed in childhood, and then they stayed put for the rest of your life. That's what localizationism said: specific areas of the brain handle specific functions, those areas are the same in everybody, and they never change. So Merzenich and his colleagues set out only to create more detailed maps, documenting with greater precision which exact parts of the brain corresponded to which parts of the body, to establish the standard brain organization everyone assumed was fixed and universal. But as he did this work, he started noticing that when he mapped the same monkey twice, weeks or months apart, the maps were changing. The boundaries had shifted. The territory devoted to one finger might be slightly larger or smaller. It was subtle, but it was there. And the maps varied from one monkey to the next too - they weren't identical copies of a universal template.
Based on what he was seeing, Merzenich hypothesized that the brain might actually be changeable, and that these maps weren't fixed blueprints but living, dynamic territories that shifted based on use. To test this, he designed a series of experiments.
First, he mapped the hand area in the brain of an adult monkey, identifying exactly which neurons responded when each finger was touched. Then he amputated the monkey's middle finger. According to everything medicine believed at the time, the part of the brain that had processed sensation from that finger should go dark, essentially unused for the rest of the animal's life. But when Merzenich remapped the same area months later, he found that the brain map for the middle finger was gone, but the areas representing the two adjacent fingers had expanded, taking over the territory that had been abandoned. Touch the index finger or ring finger, and the neurons that used to respond only to the middle finger now fired for these adjacent fingers. The brain had reorganized itself.
This wasn't supposed to be possible. And since the reorganization was only a millimeter or two, skeptics argued this could be explained by existing but dormant connections between adjacent areas. Maybe there were always some overlapping nerve fibers at the borders, and when one area lost its input, the neighboring areas just activated those pre-existing connections. It was interesting, sure, but it didn't prove the brain could really change in any meaningful way.
So Merzenich went further. In another experiment, he mapped a normal monkey's hand, then he sewed two of the monkey's fingers together so that both fingers moved as one. After several months of the monkey using its sewn fingers, Merzenich remapped the brain. The two separate brain maps for those originally independent fingers had merged into a single map. Touch any point on either finger, and the entire combined map would light up. Because all the movements and sensations in those fingers now occurred simultaneously, the brain had wired them together as a single unit. Neurons that fired together in time wired together to make one map.
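That wiring rule, neurons that fire together wire together, is simple enough to sketch in code. Here is a toy Hebbian simulation in Python (invented numbers, purely illustrative, not Merzenich's actual model): two units start out tuned to two different fingers, and once the sewn fingers force their inputs to always arrive together, both units end up responding equally to both fingers, one merged map.

```python
import numpy as np

# Toy Hebbian sketch (illustrative only). Unit A starts tuned to finger 1,
# unit B to finger 2. After the fingers are sewn together their inputs
# always co-occur, and the Hebbian update wires both units to both fingers.

W = np.array([[1.0, 0.1],   # unit A's weights to (finger 1, finger 2)
              [0.1, 1.0]])  # unit B's weights to (finger 1, finger 2)
lr = 0.01                   # learning rate

for _ in range(1000):
    x = np.array([1.0, 1.0])                       # sewn fingers: inputs co-occur
    y = W @ x                                      # each unit's response
    W += lr * np.outer(y, x)                       # Hebb: fire together, wire together
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded

print(np.round(W, 2))  # both rows converge toward [0.71, 0.71]: one merged map
```

Feed the same rule inputs that arrive separately, one finger at a time, and the two maps stay distinct. The merging depends entirely on the correlation in the input, which is exactly what the sewn fingers created.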
He kept pushing. When he stimulated all five fingers simultaneously, 500 times a day for a month, the brain eventually mapped them as one "finger"—a single unified territory instead of five separate ones. When he surgically moved a patch of skin with nerve endings from one finger to another, stimulation at the new location eventually caused the corresponding brain area to reorganize and respond to the transplanted tissue.
Then Bill Jenkins joined Merzenich's team at UCSF to study how the brain changed when an animal actually acquired a new skill rather than just losing function. They taught a monkey to touch a spinning disk with one finger, training it over and over until it became proficient at the task. Before training, they mapped the sensory cortex for that finger. After training, they mapped it again, and found that the brain territory devoted to that specific fingertip had expanded significantly. Learning the skill had caused the brain to devote more processing power to the finger doing the work. And it wasn't just that the map got bigger—the individual neurons within that expanded territory became more efficient, more finely tuned to the specific task the monkey was practicing.
This represented a complete reversal of everything medicine had believed for eighty years. The adult brain wasn't fixed. It reorganized based on what you did with it, what you practiced, what you paid attention to. Use a finger more, and its brain territory grows. Stop using a finger, and its territory gets invaded by neighbors. Make two fingers move together, and their brain maps merge. Practice a skill, and the neurons processing that skill become more numerous and more efficient.
Then in 1991, a researcher named Timothy Pons worked with a group of macaque monkeys that had been used in an earlier, unrelated experiment about a decade prior, in which the sensory nerves from one arm had been cut where they entered the spinal cord. These monkeys had lived for over a decade with no sensory input from that arm. Pons planted electrodes in their brains to see what had happened to the portion of the brain map that should have been processing signals from the now-disconnected limb. He expected to find maybe a couple millimeters of encroachment from the adjacent areas, consistent with what Merzenich had seen. Instead, he found that the face region had completely invaded the neighboring cortex—about half an inch of brain tissue had switched functions. The entire hand and arm zone now responded when the monkey's face was touched. This was neural reorganization on a massive scale, impossible to explain by dormant connections or pre-existing pathways. The brains of these monkeys had fundamentally rewired themselves.
Around the same time, the technology finally caught up to allow us to see this happening in living human brains. Functional MRI made it possible to watch which areas of the brain lit up during different tasks, to track how those patterns changed over time, to observe the physical reorganization that Cajal had predicted but couldn't directly measure. Studies on stroke patients showed that the brain could reorganize within weeks of intensive therapy. Research on London taxi drivers revealed that the part of their brain devoted to spatial navigation, the hippocampus, was enlarged compared to bus drivers, and the longer someone had been a taxi driver, the bigger the change. When people learned to juggle, their visual-motor areas reorganized. When musicians practiced, the finger areas of their motor cortex expanded.
By the 1990s, the evidence was overwhelming. The brain wasn't fixed. It had never been fixed. Cajal had been right in the 1890s when he wrote about neuronal plasticity. But it had taken a hundred years, multiple generations of scientists, and the development of entirely new technologies to prove what he'd already observed: the brain reorganizes its existing connections constantly, throughout our entire lives, based on what we do with it.
Medicine now knew that intensive practice causes physical changes in brain structure, and that recovery from injury is possible through targeted behavioral interventions. The National Institutes of Health (NIH) sponsored workshops, leading scientists gathered to discuss clinical applications, and papers flooded the journals documenting the therapeutic potential.
And then... not much happened.
It's not that nothing at all changed. Some things did. Stroke rehabilitation shifted toward more intensive task-specific training. Constraint-induced movement therapy emerged, forcing stroke patients to use their affected limbs by restraining the good one, capitalizing on the brain's ability to rewire. Some physical therapy protocols incorporated principles of neuroplasticity. Research programs investigated how to enhance plasticity with various interventions: brain stimulation techniques like transcranial magnetic stimulation, vagus nerve stimulation, peripheral nerve stimulation. Virtual reality systems were developed to create immersive rehabilitation environments. Robotic therapy and exoskeletons were designed to facilitate movement and guide practice. Neurofeedback and brain-computer interfaces offered new ways to work with the brain directly.
The field of neurorehabilitation experienced what researchers called a "paradigm shift," moving from a focus on compensation, which is teaching people to work around deficits, to a focus on recovery through neuroplasticity. On paper, this looked revolutionary, but in practice, most of these advanced technological interventions remained largely confined to specialized research centers and expensive private clinics.
And the basic interventions that DID reach mainstream medicine? They showed up almost exclusively in rehabilitation for acute neurological injury: stroke, traumatic brain injury, spinal cord injury, Parkinson's disease. Even there, the interventions are often diluted versions of what the research suggests would actually work. Insurance typically covers a limited number of physical therapy sessions per year after a stroke, when the research suggests daily practice for months or years would produce far better outcomes. The therapies are delivered in clinical settings for an hour a few times a week, when the principles of neuroplastic change suggest they should be practiced every day in real-world contexts.
So why hasn't neuroplasticity-based treatment expanded beyond this narrow window? There's no money in it. Pharmaceutical companies fund research on things they can patent. Drugs and devices. Things that lead to recurring revenue for them. Neuroplasticity-based interventions can't be patented - you're just teaching people to use their own brains differently. So the research doesn't get funded, which means the large-scale studies don't get done.
Physical therapy would seem to be an exception to the idea that medicine only funds and thus integrates drug or device or procedure innovation, but it turns out that it’s the exception that proves the rule - it actually emerged from wartime necessity before the current pharmaceutical-dominated research model took hold, and even now, most 'rehabilitation research' funding goes to prosthetics and assistive technology, things that can actually be sold. The behavioral interventions get almost nothing, and have remained largely unchanged because of it.
The implications of neuroplasticity are far broader than the narrow window medicine is using it for. If the brain reorganizes based on what you do with it, based on what you practice and pay attention to, then neuroplasticity is relevant to virtually every condition involving brain function. Which is…well, everything. Parkinson’s, depression, anxiety, chronic pain, diabetes, PTSD…literally any condition involving patterns of brain activity that have been wired in through repeated activation.
But medicine hasn't gone there. Or more precisely, it hasn't acknowledged that's what it's doing even when it is. Allergy desensitization? That's neuroplasticity: retraining the immune response through repeated exposure. But that’s just one thing, and it’s not the first line treatment for allergies either. The first line treatment is to just avoid the thing causing the reaction.
So here's where we are: the science of neuroplasticity is robust and well-established. The principles are clear. The potential applications are vast. And the actual clinical use is limited to a narrow band of rehabilitation for acute neurological injury, implemented in diluted forms, inaccessible to most people. And it's completely disconnected from the broader implications of what the research actually shows—that the brain reorganizes constantly based on what you do with it, and that deliberately harnessing this could transform how we approach virtually any condition involving brain function.
So what about chronic diseases? There has been some research. Scientists have documented extensively that the immune system and nervous system learn patterns of disease just as readily as the brain learns to play piano. And once learned, these patterns can spread and entrench themselves in ways that look exactly like what we saw with Merzenich's monkeys.
In 1962, a researcher named Turnbull proposed that maybe asthma was actually a learned response. The idea came from observing that asthma patients would sometimes have attacks in the complete absence of any allergen. The most famous case was reported back in 1886, when a woman had a full asthmatic attack after seeing an artificial rose. Not a real rose that could trigger an immune response, but a fake one made of paper and cloth. Her body had learned the pattern so well that the visual cue alone was enough to trigger the entire physiological cascade.
Medicine dismissed this as psychological, something separate from "real" asthma with its documented immune responses and airway inflammation. But researchers kept finding the same pattern. They could condition guinea pigs to have asthma attacks in response to completely neutral stimuli, pairing the stimulus with an allergen until eventually the neutral stimulus alone would trigger the whole cascade: bronchoconstriction, inflammation, and so on. The body was learning associations between arbitrary cues and immune responses, and wiring them together through repetition until they became automatic. And once wired, these learned responses proved remarkably persistent, showing up even when the animals were stressed or in new contexts.
Medicine has documented that when someone has repeated asthma attacks, the sensory nerves in their airways become more sensitive, firing at lower thresholds. The brainstem neurons controlling breathing become more reactive. The parasympathetic nerves that trigger airway constriction become hyperresponsive. Chronic inflammation increases the density of nerves in the airways. The more attacks someone has, the more efficiently their nervous system learns to produce them. It's Merzenich's principle playing out in the respiratory system: use it and it grows stronger. Practice makes you better at what you practice, even when what you're practicing is having asthma attacks. This has been documented across body systems. The first seizure someone gets lowers the threshold for the next one. Each seizure makes future seizures easier to trigger. This is so well documented neurologists have a name for it: kindling. After a first heart attack, the risk of a second is dramatically higher. After a first stroke, subsequent strokes become more likely. Once you’ve had one kidney stone your chances of another go up dramatically. Once the body learns a pattern, it gets easier and easier to repeat it the more it gets practiced.
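The arithmetic of kindling is easy to sketch. Here is a toy simulation in Python (all numbers invented, purely illustrative): every time an ordinary stimulus exceeds the trigger threshold, the threshold drops a little, so the same everyday fluctuations trigger events more and more often.

```python
import random

# Toy kindling sketch (illustrative only): each triggered event lowers
# the threshold for the next one, so events become easier to trigger.

random.seed(1)
threshold = 0.9        # how strong a stimulus must be to trigger an event
sensitization = 0.05   # how much each event lowers the threshold
floor = 0.2            # the threshold can't drop below this
events = 0

for day in range(365):
    stimulus = random.random()   # ordinary daily fluctuation between 0 and 1
    if stimulus > threshold:
        events += 1
        threshold = max(floor, threshold - sensitization)  # system sensitizes

print(f"events in a year: {events}, final threshold: {threshold:.2f}")
```

At the start, only about one day in ten crosses the 0.9 threshold; by the time the threshold has bottomed out, roughly four days in five do. Nothing about the stimuli changed. Only the practiced system did.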
These practiced patterns also generalize, spreading to related triggers with similar pathways. In asthma, for example, kids who start out reactive to one allergen, say cat dander, often become reactive over time to more and more triggers. First cats, then dogs. Then dust mites. Then pollen. Then cold air. Then exercise. Medicine tracks this with molecular precision, measuring exactly which proteins a child reacts to, the list growing longer year by year. Children sensitized to four or more allergens by age two have more than four times the risk of having asthma by age ten compared to kids sensitized to one allergen or none. The pattern spreads through repeated activation of the same underlying inflammatory cascade; each new trigger strengthens and expands the pathway and gives the system more opportunities to practice.
And like we discussed in the last chapter, once you get diagnosed, you're given a list of triggers to watch out for, which helps this process along. You start to pay more attention to those triggers, and your brain sees that attention and vigilance and adds those things to the list. Just like we saw in the placebo and nocebo chapters: we tell our brains what to expect, and they oblige. And with a preexisting condition, our bodies already have a well-practiced reaction ready to enact those expectations. Every time you're around cats and you expect a reaction, you're more likely to have one. Every time you exercise and worry about your breathing, you're practicing the association between exertion and bronchospasm. The diagnosis that was meant to help you manage the condition actually gives your nervous system a comprehensive training program on how to be more efficiently asthmatic.
And this spreading isn’t confined to just airway triggers. Over half of children with severe eczema go on to develop asthma, because both conditions involve barrier dysfunction in epithelial tissues, both involve immune hyperreactivity, both show the same pattern of sensitization spreading over time. Medicine calls this the "atopic march," the progression from eczema in infancy to food allergies in toddlerhood to asthma in childhood. And while they frame it as a genetic predisposition playing out developmentally, what they're actually documenting is the nervous system learning a pattern of immune overreaction and then applying that learned pattern to different barrier tissues. The broken skin barrier in eczema teaches the immune system to freak out about allergens, and then when those same immune responses show up in the lungs or the gut, the pattern is already established, already practiced, ready to be triggered by an expanding list of stimuli.
The same sensitization pattern shows up in IBS, where an initial infection or inflammatory event can sensitize the gut's nervous system, creating visceral hypersensitivity that persists long after the original trigger has resolved. The nerves fire at lower thresholds, foods that were previously fine become triggers, and patients end up hypervigilant about eating, scanning for symptoms, reinforcing the very patterns that keep them stuck.
What all of these conditions share is the mechanism we've been tracking through this chapter: the nervous system learning through repetition, associations forming between triggers and responses, patterns becoming automatic and then spreading to similar contexts, neuroplastic changes that start as temporary sensitivity and become permanent architecture. Medicine has documented every piece of this in isolation - the immune responses in asthma, the central sensitization in IBS, the atopic march from eczema to other allergic conditions, kindling in seizure patients - but it didn't connect them as examples of the same underlying process. It didn't recognize that what it was seeing was maladaptive neuroplasticity, the brain and nervous system learning patterns that made people sicker, using the exact same mechanisms that allow us to learn piano or a new language or any other skill.
Now, there are some people using this knowledge to try to change the course of disease, but they get very little attention. Behavioral neuroplasticity-based programs use interventions designed to change the very patterns we've been documenting, to stop the body from firing off responses to non-threatening stimuli. These programs don't use fancy equipment. They use awareness, attention, recontextualization, and repeated practice: exactly the principles Merzenich's monkey experiments demonstrated, repeated, attention-driven engagement with the pattern you're trying to change. The same processes that conditioned these responses in the first place, just reversed.
In 2025, a randomized controlled trial tested one of these programs on fibromyalgia patients. The results showed close to a 50% reduction in pain in the active group compared to 9% in the control group. Depression was nearly halved. Perceived health increased by 47% in the active group versus 16% in controls. This was, according to the researchers, the first randomized controlled trial ever published on a neuroplasticity-based program for chronic illness. The findings were described as groundbreaking. The researchers noted they'd want to do longer studies, six months to a year, to see the full effects, since they recommend patients continue the program for at least six months to achieve complete recovery. For comparison, this study showed effect sizes 4-9 times larger than those of the gold-standard pharmaceutical treatment, antidepressants, and maybe most importantly, the range given is because the program was helping multiple 'different' conditions at once, something almost no mainstream medical intervention does. This study hints at the possibilities that open up when you use interventions that actually address the upstream cause of disease.
But these interventions can't be patented. There's no device to sell, no pharmaceutical to market. The research is difficult to fund because no company has a financial interest in the outcome. One researcher noted explicitly that it's difficult to get funding for formal studies wherever pharmaceuticals are not involved. The studies that do get done are small, underfunded, and published in lower-tier journals. Which means the larger medical establishment can dismiss them as having insufficient evidence, not being rigorous enough, and needing far more research before widespread adoption. So the neuroplasticity research remains siloed, known to very select specialists, occasionally mentioned in academic papers, and rarely making it into standard clinical practice.
But the evidence keeps piling up anyway. In the 1980s Susan Nolen-Hoeksema, who would later become chair of psychology at Yale, began studying what she called "ruminative thinking." She defined rumination as repetitive, passive thinking about one's problems, their causes, and their consequences: basically, when your thoughts circle the same worries over and over without resolving anything. Through decades of longitudinal studies following thousands of participants, she and other researchers documented that people who ruminated more developed a whole range of physical health problems at higher rates than those who didn't. Rumination reliably predicted future illness even after controlling for current health status. Something about the thinking pattern itself seemed to be driving disease.
In 2006, Brosschot and his colleagues explained how and why rumination causes these outcomes. They realized that while a stressful event might last minutes or hours, rumination can keep the stress response activated for days, weeks, or years. Your body can't tell the difference between a threat that's currently happening and a threat you're just vividly imagining.
In 2014, Peggy Zoccola's team at Ohio University brought healthy young women into the lab and had them give a stressful speech. Then they randomly assigned half the women to ruminate about how the speech went, while the other half were distracted with neutral thoughts about sailing ships and grocery stores. The researchers drew blood samples and tracked inflammation, and the women who ruminated showed sustained elevation in C-reactive protein, an inflammatory marker, that continued rising for at least an hour after the speech, while the markers of the women who were distracted returned to baseline.
A major meta-analysis published in Psychological Bulletin in 2016 synthesized the research across dozens of studies. The findings consistently showed that rumination and worry were associated with elevated blood pressure, elevated heart rate, elevated cortisol, and reduced heart rate variability. The same physiological patterns that show up in chronic disease. Thinking about stress was, physiologically speaking, producing the same effects as experiencing it.
And these weren't just temporary blips that resolved when people stopped worrying. In 1997, Kubzansky and her colleagues published a landmark prospective study in which they followed 1,759 men without heart disease for twenty years. They measured the men's worry levels at the start, and after controlling for cholesterol, blood pressure, smoking, and diabetes, all the standard cardiac risk factors, worry independently predicted who would have heart attacks. Men in the highest worry category had more than double the risk compared to those in the lowest category, and there was a clear dose-response relationship: the more they worried, the more heart attacks they had over the two-decade period.
In 2002, researchers at the National Centre for Biological Sciences published a foundational study in the Journal of Neuroscience demonstrating how chronic stress reshapes neural architecture. Vyas and colleagues subjected rats to chronic immobilization stress and then examined their brains at the cellular level. They found that in the hippocampus—the region that helps regulate memory, emotion, and the stress response itself—chronic stress caused dendritic atrophy. Meaning the neurons shrank and their branches retracted, and the structures they use to communicate with other neurons withered. These changes were visible under a microscope, and they correlated with impaired function. One of the functions of the hippocampus is to help stop the stress response, putting on the brakes and providing negative feedback to calm things down after a threat has passed. If you shrink the hippocampus, you weaken those brakes.
In the amygdala—the brain's threat detection center—the opposite happened. Neurons sprouted new dendrites, grew new branches, and formed more connections. The amygdala got stronger, more elaborate, and more connected to everything else. So the region responsible for triggering the alarm expanded its capacity while the region responsible for calming things down contracted.
The prefrontal cortex, which is the part of the brain responsible for executive control, and for consciously overriding automatic responses, shows similar changes. This is the part of the brain that can say "I know this feels dangerous but it's actually fine." Chronic stress causes dendritic retraction here too, weakening the very circuits that allow us to regulate our emotional responses. More recent research has revealed how this happens: microglia, the brain's immune cells, become overactive under chronic stress and start pruning synapses in the prefrontal cortex. A 2023 study showed that stress elevates complement C3, which tags synapses for elimination, and microglia then engulf them. The brain is literally dismantling its own regulatory capacity.
These changes persist even after the stressor is gone. The brain has been architecturally reorganized to maintain a state of vigilance: stronger threat detection, weaker threat regulation, reduced capacity to override the alarm once it's triggered. And this reorganized brain is now more likely to ruminate, which creates a feedback loop. In 2011, J. Paul Hamilton's team at Stanford used functional MRI to examine the brains of people who ruminated frequently, and they could actually see the neuroplastic maps that were being built through these thought patterns. Just as Merzenich saw in the monkeys. The pathways for self-focused worry had been practiced and strengthened, making rumination the path of least resistance.
So let's bring it all together. Rumination maintains pathostatic chemistry even in the absence of actual stressors; your thoughts alone keep the threat response firing. Those chemicals physically remodel the brain: the hippocampus shrinks, the prefrontal cortex loses synapses, the amygdala grows and becomes hyperconnected. And this remodeled brain is now wired to ruminate more easily and regulate less effectively, which maintains pathostasis, which drives more remodeling. Each piece makes the feedback loop harder and harder to interrupt. Researchers have mapped each of these connections across thousands of papers.
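To see how such a loop compounds, here is a toy dynamical sketch in Python (the variables and rates are invented for illustration; nothing here is fitted to any study): rumination raises the stress load, chronic load erodes the regulatory brakes, and weaker brakes make rumination more likely.

```python
# Toy feedback-loop sketch (illustrative only, invented rates).

rumination = 0.3   # tendency to ruminate, scaled 0..1
regulation = 0.8   # prefrontal/hippocampal braking capacity, scaled 0..1

for week in range(52):
    chemistry = rumination * (1 - regulation)              # stress load this week
    regulation = max(0.1, regulation - 0.05 * chemistry)   # chronic load erodes the brakes
    rumination = min(1.0, rumination + 0.10 * chemistry)   # weaker brakes, more rumination

print(f"after a year: rumination={rumination:.2f}, regulation={regulation:.2f}")
```

The loop starts slowly and then accelerates: as regulation falls, each unit of rumination produces more chemistry, which erodes regulation faster. That nonlinearity is why the pattern is so much easier to interrupt early than late.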
⋯
So what we’re seeing is that our brains can be remodeled into illness. Which doesn’t make a lot of evolutionary sense. Animals in the wild don’t get sick the way humans do; why would humans evolve this perfect setup for disease? Let’s look at that next.
Chapter Ten
Even single-celled organisms have threat detection built in. If you put a single bacterium in a dish with poison on one side and food on the other, it will move toward the food and away from the poison. Which makes sense when you realize that aside from reproduction, the ability to avoid threat is the most important survival function any living being has. It is so vital that something very close to our human threat detection system exists in the lamprey, a jawless fish that's been around for 500 million years. The pressure to get it right was so intense that evolution perfected it almost immediately, and it has remained largely unchanged since. To put 500 million years into perspective, the first dinosaurs appeared 230 million years ago and went extinct 66 million years ago. This fish predates them by almost 300 million years.
The lamprey's brain is so primitive it barely looks like a brain at all; it looks more like a swollen nerve cord. But when researchers examined this ancient blob of neural tissue, seemingly so different from modern animals, they found the same threat detection architecture that we see in modern humans today. An amygdala-like structure for detecting danger, a periaqueductal gray (PAG) for coordinating response, and a hypothalamus for triggering stress chemistry. This basic setup appears in every vertebrate, and has for half a billion years. And what we see in our world today are the winners of this long evolutionary march toward the present. Anyone whose threat detection system wasn't quite up to snuff? They're not our ancestors, or anyone's ancestors. Their genes got edited out of the evolutionary line.
Half a billion years is a long time not to get an upgrade. Evolution tinkers constantly - unless something is working so well there's nothing to improve. So let's look at how this system actually works, and why evolution never found a reason to change it.
This brain system's whole job is to keep us alive by evading threats, and it does so in a one-two-three punch that happens in the blink of an eye. Which makes evolutionary sense, because the animals who could respond the fastest were the ones that got to live. It really is as simple as that. Part one of this sequence happens in the amygdala, whose job it is to process your surroundings and decide what’s a threat. It has two main ways of doing this: the high road, which involves conscious thought and which we will get into more later, and the low road, which is more relevant for this evolutionary snapshot we’re painting right now. The way the low road works is that sensory information, what you see, hear, smell, and so on, gets sent directly to the amygdala before your conscious mind even knows it's there. This takes about 12 milliseconds. The amygdala gets a rough, blurry sketch of what's happening - something big moving fast, a loud sudden noise, a shape that pattern-matches to "predator" - and it fires. You jump before you know what scared you. You flinch before you've decided to flinch. The whole point is speed over accuracy, because for 500 million years, the cost of reacting to a false alarm was low, and the cost of reacting too slowly was death.
And because overreacting is evolutionarily better than underreacting, the amygdala is primed to treat anything that resembles a threat as a threat. Remember the woman from the last chapter who had a full asthma attack upon seeing a paper rose? Her amygdala had learned "rose = danger" so thoroughly that the visual pattern alone triggered the entire cascade. No conscious thought required, no time to say "wait, that's paper." By the time the high road could have evaluated the situation, her body was already reacting. This is what’s happening with ever-expanding lists of food sensitivities or allergies. And this is the system at play in many of the diseases we know about.
Part two of the threat response is the job of the periaqueductal gray, otherwise known as the PAG, which coordinates the body’s actual response. The PAG has two primary modes of response, which you can think of almost like a toggle switch. The brain quickly asks the simple question: will action improve this situation? If the answer is yes, if there's somewhere to run, something to fight, some action that might work, the dorsolateral part of the PAG activates, which we’ll call the mobilizing PAG. Your heart rate spikes, your energy stores are mobilized, and your muscles prepare for action. This is fight or flight, the one everyone's heard of.
But if the answer is no, like if you're trapped, or if the threat is too big, or if there's just nothing you can do, a different part activates. The ventrolateral PAG triggers freeze, shutdown, and/or collapse, which we’ll call the immobilizing PAG. Your heart rate drops, your body starts trying to conserve energy, and in extreme cases, you can dissociate, or go numb. The classic example is prey animals playing dead when caught by a predator - that's the one most people have heard of. But the freeze response isn't just for "a predator has you in its jaws." Sometimes the answer is "not right now, but maybe later." An animal that detects a predator but hasn't been seen yet holds perfectly still. The bet is: stay frozen, and maybe when the threat passes, I can escape. Sometimes the answer is "no, and fighting would make it worse." When a lower-ranking animal encounters a dominant one, it shows submission and withdrawal rather than fighting a battle it would lose. Better to back down than get torn apart. The system is conserving resources for situations where action might actually help.
Sometimes the answer is just "no." When an animal is sick or injured, it hunkers down and waits. Or when an animal is subjected to inescapable stress - stress where nothing it does changes the outcome - it eventually stops trying at all. Researchers documented this with dogs given inescapable shocks. The dogs would struggle at first, trying everything to escape. But after enough exposure to "nothing works," they'd just... stop. Even when escape became possible later, they wouldn't take it. The system had learned: action doesn't help here. Stop wasting energy on it. Even fish do this.
And people have been noticing these two opposite patterns and trying to explain them for decades. Attachment theory identified "anxious attachment" and "avoidant attachment," which map neatly onto people whose PAG tends to flip one way versus the other more often. Attachment theorists attributed the styles to how we were parented, but what they actually did to identify them was test young children in situations where the children felt there was something they could do to restore connection with their caregiver, and situations where that connection seemed impossible to restore. They were literally testing what happens in children when the two PAG responses get activated. Polyvagal theory talks about "dorsal vagal shutdown" and "sympathetic activation," which are, again, these two responses playing out. It’s not the vagus nerve causing those responses; the vagus nerve is ultimately just a biological wire, and no one is out there walking around with a floppy vagus nerve. What Porges was witnessing when he came up with his theory was this very clear PAG switch pattern. And while these frameworks got the underlying mechanisms wrong, it makes sense that people kept trying to explain what they were seeing. These patterns are obvious if you watch human behavior. Or really any animal's behavior. Because this toggle between "I can act" and "I can't act" is the oldest decision in the animal playbook.
Part three is the job of the hypothalamus, which you can think of as the pharmacy of the brain. Within seconds, your hypothalamus releases the entire biochemical cascade that shifts your body from homeostasis to survival mode. Your heart rate changes, your blood flow redirects, digestion shuts down, glucose mobilizes. Everything about your body's chemistry reorganizes around one goal: survive this.
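Taken together, the three parts behave like a tiny pipeline, and sketching it as one makes the division of labor obvious. Here is a schematic in Python (a cartoon of the logic just described, not a neuroscience model; every name and value is illustrative):

```python
# Schematic of the one-two-three punch (illustrative only).

def amygdala_low_road(stimulus: dict) -> bool:
    # Part one - speed over accuracy: a crude pattern-match on a blurry
    # sketch of the world, long before conscious thought weighs in.
    return bool(stimulus.get("big_and_fast") or stimulus.get("resembles_predator"))

def pag(action_could_help: bool) -> str:
    # Part two - the toggle: mobilize if action might work, immobilize if not.
    return "mobilize" if action_could_help else "immobilize"

def hypothalamus(mode: str) -> list[str]:
    # Part three - the pharmacy: shift the body from homeostasis to survival mode.
    shared = ["redirect blood flow", "suspend digestion", "mobilize glucose"]
    if mode == "mobilize":
        return shared + ["spike heart rate", "prime muscles"]
    return shared + ["drop heart rate", "conserve energy"]

stimulus = {"big_and_fast": True}
if amygdala_low_road(stimulus):
    mode = pag(action_could_help=True)
    print(mode, "->", hypothalamus(mode))
```

The point of the sketch is the ordering: detection fires first and cheaply, the response mode is a single yes-or-no fork, and the chemistry simply serves whichever mode was chosen.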
And for 500 million years, this system worked perfectly. A threat would appear, the amygdala would detect it, the PAG would coordinate the response, and the hypothalamus would deploy the chemistry to fuel it. The animal would fight, flee, or freeze, and then the threat would pass. And once this happened, the system would reset. The chemistry flooding the body would start to clear, heart rate would return to baseline, digestion would turn back on, and the animal would go back to living normally. The stress response was designed to last minutes, maybe hours in extreme cases, and then it was over. And this is still how it works, in every animal on the planet. The ancient system activates when needed and deactivates when the threat resolves.
Except in one species.
Because around 100,000 years ago, a different switch got flipped. For hundreds of thousands of years, anatomically modern humans existed with brains very similar to ours. They made the same basic stone tools, generation after generation. They lived in small groups, and left little trace of symbolic thought.
Then suddenly, the archaeological record explodes with evidence of radical behavioral change. Cave paintings appear depicting animals, humans, and abstract symbols. People start wearing jewelry made from shells and bones, some transported hundreds of miles from their origin. Burial sites begin including grave goods like tools, ornaments, and food, suggesting belief in an afterlife or at least symbolic thinking about death. Musical instruments appear, complex multi-part tools, the creation of which required planning several steps ahead, trade networks spanning continents, and boats capable of reaching Australia across open ocean.
This wasn't evolution in the way Darwin understood it, with gradual changes over millions of years. Harari in Sapiens described this as an almost overnight phenomenon around 70,000 years ago. But the skull record seems to tell a slightly different story - brain size had already reached modern levels by 300,000 years ago, but brain shape continued evolving, with the frontal and parietal regions expanding into their modern globular form between about 100,000 and 35,000 years ago. But whether you call it practically overnight like Harari claims, or something that happened over the course of 65,000 years, both are lightning fast for something this advanced to come online.
And it makes sense evolutionarily why this trait got selected for so quickly. Recursive language alone - our ability to nest ideas within ideas indefinitely, to say not just "danger" but "the man who saw the lion that killed the hunter from the neighboring tribe is afraid to go near the watering hole where it happened" - was an immediate survival advantage. The populations with this capability rapidly outcompeted and replaced those without it.
But because this adaptation happened so fast, there wasn't time for evolution to integrate these new cognitive abilities with the ancient survival systems. A hundred thousand years is nothing in evolutionary time. These kinds of huge biological adaptations typically take a million or more years to develop and refine. We developed it in less than 10% of that time. And it’s only been another 30 thousand or so years since. This is a really new evolutionary ability that came online practically overnight, and we haven’t had time to adapt.
We have the ability to imagine infinite futures, to create complex social hierarchies, to contemplate our own mortality, to compare ourselves to hundreds of others, and we keep running all of it through threat detection systems that are still shockingly similar to those of the lamprey. And this mismatch is the core problem. We can now maintain threat activation through thought alone, as the rumination researchers showed, which is something no other animal can do.
Remember we said the amygdala has two pathways - the low road that reacts in 12 milliseconds, and the high road that involves conscious thought. The low road kept us alive for 500 million years. But the high road? That's what the cognitive revolution gave us. The ability to generate threats through thought itself. To remember past dangers, imagine future ones, replay social rejections, and worry about scenarios that haven't even happened.
The lamprey doesn't have a high road. It can't think about the predator that attacked it last week. It can't worry about whether there might be a predator around the next rock. It detects threat, responds, the threat passes, and its system resets. But humans? The high road can feed thoughts into the low road, and the low road reacts to those thoughts exactly as it would to a real predator. The amygdala fires, the PAG coordinates, the hypothalamus floods the system. We're triggering the ancient hardware with our thoughts, and it responds the same way it has for 500 million years.
And as we saw in the last chapter, when that threat activation can be kept on by thought alone, with pathostatic chemicals staying chronically activated, it physically remodels the brain. The hippocampus shrinks - the structure that would normally contextualize memories as 'past' rather than 'present danger.' The prefrontal cortex thins - the structure that could regulate the amygdala and say 'that's just a thought, not a real threat.' Meanwhile the amygdala grows larger and more hyperconnected, generating more threat-related thoughts. The very structures that could turn the system OFF are being degraded by the process they're supposed to regulate. The high road that should help us think our way out of danger instead keeps us trapped in it. Our modern thinking brains never got the chance to evolve to interface effectively with the animal threat detection system, and that mismatch gave us the mechanism to get mentally stuck. And, subsequently, sick.
This is the human condition. This feedback loop of our thoughts alone triggering our ancient threat system, and then pathostasis remodeling the brain to generate still more threatening thoughts, is why we suffer in ways that animals do not. A zebra runs from a lion, escapes, and goes back to grazing. It doesn't lie awake replaying the attack, doesn't develop a generalized fear of open plains, doesn't ruminate about whether it will happen again. The threat passes and the system resets. But humans? We can suffer for decades from things that happened once, from things that might never happen, from purely imagined scenarios our thinking brains generate and our ancient hardware can't distinguish from real danger.
We can see this in reverse when animals are exposed to us. Dogs co-evolved specifically to attune to human nervous systems, and researchers have found their long-term cortisol levels synchronize with their owners' cortisol levels. The owner's personality predicts the dog's stress hormones. Cats in the same households, less selected for human emotional attunement, show no such synchronization. And the disease rates bear this out: dogs get cancer at almost double the rate of cats. Zoo animals, trapped in inescapable confinement, whose immobilizing PAG gets stuck on, develop chronic diseases at rates far exceeding wild populations. Almost one in two captive wolves die of cancer while wild wolves almost never do. The threat system works fine when threats resolve. It's the chronicity that breaks it - and we're the first species with the ability to maintain chronic threat, either internally through thoughts, or externally through modern life circumstances.
And this suffering was always just accepted as part of our human existence, until Freud came along. In Vienna in the 1890s, a neurologist named Sigmund Freud was trying to understand why some of his patients couldn't move their limbs despite having no physical injury, or why past experiences seemed to control present behavior in ways people couldn't explain or stop.
He started noticing patterns: a woman whose father died when she was twelve would, twenty years later, still react to authority figures as if they might abandon her at any moment. A man who'd been humiliated in school would avoid situations as an adult where he might be judged, his body responding with a racing heart and shallow breathing as if the childhood classroom threat were happening now. Freud also noticed that talking about these experiences, bringing them into conscious awareness, sometimes helped. Not always, but often enough that he built an entire practice around it. He called these stuck patterns "repressions," believed they were buried in the "unconscious," and theorized that making them conscious through psychoanalysis would resolve the symptoms. What he was actually seeing was the threat circuit getting activated by cues in the present that matched patterns from the past, the amygdala firing because something in the current situation pattern-matched to a previous threat, triggering the whole cascade even though the original danger was long gone.
So Freud built an elaborate explanatory model for what he was observing: the Oedipus complex, penis envy, the id, ego, and superego battling for control, defense mechanisms, and psychosexual stages. In the absence of any established way to make sense of these observations, he invented his own explanations, metaphors dressed up as mechanisms. And because Freud was brilliant and persuasive, and because his observations about people getting stuck in past patterns were genuinely accurate, his model became the foundation of psychology for half a century.
By the 1950s, a generation of researchers had been trained in Freudian theory, and many of them were starting to notice it didn't actually work very well. Psychoanalysis required years of expensive treatment, produced inconsistent results, and couldn't be tested scientifically because every failure could be explained away as resistance or insufficient analysis. It had become unfalsifiable—if you got better, psychoanalysis worked; if you didn't get better, you weren't ready to get better yet.
So in reaction to this unfalsifiable and not very effective framework, behaviorism came along. B.F. Skinner, Joseph Wolpe, and others said: forget the unconscious, forget Freud's model, forget everything that can't be directly observed and measured. Behavior is what matters and symptoms are learned responses to stimuli. Change the stimulus-response pattern and you change the behavior. And they were able to prove it too. Wolpe could take someone with a snake phobia and, through systematic desensitization, gradually exposing them to snake-related stimuli while they remained calm, eliminate the phobia in weeks instead of years. Skinner showed that you could shape behavior through reinforcement schedules, and that consequences determined what behaviors persisted.
The behaviorists were right that Freud's elaborate model was wrong. They were right that you could change emotional responses through learning. They were right that observable behavior mattered more than untestable theories about unconscious drives. But in their eagerness to reject Freud's unfalsifiable explanations, they threw out the baby with the bathwater and started a movement that removed thoughts and feelings from the field entirely. Only what could be externally observed and measured, stimulus in, response out, counted as real.
What both sides were actually documenting was the same phenomenon from different angles. Freud had noticed that past experiences created patterns that activated automatically in the present. The behaviorists had discovered that you could change those automatic responses through repeated pairing of the trigger with a different outcome. But because they were fighting over whose explanation was correct rather than examining what they both observed, neither side recognized what they'd found: our ancient learning system that could be updated through experience, but only under specific conditions. Instead, psychology fractured into warring camps, each building elaborate theories to defend their approach, each certain the other side was fundamentally wrong about human nature.
By the 1960s, both approaches dominated different corners of psychology, with psychoanalysis mostly in universities and private practice, and behaviorism mostly in research labs and institutional settings. And this is when Aaron Beck, who had been trained as a psychoanalyst, was running experiments trying to validate Freud's theory that depression came from "anger turned inward." He was using dream analysis to look for evidence of this repressed hostility. But what he kept finding instead was that his depressed patients had consistent patterns of negative thoughts about themselves, their experiences, and their future. The same patient who seemed fine discussing neutral topics would, when talking about their lives, automatically interpret everything through a lens of failure, inadequacy, and hopelessness. "I got a B on that test" became "I'm stupid." "My friend didn't call back" became "Nobody likes me." "This therapy session went well" became "I'm just fooling you, eventually you'll see I'm hopeless too."
Beck started calling these "automatic negative thoughts" and documented how they preceded emotional shifts. A patient would be talking normally, then one of these thoughts would flash through their mind, often so quickly they barely noticed it, and suddenly their mood would drop, their posture would slump, and even their voice would change. It was as if the thought had triggered something physical. Which, of course, it had. What Beck was observing was the high road feeding the low road, thoughts activating the amygdala, triggering the threat cascade. But Beck didn't have that framework. What he concluded was that these distorted thoughts were causing the depression.
This became the foundation of Cognitive Behavioral Therapy (CBT). Beck identified common patterns of "cognitive distortions" like all-or-nothing thinking, overgeneralization, and catastrophizing. (If you look at this list, it's exactly what your brain should be doing when it's making quick, real-time threat assessments. That isn't the time for nuanced thinking, it's a time to make a quick decision so you can act. These are adaptive thought patterns when you're in threat mode.) He then developed techniques to help patients identify and challenge these distorted thoughts. So for example if you notice yourself thinking "I'm a complete failure," you examine the evidence. Is that actually true? What evidence contradicts it? What would you tell a friend who said this about themselves? The idea was: fix the distorted thinking, and the emotions will follow.
This seems like it should work based on what we just established, right? Our ability to think threatening thoughts activates our pathostatic chemistry, so thinking different thoughts should turn it off. We'll go into this more in the next chapter, but the problem is that these aren't conclusions someone reasoned their way into. They're conditioned associations that got wired into the threat circuit through repetition. Think about the woman who had an asthma attack from a paper rose as an example. Her brain had learned 'rose = danger' so thoroughly that the visual pattern alone triggered the cascade. The same thing happens with thoughts. When the threat circuit fires repeatedly at the same time you're thinking "nobody loves me" or "I'm a failure," those thoughts become part of the trigger pattern itself. Now the circuit triggers the thought, and the thought triggers the circuit. They're wired together, bidirectionally, reinforcing each other. You can't reason with a conditioned response. You can't talk yourself out of a threat loop that was learned through experience.
But medicine loves CBT, because it can be turned into standardized protocols that can be taught systematically and delivered consistently. It is short-term, typically delivered over 12-16 weeks, and compared to psychoanalysis, which took years, that was something insurance companies were more than willing to pay for. And its proponents figured out how to make it measurable in a way other therapies had lacked, which meant it produced research papers. Lots of them. Randomized controlled trials showing CBT reduced symptoms of depression, anxiety, PTSD, eating disorders, insomnia, and chronic pain.
But when CBT works, it isn't because you've reasoned your way into better thinking; it's because stopping the thought stops the threat pattern from firing. You're interrupting the pattern. And because you're only interrupting it, not wiring in a new pattern or changing the old one, the benefits are typically short-lived.
By the 1990s, CBT had become the "gold standard" of evidence-based therapy. Insurance companies preferentially covered it, training programs prioritized it, grant funding flowed to it, and it thus also became something of a weapon. Other therapeutic approaches, like psychodynamic therapy, somatic therapy, and anything else that couldn't be easily manualized and measured, were dismissed as "not evidence-based." Never mind that meta-analyses were already showing all therapies worked about equally well when compared fairly.
The irony is that what Beck had noticed was true: thoughts and emotions ARE connected. The high road does feed the low road. But the conclusion that distorted thoughts caused the problem and could be logicked away convinced a generation of therapists and patients that if you just learned to think more rationally, you could think your way out of suffering, and that if you couldn't, you probably weren't trying hard enough.
But people started to notice that this approach wasn't helping as much as the research papers suggested. Lots of patients were getting sicker, getting more stuck, and CBT wasn't helping the way everyone was saying it should. So some therapists started to wonder: what if there was something going on in the body that talk therapy was missing entirely?
A psychiatrist named Bessel van der Kolk started documenting what he was seeing in his trauma patients. They would come in with histories of childhood abuse, combat exposure, assault, or loss, and trying to think their way out of their symptoms wasn't working. They could challenge their thoughts all day long ("You're safe now, that was in the past, your reaction isn't logical") and it made no difference; their bodies were still reacting as if the threat were present. And van der Kolk noticed these weren't just presenting as memories, they appeared to be full physiological states that got activated by present-moment cues. A combat veteran would hear a car backfire and his entire nervous system would shift before his conscious mind had time to register "that's not gunfire." A woman who'd been assaulted would feel a hand on her shoulder in a crowded room and her body would freeze, shut down, go numb, exactly as it had during the original attack. The past wasn't acting like the past in these people's brains; it seemed more like it was encoded in the body, ready to replay at any moment.
What van der Kolk was observing was exactly what we've been tracking: that threat circuit learning happens through experience and then fires automatically when similar patterns appear. The combat veteran's amygdala had learned "loud bang = mortal danger" so thoroughly that the sound pattern alone triggered the full cascade. The assault survivor's nervous system had learned "can't escape = freeze" and now activated that same immobilizing PAG response whenever anything pattern-matched to similar helplessness. These were conditioned threat responses, the same mechanism that gave the asthma patient a reaction to a paper rose, just playing out across different systems.
But van der Kolk and others didn't have that framework. What they concluded was that trauma was somehow "stored" in the body, that it needed to be "released" or "processed" or "integrated." Peter Levine developed Somatic Experiencing based on watching animals shake off their threat activation, theorizing that humans needed to "discharge trapped survival energy." Pat Ogden created Sensorimotor Psychotherapy to help people "complete" defensive responses that had been interrupted. Richard Schwartz's Internal Family Systems talked about "exiled parts" carrying childhood wounds that needed to be "unburdened."
These frameworks all treat trauma as something fundamentally "other." Something foreign living inside you that needs to be extracted, released, or healed. IFS talks about parts as if they're separate children living inside you with their own personalities and needs. Levine talks about trapped energy as if there's a reservoir of unexpressed tiger-fleeing that's been sitting in your tissues for decades. But there's no trapped energy. There are no inner children. There's no unprocessed material waiting to be integrated. These were very helpful metaphors for what we were noticing, and it makes sense why we held onto them for so long. Medicine was completely ignoring the very real patterns therapy was seeing, and these explanations gave people a framework with which to address this ignored part of our human experience. And it helped a lot of people. But because we were missing the core truth, we were still dancing at the edges of what was possible.
Because when it comes down to it, all of those things we were seeing were just your threat system, doing exactly what it learned to do. Firing the same patterns when similarly triggered that it's been firing since the original event taught it "this is dangerous." The combat veteran hearing a car backfire isn't experiencing "stored trauma" - his amygdala learned "loud bang = mortal danger" and so when it hears the loud bang it fires off the threat response in an effort to keep him safe. The assault survivor who freezes when touched isn't holding "trapped survival energy" - her immobilizing PAG learned "can't escape = freeze" and it activates whenever anything pattern-matches to that original helplessness. There's nothing to release, nothing to integrate, no inner child to rescue. There are just learned patterns, encoded through the same neuroplasticity that taught you to ride a bike. Except these patterns are keeping you sick.
And from these explanations came a framework that would dominate trauma therapy for decades. In 1992, psychiatrist Judith Herman published "Trauma and Recovery," which established what became the standard trauma therapy approach: a three-stage model where establishing safety had to come first, before any trauma processing could begin. Herman wrote that 'the first task of recovery is to establish the survivor's safety - this task takes precedence over all others, for no other therapeutic work can possibly succeed if safety has not been adequately secured.' And what started as a reasonable observation, that people need to feel safe in therapy, calcified into a rigid protocol. Trauma was framed as so powerful, so dangerous, that you needed months or years of careful preparation before you could even approach it. That feeling the feelings associated with trauma would 'retraumatize' you, potentially making you worse. That there were 'resources' and 'stabilization techniques' you needed to master first. That certain traumas were 'too big' to work with directly. That your nervous system was so fragile it could be overwhelmed by activation.
This became the dominant framework in trauma therapy. Patients would spend months, sometimes years, in preparation phases. Learning grounding techniques, building their "window of tolerance," identifying their "resources," maybe occasionally "titrating" small amounts of trauma material if the therapist deemed them ready. The actual trauma? That stayed locked away, too dangerous to touch directly, while therapist and patient tiptoed around it with elaborate safety protocols.
But if we look at this through the lens of how we now know the threat activation system works, we can see that this elaborate story, the idea that our own feelings have the potential to cause us real harm, creates the perfect conditions for a never-ending feedback loop. Your threat system gets activated, which is something that happens to every animal, every day, as a normal part of navigating life, but now you've been taught to be afraid of that activation itself, which signals more danger to your amygdala, and on and on it goes. That activation of the threat circuit, the very thing that needs to happen for the pattern to update, is itself now a threat. Now you're not just afraid of the original trigger, you're afraid of being afraid. You're afraid of your racing heart, your shallow breathing, the physical sensations of activation. The trauma therapy has added a new layer of threat on top of the original pattern.
In a 2024 survey of 348 clinicians, therapists reported high levels of fear about retraumatizing their patients. But they didn't collectively agree on what being "retraumatized" even meant. And therapists who believed they'd witnessed it became significantly more fearful of it happening again, suggesting the fear itself might be shaping what they see. The concept of retraumatization exists because therapists believe it can happen; retraumatization, when it occurs, isn't caused by activation, it's caused by the belief that activation is dangerous. That belief adds a new threat loop on top of the old one. The framework creates the very harm it claims to prevent. Because if we look at what "retraumatization" actually is, it's…activating the threat circuit. That's it. Which is exactly what you need to do in order to update it. The paper rose woman had a full asthma attack from a visual cue. That activation wasn't "re-asthmaing" her. It was her learned pattern expressing itself, and if you wanted to change that pattern, you'd need to activate it under conditions where the feared and expected outcome (can't breathe, going to die) doesn't occur. The activation is necessary, and it's not dangerous. It's literally the mechanism through which learning happens.
The irony is that some trauma therapy approaches, the ones that show the best results, actually involve activating the circuit. EMDR has you think about the traumatic memory while doing bilateral eye movements, which means you're activating the threat circuit while keeping the prefrontal cortex online, creating the dual activation needed for memory reconsolidation. Prolonged exposure therapy has you repeatedly revisit the trauma memory in detail until the activation decreases, to show your system that the expected catastrophe doesn't occur. Even the somatic approaches, when they work, work because they have you feel the body sensations associated with threat activation while you're actually safe, allowing the pattern to update.
But because these approaches wrapped their techniques in protective stories ("we're carefully titrating exposure," "we're processing the trauma," "we're completing the defensive response"), patients have inadvertently learned to be afraid of their own nervous systems, practicing elaborate grounding techniques to avoid activation, spending years preparing for work they were actually ready to do on day one.
The trauma therapy field was seeing something true: these patterns do live in the body - but not the way they think. When therapists talk about 'trauma stored in the body,' they're observing that gut dysfunction, chronic muscle tension, autonomic dysregulation, etc. all happen alongside emotional distress. But these aren't memories held in your intestines or your tissues; they're learned patterns in your brain, specifically in the threat detection circuits we've been tracking, that then reprogram your entire physiology. Your gut's enteric nervous system gets reprogrammed to be hyperexcitable when you stay in pathostasis for extended periods of time, and then unless it gets updated by the brain, it stays that way, sending constant danger signals back up to your brain. Your muscles learn to hold tension. Your immune system learns to stay activated. But the learning, the actual programming? That's all happening in your brain.
These and other fields of psychology were doing groundbreaking work with the information they had available to them at the time. They were seeing something that medicine couldn't: that this was happening not just in our 'thoughts', as though those were just floating in the ether, but in our brains and subsequently, our bodies. The problem was that without the complete understanding, each one of these competing modalities had just one or two pieces of the whole that was needed to work with these problems effectively. And because they each only had a fraction of the whole, when you look at them objectively, strip away the bias and look at the actual data about efficacy? What shows up again and again is this: They all worked about equally well.
The "Dodo Bird Verdict," named after the scene in Alice in Wonderland where the Dodo declares "everyone has won and all must have prizes," has been replicated in meta-analysis after meta-analysis. When you control for researcher bias and actually compare therapies head-to-head: cognitive approaches, somatic approaches, psychodynamic approaches, behavioral approaches, acceptance-based approaches, they all show roughly equivalent outcomes. Not identical, but close enough that the differences are clinically meaningless.
This finding has baffled the field for decades. How could approaches with completely different theories, completely different techniques, completely different explanations for what's wrong and how to fix it, all produce the same results? The answer is obvious once you understand pathostasis. Every one of these approaches, regardless of its stated theory, occasionally creates conditions where the threat circuit can update. Each gets the person to be present with the activation in a way that not only doesn't make it worse, but in fact shows the amygdala: 'it's all good, this isn't actually a threat.' They're all doing the same thing through different doors: activating the learned threat pattern in the presence of enough safety that the circuit can learn to stand down.
And in addition to the tools of each modality, they all also help partly because the process of therapy and the therapeutic relationship operate much the way placebo operates. Having hope, and feeling like someone is going to help you, these work because they directly counter the threat state. Social connection calms the nervous system; we even see this in the fish who've been using this threat system for 500 million years. Feeling understood signals safety to the amygdala.
The Dodo Bird Verdict is just the data telling us, over and over, that all of these approaches are accessing the same underlying mechanism, the threat detection system that we've been tracking through this entire chapter, and that they each found a piece of it. They each built elaborate theories to explain their piece. And they've spent decades fighting over whose piece was the real one, whose explanation was correct, whose approach deserved the funding and the prestige.
So the reason humans can create and maintain pathostatic conditions is that we evolved our cognitive abilities way too fast for them to integrate effectively with the 500 million year old threat detection system we share with all vertebrates. Let's look at what the learning and behavioral research tells us about how this happens mechanistically.
Chapter Eleven
Doing crossword puzzles helps stave off Alzheimer's...right? It's one of those things everyone just knows, and it follows the conventional wisdom of use it or lose it. But let’s actually look at the data behind this claim, as some researchers finally did, and see if it holds up.
The study that launched this idea followed 488 elderly people, of whom 101 eventually developed dementia, and only 17 of whom regularly did crossword puzzles. The finding in this study was that these 17 people scored better on memory tests for longer, meaning specifically that their mental decline appeared to accelerate about 2.5 years later than those who weren't doing crosswords. Which sounds promising, except that once the decline began, these 17 people fell off a cliff faster, which meant that by the time dementia was actually diagnosed, both groups ended up at essentially the same place. So the people doing crosswords didn't actually decline less, they just looked fine on the tests longer before crashing harder.
And the reason this fooled the researchers is that it turns out the memory tests used to assess these people's decline were made up of word recall tasks, which crossword puzzles are specifically training people to do. Which means the people who do crosswords regularly were already better at the thing the test measured, and they also happened to have demonstrably higher verbal IQ scores to begin with. So they weren't actually protecting their brains from Alzheimer's at all, they were just better at the test being used to assess the decline, one that just so happened to overlap with their hobby. It'd be like concluding that playing basketball helps you keep your physical coordination longer, and then testing physical coordination by assessing whether or not you can throw your trash into the bin effectively from across the room. Those basketball players could probably perform the task well for longer, but if their physical abilities were really declining, there would come a point where they would completely lose the ability to do so. This is what we're seeing with the crossword puzzle people. So what the data actually shows is that doing crossword puzzles just…makes you better at doing crossword puzzles.
So if crossword puzzles don't help, what does? What if the real protection against decline is actually simpler than targeted treatments, or puzzles, or diet, or pills? What if it lies in understanding how patterns get wired into our brains - and more importantly, how we can wire in new competing patterns if needed? For over a century, scientists have been documenting exactly this. Piece by piece, lab by lab, they've been mapping how learned responses form, why they persist, and how they can be changed or overridden. They were doing the same thing as medicine, documenting pieces of the elephant without seeing the whole, but all the while building an instruction manual telling us how we get sick, and how we can heal.
In the late 1800s a physiologist named Ivan Pavlov was studying the neural control of gastric secretions in digestion, using dogs as his subjects. He built an elaborate sound- and vibration-proof lab that came to be known as the Tower of Silence, in an effort to isolate as many variables as possible. To study this he surgically created openings in the dogs' digestive systems, which allowed him to collect and measure their gastric secretions in real time when they ate. To picture this, you can imagine his dog test subjects walking around living their regular dog lives, with collection tubes hanging from their mouths. What he started noticing was that the dogs' digestive responses, the gastric secretions, were starting before food even arrived, which was messing with his meticulous measurements and driving him crazy. These dogs would start salivating just from seeing the lab assistant who usually fed them, or even from hearing footsteps in the hallway. They were salivating in anticipation of the food's arrival (sound familiar? Seems suspiciously similar to what happens with placebo effects, physiological processes being triggered by expectation…). This "psychic secretion," as Pavlov called it, was contaminating his measurements of the purely biological digestive response.
What researchers normally do in this type of situation is to control for the contaminating variable and move on with their original research, as medicine has been doing with placebo for 70 years. But Pavlov instead became obsessed with understanding what was actually happening. He saw a surprising response, and instead of labeling it “paradoxical” or a “confounding variable”, he got curious. And what he realized was that something was causing these dogs to respond physically to something that had no inherent biological meaning. Footsteps obviously don’t contain any nutrients, and yet the dogs' bodies were responding as if they did. He decided to pivot his research and study where the data led him instead of holding rigidly to his original hypothesis.
His new experiment was really simple: he would tap a buzzer, then give the dogs food. The food triggering salivation was normal and expected, but he found that after pairing the buzzer with food enough times, the buzzer alone started to trigger salivation, even if there was no food present at all. The dogs' brains had created a connection between a completely arbitrary sound and a biological response. He found he could make them salivate to metronomes, visual stimuli, even electric shocks; anything that he repeatedly paired with food became a trigger for the digestive response. Pavlov had built his Tower of Silence to eliminate every possible contamination - and then the contamination in his own experiment ended up leading to one of the most important discoveries in the history of learning.
What Pavlov had discovered was that the brain creates associations that can produce physical responses. And associations like these are actually something we utilize all the time without thinking about it. As an example, flashcards work just like this. You look at a word on one side, the definition on the other, and as you repeatedly expose your brain to those two things in rapid succession, just like Pavlov did with the buzzer and the food, your brain links them together, so seeing one automatically brings up the other. And it makes sense evolutionarily why the brain does this. The faster your brain can make these associations, the faster it can trigger your reactions, the more likely you are to be the animal that makes it out alive. If you hear crinkling leaves, and then a tiger appears, after this happens just a couple of times, your brain will start to ready the fight or flight response just from hearing crinkling leaves, before a tiger comes into view. And it's exactly what was happening with the paper rose triggering an asthma reaction. Her body paired the image of the rose with the reaction enough times that the image alone could produce the response. Medicine saw this and it didn't fit their framework so they concluded this wasn’t real asthma but something else. But it makes evolutionary sense, right? Better to produce the full immune reaction in case the rose was real, than not produce it and end up in trouble. This is actually a really smart evolutionary adaptation. Until it starts linking things you don't want linked.
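If it helps to see this mechanism stripped down to its logic, here's a minimal sketch in code, loosely in the spirit of the classic Rescorla-Wagner learning rule. The learning rate and the number of pairings are illustrative assumptions, not Pavlov's actual measurements:

```python
# A toy model of associative learning: each cue+outcome pairing moves
# the association a fraction of the way toward its ceiling. The rate
# (0.3) and ceiling (1.0) are illustrative assumptions.

def condition(pairings, lr=0.3, max_strength=1.0):
    """Associative strength grows each time cue and outcome are paired."""
    strength = 0.0  # cue -> outcome association; 0.0 means no link yet
    history = []
    for _ in range(pairings):
        strength += lr * (max_strength - strength)  # learn toward the ceiling
        history.append(round(strength, 3))
    return history

# Buzzer paired with food seven times: most of the link forms early.
print(condition(7))
# [0.3, 0.51, 0.657, 0.76, 0.832, 0.882, 0.918]
```

Notice how front-loaded the curve is: a handful of pairings does most of the work. Keep that in mind for what comes next.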
In 1920, an American psychologist named John Watson set out to prove that all human behavior was learned, not inherited. He was convinced that he could mold anyone to do or be anything. And for some reason he chose to start this crusade of proof by terrorizing a nine-month-old child named Little Albert. Before this experiment Little Albert had no fear at all of rats, but when Watson presented him with a white rat, just as baby Albert started reaching for it, he struck a steel bar with a hammer directly behind Albert's head. The baby cried and fell forward, his threat response triggered by the sound. You can actually watch videos of this experiment on YouTube. It's…disturbing. Watson shows no emotion at all about the child's distress. Then Watson repeated this pairing seven more times over two sessions. By the end of these exposures, Albert would cry and try to crawl away just from seeing the rat, without any noise at all. The fear association to white rats had been wired in.
Five days later, they found that Albert was now also afraid of a white rabbit he'd previously happily played with. Then he cried at a white dog, a fur coat, and even a Santa Claus mask. Watson's own white hair caused Albert to cry when he leaned in close. The fear of the white rat had generalized, causing baby Albert to be afraid of anything that looked similarly white and furry. His brain had taken one learned association and applied it to anything that seemed remotely similar. Watson had run this pairing only a handful of times, and in that short time he had created a network of fear responses in Little Albert to an entire category of objects. Watson had proven that emotional responses could be created through simple association, and that you could wire fear to literally anything if you paired it with something scary. It's not known for sure, but there's speculation that Little Albert's mother learned what was happening to her son, because she pulled him from the study before Watson had the opportunity to see if he could reverse the conditioning. For decades, despite a lot of people trying, no one was able to conclusively identify Little Albert to learn what became of him in the aftermath. And Watson, caught having an affair with one of his students and fired from Johns Hopkins, never did get to finish his crusade to prove his full theory.
Seven pairings. That's all it took to create a fear that spread to everything white and furry. And this is exactly the pattern we saw earlier with the asthma patients whose triggers kept multiplying - first cats, then dogs, then dust, then cold air, then exercise. Or the IBS patients whose safe foods kept shrinking. Medicine notes these expanding lists and thinks, yep, the disease is progressing as we expect it to. But the learning science is clear: the brain learns one association and then generalizes it to anything that seems similar. It's trying to protect you by casting a wide net - and once that net catches something, it becomes part of the pattern. The groove gets deeper. The net gets wider. And the system keeps pulling you back into the same response. When you wire in one association, if it's strong enough, it can generalize to other related-seeming things.
In the 1930s a researcher named B.F. Skinner was running low on food pellets one weekend and didn't want to go make more. So he set his apparatus to only reward the rats in his experiment sometimes instead of every time, which led to one of the biggest discoveries in learning and behavioral research. Skinner had been studying how rewards shape behavior - when a rat presses a lever, they get food. But when he came back Monday after his lazy weekend adjustment, the rats were pressing the lever like maniacs, far more obsessively than when they got food every single time.
Skinner had assumed that continuous reinforcement, giving the animals food every time they pressed the lever, would create the strongest behavior. But what he found was that what is known as intermittent reinforcement, getting the reward randomly (the rat equivalent of a slot machine), made the behaviors far more obsessive and sticky, meaning more likely to persist. Even when Skinner stopped providing food entirely, the animals that had been on intermittent reinforcement would continue pressing the lever hundreds, even thousands of times before giving up. The ones who'd always gotten food would stop after just a few unsuccessful tries. In one experiment, Skinner gave pigeons food at random intervals regardless of what they did. And the pigeons started developing elaborate "superstitious" behaviors - one would spin in circles, another would thrust its head into corners, another would do a pendulum motion. Each pigeon had decided that whatever it happened to be doing when food arrived had CAUSED the food, and kept doing increasingly elaborate versions. The rituals became increasingly complex and rigid, which illustrates how false associations can create elaborate behavioral protocols based on coincidence. So Skinner had accidentally discovered two more crucial pieces: that unpredictability creates the most persistent patterns, and that associations can also encourage false explanations for what's causing our experiences (more on this piece in the next chapter).
When a behavior is rewarded unpredictably, the brain never knows when the next reward might come, so it keeps trying, the uncertainty itself drives persistence. If you never know when the next meal will come it makes sense to keep checking. And it's not just rewards that work this way. The same mechanism applies to anything unpredictable - including threats. When danger appears sometimes but not always, the brain can't afford to let its guard down. It stays vigilant and keeps scanning for the threat, keeping the threat detection system running. And anyone dealing with symptoms that come and go knows exactly what this feels like. The pain that appears sometimes but not always, the bad day that arrives unpredictably after a string of good ones, these create neural patterns far stronger than consistent experiences would. The brain can't stop checking, so it becomes hypervigilant.
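Here's a toy sketch of why unpredictable rewards produce such stubborn persistence. Assume the animal only gives up once the current dry streak becomes too improbable given the reward rate it learned during training; the quit criterion and the rates below are illustrative assumptions, not Skinner's numbers:

```python
# A toy model of the partial reinforcement effect: an animal trained on
# rare, random rewards needs a much longer unrewarded streak before
# "the food has stopped" even looks unusual. The surprise threshold
# (p < 0.01) is an illustrative assumption.

import math

def presses_before_quitting(reward_rate, surprise_threshold=0.01):
    if reward_rate >= 1.0:
        return 1  # an always-rewarded animal notices the change immediately
    # The chance of a dry streak of length n is (1 - rate) ** n.
    # Keep pressing until that streak would be implausibly unlucky.
    return math.ceil(math.log(surprise_threshold) / math.log(1 - reward_rate))

for rate in (1.0, 0.5, 0.1, 0.02):
    print(f"rewarded on {rate:>4.0%} of presses -> "
          f"{presses_before_quitting(rate):>3} presses after rewards stop")
# rewarded on 100% of presses ->   1 presses after rewards stop
# rewarded on  50% of presses ->   7 presses after rewards stop
# rewarded on  10% of presses ->  44 presses after rewards stop
# rewarded on   2% of presses -> 228 presses after rewards stop
```

The rarer and more random the payoff was, the longer the behavior survives with no payoff at all. Swap "reward" for "symptom-free day" and the same arithmetic explains why intermittent symptoms keep the threat system scanning indefinitely.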
And that vigilance changes your actual physiology. When you're constantly scanning for symptoms, you're training your nervous system to detect smaller and smaller signals. The same synaptic strengthening that lets you learn a language or memorize facts is now making you better at feeling pain, or gut sensations, or whatever you're monitoring, just like any skill gets honed through practice and attention. Remember the monkeys in Merzenich’s experiment who were learning to touch a spinning disc? The region in their brain for that specific finger exploded, they actually grew more neuronal connections in that part of their brain to enhance their ability to perform that skill. The same thing is happening here. Signals that used to be too weak to reach conscious awareness now register loud and clear. Medicine sees this hypersensitivity and assumes it's causing the condition - but you weren't born with intestines that feel everything, or hypersensitive lungs. The sensitivity is a product of the vigilance, not the other way around. You literally trained yourself to feel more by paying so much attention.
So now that we understand how these patterns and even symptom associations form, spread, and dig themselves in deeper the more we try to monitor them, it raises the question: is that it then? Are we stuck with them forever, as medicine seems to think? To answer that, let's look at what researchers were documenting on the other side of the equation: what we might actually be able to do about it.
In the 1970s and '80s, Mark Bouton at the University of Vermont wanted to know if we could reverse the kind of conditioning that Watson did to Little Albert; whether we could extinguish (get rid of) or undo that kind of fear response. So he first taught rats to be afraid of a specific tone by pairing it with a shock, much as Watson had paired the white rat with the loud noise. Then he played that same tone repeatedly without the shock until they stopped showing fear. The prevailing assumption in neuroscience, then and still today, is "use it or lose it," so the assumption here was that the fear memory in these rats had been erased once they stopped showing fear. Or that it had at least been weakened. But instead of extinguishing the fear and calling it done, Bouton kept testing that assumption. And what he discovered was that the original fear was actually still there, fully intact, and could resurface given the right conditions.
He showed that when he extinguished a fear completely in one location and then tested the animal in a different room, the fear was still there, as strong as ever. The animal hadn't unlearned that the tone was dangerous; it had only learned that the tone wasn’t dangerous in that specific room. So then he tried testing another assumption: he would extinguish a fear completely, and then he would wait a few weeks without any training or exposure, and do the test again. He found that the fear returned completely, as if the extinction had never happened, even in the learned location. And finally he showed that when he extinguished a fear, and then exposed the animal to mild stress that was completely unrelated to the original fear conditioning, the fear came roaring back. So what he was seeing was that extinction wasn't actually erasing anything - the original fear learning remained intact - it was just temporarily being suppressed under specific conditions.
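One way to make sense of all three findings at once is to assume that extinction is a second, context-tagged layer of safety learning sitting on top of an intact fear memory, rather than an erasure of it. Here's a minimal sketch under that assumption; the class, the learning rates, and the room names are all illustrative:

```python
# A toy model of Bouton's results: fear is learned globally, but safety
# (extinction) is stored per-context as inhibition. The fear itself is
# never reduced; it's only masked where the safety was learned.

class ThreatCircuit:
    def __init__(self):
        self.fear = 0.0       # global cue -> danger association
        self.inhibition = {}  # safety learning, tagged by context

    def shock_pairings(self, n=5):
        for _ in range(n):
            self.fear += 0.2 * (1.0 - self.fear)

    def extinction(self, context, n=20):
        for _ in range(n):
            current = self.inhibition.get(context, 0.0)
            self.inhibition[context] = current + 0.2 * (self.fear - current)

    def fear_response(self, context):
        return max(0.0, self.fear - self.inhibition.get(context, 0.0))

rat = ThreatCircuit()
rat.shock_pairings()
rat.extinction("room A")
print(round(rat.fear_response("room A"), 2))  # 0.01 -> looks "cured" here
print(round(rat.fear_response("room B"), 2))  # 0.67 -> full fear returns
```

The fear value never goes down in this model; only the context-specific mask changes. Move the animal to a new room, or let time or stress erode the mask, and the original learning is sitting right there, which is what Bouton kept measuring.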
By the 1990s, brain imaging technology finally let researchers see what was actually happening during extinction, and it confirmed what Bouton's behavioral experiments suggested. When they looked at the brain activity during extinction, they found that the fear neurons in the amygdala still initially activated when the stimulus appeared. The amygdala was still recognizing "this is the thing that was dangerous," but for responses that have been extinguished, neurons in the prefrontal cortex would quickly activate and inhibit the fear response before it could fully develop into the physiological and behavioral fear reaction. Which makes a lot of sense. If you learn that snakes are dangerous, and then you experience some snakes in the wild that don’t attack you, you don’t want that fear circuit to disappear completely, you just want slightly more nuanced activation. Because some snakes ARE dangerous, and your brain needs to know that.
So what Bouton had shown was that fear learning doesn't get erased, it gets suppressed. The original circuitry stays intact. And this turns out to be true not just for fear, but for learning in general: well-worn patterns don't simply get deleted. Which is why you may remember your childhood phone number, or the red socks you wore to your 3rd grade school concert. These details weren't important, they just still... exist. Because that's how our brains work.
And the prefrontal activation that the scans show inhibiting our pathostatic responses to fear is actually the same thing we saw in Chapter 2 with placebo. When someone believes they're getting treatment, their prefrontal cortex provides the safety signal that allows the fear response to quiet down. And then over time, with enough safety experiences, the prefrontal inhibition can become faster and more automatic, so the fear response gets suppressed before it fully develops. This might explain why drugs like SSRIs can sometimes work long term: not through the serotonin mechanism medicine assumed, but because taking them every day creates a repeated safety signal that activates the same prefrontal inhibition we see in extinction. We are constantly reminding our body, every single day, I'm being helped here, I'm being taken care of. But the original fear circuits still remain structurally intact, all the synapses and connections still there. Which is why under stress, or in new contexts where the safety learning hasn't occurred, the original fear response can return at full strength, as if the extinction had never happened.
This split between what we consciously know and how our body automatically responds becomes clearer when you look at how memory actually works. Researchers discovered that we have two completely separate memory systems that encode experiences in fundamentally different ways. The explicit memory system, centered in the hippocampus, creates conscious memories you can deliberately recall and describe. This type of memory contains the story you have of what happened to you. And the implicit memory system, running through the amygdala and other subcortical structures, creates unconscious emotional responses and physical reactions that fire automatically without any conscious recall or sometimes even awareness that you're remembering something. This is where our 500 million year old threat detection system lives.
Most of the time these systems work together seamlessly; you have an experience, and you get both the story and the feeling, tagged to each other. You remember your wedding day and the joy comes with it. You remember an embarrassing moment and the feeling it elicited comes up too. The explicit memory calls up the implicit response and vice versa. But when something traumatic happens, sometimes these two systems can encode completely different aspects of the experience, and they don't necessarily communicate with each other. Your explicit system might remember the facts of a car accident, the color of the other car, the intersection where it happened, while your implicit system remembers the feeling of your chest tightening, the sound of screeching brakes, the sensation of losing control. Later, you might hear brakes squeal and feel your whole body tense without consciously thinking about the accident at all. The implicit memory fires before the explicit system even knows what's happening.
And in people with chronic conditions, this imbalance compounds over time. Cortisol impairs hippocampal function while leaving the amygdala fully active - or even hyperactive. So the system that should be contextualizing and 'date-stamping' experiences, the one that could help you understand 'that was then, this is now,' gets weaker, while the automatic body responses get stronger. Day after day, the implicit system is reinforced while the explicit system is suppressed, making the automatic threat loops stronger, and the conscious override weaker. This is why chronic conditions can seem to take on a life of their own.
This is why you can know intellectually that you're safe while your body is reacting as if you're in mortal danger. And this is why most therapeutic approaches fail to create lasting change - talk therapy engages the explicit system but often can't reach the implicit body memories. Somatic approaches might calm the implicit system temporarily but don't update the explicit understanding. Medicine doesn't address either system, it just tries to suppress the downstream symptoms. To actually update these patterns, you need both systems engaged simultaneously, which rarely happens by accident.
If you study for a test when you’re tipsy, you’d be best served by being tipsy while you take the test too. In the 1960s researchers found that when people learned something while they were drunk, they could remember what they learned better when they were drunk again, rather than sober. And when they studied this weird discovery further, it turned out to be true for every state they tested. Information learned while anxious was best recalled during anxiety. Skills practiced while calm were most accessible when calm. Pain states, and mood states, and even specific body positions all created state-dependent memory networks that were most available when those states returned. And this understanding actually helps Bouton's findings make a lot more sense. The rat didn't learn 'the tone is safe now.' It learned 'the tone is safe in this room.' The safety learning was tagged to that specific context, which is why it disappeared the moment you changed the room. If you change the context, you change the learning. Which makes sense evolutionarily. Your body could learn, tigers can’t get me in this cave, but when I go outside all bets are off. We had to be able to make context dependent threat assessments, so we could let our body relax when it was safe, and could stay vigilant when the possibility of threat demanded it.
In the 1980s a researcher named Gordon Bower at Stanford hypnotized subjects to feel either happy or sad, then had them learn lists of words. Later, he tested their recall in either the same mood or the opposite mood. People who learned words while sad remembered them better when sad again. People who learned while happy recalled better when happy. The brain wasn't just storing the information; it was storing the information tagged with the internal state present during learning. This means that all the coping skills you learn while feeling calm and safe in a therapist's office are neurologically tagged as "calm state" information, making them least accessible precisely when you need them most, during the stressed, symptomatic states where the maladaptive patterns live. It’s why meditation is often just like doing crossword puzzles. You are learning to be calm WHILE you meditate, in those specific circumstances. When you are annoyed with your spouse because they didn’t empty the dishwasher, that meditative state you’ve been practicing isn’t going to come up unless you make it come up. And both of these state dependent findings, mood states and state-dependent learning, provide a lot of context for how illness is self perpetuating. When you're sick, you're learning all your patterns in the sick state. Every symptom, every fear response, every coping attempt gets tagged to that pathostatic state. So being sick becomes the context that triggers being sick - your body learns to be a certain way when it's getting specific stimuli, and the symptoms and physiology that state creates actually create a feedback loop that becomes more and more sticky and thus chronic.
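A toy sketch of Bower's finding: what you learn carries tags for the internal state you learned it in, and retrieval strength depends on how well your current state matches those tags. The overlap rule and the state labels are illustrative assumptions:

```python
# A toy model of state-dependent recall: retrieval strength is the
# overlap between the state present at encoding and the current state.

def recall_strength(encoding_state, current_state):
    """Sum the tag weights that the two states share."""
    return sum(weight * current_state.get(tag, 0.0)
               for tag, weight in encoding_state.items())

# A coping skill learned while calm, in the safety of a therapist's office:
coping_skill = {"calm": 0.9, "therapist_office": 0.8}

print(round(recall_strength(coping_skill,
                            {"calm": 1.0, "therapist_office": 1.0}), 2))
# 1.7 -- fully accessible right where and how you learned it

print(round(recall_strength(coping_skill,
                            {"stressed": 1.0, "at_home": 1.0}), 2))
# 0.0 -- inaccessible exactly when you actually need it
```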
And just to round out how far these associative findings go, let's look at the research on drug overdose as it relates to location. A researcher named Siegel showed that drug tolerance is situation-specific: the body learns to mount its compensatory response partly in reaction to the cues of the setting where the drug is usually taken, and without those cues, a familiar dose can become fatal. There was one man who received the same morphine dose four times a day for four weeks, always in his dimly lit bedroom, where he was bedridden. But one day he for some reason dragged himself out of his bedroom into the brightly lit living room. When his son gave him the exact same dose in the living room, he died of an overdose. His body didn't mount the tolerance response it had learned, without the context of the room it had learned it in. And if context can determine whether or not a morphine dose is fatal, imagine what it's doing to every other physiological process. Pain researchers discovered that chronic pain patients often felt better in novel environments like hospitals or vacation spots, only to have their pain return the moment they got home. The brain had learned not just "pain" but "pain in my bedroom," "pain at my desk," "pain at 3 PM," creating elaborate networks of association that turned entire environments into symptom triggers. This should have changed, or at least inspired research on, how we think about physiological responses to medicine, physiological responses to pain, and really, everything else. But instead, medicine just noted it and filed it away.
Hermann Ebbinghaus figured out another piece of the puzzle when he sat alone, memorizing nonsense syllables like 'DAX' and 'BOK' for hours every day, using himself as a test subject. This obsessive self-experimentation led him to discover what he called ‘the spacing effect.’ He learned that repeated practice done over time led to better retention than a lot of practice all at once. Cramming doesn't work. Spreading practice over time does. Which is another reason one-off therapy sessions or weekend retreats don't create lasting change.
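One common way to model the spacing effect is to assume that a review strengthens a memory in proportion to how much it had faded, so reviews spread over days do more work than the same number crammed into one sitting. Here's a minimal sketch of that assumption; none of the constants are Ebbinghaus's actual measurements:

```python
# A toy model of the spacing effect: each review boosts the memory's
# stability more when more forgetting has happened since the last one.

import math

def retention_after(review_days, test_day):
    stability = 1.0  # higher stability = slower forgetting
    last = 0
    for day in review_days:
        recall = math.exp(-(day - last) / stability)  # what survived the gap
        stability += 2.0 * (1.0 - recall)             # harder recall, bigger boost
        last = day
    return math.exp(-(test_day - last) / stability)

print(f"crammed: {retention_after([0, 0, 0, 0], test_day=30):.1%}")   # 0.0%
print(f"spaced:  {retention_after([0, 3, 10, 20], test_day=30):.1%}")  # 21.4%
```

In this toy model the crammed reviews add nothing, because nothing has faded yet when they happen; only a review that arrives after some forgetting has anything to strengthen.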
This is what Skinner had been seeing with his intermittent reinforcement schedules: when rewards came unpredictably, each instance created a separate learning event. Which shows why the very unpredictability that makes symptoms so maddening also embeds them more deeply, not by creating one strong pathway but by linking the same pattern to hundreds of different states and contexts, creating a web of triggers that can activate it from almost anywhere in a person's life. State and context learning also explain really clearly why meditation often leads to you…being more calm when you're meditating, but doesn't necessarily lead to you being more calm in your everyday life. If you consciously take those meditation principles and apply them to how you show up moment to moment, that is when meditation becomes an impactful practice. But if you only apply the meditation when you're actively meditating, only apply the principles when you've shut out the world, that doesn't end up translating into something that helps when you're activated and stressed. This is the difference between what are called state changes and trait changes. Meditating the way most people do creates a state change: calm while meditating. Vagal toning exercises like voo breathing, cold plunges, breath work: same thing. You may be momentarily changing your nervous system state, but it only applies to that moment. To create a trait change, something that changes the way you show up in your life, you need to use these practices throughout your day, across all contexts, over time.
So now that we know isolated practices only create isolated change, let's look at what the research shows is actually required to create lasting updates.
Chapter Twelve
For most of human history, we believed memories were fixed, trapped in amber for the rest of time. Which means we also thought of our memories as a trustworthy and true recall of what actually happened. But in the early 2000s, Karim Nader, while working at NYU, discovered that every time you recall a memory, it becomes temporarily moldable, able to be updated and changed, before being saved again. Meaning we are literally rewriting our history every time we remember something.
When Nader proposed that memories could be changed during recall, senior scientists told him he was committing career suicide. The idea that consolidated memories were permanent was dogma. One prominent researcher told him, "Don't waste your time, this will ruin you." Which illustrates again that when people find things that should excite medicine, the things that could rewrite the textbooks, rather than being celebrated and supported, they are cautioned against pursuing the truth.
But Nader persisted anyway. And he figured out that our memory works similarly to how computer memory works. When you open a file on a computer, you're not just reading the saved version - you're actually loading it into working memory where it can be edited before being saved again. And it turns out this is how our brains work too. When we remember something, it’s like pulling up a file from a computer, and what we do with that file changes it, and those changes get saved, which updates the file. For difficult memories like trauma and those surrounding PTSD, this has some really important implications. It means that we are able to update those memory files in our brain, potentially reducing the physical response and making the trauma feel less traumatic. If you recall a fear memory while simultaneously experiencing genuine safety - not just thinking about being safe, but actually feeling safe in your body - the memory gets rewritten to include that safety information. How your past is encoded can be changed, neurologically, through how you meet it in the present.
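Using the file metaphor directly, here's a minimal sketch of reconsolidation: recall "opens" the trace, the state you're in while it's open gets blended in, and the blend is what gets saved. The blend rate and the numbers are illustrative assumptions, not Nader's data:

```python
# A toy model of reconsolidation: every recall re-saves the memory with
# a fraction of the current emotional state mixed into it.

def recall(memory, current_state, blend=0.2):
    """Recalling is rewriting: the re-saved trace includes today's state."""
    return {
        "facts": memory["facts"],  # the story stays recognizable...
        "threat": (1 - blend) * memory["threat"]
                  + blend * current_state["threat"],  # ...the charge drifts
    }

memory = {"facts": "the accident at the intersection", "threat": 0.9}

# Recall the memory ten times while genuinely feeling safe (threat ~0.05):
for _ in range(10):
    memory = recall(memory, {"threat": 0.05})

print(round(memory["threat"], 2))  # 0.14 -- same story, far less charge
```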
But the flipside of that is also true. If you recall a memory and add a new lens to it that is not safety but threat, or dysfunction, that also can get encoded into the updated memory file. When we adopt the cultural narratives around having had toxic parents or specific attachment styles or childhood trauma, we're not just reinterpreting our past, we're reconstructing it. Every time you recall a childhood memory through the lens of "that was trauma" or "my parents were narcissists" or "I have anxious attachment," you're encoding that interpretation into the memory itself. It can feel like you're finally seeing what was always there. But it is also very possible that in some real way you could be building a past that matches your current beliefs, making it more true neurologically, emotionally, and psychologically, with each recall.
The story you tell about your history becomes your history more deeply with every retelling.
This is not to say that people didn’t have hard childhoods, or that specific events aren’t traumatic. It is only saying that the way we tell the story to ourselves as we recall those memories becomes part of the memory itself. The way we decide to feel about our history can either help us better respond to our current reality, or it can encode in more trauma and more threat and make us more reactive and stuck.
This understanding of how memories update also changes how we should think about exposure. The traditional model says you should go slowly, exposing yourself to less scary or activating things and building up your tolerance over time. Like if you had a fear of spiders, the traditional model might have you start with pictures of spiders, and then maybe a spider in a jar across the room, then closer, and then eventually you can maybe touch one, or have one on your hand. The idea was that you'd slowly get used to the fear, building tolerance over time until the thing that scared you didn't scare you anymore. The ultimate goal being that you could remain calm in the face of your fears.
In 1989, a Swedish psychologist named Lars-Göran Öst showed that this slow approach wasn't necessary. He developed what he called one-session treatment, where patients would go through their entire fear hierarchy in a single three-hour session. People told him he was traumatizing his patients. And it makes sense why they thought that; he would put spider-phobics face to face with tarantulas, or make claustrophobic patients enter tight confined spaces, all in one sitting. But his data showed 90% of patients were improved or recovered at four-year follow-up. The intense, concentrated approach worked for discrete phobias — fears tied to specific, identifiable triggers. But chronic conditions aren't wired to one trigger. They've been encoded across contexts, states, and situations over months or years. Which is why the context-dependent piece still matters for more complex patterns.
But even for discrete phobias, no one understood why Öst's approach worked; it violated everything the field believed about safe exposure. Michelle Craske at UCLA started looking into why this contrary-seeming method was working. The old model said exposure worked through habituation: you slowly got used to the fear until it went away. But Craske researched what was going on and was able to show that wasn't actually what was happening. The key component wasn't whether your fear went down during the exposure session; that didn't predict your long-term outcomes. People whose fear stayed high throughout exposure did just as well as people whose fear decreased. The ones who were crying, terrified, sweating - they showed good long-term outcomes too. The prevailing wisdom was that this level of activation would retraumatize people, but that's not what the data showed. It turned out what actually mattered was activating the fear circuit while simultaneously showing your threat center that you're okay - that this scary seeming thing is not actually a threat.
This finding started to make more sense as researchers looked at what was happening in the brain during these updates. In 2004, Phelps and colleagues looked at what was happening in the brain during fear extinction. What they found was that successful updates required two things firing simultaneously: the fear center, where the automatic threat response lives, and the prefrontal region responsible for conscious processing and safety learning. The prefrontal cortex forms the new 'this is safe' memory and actively inhibits the fear response, but only if the fear memory is active at the same time. The strength of connection between these two regions predicted how well the extinction stuck. Knowing this, that we need the fear center activated while the conscious safety-learning center is online and communicating with it, helps explain what the trauma therapy field got right, and what they missed.
Bessel van der Kolk and others established what is now known as trauma therapy because they noticed that trauma showed up in body sensations: people would have full physiological reactions, racing heart, shallow breathing, the whole cascade, even when they consciously knew they were safe. They could tell you "I know I'm not in danger right now," and their body would be acting like a tiger was in the room. Seeing this, they assumed that meant that trauma was "stored in the body" and needed to be released or processed through the body. Which was revolutionary at the time, and brought a lot of much needed attention and research to what else was happening with trauma aside from just the thought piece. But since then we’ve learned a lot more about what’s actually going on, and it turns out that interpretation wasn’t quite right either.
Remember that the implicit system encodes threat responses that fire automatically, before conscious thought even comes online, while the explicit system encodes the narrative, the conscious story of what happened. And because these two systems can encode separately, store separately, and retrieve separately, they can also fire completely independently. Which is why, as we said above, you can know intellectually that you're safe while your body is in full terror. The explicit system has the information: "this is my living room, there's no danger here." But the implicit system has different information, encoded through different pathways: "this sensation pattern means danger, this sound means threat, this body position means brace for impact." And though most of the time these two systems encode information together and get retrieved together, if the situation was too scary or your brain decided to help you check out, they can be encoded separately. These systems activate physical responses, but that doesn't mean trauma is stored in the body, the way van der Kolk and others claimed. The body sensations aren't where the trauma 'lives.' They're the output of the implicit system, which is in your brain, firing its learned pattern.
Which means the update doesn't happen by "getting into the body" or "releasing stored trauma." There's nothing stored in your tissues that needs to be released. The update happens when you get both systems online at the same time, and you update the prediction algorithm of your threat center. When you're actually feeling the sensations, which is the implicit system's output, while also consciously present and aware that you're safe, which is the explicit system doing its job. The two systems that were encoded separately have to fire together, integrated, for the learning to update.
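Stated as a toy update rule, the claim looks like this. The gating condition and the learning rate are illustrative assumptions, a sketch of the logic rather than anything measured in the studies above:

```python
# A toy model of the dual-activation requirement: the learned threat
# weight only shrinks when the implicit circuit is actually firing AND
# the explicit system is online signaling genuine, felt safety.

def update(threat_weight, implicit_active, felt_safety, lr=0.25):
    if implicit_active and felt_safety > 0.0:
        # Both systems online together: the pattern can reconsolidate.
        return threat_weight * (1.0 - lr * felt_safety)
    return threat_weight  # either one alone changes nothing

w = 1.0
w = update(w, implicit_active=False, felt_safety=0.9)  # insight, no activation
w = update(w, implicit_active=True, felt_safety=0.0)   # activation, no safety
print(w)  # 1.0 -- the pattern is untouched

for _ in range(8):  # feeling it while genuinely present and safe, repeatedly
    w = update(w, implicit_active=True, felt_safety=0.8)
print(round(w, 2))  # 0.17 -- now the pattern actually updates
```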
And for complex threat responses that have been encoded throughout many instances in your life, this can't just happen once, in a therapy room, under special conditions. The research on context-dependent learning that we covered earlier showed that updates made in one setting don't automatically transfer to other settings. If you only practice being calm in your therapist's office, you've taught your brain to be calm in your therapist's office. The pattern that fires when you're at home, or at work, or in the middle of the night, that pattern hasn't been touched. You need the new integrated experience to happen across contexts. Across different moods and times of day and situations, in the actual moments where life activates the old pattern. Each time you meet the activation with both systems online, feeling it while staying present, you're building the new wiring in that specific context. Do it enough times, in enough different situations, and the new pattern starts to become the default.
This dual requirement explains why so many interventions have failed to create lasting change. Cognitive approaches that help people understand their patterns intellectually don't reach the implicit, somatic encoding. Body-based approaches that release tension or create calm states don't necessarily update the cognitive piece. Meditation might create a calm state, but without the fear memory active, there's nothing to update. Exposure might activate the fear, but without the felt sense of safety, your body's takeaway is still "that was dangerous, we just averted disaster this time." And all of them typically happen in isolated contexts, like the therapy room or retreat, which means the pattern that fires at home, at work, or in bed at 3am remains intact.
The research has mapped out exactly what the brain needs to update patterns that seem fixed. Scientists have discovered the lock and the key. They just haven't quite realized that chronic illness patterns are locks that can be opened with these same keys.
No one reading this book is going to be around long enough to see a time when our ancient and cognitive brains evolve to sync up more seamlessly. Evolution needs hundreds of thousands of years, maybe even millions, to create that kind of integration. So we need solutions now; we can't wait around hoping for natural selection to fix a mismatch that's making millions of people sick today. Luckily, the same neuroplasticity that allows pathostatic loops to form in the first place also gives us the means to consciously wire them differently, and to wire in alternate pathways.
Everything we've learned about how patterns get encoded tells us how they can be re-encoded differently. But before we can use these keys, we need to understand one more piece of learning research.
In 1935, a German psychologist named Karl Duncker developed an experiment where he gave people a candle, a box of thumbtacks, and a book of matches, and then asked them to attach the candle to the wall so the wax wouldn't drip on the floor. The simple solution was to empty the box, tack it to the wall, and use it as a platform for the candle. But because people first saw the box as a container for tacks, they struggled to see it as a platform. That first association blocked the perception of other possibilities, a phenomenon now called functional fixedness: the first version of something we encounter and believe persists in our minds unless we consciously work to challenge those assumptions and associations. This quick pattern matching is evolutionarily useful and efficient. We don't want to have to evaluate every object from scratch every time we encounter one; it allows us to move through our complex world much more quickly and adeptly. But it can also limit us and keep us stuck when it inhibits our ability to assess information in a novel way. Ultimately what Duncker showed was that once you have decided what something is for or what a concept means, you are far less likely to be able to see it a different way.
Not long after Duncker's experiment, in the 1940s, Abraham Luchins took it a step further. He studied our brains' tendency to get locked into the thinking and problem-solving strategies we learn first, developing what came to be known as the water jug problem. He'd give people a series of puzzles where they had to measure out specific amounts of water using three jugs of different sizes. The first several puzzles all required the same somewhat complex solution - fill jug B, pour from it into jug A once, then pour from it into jug C twice. People would solve puzzle after puzzle using this method until it became automatic. Then Luchins would give them a puzzle where this method still worked, but there was now a much simpler solution - just fill jug A and pour once into jug C. But over 80% of people stuck with the complicated method they'd learned. They couldn't see the simpler solution even when it was right there, because their brains had locked onto the established pattern.
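For the curious, the arithmetic of Luchins' setup is easy to check. Here is a minimal sketch in Python, using jug sizes commonly cited from his series (treat the exact numbers as illustrative, not as a reproduction of the original stimulus set):

    # Einstellung sketch: the practiced method (B - A - 2C) vs. the
    # direct method (A - C). Jug sizes are commonly cited examples.
    def set_method(a, b, c):
        return b - a - 2 * c  # fill B, pour off A once, pour off C twice

    def direct_method(a, c):
        return a - c  # fill A, pour off C once

    problems = [
        (21, 127, 3, 100),  # training: only the set method works
        (14, 163, 25, 99),  # training: only the set method works
        (23, 49, 3, 20),    # test: both work, but most people miss A - C
    ]

    for a, b, c, target in problems:
        print(target, set_method(a, b, c) == target, direct_method(a, c) == target)

Run it and the last line prints True for both methods: the simpler route was sitting there the whole time.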
This phenomenon, called the Einstellung effect, showed how practiced solutions actively block novel approaches. And it gets stronger with expertise. Later chess studies revealed that when expert players encountered problems with a familiar-looking pattern that required an unconventional solution, their performance dropped dramatically - to the level of players three standard deviations below them in skill. The expert sees the board, recognizes a pattern, and applies the sophisticated response they've developed over decades, completely missing that this particular configuration needs something different. Their expertise becomes their blind spot.
Luchins later tried telling some of his subjects, before introducing the puzzle with the simpler solution, "don't be blind." More than 50% of these prompted subjects found the simpler solution on the remaining problems. Which illustrates that we do have the capacity to see what we're missing, when we consciously bring our pattern-matching tendencies into awareness.
By the late 20th century, neuroscience research on the hippocampus and cortex showed that the reason this becomes so powerfully entrenched is that we make these associations before our thinking brain has had time to process at all. It happens automatically, in the blink of an eye. When your brain gets partial information matching a known pattern, it literally fills in the "expected" rest without waiting for you to think about it. And this served us evolutionarily - if you see stripes in the tall grass, your brain completes the pattern to "tiger" and gets you moving before you've consciously processed what you're seeing. This kept us safe and alive, but in our modern world, with our conscious brain slapped on top, it becomes a bug when the automatic completion blocks you from seeing what's actually there. Like when your partner or friend is trying to grow and change, and you can't stop seeing them through the old lens of how they used to be. Our old perceptions pattern match before we even think, and then we get stuck, seeing them through the automatic pattern instead of noticing what's different. This can also get in the way of learning something new, or seeing an old problem through a new lens.
There's another theory that takes this further by demonstrating how this plays out with complex knowledge. Schema theory shows that once you have a mental model for understanding something - a schema - your brain preferentially notices information that fits that schema, and dismisses or reinterprets evidence that doesn't. The researcher who developed schema theory, a British psychologist named Frederic Bartlett, had British people read a Native American folk tale, "The War of the Ghosts," and then retell it. He documented that on each retelling it became more "British," with participants replacing unfamiliar details with more familiar ones. They weren't intentionally changing the story; their brains were automatically reconstructing the memory to fit their existing schemas.
This concept alone explains some of why medicine is where it is today. Imagine the amount of learning and encoding that happens over the course of a 12 to 15 year education. Doctors are trained in an almost boot-camp-like system for over a decade of their lives, teaching them to see patterns in a very specific way. Of course this leads to a brain that reinforces what it was taught and rejects anything that doesn't seem to fit what it thinks it knows.
Schema theory was largely ignored until the 1970s, when computer scientist Marvin Minsky rediscovered Bartlett's work while trying to build AI that could understand the world like humans do. He realized that humans use stored knowledge frameworks to perceive and understand the world around them, and he modeled AI on exactly that. AI has a bank of knowledge just like a human brain does, and it makes sense of new inputs by synthesizing them with its existing information. You might have noticed how frustrating it can be to hash through a new or novel context with AI, because the pull to repeat what it already knows is so strong, even when that's not what you need. This is how our brains work.
And you've seen this in action if you've ever been sick. If you talk to a psychologist about your fatigue and low mood, they will have one interpretation, maybe depression, but if you talk to a cardiologist, they may look at that and see signs of an overworked heart. A pulmonologist might think you have reduced airflow causing those symptoms. And because each specialist has spent years honing their knowledge of this one particular piece of human physiology, it's really easy to pattern match to that knowledge base, and really hard to try to synthesize that across the systems that they aren't as familiar with.
This matters for what comes next because almost everyone has tried calming strategies or various therapy techniques (CBT, EMDR, ACT, ERP, and many more), or wellness hacks, breathing exercises, tapping, or yoga, or any number of other interventions over the years. And we’ve all formed schemas about them - what they are, what they do, whether they work (or don't). Those schemas will now be activated as we think through what we are learning here about updating our pathostatic conditioning. Our brains will attempt pattern completion such as: "This sounds like meditation" or "I tried something similar with my therapist" or "So you mean like CBT?"
And the research shows really clearly that this is most pronounced in people who've tried a lot of these things, because now there aren't just individual things to pattern match to, but a GROUP of things. Just like the chess master who can't see the novel solution on the board. Once we have a "wellness" or "therapy" schema, and nothing in it produced results, our brains are really quick to dump the next thing into that schema bucket to save us time and energy. And again, this is super advantageous evolutionarily. If you kept trying to find food in similar-looking forests, and every time you came up empty-handed, it would be really helpful if your brain tipped you off and said, hey, how about we don't waste our time here again, there's never any food. But because pattern completion happens fast - often in the first few sentences, before you've actually encountered the full information - we never engage with the nuances that could show our brain this time is different.
The reason wellness practices, meditation, or the therapy processes we laid out in the last chapter often don't work for pathostasis specifically or fully is that while these approaches might have one or two pieces of what's needed, they are only implemented partially and almost always in isolated contexts. Which means they often function as state changers, not trait changers. And our brains will want to pattern match to those things. But this is like being told you need to do specific physical therapy exercises every day in a specific order and with the proper technique to recover from a broken arm, and listening to our brain when it says “oh yeah well I've tried push ups and lifted weights before so I'll just do that again I guess”. Or worse, “oh well I actually tried pushups already and that didn't work so I don't think this will work for me either.”
And all of these things we are pattern matching to, they weren't wrong, they were just missing the unifying throughline. It's like discovering that willow bark helps headaches but not knowing it's the salicin (the compound our bodies convert to salicylic acid), so you create a protocol where people have to: chew the bark, while standing by a river, wearing natural fibers, during the full moon, after journaling about their relationship with trees. Creating complex associations like Skinner's pigeons showed in the last chapter. And then there are people who are like "I tried it at home without the river and it didn't work!" "You have to be barefoot ON the riverbank, the negative ions are crucial!" "Actually, it only works if you journal about your ANCESTRAL relationship with trees." And then someone creates a certification program for "Integrated Willow Bark Healing Practitioners" with 500 hours of training.
And doing this makes tons of sense if you really want to help people and you’ve seen that doing all this stuff is truly getting results. It’s not the fault of the program, they are just trying to systematize something that is bringing people results. Which brings us back to: everyone is doing the best they can with the knowledge they have, and then once we know better, we can do better. This is that moment.
So we now have all the pieces of the instruction manual for healing, mapped for us across the last century by dozens of learning and brain researchers. It’s time to put it all together.
Chapter Thirteen
Imagine a bowl sitting on a table with a marble inside. No matter where you drop the marble on the inside of the bowl—high up on the rim, halfway down, wherever—it will always roll down and settle at the bottom. The bottom of the bowl is what scientists call an attractor state: where systems naturally settle and stay. Push the marble and it'll roll up the side a bit, but then it comes right back down. This is the key feature of an attractor: the system tends to return to it even after being disturbed. The 'basin of attraction' is the entire inside surface of the bowl, all the places the marble could start from and still end up at that same stable point.
This started as a mathematical concept, formalized in the 1960s to describe how dynamical systems behave, and was brought into neuroscience by John Hopfield in 1982, when he showed that networks of neurons can store memories as attractor states, work that earned him a share of the 2024 Nobel Prize in Physics. Since the 1980s, researchers have been finding attractor dynamics everywhere in biology, in things like cells, immune responses, and brain states. In 2022, a team at the Kavli Institute in Norway captured this happening in real time; using electrodes they could actually watch the brain settle into stable patterns exactly the way the math predicted.
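To make the idea concrete, here is a minimal Hopfield-style sketch in Python. Everything about it (the two stored patterns, the network size, the number of corrupted bits) is an arbitrary illustration, not anything from the research above; the point is just to watch a noisy starting state roll to the bottom of a stored bowl.

    # Minimal Hopfield-network sketch: stored patterns act as attractors.
    import numpy as np

    # Two stored patterns: the bottoms of two "bowls."
    patterns = np.array([
        [1, -1, 1, -1, 1, -1, 1, -1],
        [1, 1, 1, 1, -1, -1, -1, -1],
    ])
    n = patterns.shape[1]

    # Hebbian outer-product learning rule, with no self-connections.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    # Drop the marble partway up the side: corrupt two bits of pattern 0.
    state = patterns[0].copy()
    state[:2] *= -1

    # Asynchronous updates: each neuron aligns with its weighted input.
    for _ in range(5):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print(np.array_equal(state, patterns[0]))  # True: settled into the stored attractor

However you corrupt the starting state (within limits), it settles back into whichever stored pattern's basin it started in, which is exactly the marble-and-bowl behavior described above.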
A small-scale attractor state most people have seen in their own lives: it's easier to maintain our weight than to change it. Once we sit at a certain weight for a while, the body treats that as its set point and resists changing it; the marble of that system wants to roll to the bottom of the bowl, maintaining its homeostasis.
This same dynamic plays out at the level of whole-body health. Any given system can have multiple stable states, and which one you end up in at any given moment depends on context (like with the context-based memories) and your reaction to the context. Remember when we talked about how people often feel better on vacation, and then snap back to illness the minute they return home? That's an example of this in action. You have a healthy attractor state, a stable, maintainable healthy state that can get activated when you get out of the context of your disease. And then another attractor state that contains your disease expression, one with multiple entry points that pull you back in when you return to your normal life. And this is why healing can feel so hard, and why medicine thinks it's incurable. Because we can never get rid of these attractor states; they are right about that piece. We learned that our brain's wiring works by adding new things, not replacing old things. So if you've been cancer free for 5 years and your spouse dies, it's possible, without conscious awareness, that you fall into that old basin and your cancer returns. Or you haven't had IBS in years, but something pulls you back in and all of a sudden it's like your entire symptom symphony has shown back up and started playing the slow march to decline your body knows so well.
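For the mathematically inclined, this two-bowl picture has a standard textbook formalization: a generic double-well potential, offered purely as an illustration, not as a model from the pathostasis research.

\[
\dot{x} = -\frac{dV}{dx}, \qquad V(x) = \left(x^2 - 1\right)^2,
\]

which gives \(\dot{x} = -4x(x^2 - 1)\). The system has two stable attractors at \(x = \pm 1\) (the bottoms of the two bowls) and an unstable fixed point at \(x = 0\) (the rim between them). Any starting state with \(x > 0\) rolls to \(+1\), and any state with \(x < 0\) rolls to \(-1\): two basins, one dividing ridge, and a system that always settles into one bowl or the other unless something pushes it over the rim.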
And because we haven’t known what this was, we have seen this as “relapse”. It feels helpless and hopeless and crushing. And with what we know about the PAG states, this feeling of helplessness can lead to an immobilization threat response, which then further cements your fate of being stuck in this state. But what is ALSO true is that the healthy bowl you’ve been in for years at that point, that bowl is available too.
And attractor states aren’t a new concept to us at this point, because they are actually exactly what we walked through in chapter 9 when we talked about neuroplasticity. Think about the Merzenich monkeys. When those monkeys spent weeks spinning discs with their fingertips, the cortical map for those fingers expanded dramatically—it literally overtook neighboring territory. The enlarged map becomes the attractor. The neural real estate dedicated to that activity grows so large that signals naturally flow there. Electricity follows the path of least resistance, and the most-used pathways have the lowest resistance. More synapses, more connections, more myelin wrapping those connections. The signal doesn't have to be "pulled" toward the attractor—it falls there because that's where the infrastructure is. And pathostasis is an attractor state; it’s a bowl your system has settled into. So given that, let’s put together everything we’ve uncovered in the research and see how we can build a new competing bowl.
Medicine says your disease is caused or worsened by poor diet, lack of exercise, inadequate sleep, and too much ‘stress’. Which means the answer becomes lose weight, eat better, sleep more, reduce stress by changing your lifestyle, and your condition will improve. Wellness says something similar. It says you need to optimize your inputs through clean eating, movement, sleep hygiene, stress management, supplements, and routines. So the message on both sides of the aisle seems to be: the problem is your inputs, and the solution is perfecting them.
But our ancestors had terrible inputs by the modern standards we have been optimizing for. They didn't have modern mattresses or shoes or even consistent shelter. They didn't have consistent food sources. They had actual predators and tribal warfare and high infant mortality rates. Our ancestors' stress, the lifestyle kind we blame for our poor health, was very high and persistent. But studies of hunter-gatherers show that they don't have epidemic levels of chronic disease like we do. Our bodies are actually robust, able to adapt to and tolerate extreme levels of imperfect inputs; they had to be. The animals that could handle the stress and imperfect inputs of our ancestors' world were the ones that survived to pass on their genes. Which means the difference isn't the inputs. And if you think about this logically, it just makes sense. If optimizing our macronutrients and exercise and sleep were the solution, don't you think more people would have actually healed using these methods? Even medicine doesn't seem to think those things will heal us, or it would have a model for curing chronic illness rather than just managing remission. What medicine seems to think is that we are on a death march to dysfunction and disease, and the only way to stave it off is to optimize everything perfectly.
But what we have discovered as we've looked across all the research is that optimizing inputs, when it does help, helps by pushing you up the sides of the bowl for a bit. Ultimately people slide back down, and from the outside it may look like they weren't trying hard enough, but the reality is that it was a losing battle from the beginning. Because the problem was never the inputs; the problem was being stuck in a pathostatic attractor state.
The way all animals' brains work is that they have a threat detector system that is always on. You can think of it like Alexa or Siri waiting for you to say their name. Our brains are scanning for danger in a similar way, it’s a passive system that is always watching, always listening. For 500 million years, this detection system only activated when an actual threat was present or suspected, and when that happened it would fire off your threat response and ready you for the danger it had perceived. But with the cognitive revolution came the ability for our brains to find threat cues through thoughts alone. Which means that we are always looking for danger and threat, and we can always find it. And the threat system can't tell the difference between a real threat and a thought about a threat, it just responds. So we can fire the full physiological cascade sitting safely on our couch, worrying about something that might never happen or that has already happened.
And even that doesn’t keep pathostatic chemicals on, or every single human would be sick. What actually keeps pathostasis running is when the threat activation itself becomes a threat to our systems. When we feel mobilized or immobilized, activated or shut down, and our wiring or our reactions see that threat state as a threat itself. That’s when it becomes a feedback loop that can keep our pathostatic chemicals on indefinitely.
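You can see the logic of that loop in a toy simulation. This is purely an illustration of the feedback structure just described; the decay and gain parameters are made-up numbers for the sketch, not measured values.

    # Toy model of a threat-activation feedback loop: activation decays on its
    # own, unless the system treats its own activation as a new threat, in
    # which case the response feeds back into the detector and sustains itself.
    def simulate(steps, feedback_gain, decay=0.3, trigger=1.0):
        activation = trigger  # an initial threat firing
        history = [activation]
        for _ in range(steps):
            activation = max(0.0, activation * (1 - decay) + feedback_gain * activation)
            history.append(round(activation, 3))
        return history

    print(simulate(10, feedback_gain=0.0))  # no loop: activation fades toward 0
    print(simulate(10, feedback_gain=0.3))  # loop: the response sustains itself

With no feedback, the cascade fires and fades, which is the ancestral design. Once the activation itself counts as a threat, the gain cancels out the decay and the chemistry stays on indefinitely.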
We’ve walked through all the learning, behavioral, and neuroscience research, so now let’s put it all together so we can see what it takes to build a new bowl. There are a few key components and taken as a whole it’s actually extremely simple. So simple that it will be tempting to pattern match it to other things that haven’t worked, but stay with me for a second as we walk through this. Based on the science we’ve laid out in the last 3 chapters, let’s summarize the primary pieces we need to both create a new bowl, and augment the threat loop wiring that already exists.
The first is dual activation of the implicit and explicit systems. You need to both feel the physical sensations of activation, either immobilizing or mobilizing, in your system, and allow it without treating it like a threat itself. While also consciously noticing that though this feels like a threat, it’s not actually threatening. Showing your body safety through your thoughts about the activation and your reaction to your physical state.
This has to happen across contexts. You have to experience this safety signaling of both implicit and explicit systems at the same time in all the contexts of your life where they occur. Doing it just in your bedroom or just in your car means that when it happens in your living room or at work, you will still fire off the entire cascade like you always have. It has to be updated everywhere. And intuitively this also means that you need to do it over time. You can’t cram or front load this learning, it needs to become the way that you respond to your system over time. This is a lifelong process.
And just to really drive this point home, in order for these loops to update, you have to be signalling safety, both physically and mentally, every time this activates. And this is not done by suppressing or trying to get rid of it, and this is not done by trying to run away from it or distract. You have to allow the activation to be there and then show your system that it’s not a threat.
Imagine for a minute that you grew up in the jungle and there was always the threat of tigers. Every time you see anything remotely resembling a tiger, your system fires off the whole cascade of chemicals and physiological responses. But now you’ve moved, you live in modern society, and you go to the zoo. When you approach the tiger enclosure and see the tiger, your body will naturally fire off the entire threat loop cascade. The chemicals, the physiological tension and reaction patterns, everything. This kind of response is what we call trauma in our modern world, we talk about the reactions from past experiences being stored in the body. We maybe decide we have a tiger trauma and we lived a hard life and now when we see tigers at the zoo we fire off this response that doesn’t match the current reality, and so this feels wrong and maladaptive and we create a complex narrative around it. But in actuality, our body is just doing what it learned to do. See a tiger, signal danger.
The way we work with this is not to go talk about it in a therapist's office, working through how hard and scary it was to be around tigers our whole lives, and lamenting that we can't go to the zoo like a regular mom with our kids. It's not to bring up the tiger trauma in the therapist's office and shake the activation out of our bodies. It's not to try to reason our way out of the activation with our friends or therapists, talking our bodies out of it. And it's not to go to the tiger exhibit and try to sit there for five minutes while our bodies freak out, white-knuckling an exposure that feels like we are in danger, but that we just happened not to die from this time. And again, all of these strategies made sense when we didn't know exactly what was working and we saw that these strategies worked sometimes for some people. It is good scientific instinct, in the absence of knowing which variable works and when, to preserve all the pieces. But now that we've mapped out the pieces across all the fields of science that weren't talking to each other, we can simplify and clarify what's needed.
What we actually need to do is go to the enclosure, and notice. Notice the activation in our bodies. Like a scientist, you can feel what’s going on in your body, what it feels like when those chemicals get pumped through your system, what your muscles do, what your thoughts do. What your body wants to do. Does it want to run? Does it want to hunker down and hide? And then, with the physical sensations online and allowed to be there, we also need to bring our conscious awareness online. We need to show our body through thoughts and actions, there is a tiger and that feels scary, and now I’m at the zoo, and there’s thick glass between us, and I’m not actually in danger. We can allow the feelings and maybe even touch the glass. The activation may get really strong, we may want to collapse, or run, or feel like we’re going to burst into tears. These are the things that would signal retraumatization to some therapists, but the reality is that our system is just doing what it should be doing, ramping up the response to get you to listen. But you can tame the beast inside that’s telling you to run by showing it that you don’t need to, that this is safe. This is how we update our threat loops. By taking the bull by the horns and showing it that this isn’t dangerous. And then doing it again the next time. And the next. Until the tiger-behind-glass stops firing the cascade at all. Because remember from the extinction science, likely our amygdala will always fire the threat response when we see the tiger, but through this learning, we can wire in safety so our prefrontal cortex fires the inhibition signal that stops the cascade before it takes over.
And by doing this with our knee jerk rumination or our threat reactions to our symptoms or to the myriad other things our bodies wire in responses to, we can inhibit the threat response across contexts, and turn off the engine driving the pathostatic chemistry in our bodies.
And if this sounds a little fantastical as a way to address chronic diseases, think for a minute about the paper rose lady again. Allergies aren't just some vague 'symptoms'; they aren't just stomach upset or fatigue or a headache or the other things people generally dismiss as not that serious. Anaphylactic shock, which is at its core just a really severe allergic reaction, can lead to death. Literal death. And people are having these responses to peanuts, and bee stings, and other things most people can experience with no problem at all. And medicine is actually able to treat these extreme reactions, and get people's bodies to stop reacting this way to these non-threatening stimuli. Because they are, ultimately, non-threatening. If people were biologically allergic to things, if that were coded into our DNA, then it wouldn't be something we could train ourselves out of. But it is. It's done all the time. The treatment, called allergen immunotherapy, works by giving you just a little bit, a trace amount, of the thing your body is reacting to. Then over time the amount is increased, and your body learns to stop mounting a response. This is neuroplastic conditioning in action, and it is used at major medical centers across the country, because they understand that it can help people's bodies stop reacting. What they may not understand is that all you are doing through this process is teaching the threat response to stand down and stop reacting to things that aren't dangerous.
Allergies are unique because the threat is "external," at least in part. Not for the paper rose lady, since that rose wasn't real, but in the sense that you can say 'I'm going to inject you with this threat and show your body that it's not actually dangerous.' We can't do that with internal processes: internal symptoms, thoughts, feelings, emotions, the things we spend most of our days reacting to as threats. So we have to show our body the same thing from the inside. Just like medicine does with these allergens, we have to show our body that these things aren't a threat and that it can stop mounting the response it has gotten so good at mounting. That's really all there is to it. And it's really simple, but it's definitely not always easy.
We have 500 million years of evolution telling us to do exactly what we’re doing. The neuroplasticity that changes our brains and helps get us stuck? That exists in animals too, as evidenced by the Merzenich monkeys and decades of other neuroplasticity research. The reason we are able to use animals to test drugs and reactions is because they are so similar to us. If rats were fundamentally different from humans they would have no utility as our test subjects.
So our brains are doing exactly what they were evolved to do over 500 million years; of course trying to get them to function differently is going to feel not just hard, but wrong. It IS wrong, evolutionarily. Our brain is trying to keep us safe from threats. Telling our brains to stand down, that’s unnatural. If you’re running toward a fire, your brain is going to tell you to stop. That’s normal. But firefighters are able to tell their brains: I have protective gear on, and I have a good reason to do this. I’m going to do it and it’s safe and ok. They learn to run into a fire. We can learn to turn off our rumination and stop responding to our physiological activation like it’s a life or death threat in a similar way. It’s not easy but neuroplasticity and our human cognition make it possible.
Another reason this is hard for people is that they often don’t know they’re activated. There are lots of people with heart disease or cancer or any number of other chronic diseases who would tell you that they aren’t “stressed”. And they would be right, they aren’t stressed in the way we’ve been taught to think about it. What we are talking about here with pathostasis is something different. And it can look two different ways. If you think about the PAG switch we talked about in chapter 10, there are two modes: the mobilizing PAG, which is the fight or flight response, the one most people think about when they think about our threat response. And the immobilizing PAG, which is freeze or fawn, and often looks much different than what we think of colloquially as stress.
Getting caught in a mobilizing feedback loop typically looks most like being stressed as we think about it. But it’s not to be confused with just feeling stressed about your emails or the upcoming holidays. This typically feels like being anxious, or ruminating all the time, or getting stuck thinking about stressful things. Often in this version, the feeling of activation itself becomes a threat, creating a feedback loop. And another key hallmark of this version once someone is sick is that symptoms themselves become triggers. So the pathostatic chemistry and stress feelings cause symptoms, and then the symptoms cause activation and the activation causes more stress and more chemicals, and this becomes a feedback loop that gets more and more entrenched the longer it runs. These people tend to know they’re activated or stressed, but because the cultural narrative is to do more self care (state changers) or eat better food (external inputs), they don’t think about it in terms of “I’m in pathostasis and I’m stuck in a loop.”
The immobilizing feedback loop looks a little different. From the outside this can look like someone who seems a bit emotionally flat and not very reactive. Someone who seems to just go with the flow, but not in a way that feels happy and engaged, more in a way that seems just…neutral. These people tend to have less intimate relationships, because their survival mode is to shut down emotional responses. The chemistry for these people is kept on because emotions feel like a threat, so every time they feel them their brain triggers the immobilizing PAG response, which keeps the pathostatic chemistry going and keeps them in shutdown. These people likely don’t think of themselves as stressed, and the people in their lives probably don’t think of them as stressed. They seem more neutral and mostly just fine. Not too happy, not too sad, just existing.
And people don't have to be just one or the other; one person can have both bowls. But the reason attachment theory took off the way it did is that people seem to default more often to one or the other. Most people have both, but often one is far more entrenched neurologically than the other, which means it pulls you in more often. And the more you're in it, the more that bowl gets wired in, and the stronger the pull becomes.
While everyone can do this work, it's important to acknowledge a few things. One is that the longer you've been sick and stuck, the harder it is. If you remember from chapter 9, the longer you're in pathostasis, the more your brain remodels to encourage those same patterns. Not only do you build Merzenich maps from practicing the same symptoms and conditions over and over, but under these conditions the research shows your amygdala grows bigger and more connected, more sensitized and ready to trigger your threat response, while the brakes on this response, the prefrontal cortex and the hippocampus, shrink and become less connected. Which means it's easier to maintain pathostasis and far harder to reverse it. This is why long covid isn't considered long covid until 3 months have passed: until that point, spontaneous recovery is still possible. Your attractor state of covid symptoms isn't yet so entrenched that you can't just jump out of it into another bowl. It's why depression shows better treatment outcomes the earlier you get treatment; patients who get treated early have nearly four times better odds of achieving remission than those who wait. Because the longer we wire in these reaction patterns, and the more our brain gets physically remodeled to promote these states, the harder it is. This is not to say that people who have been sick a long time can't get better; they have and can. It's just important to acknowledge that the timeline may be longer and the work may feel harder.
And the other big consideration that deserves addressing is that healing isn't equal opportunity for everyone. If someone is living in constant stress, working three jobs to make ends meet, dealing with abusive bosses, or living in unsafe conditions, healing will just be harder for them. If your life circumstances are contributing to your pathostatic load, and they are unchangeable, then this gets way, way harder. Sometimes when there is a solution like this that requires individual action, it can lead to blaming people for being sick, or to people feeling like they are being blamed. We want to avoid the trap of becoming unfalsifiable like Freud and saying that if you heal it proves the model, and if you don't you weren't trying hard enough. This is not easy, and it requires a certain amount of space to be able to do. Not everyone has that space. But for those who do, or who can create even a little, the path is clear.
Similar to how medicine saw the complex outputs of chronic diseases and assumed the mechanism needed to be equally complex, we have seen the complex output of our disease and suffering on an individual level and we think that it requires an equally complex solution. That if it took years to wound us, it should take years to recover. That we need to excavate the meaning, understand the origins, process the feelings, and integrate the parts. The complexity of the healing ritual can feel like it validates the significance of the pain.
But now with the science laid out for us, we can see what the active ingredients are, and we can simplify and speed up the healing process. And having the science also validates our suffering in a way we’ve never before been able to see clearly. Whether you have a “real” disease, a “psychological” disease, a “functional disorder” or something completely unnamed, we can now see that it’s all the same process: pathostatic chemicals changing your brain and your chemical composition and your physiology at a full body scale. Those changes causing predictable downstream dysfunction like vasoconstriction and glucose elevation and clearance failures that lead to all of the diseases that we have defined. This then creates attractor states that can run for years or decades. All of this is expected given our 500 million year old threat detection system as it’s now paired up with our human cognition. It’s all the same thing, and it’s no one’s fault. And now we have the simple map to what actually works to address it, rather than complex rituals or external optimization.
With all of this knowledge, it becomes clear that addressing illness and suffering demands a reframe. That healing is not, can not be, a destination. It is lifelong. And maybe, just maybe, that can be ok. Now that we have the answer.
Chapter Fourteen
So here we are.
We just did for chronic disease what germ theory did for infectious disease. We took medicine's own data - thousands of papers, hundreds of trials, every specialty - and stepped back to see what was invisible in pieces but almost painfully obvious as a whole. And when the dots are connected, simple truths fall out. We discovered that cancer, Parkinson's, diabetes and all chronic diseases, functional disorders, and psychiatric conditions are actually one disease. That hypoperfusion explains all six neurodegenerative diseases, something no one had thought to check. That cancer metastasis is explainable through our cells' natural unjamming mechanisms, and that placebo maps perfectly onto ANS function with odds of one in four million of happening by chance. We connected research that has previously never been connected.
It's hard to believe no one has seen it before now. But with what we have learned about pattern matching, and specialization, and siloed medicine, and institutional blindness, when you take all of that into account, it starts to make sense why it was missed.
History tells us this is what fundamental discoveries look like. And yet our institutions demand the opposite. When we are adding epicycles, or dark matter, or multifactorial explanations, history tells us we are on the wrong track, adding band-aids to a faulty theory. Darwin didn't need a separate theory for each species. We don't need a separate theory for each disease.
And if human suffering is not only not unknowable, but addressable, what kind of world opens up for us?
How many people are running these loops right now, stuck in pathostasis, unable to access what they actually have? Not just sick and suffering, but cut off from who they could be. Scientists who might see patterns no one else can see. Teachers who might better reach their students. Artists, engineers, parents, leaders - operating at a fraction of capacity because their threat detection systems won't turn off.
What happens when even a small percentage of them get out? What becomes possible when human beings aren't spending most of their energy fighting their own biology? When suffering isn't mysterious. When the patterns that keep people stuck can actually be addressed.
We're standing on the divide. From hundreds of separate mysteries with no unified explanation, and no way out, to one mechanism we finally understand.
The other side of history isn't a place without suffering. It's a place where suffering is no longer mysterious, where chronic illness is no longer inevitable, where being human in a body designed for another world is understood, accepted, and consciously navigated.
We have everything we need. The question now is whether we'll use it.
One disease after another was responding to this single molecule… M. J. Gonzalez-Rellan and D. J. Drucker, "The Expanding Benefits of GLP-1 Medicines," Cell Reports Medicine 6, no. 7 (2025): 102214, https://doi.org/10.1016/j.xcrm.2025.102214.
Within days of starting treatment, far before patients experienced any weight loss, their diabetes was improving.[1] P. Nadkarni, O. G. Chepurny, and G. G. Holz, "Regulation of Glucose Homeostasis by GLP-1," Progress in Molecular Biology and Translational Science 121 (2014): 23–65, https://doi.org/10.1016/B978-0-12-800101-1.00002-8. [2] L. A. Anderson, "How Long Does It Take for Ozempic to Work?", Drugs.com, last updated October 17, 2025, accessed December 8, 2025, https://www.drugs.com/medical-answers/long-ozempic-work-3543031/.
Within weeks, their blood pressure was dropping. M. I. del Olmo-Garcia and J. F. Merino-Torres, "GLP-1 Receptor Agonists and Cardiovascular Disease in Patients with Type 2 Diabetes," Journal of Diabetes Research (2018): 4020492, https://doi.org/10.1155/2018/4020492.
Parkinson's patients at healthy weights saw their tremors improve. C. Hölscher, "Glucagon-like Peptide-1 Class Drugs Show Clear Protective Effects in Parkinson's and Alzheimer's Disease Clinical Trials: A Revolution in the Making?", Neuropharmacology 253 (2024): 109952, https://doi.org/10.1016/j.neuropharm.2024.109952.
Alzheimer's patients showed slower cognitive decline. P. Edison et al., "Liraglutide in Mild to Moderate Alzheimer's Disease: A Phase 2b Clinical Trial," Nature Medicine (2025), https://doi.org/10.1038/s41591-025-04106-7.
People struggling with alcohol addiction had far fewer cravings. C. S. Hendershot, M. P. Bremmer, M. B. Paladino, et al., "Once-Weekly Semaglutide in Adults With Alcohol Use Disorder: A Randomized Clinical Trial," JAMA Psychiatry 82, no. 4 (2025): 395–405, https://doi.org/10.1001/jamapsychiatry.2024.4789.
By 2024, GLP-1s were in clinical trials for over a dozen diseases across nearly every medical specialty… [1] A. Beaney and I. Maragkou, "GLP1-RAs Beyond Obesity and Diabetes: Is the Sky the Limit?", Clinical Trials Arena, February 26, 2024, accessed December 8, 2025, https://www.clinicaltrialsarena.com/features/glp1ra-beyond-obesity-diabetes-where-is-the-limit/. [2] E. Valencia-Rincón, R. Rai, V. Chandra, and E. A. Wellberg, "GLP-1 Receptor Agonists and Cancer: Current Clinical Evidence and Translational Opportunities for Preclinical Research," Journal of Clinical Investigation 135, no. 21 (2025), https://doi.org/10.1172/JCI194743.
1 in 10 of the women who came in for care was dying, whereas in the ward staffed by midwives only about 1 in 30 was dying. I. Loudon, "Ignaz Phillip Semmelweis' Studies of Death in Childbirth," Journal of the Royal Society of Medicine 106, no. 11 (2013): 461–463, https://doi.org/10.1177/0141076813507844.
Heroic medicine history. E. Sakalauskaitė-Juodeikienė, "'Heroic' Medicine in Neurology: A Historical Perspective," European Journal of Neurology 31, no. 11 (2024): e16135, https://doi.org/10.1111/ene.16135.
By the 21st century, chronic diseases were causing 75% of all deaths worldwide… World Health Organization, "Noncommunicable Diseases," WHO Fact Sheet, September 25, 2025, accessed December 8, 2025, https://www.who.int/news-room/fact-sheets/detail/noncommunicable-diseases.
When The Rome Foundation, the organization that sets IBS diagnostic criteria, updated them in 2016… D. A. Drossman and W. L. Hasler, "Rome IV—Functional GI Disorders: Disorders of Gut-Brain Interaction," Gastroenterology 150, no. 6 (2016): 1257–1261, https://doi.org/10.1053/j.gastro.2016.03.035.
Biomarkers...AUC of 0.89...good to excellent range for diagnostic testing. Z. Mujagic et al., "A Novel Biomarker Panel for Irritable Bowel Syndrome and the Application in the General Population," Scientific Reports 6 (2016): 26420, https://doi.org/10.1038/srep26420.
They surveyed healthy people to find out how often they had symptoms, then declared the top 10% to be the disease threshold...The change cut IBS prevalence in half overnight. O. S. Palsson, W. E. Whitehead, M. A. L. van Tilburg, et al., "Development and Validation of the Rome IV Diagnostic Questionnaire for Adults," Gastroenterology 150, no. 6 (2016): 1481–1491, https://doi.org/10.1053/j.gastro.2016.02.014.
Fibromyalgia has biomarkers with around 85% diagnostic accuracy… S. M. Nuguri, K. V. Hackshaw, S. de Lamo Castellvi, et al., "Portable Mid-Infrared Spectroscopy Combined with Chemometrics to Diagnose Fibromyalgia and Other Rheumatologic Syndromes Using Rapid Volumetric Absorptive Microsampling," Molecules 29, no. 2 (2024): 413, https://doi.org/10.3390/molecules29020413.
Chronic Fatigue Syndrome's biomarkers diagnose the disease with a whopping 96% accuracy. E. Hunter, H. Alshaker, O. Bundock, et al., "Development and Validation of Blood-Based Diagnostic Biomarkers for Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) Using EpiSwitch® 3-Dimensional Genomic Regulatory Immuno-Genetic Profiling," Journal of Translational Medicine 23 (2025): 1048, https://doi.org/10.1186/s12967-025-07203-w.
Medical training even has a term for the patients with these conditions: 'heartsink patients.' A. Moscrop, "'Heartsink' Patients in General Practice: A Defining Paper, Its Impact, and Psychodynamic Potential," British Journal of General Practice 61, no. 586 (2011): 346–348, https://doi.org/10.3399/bjgp11X572490.
In 1984 in the affluent resort town of Incline Village, Nevada, on the shores of Lake Tahoe, dozens of residents suddenly fell ill… "Chronic Fatigue Syndrome," Newsweek, November 11, 1990, https://www.newsweek.com/chronic-fatigue-syndrome-205712.
Peterson and Cheney were ridiculed for taking it seriously and for continuing to research what would later be recognized as Chronic Fatigue Syndrome. "Chronic Fatigue Syndrome," Newsweek, November 11, 1990, https://www.newsweek.com/chronic-fatigue-syndrome-205712.
From 1984 to 1992, an unprecedented wave of these clusters was reported across North America. B. M. Hyde, J. A. Goldstein, and P. H. Levine, eds., The Clinical and Scientific Basis of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (Ogdensburg, NY: Nightingale Research Foundation, 1992), cited in "Outbreaks," American ME and CFS Society, https://ammes.org/outbreaks/.
In 1983, their abstract was one of only 11 rejected out of 67 submissions to the Australian Gastroenterology Society meeting. B. Marshall, interview by Norman Swan, "Professor Barry Marshall, Gastroenterologist," Australian Academy of Science, 2008, accessed December 8, 2025, https://www.science.org.au/learning/general-audience/history/interviews-australian-scientists/professor-barry-marshall.
Their paper linking the bacteria to ulcers faced extensive delays at The Lancet. S. Pincock, "Nobel Prize Winners Robin Warren and Barry Marshall," The Lancet 366, no. 9495 (2005): 1429, https://doi.org/10.1016/S0140-6736(05)67587-3.
Tagamet and Zantac were among the world's biggest-selling prescription drugs at the time. E. R. Berndt, M. K. Kyle, and D. C. Ling, "The Long Shadow of Patent Expiration: Generic Entry and Rx-to-OTC Switches," in Scanner Data and Price Indexes (Chicago: National Bureau of Economic Research / University of Chicago Press, 2003), https://www.nber.org/system/files/chapters/c9737/c9737.pdf.
Senior gastroenterologists had made a policy decision that the theory was "too new and radical" and they wouldn't accept papers on it. B. Marshall, interview by Norman Swan, "Professor Barry Marshall, Gastroenterologist," Australian Academy of Science, 2008, accessed December 8, 2025, https://www.science.org.au/learning/general-audience/history/interviews-australian-scientists/professor-barry-marshall.
Wolf published his findings in the Journal of Clinical Investigation. S. Wolf, "Effects of Suggestion and Conditioning on the Action of Chemical Agents in Human Subjects—The Pharmacology of Placebos," Journal of Clinical Investigation 29, no. 1 (1950): 100–109, https://doi.org/10.1172/JCI102225.
When he'd presented earlier findings on stress and gastric function to the Gastroenterological Society? They…laughed. One case, they said. How could anyone draw conclusions from a single patient? I. Oransky, "Stewart Wolf," The Lancet 366, no. 9499 (2005): 1768, https://doi.org/10.1016/S0140-6736(05)67717-3.
Over 250 papers were published analyzing this single anomalous signal at 750 GeV. R. Garisto, "Editorial: Theorists React to the CERN 750 GeV Diphoton Data," Physical Review Letters 116 (2016): 150001, https://doi.org/10.1103/PhysRevLett.116.150001.
Beecher published his findings in a paper titled "The Powerful Placebo" in JAMA…35% of these patients experienced "satisfactory relief" from placebo alone…1,082 patients…15 clinical trials. H. K. Beecher, "The Powerful Placebo," Journal of the American Medical Association 159, no. 17 (1955): 1602–1606, https://doi.org/10.1001/jama.1955.02960340022006.
Ted Kaptchuk, the director of Harvard's Program in Placebo Studies, has described Beecher's framing as treating placebo like 'the devil' in clinical trials. C. Stoddart, "How the Placebo Effect Went Mainstream," Knowable Magazine, June 27, 2023, https://knowablemagazine.org/content/article/mind/2023/how-placebo-effect-went-mainstream.
Then in 1962 Thalidomide, a drug that was being prescribed to pregnant women for morning sickness, caused over 10,000 birth defects worldwide. In the US alone, roughly 20,000 patients had received the drug in unregulated clinical trials. C. Tantibanchachai, "US Regulatory Response to Thalidomide (1950-2000)," Embryo Project Encyclopedia (April 1, 2014), https://hdl.handle.net/10776/7733.
Hróbjartsson and Gøtzsche published a systematic review analyzing 130 placebo-controlled trials across 40 different clinical conditions…appeared in the New England Journal of Medicine. A. Hróbjartsson and P. C. Gøtzsche, "Is the Placebo Powerless? An Analysis of Clinical Trials Comparing Placebo with No Treatment," New England Journal of Medicine 344, no. 21 (2001): 1594–1602, https://doi.org/10.1056/NEJM200105243442106.
Wampold and colleagues found the effect sizes were essentially identical…An effect size of 0.28 could be called "small and clinically insignificant" or "robust and meaningful" depending on your framing. B. E. Wampold, T. Minami, S. C. Tierney, T. W. Baskin, and K. S. Bhati, "The Placebo Is Powerful: Estimating Placebo Effects in Medicine and Psychotherapy from Randomized Clinical Trials," Journal of Clinical Psychology 61, no. 7 (2005): 835–854, https://doi.org/10.1002/jclp.20129.
In 2005, Harald Walach and his colleagues…analyzed over a hundred clinical trials…correlation of 0.78…roughly 60% of the treatment outcomes could be explained by what was happening in both groups. H. Walach, C. Sadaghiani, C. Dehm, and D. Bierman, "The Therapeutic Effect of Clinical Trials: Understanding Placebo Response Rates in Clinical Trials—A Secondary Analysis," BMC Medical Research Methodology 5, no. 26 (2005), https://doi.org/10.1186/1471-2288-5-26.
The NIH…is spending roughly $47 billion annually. Cancer research gets $7.3 billion. Alzheimer's gets $3.8 billion. Diabetes gets $1.1 billion. [1] National Institutes of Health, "Funding for Various Research, Condition, and Disease Categories (RCDC)," NIH RePORT, table published June 17, 2025, accessed December 2025, https://report.nih.gov/funding/categorical-spending. [2] National Institutes of Health, "RCDC Categories At a Glance," NIH Grants & Funding, accessed December 2025, https://grants.nih.gov/funding/explore-data-on-funded-projects/rcdc-categories-at-a-glance.
Placebo Response Theory analysis (blood pressure placebo effects, Parkinson's tremor/dopamine, 22 functions across 11 body systems, p = 2.4 × 10⁻⁷). A. L. Caputo, "A Unifying Theory of Placebo Responsiveness: Autonomic Nervous System Control as the Organizing Principle," Relearn Research, December 2025.
Your body produces new cancer cells every day. Hundreds of billions of them...such as cells that are damaged, that divided incorrectly, or that got infected by pathogens. [1] R. Sender and R. Milo, "The Distribution of Cellular Turnover in the Human Body," Nature Medicine 27 (2021): 45–48, https://doi.org/10.1038/s41591-020-01182-9. [2] B. Alberts, A. Johnson, J. Lewis, et al., Molecular Biology of the Cell, 4th ed. (New York: Garland Science, 2002), "Cancer as a Microevolutionary Process," https://www.ncbi.nlm.nih.gov/books/NBK26891/.
This is why the places with the highest cell turnover - the gut, the skin, the blood - have the highest cancer rates...Places with little turnover, like the heart, see almost no cancer. [1] C. Tomasetti and B. Vogelstein, "Variation in Cancer Risk Among Tissues Can Be Explained by the Number of Stem Cell Divisions," Science 347, no. 6217 (2015): 78–81, https://doi.org/10.1126/science.1260825. [2] National Cancer Institute, "Matters of the Heart: Why Are Cardiac Tumors So Rare?", February 10, 2009, https://www.cancer.gov/types/metastatic-cancer/research/cardiac-tumors.
The Rockefeller Foundation built him his own building specifically designed to his requirements: forty rooms, no offices or conference rooms, just labs and a library. [1] A. M. Otto, "Warburg Effect(s)—A Biographical Sketch of Otto Warburg and His Impacts on Tumor Metabolism," Cancer & Metabolism 4 (2016): 5, https://doi.org/10.1186/s40170-016-0145-9. [2] Encyclopedia.com, "Otto Heinrich Warburg," accessed December 8, 2025, https://www.encyclopedia.com/people/medicine/medicine-biographies/otto-heinrich-warburg.
As a Jewish and almost certainly gay man...Hitler himself, during WWII, personally intervened and reinstated him. Hitler's mother had died of cancer... S. Apple, Ravenous: Otto Warburg, the Nazis, and the Search for the Cancer-Diet Connection, 1st ed. (New York: Liveright Publishing Corporation, 2021).
By the mid-2000s, multiple studies showed that metabolic stress, inflammatory signaling, and oxidative stress could all activate HIF-1 and drive cells into Warburg metabolism without any oxygen deprivation at all. A. Palazon et al., "HIF Transcription Factors, Inflammation, and Immunity," Immunity 41, no. 4 (2014): 518–528, https://doi.org/10.1016/j.immuni.2014.09.008.
But then in 1974, Osias Stutman at Memorial Sloan Kettering seemingly disproved this theory...he ended up finding that the immune-deficient mice got cancer at the same rate as normal mice. O. Stutman, "Tumor Development after 3-Methylcholanthrene in Immunologically Deficient Athymic-Nude Mice," Science 183, no. 4124 (1974): 534–536, https://doi.org/10.1126/science.183.4124.534.
"I'm sitting there getting ready for my talk and this elder gentleman walks in and sits in the front row."... "The first hand up was Stutman's"... "You know, it is remarkable what you can do now at the turn of the century that we couldn't do in the 1970s." A. N. Brodsky, "The Rules of the Game: Dr. Robert Schreiber, Interferon Gamma, and Our Quest to Cure Cancer," Cancer Research Institute, accessed December 8, 2025, https://www.cancerresearch.org/stories/scientists/robert-schreiber-phd.
In 2010, a clinical trial of a new treatment for patients with advanced skin cancer (melanoma)...In several of the patients in the trial, signs of their cancer disappeared completely after treatment. F. S. Hodi, S. J. O'Day, D. F. McDermott, et al., "Improved Survival with Ipilimumab in Patients with Metastatic Melanoma," New England Journal of Medicine 363, no. 8 (2010): 711–723, https://doi.org/10.1056/NEJMoa1003466.
Then in 2015, researchers found that epithelial cells have a built-in migration program called 'unjamming.' J.-A. Park et al., "Unjamming and Cell Shape in the Asthmatic Airway Epithelium," Nature Materials 14 (2015): 1040–1048, https://doi.org/10.1038/nmat4357.
Single rogue cancer cells rarely cause cancer to spread in the body; it's almost always clusters of traveling cancer cells that are the catalyst for spread. N. Aceto et al., "Circulating Tumor Cell Clusters Are Oligoclonal Precursors of Breast Cancer Metastasis," Cell 158, no. 5 (2014): 1110–1122, https://doi.org/10.1016/j.cell.2014.07.013.
Modern studies show a sharp peak of metastasis within 6-12 months after surgery, which suggests a triggering event rather than gradual progression. J. A. Krall et al., "The Systemic Response to Surgery Triggers the Outgrowth of Distant Immune-Controlled Tumors in Mouse Models of Dormancy," Science Translational Medicine 10, no. 436 (2018): eaan3464, https://doi.org/10.1126/scitranslmed.aan3464.
In 2004, a team at the John Wayne Cancer Center studied over 600 women and found that those who underwent needle biopsy before having their sentinel lymph nodes examined showed 50% more lymph node spread... N. M. Hansen, X. Ye, B. J. Grube, and A. E. Giuliano, "Manipulation of the Primary Breast Tumor and the Incidence of Sentinel Node Metastases from Invasive Breast Cancer," Archives of Surgery 139, no. 6 (2004): 634–640, https://doi.org/10.1001/archsurg.139.6.634.
A larger study by Peters-Engl and colleagues found the same association Hansen did - a 37% increased risk - but after statistical adjustments, concluded the finding wasn't significant. C. Peters-Engl, P. Konstantiniuk, C. Tausch, et al., "The Impact of Preoperative Breast Biopsy on the Risk of Sentinel Lymph Node Metastases: Analysis of 2502 Cases from the Austrian Sentinel Node Biopsy Study Group," British Journal of Cancer 91, no. 10 (2004): 1782–1786, https://doi.org/10.1038/sj.bjc.6602205.
A third study by Moore in 2004, conducted at Memorial Sloan Kettering, showed a direct dose-response: no biopsy (1.2%) → FNA (small needle) (3.0%) → core needle (bigger needle) (3.8%) → surgical biopsy (4.6%), with p = 0.002. K. H. Moore, H. T. Thaler, L. K. Tan, P. I. Borgen, and H. S. Cody III, "Immunohistochemically Detected Tumor Cells in the Sentinel Lymph Nodes of Patients with Breast Carcinoma: Biologic Metastasis or Procedural Artifact?", Cancer 100, no. 5 (2004): 929–934, https://doi.org/10.1002/cncr.20035.
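A minimal sketch of how a trend p-value like the one above can be computed, using a Cochran-Armitage trend test. The group sizes below are hypothetical (1,000 women per group at roughly the quoted rates), not the study's raw data, and the invasiveness scoring is an illustrative assumption:

```python
# Cochran-Armitage trend test on HYPOTHETICAL counts (not the study's data):
# does lymph node spread rise monotonically with biopsy invasiveness?
import numpy as np
from scipy.stats import norm

n = np.array([1000, 1000, 1000, 1000])      # women per group (hypothetical)
r = np.array([12, 30, 38, 46])              # node-positive counts (~1.2%, 3.0%, 3.8%, 4.6%)
x = np.array([0, 1, 2, 3])                  # invasiveness scores: none, FNA, core, surgical

p_bar = r.sum() / n.sum()                   # pooled event rate under the null
u = (r * x).sum() - p_bar * (n * x).sum()   # score-weighted observed minus expected
var_u = p_bar * (1 - p_bar) * ((n * x**2).sum() - (n * x).sum() ** 2 / n.sum())
z = u / np.sqrt(var_u)
p_value = 2 * norm.sf(abs(z))               # two-sided p-value for the trend
print(f"z = {z:.2f}, p = {p_value:.2g}")
```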
Miller et al. conducted a 25-year randomized controlled trial of nearly 90,000 women and found that those randomized to breast cancer screening had the exact same death rates as those who were not screened. A. B. Miller, C. Wall, C. J. Baines, P. Sun, T. To, and S. A. Narod, "Twenty Five Year Follow-up for Breast Cancer Incidence and Mortality of the Canadian National Breast Screening Study: Randomised Screening Trial," BMJ 348 (2014): g366, https://doi.org/10.1136/bmj.g366.
In 2009, Gatenby published a paper proposing...adaptive therapy. R. A. Gatenby, A. S. Silva, R. J. Gillies, and B. R. Frieden, "Adaptive Therapy," Cancer Research 69, no. 11 (2009): 4894–4903, https://doi.org/10.1158/0008-5472.CAN-08-3658.
In 2017, Gatenby published results from a prostate cancer trial...double patient survival time, while cutting drug use in half. The paper appeared in Nature Communications. J. Zhang, J. J. Cunningham, J. S. Brown, and R. A. Gatenby, "Integrating Evolutionary Dynamics into Treatment of Metastatic Castrate-Resistant Prostate Cancer," Nature Communications 8, no. 1816 (2017), https://doi.org/10.1038/s41467-017-01968-5.
Malignancies cluster in epithelial tissues (lung, GI tract, breast), while benign tumors that don't metastasize occur in tissues without this migration program. G. M. Cooper, "The Development and Causes of Cancer," in The Cell: A Molecular Approach, 2nd ed. (Sunderland, MA: Sinauer Associates, 2000), https://www.ncbi.nlm.nih.gov/books/NBK9963/.
But in autoimmunity, a few things happen simultaneously: Treg production drops, meaning there are fewer Tregs overall, and the remaining Tregs function less efficiently...And there is something called IL-2, which feeds both Tregs and autoreactive immune cells. F. Harris, Y. A. Berdugo, and T. Tree, "IL-2-based Approaches to Treg Enhancement," Clinical and Experimental Immunology 211, no. 2 (2023): 149–163, https://doi.org/10.1093/cei/uxac105.
About 25% of people with one autoimmune condition develop at least one more - and for some conditions, the risk of developing a second is 4 to 10+ times higher than the general population. [1] M. Cojocaru, I. M. Cojocaru, and I. Silosi, "Multiple Autoimmune Syndrome," Maedica 5, no. 2 (2010): 132–134, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3150011/. [2] K. Boelaert et al., "Prevalence and Relative Risk of Other Autoimmune Diseases in Subjects with Autoimmune Thyroid Disease," The American Journal of Medicine 123, no. 2 (2010): 183.e1–183.e9, https://doi.org/10.1016/j.amjmed.2009.06.030.
Autoimmune diseases, which as a whole affect 1 in 10 people. N. Conrad et al., "Incidence, Prevalence, and Co-occurrence of Autoimmune Disorders over Time and by Age, Sex, and Socioeconomic Status: A Population-Based Cohort Study of 22 Million Individuals in the UK," The Lancet 401, no. 10391 (2023): 1878–1890, https://doi.org/10.1016/S0140-6736(23)00457-9.
Over half of US adults have multiple diagnosed health conditions. K. B. Watson et al., "Trends in Multiple Chronic Conditions Among US Adults, By Life Stage, Behavioral Risk Factor Surveillance System, 2013–2023," Preventing Chronic Disease 22 (2025): 240539, https://doi.org/10.5888/pcd22.240539.
Older adults with 5+ conditions fill an average of 50 prescriptions, see 14 different doctors, and make 37 office visits per year. R. M. Benjamin, "Multiple Chronic Conditions: A Public Health Challenge," Public Health Reports 125, no. 5 (2010): 626–627, https://doi.org/10.1177/003335491012500502.
About 80% of people with Parkinson's disease will develop dementia during the course of the disease. C. Counsell et al., "The Incidence, Baseline Predictors, and Outcomes of Dementia in an Incident Cohort of Parkinson's Disease and Controls," Journal of Neurology 269 (2022): 4288–4298, https://doi.org/10.1007/s00415-022-11058-2.
If you have depression you are about 40% more likely to experience a stroke. J.-Y. Dong et al., "Depression and Risk of Stroke: A Meta-analysis of Prospective Studies," Stroke 43, no. 1 (2012): 32–37, https://doi.org/10.1161/STROKEAHA.111.630871.
At autopsy, about 60% of Alzheimer's patients have Lewy bodies - the supposed 'hallmark' of Parkinson's. R. L. Hamilton, "Lewy Bodies in Alzheimer's Disease: A Neuropathological Review of 145 Cases Using α-Synuclein Immunohistochemistry," Brain Pathology 10, no. 3 (2000): 378–384, https://doi.org/10.1111/j.1750-3639.2000.tb00269.x.
MS patients have about twice the risk of developing Alzheimer's disease, and among younger adults the risk of dementia is over 4 times higher. [1] E. B. Cho et al., "The Risk of Dementia in Multiple Sclerosis and Neuromyelitis Optica Spectrum Disorder," Frontiers in Neuroscience 17 (2023): 1214652, https://doi.org/10.3389/fnins.2023.1214652. [2] E. Mahmoudi et al., "Diagnosis of Alzheimer's Disease and Related Dementia Among People with Multiple Sclerosis: Large Cohort Study, USA," Multiple Sclerosis and Related Disorders 57 (2022): 103351, https://doi.org/10.1016/j.msard.2021.103351.
When researchers analyzed over 10 million patients to look at the pattern of comorbidities, they identified well-defined disease clusters. T. Beaney et al., "Identifying Multi-Resolution Clusters of Diseases in Ten Million Patients with Multimorbidity in Primary Care in England," Communications Medicine 4, article 102 (2024), https://doi.org/10.1038/s43856-024-00529-4.
Pathostasis chemical table values. [1] A. Kumar et al., "Stress: Neurobiology, Consequences and Management," Journal of Pharmacy & Bioallied Sciences 5, no. 2 (2013): 91–97, https://doi.org/10.4103/0975-7406.111818. [2] M. Popoli et al., "The Stressed Synapse: The Impact of Stress and Glucocorticoids on Glutamate Transmission," Nature Reviews Neuroscience 13 (2012): 22–37, https://doi.org/10.1038/nrn3138. [3] K. Sharma et al., "Stress-Induced Diabetes: A Review," Cureus 14, no. 9 (2022): e29142, https://doi.org/10.7759/cureus.29142. [4] A. J. Horn et al., "Severe PTSD Is Marked by Reduced Oxytocin and Elevated Vasopressin," Comprehensive Psychoneuroendocrinology 19 (2024): 100236, https://doi.org/10.1016/j.cpnec.2024.100236. [5] T. C. Theoharides, "The Impact of Psychological Stress on Mast Cells," Annals of Allergy, Asthma & Immunology 125, no. 4 (2020): 388–392, https://doi.org/10.1016/j.anai.2020.07.007. [6] B. J. Jones, T. Tan, and S. R. Bloom, "Minireview: Glucagon in Stress and Energy Homeostasis," Endocrinology 153, no. 3 (2012): 1049–1054, https://doi.org/10.1210/en.2011-1979. [7] A. Pilozzi, C. Bhatt, and X. Huang, "The Role of β-Endorphin in Chronic Stress and Associated Diseases," International Journal of Molecular Sciences 22, no. 1 (2021): 114, https://doi.org/10.3390/ijms22010114.
Hypoperfusion in Alzheimer's Disease. F. J. Wolters, H. I. Zonneveld, A. Hofman, A. van der Lugt, P. J. Koudstaal, M. W. Vernooij, and M. A. Ikram, "Cerebral Perfusion and the Risk of Dementia: A Population-Based Study," Circulation 136, no. 8 (2017): 719–728, https://doi.org/10.1161/CIRCULATIONAHA.117.027448.
Hypoperfusion in Parkinson's Disease. L. Pelizzari, S. Di Tella, F. Rossetto, M. M. Laganà, N. Bergsland, A. Pirastru, M. Meloni, R. Nemni, and F. Baglio, "Parietal Perfusion Alterations in Parkinson's Disease Patients Without Dementia," Frontiers in Neurology 11 (2020): 562, https://doi.org/10.3389/fneur.2020.00562.
Hypoperfusion in ALS. S. Schreiber, J. Bernal, P. Arndt, F. Schreiber, P. Müller, L. Morton, R. C. Braun-Dullaeus, M. D. C. Valdés-Hernández, R. Duarte, J. M. Wardlaw, S. G. Meuth, G. Mietzner, S. Vielhaber, I. R. Dunay, A. Dityatev, S. Jandke, and H. Mattern, "Brain Vascular Health in ALS Is Mediated through Motor Cortex Microvascular Integrity," Cells 12, no. 6 (2023): 957, https://doi.org/10.3390/cells12060957.
Hypoperfusion in Huntington's Disease. [1] N. P. Rocha, O. Charron, G. D. Colpo, L. B. Latham, J. E. Patino, E. F. Stimming, L. Freeman, and A. L. Teixeira, "Cerebral Blood Flow Is Associated with Markers of Neurodegeneration in Huntington's Disease," Parkinsonism & Related Disorders 102 (2022): 79–85, https://doi.org/10.1016/j.parkreldis.2022.07.024. [2] T. Vasilkovska, S. Salajeghe, V. Vanreusel, et al., "Longitudinal Alterations in Brain Perfusion and Vascular Reactivity in the zQ175DN Mouse Model of Huntington's Disease," Journal of Biomedical Science 31, no. 1 (2024): 37, https://doi.org/10.1186/s12929-024-01028-3.
Hypoperfusion in Multiple Sclerosis. M. D'haeseleer, S. Hostenbach, I. Peeters, S. El Sankari, G. Nagels, J. De Keyser, and M. B. D'hooghe, "Cerebral Hypoperfusion: A New Pathophysiologic Concept in Multiple Sclerosis?", Journal of Cerebral Blood Flow & Metabolism 35, no. 9 (2015): 1406–1410, https://doi.org/10.1038/jcbfm.2015.131.
Hypoperfusion in Frontotemporal Dementia. M. Pasternak, S. S. Mirza, N. Luciw, et al., "Longitudinal Cerebral Perfusion in Presymptomatic Genetic Frontotemporal Dementia: GENFI Results," Alzheimer's & Dementia 20, no. 5 (2024): 3525–3542, https://doi.org/10.1002/alz.13750.
In 2004 when Klein et al. removed 28-44% of abdominal fat via liposuction, they found no change in insulin sensitivity, no change in inflammatory markers, and no change in blood pressure, glucose, insulin, or lipid concentrations. S. Klein, L. Fontana, V. L. Young, A. R. Coggan, C. Kilo, B. W. Patterson, and B. S. Mohammed, "Absence of an Effect of Liposuction on Insulin Action and Risk Factors for Coronary Heart Disease," New England Journal of Medicine 350, no. 25 (2004): 2549–2557, https://doi.org/10.1056/NEJMoa033179.
They then tracked these same people for 1.5 to 4 years after the liposuction and these same results persisted. B. S. Mohammed, S. Cohen, D. Reeds, V. L. Young, and S. Klein, "Long-term Effects of Large-Volume Liposuction on Metabolic Risk Factors for Coronary Heart Disease," Obesity 16, no. 12 (2008): 2648–2651, https://doi.org/10.1038/oby.2008.418.
He had collected data from 22 countries in an effort to prove his idea, but only 6 of them supported his hypothesis. A. Keys, "Atherosclerosis: A Problem in Newer Public Health," Journal of the Mount Sinai Hospital, New York 20, no. 2 (1953): 118–139.
Two epidemiologists published a devastating critique pointing out that Keys had data from 22 countries but only used 6, and that he was studying a "tenuous association" rather than proof of causality. J. Yerushalmy and H. E. Hilleboe, "Fat in the Diet and Mortality from Heart Disease; A Methodologic Note," New York State Journal of Medicine 57, no. 14 (1957): 2343–2354.
Keys was also the person who named BMI Body Mass Index...one statistician in the 1830s who was trying to define "the average man". G. Eknoyan, "Adolphe Quetelet (1796-1874)—The Average Man and Indices of Obesity," Nephrology Dialysis Transplantation 23, no. 1 (2008): 47–51, https://doi.org/10.1093/ndt/gfm517.
They studied over 9000 people in controlled settings...from 1968-1973...They just…never published it. It sat in the lead investigator Ivan Frantz's basement until his son found it in 2011. C. E. Ramsden, D. Zamora, S. Majchrzak-Hong, K. R. Faurot, S. K. Broste, R. P. Frantz, J. M. Davis, A. Ringel, C. M. Suchindran, and J. R. Hibbeln, "Re-evaluation of the Traditional Diet-Heart Hypothesis: Analysis of Recovered Data from Minnesota Coronary Experiment (1968-73)," BMJ 353 (2016): i1246, https://doi.org/10.1136/bmj.i1246.
Cholesterol is so vital that 80% of it is produced "in house" by your body. Harvard Health Publishing, "How It's Made: Cholesterol Production in Your Body," Harvard Health, accessed January 5, 2026, https://www.health.harvard.edu/heart-health/how-its-made-cholesterol-production-in-your-body.
Statins also reduce inflammation and improve endothelial function. A. Oesterle, U. Laufs, and J. K. Liao, "Pleiotropic Effects of Statins on the Cardiovascular System," Circulation Research 120, no. 1 (2017): 229–243, https://doi.org/10.1161/CIRCRESAHA.116.308537.
When they added ezetimibe (a non-statin cholesterol lowering drug) to statins, it lowered cholesterol even further but showed no additional mortality benefit. S. Zhan, M. Tang, F. Liu, P. Xia, M. Shu, and X. Wu, "Ezetimibe for the Prevention of Cardiovascular Disease and All-Cause Mortality Events," Cochrane Database of Systematic Reviews 11, no. 11 (2018): CD012502, https://doi.org/10.1002/14651858.CD012502.pub2.
Cortisol specifically promotes visceral fat accumulation. E. S. Epel et al., "Stress and Body Shape: Stress-Induced Cortisol Secretion Is Consistently Greater Among Women With Central Fat," Psychosomatic Medicine 62, no. 5 (2000): 623–632, https://doi.org/10.1097/00006842-200009000-00005.
In acute stress, GLP-1 levels rise as part of the acute stress chemical cascade. [1] M. K. Holt and S. Trapp, "The Physiological Role of the Brain GLP-1 System in Stress," Cogent Biology 2, no. 1 (2016): 1229086, https://doi.org/10.1080/23312025.2016.1229086. [2] Y. Diz-Chaves et al., "Glucagon-Like Peptide-1 (GLP-1) in the Integration of Neural and Endocrine Responses to Stress," Nutrients 12, no. 11 (2020): 3304, https://doi.org/10.3390/nu12113304.
In pathostasis, GLP-1s are depleted. S. Ghosal, B. Myers, and J. P. Herman, "Role of Central Glucagon-like Peptide-1 in Stress Regulation," Physiology & Behavior 122 (2013): 201–207, https://doi.org/10.1016/j.physbeh.2013.04.003.
After 12-15 months, people stop losing weight, and if they then go off the drug, they quickly gain most of the weight back. J. P. H. Wilding et al., "Weight Regain and Cardiometabolic Effects after Withdrawal of Semaglutide: The STEP 1 Trial Extension," Diabetes, Obesity and Metabolism 24, no. 8 (2022): 1553–1564, https://doi.org/10.1111/dom.14725.
The way that medicine measures neuronal death is by staining certain markers within cells to see how many are left. It turns out this actually measures whether cells are metabolically active, not whether they exist at all. M. Ghasemi, T. Turnbull, S. Sebastian, and I. Kempson, "The MTT Assay: Utility, Limitations, Pitfalls, and Interpretation in Bulk and Single-Cell Analysis," International Journal of Molecular Sciences 22, no. 23 (2021): 12827, https://doi.org/10.3390/ijms222312827.
Documented cases of ALS reversal, where patients who met full diagnostic criteria — including EMG-confirmed denervation — recovered completely. D. Harrison, P. Mehta, M. A. van Es, E. Stommel, V. E. Drory, B. Nefussy, L. H. van den Berg, J. Crayle, and R. Bedlack, "ALS Reversals: Demographics, Disease Characteristics, Treatments, and Co-morbidities," Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration 19, no. 7-8 (2018): 495–499, https://doi.org/10.1080/21678421.2018.1457059.
Why placebo can trigger dopamine activity in Parkinson's patients. R. de la Fuente-Fernández, T. J. Ruth, V. Sossi, M. Schulzer, D. B. Calne, and A. J. Stoessl, "Expectation and Dopamine Release: Mechanism of the Placebo Effect in Parkinson's Disease," Science 293, no. 5532 (2001): 1164–1166, https://doi.org/10.1126/science.1060937.
The original study that figured out how predictive this was tracked 17,421 people over decades of their lives. V. J. Felitti, R. F. Anda, D. Nordenberg, et al., "Relationship of Childhood Abuse and Household Dysfunction to Many of the Leading Causes of Death in Adults: The Adverse Childhood Experiences (ACE) Study," American Journal of Preventive Medicine 14, no. 4 (1998): 245–258, https://doi.org/10.1016/S0749-3797(98)00017-8.
People with six or more ACEs died nearly 20 years earlier on average than those without ACEs. D. W. Brown, R. F. Anda, H. Tiemeier, V. J. Felitti, V. J. Edwards, J. B. Croft, and W. H. Giles, "Adverse Childhood Experiences and the Risk of Premature Mortality," American Journal of Preventive Medicine 37, no. 5 (2009): 389–396, https://doi.org/10.1016/j.amepre.2009.06.021.
Most of the clinical trials that dictate our care, across the entire medical model we think of as healthcare today, enroll an average of 65 patients and last an average of 6-12 weeks. G. K. Gresham, S. Ehrhardt, J. L. Meinert, L. J. Appel, and C. L. Meinert, "Characteristics and Trends of Clinical Trials Funded by the National Institutes of Health Between 2005 and 2015," Clinical Trials 15, no. 1 (2018): 65–74, https://doi.org/10.1177/1740774517727742.
Larger phase 3 trials capping at around 3,000. U.S. Food and Drug Administration, "Step 3: Clinical Research," https://www.fda.gov/patients/drug-development-process/step-3-clinical-research.
The term 'allostasis' was coined in 1988 by Sterling and Eyer. P. Sterling and J. Eyer, "Allostasis: A New Paradigm to Explain Arousal Pathology," in Handbook of Life Stress, Cognition and Health, edited by S. Fisher and J. Reason, 629–649 (John Wiley & Sons, 1988).
In one study they tracked 738 adults over 5 years, and tested 12 allostatic biomarkers...for every single additional biomarker that was out of range at baseline, people had 35% higher odds of developing type 2 diabetes, 21% higher odds of cardiovascular disease, and 15-24% higher odds of physical impairment. A. López-Cepero, A. C. McClain, M. C. Rosal, K. L. Tucker, and J. Mattei, "Examination of the Allostatic Load Construct and Its Longitudinal Association with Health Outcomes in the Boston Puerto Rican Health Study," Psychosomatic Medicine 84, no. 1 (2022): 104–115, https://doi.org/10.1097/PSY.0000000000001013.
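Worked arithmetic for the claim above, assuming the per-biomarker odds ratio of 1.35 compounds multiplicatively across biomarkers (the standard reading of a per-unit odds ratio from a logistic model):

```latex
\mathrm{OR}(k) = 1.35^{k}, \qquad
\mathrm{OR}(3) = 1.35^{3} \approx 2.46, \qquad
\mathrm{OR}(6) = 1.35^{6} \approx 6.05
```

So a person with six out-of-range biomarkers at baseline would carry roughly six times the odds of developing type 2 diabetes compared with someone with none.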
To date, 2,465 articles (as of the time of this publication) informed by the allostatic load model have expanded stress science theory, research, and clinical perspectives. R. P. Juster, T. Seeman, B. S. McEwen, et al., "Advancing the Allostatic Load Model: From Theory to Therapy," Psychoneuroendocrinology 152 (2023): 106267, https://doi.org/10.1016/j.psyneuen.2023.106267.
In 1993 they discovered that Huntington's disease is caused by a specific genetic mutation - a CAG repeat expansion in the huntingtin gene. The Huntington's Disease Collaborative Research Group, "A Novel Gene Containing a Trinucleotide Repeat That Is Expanded and Unstable on Huntington's Disease Chromosomes," Cell 72, no. 6 (1993): 971–983, https://doi.org/10.1016/0092-8674(93)90585-E.
When researchers finally did screen the general population 23 years later in a 2016 study of over 7,000 people, they found that approximately 1 in 400 individuals carry the expanded CAG repeat associated with Huntington's disease. C. Kay, J. A. Collins, Z. Miedzybrodzka, et al., "Huntington Disease Reduced Penetrance Alleles Occur at High Frequency in the General Population," Neurology 87, no. 3 (2016): 282–288, https://doi.org/10.1212/WNL.0000000000002858.
They found 10 people aged 67-95 with 36-39 repeats who showed no signs of Huntington's. D. C. Rubinsztein, J. Leggo, R. Coles, et al., "Phenotypic Characterization of Individuals with 30-40 CAG Repeats in the Huntington Disease (HD) Gene Reveals HD Cases with 36 Repeats and Apparently Normal Elderly Individuals with 36-39 Repeats," American Journal of Human Genetics 59, no. 1 (1996): 16–22.
Up to 86% of people with 36 repeats - the low end of the range - never got the disease in their lifetime. D. R. Langbehn, R. R. Brinkman, D. Falush, J. S. Paulsen, and M. R. Hayden, "A New Model for Prediction of the Age of Onset and Penetrance for Huntington's Disease Based on CAG Length," Clinical Genetics 65, no. 4 (2004): 267–277, https://doi.org/10.1111/j.1399-0004.2004.00241.x.
Even at 40-41 repeats, traditionally called the "full penetrance" range where the disease should be inevitable, they documented asymptomatic carriers. R. R. Brinkman, M. M. Mezei, J. Theilmann, E. Almqvist, and M. R. Hayden, "The Likelihood of Being Affected with Huntington Disease by a Particular Age, for a Specific CAG Size," American Journal of Human Genetics 60, no. 5 (1997): 1202–1210.
For example they say schizophrenia has 80% heritability. P. F. Sullivan, K. S. Kendler, and M. C. Neale, "Schizophrenia as a Complex Trait: Evidence from a Meta-Analysis of Twin Studies," Archives of General Psychiatry 60, no. 12 (2003): 1187–1192, https://doi.org/10.1001/archpsyc.60.12.1187.
In the largest study, only 15% of identical twin pairs were both affected, with other studies ranging up to 28%. R. Hilker, D. Helenius, B. Fagerlund, A. Skytthe, K. Christensen, T. M. Werge, M. Nordentoft, and B. Glenthøj, "Heritability of Schizophrenia and Schizophrenia Spectrum Based on the Nationwide Danish Twin Register," Biological Psychiatry 83, no. 6 (2018): 492–498, https://doi.org/10.1016/j.biopsych.2017.08.017.
Then they used a formula... D. S. Falconer, "The Inheritance of Liability to Certain Diseases, Estimated from the Incidence Among Relatives," Annals of Human Genetics 29 (1965): 51–76, https://doi.org/10.1111/j.1469-1809.1965.tb00500.x.
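The formula is Falconer's liability-threshold method. A sketch of the logic, under the model's assumptions: twin concordance rates are first converted into correlations in an assumed underlying liability, r_MZ for identical twins and r_DZ for fraternal twins, and heritability is then estimated as

```latex
h^2 = 2\,(r_{MZ} - r_{DZ})
```

The 80% figure is therefore a model-derived estimate of this h^2, not a direct measurement of how often both twins actually develop the illness.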
Heritability calculation critique. [1] E. F. Torrey, "Did the Human Genome Project Affect Research on Schizophrenia?", Psychiatry Research 333 (2024): 115691, https://doi.org/10.1016/j.psychres.2023.115691. [2] A. Aftab, "Contextualizing the Heritability of Schizophrenia," Psychiatry at the Margins, January 23, 2024, https://www.psychiatrymargins.com/p/contextualizing-the-heritability.
In 1871, the Association of Medical Superintendents of American Institutions for the Insane...was invited by the American Medical Association to merge into one umbrella organization, but they declined. J. Curwen, History of the Association of Medical Superintendents of American Institutions for the Insane, from 1844 to 1884 (Warren, PA: E. Cowan & Co., 1885), https://lccn.loc.gov/2006573411.
By 1894, the separation had become so complete that American neurologist Silas Weir Mitchell stood before a gathering of asylum physicians and delivered a scathing critique. S. W. Mitchell, "Address Before the Fiftieth Annual Meeting of the American Medico-Psychological Association, Held in Philadelphia, May 16th, 1894," American Journal of Psychiatry 151, no. 6 Suppl (1994): 28–36, https://doi.org/10.1176/ajp.151.6.28.
A landmark meta-analysis that included unpublished trial data by Kirsch in 2008 found that the difference between antidepressants and placebo was clinically negligible. I. Kirsch, B. J. Deacon, T. B. Huedo-Medina, A. Scoboria, T. J. Moore, and B. T. Johnson, "Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration," PLoS Medicine 5, no. 2 (2008): e45, https://doi.org/10.1371/journal.pmed.0050045.
Antidepressants fail for 30-40% of people. R. S. McIntyre, M. Alsuwaidan, B. T. Baune, et al., "Treatment-Resistant Depression: Definition, Prevalence, Detection, Management, and Investigational Interventions," World Psychiatry 22, no. 3 (2023): 394–412, https://doi.org/10.1002/wps.21120.
The average amount of time people stay on antidepressants is 5 years, with 25% staying on them for 10 years or more. L. A. Pratt, D. J. Brody, and Q. Gu, "Antidepressant Use Among Persons Aged 12 and Over: United States, 2011-2014," NCHS Data Brief, no. 283 (2017): 1–8.
COVID placebo study. Haas, J. W., F. L. Bender, S. Ballou, et al. "Frequency of Adverse Events in the Placebo Arms of COVID-19 Vaccine Trials: A Systematic Review and Meta-analysis." JAMA Network Open 5, no. 1 (2022): e2143955, https://doi.org/10.1001/jamanetworkopen.2021.43955.
Walter Kennedy coined the term "nocebo." Kennedy, W. P. "The Nocebo Reaction." Medical World 95 (September 1961): 203–205.
Half of the people had reductions in their tremor of at least 70%. Barbagallo, G., R. Nisticò, B. Vescio, et al. "The Placebo Effect on Resting Tremor in Parkinson's Disease: An Electrophysiological Study." Parkinsonism & Related Disorders 52 (2018): 17–23, https://doi.org/10.1016/j.parkreldis.2018.03.012.
Another trial performing fake surgery showed significant and sustained improvements. Ko, J. H., A. Feigin, P. J. Mattis, et al. "Network Modulation Following Sham Surgery in Parkinson's Disease." Journal of Clinical Investigation 124, no. 8 (2014): 3656–3666, https://doi.org/10.1172/JCI75073.
15 out of 20 patients in one study showed no tremors at all for minutes after guided imagery. Schlesinger, I., O. Benyakov, I. Erikh, S. Suraiya, and Y. Schiller. "Parkinson's Disease Tremor Is Diminished with Relaxation Guided Imagery." Movement Disorders 24, no. 14 (2009): 2059–2062, https://doi.org/10.1002/mds.22671.
After mindfulness training, motor scores improve significantly. Pickut, B., S. Vanneste, M. A. Hirsch, et al. "Mindfulness Training among Individuals with Parkinson's Disease: Neurobehavioral Effects." Parkinson's Disease 2015 (2015): 816404, https://doi.org/10.1155/2015/816404.
Not one produced noticeable improvement in the symptoms that mattered to patients. Espay, A. J., K. P. Kepp, and K. Herrup. "Lecanemab and Donanemab as Therapies for Alzheimer's Disease: An Illustrated Perspective on the Data." eNeuro 11, no. 7 (2024): ENEURO.0319-23.2024, https://doi.org/10.1523/ENEURO.0319-23.2024.
They found brains full of plaques in the autopsies of people without dementia or Alzheimer's. Aizenstein, H. J., R. D. Nebes, J. A. Saxton, et al. "Frequent Amyloid Deposition Without Significant Cognitive Impairment Among the Elderly." Archives of Neurology 65, no. 11 (2008): 1509–1517, https://doi.org/10.1001/archneur.65.11.1509.
Brains with minimal plaques in those with severe Alzheimer's. Monsell, S. E., W. A. Kukull, A. E. Roher, et al. "Characterizing Apolipoprotein E ε4 Carriers and Noncarriers With the Clinical Diagnosis of Mild to Moderate Alzheimer Dementia and Minimal β-Amyloid Peptide Plaques." JAMA Neurology 72, no. 10 (2015): 1124–1131, https://doi.org/10.1001/jamaneurol.2015.1721.
In the 1890s Cajal had introduced a concept he called "neuronal plasticity." G. Berlucchi and H. A. Buchtel, "Neuronal Plasticity: Historical Roots and Evolution of Meaning," Experimental Brain Research 192, no. 3 (2009): 307–319, https://doi.org/10.1007/s00221-008-1611-6.
The most famous case was reported back in 1886, when a woman had a full asthmatic attack after seeing an artificial rose. MacKenzie, J. N. "The Production of the So-Called 'Rose Cold' by Means of an Artificial Rose, with Remarks and Historical Notes." American Journal of the Medical Sciences 91 (1886): 45–56.
In 1962, a researcher named Turnbull proposed that maybe asthma was actually a learned response. Turnbull, J. W. "Asthma Conceived as a Learned Response." Journal of Psychosomatic Research 6 (1962): 59–70, https://doi.org/10.1016/0022-3999(62)90025-9.
They could condition guinea pigs to have asthma attacks in response to completely neutral stimuli. Ottenberg, P., M. Stein, J. Lewis, and C. Hamilton. "Learned Asthma in the Guinea Pig." Psychosomatic Medicine 20, no. 5 (1958): 395–400, https://doi.org/10.1097/00006842-195809000-00007.
Children sensitized to four or more allergens by age two have more than four times the risk of having asthma by age ten. Havstad, S. L., A. R. Sitarik, H. Kim, E. M. Zoratti, D. Ownby, C. C. Johnson, and G. Wegienka. "Increased Risk of Asthma at Age 10 Years for Children Sensitized to Multiple Allergens." Annals of Allergy, Asthma & Immunology 127, no. 4 (2021): 441–445.e1, https://doi.org/10.1016/j.anai.2021.04.028.
Over half of children with severe eczema go on to develop asthma. S. K. Bantz, Z. Zhu, and T. Zheng, "The Atopic March: Progression from Atopic Dermatitis to Allergic Rhinitis and Asthma," Journal of Clinical and Cellular Immunology 5, no. 2 (2014): 202, https://doi.org/10.4172/2155-9899.1000202.
First, he mapped the hand area in the brain of an adult monkey... Then he amputated the monkey's middle finger. Merzenich, M. M., R. J. Nelson, M. P. Stryker, M. S. Cynader, A. Schoppmann, and J. M. Zook. "Somatosensory Cortical Map Changes Following Digit Amputation in Adult Monkeys." Journal of Comparative Neurology 224, no. 4 (1984): 591–605, https://doi.org/10.1002/cne.902240408.
He sewed two of the monkey's fingers together so that both fingers moved as one. Clark, S. A., T. Allard, W. M. Jenkins, and M. M. Merzenich. "Receptive Fields in the Body-Surface Map in Adult Cortex Defined by Temporally Correlated Inputs." Nature 332 (1988): 444–445, https://doi.org/10.1038/332444a0.
They taught a monkey to touch a spinning disk with one finger. Jenkins, W. M., M. M. Merzenich, M. T. Ochs, T. Allard, and E. Guíc-Robles. "Functional Reorganization of Primary Somatosensory Cortex in Adult Owl Monkeys After Behaviorally Controlled Tactile Stimulation." Journal of Neurophysiology 63, no. 1 (1990): 82–104, https://doi.org/10.1152/jn.1990.63.1.82.
Then in 1991, a researcher named Timothy Pons worked with a group of macaque monkeys. Pons, T. P., P. E. Garraghty, A. K. Ommaya, J. H. Kaas, E. Taub, and M. Mishkin. "Massive Cortical Reorganization After Sensory Deafferentation in Adult Macaques." Science 252, no. 5014 (1991): 1857–1860, https://doi.org/10.1126/science.1843843.
Research on London taxi drivers revealed that the part of their brain devoted to spatial navigation, the hippocampus, was enlarged compared to that of bus drivers. Maguire, E. A., K. Woollett, and H. J. Spiers. "London Taxi Drivers and Bus Drivers: A Structural MRI and Neuropsychological Analysis." Hippocampus 16, no. 12 (2006): 1091–1101, https://doi.org/10.1002/hipo.20233.
When people learned to juggle, their visual-motor areas reorganized. Draganski, B., C. Gaser, V. Busch, G. Schuierer, U. Bogdahn, and A. May. "Neuroplasticity: Changes in Grey Matter Induced by Training." Nature 427, no. 6972 (2004): 311–312, https://doi.org/10.1038/427311a.
In 2025, a randomized controlled trial tested one of these programs on fibromyalgia patients. Norouzi, E., M. Pournazari, T. Ahmadi Joybari, P. Sufivand, S. Asar, A. J. Bratty, and H. Khazaie. "Two Non-Pharmacological Interventions, Amygdala and Insula Retraining (AIR) and Physical Activity, Are Both Significantly More Effective Than Standard Medication in Improving Symptoms of Fibromyalgia." Current Psychology (2025), https://doi.org/10.1007/s12144-025-07808-w.
In the 1980s Susan Nolen-Hoeksema began studying what she called "ruminative thinking." Nolen-Hoeksema, S., B. E. Wisco, and S. Lyubomirsky. "Rethinking Rumination." Perspectives on Psychological Science 3, no. 5 (2008): 400–424, https://doi.org/10.1111/j.1745-6924.2008.00088.x.
In 2006 Brosschot and his colleagues explained how and why rumination is causing these outcomes. Brosschot, J. F., W. Gerin, and J. F. Thayer. "The Perseverative Cognition Hypothesis: A Review of Worry, Prolonged Stress-Related Physiological Activation, and Health." Journal of Psychosomatic Research 60, no. 2 (2006): 113–124, https://doi.org/10.1016/j.jpsychores.2005.06.074.
In 2014, Peggy Zoccola's team at Ohio University brought healthy young women into the lab. Zoccola, P. M., W. S. Figueroa, E. M. Rabideau, A. Woody, and F. Benencia. "Differential Effects of Poststressor Rumination and Distraction on Cortisol and C-Reactive Protein." Health Psychology 33, no. 12 (2014): 1606–1609, https://doi.org/10.1037/hea0000019.
A major meta-analysis published in Psychological Bulletin in 2016. Ottaviani, C., J. F. Thayer, B. Verkuil, A. Lonigro, B. Medea, A. Couyoumdjian, and J. F. Brosschot. "Physiological Concomitants of Perseverative Cognition: A Systematic Review and Meta-Analysis." Psychological Bulletin 142, no. 3 (2016): 231–259, https://doi.org/10.1037/bul0000036.
In 1997, Kubzansky and her colleagues published a landmark prospective study where they followed 1,759 men. Kubzansky, L. D., I. Kawachi, A. Spiro III, S. T. Weiss, P. S. Vokonas, and D. Sparrow. "Is Worrying Bad for Your Heart? A Prospective Study of Worry and Coronary Heart Disease in the Normative Aging Study." Circulation 95, no. 4 (1997): 818–824, https://doi.org/10.1161/01.cir.95.4.818.
In 2002, researchers at the National Centre for Biological Sciences published a study... Vyas and colleagues subjected rats to chronic immobilization stress. Vyas, A., R. Mitra, B. S. Shankaranarayana Rao, and S. Chattarji. "Chronic Stress Induces Contrasting Patterns of Dendritic Remodeling in Hippocampal and Amygdaloid Neurons." Journal of Neuroscience 22, no. 15 (2002): 6810–6818, https://doi.org/10.1523/JNEUROSCI.22-15-06810.2002.
Chronic stress causes dendritic retraction here too. Radley, J. J., A. B. Rocher, M. Miller, W. G. M. Janssen, C. Liston, P. R. Hof, B. S. McEwen, and J. H. Morrison. "Repeated Stress Induces Dendritic Spine Loss in the Rat Medial Prefrontal Cortex." Cerebral Cortex 16, no. 3 (2006): 313–320, https://doi.org/10.1093/cercor/bhi104.
A 2023 study showed that stress elevates complement C3, which tags synapses for elimination, and microglia then engulf them. Wang, J., H. S. Chen, H. H. Li, H. J. Wang, R. S. Zou, X. J. Lu, J. Wang, et al. "Microglia-Dependent Excessive Synaptic Pruning Leads to Cortical Underconnectivity and Behavioral Abnormality Following Chronic Social Defeat Stress in Mice." Brain, Behavior, and Immunity 109 (2023): 23–36, https://doi.org/10.1016/j.bbi.2022.12.019.
In 2011, J. Paul Hamilton's team at Stanford used functional MRI to examine the brains of people who ruminated frequently. Hamilton, J. P., D. J. Furman, C. Chang, M. E. Thomason, E. Dennis, and I. H. Gotlib. "Default-Mode and Task-Positive Network Activity in Major Depressive Disorder: Implications for Adaptive and Maladaptive Rumination." Biological Psychiatry 70, no. 4 (2011): 327–333, https://doi.org/10.1016/j.biopsych.2011.02.003.
4-9 times larger than the gold standard pharmaceutical treatment of antidepressants. Cipriani, A., T. A. Furukawa, G. Salanti, et al. "Comparative Efficacy and Acceptability of 21 Antidepressant Drugs for the Acute Treatment of Adults with Major Depressive Disorder: A Systematic Review and Network Meta-Analysis." The Lancet 391, no. 10128 (2018): 1357–1366, https://doi.org/10.1016/S0140-6736(17)32802-7.
The lamprey, a jawless fish, has been around for 500 million years... An amygdala-like structure for detecting danger, a periaqueductal gray (PAG) for coordinating response, and a hypothalamus for triggering stress chemistry. Olson, I., S. M. Suryanarayana, B. Robertson, and S. Grillner. "Griseum Centrale, a Homologue of the Periaqueductal Gray in the Lamprey." IBRO Reports 2 (2017): 24–30.
This takes about 12 milliseconds. Quirk, G. J., J. C. Repa, and J. E. LeDoux. "Fear Conditioning Enhances Short-Latency Auditory Responses of Lateral Amygdala Neurons: Parallel Recordings in the Freely Behaving Rat." Neuron 15, no. 5 (1995): 1029–1039.
Researchers documented this with dogs given inescapable shocks. Seligman, M. E. P., and S. F. Maier. "Failure to Escape Traumatic Shock." Journal of Experimental Psychology 74, no. 1 (1967): 1–9.
Brain size had already reached modern levels by 300,000 years ago, but brain shape continued evolving, with the frontal and parietal regions expanding into their modern globular form between about 100,000 and 35,000 years ago. Neubauer, S., J.-J. Hublin, and P. Gunz. "The Evolution of Modern Human Brain Shape." Science Advances 4, no. 1 (2018): eaao5961, https://doi.org/10.1126/sciadv.aao5961.
Harari in Sapiens described this as an almost overnight phenomenon around 70,000 years ago. Harari, Yuval Noah. Sapiens: A Brief History of Humankind. Harper, 2015.
Researchers have found their long-term cortisol levels synchronize with their owners'. Sundman, A. S., E. Van Poucke, A. C. Svensson Holm, Å. Faresjö, E. Theodorsson, P. Jensen, and L. S. V. Roth. "Long-Term Stress Levels Are Synchronized in Dogs and Their Owners." Scientific Reports 9 (2019): 7391, https://doi.org/10.1038/s41598-019-43851-x.
Cats in the same households, less selected for human emotional attunement, show no such synchronization. Wojtaś, J., M. Karpiński, and P. Czyżowski. "Are Hair Cortisol Levels of Humans, Cats, and Dogs from the Same Household Correlated?" Animals 12, no. 11 (2022): 1472, https://doi.org/10.3390/ani12111472.
Dogs get cancer at almost double the rate of cats. [1] Haskell Valley Veterinary Clinic. "7 Cancer Warning Signs Every Pet Owner Should Know." https://haskellvalleyvet.com/7-cancer-warning-signs-every-pet-owner-should-know/. [2] All Care Veterinary Network. "Pet Cancer Awareness Month." https://allcareveterinarynetwork.com/articles/pet-cancer.
Almost one in two captive wolves die of cancer while wild wolves almost never do. [1] Modiano, J. F., et al. "Comparative Genetics of Canine and Human Cancers." Veterinary Sciences 12, no. 9 (2025): 875, https://www.mdpi.com/2306-7381/12/9/875. [2] Seeley, K. E., M. M. Garner, W. T. Waddell, and K. N. Wolf. "A Survey of Diseases in Captive Red Wolves (Canis rufus), 1997–2012." Journal of Zoo and Wildlife Medicine 47, no. 1 (2016): 83–90.
Wolpe could take someone with a snake phobia and, through systematic desensitization... eliminate the phobia in weeks instead of years. Wolpe, Joseph. Psychotherapy by Reciprocal Inhibition. Stanford University Press, 1958.
What he kept finding instead was that his depressed patients had consistent patterns of negative thoughts. [1] Beck, A. T. "Thinking and Depression." Archives of General Psychiatry 9 (1963): 324–333. [2] Beck, A. T., A. J. Rush, B. F. Shaw, and G. Emery. Cognitive Therapy of Depression. Guilford Press, 1979.
Short-term, typically delivered over 12-16 weeks. Standard clinical practice. Harvard Health Publishing. "Cognitive Behavioral Therapy." https://www.health.harvard.edu/mental-health/cognitive-behavioral-therapy.
Meta-analyses were already showing all therapies worked about equally well when compared fairly. [1] Luborsky, L., B. Singer, and L. Luborsky. "Comparative Studies of Psychotherapies: Is It True That 'Everybody Has Won and All Must Have Prizes'?" Archives of General Psychiatry 32 (1975): 995–1008. [2] Luborsky, L., et al. "The Dodo Bird Verdict Is Alive and Well—Mostly." Clinical Psychology: Science and Practice 9, no. 1 (2002): 2–12.
In 1992, psychiatrist Judith Herman published "Trauma and Recovery"... "the first task of recovery is to establish the survivor's safety—this task takes precedence over all others, for no other therapeutic work can possibly succeed if safety has not been adequately secured." Herman, Judith Lewis. Trauma and Recovery: The Aftermath of Violence—From Domestic Abuse to Political Terror. Basic Books, 1992.
In a 2024 survey of 348 clinicians, therapists reported high levels of fear about retraumatizing their patients. But they didn't collectively agree on what being 'retraumatized' even means. Purnell, L., K. Chiu, G. E. Bhutani, N. Grey, S. El-Leithy, and R. Meiser-Stedman. "Clinicians' Perspectives on Retraumatisation During Trauma-Focused Interventions for Post-Traumatic Stress Disorder: A Survey of UK Mental Health Professionals." Journal of Anxiety Disorders 106 (2024): 102913, https://doi.org/10.1016/j.janxdis.2024.102913.
The "Dodo Bird Verdict," named after the scene in Alice in Wonderland where the Dodo declares "everyone has won and all must have prizes," has been replicated in meta-analysis after meta-analysis. [1] Rosenzweig, S. "Some Implicit Common Factors in Diverse Methods of Psychotherapy." American Journal of Orthopsychiatry 6 (1936): 412–415. [2] Luborsky, L., B. Singer, and L. Luborsky. "Comparative Studies of Psychotherapies: Is It True That 'Everybody Has Won and All Must Have Prizes'?" Archives of General Psychiatry 32 (1975): 995–1008. [3] Luborsky, L., et al. "The Dodo Bird Verdict Is Alive and Well—Mostly." Clinical Psychology: Science and Practice 9, no. 1 (2002): 2–12. [4] Wampold, B. E., and Z. E. Imel. The Great Psychotherapy Debate (2nd ed). Routledge, 2015.
The study that launched this idea followed 488 elderly people, of whom 101 eventually developed dementia, and only 17 of whom regularly did crossword puzzles... their mental decline appeared to accelerate about 2.5 years later than in those who weren't doing crosswords. Pillai, J. A., C. B. Hall, D. W. Dickson, H. Buschke, R. B. Lipton, and J. Verghese. "Association of Crossword Puzzle Participation with Memory Decline in Persons Who Develop Dementia." Journal of the International Neuropsychological Society 17, no. 6 (2011): 1006–1013.
In 1920, an American psychologist named John Watson set out to prove that all human behavior was learned, not inherited... Little Albert. Watson, J. B., and R. Rayner. "Conditioned Emotional Reactions." Journal of Experimental Psychology 3, no. 1 (1920): 1–14.
In one experiment, Skinner gave pigeons food at random intervals regardless of what they did. And the pigeons started developing elaborate "superstitious" behaviors - one would spin in circles, another would thrust its head into corners, another would do a pendulum motion. Skinner, B. F. "'Superstition' in the Pigeon." Journal of Experimental Psychology 38 (1948): 168–172.
In the 1970-80s, Mark Bouton at the University of Vermont wanted to know if we could reverse the kind of conditioning that Watson did to Little Albert... when he extinguished a fear completely in one location and then tested the animal in a different room, the fear came back immediately. [1] Bouton, M. E., and R. C. Bolles. "Contextual Control of the Extinction of Conditioned Fear." Learning and Motivation 10 (1979): 445–466. [2] Bouton, M. E. "Context, Ambiguity, and Unlearning: Sources of Relapse After Behavioral Extinction." Biological Psychiatry 52 (2002): 976–986.
The explicit memory system, centered in the hippocampus... the implicit memory system, running through the amygdala. LeDoux, J. E. The Emotional Brain: The Mysterious Underpinnings of Emotional Life. Simon & Schuster, 1996.
In the 1960s researchers found that when people learned something while they were drunk, they could remember it when they were drunk again better than when they were sober. Goodwin, D. W., B. Powell, D. Bremer, H. Hoine, and J. Stern. "Alcohol and Recall: State-Dependent Effects in Man." Science 163 (1969): 1358–1360.
In the 1980s a researcher named Gordon Bower at Stanford hypnotized subjects to feel either happy or sad, then had them learn lists of words... People who learned words while sad remembered them better when sad again. Bower, G. H. "Mood and Memory." American Psychologist 36 (1981): 129–148.
Pain states, and mood states, and even specific body positions all created state-dependent memory networks. [1] Pearce, S., S. Isherwood, D. Hrouda, P. Richardson, A. Erskine, and J. Skinner. "Memory and Pain: Tests of Mood Congruity and State Dependent Learning in Experimentally Induced and Clinical Pain." Pain 43 (1990): 187–193. [2] Dijkstra, K., M. P. Kaschak, and R. A. Zwaan. "Body Posture Facilitates Retrieval of Autobiographical Memories." Cognition 102 (2007): 139–149.
A researcher named Siegel showed that drug tolerance is situation-specific: conditioned to the usual setting, it protects against overdose there, but it disappears in a new context. In one case a man who received the same morphine dose four times a day for four weeks, always in his bedroom, died from an overdose when his son gave him the exact same dose in his living room. [1] Siegel, S., R. E. Hinson, M. D. Krank, and J. McCully. "Heroin 'Overdose' Death: The Contribution of Drug-Associated Environmental Cues." Science 216 (1982): 436–437. [2] Siegel, S. "Pavlovian Conditioning and Heroin Overdose: Reports by Overdose Victims." Bulletin of the Psychonomic Society 22 (1984): 428–430. [3] Siegel, S., and D. W. Ellsworth. "Pavlovian Conditioning and Death from Apparent Overdose of Medically Prescribed Morphine: A Case Report." Bulletin of the Psychonomic Society 24 (1986): 278–280.
Pain researchers discovered that chronic pain patients often felt better in novel environments like hospitals or vacation spots, only to have their pain return the moment they got home. Martin, L. J., A. H. Tuttle, I. Bhogal, et al. "Male-Specific Conditioned Pain Hypersensitivity in Mice and Humans." Current Biology 29, no. 2 (2019): 192–201.
Hermann Ebbinghaus figured out another piece of the puzzle when he sat alone in his apartment, memorizing nonsense syllables like 'DAX' and 'BOK' for hours every day... He learned that repeated practice done over time led to better retention than a lot of practice all at once. H. Ebbinghaus, Memory: A Contribution to Experimental Psychology, trans. H. A. Ruger and C. E. Bussenius (New York: Teachers College, Columbia University, 1913).
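A common exponential approximation of the forgetting curve Ebbinghaus described (an illustrative textbook form, not his original fitted equation): retention R decays over time t according to a memory strength S, and each spaced review raises S, which is why distributed practice beats massed practice:

```latex
R(t) = e^{-t/S}
```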
Every time you recall a memory, it becomes temporarily moldable, able to be updated and changed, before being saved again. Nader, K., G. Schafe, and J. Le Doux. "Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval." Nature 406 (2000): 722–726, https://doi.org/10.1038/35021052.
Patients would go through their entire fear hierarchy in a single three-hour session... his data showed 90% of patients were improved or recovered at four-year follow-up. Öst, L. G. "One-session treatment for specific phobias." Behaviour Research and Therapy 27, no. 1 (1989): 1–7, https://doi.org/10.1016/0005-7967(89)90113-7.
Whether your fear went down during the exposure session didn't predict your long-term outcomes. [1] Craske, M. G., K. Kircanski, M. Zelikowsky, J. Mystkowski, N. Chowdhury, and A. Baker. "Optimizing inhibitory learning during exposure therapy." Behaviour Research and Therapy 46, no. 1 (2008): 5–27, https://doi.org/10.1016/j.brat.2007.10.003. [2] Craske, M. G., M. Treanor, C. C. Conway, T. Zbozinek, and B. Vervliet. "Maximizing exposure therapy: an inhibitory learning approach." Behaviour Research and Therapy 58 (2014): 10–23, https://doi.org/10.1016/j.brat.2014.04.006.
To successfully update these fear circuits and extinguish the response, it required two things firing simultaneously. Phelps, E. A., M. R. Delgado, K. I. Nearing, and J. E. LeDoux. "Extinction learning in humans: role of the amygdala and vmPFC." Neuron 43, no. 6 (2004): 897–905, https://doi.org/10.1016/j.neuron.2004.08.042.
Bessel van der Kolk and others established what is now known as trauma therapy. van der Kolk, B. The Body Keeps the Score: Brain, Mind, and Body in the Healing of Trauma. Viking/Penguin, 2014.
Because people first saw the box as a container for tacks, they struggled to see it as a platform. Duncker, K. "On problem-solving" (L. S. Lees, Trans.). Psychological Monographs 58, no. 5 (1945): i–113, https://doi.org/10.1037/h0093599.
Over 80% of people stuck with the complicated method they'd learned. Luchins, A. S. "Mechanization in problem solving: The effect of Einstellung." Psychological Monographs 54, no. 6 (1942): i–95, https://doi.org/10.1037/h0093502.
Chess studies revealed that when expert players encountered problems with a familiar-looking pattern that required an unconventional solution, their performance dropped dramatically. Bilalić, M., P. McLeod, and F. Gobet. "Inflexibility of experts—reality or myth? Quantifying the Einstellung effect in chess masters." Cognitive Psychology 56, no. 2 (2008): 73–102, https://doi.org/10.1016/j.cogpsych.2007.02.001.
The researcher who developed Schema theory, Frederic Bartlett, a British psychologist, had British people read a Native American folk tale and then retell it, and he documented that on each retelling it became more "British." Bartlett, F. C. Remembering: A Study in Experimental and Social Psychology. Cambridge University Press, 1932.
He realized that humans were using stored knowledge frameworks to perceive and understand the world around them, and he modeled AI on exactly that. Minsky, M. "A Framework for Representing Knowledge." MIT-AI Laboratory Memo 306, June 1974. Reprinted in The Psychology of Computer Vision, P. Winston (Ed.), McGraw-Hill, 1975.
No matter where you drop the marble on the inside of the bowl—high up on the rim, halfway down, wherever—it will always roll down and settle at the bottom. The bottom of the bowl is what scientists call an attractor state. Lorenz, E. N. "Deterministic Nonperiodic Flow." Journal of the Atmospheric Sciences 20, no. 2 (1963): 130–141, https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.
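A minimal formalization of the marble-in-a-bowl picture, assuming the simplest possible energy landscape V(x) = x^2: the state always moves downhill in V, so every starting point ends at the same attractor, x = 0:

```latex
\dot{x} = -\frac{dV}{dx} = -2x
\quad\Longrightarrow\quad
x(t) = x(0)\,e^{-2t} \;\to\; 0
```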
Popularized by John Hopfield in 1982 when he showed neural networks work this way, for which he won a Nobel Prize in 2024. Hopfield, J. J. "Neural Networks and Physical Systems with Emergent Collective Computational Abilities." Proceedings of the National Academy of Sciences 79, no. 8 (1982): 2554–2558, https://doi.org/10.1073/pnas.79.8.2554.
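Hopfield's result is easiest to see in code. A minimal Python sketch (the network size, patterns, and update count are illustrative assumptions, not Hopfield's original setup): two patterns are stored as basins in an energy landscape, and a corrupted input typically settles back into the nearest stored pattern, the same attractor behavior as the marble in the bowl:

```python
# Minimal Hopfield network: store two patterns, then recover one from a
# corrupted version of it via energy-descending updates.
import numpy as np

rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((2, 25)))   # two random +/-1 patterns, 25 units

# Hebbian storage: each stored pattern digs a basin into the energy landscape.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                             # no self-connections

def recall(state, steps=500):
    """Asynchronous updates; each update never raises the network energy."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))               # pick one unit at random
        state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

noisy = patterns[0].copy()
flip = rng.choice(25, size=6, replace=False)       # corrupt 6 of the 25 bits
noisy[flip] *= -1

recovered = recall(noisy)
print("overlap with stored pattern:", int(recovered @ patterns[0]), "out of 25")
```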
In 2022, a team at the Kavli Institute in Norway actually captured this happening in real time. Gardner, R. J., E. Hermansen, M. Pachitariu, Y. Burak, N. A. Baas, B. A. Dunn, M.-B. Moser, and E. I. Moser. "Toroidal Topology of Population Activity in Grid Cells." Nature 602, no. 7895 (2022): 123–128, https://doi.org/10.1038/s41586-021-04268-7.
Once we sit at a certain weight for a while, the body treats that as its set point and resists changing it. Müller, M. J., A. Bosy-Westphal, and S. B. Heymsfield. "Is There Evidence for a Set Point That Regulates Human Body Weight?" F1000 Medicine Reports 2 (2010): 59, https://doi.org/10.3410/M2-59.
But studies of hunter-gatherers show that they don't have epidemic levels of chronic diseases like we do. H. Pontzer, B. M. Wood, and D. A. Raichlen, "Hunter-Gatherers as Models in Public Health," Obesity Reviews 19, Suppl 1 (2018): 24–35, https://doi.org/10.1111/obr.12785.
This is why long COVID isn't considered long COVID until 3 months have passed. J. B. Soriano, S. Murthy, J. C. Marshall, P. Relan, and J. V. Diaz, "A Clinical Case Definition of Post-COVID-19 Condition by a Delphi Consensus," The Lancet Infectious Diseases 22, no. 4 (2022): e102–e107, https://doi.org/10.1016/S1473-3099(21)00703-9.
It's why depression treatment shows better outcomes the earlier it starts: patients who get treated early have nearly four times the odds of achieving remission compared with those who wait. L. Ghio, S. Gotelli, A. Cervetti, M. Respino, W. Natta, M. Marcenaro, G. Serafini, M. Vaggi, M. Amore, and M. Belvederi Murri, "Duration of Untreated Depression Influences Clinical Outcomes and Disability," Journal of Affective Disorders 175 (2015): 224–228, https://doi.org/10.1016/j.jad.2015.01.014.