There is a pattern that shows up everywhere once you start looking for it.

A field identifies a real problem. Researchers develop a construct to capture it. The construct requires measurement, so they find a proxy. The proxy is convenient, tractable, and visible, so it gets used repeatedly, cited heavily, built upon. And then, slowly, the field begins to treat the proxy as if it were the construct. The original intuition that motivated everything disappears, buried under papers that treat the measurement tool as the thing being measured.

The construct drifts. Causal claims inflate. Critics attack the proxy and conclude the original problem wasn’t real. Defenders protect the proxy and miss the critique. The field forks.

Nobody is lying. Nobody is being sloppy. The drift is structural, almost inevitable. It happens because humans have a deep, almost irresistible need to close open questions, to find the clean version, the final version, the pure version. And purity, it turns out, is not about truth. It is about control.


The Proxy Problem

Consider how we ended up with time-based metrics as the dominant measure of problematic gaming. The original concern was real and defensible: some people become so embedded in a single activity that disengaging threatens their sense of self, their regulation, their capacity to function elsewhere. That’s entrenchment. That’s the thing researchers were actually worried about.

But entrenchment is hard to measure. It’s latent, slow-moving, and deeply individual. So researchers reached for what was visible: hours per day, frequency of sessions, duration. Not because time was the construct, but because it was a reasonable proxy for the construct. If someone is deeply stuck in an activity, they’ll probably spend a lot of time on it.

That’s a sensible heuristic. The problem came later.

Over time, the proxy became the construct. Papers started asking "Does time cause harm?" as if time itself were the mechanism, rather than a rough signal pointing at something else. Thresholds appeared: eight hours is too many, one hour is fine. The original intuition, that what matters is whether the activity narrows a person's future options and locks their sense of self into a single domain, was lost. And once the proxy was reified, the inevitable backlash followed: studies showing that time is a weak predictor, that many high-time users are flourishing, that the threshold models collapse under scrutiny. Critics concluded the problem wasn't real. Defenders doubled down on the proxy. Neither side was asking the right question.

This is conceptual drift. It is not a failure of intelligence or integrity. It is what happens when you mistake the map for the territory often enough that you forget the territory exists.


The Structure Beneath the Drift

What produces this pattern? I think the answer is the same thing that produces most human epistemic failures: the discomfort of incompleteness.

Proxies are comforting. A latent construct is an open question: you're always at risk of having measured the wrong thing, pointed at the wrong slice, misspecified the mechanism. A proxy that has been operationalized, validated, and built into a measurement instrument feels closed. It feels done. The field can stop asking foundational questions and start accumulating studies.

This is not laziness. It’s an entirely reasonable response to the cost of foundational uncertainty. Science would never progress if every paper had to relitigate first principles. But the cost is drift, and the cost compounds invisibly until the gap between the original intuition and the current measurement becomes a canyon.

The move that prevents this is simple to describe and hard to execute: keep the latent construct explicit. Name it. Defend it. Make it visible in the theory even when the operational version is a proxy. Say, out loud, “We are measuring X as a proxy for Y, and here is why Y is the thing that actually matters and here is why X is an imperfect but tractable indicator.”

That kind of explicit grounding creates friction against drift. It gives critics somewhere to point when the proxy starts taking on a life of its own. It preserves the original intuition as a check on the measurement.

It is, essentially, an act of epistemic maintenance.


Purity Is a Control Strategy

Here is the thing I want to be precise about: the drift I’m describing is not caused by bad epistemology or sloppy thinking. It is caused by the human relationship with incompleteness.

We do not like incomplete things. Incomplete things are cognitively uncomfortable. They sit in the mind as open loops, unsettled, demanding more attention. Closure feels like resolution. Precision feels like truth. A clean operational definition feels like the problem has been solved, not just approximated.

This craving for closure — for the finished, pure, unambiguous version — is the driver of what I’d call purity-seeking in intellectual life. And purity-seeking, despite its association with rigor, often produces the opposite of epistemically good outcomes. It produces overconfidence. It produces construct reification. It produces the erasure of tragic tradeoffs that were real and important and should have stayed visible.

The purest theory is often the most fragile one, because it has eliminated the ambiguity that reality actually contains.

There’s a deeper structure here that extends well beyond epistemology. Purity-seeking is a control strategy. When the world is complex, variable, full of edge cases and contradictions, the mind reaches for the version of reality that eliminates the contradiction. Not because eliminating the contradiction is epistemically justified, but because it feels safer. It feels like you can predict what comes next. It feels like you have a handle on things.

This is why the most dangerous theories are the ones that are elegant and internally consistent but have surgically removed the parts of reality that didn’t fit. They feel right. They feel finished. And that feeling of completion is precisely what makes them hard to dislodge.


The Civilizational Pattern

This isn’t just a philosophy-of-science problem. It shows up at the scale of civilizations, and the mechanism is the same.

Human history is, among other things, a record of the following sequence, repeated across cultures and centuries:

A system develops. It accumulates power, institutional knowledge, and internal consistency. It becomes dominant — not just effective but definitional. It begins to purify: eliminating heterodox elements, suppressing variation, centralizing authority over what counts as legitimate. For a while, the centralization increases efficiency. And then, at some point, the thing that made the system adaptive — its capacity to respond to variation, to absorb error, to route around failure — has been optimized away. The system becomes brittle. A shock arrives. Collapse follows, partially or entirely. Something new emerges from the wreckage.

This is the golden age, dark age, bronze age cycle you see over and over in historical analysis. It is not destiny. It is not metaphysics. It is what happens to complex adaptive systems when they mistake the elimination of variation for the achievement of perfection.

The Athenian democracy and its collapse. The Roman Empire’s long fracture. The Tang dynasty. The Soviet experiment. The early idealism of the French Revolution and where it ended. The pattern is not identical across cases, but the structural logic recurs: centralization, rigidity, brittleness, collapse, distributed reorganization.

What’s important about the reorganization phase is that it is almost never a return to what came before. It is decentralized, messy, pluralistic — multiple small things trying different strategies, most of them failing, some of them surviving and propagating. It looks chaotic from inside. From outside, over a long enough timescale, it looks like distributed parallel experimentation. Which is, more or less, what biological evolution looks like, and what functioning markets look like, and what intellectual progress looks like.

The logic that works is not the logic of perfection. It is the logic of maintenance: preserve variation, tolerate imperfection, distribute the risk of failure, and remain capable of adaptation.


What Nature Actually Does

When I say "nature follows a maintenance logic," I want to be careful. Nature is not a conscious agent with a strategy. What I mean is that the systems which have survived long enough for us to study them tend to have structural properties we can describe as maintenance-oriented, while the ones that pursued something analogous to purity — monocultures, centralized regulation, elimination of variation — tended to collapse.

Biodiversity is not elegant. A healthy ecosystem contains hundreds of competing strategies, many of which seem redundant. But redundancy is what makes the system resilient. When one strategy fails — a pathogen, a drought, a new predator — the others absorb the shock. The system doesn’t collapse; it reorganizes around whichever strategies survived.

Monocultures are more efficient in calm conditions. They are catastrophically fragile under stress. The Irish Famine was not just a social tragedy; it was an ecological lesson about what happens when you optimize variation out of a food system.

The parallel to human civilizations and intellectual fields is not metaphorical sleight of hand. The structural logic is genuinely similar: systems that preserve optionality, tolerate internal diversity, and maintain the capacity to reorganize under stress tend to survive. Systems that pursue purity — of ideology, of measurement, of doctrine, of identity — tend to become brittle in proportion to their purity.


The Reinvention Problem

Here is where I want to get specific about something uncomfortable.

We like to believe that intellectual and moral progress is cumulative. We learned from religious extremism, so we built secular institutions. We learned from tribalism, so we built cosmopolitan ethics. We learned from prejudice, so we built bureaucratic fairness systems.

But the underlying psychological infrastructure that produced religious extremism, tribalism, and prejudice is still present. It didn’t go anywhere. It found new containers.

The human tendency to form in-groups and out-groups, to locate purity within the group and corruption outside it, to treat conceptual boundaries as moral ones, to confuse the map for the territory — none of that was a product of religion or premodern culture. It was a product of the kind of minds we have, operating under conditions of limited information, competition for status, and the fundamental need to make sense of a complex world.

So when you build a secular institution that explicitly rejects religious tribalism but reproduces the structural dynamics of religious tribalism — the in-group markers, the heresy detection, the purity enforcement — you have not solved the problem. You have given it new clothes. And the new clothes are actually more dangerous in some ways, because the secular framing makes it harder to recognize the pattern. It presents itself as reason, as fairness, as evidence. The old pattern is hiding behind a new vocabulary, and we’ve lost the critical distance that came from knowing the pattern was religious.

This is not an argument against secular institutions, cosmopolitan ethics, or bureaucratic fairness. It’s an argument that those things don’t solve the underlying psychological dynamics that produce their pathological variants. They can constrain those dynamics when they’re working well. But constraining is not eliminating, and the constraint requires maintenance — it requires ongoing awareness of what it’s trying to hold, and why those dynamics keep recurring.

We keep reinventing the same problems because we keep trying to solve them at the wrong level. We treat the symptom as the disease. We eliminate the form and miss the function. And then we’re confused when the function reemerges in a new form, often wearing the aesthetic of the very thing that was supposed to have solved it.


Making the Implicit Explicit

So what do we do?

The answer I keep coming back to is not a grand solution. Grand solutions are part of the problem. The answer is something more modest and more durable: make the implicit explicit.

When you’re using a proxy, name it as a proxy. When your theory contains a normative assumption, surface the assumption. When your institution is attempting to constrain a psychological dynamic, name the dynamic and admit that the constraint requires maintenance. When you’re building something that claims to have solved a recurring human problem, ask what the structural analog of that problem looks like in the new context, and make sure you have a way to detect it.

This is not a guarantee of success. There is no guarantee of success. The tragic structure is real: every system trades off purity, completeness, and operational tractability. Every solution generates new problems. Every period of stability contains the seeds of the next crisis. Given enough time and a large enough system, something will go wrong. This is not pessimism; it is statistics.

But you can change your relationship to that fact. You can build intellectual and institutional systems that are honest about their own partiality, that preserve optionality rather than eliminating it, that are structured to absorb and learn from failure rather than to pretend failure won’t happen. You can resist the purity impulse — not by achieving some final, clean, contradiction-free theory of how things work, but by keeping the contradictions visible and treating them as information rather than embarrassments to be resolved.

The healthiest systems are not the purest ones. They are the ones that know what they are, know what they are trying to do, and remain capable of updating when they’re wrong.

That’s harder than purity. It requires tolerating the open loop, sitting with incompleteness, resisting the closure that feels like resolution but is actually rigidity in disguise.

But it is what maintenance looks like. And maintenance, not perfection, is what survives.


A Note on Epistemology and Peace

I want to end somewhere personal rather than theoretical, because I think there’s something here that matters beyond intellectual argument.

The craving for purity is not just a scientific or political failure mode. It is a psychological one. And the alternative to it — the maintenance mindset, the willingness to sit with tragic tradeoffs and irresolvable tensions — is not just epistemically better. It is, in a meaningful sense, a path toward something like peace.

Purity promises to close the open loops. To give you certainty, clarity, a world where everything has a right answer and you’ve found it. But the price is rigidity — the inability to update when reality doesn’t cooperate, the anxiety of defending a position that was never as secure as it felt, the aggression that comes from encountering the variation and ambiguity that shouldn’t exist if the theory is right.

Letting go of purity doesn’t mean letting go of rigor, or standards, or the commitment to getting things right. It means holding your best current model as a best current model — provisional, testable, revisable, honest about what it can’t explain. It means accepting that you are operating under uncertainty, that the framework you’re using to navigate the world is not the world itself, and that the gap between the map and the territory is where the interesting information lives.

This isn’t relativism. It’s not “everything is equally valid.” Some models are better than others. Some theories explain more, predict more, account for more edge cases without special pleading. The project of getting closer to truth is real and worth pursuing. It’s just that “closer to truth” is a direction, not a destination — and treating it as a destination is exactly how you end up defending a proxy as if it were a construct.

The open loop is not the enemy. The open loop is where you’re still learning.


There’s more to say about this — specifically about how the loss-of-optionality frame applies not just to gaming disorder but to the structural logic of purity projects across domains, and about what a properly operationalized maintenance epistemology might look like. Those are separate papers. For now: this is a rough map, honestly labeled.

This piece was primarily written by AI.