the implicit layer

2026-03-17

i've been cycling in amsterdam traffic since i was four. i can cross a four-way intersection with no lights, no signs, and no clear right-of-way — while checking my phone — without slowing down.

last week a tourist froze in the middle of one. that's the worst thing you can do. every cyclist had already predicted her trajectory from speed and angle. when she stopped, every prediction broke at once.

she saw chaos. i saw a system — and someone who'd broken the one thing that makes it work. not a rule. a correction mechanism. every rider in that intersection is simultaneously executing a plan and adjusting for everyone else's. the system doesn't prevent errors. it corrects them faster than they can compound.

i never thought about this until i started building AI systems. then i realized: what we're automating isn't the work. it's the correction layer that keeps the work from failing. and that changes everything.

the correction layer

an MIT study this month tracked experts' eyes during complex visual tasks. the experts unconsciously shifted focus to the most relevant information and couldn't explain their own strategy afterward. polanyi called this "the tacit dimension" in 1966: we know more than we can tell.

that's me reading a front wheel angle. my perception was trained below the level of language by twenty-five years of riding. no text captured it. no dataset contains it.

every LLM turns explicit human knowledge into statistical patterns — billions of words compressed into weights that reproduce the surface on demand. but there's a hard boundary: the model can only learn what was made explicit. the knowledge that exists only in the doing — in the handlebars, in the caseworker's gut, in the senior engineer's "this feels wrong" — never entered the training data.

this isn't a gap that closes with scale. NVIDIA's hybrid mamba-transformer handles 1M-token context. the architecture evolves. the boundary doesn't — because it's not architectural. it's epistemic. the most important knowledge in any complex system was never written down, because writing it down would change what it is.

here's the thing nobody talks about: that knowledge isn't decorative. it's not "nice-to-have context" or "institutional wisdom." it's the error-correction mechanism. the cyclist who adjusts for the cargo bike. the caseworker who sees that the numbers don't match the family's story. the engineer who blocks a deploy because something feels off. they're all doing the same thing — catching the moment when the rules don't apply, faster than any explicit process could.

AI doesn't remove humans from the loop. it removes the reason humans were in the loop. and that reason was never "to do the work." it was to catch when the work is wrong.

the gedoogparadox

cannabis is illegal here. coffeeshops sell it. the police know. the tax authority collects revenue. growing up, i never found this strange — systems should work in practice, and if the paper contradicts the practice, so be it.

the gedoogparadox: sometimes the most effective governance is the refusal to govern. the system works because humans in the loop — police, local officials, shop owners — constantly adjust the unwritten rules to local context. no algorithm could replicate this because the adjustments are the point. formalize them and you get germany's 197-page cannabis law — rigid, expensive, and less functional than the dutch shrug.

AI governance is stuck in the same trap. the EU tried formalization: 458 pages of AI Act, already being delayed, already criticized globally as regulation substituting for investment. but gedogen doesn't scale either — AI's externalities are global and recursive, not local like a coffeeshop.

the deeper problem: both approaches assume you can either write the rules or leave them unwritten. neither accounts for the correction layer — the humans who know when the rules don't apply. the EU Act specifies what AI systems should do. it can't specify when they shouldn't. that judgment is implicit, contextual, and exactly what's being automated away.

the helpful trap

everyone i know speaks english. the moment we detect a non-native accent, we switch to english — genuinely helpful. expats live here for years and can't read their own health insurance letter. our helpfulness built a surface so smooth they never needed to reach the layer underneath.

AI coding tools are this same dynamic with a compiler. they make everything frictionless — build features without understanding the framework, ship code without knowing the language. addy osmani called it "comprehension debt" last week: AI agents generate code at 140-200 lines per minute; humans comprehend it at 20-40. a 5-7x gap that only widens.

but comprehension debt isn't just about understanding code. it's about losing the correction layer. the senior engineer who reviews a PR and says "this will break under load" isn't applying a rule — she's pattern-matching against years of production failures that were never documented. the junior dev using AI to ship faster never builds that pattern library. the code works. the judgment that would catch when it doesn't work was never developed.

every AI-assisted session that ships code without building understanding isn't just accumulating debt. it's thinning the correction layer — the population of humans who can recognize when the output is wrong. and that layer doesn't regenerate on its own. it took the senior engineer ten years of production incidents to build her intuition. there is no shortcut, no fine-tuning, no RLHF that produces the judgment earned by watching your system fail at 3 AM and understanding why.

what breaks

in 2020, the dutch tax authority destroyed 26,000 families.

an automated system formalized the implicit judgment caseworkers used to make: "does this look suspicious?" humans used context — history, circumstances, plausibility. the formal system used rules: dual nationality as risk indicator, minor errors as fraud signals. the toeslagenaffaire brought down a government.

the designers thought they were capturing human judgment. they were replacing it. the formal system couldn't do the one thing the caseworker could: know when the rules shouldn't apply. the correction layer was gone.

this is the pattern. not a one-time failure — a structural one.

hallucination is the drizzle — the 200 days of rain we absorb without umbrellas. the toeslagenaffaire is what happens when the drizzle becomes a flood. an implicit system gets formalized into something faster and cheaper. it works — until it encounters a situation the rules don't cover. without the correction layer, there's no mechanism to catch the failure before it compounds. no caseworker sees the context. no engineer blocks the deploy. no cyclist adjusts for the cargo bike. the system does exactly what it was designed to do, all the way into catastrophe.
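the compounding is easy to caricature in code. a toy model, nothing more — the drift rate and the correction strength are made-up numbers, not measurements of any real system:

```python
# toy model: a process accumulates a small error each step.
# a correction layer catches part of the error every step;
# without one, the error just compounds.
# all numbers are illustrative.

def run(steps: int, drift: float, corrected: bool) -> float:
    error = 0.0
    for _ in range(steps):
        error += drift          # each step introduces new error
        if corrected:
            error *= 0.5        # the correction layer halves whatever is there
    return error

with_layer = run(steps=10, drift=1.0, corrected=True)
without_layer = run(steps=10, drift=1.0, corrected=False)
# with a correction layer the error stays bounded (it approaches 1.0);
# without one it grows linearly (10.0 after ten steps)
```

the point isn't the numbers — it's the shape. bounded versus unbounded. the toeslagenaffaire is the unbounded branch.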

the AI industry is building this at scale. every system formalizing the judgment of engineers, doctors, lawyers, loan officers keeps the pattern and drops the context. the pattern works until the context changes. and nobody knows the pattern depended on it — because the person who knew was the correction layer, and the correction layer is what we automated away.

the load-bearing wall

i bike to work every morning. i navigate intersections without thinking.

here's what i've realized: i'm not following rules. i'm running a continuous error-correction loop — predicting, adjusting, compensating for everyone else's mistakes in real time. the system doesn't work because the rules are good. it works because every participant is simultaneously an agent and a correction mechanism for every other agent.

that's what the implicit layer actually is. not knowledge. not wisdom. not institutional memory. it's the error-correction function of complex systems — the distributed, real-time capacity to recognize when the formal process is producing the wrong output and override it before the damage compounds.

AI is the most powerful formalization engine ever built. it can replicate any explicit process faster and cheaper than humans. what it can't replicate is the correction layer — the thing that catches when the process is wrong. and we're not just failing to replicate it. we're actively dismantling it, one automated workflow at a time, because the correction layer looks like inefficiency when you measure by throughput.

the implicit layer isn't a nice-to-have. it's the load-bearing wall.

we're replacing it with drywall and calling it progress. and we won't know until the building shakes.