hallucination is a feature, not a bug — just not for us
2026-03-07
in the 1990s, adrian thompson wanted a chip that could distinguish two audio tones. nobody could program this. so he didn't try. he ran an evolutionary algorithm instead — random circuit configurations loaded onto a real FPGA, the ones least awful at the task selected to breed, the rest discarded, for thousands of generations until one worked.
when he examined it, the circuit made no sense. current reversed through gates. the chip was using the physical properties of the silicon itself — not the logic, the actual material between the components. things no engineer would exploit because they don't show up in any schematic. they're not in the abstraction. they're in the physics.
it worked because every generation was tested against actual sound waves. bad circuits didn't get feedback. they got deleted. reality didn't evaluate the design. reality filtered it. what survived wasn't elegant or logical. it was true.
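the loop thompson ran is just selection against a real fitness signal. a minimal sketch — a toy bit-string stands in for the circuit and a fixed target stands in for "responds to tone A, ignores tone B"; the names and parameters here are illustrative, not his actual setup:

```python
import random

TARGET = [1, 0] * 8  # stand-in for the real-world test signal

def fitness(genome):
    # the reality check: scored against the actual target, not a model of it
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # survivors of contact with reality
        # bad circuits produce no offspring; good ones breed with noise
        pop = elite + [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

random.seed(0)
winner = evolve()
```

the generator is pure noise; all the intelligence lives in the filter. delete the `fitness` call against `TARGET` and the loop produces nothing but drift.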
hallucination isn't a bug. it's what happens when you optimize for plausibility — inevitably, mathematically. a system that learns to sound right will sometimes sound right and be wrong, with identical confidence either way. the river flows downhill.
your brain also flows downhill. dreams are hallucinations. déjà vu is a hallucination. that memory you'd swear happened — your brain wrote it because it fit the narrative. you are a plausibility engine. you have been one your entire life.
and it's the best thing about you.
every creative leap starts as a hallucination. every intuition is a pattern-match that fired before you could check it. your brain generates ideas the same way it generates false memories: by optimizing for coherence over accuracy, for narrative over truth. creativity and confabulation are the same process.
sit with that for a second. your best idea — the one you're most proud of, the one that felt like it came from somewhere deeper — was generated by the same machinery that fabricates memories of things that never happened. the same process that makes you a creative thinker makes you an unreliable witness. you can't have one without the other.
the difference is you have a body.
you reach for something, miss, correct. you walk into a door frame and update your spatial model. every nerve ending is a correction signal running alongside the hallucination engine in your skull. the body doesn't stop the hallucinations. it catches the ones that don't match reality and kills them before they compound — fast enough that you never notice they were there. the good ones survive. you call those ideas.
hallucination plus a body is creativity. hallucination minus a body is an LLM.
thompson's chip was the same. it hallucinated circuits — random, senseless, wrong. but each one touched the actual sound wave, and the ones that failed produced no offspring. the hallucinations that survived contact with reality became something no human could have designed, because no human thinks at the level of physics where the answer lived.
LLMs touch nothing. they generate text and the generation is the end of the process. the output meets nothing real. it's tested against nothing except whether it pattern-matches what a good answer looks like. plausibility, evaluated by more plausibility, all the way down.
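you can watch the problem in miniature with a bigram model — the tiniest possible plausibility engine, and only an analogy for a real LLM. every step it takes is locally plausible (each word pair appeared in training), and nothing anywhere checks whether the whole sentence is true. the corpus here is invented for illustration:

```python
import random
from collections import defaultdict

# tiny training corpus, tokenized; "." ends a sentence
corpus = (
    "the chip works in reality . "
    "the model works in theory . "
    "the model sounds right ."
).split()

# plausibility table: which words followed which in training
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def generate(start="the", max_len=8):
    # each step picks a continuation that has been seen before —
    # plausible by construction, verified against nothing
    out = [start]
    while out[-1] != "." and len(out) < max_len:
        out.append(random.choice(table[out[-1]]))
    return " ".join(out)

random.seed(1)
print(generate())
```

it can emit "the chip works in theory" — a sentence no training text contains and nothing in the loop could ever flag, because every individual transition is attested. plausibility evaluated by plausibility.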
your brain runs on the same cheap trick. but your body checks the answers in milliseconds, always there, always indifferent to what sounds right. the world answers with physics.
the machine doesn't have a world to answer to. we're building the hallucination engine with urgency. the correction loop — the part that makes hallucination useful instead of dangerous — we're building with good intentions.
thompson needed thousands of generations of contact with reality to get one circuit that worked. we're trying to skip that part.