Andrej Karpathy has a gift for coining terms that quickly go mainstream. When I heard "vibe coding," it just made sense. It perfectly captured the experience of programming without really engaging with the code. You just vibe until the application does what you want.
Then there's "hallucination." He didn't exactly invent it; the term has existed since the 1970s. In one early instance, it was used to describe a text summarization program's failure to accurately summarize its source material. But Karpathy's revival brought it back into the mainstream and subtly shifted its meaning, from "prediction error" to something closer to a dream or a vision.
Now, large language models don't throw errors. They hallucinate. When they invent facts or bend the truth, they're not lying. They're hallucinating. And every new model that comes out promising to stay clean still hallucinates.
An LLM can do no wrong when all its failures are framed as a neurological disorder. For my part, I hope there's a real effort to teach these models to simply say "I don't know." But in the meantime, I'll adopt the term for myself. If you ever suspect I'm lying, or catch me red-handed, just know that it's not my fault. I'm just hallucinating.