# Human Context

We often compare artificial intelligence to human intelligence. In this post, I want to do the opposite: what if we apply concepts from the world of LLMs and AI to ourselves? For example: "The reverse Turing test is when you behave so logically that people start mistaking you for an algorithm."

Or consider how we work with context. We often debate when AI will replace all programmers. Or not all of them. Or all, but later. So far, everyone agrees on one thing: humans can hold the context of an entire project in their minds, not just the backend or the Android application. Amazing tools like Cursor or Windsurf only work with portions of code. To maintain the big picture, you need a living developer. At least for now.

So does this mean we have a larger context window? Strictly speaking, no. But our meta-context (in LLM terms, we'd call it RAG) still works better. And yes, our meta-context differs significantly from what ChatGPT has. Our context contains extras.

AI has Wikipedia in all languages, including dead ones. We have sensory experiences that we can barely express in words. ChatGPT has the text of every psychology article. We have a petting zoo (or perhaps a petting abyss?) of our personal demons. And it appears that it's not logic or knowledge, but this very abyss that makes us human.

Now I know why I memorize poems and make sure they don't fade from memory. I also remember meaningless facts: the date Constantinople fell, the name of the last Western Roman Emperor (yes, I often think about the Roman Empire).

When they transfer my consciousness to the cloud, I want them to start with the meaningless context. Transfer my childhood traumas. The poetry. Movie soundtracks that made me cry. Transfer the useless knowledge. And transfer all my demons. Every single one.

As for all the useful knowledge: that's not me. That's just a technical footnote and a list of supplementary reading.