Contrarian take: prompt engineering as a skill is depreciating, context architecture is the real emerging discipline
The Prompt Engineering Bubble Is Deflating
I’ve watched a lot of skills go from “career differentiator” to “table stakes” to “irrelevant” in this industry. Prompt engineering is moving through that cycle faster than almost anything I’ve seen. And the people who built their identity around it are going to have a rough 18 months.
Let me explain what I’m actually seeing.
The Trick Era Is Over
Three years ago, getting useful output from a language model was genuinely an art form. You needed to know the incantations. “Act as a senior expert.” Chain-of-thought triggers. Role prompting. Elaborate system message gymnastics. If you knew those patterns, you were ahead of most people using these tools.
That era is ending. Quietly, without a press release.
The models got bigger, smarter, and better at reading. Context windows grew from 4K tokens to 200K: GPT-4 launched with 8K, Claude 3.5 Sonnet shipped with 200K, and Claude 4 reads your entire codebase. The premise of prompt engineering, that you must carefully explain your intent to a model with limited capacity, collapsed as the capacity exploded.
A year ago I spent 20 minutes crafting a prompt to explain my codebase to a model. Today I give it the files. It reads the actual source. My explanation is irrelevant.
What Actually Replaced It
Ruben Hassid put it well in a recent breakdown of Claude 4.6 workflows: “AI went from reading a sticky note to an entire book. Stop explaining yourself in the prompt. Put it in files.”
He’s right. The prompt isn’t the product anymore. The information architecture around the model is.
This is what I’m calling context architecture. It’s the discipline of deciding what information a model needs, in what format, in what order, and how to structure the relationship between system-level knowledge and task-level instructions. It’s closer to information design than to copywriting.
The shift is from “how do I phrase this?” to “what does this model need to see, and how should I organize it?”
Anthropic noticed this too. A Claude Code engineer’s post on building AI agents made waves recently because it documented how Anthropic rebuilt their tool system three times as Claude outgrew it. The core lesson from that piece: design for how the models see, not how you see. That’s context architecture thinking.
https://x.com/kloss_xyz/status/2027563554774388893
Why This Is a Real Discipline, Not Just a Buzzword
Prompt engineering was always a bit of a workaround. It was the skill of compensating for model limitations by choosing your words carefully. Like learning the exact phrasing that makes a difficult bureaucrat cooperative. Useful, but fragile. Change the bureaucrat, start over.
Context architecture is structural. It involves decisions about what goes in persistent memory versus the active window, how to chunk reference material so it’s retrievable, how to separate behavioral rules from factual context, and how to design feedback loops where the model asks clarifying questions before executing. Hassid’s framework makes this concrete: context files hold your standards and audience, the brief is what you type fresh each time, and the model reads the files completely before touching the task.
That’s not a prompt. That’s a system.
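One way to sketch that separation in code, assuming hypothetical file names (`standards.md`, `audience.md`) and a generic request shape rather than any specific vendor's API:

```python
from pathlib import Path

def build_request(context_dir: str, brief: str) -> dict:
    """Assemble a model request: persistent context files become the
    system-level knowledge; the fresh brief is the only per-task input.
    The file names and request shape here are illustrative, not a standard."""
    context_parts = []
    for path in sorted(Path(context_dir).glob("*.md")):
        # Label each file so the model can tell standards from audience notes.
        context_parts.append(f"## {path.name}\n{path.read_text()}")
    return {
        "system": "\n\n".join(context_parts),              # stable across tasks
        "messages": [{"role": "user", "content": brief}],  # typed fresh each time
    }
```

The design point: the standards and audience live in versioned files that every conversation reads completely, and the thing you actually type shrinks to a brief.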
Who Gets Left Behind
The people most at risk are the ones who commodified prompt engineering. The “1000 ChatGPT prompts” sellers. The consultants whose entire value proposition was knowing the magic phrasing for a specific use case. That knowledge has a shelf life measured in model versions now.
The people who actually studied how these models process and prioritize information, who think about context as a design problem, are going to look increasingly valuable as systems get more complex.
What You Should Be Building
Stop optimizing your prompts. Start thinking about your information architecture.
What does the model need to know at all times versus for this specific task? Where are you over-explaining things that could just be in a reference file? Are you rebuilding context in every conversation that could be stored structurally? Are you prompting for an output when you should be prompting for a plan, then reviewing before execution?
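The plan-then-review question in particular is easy to make concrete. A minimal sketch, where `call_model` and `approve` are placeholders for whatever client and review step (human or automated) you actually use:

```python
def plan_then_execute(call_model, task: str, approve) -> str:
    """Two-pass pattern: ask for a plan, review it, and only then execute.
    `call_model` takes a prompt string and returns the model's reply;
    `approve` takes the plan and returns True/False. Both are placeholders."""
    plan = call_model(f"Do not do the task yet. Produce a numbered plan for:\n{task}")
    if not approve(plan):
        # Stop here: fix the brief or the context files, not the phrasing.
        return ""
    return call_model(f"Execute this approved plan step by step:\n{plan}\n\nTask:\n{task}")
```

The review gate is the architectural choice: failures get caught at the cheap planning pass instead of after a full execution.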
These are design questions. They require a different mental model than “write a better prompt.”
The discipline is young enough that figuring it out now still gives you a real lead. That window will close. It always does.
#AIEngineering #PromptEngineering #MachineLearning #AIAgents #LLM
