Prediction, inspired by Anthropic’s Skills framework: the real differentiator in agentic AI is packaged domain knowledge, not model capability
Stop Building Agents. Start Packaging Knowledge.
I’ve been thinking about something Barry Zhang and Mahesh Murag said at the AI Engineer Code Summit, and I can’t shake it. Sixteen minutes. Two Anthropic engineers. One idea that cuts through most of the noise in agentic AI right now.
“Stop building agents. Build Skills instead.”
Most people heard a workflow tip. I heard a diagnosis.
The Real Bottleneck
I’ve built agents that looked genuinely impressive in demos. Clean outputs, smooth tool calls, plausible reasoning. Then production hit and everything fell apart. Not because Claude or GPT-4 failed. The models were fine. The problem was that every session started from zero. Every task required re-explaining context that any competent human employee would simply know. The institutional memory wasn’t there.
That’s the actual bottleneck. Not model capability. The absence of packaged domain knowledge.
We’ve been so focused on which model scores best on benchmarks that we missed the more practical question: what does the model know about your specific domain, your workflows, your organizational context? Raw intelligence without that context is, as the Anthropic engineers put it, just entertainment.
What Skills Actually Are
Here’s the part that surprises people. Skills aren’t some complex framework or new API surface. They’re folders. Literally directories containing markdown files that teach Claude your workflow, your expertise, your domain-specific logic.
The architecture is deliberately simple. An agent loads only the name and description of a Skill at first. When the task becomes relevant, it pulls the full SKILL.md. When it needs more depth, it navigates into supporting reference files. Progressive disclosure. No context window bloat from information the current task doesn’t need.
This matters more than it sounds. One generic agent paired with a well-organized library of Skills can outperform a collection of specialized agents, each hardcoded for a narrow purpose. You get flexibility without starting from zero every time.
Fortune 100 companies are already deploying this at scale, using Skills to encode internal processes and institutional knowledge. Developer-productivity teams supporting 10,000 or more engineers are using Skills to standardize how code gets written across the organization. That’s not a proof of concept. That’s a signal about where the real value is.
The Analogy That Lands
Zhang and Murag used a comparison that I think is exactly right. Who do you want doing your taxes? The person with a 300 IQ who has never read tax law, or the accountant who has done this specific work for twenty years?
Intelligence without expertise is impressive. Expertise without intelligence is slow. The combination, where a capable model has access to deep, packaged domain knowledge, is what actually produces reliable output in production.
We’ve been optimizing for the first variable while mostly ignoring the second.
What This Means for How We Build
The shift this implies is uncomfortable for a lot of teams, because it means the competitive moat isn’t in prompt engineering or model selection. It’s in the work of capturing and packaging what your organization actually knows.
That’s harder than writing a system prompt. It requires talking to domain experts, documenting the non-obvious judgment calls, encoding the “why we do it this way” alongside the “what to do.” It’s closer to knowledge management than software engineering, which is probably why it gets skipped.
But skip it and your agents will keep underperforming. The model isn’t the problem. The empty folder where the domain knowledge should live is the problem.
My take is that within 18 months, the teams winning with agentic AI won’t be the ones with the best model access. They’ll be the ones who did the unglamorous work of turning institutional expertise into something a model can actually use. Skills, or whatever the next iteration of this pattern gets called, are the mechanism. The knowledge itself is the asset.
Start there.
#AgenticAI #AIEngineering #Claude #Anthropic #MachineLearning #AIProductivity #LLM
