The overlooked skill in agentic development: writing machine-legible requirements and specs
The Skill That Will Separate Good Agentic Engineers From Great Ones
Everyone is debating which model to run, which framework to wrap it in, which IDE plugin to install. Those are real decisions. But after watching a lot of agentic projects stall out, I’m convinced the actual bottleneck is upstream of all of them. It’s not the tooling. It’s the spec.
Writing machine-legible requirements is the skill nobody is talking about seriously enough right now.
The Problem With Vague Requirements
I spent years writing requirements for human developers. Humans tolerate ambiguity in a useful way. They ask clarifying questions in standup. They make sensible assumptions and flag the weird ones. They infer intent from context, from knowing the codebase, from a five-minute hallway conversation.
Agents do none of that by default. They interpolate. They pick the most plausible interpretation of an ambiguous requirement and execute it with complete confidence. If your requirements contradict each other, the agent resolves the contradiction silently and moves forward. If you leave edge cases unspecified, it fills them in with whatever pattern fits the training distribution. The output looks reasonable. It just isn’t what you wanted.
This is the failure mode I keep hitting, and watching other engineers hit.
What Machine-Legible Actually Means
A machine-legible requirement is not just a well-written requirement. It’s a requirement that leaves no escape hatches for creative interpretation.
That means: explicit success criteria, not general goals. It means edge cases written out as conditional logic, not left as “handle appropriately.” It means constraints stated as hard boundaries, not implied by context. It means inputs and outputs described with types, ranges, and failure behaviors, not vibes.
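To make "types, ranges, and failure behaviors" concrete, here is a sketch of one requirement written as a checkable contract rather than prose. Everything in it is invented for illustration (the field names, the length and age limits); the point is that every boundary is explicit and every violation has a defined behavior:

```python
from dataclasses import dataclass

MAX_USERNAME_LEN = 32  # hard boundary, not a preference


@dataclass(frozen=True)
class CreateUserRequest:
    username: str  # 1..32 chars; ASCII letters, digits, underscore only
    age: int       # 13..120 inclusive; anything else is a validation error


def validate(req: CreateUserRequest) -> list[str]:
    """Return a list of violations; an empty list means the request is valid.

    Unspecified edge cases are rejected, never guessed at.
    """
    errors = []
    if not (1 <= len(req.username) <= MAX_USERNAME_LEN):
        errors.append("username length out of range [1, 32]")
    elif not all((c.isascii() and c.isalnum()) or c == "_" for c in req.username):
        errors.append("username contains disallowed characters")
    if not (13 <= req.age <= 120):
        errors.append("age out of range [13, 120]")
    return errors
```

An agent handed this instead of "validate the username appropriately" has nowhere to interpolate: the ranges, the character set, and the failure behavior are all pinned down.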
Shraddha Bharuka put it plainly in a recent breakdown of what she’s calling the Agent-Driven Development Lifecycle: “Agents execute exactly what you define.” That’s it. That’s the whole insight. The document you write before touching a framework is the product you will get.
The PRD Is Now a Runtime Artifact
Here’s the mental shift that changed how I work. In traditional software development, the PRD is a communication artifact. You write it, developers read it, and then they translate it into code using their own judgment. The PRD gets stale. That’s fine.
In agentic development, the PRD, the spec, the structured prompt (whatever you're calling it) is closer to a runtime artifact. The agent executes against it directly, sometimes literally using it as the system prompt or as the grounding document for a planning step. The quality of that document determines the quality of the output, with almost no buffer in between.
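One way to see "runtime artifact" concretely: the spec file gets injected verbatim into the agent's context. This is a minimal sketch, not any particular framework's API; `build_agent_prompt` and its instructions are assumptions, but the mechanism is the point — the document is executed, not translated:

```python
from pathlib import Path


def build_agent_prompt(spec_path: str, task: str) -> str:
    """Assemble the prompt an agent will execute against.

    The spec is injected verbatim: every ambiguity in the file
    becomes an ambiguity in the run.
    """
    spec = Path(spec_path).read_text(encoding="utf-8")
    return (
        "You are implementing the following specification exactly as written.\n"
        "Do not resolve ambiguities silently; surface them instead.\n\n"
        f"=== SPECIFICATION ===\n{spec}\n\n"
        f"=== TASK ===\n{task}\n"
    )
```

There is no human in that pipeline to catch a vague sentence. Whatever is in the file is what the model grounds on.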
Teams at companies like Wiz and CRED have reported doubling execution speed with agentic coding workflows. That number is real, but it’s the ceiling, not the floor. Engineers who hand an agent a vague spec don’t double their speed. They create fast-moving messes that take longer to untangle than the original work would have taken.
What to Actually Write
The format matters less than the completeness. I've seen plain structured markdown specs work fine. What actually moves the needle:
Write the unhappy paths before the happy path. What happens when the input is malformed? What happens when a dependency is unavailable? What does failure look like, and what should the agent do about it?
State your constraints as constraints, not preferences. “Should be fast” is not a constraint. “Must complete within 500ms at p95 under 100 concurrent requests” is a constraint. Agents don’t prioritize preferences.
Define the scope boundary explicitly. Tell the agent what it should not do, not just what it should do. Scope creep from autonomous agents is real and it’s quiet.
Test your spec against a simple question: if you handed this document to someone who had zero context on your product, zero access to you, and had to implement it exactly as written, would the output be what you want? If not, the spec needs more work.
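A constraint like the 500ms-at-p95 example above can even be written as an executable check rather than a sentence. A minimal sketch, assuming you already have a harness that collects per-request latencies (the nearest-rank percentile method here is one common choice, not the only one):

```python
import math

P95_BUDGET_MS = 500.0  # "must complete within 500ms at p95" — a hard boundary


def p95(latencies_ms: list[float]) -> float:
    """Nearest-rank 95th percentile of latency samples, in milliseconds."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]


def meets_latency_constraint(latencies_ms: list[float]) -> bool:
    """True if the measured p95 is within the spec's 500ms budget."""
    return p95(latencies_ms) <= P95_BUDGET_MS
```

A constraint the agent can run against its own output is a constraint it cannot quietly reinterpret.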
Where This Fits in Your Workflow
I’m not saying ignore the model choice or the framework. Those decisions matter. But they’re multiplicative on the quality of your inputs. A great agent running against a bad spec produces bad software fast. Spending two extra hours on a rigorous spec before starting an agentic coding session has, in my experience, saved far more than two hours on the back end.
Nvidia’s Nemotron-Super at 120B parameters with a MoE architecture built specifically for agents is impressive. So is the growing ecosystem of MCP tooling and multi-agent orchestration. All of it gets more powerful when the agent has something precise to execute against.
The engineers who will get the most out of this wave are the ones who treat specification writing as a first-class engineering skill, not as a precursor to the real work.
It is the real work.
#AIEngineering #AgenticAI #SoftwareDevelopment #ProductEngineering #LLMs
