The Real Skill Gap in AI Engineering (And It’s Not Prompting)
Everyone is learning to prompt. Courses, threads, workshops, a 24-minute video from Anthropic’s applied AI team promising to teach you “the 6 elements” you’ve been missing. The tutorials are multiplying faster than anyone can watch them. And I think most of them are solving the wrong problem.
The actual gap I see in engineering teams right now is not prompting skill. It’s problem framing.
Two Different Relationships With AI
There’s a version of using AI where you stay in the driver’s seat the whole time. You write code, hit a wall, ask Claude or GPT to fix the specific thing that broke, paste the output, move on. The AI is a fast search engine with opinions. You’re still doing all the thinking.
Then there’s a different mode. You define the objective clearly, wire up the data inputs, describe the constraints, and let the model do the reasoning across the whole problem. You step back and read what comes out. The AI finds edges you didn’t think to look for.
That second mode is uncomfortable at first. It feels less like engineering and more like managing a junior engineer who works 100x faster than you but needs a very clear brief. The discomfort is the point. It means you’re doing something different.
Why Framing Is the Hard Part
Prompting is a tactical skill. Framing is a strategic one.
When you frame a problem well, you define what success looks like, what the model should optimize for, what the constraints are, and, critically, what you don’t want it to do. That requires you to understand the problem more deeply than you would if you were just writing the solution yourself.
This is where most engineers stall. They know how to decompose code. They’re not practiced at decomposing objectives. Those are different skills, and the second one doesn’t come from a tutorial.
There’s a concrete example floating around right now that I found genuinely illustrative. Someone built a Polymarket trading system using Claude Code and four free repositories. The setup ran about $25 a month in API costs. One of the more telling details: a single prompt that said, essentially, find every wallet with 100 or more trades and a win rate above 70%, rank by profit, return the top 50. Claude scanned 14,000 wallets in 4 minutes and returned 47. The builder’s note was that he didn’t write the scoring function. Claude did. He just wired it into an if-statement.
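The article doesn’t show the code Claude produced, but the query it describes is easy to sketch. Here’s a minimal Python version; the `Wallet` record, the function name, and the defaults are all my illustrative assumptions, not the builder’s actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Wallet:
    address: str
    trades: int
    wins: int
    profit: float

    @property
    def win_rate(self) -> float:
        # Guard against division by zero for wallets with no trades.
        return self.wins / self.trades if self.trades else 0.0


def top_wallets(wallets, min_trades=100, min_win_rate=0.70, limit=50):
    """Filter by activity and accuracy, then rank by profit."""
    qualified = [
        w for w in wallets
        if w.trades >= min_trades and w.win_rate > min_win_rate
    ]
    return sorted(qualified, key=lambda w: w.profit, reverse=True)[:limit]
```

The point of the sketch is how little of it is hard to write. The hard part was deciding that trade count, win rate, and profit were the three signals worth filtering on in the first place.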
That’s the distinction I’m talking about. The skill wasn’t in the prompt syntax. It was in knowing what question to ask and what output shape was actually useful.
What Engineers Are Underinvesting In
I’ll be direct about what I think this means for how we develop skills.
Spending another hour on prompt-engineering courses probably offers marginal value at this point for most working engineers. The ceiling on that skill is relatively low, and you probably hit it faster than you think.
The higher-leverage investment is in problem decomposition. How do you take an ambiguous objective, break it into measurable sub-problems, and define the interfaces between them clearly enough that a model can reason about them without you holding its hand through every step?
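To make that concrete, here’s one way a decomposition can look in code. The objective (“surface the riskiest open pull requests”) and every name and weight below are hypothetical, invented for illustration; the point is the shape: each sub-problem gets its own interface, so any piece can be handed to a model, or a colleague, without explaining the rest.

```python
from typing import Protocol


class Fetcher(Protocol):
    """Sub-problem 1: where PRs come from. Any source that
    returns this shape satisfies the interface."""
    def open_prs(self) -> list[dict]: ...


def risk_score(pr: dict) -> float:
    """Sub-problem 2: a measurable definition of 'risky'.
    The signals and weights here are placeholder assumptions."""
    return pr["lines_changed"] * 0.7 + pr["files_touched"] * 0.3


def riskiest(prs: list[dict], limit: int = 5) -> list[dict]:
    """Sub-problem 3: ranking, defined independently of scoring,
    so the scoring function can change without touching this."""
    return sorted(prs, key=risk_score, reverse=True)[:limit]
```

Notice that nothing here required knowing how to fetch PRs or what “risky” ultimately means. The decomposition is the deliverable; the implementations behind each interface are the part a model can fill in.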
That’s also, not coincidentally, the skill that separates good senior engineers from great ones regardless of AI. AI just made the gap more visible because it removed the execution bottleneck. What remains is the thinking.
The Andrej Karpathy talk on LLMs that’s been circulating is worth your time not because of the prompting mechanics but because he explains how these models actually reason. Understanding the substrate matters more than memorizing techniques for poking it.
Where This Leaves Us
The engineers I’ve seen adapt fastest to this mode share one trait. They treat the model like a collaborator with a specific cognitive profile, not a tool to be commanded. They write problem statements the way you’d write a brief for a smart contractor who’s new to your domain.
That shift in mental model changes everything about how you work. It also means the skill you need to build looks less like “learn to prompt” and more like “learn to think in objectives and constraints.”
That’s harder. It takes longer. It transfers everywhere.
The prompt course can wait.
#AIEngineering #MachineLearning #SoftwareEngineering #AITools #TechOpinion
