AI Is Not Your Search Engine. It’s Your Thinking Partner. But Only If You Let It.
There’s a loneliness problem in AI development that nobody wants to admit exists.
Not the philosophical kind. The practical, 11pm-debugging-an-agentic-workflow kind. Your teammates are asleep. Slack is quiet. Stack Overflow has never seen your exact edge case. You’re staring at a state management bug in a multi-agent orchestration pipeline and the only thing you need is someone to think out loud with.
This is where I’ve watched my own relationship with AI tools change completely over the last year.
The Shift Most Engineers Miss
Most people still use AI like a smarter Google. They paste an error, get an answer, paste the next error. Repeat. It’s useful the same way a dictionary is useful. But a dictionary won’t tell you your architecture is wrong.
What I’ve started doing instead is treating the conversation as a working session. I load in the full context before I ask anything. The system design doc. The constraints I’m working within. The decision I made three weeks ago that’s now creating friction. The assumptions I haven’t questioned yet.
That context-loading step is where most engineers give up because it feels slow. It’s not slow. It’s the actual work.
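One way to make that step cheap enough to do every time is to script it. Here is a minimal sketch of assembling a context-rich prompt from project files, constraints, and prior decisions; the helper name, file paths, and section labels are my own invention, not a prescribed format:

```python
from pathlib import Path

def build_working_session_prompt(question: str,
                                 context_files: list[str],
                                 constraints: list[str],
                                 prior_decisions: list[str]) -> str:
    """Front-load full situational context before asking anything."""
    sections = []
    for path in context_files:
        p = Path(path)
        if p.exists():  # skip missing docs rather than failing the session
            sections.append(f"## Context: {p.name}\n{p.read_text()}")
    sections.append("## Constraints\n" +
                    "\n".join(f"- {c}" for c in constraints))
    sections.append("## Prior decisions\n" +
                    "\n".join(f"- {d}" for d in prior_decisions))
    # The question goes last, after the model has the full picture
    sections.append(f"## Question\n{question}")
    return "\n\n".join(sections)

prompt = build_working_session_prompt(
    "Where does this design create friction for retries?",
    context_files=[],  # e.g. ["docs/system-design.md"] in a real project
    constraints=["p95 latency under 200ms", "single Postgres instance"],
    prior_decisions=["Chose event sourcing for the order pipeline"],
)
```

The point of the script is not automation for its own sake; it is that the cost of loading context drops to near zero, so you stop skipping it.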
Context Depth Is the Real Skill
Here’s the thing I keep coming back to. The quality of AI-assisted thinking scales almost entirely with how much relevant context you’re willing to front-load. Engineers who treat it like a search query get search-quality answers. Engineers who treat it like a senior colleague who needs full situational awareness get something genuinely different.
I’ve watched people get mediocre responses from Claude or GPT-4 and conclude the tool is overhyped. Then I look at how they’re using it. Two sentences of context. Vague question. No constraints given. That’s not a tool problem. That’s a usage problem.
The concept floating around right now as “Socratic prompting” gestures at this idea, though the framing undersells it. It’s not just about asking questions instead of issuing commands. It’s about building a shared mental model of the problem before you ask for any solution at all.
What Pressure-Testing Your Assumptions Actually Looks Like
When I’m about to commit to an architecture I’ll live with for six months, I now run it past an AI conversation the same way I’d run it past a trusted colleague. Not “is this good?” but “here’s my reasoning, here are the constraints, here are the tradeoffs I see. Tell me what I’m missing.”
The key difference from a search query is that I’m not looking for a fact. I’m looking for a challenge. I want the reasoning pushed on, not confirmed.
That requires giving the model enough context to actually push back usefully. System constraints. Prior decisions. What I’m optimizing for. What failure modes I’m most worried about. Without that, you get generic answers because you asked a generic question.
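In practice I keep that structure as a reusable template. A sketch of what a pressure-test prompt can look like; every label and piece of wording here is one possible framing I made up for illustration, not a canonical format:

```python
PRESSURE_TEST_TEMPLATE = """\
Here is an architecture decision I am about to commit to.
Do not tell me whether it is good. Challenge the reasoning.

Proposed design:
{design}

Optimizing for:
{goals}

Constraints:
{constraints}

Failure modes I am most worried about:
{failure_modes}

What am I missing? Which assumption breaks first?"""

def pressure_test_prompt(design: str, goals: list[str],
                         constraints: list[str],
                         failure_modes: list[str]) -> str:
    """Fill the template with the reasoning to be challenged."""
    def bullets(items: list[str]) -> str:
        return "\n".join(f"- {item}" for item in items)
    return PRESSURE_TEST_TEMPLATE.format(
        design=design,
        goals=bullets(goals),
        constraints=bullets(constraints),
        failure_modes=bullets(failure_modes),
    )
```

The design choice that matters is the opening instruction: explicitly forbidding a verdict and demanding a challenge is what turns the response from confirmation into critique.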
The Certification Question
Anthropic just launched the Claude Certified Architect program, which is free and covers agentic orchestration, prompt engineering, and Claude Code workflows. I have opinions about certifications generally, and they’re mostly skeptical. Learning by building beats learning by completing modules, in my experience.
But I think the framing Anthropic is going for is right even if the delivery method is debatable. Context management at a systems level is a real skill that separates engineers who get leverage from AI from engineers who are just running expensive autocomplete. If a free certification gets more engineers thinking about it seriously, that’s a net positive.
Where This Is Going
The engineers who are going to matter in the next few years are not the ones who can prompt best. They’re the ones who can think best, and who know how to use AI to extend and stress-test that thinking rather than replace it.
Naval recently remarked that software has been eaten by AI. That’s true in a narrow sense. But reasoning through hard problems in novel systems is not something a model does for you. It’s something you do with a model, if you’ve built the habit of actually showing it the full picture.
Start there. The quality gap between engineers who do and engineers who don’t is already visible.
#AIEngineering #MachineLearning #SoftwareEngineering #AgenticAI #Anthropic #Claude #PromptEngineering
