Personal insight: how a humbling AI debugging moment changed Glen’s engineering workflow and what the real skill shift looks like
Three Hours vs. Two Minutes
I’ve been building ML systems long enough to have opinions about most things. Distributed training, gradient accumulation bugs, flaky data pipelines. I’ve seen enough failure modes that I stopped being surprised by them.
Then I spent three hours staring at a distributed training failure last month and came up completely empty. No root cause. No real leads. Just a growing sense that I was missing something obvious.
Out of mild desperation more than strategy, I described the problem out loud to an AI assistant. Not typed. I talked it through as if I were onboarding a junior engineer who needed full context.
It found the issue in under two minutes.
That moment changed how I work more than any conference talk or technical book has in years.
What Actually Happened
The bug itself was mundane in hindsight. A subtle interaction between gradient synchronization timing and a custom callback that only appeared under specific worker counts. The kind of thing that hides in the gap between what you expect the system to do and what it actually does.
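My exact failure isn't reproducible here, but the class of bug is easy to sketch. Below is a toy, framework-free Python simulation (all names and numbers are mine, not the actual system) of one way a callback can make results depend on worker count: a callback that clips gradients locally, before the cross-worker average. Clipping and averaging don't commute, so each worker looks correct in isolation while the synced result quietly changes with the number of workers.

```python
# Toy illustration (hypothetical, not the actual bug): a per-worker
# "callback" clips gradients locally BEFORE they are averaged across
# workers. Clipping and averaging do not commute, so the effective
# update silently depends on how many workers the batch is split over.

def clip(g, limit=1.0):
    """Local gradient-clipping callback (runs before sync)."""
    return max(-limit, min(limit, g))

def synced_grad(per_sample_grads, num_workers, clip_before_sync):
    """Shard per-sample grads across workers, optionally clip locally,
    then average across workers (simulating an all-reduce)."""
    shard = len(per_sample_grads) // num_workers
    worker_grads = []
    for w in range(num_workers):
        samples = per_sample_grads[w * shard:(w + 1) * shard]
        g = sum(samples) / len(samples)      # local mean gradient
        if clip_before_sync:
            g = clip(g)                      # callback fires pre-sync
        worker_grads.append(g)
    return sum(worker_grads) / num_workers   # "all-reduce" average

grads = [0.2, 0.3, 2.9, -0.4, 0.1, 0.5, 3.1, 0.3]

# Same data, same callback, different worker counts:
for n in (1, 4, 8):
    print(n, "workers:", synced_grad(grads, n, clip_before_sync=True))
```

With one worker the callback sees the full-batch mean and the clip never fires; with eight, it clips two outlier shard gradients and drags the synced update down. No single worker's code is wrong on its own, which is exactly why this class of bug survives review and only surfaces at specific worker counts.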
The AI didn’t have magical access to my codebase. It had my description. And because I was speaking out loud, I gave it full context without the shortcuts I normally take when typing. That completeness is what made the difference. The tool was good. My articulation was better than usual.
That’s the part most people miss when they dismiss these tools or over-credit them.
The Real Skill Shift
The old debugging workflow most engineers run is roughly: hit a wall, dig through documentation, skim Stack Overflow, maybe ping someone senior, eventually brute-force a path forward. That loop works, but it's slow, and its speed scales with seniority, which means junior engineers stay stuck longer.
The new workflow is different in one specific way. You externalize your thinking earlier and more completely. You stop treating the problem as something to solve in your head before asking for help. You get the problem out into words fast, with full context, and you use the response as a starting point, not a final answer.
This sounds simple. It is not natural. Engineers are trained to appear competent. Thinking out loud feels like admitting you don’t already know the answer.
What Paul Conyngham Did for His Dog
I came across a story recently that made my debugging moment feel small by comparison. A person named Paul used ChatGPT, Gemini, and Grok to help design a personalized mRNA cancer vaccine protocol for his dog Rosie, whose cancer had been missed for roughly eleven months before progressing badly. After Rosie went through chemo, immunotherapy, and multiple surgeries, Paul had her normal DNA, tumor DNA, and tumor RNA sequenced to identify what was actually driving the cancer. AI helped him design the analysis pipeline, interpret results, and narrow the target list. Labs and scientists handled the actual manufacturing, ethics approvals, and treatment. Months later, some tumors began shrinking.
One person, operating more like a research team than an individual. That framing from Min Choi’s coverage of the story is exactly right, and it’s the same dynamic I experienced at a much smaller scale.
The Leverage Is Real, But So Is the Gap
What I keep coming back to is this: the tool amplifies what you bring to it. Paul brought months of determined learning and genuine domain curiosity. I brought three hours of accumulated context about a specific system failure. In both cases, the AI had something real to work with.
When people treat these tools as search engines with better prose, they get mediocre results. When they treat them as a thinking partner who needs full context and clear framing, the results shift noticeably.
The skill that matters now is not knowing every answer. It’s knowing how to explain a problem completely enough that any intelligent entity, human or otherwise, can engage with it usefully. That is a craft. It takes practice.
I’m a better engineer for that humbling two-minute moment. The ego hit was worth it.
Sources & Further Reading
#AIEngineering #MachineLearning #SoftwareEngineering #DeveloperTools #MLOps
