I’ve spent the last two weeks running the Claude 4.6 model inside Cursor, and what I found was not what I expected.
The point that hit me hardest was not speed. It was the visibility of the reasoning process. When I asked it to tackle a non-trivial problem, it did not just produce output. It narrated its approach — explaining why it was choosing one pattern over another, flagging where it was uncertain, noting which parts of my project context it was drawing from. Reading that chain of thought, I found myself doing something I had never done with an AI code assistant before: actually learning from it.
I’ll be honest: it was good enough to make me wonder, seriously, whether the years I spent sharpening my expertise still matter.
Then the loop started. And everything changed.
The Wall: Watching the Model Circle
I hit a decision point. Nothing dramatic — just the kind of ambiguous, slightly underspecified problem that comes up in real projects all the time. The requirements were clear on paper, but they left the technical questions open — the kind that don’t have a right answer until you know more about the nuances of the system you’re building. There were two or three approaches, each with meaningful trade-offs, and no clear winner among them. The model’s proposal was not ideal: it did not fully satisfy me as the approver.
This is where I first saw the model’s limit.
Rather than stopping to ask for clarification — or acknowledging that the decision point was inherently ambiguous — the model chose a path and ran with it. When that path hit a wall, it tried a slightly different variation of the same path. Then another variation. Then it circled back to an approach it had already abandoned, rephrased just enough to feel like a fresh attempt. And because the model produces output at machine speed, it took time for me to realize what was going on. The chain of thought, so useful when the model was cruising, became a slightly distressing read: I could watch it repeat itself, with complete confidence.
The token count climbed. In practical terms that meant cost, since I was building cloud infrastructure that ran data transformation through a second LLM. Every iteration of the loop was burning money — not catastrophically, but visibly. And more importantly, we were not getting closer to a solution. We were orbiting one.
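The arithmetic of a retry loop is worth making concrete. A back-of-envelope sketch, using hypothetical per-token prices (real rates vary by model and provider, so treat these numbers as placeholders):

```python
# HYPOTHETICAL prices, for illustration only -- check your provider's rates.
INPUT_PRICE_PER_MTOK = 3.00    # assumed $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # assumed $ per million output tokens

def loop_cost(iterations: int, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of `iterations` retries, each resending
    roughly the same project context and emitting a fresh attempt."""
    total_in = iterations * input_tokens
    total_out = iterations * output_tokens
    return (total_in / 1e6) * INPUT_PRICE_PER_MTOK \
         + (total_out / 1e6) * OUTPUT_PRICE_PER_MTOK

# Five circling attempts, each re-reading ~40k tokens of context
# and emitting ~4k tokens of "fresh" variation:
print(f"${loop_cost(5, 40_000, 4_000):.2f}")  # → $0.90
```

A dollar here is not the point; the point is that the cost scales linearly with iterations while the progress does not scale at all.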
What the model lacked, as I now understand, was the instinct to step back. It optimized locally and relentlessly. It is extraordinarily good at the next step. What it does not have is the meta-awareness to recognize when the next step is the wrong unit of analysis — when the situation demands a different map, not a better path through the current one.
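That missing step-back instinct can be approximated mechanically from the outside. A toy sketch (all names hypothetical, and a real harness would compare attempts with embeddings or edit distance rather than a bag-of-words hash): fingerprint each attempted approach so that cosmetic rephrasings collide, and flag the collision before the fifth lap.

```python
import hashlib
import re

def fingerprint(attempt: str) -> str:
    """Reduce an attempt description to a bag-of-words hash,
    so reworded repeats of the same approach collide."""
    words = sorted(set(re.findall(r"[a-z0-9]+", attempt.lower())))
    return hashlib.sha256(" ".join(words).encode()).hexdigest()

class LoopDetector:
    """Flags when an agent revisits an approach it has already tried."""

    def __init__(self) -> None:
        self.seen: set[str] = set()

    def is_repeat(self, attempt: str) -> bool:
        fp = fingerprint(attempt)
        if fp in self.seen:
            return True
        self.seen.add(fp)
        return False

detector = LoopDetector()
detector.is_repeat("retry the batch job with smaller chunks")   # False: new approach
detector.is_repeat("with smaller chunks, retry the batch job")  # True: reworded repeat
```

The model optimizes the next step; a check like this asks whether the step has been taken before — which is exactly the question the model never asked itself.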
The Human Intervention: Finding the Shortcut
I recognized the loop because I have been in loops before. Not only with AI — with my own thinking, with team brainstorming sessions, with problem-solving marathons when we were going in circles. Experience teaches you to feel the texture of circular motion. There is a particular quality to it: the energy stays high but the distance traveled starts to feel wrong.
I stopped the run, not with frustration — with curiosity. I reviewed the model’s latest suggestions, went back to scripts on GitHub, and asked myself the question it had not asked: what assumption is baked into every single one of these attempts? Once I framed it that way, the answer came quickly. I simply applied human logic and validated the issue at an architectural level.
I rewrote the prompt, advising the model of two possible approaches to consider. That analysis took me about fifteen minutes. The model’s response took less than a minute. The solution that came out was clean, direct, and completely in line with my expectations.
This is the part of the story that I keep coming back to. What happened was a true example of teamwork. My experience let me see the shape of the problem in a way the model could not. The model’s capability let it solve the problem — once the shape was clear — in a way I probably could not have matched for speed or precision.
The Takeaway: Synergy, Not Replacement
Most people are still asking whether AI will replace them. That is the wrong starting point. Start asking what you can offer that no model can learn from a training dataset.
Large language models have something remarkable: internet-wide knowledge, available on demand, applied with tireless consistency. Ask a model about design patterns and it has read more on the subject than any human ever will. Ask it to implement something well-specified and it will do so faster and more accurately than most specialists. This is genuinely powerful, and pretending otherwise is just denial.
What experienced humans carry — the thing that took years to build and cannot be downloaded — is the ability to find shortcuts. Not shortcuts in the lazy sense. Shortcuts in the expert sense: the ability to look at a tangled situation and see, almost instinctively, the move that cuts through it. The ability to recognize that you are in a loop before you have completed the fifth iteration. The ability to reframe rather than retry.
A good analogy might be a brilliant new hire who has read every textbook ever written and can execute any well-defined task with extraordinary speed. The value they bring is real. But they need a senior colleague who has shipped things, broken things, and rebuilt things in the current environment — to tell them when the task itself is wrong. That senior colleague is not competing with them. They are completing the puzzle.
How to Lead Your AI Coworker
Come prepared before you open a session. AI code assistants are built to move fast, and they perform best when you already know what you are trying to build and have a rough plan in place. Once the session is running, read what the model produces during the analysis phase — treat it like an experienced coworker’s reasoning, but remember that every coworker can be wrong. It is your responsibility to lead, review, and accept the code.
For longer-term efficiency, use structured knowledge files to build persistent memory for your assistant. Keeping these updated across sessions will save you significant time as the project matures.
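What a structured knowledge file looks like in practice (the filename and sections below are assumptions — assistants like Cursor read project rules files, but the exact convention depends on your tool):

```markdown
## Architecture decisions
- Data transformation runs through a second LLM call; keep its prompt
  templates in a dedicated directory, never inlined in pipeline code.

## Known dead ends
- Do not retry schema inference on malformed rows; route them to a
  quarantine bucket instead. (We looped on this early in the project.)

## When uncertain
- If two approaches have comparable trade-offs, stop and ask the human
  reviewer before committing to either.
```

The "known dead ends" section is the one that pays for itself: it is the written form of the step-back instinct the model lacks on its own.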
As you work, think carefully about the horizon of the task. Exploratory pilots in unfamiliar territory are fair game — evaluate them as a user would. But if the work touches security, compliance, or governance, do not rely on the AI alone. Bring in the right experts first.
Finally, build in regular pauses to assess progress from a human perspective. These moments save budget, keep you in control of the process, and give you space to refine your prompts based on what is actually working.
A Genuine Hope for Job Security
I want to be careful here, because false comfort is worse than honest discomfort. There are categories of work that AI is going to absorb, and the people doing that work will need to adapt. I am not here to tell you everything is fine.
But what I can tell you is this: if you have a solid understanding of what you are trying to achieve, and you are willing to use AI properly as a coworker, the results will surprise you. My outcome was that together — my judgment and Claude’s execution — we built something neither of us would have built alone. In less time than I would have taken solo, with higher quality than the model would have reached without intervention.
This skillset is real and learnable. It is knowing how to prompt well, yes — but more than that, it is knowing how to read a model’s reasoning and spot where it is drifting. And underneath all of it: the instinct for when to let it run and when to intervene.
I broke the model out of its loop because I had seen loops before. That knowledge did not come from a training dataset. It came from years of building things, shipping things, and being wrong enough times to recognize the feeling. No model has that.
Experience is not obsolete. It is our edge. And that intersection — where judgment meets execution — is where the interesting work lives. It is also, as far as I can tell, where the jobs worth having are going to be.