When AI Learns to Ask Questions and You’re Still Giving Answers

The Wrong Problem at the Wrong Altitude

Here’s the thing: we spent two years building the wrong muscle. We focused on capability when we should have been thinking about systems.

Don’t get me wrong: prompting matters. Learning how to coax better output from Claude or wrangle multi-agent workflows is table stakes at this point. But if you’re still thinking that better prompts alone will solve your AI productivity problem, you’re playing the wrong game. The models are getting smarter faster than our mental frameworks can adapt.

In conversations I’ve had with everyone from frontline workers to C-suite leaders, there’s a pervasive feeling that everyone else has figured out the secret handshake. They haven’t. We’re all looking over each other’s shoulders wondering what we’re missing. The answer isn’t a magic prompt template or a new tool. The bottleneck has shifted from our ability to learn tools to our ability to think differently about systems.

From Craftsperson to Conductor

The shift requires adopting what I call the engineering manager mindset, and this applies whether you’re writing code or managing patient intake workflows. You’re no longer the person doing all the work. You’re responsible for orchestrating a team that produces reliable output.

Think about what an engineering manager actually does. They define guardrails. They set clear endpoints. They own throughput and quality. They know when to intervene and when to let the team run.

Now you’re doing the same thing, except your “team” consists of tireless agents prone to confident incorrectness. They need clear missions, explicit definitions of done, and constant quality checks. The discipline is different because you’re managing silicon instead of humans, but the responsibility remains the same.

Let me be honest: this transition feels like grief for many people. If you built your career on being the person who writes the perfect PRD or crafts elegant code by hand, letting go of that identity hurts. It’s a real loss. But it’s also unprecedented leverage if you’re willing to make the shift.

Stop Doing Homework for the AI

Kill the contribution badge. This legacy habit is leaving massive productivity gains on the table.

We have this deep instinct to show up prepared, to prove we contributed something meaningful before engaging with AI. I see engineers spending hours structuring their thoughts before opening Claude. I see healthcare administrators organizing their notes into perfect frameworks before asking for help with scheduling optimization.

This was genuinely useful in early 2025 when models couldn’t handle messy input. But the models have gotten better, and our habits haven’t updated. With systems like Claude that excel at progressive intent discovery, your comprehensive pre-work is often just premature structure and noise.

The best builders recognize this shift. They bring unstructured problems to AI earlier in the process. They accept that the model might be better at organizing their thinking than they are. They optimize for overall velocity, not personal contribution points.

Now, if you’re kicking off a complex multi-day agent build in healthcare IT, yes, you need a solid spec. But that’s not most tasks, and it’s not most people.

The Altitude Problem Nobody’s Talking About

You need the ability to deep-dive strategically. This is what separates builders who thrive from those who just burn tokens.

The conventional wisdom says you either understand every detail, as in traditional development, or you vibe code and ship what you don’t understand. Both extremes miss the mark. The real skill is knowing when to change altitude deliberately.

Think of it like flying a plane. We all used to cruise at our designated altitude. Product managers stayed high level. Engineers stayed in the code. Healthcare administrators stayed in their operational layer. Everyone kept to their lane.

Now there’s turbulence everywhere. You need to descend to examine specific terrain, then climb back to 30,000 feet to think about systemic patterns. A hospital CIO might need to ladder down into why a specific patient portal checkout flow breaks, then ladder back up to understand what prompting patterns create these issues across multiple agent-built systems.

The worst vibe coders stay permanently high. They ship features at incredible speed but create what Addy Osmani calls “archaeological programming”: something future developers will have to excavate with tiny brushes and patience. In healthcare, this creates experiential debt. You’re moving fast, but you don’t deeply understand the patient experience you’re creating.

The worst traditional developers have the opposite problem. They insist on understanding every line, and their throughput has hit a ceiling. In a hospital setting, they’re still manually reviewing every scheduling optimization while AI-augmented competitors process thousands of appointments.

The best builders move fluidly between altitudes. And this isn’t just for engineers. The best non-technical people using AI also shift between hands-on spreadsheet work and high-level abstraction about data patterns. This scales across roles.

Time Is Your Competitive Advantage

Create temporal separation. This sounds like basic productivity advice, but it’s actually about cognitive architecture.

You need two distinct modes of engagement with AI systems. First, there’s execution mode: the flow state where you’re coordinating multiple agents, tabs are flying, features are shipping, hours disappear. You’re in the build, constantly context-switching between agent outputs.

But you also need reflection mode. Your brain requires literally different chemistry to step back and ask the hard questions. What prompts worked this week? Which agents got stuck and why? Where did I waste time on problems I could have caught earlier?

In healthcare, this distinction becomes critical. You might spend three hours in flow mode optimizing patient intake workflows across five different agent systems. But without reflection time, you won’t notice the pattern where certain demographic inputs consistently confuse your intake agent. You won’t learn from the building.

This isn’t overhead. It’s the difference between getting faster and getting better.

The Two Architectures of Quality

There are two kinds of architecture in our AI-augmented world, and most people conflate them.

The first is what I call the civil engineering pattern. These are the rules you’d put in a Claude markdown file. Your code standards. Your consistency requirements. The technical patterns that should repeat across your entire system. In healthcare, this might be HIPAA compliance rules, data formatting standards, or patient communication protocols.

You absolutely need this layer. Agents excel at following explicit conventions. Non-technical folks need this too, though their rules are harder to crystallize than engineering standards.
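
To make that concrete: for a team using Claude Code, this first layer typically lives in a project-level CLAUDE.md file. Here’s a minimal sketch of what one might contain for a patient-intake system; the specific rules are illustrative assumptions, not a compliance checklist.

```markdown
# Project conventions (illustrative sketch for a patient-intake system)

## Compliance
- Treat all patient-identifying fields as PHI: never log them or echo them in error messages.
- Any change that touches patient data must add a corresponding audit-log entry.

## Data formatting
- Dates in ISO 8601 (YYYY-MM-DD); phone numbers in E.164.
- Patient-facing text in plain language, no internal jargon or abbreviations.

## Technical patterns
- All scheduling changes go through the existing validation layer before they are persisted.
- Follow the repository's existing naming and error-handling conventions; don't introduce new patterns without asking.
```

The point isn’t the specific rules. It’s that this layer is explicit, repeatable, and exactly the kind of thing agents follow well.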

The second architecture is what Christopher Alexander called “quality without a name.” It’s why some products feel coherent while others feel bolted together. It’s why we vacation in Paris instead of Cincinnati. It’s not just the food; there’s a philosophy embedded in the design.

This second layer remains stubbornly human. You can delegate technical patterns to agents, but you cannot yet delegate the judgment about what makes something intuitively feel right. That coherent vision, that sense of taste, that’s still your job.

In healthcare applications, this distinction becomes life and death, sometimes literally. Your agents can follow HIPAA rules perfectly, but they can’t tell you whether your patient portal feels trustworthy or confusing. They can’t judge whether your appointment reminder system respects patients’ time and anxiety levels.

We have plenty of go-fast in the AI world right now. We’re desperately short on coherent vision.

You Can’t Speed Run Understanding

Accept that your experience is not compressible. This is the most counterintuitive insight that vibe coding enthusiasts keep missing.

You can absolutely speed run software development now. You can ship features with minimal bugs if you set up evals correctly and orchestrate multi-agent systems properly. That’s not the bottleneck anymore.
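
For the builders in the room, “set up evals correctly” can start far smaller than it sounds. Here’s a minimal sketch in Python with the agent call stubbed out; `EvalCase`, `intake_agent`, and the specific checks are assumptions for illustration, not a prescription.

```python
# Minimal eval harness sketch: a table of expected behaviors run against
# whatever function wraps your agent call (stubbed out here).
from dataclasses import dataclass


@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # crude string check; real evals use richer graders


def intake_agent(prompt: str) -> str:
    """Stand-in for the real agent call (hypothetical)."""
    return "Please bring your insurance card and a photo ID to your appointment."


CASES = [
    EvalCase("What should I bring to my first appointment?", "insurance card"),
    EvalCase("How do I reschedule an appointment?", "reschedul"),
]


def run_evals() -> None:
    failures = [c for c in CASES if c.must_contain.lower() not in intake_agent(c.prompt).lower()]
    print(f"{len(CASES) - len(failures)}/{len(CASES)} cases passed")
    for case in failures:
        print(f"FAILED: {case.prompt!r} (expected mention of {case.must_contain!r})")


if __name__ == "__main__":
    run_evals()
```

Even a crude harness like this catches regressions before patients do, and it grows with you as the agent system gets more complex.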

The bottleneck is that you need to be deeply familiar with what you’re building. Real familiarity takes time, more time than most of us want to admit. Your product vision has to be stable and long-term. You need to understand how you want things to work differently and why it matters.

And here’s the uncomfortable truth: we’re all in the product business now. If you’re a hospital administrator using agents to build intake workflows, you’re doing product development. If you’re in clinical operations using AI to optimize scheduling, you’re building products. If you’re in patient communications deploying automated systems, you’re a product manager whether you want the title or not.

You cannot iterate your understanding through prompting alone. The builders I know who genuinely thrive find ways to preserve an experiential loop while capturing AI’s speed benefits. They ship fast, but they stay connected to reality. They talk to patients. They watch how nurses actually use the systems. They let their understanding develop through real-world iteration, not just prompt refinement.

The Partnership Model

We’re moving from a one-way street to a two-way conversation.

For two years, we’ve been giving agents instructions. Our capability to prompt well was the constraint. Now the system itself sometimes invites us to level up. AI asks questions now, often really good ones, especially if you invite that behavior.
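
Inviting that behavior can be as simple as a standing instruction at the top of a session, something like: “Before you produce anything, ask me up to three clarifying questions about constraints, audience, and what done looks like, then wait for my answers.” The exact wording matters far less than making question-asking an explicit part of the job you hand over.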

Your job isn’t just crafting prompts anymore. It’s understanding at a deep human level what really matters about your work, then being open to unfolding that understanding as you build with AI.

This sounds abstract, but there’s a practical edge. The stable altitude that let us do our jobs the same way for years has been disrupted. Because that stability is gone, we need to double down on partnership with AI around what truly matters.

In healthcare, that sense of what truly matters becomes your North Star. Whether you’re building with literal code, optimizing spreadsheets full of patient data, or coordinating service agents, the forcing function is clear: AI is coming whether we like it or not. Scale yourself or get left behind.

The only thing that holds steady in this environment is understanding why you’re building something at a deep level and insisting that purpose comes through. Even as agents get 10x or 100x smarter, you won’t get lost if you know what you want to build.

Welcome to the New Operating System

The 2026 builder’s operating system isn’t about better prompts or newer tools. Everyone has access to the same Claude instances, the same Gemini models, the same notebook systems.

What distinguishes top performers is cognitive architecture. It’s systems thinking. It’s the ability to be both manager and maker, to dive deep and pull back to strategy, to move fast while preserving understanding, to delegate execution while owning vision.

The conventional skills of better prompting remain necessary. They’re just no longer sufficient. The mental equipment you need now is harder to acquire than a new tool or technique. It requires training your brain to think differently about abstraction, partnership, and what actually matters in your work.

For healthcare organizations, this shift is both urgent and unforgiving. Patient experience, clinical outcomes, operational efficiency – these all depend on humans who can think in systems while working with agents. The organizations that figure this out will operate at a different level than those still optimizing for better prompts.

The bottleneck has moved from capability to cognition. Time to upgrade accordingly.

Ideas or Comments?

Share your thoughts with me on LinkedIn or X.