Last week I was at an HBS Club of New York event, walking the floor between startup booths, when I got into a conversation with an event sponsor. Inevitably, we started talking about AI, and she told me she was basically running out the clock. Her company would eventually figure out that AI could do a version of her job. She knew it. She was at peace with it. Financially she's fine, and honestly, she's not wrong to be thinking about retirement.

The reality is that publicly traded companies are sitting on a lever (a significant reduction in headcount costs), and with each passing quarter it gets harder to ignore. Professionals who think they have five or six years of runway may have less. That's meant to be motivating, because the window to build AI fluency is open right now, and most people haven't taken advantage of it yet.

Naval put it simply last week:

Seniority doesn't protect you. Neither does a good resume. The divide that's opening up in white-collar work is simpler and in some ways more democratic than the ones that came before it. You're either developing real fluency with these tools or you're falling behind the people who are.

Last week I made the case that the 80% of executives who report no AI productivity impact are measuring the wrong thing at the wrong time. This week I want to get specific about why the gap exists at the individual level, and what to do about it.

What the data actually shows

A recent Anthropic study mapped theoretical AI capability against observed AI usage across every major occupational category. The blue area on the chart below is what AI could theoretically do in a given field. The red area is what people are actually doing with it.

The gap is striking. In the fields where knowledge workers are most concentrated (legal, finance, management, business), observed usage is a fraction of theoretical capability. Each step outward on that chart, as the red approaches the blue in any given segment, represents something close to a mini paradigm shift within the larger one. The nature of work in that field changes. The roles change. The way professionals approach their day changes.

The segments with almost no coverage (construction, agriculture, installation and repair, transportation) reflect fields where the capability simply has not been built yet. Robotics will eventually close that gap, but that is a different conversation and a much longer timeline. If you are reading this newsletter, you are almost certainly in the blue zone. The question is how much of it you are actually using.

Most people approach AI like a piano

When you sit down at a piano and press a key, you get a note. Press a chord and you get a chord. The instrument responds immediately and correctly to whatever you put in. There is almost no friction between intention and output. Piano is hard to master but easy to start.

Guitar is different. Anyone who has picked one up for the first time knows it sounds terrible. You press your fingers against the strings and instead of a chord you get a muffled thud. The problem is finger strength, wrist position, a feel for how much pressure to apply and where. The instrument requires you to develop something before it gives you anything back. I've been playing for close to twenty years, and I still remember how frustrating those first weeks were, knowing what I wanted to hear and being completely unable to produce it.

AI is a guitar. Most people treat it like a piano.

They open ChatGPT or Claude, type a sentence or two, and when the output comes back generic or slightly off, they conclude the instrument has a ceiling. What they're actually running into is the learning curve. The tool responds to what you give it. Give it thin input and you get thin output. The frustrating part is that the interface gives you almost no signal about this. There is no red light, no error message, nothing to indicate your prompt was the problem. You just get a mediocre response and draw your own conclusions.

The feedback problem

With a guitar, you eventually figure out that the muffled chord is your fingers, because the sound is immediate and physical and you can feel what's wrong. With AI, the feedback loop is nearly invisible. The output looks like a real answer. It's grammatically correct, organized, sometimes pretty good. You have no way of knowing how much better it could have been with more context.

Here's the thing almost nobody does but anyone can: ask.

After you get a response you're underwhelmed by, type: "What context was I missing from my prompt that would have made your answer better?" The model will tell you exactly what it was working around, what it had to assume, and what you could add next time. Most people don't have the instinct to do this, partly because they don't know they can, and partly because asking for feedback from anyone or anything requires a kind of ego check that doesn't come naturally. We don't love being told we could have done something better, even by software.
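If you want to see that loop outside the chat window, here's a minimal sketch using Anthropic's Python SDK. The model ID and the first prompt are placeholders, not a recommendation; the follow-up message is the point.

```python
# A minimal sketch of the "ask for feedback" loop (Anthropic Python SDK).
# The model ID and the first prompt are placeholders; the follow-up question is the point.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # assumption: any current Claude model works here

history = [{"role": "user", "content": "Write a summary of this client meeting for the file."}]

first = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
history.append({"role": "assistant", "content": first.content[0].text})

# The step most people skip: stay in the conversation and ask what was missing.
history.append({
    "role": "user",
    "content": "What context was I missing from my prompt that would have made your answer better?",
})
feedback = client.messages.create(model=MODEL, max_tokens=1024, messages=history)

print(feedback.content[0].text)  # the assumptions the model had to work around, spelled out
```

The same two messages work just as well typed into the chat window; the script only makes the loop explicit.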

I do this constantly. I'm doing it right now, in the process of writing this newsletter, asking at each step what's missing, what's weak, what I haven't thought through. That's the muscle. It builds fast once you start using it.

Anthropic's AI Fluency Index backs this up. Conversations where people iterate, push back, and ask follow-up questions show more than double the fluency behaviors of conversations where someone accepts the first response and moves on. The people getting the most out of these tools are the ones who stay in the conversation.

What the model actually needs from you

The frame I use: you're briefing a smart new hire on their first day. They're capable, they're motivated, and they know nothing about your specific situation. Walk up and say "write a client summary" and you'll get exactly what you'd expect. Spend two minutes giving them context (who the client is, what the meeting covered, what the person reading the notes needs to walk away knowing, what tone to use) and you get something you can actually use.

The model responds the same way. It's working with whatever you give it, filling any gaps with generic assumptions. The quality of the output is almost entirely a function of the quality of the input.

Here's what that looks like in practice. A thin prompt:

"Write a summary of this client meeting for the file."

A prompt that works:

"I'm a partner at a mid-size law firm. I just finished a 45-minute call with a corporate client evaluating an acquisition. The main concerns were valuation, timeline, and a specific liability question we're still researching. My associate will read these notes and needs to know what we committed to follow up on and what's still open. Write a concise summary in the format we'd put in a client file, formal but direct."

Same request. Completely different output. The difference is the brief.
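If you'd rather run that comparison as a script than paste prompts into a chat window, here's a rough sketch using the same SDK. The model ID is a placeholder; the two prompts are the examples above.

```python
# A rough sketch: send the thin prompt and the briefed prompt, compare the output.
# The model ID is a placeholder; the prompts are the two examples above.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder model ID

THIN = "Write a summary of this client meeting for the file."

BRIEFED = (
    "I'm a partner at a mid-size law firm. I just finished a 45-minute call with a "
    "corporate client evaluating an acquisition. The main concerns were valuation, "
    "timeline, and a specific liability question we're still researching. My associate "
    "will read these notes and needs to know what we committed to follow up on and "
    "what's still open. Write a concise summary in the format we'd put in a client "
    "file, formal but direct."
)

def ask(prompt: str) -> str:
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print("--- thin prompt ---\n" + ask(THIN))
print("\n--- briefed prompt ---\n" + ask(BRIEFED))
```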

The Assignment

Take one task from your workweek, something you've already tried with AI and found underwhelming, or something you'd normally handle yourself. Write the prompt the way you naturally would. Then rewrite it as if you're briefing a capable new hire who knows nothing about your situation. Run both and look at the difference in output.

Then try one more thing: after you get the response to your rewritten prompt, ask the model what context was still missing. See what it says.

Five minutes. One task. The gap between what you've been getting and what's possible is almost entirely in those two steps.

Quick Hits

I've been spending a lot of time in Claude Code lately. It's a tool that lets you build software by describing what you want in plain language. I'm building something with it and I'll write about it properly in a future issue. For now, this Instagram clip captures the feeling better than I can in words. Some days I feel like Tony Stark. More on this soon.

There's something to the Miles Deutscher take. A few weeks ago he posted that the only way to keep up with AI is to be unemployed. The joke lands because it's partly true. Keeping up with AI does feel like a full-time job some weeks. But the answer is to get selective about what's actually useful and ignore the rest. That's what this newsletter is for.

Keep Reading