In 1817, Colonel Sylvanus Thayer became Superintendent of the United States Military Academy at West Point. The institution had instructors, mostly military officers, but it lacked standardized academic rigor. America didn't yet have a deep bench of professional scholars. Graduate institutions were few. There was no national academic class to recruit from the way universities would a century later.

Thayer had studied at the École Polytechnique in France and brought back a conviction about how education should work. He restructured West Point around a simple, demanding idea. Cadets would be responsible for their own preparation. They received their readings in advance and were expected to show up having done the work. In class, they were called on individually to demonstrate what they'd learned. The instructor's role was to test, probe, and facilitate discussion. Preparation was the cadet's job. Accountability was daily and public.

That system, which became known as the Thayer Method, has shaped West Point for over two centuries. It's close to how the Harvard Business School case method works. Different tradition, same underlying insight. Students prepare independently so that class time, led by an instructor through guided discussion, actually deepens their understanding rather than introducing it for the first time.

I've been thinking about the Thayer Method a lot lately, because the parallel to this moment in AI is hard to ignore.

The answer key is available.

The AI tools available to professionals today (Claude, ChatGPT, Gemini, Perplexity, and dozens more) go well beyond productivity software. They are interactive, endlessly patient learning environments. You can ask Claude to teach you how to use it more effectively. You can ask it to interview you about gaps in your understanding of a problem. You can hand it a complex document and have a structured conversation about what it means.

The textbook is sitting on your desk. The answer key is available. And most professionals haven't cracked the cover.

I get it. There are real reasons.

The real barriers are worth examining.

If you're a knowledge worker who hasn't engaged seriously with AI, you're probably navigating one or more of these.

"My company restricts what I can use."

Many do, and for legitimate reasons, particularly in regulated industries like law and financial services. But corporate policy governs what you do with client data on company systems. It doesn't prevent you from spending thirty minutes on a Saturday morning with a personal device, exploring what these tools can actually do. The learning transfers even if the specific tool doesn't.

"It still makes mistakes."

It does. But the distance between the early ChatGPT releases and the current generation of models is enormous. These systems reason, handle nuance, and self-correct in ways that would have been difficult to imagine three years ago. More importantly, the quality of AI output is directly tied to the quality of your input. Your context, your specificity, your thinking. Dismissing the technology based on an early or shallow experience is like test-driving a car in first gear and concluding it's slow.

"I don't have time."

This is the one I take most seriously. Consultants, attorneys, physicians, new parents. Free time is a luxury. But the investment required is smaller than most people assume. Twenty minutes of focused experimentation compounds quickly when you know where to direct your attention. And candidly, the less time you have, the more you stand to gain from tools that are built to save it.

"It's going to replace me."

Every major technological shift has provoked this fear, and it has never played out the way people expect. In the early 1800s, roughly 90% of working Americans were farmers. Mechanization didn't create mass permanent unemployment. It created entirely new categories of work that no one could have predicted. AI will displace some tasks and reshape some roles. The professionals most at risk are the ones who never built the fluency to adapt.

Who this newsletter is for.

This newsletter is for knowledge workers who know they should be engaging with AI more seriously and haven't found the on-ramp. My business school classmates, some of the smartest people I know, are barely scratching the surface with a free version of ChatGPT. They're about as AI-literate as my parents. The gap between what's possible and what people are actually doing is enormous.

If you've been meaning to get serious about AI but keep pushing it to next week, this is your weekly reason to stop pushing. The goal isn't learning for the sake of learning. It's learning to get ahead, or at the very least, not fall behind. These tools can save you time, money, or both. But that only happens if you put in the effort to learn them yourself.

Each week will focus on practical application: how I'm using AI, how others are using it, and what's actually worth your time.

I'm not a journalist or a VC. I'm an entrepreneur. I'm a West Point graduate (Class of 2014), so the Thayer Method is how I learned for four years. After serving on active duty in the Army, I completed the HBS/HKS joint degree program, then went to work as an early employee at venture-backed startups. I started an MBA admissions consulting business in 2022 and am currently building an AI-native software platform.

My aim is to be a useful guide. Structured enough to save you time, honest enough to be worth your trust.

The Thayer Method asks one thing of you. Take responsibility for your own learning. The resources are extraordinary. The tools are accessible. The cost of waiting compounds quietly until it doesn't.

No one is coming to teach you AI. But you have everything you need to teach yourself.
