Stay Curious. The AI Capability Gap Is Closing Faster Than You Think.
Ted Lasso said a lot of smart things, but “stay curious” might be the best advice for today.
Anthropic released a fascinating study today on AI, employment, and the future of work, and to paraphrase William Gibson:
The future is already here, it’s just not evenly distributed.
Yes, AI is affecting the world of work, but maybe not as much as, or in the ways, we thought. There are a lot of takeaways from the report. Christopher Penn has good insights based on who is being hit hardest that you should read—on top of reading the work yourself—but when I asked Gemini for a summary and Claude for some post ideas, I got something…interesting.
Post Idea 1: The Gap Between What AI Can Do and What It’s Actually Doing
The core insight from the report: Theoretically, AI could handle 90%+ of tasks in Computer & Math and Office & Admin roles.
In reality, it’s covering only 33% of Computer & Math tasks right now. There’s a massive gap between theoretical capability and actual deployment—and that gap is closing.
I started to think about that in terms of both how I use AI and how I teach AI. A lot—a lot—of technology is taught with a pretty strict “recipe” style. To make a Word document do this, then this, and then this final thing. To use Outlook Calendar for meetings, here are the steps. It’s formulaic. It’s structured. It’s strict.
Is it any wonder people have trouble learning new tools? People learn these strict ways of getting the job done that don’t leave room for when technology inevitably changes. The toolbar is different. The menus change. And people are lost.
But we can’t teach AI that way. It’s too new. There is too much that is changing and evolving, sometimes literally while we’re in the middle of using it. This is the thesis here—there is still a gulf between what AI could do and what it can do.
But that gulf is getting narrower and narrower every day, so when we teach AI we have to show what’s possible now, what’s going to be possible in the near future, and how you’ll start bridging that gap with what you’re learning today.
Here is a perfect example—this very post.
My ghostwriter is too much me.
According to everything I was reading, I thought AI was going to be writing all my content for me by now. All I had to do was train it on my voice with examples, give it somewhere to start, and like magic, I’d have a post that sounded like me.
Not so much.
I had a whole workflow—I even wrote about it—record, transcribe, clean, draft.
The drafts were terrible.
Not broken. Not wrong. Terrible in a very specific way—they were a caricature of my writing. The AI read my transcript and produced the most “Tris” version of the post based on the words I had already given it. It followed the “rules.” It used the cooking metaphors (too many). It used a lot of my geek phrasing (too often). It didn’t know how to listen to what I said in the transcript and flesh it out. It read the transcript and rewrote the whole damn thing.
No. For the love of God no.
After hacking and editing posts too many times, I stopped asking the ghostwriter to draft anything for me. I took my cleaned transcript and edited it myself—just like I’ve been doing for 20 years. And here’s the part I want to be honest about:
I almost “wrote off” AI as a writing partner. I was becoming convinced the whole “AI ghostwriter” thing was hype dressed up as workflow advice. I almost cemented that opinion and would have blithely moved on, oblivious to the fact that things were getting better and better right under my nose.
I didn’t. And I’m glad I didn’t because we wouldn’t be right here, right now.
What I actually did in the gap
While I wasn’t using AI to draft my posts, I was doing something else.
Watching. Reading. Learning. Experimenting.
I wasn’t convinced I could get AI to write like me, but I was also seeing way too many workflows that were working for people. So I kept my eye on the space. Tried voice learning systems. Voiceprints. I built iterative diff comparisons between AI drafts and my own, then used the differences to learn my own style better.
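That diff-comparison step is simpler than it sounds. A minimal sketch of the idea, using Python’s standard `difflib` (the sample draft text here is invented for illustration, not my actual workflow files):

```python
# Sketch of the "iterative diff" idea: compare an AI draft against my
# own edit of the same post and surface only the lines I rewrote.
# Those rewritten lines are the style signal worth studying.
import difflib

# Hypothetical example lines; in practice these would be loaded from files.
ai_draft = [
    "In the grand kitchen of content, we simmer ideas until done.",
    "The gap between could and can is closing fast.",
]
my_edit = [
    "AI could do this already. It just doesn't, yet.",
    "The gap between could and can is closing fast.",
]

# unified_diff marks removed lines with "-", added lines with "+",
# and unchanged lines with a leading space.
diff = list(difflib.unified_diff(ai_draft, my_edit, lineterm=""))
for line in diff:
    print(line)
```

The unchanged line drops out of focus, and what remains is exactly the place where my voice and the AI’s voice diverged.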
Is it perfect? Nope. Is it better? A lot, and getting better all the time. The only reason I could today:
- Have a Claude Skill pull several ideas for posts from the Anthropic report that were pretty good starting places.
- Have two virtual customer personas tell me what they thought of the idea and what changes were needed for it to really click for them.
- Draft several title ideas. I admit I was stuck for a title—which is very unlike me.
- Draft this post that you’re reading (It’s heavily edited btw).
Is because I didn’t cement into my mind and workflow the idea that AI would never be able to draft content for me. This is the mindset we need to teach. “Learning AI” is about as “one and done” as eating a single Lay’s potato chip.
The point everyone’s missing
The Anthropic report notes that there hasn’t been a lot of unemployment attributable to AI—yet. There might be fewer jobs opening at the bottom, but people aren’t losing their jobs in droves like we all feared.
Whew.
Except that’s not the interesting story. The story isn’t what’s happening now; it’s what’s happening soon. AI could theoretically handle over 90% of knowledge-work tasks. In reality, it’s not even close—except for programming. There’s a canyon between what AI can theoretically do and what it’s actually doing. Want AI to take a report, like the Anthropic one here, draft an executive summary for your leadership team, create a deck hitting all the “what this means for us” points, and email it to them? Sure, it could do all that.
But it would suck.
And we all know it. But, that’s now; the canyon between “could” and “can” is closing fast.
The researchers are explicit about it—capability advances, adoption spreads, deployment deepens. The gap narrows. The “no unemployment spike” finding is true, but it’s also a snapshot of now, not a prediction of where it’s going.
The people reading “no mass unemployment yet” as “nothing to worry about” are watching the wrong number. The gap is the number. The rate of change of the gap closing is the number to watch.
That’s where better AI training comes in.
Mental flexibility is the real AI skill that matters
Here’s what the Anthropic report can’t tell you, but the gap data implies:
The limit that stops you today will not be the limit that stops you in six months.
A third of knowledge-work tasks covered today. More tomorrow. Which means the question isn’t “is AI good enough to replace this?” It’s “what specific limits are there right now, and how fast are they falling away?”
Staying ahead requires staying curious. It’s seeing today’s limitations as temporary, not as “this is just how AI works.” It means understanding why AI produces a caricature of your writing—not just that it does. It requires knowing enough about how these systems work to know when they’ve quietly gotten better at something they were bad at last month.
Generic prompting tips won’t get you there. Knowing which model to use won’t get you there. Understanding the concepts. Understanding the limitations. Watching how they evolve.
That will get you there.
That’s the wave worth getting ahead of. Not because AI is going to take your job tomorrow—it probably won’t, though it might in the near future—but because the gap between “could” and “can” is closing fast. And the people who understand why it is where it is right now will be the ones who know what to do when it moves later.
Stay curious. Stay flexible.
If you’d like to get AI training that gives you the tools for today and the mindset for tomorrow, check out what we’re doing at Peak Intelligence.