Think about your experience reading the newspaper: on most topics, the quality of the journalism, the insights and the perspective hit the bar for you. That’s why you read, after all.
Except in the rare cases when there’s an article about your area of expertise. Then the Emperor has no clothes. You can see where all the shortcuts and generalizations are, all the misses the journalist made, the questionable choices of expert sources.
But does that stop you from reading the newspaper? Of course it doesn’t.
In a discussion group that I’m part of, one member suggested that this is how we should think about AI: it’s not perfect, but it is so good so often that we shouldn’t let the 10% of the time where we can see the flaws keep us from using the tool (read: keep us from reading the newspaper).
If you’re still stuck on this side of the fence, it might help to personify your AI a bit: move from “I’m going to use ChatGPT/Claude/Perplexity, etc., for this task” to “I have access to a 90% expert across any topic I can think of.”
I’ve already shared my ongoing use of ChatGPT as a physical therapist, which is still my favorite use case.
This weekend, I used ChatGPT as an Apple Genius Bar Employee—because making an appointment at, and going to, the Apple Genius Bar is a hassle.
I had an old, powerful Mac that my son had used, and I wanted to wipe it clean. It was not playing along.
First, my son had partitioned the hard drive, which created a series of problems. Then the operating system refused to update; it took six different attempts at that problem to get it solved. Then, with a new OS installed, the iCloud login wasn’t working (because the laptop is for my daughter, and age restrictions with Family Sharing didn’t allow her to log out). Etc., etc., etc., until I solved the problem a few hours later. All of this with ChatGPT calmly troubleshooting with me, providing a series of options, being endlessly patient when I asked new questions or corrected it. I’m positive I would have failed at this task a year ago with just Google search.
The laptop is beside the point (especially because, once I’d solved the problem, we discovered that the battery life was terrible….argh). The point is to think about what it means to have access to this kind of expertise: the best gardener, the best physical therapist, the best coding instructor, the best brainstorming partner.
Better yet, that expertise doesn’t have to be generic (though the generic is pretty amazing). Seth Godin has created a series of personas on Claude, each of which has been taught to respond like one of the greatest thinkers and doers of all time.
So if you have a question for Charles Darwin, Frederick Douglass, Steven Pressfield, Seth Godin, Zig Ziglar, Annie Duke, Carol Dweck, Clayton Christensen, David Allen, Mahatma Gandhi, Kevin Kelly, Marcus Aurelius, Simone Biles, Tim Ferriss, Sun Tzu, Pema Chödrön, or 36 other world-shakers, the answers are at your fingertips.
Try spending a week carrying around the idea, “I have access to a 90% expert on any topic in the world.”
Choose to act on that idea by consulting that expert on a real problem you’re facing.
I promise you you’ll get great (but not perfect) answers fast, in ways that might just blow your mind.

Sasha –
I think you’re on the money, as always, with your insights about the value of AI. But the newspaper analogy from the discussion group is not “90% there”, but fundamentally flawed. (I will now write a mid-length novel to justify that opinion.)
As you know, the reason Seth’s AI card deck will work beautifully is that it’s not really about chatting with Zig Ziglar or Thelonious Monk at all. Instead, it’s about tapping into the core principles and fundamental mental models that underlie their work.
The AI probably doesn’t know enough about the thinking of any of these public figures (or about us) to tell us exactly what they would do. But it knows more than enough to give us helpful perspectives and surprising insights. A slightly zoomed-out and “back to basics” approach here gives us more to get our teeth into than diving into the nitty-gritty, or getting bogged down in irrelevant detail. This is where the “90% there” approach might actually be 100% perfect.
But there are many other situations where AI, like the newspaper, can seem to know its stuff but actually be leading the uninformed reader on a wild goose chase. As with human analysts, it’s possible for AI to apply a great deal of reasoning out of context, and therefore miss the boat entirely. What makes this particularly tricky are the situations where one tiny missing detail can change the outcome.
In these scenarios, there is no “90% there”… there’s either a precision-targeted approach or a failure. A brain surgeon who cuts in 90% of the right places, for example, isn’t going to have a job for very long.
Alternatively, and more relevantly to those of us who aren’t brain surgeons, putting together a website “about me” page that sounds 90% right is not much of a win… when we consider that all of the real value is in the other 10%. (Bearing in mind competitors can just as easily use AI to get 90% of the way there, too. The internet is overflowing with “good enough, I guess” website blurb.)
So: it’s not only life-and-death situations where AI can trip us up.
Using AI to diagnose a broken computer is a bit of a red herring, as there’s a relatively simple list of possibilities to consider, and the fixes either work or they don’t. It’s an extremely valuable and helpful AI task, but one that is nevertheless mostly about putting together a troubleshooting sheet. It’s a different sort of work.
So overall I think it’s less about AI being “90% there” about just about everything, and more about AI being a perfect tool for some tasks and an inappropriate one for others.
Certainly, my experiments with prompting it so far have revealed that it’s far better at analysing the core strengths and weaknesses of human work than it is at creating a superior replacement. Which I guess makes sense: spell-checking an article, for instance, or checking an argument for fundamental flaws, is more straightforward than the more nuanced work of writing the piece in the first place, particularly if the article in question breaks (or knows when to ignore) some of the rules the AI has been trained on.
The concern here is not that people believe AI will do a perfect job. Instead, it’s that they know it’s imperfect, but conclude that “90% there” is close enough to “100% there” to be a useful substitute.
I think that’s often a correct conclusion… but far from always. The 10% can sometimes be the difference between absolutely right and utterly wrong.
Dan – so much interesting stuff in here. I wonder, though: is the problem with the human and our ability to tell where 90% is and isn’t good enough?
I do struggle with the assertion that the AI can be “totally wrong”; perhaps “mostly right but mostly useless” is the danger?