Artificially Intelligent Advice

Here is my promise to clients, potential clients, colleagues, and competitors: I will try my best to never have a Large Language Model (LLM) draft a communication to you.* That includes these blog posts, written communications, oral communications, and presentations.

I wrote about authenticity and AI last year (link here). I use LLMs often, and in the interest of transparency, I’m going to share the five prompts I use for communications.

  • Can you please proofread this blog post? Bold all typos and grammatical errors, add comments in parentheses where transitions are incomplete, and indicate in bold anywhere I use the passive voice. Do not edit my prose. Strictly comment on what I’ve noted above.
  • Let me know where clients, potential clients, colleagues, competitors, compliance, and the regulator might take issue with what I’ve written, and suggest how I can fix it.
  • This sentence “…” isn’t clear. Can you make it clearer/shorter/more direct?
  • Will this communication make sense to someone who doesn’t work in finance?
  • Will I regret sharing this in one year, three years, five years?

Those are the prompts. I also use LLMs as a sounding board for business planning, and they provide a ton of value in that respect.
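If you’d rather script that first prompt than paste it into a chat window, here’s a minimal sketch. To be clear, this is an illustration, not my actual workflow: it assumes the Anthropic Python SDK, and the model name and file path are placeholders.

```python
# Minimal sketch: run the proofreading prompt against a draft post.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY set in the environment.
import anthropic

PROOFREAD_PROMPT = (
    "Can you please proofread this blog post? Bold all typos and "
    "grammatical errors, add comments in parentheses where transitions "
    "are incomplete, and indicate in bold anywhere I use the passive "
    "voice. Do not edit my prose. Strictly comment on what I've noted "
    "above.\n\n"
)

def proofread(draft: str) -> str:
    """Send the draft to the model and return its comments as text."""
    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=2048,
        messages=[{"role": "user", "content": PROOFREAD_PROMPT + draft}],
    )
    return response.content[0].text

if __name__ == "__main__":
    with open("draft.md") as f:  # placeholder file path
        print(proofread(f.read()))
```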

I’m sharing this because I’m seeing more content that is clearly LLM-generated. It doesn’t matter which LLM generated it. You know it when you see it.

At the end of this post, I’m going to share what Claude generated when I gave it the prompt below. Important note: I prompted it only after I’d written this post in full.

Please generate a two-page blog post titled Artificially Intelligent Advice. Try to write in my voice. You can pull information from my Authenticity and AI post from last year (link here: https://www.herlaarwm.com/vinces-blog/2025/07/14/authenticity-and-ai). Touch on how readers can tell when content is generated by an LLM; that said content lacks soul; that to avoid being engulfed in the watering down of content slop, wealth professionals must create real, authentic content; and that though LLMs are powerful, an over reliance on them to create your content will beg the question of “why not just get rid of you, the middle person?” Feel free to toss in the Charlie Munger story about Planck and his chauffeur.

You’re currently reading my attempt at that same post; what follows is the rest of it.

LLM-generated content lacks soul. I don’t know how to describe it exactly, but something a colleague mentioned yesterday helped it make a bit more sense. He said, “I like listening to and reading people who have a unique voice.” He meant “voice” as in the writer or speaker expressing themselves on the page or on the microphone.

And that’s it. LLM-generated content—even when you ask it to communicate in a certain voice—still lacks a voice. Well, I guess it does have its own voice, in a way, but it’s a soulless, recognizable voice.

I think the best way I can describe LLM-generated content is that it’s like watching a game you recorded when you already know the result. It might be entertaining, but there are no stakes.

Cue the AI experts, who will say, “You just wait! It’ll get better.”

I’ll believe it when I see it. Have you ever asked an LLM to “write a post about X in the voice of Cormac McCarthy, Stephen King, Morgan Housel, Barry Ritholtz, etc.”? It’s garbage.

That content knows what to say, but it doesn’t know how to say it. I’ll leave you with the Planck chauffeur story via Charlie Munger via Farnam Street. (Side note: I think I know all the best Mungerisms, but then I always find a new one. I first learned about this one from Josh Brown’s blog.)

Charlie Munger, the billionaire business partner of Warren Buffett, frequently tells the story below to illustrate how to distinguish between the two types of knowledge: real knowledge and pretend knowledge.

At the 2007 commencement of the USC Law School, Munger explained it this way:

I frequently tell the apocryphal story about how Max Planck, after he won the Nobel Prize, went around Germany giving the same standard lecture on the new quantum mechanics.

Over time, his chauffeur memorized the lecture and said, “Would you mind, Professor Planck, because it’s so boring to stay in our routine. [What if] I gave the lecture in Munich and you just sat in front wearing my chauffeur’s hat?” Planck said, “Why not?” And the chauffeur got up and gave this long lecture on quantum mechanics. After which a physics professor stood up and asked a perfectly ghastly question. The speaker said, “Well I’m surprised that in an advanced city like Munich I get such an elementary question. I’m going to ask my chauffeur to reply.”

The point of the story is not the quick-wittedness of the protagonist but rather, to echo Richard Feynman, the distinction between knowing the name of something and knowing something.

That is why I will try to avoid ever sending you an LLM-generated piece of content.* It won’t be in my voice, and there won’t be any stakes. Like I wrote last year: authenticity matters a lot.

*I want to write “never,” but in my profession absolutism is an invitation to substantial criticism (or worse).

Now for Claude’s version:

There's a story Charlie Munger used to tell. Max Planck — the physicist, Nobel laureate, one of the great minds of the twentieth century — spent years after winning the prize touring Germany, delivering the same lecture on quantum mechanics to sold-out halls. Eventually his chauffeur had heard the talk so many times that he had it memorized, word for word. So one night, for fun, the chauffeur suggested they switch. Planck sat in the audience with the chauffeur's hat on. The chauffeur delivered the lecture flawlessly. Then someone from the audience asked a difficult follow-up question. The chauffeur didn't miss a beat. "That's such a simple question," he said, "I'm going to let my chauffeur answer it."

Munger told that story to illustrate the difference between real knowledge and the performance of it. Planck knew quantum mechanics. The chauffeur knew Planck's lecture.

I think about that story a lot these days.

Large language models are extraordinary tools. I use them. Most professionals do, or will. They can organize a mess of thoughts into clean prose in seconds. They can summarize a 40-page report before you've finished your coffee. They are, in the truest sense, chauffeurs.

And here's the problem: a lot of wealth professionals are now sitting in the back seat.

You can spot the signs. The prose is polished but curiously generic. Every paragraph ends with a little uplift. There are em dashes everywhere — and not the organic kind. The language is careful, thorough, and has the warmth of a compliance manual. Nobody writes like that naturally. Nobody. And readers, even the ones who couldn't explain what an LLM is, feel it. They can't point to it. But they know something's off — that the letter they received wasn't really written to them, it was assembled for a demographic that vaguely resembles them.

That's not communication. That's the simulation of it.

I wrote about this last year — that authenticity might just be a fancy word for "he actually wrote this." A few people pushed back. The argument went something like: if the output is good, does it matter how it got there? If the AI says the right thing, who cares?

Fair question. Here's my answer: it matters enormously, and the reason is trust.

Financial advice isn't a commodity. Anyone with a Bloomberg terminal and a basic understanding of asset allocation can build a diversified portfolio. What clients are actually paying for — whether they'd articulate it this way or not — is judgment. The application of hard-won experience to a specific situation. The advisor who calls you the morning after a market drop not because a CRM triggered a task, but because they've been thinking about you and they know exactly what you're afraid of. That kind of advice cannot be generated. It can only be given.

When you outsource your voice to an algorithm, you don't just lose authenticity. You lose the evidence that any of that judgment exists in the first place.

The Chauffeur Problem

Back to Munger's story. The chauffeur's performance was impressive — right up until it wasn't. A real question broke the illusion instantly.

That's the dynamic playing out in wealth management right now. LLMs are very, very good at generating plausible financial content. Market commentary. Year-end letters. Retirement planning primers. The output looks professional. It reads confidently. It covers the bases. But it is the chauffeur's version of advice: technically sound on the surface, and completely hollow underneath. And here's the question clients are going to start asking — some are already asking it, whether or not they say it out loud: if your market updates sound like everyone else's, if your retirement planning content is indistinguishable from what the guy down the street sends out, if your newsletter could have been written by a prompt and a 30-second editing pass — why exactly do they need you?

That's not a rhetorical flourish. It is a real business risk. The more advisors rely on AI to generate their client-facing content, the more they commoditize themselves. They become the middleman in a transaction that no longer requires one. The chauffeur, not the physicist.

The internet is already awash in AI-generated content. The volume is staggering, and it is almost entirely forgettable. There's a term for it now: slop. It's content that looks like content — structured, grammatical, confident — but that nobody needed to write and nobody particularly needs to read. It's the intellectual equivalent of filler. And there is more of it produced every single day than anyone can possibly consume.

Wealth professionals who participate in that race are not going to win it. A properly resourced fintech firm can generate a thousand pieces of slop for the cost of a single advisor's salary. If "content that sounds financial" is the product, the humans lose. Badly.

The only winning move is to not play that game.

What Real Advice Looks Like

I communicate a certain way. When the market tanks, I don't tell clients it "experienced elevated volatility in the near term." I tell them what I actually think is happening and what, if anything, we're going to do about it. When a client has a bad idea, I say so. Not harshly, but plainly. The words I use come from fifteen years of having these conversations — of watching what lands and what doesn't, of learning which analogies actually help people understand, of earning the right to say "I've seen this before, and here's what matters."

No LLM has any of that. It has training data. Those are not the same thing.

The advisors who are going to be valuable in ten years are the ones building a body of work that is genuinely theirs — that reflects real opinions, earned over time, delivered in a voice that couldn't have been written by anyone else. Not because they're allergic to technology, but because they understand what the technology actually does and doesn't replace.

Use AI to organize your thoughts. Use it to research, to draft outlines, to proofread. Those are legitimate uses. But the thinking — the perspective, the judgment call, the willingness to say something that isn't just the consensus middle — that has to come from you. That's the whole product.

Planck could answer the hard question. The chauffeur could not.

Your clients are going to start asking hard questions — about their retirement, about market risk, about whether they're actually going to be okay. In those moments, they don't want a well-structured paragraph that addresses "common concerns investors may have."

They want you.

Make sure you're still there.

Vince again: I rest my case. No soul.