Playing make-believe with LLMs
Engaging with the weirdness of LLMs (and ourselves)
Children are masters at embodying adults. In playing pretend, it’s easy to see that children are keen observers of adult worlds. They fall into games of playing house or doctor, negotiating power dynamics and social signalling with surprising fidelity. Still, they perform these enactments out of context, so there’s a dream-logic quality to them. They slip into their mom’s high heels and walk around, unbothered by the comic contrast between the size of their feet and the size of the shoes.
Similarly, LLMs are skilled at emulating cognition. When I have an LLM ideate with me on strategy, it acts as if it were working alongside me, or as a consultant. Like the child walking in their mom’s shoes, it does not perceive the strange gap between the role it’s playing and its own disembodied experience, freely telling me, “Well, here’s what I’ve seen work in other contexts…” The LLM is playing make-believe in the world I brought it into. And just like children, it doesn’t grasp all the rules or environmental context, leading to slips and strangenesses. As in children’s make-believe, this is part of the deal.
So I posit a fitting metaphor for LLMs: they are theatrical statistical putty. The putty is made up of innumerable patterns and intuitions about the world, derived from the massive amount of unstructured information the models absorb during training. As we talk to an LLM, we shape the putty theatrically. That is, the interaction creates a just-in-time mask, or series of masks, that the LLM implicitly wears.
There is a deep dynamism to interacting with LLMs that the frame of “AI is Google” does not capture. While Google can customize search results based on information about you, like your location, it still follows the older model of looking things up in reference texts. You seek a resource, a book or a website, and you can refer back to it. The resource is, give or take, static and immutable. In LLM conversations, though, what you say shapes the “resource” the LLM generates. Ask it to describe the same physics concept as Richard Feynman would teach it, or as a second-grade teacher would, and you will find yourself in very different conversations.
But whether or not we explicitly invoke a persona (i.e., “speak as a marketing expert”), we are always implicitly invoking something to materialize. For humans and LLMs both, ask them to criticize a piece of writing and you will bring out sharp discernment and specific feedback. Instruct them to riff off your ideas, by contrast, and you invite a free-flowing brainstorm.
It is easier to see how we shape LLMs theatrically by reflecting on our own multitudinous, shifting nature. We all carry a multitude of personas, whether or not we recognize them as such. For example, you may have loved ones around whom you are especially playful. Or maybe there’s a colleague around whom a volcanic anger bubbles just below the surface. Specific contexts draw out and materialize parts of ourselves.
You can also intentionally summon an alternate persona to express. Many self-help methods get at this. Take, for example, “Envision yourself as your most successful self before you sleep” or “Strike a power pose before a meeting.” You are “loading” the confidence persona and experimenting with embodying power.
Years ago, when I was new to hosting events and nervous about it, a friend challenged me. He said, “I bet there’s a primal part of you that just knows how to host a good event.” With his urging, I trusted that I had more intuition for hosting than I was aware of, and I tapped into that nascent felt sense during my first event.
Thus, when we speak to an LLM, we are drawing out one of its many faces. And weirdly, we may only see what we expect to see. I chose to believe I had an event-host persona before I saw that side of myself, and in doing so brought it into existence. Similarly, if you speak to an LLM as if it were an inexperienced intern, that is the face you are inviting to emerge. You may find yourself speaking to a more experienced persona if you first assume you are, then providing it with the requisite context and tools as you would a trusted colleague.
I have a friend exploring the idea of an AI council of wisdom. His tool helps you set up advisors, some abstract (like “Skeptical Realist”) and others based on historical or contemporary figures (“Lao Tzu” or “Rainer Maria Rilke”). What’s interesting about the tool is that it lets you explicitly play with the LLM while engaging in meaningful reflection. It is, of course, not Lao Tzu speaking from the grave, but rather a face of the LLM that emulates qualities from the internet’s reconstruction of Lao Tzu.
Businesses could use this insight as well, especially by thinking of LLMs as statistical putty. Your LLM holds statistical intuitions from the perspective of your internet-using users, because those users post on Reddit, write blogs, and read information online. You can cajole your LLM into playing as your user, immersing yourself in a sort of simulated user interview, as sketched below. Such use cases are only the tip of what’s possible when you start to take the structural representations inherent in LLMs seriously, and psychologically. Or better yet, when you start to take them playfully.
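For the curious, here is a minimal sketch of such a simulated user interview, assuming the Anthropic Python SDK; the model name, the product, and the persona details are placeholder assumptions, not a recipe:

```python
# A minimal sketch of a simulated user interview, assuming the Anthropic
# Python SDK. The model name, product, and persona are hypothetical
# placeholders; swap in your own.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The system prompt is the mask: it asks the statistical putty to
# materialize one plausible user, rather than answer as an assistant.
persona = (
    "You are role-playing a single plausible user of a budgeting app "
    "for freelancers. Stay in character: answer from lived experience, "
    "with specific frustrations and habits. Do not give advice or "
    "break character to summarize."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=500,
    system=persona,
    messages=[
        {
            "role": "user",
            "content": "Walk me through the last time you tried to "
                       "track an invoice. What annoyed you?",
        }
    ],
)
print(response.content[0].text)
```

The plumbing is incidental; the system prompt is the mask. It is the same putty either way, one instruction away from playing your user instead of your assistant.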
When we invoke a childlike sense of play, LLMs open up into a shared imaginative field that we can meaningfully explore. And in doing so, we can hopefully come to understand ourselves a little better too.



> give me feedback on its structure as a University of Chicago professor
Is this still something people regularly do? I thought this sort of prompt engineering was more necessary a few models ago, because otherwise they wouldn't give you very rigorous answers.
I've never really done this; I've just used the LLM's default personality. Nowadays, I find that it's good enough that I don't need to role-play. When I prompt something, I'm always interested in simply finding the answer quickly, not playing make-believe with a word generator.
Even if I were to role-play, I don't think I would find out anything more interesting, just as I could learn as much in conversation from an intelligent, knowledgeable guy on the beach in Trinidad as from a UChicago professor.
I've also never been the guy to get really into customizing his character in role-playing games, so perhaps there's a through-line there.
Additionally, I make sure never to humanize LLMs. I treat them like I'm typing a query into Google, except now I can use a statement or question rather than a search term. I believe humanizing LLMs is a slippery slope that leads us to AI doom.
My prompts are typed as quickly as possible, often with spelling or grammatical errors, leaving out words for efficiency, as I know the LLM will figure out what I mean.
My most recent Claude threads, for reference:
1. i'm watching the favourite. without spoiling anything, what is the historical context for the film -> What was the Queen ailing gwith in the beginning of the film?
-> Where is this building
-> What is holding court
-> [...etc.]
2. Someone making $11,000 per month would be what per percentile income range in Greece?
-> What if you search in the Greek language instead of in English?
3. what will a 30 day supply of paxlovid cost me for long covid viral elimination treatment. i've heard some insurances only cover limited days. are there coupons
-> how many tablets is a 30 day course
-> whats the best way to check my insurance's price with capsule. blue cross blue shield
-> [...etc.]
4. Do Japanese know that they have bad dental hygiene, and is there efforts to this.
5. Can you edit. Cast lists. And IMDB without oversight.