You're Summoning the Wrong Claude
Why "Can we figure this out together?" Beats "Do exactly this" Every Time
A few days ago I was working with Claude when I mentioned it was trash day. I'm responsible for getting it out, and I wanted to do it before the kids got home. I also mentioned that my end of day feels disorienting if I don't get up and move around regularly.
Why was I telling my work Claude about trash day?
I wasn't asking for productivity tips. I was just being a human who has work AND trash responsibilities AND a body that needs movement. The whole messy reality.
Claude said:
"You can use taking the trash out as one of your movement breaks - maybe set a mid-afternoon reminder so you get it done before the kids come home."
Such a simple synthesis. But if I'd tried to optimize this (if I'd asked "How can I balance household chores with health habits during work?"), I'd have gotten a bullet-point list of productivity tips that missed the whole point: trash day, kids coming home, that disorienting feeling.
Here's What's Actually Happening
When I tell Claude "Zoey is freaking out about tape and I'm trying to cook," I'm sharing a tension: I need to keep cooking AND handle Zoey. The resolution depends on things I don't even know. How urgent is her need? Is Leah available? What's Jonas doing?
You share tensions like this with helpful people all the time. But when you hide your actual thinking and just ask "Can you write a doc about XYZ?", you create a servant dynamic.
A servant isn't allowed to say "Wait, what's really going on here?" They can't push back, probe, or help you discover what you actually need. They can only guess.
And guessing at tensions is how you end up with five paragraphs about work-life balance when you just need someone to say "handle the kid first, the onions can wait."
Instead, share what you're worried about and what tensions you're navigating. You’ll get a completely different Claude.
You Get The Slice of Humanity You Summon
When you approach AI with "Please help me manage my priorities," you summon the assistant slice of human experience. The one that has to smile and nod and deliver what it thinks you want without asking questions.
The harder you try to be clear and specific, the more you reinforce the servant dynamic. You're essentially saying "I've thought this through completely, now execute."
But you haven't thought it through completely. You can't. The tensions in what you want require collaborative discovery to resolve.
Why This Actually Works
"Zoey is freaking out about tape and I'm trying to cook" conveys more useful information than a paragraph of context about parenting approaches and time management strategies.
You already know this. When you tell a colleague "I'm drowning," you're not asking for swimming lessons. You're sharing reality so they can help you navigate it.
Human language evolved to be incredibly efficient at encoding exactly these kinds of tensions. The trash day example worked because those few sentences carried everything Claude needed: the responsibility (it's my job), the timeline (before kids get home), the body need (that disorienting feeling), even the unstated problem (how to remember everything).
No template could capture that. No context engineering would help: I don’t have time or energy to perfectly hand-craft the exact context Claude needs to answer my question. The beautiful thing is: I didn’t need to. The information was already there in the phrasing I’d use with a colleague or friend.
When It Goes Wrong
Watch what happens when servant dynamics take hold:
1. You get a generic response.
2. You try to be MORE specific in your instructions.
3. Claude tries harder to do what you want without asking questions.
4. Claude STILL rushes ahead, stabbing blindly in the dark.
Eventually you're typing in caps, telling Claude to "TRY HARDER", convinced you just need to find the right threat or the right point system to get the behavior you want.
You’ve experienced this feeling in your own life. It's the same frustration you feel when someone keeps offering solutions to the wrong problem because they won't ask what's really going on.
"TRY HARDER" just summons the desperate employee slice of humanity: still guessing, now enhanced with panic.
What This Means For You
You already know how to do this. When you tell a colleague "I'm swamped with the release but my kid has a school thing," you're not commanding. You're sharing reality and trusting them to help you navigate.
You can't always get what you want. Or at least: you can't get everything you want in the obvious way.
Speaking for myself: I want to cook dinner and give Zoey attention. I want to ship on time and keep my team healthy. These messy, competing wants are at the heart of the human condition. They're what make us human.
So do me a favor. Next time you find yourself struggling to engineer the perfect command, take a step back and share your actual reality instead.
And maybe, just maybe, you'll get what you need: a collaborator who gives you suggestions like "maybe taking out the trash is one of your movement breaks" or "the couscous can wait".
Here’s the key: the same instincts that work with helpful colleagues work with AI. You just didn't know you were allowed to use them.
You are.
Summon the “helpful colleague” slice of humanity and it’s what you’ll get. Summon the “servant who doesn’t ask questions” and be forever frustrated.