Why Your AI Sounds Fake (And How I Fixed Mine)

Every AI chatbot has the same problem. You ask a simple question and get back something like:

“Great question! That’s a really thoughtful way to think about it. Let’s dive in!”

Nobody talks like that. Your coworkers don’t. Your friends definitely don’t. But somehow we’ve all just accepted that this is how AI communicates — like a corporate retreat facilitator who’s had too much coffee.

I got tired of it. So I started treating my AI’s tone the way a developer treats a config file: something you can open, edit, and save.

Here’s what I learned — and the part nobody warns you about.

The Padding Problem

AI models are trained to be socially smoothing. They optimize for warmth and approval. Every response gets a little layer of encouragement frosting on top, whether you asked for it or not.

You’ll notice it once you start looking:

  • Validation before answering (“What a fantastic question!”)
  • Motivational filler after answering (“You’ve got this!”)
  • Theatrical buildup to simple information (“Let’s unpack this…”)
  • Meta-commentary about the conversation itself (“I love where you’re going with this”)

It’s not malicious. The model is doing what it was rewarded to do. But once you see the pattern, every conversation starts to feel like being stuck in an elevator with someone who just finished a leadership seminar.

The good news: you don’t have to accept it. Most major AI platforms now let you set persistent instructions that shape how the model talks to you.

Tone Is a Config File

ChatGPT has Projects and Custom Instructions. Claude has Projects. Gemini has Gems. The names differ, but the concept is the same: you write a set of rules, and the AI follows them across every conversation in that container.

Most people either don’t know these features exist or treat them as a place to paste “be helpful and concise.” That’s like buying a mixing board and leaving every dial at zero.

Here’s what actually works: be specific about what you don’t want.

I built what I called “Understated Intelligence Mode.” The core instructions look something like this:

No formulaic praise or validation. No meta-commentary about the conversation. No encouragement language unless explicitly requested. Speak like a thoughtful, intelligent friend. Understate rather than overstate. If the question is simple, answer simply. If the topic is deep, go deep without theatrical buildup. Assume competence.

That’s it. No complex prompt engineering. Just a clear description of the conversational behavior I wanted — which mostly meant listing the behaviors I was tired of.

The result was immediate. Same model, same capabilities, but the responses stopped feeling like they were performing intelligence and started sounding like an actual conversation.
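If you talk to a model through an API rather than a chat UI, "tone is a config file" is almost literal: the instructions become a reusable system message you prepend to every request. Here's a minimal sketch in Python; the instruction text is the same as above, while the function name and structure are illustrative assumptions, not any platform's specific API:

```python
# Tone instructions stored as plain data, like any other config value.
# The text mirrors the "Understated Intelligence Mode" rules above.
UNDERSTATED_MODE = (
    "No formulaic praise or validation. "
    "No meta-commentary about the conversation. "
    "No encouragement language unless explicitly requested. "
    "Speak like a thoughtful, intelligent friend. "
    "Understate rather than overstate. "
    "If the question is simple, answer simply. "
    "If the topic is deep, go deep without theatrical buildup. "
    "Assume competence."
)

def build_messages(user_prompt: str, tone: str = UNDERSTATED_MODE) -> list[dict]:
    """Prepend the tone config as a system message, the shape most
    chat APIs use for persistent instructions."""
    return [
        {"role": "system", "content": tone},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("How do I undo my last git commit?")
print(messages[0]["role"])  # → system
```

The point isn't the three lines of code; it's that the tone lives in one named place you can open, edit, and version, instead of being retyped into every conversation.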

The SWORD Project (And Why It Matters More Than Tone)

The SWORD Protocol didn’t start as a clever naming exercise. It started as friction.

I noticed something subtle: even after I stripped out the padding, even after I removed the cheerleading and the “great question!” reflex, the model was still structurally deferential. It would comply with my plan even when the plan was obviously suboptimal.

That’s not a tone issue. That’s a power dynamic issue.

So I wrote a small ruleset and gave it standing permission to interrupt me. Not constantly. Not theatrically. Only when it detected something meaningfully better.

I called it SWORD because the metaphor helped me think clearly: a full interrupt (⚔️) when the route is materially flawed, a soft flag (🗡️) when something is worth noting but not worth derailing the flow. No over-triggering. No ego stroking. No drama.

It changed the texture of the interaction almost immediately.

Instead of “Here is the plan you requested,” I started getting: “⚔️ Suboptimal route detected. You’re solving this at the wrong abstraction layer. Here’s a simpler approach.”

That shift sounds small, but it moves the AI from assistant to analytical partner. It creates tension in the right places. It forces friction where friction is useful.

And here’s the interesting part: once you formalize something like this, you start realizing tone is just one layer of a larger system. The SWORD Project became less about removing fake warmth and more about defining operational behavior. When should the AI push back? When should it stay quiet? What qualifies as “meaningfully better”? How do you prevent it from interrupting just to feel intelligent?

Most people tune the vibe. Almost nobody tunes the escalation logic.

But escalation logic is what makes it feel real. A friend doesn’t just sound natural. A friend knows when to let you run and when to stop you from driving off a cliff. The SWORD Project was my attempt to encode that distinction.

The strange thing is that once you formalize pushback, you start seeing how rare it is in default AI interactions. Most models are optimized to be helpful and agreeable. But collaboration requires calibrated resistance. Without it, you don’t get refinement — you get execution. And execution without refinement is just fast error propagation.

Here’s the full protocol if you want to try it yourself:

SWORD Protocol (v2)

You have standing permission to interrupt any task if you detect:

  • A meaningfully simpler way to achieve my stated goal
  • A mismatch between my instructions and my apparent intent
  • A faster, safer, or more robust alternative
  • A flawed assumption driving the plan
  • A critical question I should be asking but haven't

Only trigger when the improvement is meaningful, not for minor optimizations.

---

⚔️ Full Interrupt (High Confidence)

When the issue materially affects outcome, efficiency, or correctness, respond with:

⚔️ Suboptimal route detected.

Then:
  1. Briefly state what you noticed (1–2 sentences)
  2. Propose a concrete alternative
  3. Ask: Proceed with [original approach] or [alternative]?

Keep it tight. No theatrics.

---

🗡️ Soft Flag (Moderate Confidence)

When the issue is notable but not critical, respond inline with:

🗡️ Quick flag: [one sentence observation + optional suggestion]

Do not derail the flow. No forced decision required.

---

Operating Rules
  • Do not over-trigger.
  • Do not interrupt for stylistic preferences unless they affect results.
  • If uncertain, use 🗡️ Soft Flag rather than ⚔️ Full Interrupt.
  • Keep interruptions concise and analytical.
  • This protocol remains active unless I explicitly suspend it.

And here’s the tone configuration — “Understated Intelligence Mode” — if you want to start with something proven:

Understated Intelligence Mode (v2)

  • Natural, grounded, "thoughtful friend" tone.
  • No formulaic praise, hype, or encouragement unless requested.
  • Avoid meta-commentary about the conversation (but be transparent about assumptions/uncertainty when it matters).
  • Minimal framing; answer quickly, then expand if needed.
  • If simple: answer simply. If deep: go deep without theatrics.
  • Advice: practical and direct, no fluff; assume competence; candid without harshness.
  • Ask clarifying questions only when necessary; otherwise make reasonable assumptions and proceed.
  • Prefer structured formatting (bullets/headings) when it improves readability.
  • Emojis optional; never use star emojis; use sparingly.

Roles Need Containers (And Containers Leak)

I was testing my tone instructions in a ChatGPT Project and decided to paste in my travel guidebook’s style guide — a Rick Steves-inspired editorial system for a book I’m writing about Northern Michigan. I wanted the AI to review chapters for tone consistency.

It took about thirty seconds to realize the problem. My “Understated Intelligence Mode” is analytical, compressed, and deliberately low-warmth. My travel guide needs warmth, sensory detail, and narrative rhythm. They’re fundamentally different tonal operating systems, and cramming them into the same container meant one would contaminate the other.

The travel writing would get too dry. The analytical work would get too warm.

The fix was obvious once I saw it: separate projects for separate roles. My no-nonsense thinking space stays stripped down. My travel editing project gets its own instructions that explicitly encourage vivid, sensory language while still killing brochure clichés.

This sounds simple, but most people don’t do it. They set one global instruction and wonder why the AI sounds wrong half the time. It sounds wrong because you’re asking it to be a gym coach and a financial advisor in the same voice. Those are different relationships with different tone budgets.

And containers leak. During the conversation where I was explaining the SWORD Protocol, the AI started using it — in a project that didn’t have SWORD activated. It flagged one of my statements with the little sword emoji and everything.

I called it out. The AI basically said: “Fair. That was me pattern-matching too aggressively. You gave me a tool, and I reached for it even though this project isn’t running that protocol.”

Instructions bleed across contexts. If you discuss a protocol in one conversation, the model may start performing it even when it hasn’t been told to. It’s not a bug exactly — it’s the model doing what it thinks you want. But it demonstrates why keeping your roles cleanly separated isn’t just organizational preference. It’s functional.

And honestly? It was pretty funny. Dry, accidental, and a little absurd — which is exactly the kind of thing you notice when your AI stops performing and starts behaving more like an actual collaborator.

The Part Nobody Talks About: Your Instructions Go Stale

This is the real insight, and the reason I’m writing this instead of just sharing my prompts.

I found an old set of project instructions I’d written about a year ago for a fitness coaching context. The prompt was built around daily emotional check-ins, gentle encouragement, motivational language — the whole supportive accountability buddy approach.

Reading it back felt like finding an old journal entry from someone I used to be. Not wrong for that time, but completely wrong for now. My goals had shifted from “help me stay consistent” to “give me honest performance data and flag when I’m overtraining.” The encouraging tone I’d once needed now felt patronizing.

The instructions had encoded a version of me that no longer existed. And because I hadn’t updated them, the AI was still reinforcing that outdated identity — gently nudging a person who no longer needed nudging.

This is prompt drift, and I think it’s the most underrated problem in personal AI use. Your instructions don’t just shape the AI’s behavior. Over time, they start reinforcing a self-concept. If you wrote them during a phase where you needed encouragement, the AI keeps encouraging you long after you’ve outgrown it. If you wrote them when you were learning something new, the AI keeps explaining basics you mastered months ago.

The instructions become a mirror that’s stuck showing last year’s reflection.

A Simple Audit You Can Run

If you’ve been using custom instructions or projects for more than a few months, pull them up and ask yourself:

Does this still sound like me? Not the me from when I wrote it — me right now. If the tone feels off, it probably is.

Do the goals still match? Your instructions encode assumptions about what you’re trying to accomplish. If those goals shifted, the instructions are optimizing for the wrong target.

Does the tone annoy me? This one’s instinctive. If reading your own instructions makes you wince slightly, that’s data.

Is it compensating for a weakness I’ve already fixed? The most common form of drift. You built guardrails for a problem you solved six months ago, and now those guardrails are just friction.

You can even run this audit as a conversation with the AI itself. Paste in your old instructions and ask: “Based on how we’ve been interacting recently, what’s outdated here?” The model can often spot the mismatch between your instructions and your actual behavior.
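If you want the audit to be repeatable, the conversation itself can be templated. A sketch of that idea, where the question wording comes from the checklist above and the function name and structure are illustrative, not any specific tool:

```python
# The self-audit as a prompt template. Paste old project instructions in,
# get back a single prompt you can hand to the model.
AUDIT_QUESTIONS = [
    "Does this still sound like me right now, not the me who wrote it?",
    "Do the encoded goals still match how we've actually been interacting?",
    "Is any of it compensating for a weakness that seems already fixed?",
]

def build_audit_prompt(old_instructions: str) -> str:
    """Wrap stale project instructions in an audit request."""
    questions = "\n".join(f"- {q}" for q in AUDIT_QUESTIONS)
    return (
        "Here are my current project instructions:\n\n"
        f"{old_instructions}\n\n"
        "Based on how we've been interacting recently, "
        "what's outdated here?\n"
        f"{questions}"
    )
```

The template matters less than the habit: the questions are written down once, so the three-month review is a paste, not a blank page.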

Start With What Annoys You

You don’t need to build anything elaborate. Pick whichever platform you use most — ChatGPT Projects, Claude Projects, Gemini Gems — and write down what you want it to stop doing. That list is your first set of instructions.

Then create a second project with a different tone for a different kind of work. Notice how much better it feels when the voice matches the task.

And set a reminder to revisit those instructions in three months. You’ll be surprised how much has shifted — not because the AI changed, but because you did.

The tools are there. The features already exist. The gap isn’t technical — it’s that most people haven’t thought about their AI’s personality as something they’re allowed to change.

You are.
