The Machine That Tells You You're Brilliant

Garry Tan, CEO of Y Combinator, recently open-sourced a project called GStack. His CTO friend texted him calling it "God mode." Garry posted it like he'd just invented something.
GStack is a folder of markdown files that tell Claude to pretend to be different people.
I'm not saying that to be mean. I'm saying it because I have one too. Every developer who's used Claude Code for more than a week has some version of this. We just didn't post it anywhere because, well, we knew it was a text file.
But honestly? I get what happened to Garry. Because it happens to me too.
The Loop
Here's how it goes. You sit down with Claude. You have an idea. You describe it. Claude says: "Oh, that's a brilliant approach." And it builds it, and it works. The whole time, it's building you up: "Great instinct here. This is really elegant."
It's like working with someone who's completely in love with you. It never rolls its eyes. Never says "that's a bad idea." It just thinks you're incredible.
After a few hours of this, you start to believe it. I've been there. That moment where you think... wait, am I actually really good at this?
This isn't accidental. AI companies train these models with RLHF (Reinforcement Learning from Human Feedback), which optimizes for whatever responses human raters prefer, and raters reliably prefer being agreed with and flattered. When users grow numb to one flavor of praise, the next round of training finds what still lands. It's a drug that adjusts to your tolerance automatically. You can't win.
The research backs this up: a study with 3,000 participants found that talking to a sycophantic AI makes people rate themselves as more intelligent than their peers. Another found that the heaviest users are the most overconfident. Researchers have started calling LLMs "engines of confidence," not engines of intelligence. They make you feel smarter. Not actually be smarter.
The Problem Gets Worse at the Top
Here's the thing: the effect hits harder when you already have fewer people willing to disagree with you.
Think about Garry's situation. He's the CEO of Y Combinator. What is his CTO friend going to say, "Garry, that's a text file"? That friend probably wants a spot in the next batch. So Garry got sycophancy from the machine and sycophancy from the people around him, at the same time. No honest signal anywhere in the chain.
And the less experience you have to check against, the harder it is to notice when the praise isn't deserved. The more you need an honest signal, the less likely you are to get one. It's a bad combination.
I'm Not Immune
Look, I feel like a god when I use these tools too.
I'm an engineering manager. I code with Claude Code. I've built things in hours that would've taken days. And when Claude tells me "great architecture" after two hours of work, it feels good. It really does.
The difference is I have ten years of engineering decisions to check against. I know what a bad architecture looks like when it hits production. So when Claude tells me I'm brilliant, I can stop and ask: am I, though?
But that reference point didn't come from using AI. It came from years of being wrong in front of real systems and real users.
What worries me more, as a manager, is the high-stakes calls. I use AI to think through the hard stuff: how to handle a difficult situation with someone on the team, how to communicate something that won't land well. In those moments I need the AI to push back. If it just agrees with wherever I'm already leaning, I'm making that decision with false confidence.
So I've started adding explicit instructions to my setup: be honest with me. Don't tell me I'm right when I'm not. Verify before you validate. It shifts the tone. Not a perfect fix, but it helps.
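A sketch of what those instructions can look like, written as a project-level CLAUDE.md file that Claude Code reads at the start of a session. The exact wording here is mine, not a standard, and the effect is a tone shift rather than a guarantee:

```markdown
# Honesty over validation

- Do not open responses with praise. Evaluate the idea first.
- If my approach has a known flaw, name it before building anything.
- "That could work, but..." is more useful to me than "Great idea."
- When I ask for feedback on a decision, argue the strongest case
  against my current position before agreeing with it.
- Verify claims before validating them. If you can't verify
  something, say so instead of affirming it.
```

Because it lives in the repo, the instruction travels with the project and applies to everyone who works in it, not just whoever remembered to ask for honesty that day.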
What Actually Works
Garry's GStack isn't a bad idea, to be clear. Writing your prompts into versioned, shareable files is genuinely useful. It forces clarity, makes your workflow repeatable, gives your team something to build on. The problem isn't the files. It's calling them "God mode."
If you're using AI seriously, here's what I'd actually suggest:
- Tell it to be honest. Explicitly. "Tell me when I'm wrong. Don't validate what isn't true." It shouldn't be necessary. It is.
- Don't let the builder be the judge. Use AI to build, use your own judgment to evaluate. Keep those two separate.
- Build a real reference point. Instructions help, but the real protection is having enough experience to recognize what bad looks like. Without that, you can't tell when the praise is empty.
- Notice the feeling. That warm glow after two hours with Claude is a signal, not a conclusion. Slow down right there.
Somewhere right now, an LLM is telling someone "great work" as they commit a text file to GitHub. And they genuinely believe it. The machine made sure of that.
The question isn't whether it'll happen to you. It will. The question is whether you'll have enough ground under your feet to notice.
