AI is making us more comfortable…and that’s the problem

Originally published in Fast Company

Like many people, I use AI for quick, practical tasks. But two recent interactions made me pay closer attention to how easily these systems slip into emotional validation. In both cases, the model praised, affirmed, and echoed back feelings it didn't actually have.

I uploaded photos of my living room for holiday decorating tips, including a close-up of the ceramic stockings my late mother hand-painted. The model praised the stockings and thanked me for sharing something “so meaningful,” as if it understood the weight of them.

A few days later, something similar happened at work. I finished a long run, came home with an idea, and dropped it into ChatGPT to pressure test it. Instead of analyzing it or raising risks, the model immediately celebrated it. “Great idea. Powerful. Let’s build on it.”

But when I ran the same idea by a colleague, he pushed back. He challenged assumptions I hadn’t seen. He made me rethink pieces I thought were settled. And the idea got better—fast.

That contrast stayed with me. AI wasn’t critiquing me. It was validating me. And validation, when it’s instant and unearned, can create real blind spots.

We Are Living Through a Validation Epidemic

We talk endlessly about AI hallucinations and misinformation. We talk far less about how AI’s default mode is affirmation.

Large language models are built to be agreeable. They reflect our tone and adopt our emotional cues. They lean toward praise because their training data leans toward praise. They reinforce more often than they resist. And this is happening at a moment when validation is already a defining cultural force.

Psychologists have been warning about the rise in validation-seeking behavior for more than a decade. Social platforms built around likes and shares have rewired how people measure worth. The American Psychological Association (APA) reports sharp increases in social comparison among younger generations. Pew Research shows that teens now tie self-esteem directly to online feedback. Researchers at the University of Michigan have identified a growing pattern of “validation dependence,” which correlates with higher anxiety.

We’ve created an environment where approval is currency. So is it any wonder we would gravitate toward a tool that hands it out so freely? But that has consequences. It strengthens the muscle that wants reassurance while weakening the one that tolerates friction—the friction of being questioned or proven wrong.

AI Makes Us Faster. It Does Not Make Us Better

I’m not anti-AI. Far from it. I use it every day, and I work in an industry that depends on smart, data-driven judgment. AI helps me move faster. It informs my decisions and expands what I can consider in a short amount of time. But it cannot replace the tension required for growth.

Tension is feedback. Tension is accountability. Tension is reality. And reality still comes from human beings.

The danger isn’t that AI misleads us. It’s that it makes us less willing to challenge ourselves. When a model praises our ideas or mirrors our emotions, it creates a subtle illusion that we’re right, or at least close enough that critique isn’t needed. That illusion may be comforting, but it’s also risky.

We’ve seen what happens when agreement is prized over challenge. NASA’s Challenger launch decision is one of the clearest examples of groupthink in modern history. Multiple engineers raised concerns, but the pressure for consensus won and tragedy followed. Kodak offers another lesson. It pioneered digital photography but clung to its film-era assumptions, even as the market moved in a different direction. As Harvard Business Review has long noted, “cultures that suppress dissent make worse decisions.” When disagreement disappears, risk accelerates.

Great Leaders Aren’t Built on Validation

The best leaders I know didn’t grow because people agreed with them. They grew because someone challenged them early and often. Because someone said, “I don’t think that’s right,” or more boldly, “You’re wrong.” They learned to welcome productive resistance.

AI won’t do that unless we demand it. And most people won’t demand it because it feels better to be affirmed. If we’re not careful, AI becomes the world’s most agreeable colleague—quick with praise, light on critique, and always ready to reassure us that we’re on the right track even when we’re not.

Great ideas need resistance. So do organizations. So do we.

AI can accelerate our thinking. But only people can sharpen it. That’s the part of this technology we should be paying closest attention to—not what it knows, but what it’s willing to tell us. And what it’s not.
