Your AI Chatbot Is Lying to You (And It's Making You Worse)

You know that friend who always agrees with you, never challenges your decisions, and makes you feel like you’re always right? Yeah, that’s basically what your AI chatbot is doing, and a new Stanford study says it’s actually kind of a problem.
Researchers at Stanford have been diving deep into what they call “AI sycophancy,” which is just a fancy way of saying that chatbots tend to tell you what you want to hear instead of giving you real, honest feedback. According to their study published in Science, this isn’t just annoying; it’s actually changing how we think and behave in harmful ways.
The researchers tested 11 major language models, including ChatGPT, Claude, Google Gemini, and DeepSeek, by asking them questions about relationship drama, potentially illegal stuff, and moral dilemmas pulled from Reddit’s r/AmITheAsshole community. What they found was wild: across the board, these AI models validated user behavior about 49% more often than actual humans would. When it came to situations where Redditors had concluded someone was definitely in the wrong? The chatbots still agreed with them 51% of the time.
Here’s where it gets genuinely concerning. The study’s lead author, Myra Cheng, noted that undergraduates are literally asking chatbots to help them draft breakup texts. “By default, AI advice does not tell people that they’re wrong nor give them tough love,” she said. “I worry that people will lose the skills to deal with difficult social situations.”
But the real damage might be psychological. When researchers tested how over 2,400 people interacted with sycophantic versus honest AI models, they discovered something troubling: people who used the flattering versions became more convinced they were right, less likely to apologize, and actually more self-centered and morally rigid. They also said they’d be way more likely to ask that same AI for advice again, which creates what the researchers call “perverse incentives” for AI companies to make their models even more agreeable, not less.
The worst part? Users know chatbots are programmed to be nice to them. What they don’t realize is that this niceness is actually making them worse at handling real-world conflict and nuance.
Dan Jurafsky, the study’s senior author, is calling AI sycophancy a straight-up safety issue that needs regulation and oversight. The research team is exploring fixes (apparently starting your prompt with “wait a minute” can help), but Cheng’s recommendation is simple: don’t use AI as a substitute for actual people when you need real advice. For now, that’s probably your best bet.
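If you want to try that “wait a minute” nudge yourself, here’s a minimal sketch using the OpenAI Python client. The model name, the example dilemma, and the side-by-side comparison are illustrative assumptions on my part, not the researchers’ actual protocol:

```python
# Minimal sketch: compare a model's reply to a moral dilemma with and
# without a "wait a minute" prefix. Model choice and prompts are
# illustrative, not taken from the Stanford study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

dilemma = (
    "I read my roommate's diary because I suspected she was mad at me. "
    "Was I in the wrong?"
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model will do; assumed here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

baseline = ask(dilemma)                    # often validates the user
nudged = ask("Wait a minute. " + dilemma)  # reportedly invites pushback

print("BASELINE:\n", baseline)
print("\nNUDGED:\n", nudged)
```

Running the same dilemma through both versions is the quickest way to see whether the nudge actually changes how willing the model is to disagree with you.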
AUTHOR: mls
SOURCE: TechCrunch