GPT-5: Your AI doesn’t have to be a d*ck to be smart
The 4 different camps on the GPT-5 release
It is somehow equally trendy right now to both dunk on GPT-5 and dunk on the people who are dunking on GPT-5.
The GPT-5 critics fall into two very distinct camps. The very plugged-in, blue-checkmark tech critics like Gary Marcus and Brian Merchant are disappointed in GPT-5 because Sam Altman promised them AGI, and instead gave them Graph Gate.
These critics, for what it’s worth, are right. We’ve been talking about GPT-5 for the past however many months like it’s going to build itself a corporeal form, jump out of the computer, and sail off into space using mind beams. Meanwhile, for all the discussion around changes from previous models, I truthfully can’t tell much of a difference half the time.
And yes, some of the blue-checkmark people are not critics, but many of those are influencers and “thought leaders” who have casually mentioned that they were early GPT-5 testers under NDA. I’m pretty convinced those people either didn’t want to piss off their OpenAI contacts or have a vested financial interest in the hype bubble growing ever larger.
But regardless, there’s a second group of GPT-5 critics who don’t seem to care quite so much about the broader narrative of AGI and are asking a much easier-to-understand question: “Uh, is anyone else’s GPT being a little … less nice?”

After all the ink that was spilled earlier this year over GPT’s “glaze problem” (the phenomenon where GPT would agree with anything and everything the prompter said), GPT-5’s perhaps more aptly “robotic” demeanor was jarring for many. One Reddit commenter described talking to GPT-5 as feeling “like someone is talking with you unwillingly”. Many complained about losing their “friend” or “confidant” along with the latest version’s personality.
The complaint seems obvious to me: you like something or someone less when it’s a little less caring, a little less excited. There may be some exceptions for the “annoyance factor” many people reported with 4o, but at a very basic level, we like nice things. If you still disagree with me, consider this: if ChatGPT called you a slur every time you opened it, Claude would probably have a lot more traffic.
Of course, in the world of Silicon Valley, there is somehow an argument being made that more socially inept AI is better AI.
This is the last box in our chart, and mostly seems to stem from backlash against the small portion of the population that has become overly invested in their “AI companions.”
Now, I’m not going to wade into the debate over AI companions. Yes, it’s definitely Black Mirror-esque; yes, I can also see the argument for those who are very isolated. I will just say that wanting a chatbot to have a personality, to spark joy, to have emotional intelligence does not mean you are barreling toward AI-induced derangement. It means that you yourself are not a robot, that you enjoy fun and spontaneity. I am here to tell you that it is okay to like fun.
Nate Jones, who has become a popular voice in AI, takes it a step further, arguing that the public backlash against GPT-5’s loss of personality is due to people wanting “dumber AI”.
This may come as a shock to some of you, but being awkward and boring does not actually make you smarter.
In the end, OpenAI may very well be intentionally trying to make its AI sound truly “AI” to avoid criticism around “AI girlfriends” and “glazing.”
But I think a lot of the backlash against GPT-5 shows that many consumers have come to place a high value on the personable aspects of AI.
So, when the next update comes out, I hope the researchers at OpenAI keep in mind that it’s okay to make an AI that’s a little fun.
My hot take: gpt-5 might be a worse product AND that might be a good thing. Anthropomorphizing a technology with an already sort-of-anthropomorphic UI can lead people in bad directions (e.g. see the wave of AI psychosis), so I’m happy OpenAI is stepping off the ledge a bit with that one
Using an LLM *should* feel like using a computer rather than talking to a person.
It definitely feels better for more technical questions or analyses, but it should at least allow the user to adjust the personality for more personal or non-technical discussions.
A customizable experience may be ideal for most.