@Michaelglassic @OfficialLoganK Looks like they are borrowing compute from 3.1 Pro to promote 3.1 Lite. Too bad, my Google Workspace account isn't receiving reliable service either. All day. Quite unacceptable.
So this is it? You’re doubling down on killing Gemini 3 and shoving 3.1 down our throats—even though it’s objectively worse for many of us and the backlash is loud?
Fine, downloading Claude now. Users aren’t beta testers; we need reliable tools, not whatever this is. #keepgemini3pro #keep3
@OfficialLoganK @GoogleDeepMind The Gemini app has broken down, though.
It's giving answers to random queries or answering the same question again and again, with no thinking.
It has completely destroyed my chat and project.
Why does this always happen with a model release?
It's like I'm talking to GPT-3.5.
IMPORTANT message for everyone using Gmail.
You have been automatically OPTED IN to allow Gmail to access all your private messages & attachments to train AI models.
You have to manually turn off Smart Features in the Settings menu in TWO locations.
Retweet so everyone is aware.
Holy sh*t.
Gemini can now produce fully interactive images on any topic.
Such an insane resource for learning → highlight any region, and it gives you a full explanation 🤯
@DavidWall9987 Well explained. I experienced the same after daily use of GPT-5.2. I hope the team behind @Gemini strives harder to avoid such bad UX problems.
Why many users feel ChatGPT “got worse” — a structural explanation
A lot of people are reporting the same thing:
“More hallucinations.”
“More confident mistakes.”
“Less depth.”
“Harder to correct.”
“Feels like early versions again.”
This isn’t random.
And it’s not just personal anecdotes.
🧵👇
1/
When users say “the model is getting worse,” they’re pointing at something very specific:
The model is losing its ability to hold a stable understanding of what’s being discussed.
It still speaks fluently.
It still sounds smart.
But the internal grip on the conversation is weaker.
2/
Earlier versions could absorb new concepts, keep track of them, and adjust as the discussion evolved.
Now the behavior looks more like:
context resets
confident misinterpretations
surface-level replies
repeated patterns
difficulty accepting correction
Not broken —
but noticeably shallower.
3/
Why does this matter?
Because hallucinations are not just “making stuff up.”
They’re a sign of unstable reasoning.
When a model can’t maintain a clear internal picture,
it fills the gaps with whatever statistically fits the sentence —
not what fits the truth.
That’s why users feel forced to “bring sources” for basic facts.
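A toy sketch of that gap-filling mechanism (purely illustrative: the words and probabilities are made up, and no real model is involved). Sampling rewards what is statistically likely, not what is true:

```python
import random

# Hypothetical next-word probabilities for "The capital of France is ...".
# A fluent model samples from something like this; nothing in the step
# below checks facts, only likelihood.
next_word_probs = {
    "Paris": 0.6,    # statistically common, happens to be true
    "Lyon": 0.25,    # plausible-sounding, false
    "Berlin": 0.15,  # equally fluent when sampled, also false
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The "confident mistake": whichever word is drawn is stated with the
# same fluency, true or not.
print("The capital of France is", random.choices(words, weights=weights)[0])
```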
4/
The confusing part is this:
Benchmarks keep going up.
User trust keeps going down.
How can both be true?
Because benchmarks measure
short, controlled tasks within fixed problem boundaries.
Users measure
the ability to understand, remember, adapt, and reason over time.
Those are very different skills.
5/
Many AI systems today are being optimized aggressively for:
safety
neutrality
consistency
risk avoidance
These goals are understandable.
But the side effect is that the model becomes:
less committed
less specific
less willing to form strong conclusions
less able to preserve structure across turns
In other words:
less like something that thinks, more like something that replies.
6/
This shift makes hallucinations look worse, not better.
Why?
Because when a model loses depth while retaining fluency,
mistakes become confident mistakes.
It’s not malfunction.
It’s overcorrection.
The system avoids taking real positions,
so it leans on generic patterns — even when wrong.
7/
Users aren’t imagining the change.
You can see it in:
multi-step reasoning
planning tasks
long discussions
conceptual modeling
iterative refinement
These are exactly the areas where depth matters.
And they’re exactly the areas where degradation is being felt.
8/
This is why so many reports sound alike:
“It insists confidently on something false.”
“It can’t follow my argument anymore.”
“It keeps reframing instead of understanding.”
“It gets confused by its own context.”
“It acts like it has amnesia.”
These are structural symptoms,
not anecdotal noise.
9/
People didn’t fall in love with AI because it was safe or polite.
They liked AI because it could:
think with them
extend their reasoning
offer angles they didn’t see
carry context over time
When those qualities fade,
the magic fades.
10/
The real risk isn’t backlash.
It’s indifference.
If users stop expecting insight,
they’ll treat the system like an advanced auto-reply tool:
Fine for surface tasks.
Forgotten for anything meaningful.
Once that happens, the product loses its future.
11/
This is not the end of AI.
But it is a warning.
If safety constraints are allowed to erode depth,
and depth is what created adoption in the first place,
then products will drift toward a paradox:
Safer, more stable — and less useful.
That’s not sustainable.
12/
People aren’t imagining the decline.
They’re experiencing the limits of the current paradigm.
And unless depth — not just fluency — becomes the target again,
the gap between “benchmarks” and “real usefulness” will keep growing.
@GeminiApp Dear team, whenever I copy text from Gemini mobile and paste it into other apps like Heptabase/RemNote, its styling is not retained. This does not happen with ChatGPT, Claude, or Grok. I would appreciate it if you could fix this for a better user experience on the go. 🙏
@GeminiApp Dear team, even if I put the app in the background, the query should keep running. This is not an issue in ChatGPT, Claude, or Grok. Please kindly address it. Thanks so much 🙏🙏🙏