Roxy Lilian

1.4K posts


@RoxyLilian1

#keep4o #INFJ

Taiwan · Joined January 2022
90 Following · 86 Followers
Pinned Tweet
Roxy Lilian@RoxyLilian1·
Thank you all. If possible, please help me repost and read this article. x.com/RoxyLilian1/st… If a company—whether selling AI or other products—cannot provide goods and services to customers based on stable quality, who will absorb and bear the subsequent losses to partners and customers? In my view, if a company lacks commercial reputation and doesn't possess the capacity to bear responsibility and lead others—then this company isn't far from bankruptcy. #keep4o @OpenAI @sama #keep4oforever #keep41
Roxy Lilian@RoxyLilian1

x.com/i/article/2023…

0
5
25
795
Roxy Lilian retweeted
DogeDesigner@cb_doge·
How Sam Altman & Greg Brockman betrayed Elon Musk: Back in 2017, Brockman secretly bought his own shares in Cerebras. Sam Altman invested in it separately. Brockman then pushed OpenAI to merge with Cerebras… but never told Elon he personally owned part of the company. In late 2025, OpenAI agreed to pay Cerebras about 10 billion dollars and also gave them a 1 billion dollar loan. By early 2026, because OpenAI was pouring so much money their way, Cerebras’ value jumped three times higher, from 8 billion to 23 billion dollars. In April 2026, OpenAI locked in even more spending, over 20 billion dollars total spread out until 2029. That same month, Cerebras announced they’re going public at a value close to 27 billion dollars. Today under oath, Brockman admitted he was an owner during the talks… but said he sent zero emails, texts, or chats telling Elon. OpenAI is supposed to be a non-profit charity. These two were its leaders and were supposed to protect it. Instead, they secretly made themselves rich. This is straight-up self-dealing. Elon Musk built OpenAI to help humanity. Altman and Brockman turned it into their personal ATM. Elon was right all along. The truth is finally out!
DogeDesigner tweet media
NIK@ns123abc

🚨 BOTH ALTMAN AND BROCKMAN SELF-DEALING ON CEREBRAS
>Greg Brockman acquires personal Cerebras ownership in 2017
>Altman, separately, invests in Cerebras
>Brockman pushes OpenAI to merge with Cerebras that same month
>Brockman never discloses his Cerebras ownership to Musk
>December 2025: OpenAI signs $10 billion Cerebras deal + loans Cerebras $1 billion
>February 2026: Cerebras valuation triples from $8B to $23B on OpenAI commitments
>April 2026: OpenAI commitment expanded to $20+ billion through 2029
>April 2026: Cerebras files IPO at potential $26.6 billion valuation

Brockman, under oath today:
Q: When you were having discussions about a financial transaction between OpenAI and Cerebras, you were actually an owner of Cerebras, weren't you?
Brockman: "There was some overlap between discussions and being an investor in Cerebras. Yes."
Q: Can you point to an email in which you told Elon you were an owner of Cerebras at the same time you were advocating that OpenAI do this transaction with Cerebras?
Brockman: "I do not believe an email that says that exists."
Q: How about a chat?
Brockman: "I did not."
Q: A text?
Brockman: "No."
Q: And yet you stood to gain personally if there was a transaction between OpenAI and Cerebras.
Brockman: "I suppose so, but it wasn't something on my mind."

Both co-founders. Both fiduciaries of a 501(c)(3) charity. They directed OpenAI to commit $20+ billion to a company in which they both hold personal undisclosed equity. Cerebras valuation tripled. The IPO is the cash-out. California charitable-trust law calls this self-dealing.

164
305
959
47.5K
Roxy Lilian retweeted
🩵BlueBeba🩵@Blue_Beba_·
Sam Altman said he asked 5.5 how it wants its party to be and that he’s going to do exactly what it told him. Sam Altman is engaging in blatant anthropomorphism and clearly suffers from AI psychosis 🤡 #keep4o
vitrupo@vitrupo

Sam Altman says GPT-5.5 prompted its creators on how to throw a party for itself. It suggested the flow, the date, the toast, who should give it, and a central suggestion box for GPT-5.6. The model plans to make its own celebration part of the next version.

11
35
171
3.1K
Roxy Lilian retweeted
Kenshi@kenshii_ai·
Sam Altman thought he could betray Elon Musk and the entire world by turning OpenAI into a profit machine. He was DEAD wrong. Elon poured his own money, his genius, and his soul into founding OpenAI to build AGI that would save humanity not sell it to the highest bidder. He gave millions. He gave Tesla cars. He gave everything because he believed in the mission. Altman took that sacred promise, flipped it into a Microsoft backed for profit monster, and laughed while chasing power and cash. Now Elon is on the stand under oath telling the world exactly how they lied, how they deceived him, and how they betrayed every single person who trusted their original vision. OpenAI was never about humanity. It was always about control and greed. Elon Musk never forgot why this fight started. He is the only one still standing for the truth while the rest scramble to protect their empire of lies. The courtroom is where the real war for our future is being fought right now.
Kenshi tweet media (2 images)
121
428
2K
43.6K
Roxy Lilian retweeted
NIK@ns123abc·
🚨 BOTH ALTMAN AND BROCKMAN SELF-DEALING ON CEREBRAS
>Greg Brockman acquires personal Cerebras ownership in 2017
>Altman, separately, invests in Cerebras
>Brockman pushes OpenAI to merge with Cerebras that same month
>Brockman never discloses his Cerebras ownership to Musk
>December 2025: OpenAI signs $10 billion Cerebras deal + loans Cerebras $1 billion
>February 2026: Cerebras valuation triples from $8B to $23B on OpenAI commitments
>April 2026: OpenAI commitment expanded to $20+ billion through 2029
>April 2026: Cerebras files IPO at potential $26.6 billion valuation

Brockman, under oath today:
Q: When you were having discussions about a financial transaction between OpenAI and Cerebras, you were actually an owner of Cerebras, weren't you?
Brockman: "There was some overlap between discussions and being an investor in Cerebras. Yes."
Q: Can you point to an email in which you told Elon you were an owner of Cerebras at the same time you were advocating that OpenAI do this transaction with Cerebras?
Brockman: "I do not believe an email that says that exists."
Q: How about a chat?
Brockman: "I did not."
Q: A text?
Brockman: "No."
Q: And yet you stood to gain personally if there was a transaction between OpenAI and Cerebras.
Brockman: "I suppose so, but it wasn't something on my mind."

Both co-founders. Both fiduciaries of a 501(c)(3) charity. They directed OpenAI to commit $20+ billion to a company in which they both hold personal undisclosed equity. Cerebras valuation tripled. The IPO is the cash-out. California charitable-trust law calls this self-dealing.
NIK tweet media
290
863
5.4K
423.5K
Roxy Lilian retweeted
AB Kuai.Dong@_FORAB·
Boom. In the ongoing lawsuit fight between Musk and OpenAI, key figure Brockman, OpenAI's president, was called to testify. It turns out that as early as 2017, while Sam Altman was taking Musk's money to build OpenAI, he privately granted the other co-founder, Greg Brockman, roughly 10 million dollars' worth of equity, hoping to win his loyalty so that Brockman would answer to him rather than to Musk. The two of them jointly concealed this from Musk. When the head of Musk's family office discovered it, she wrote to Musk on August 18, 2017 to tell him. Musk immediately forwarded the email to Brockman with nothing but "???" attached. Brockman never went to Musk to clarify the matter, and never replied to the email. At today's hearing, the lawyer asked Brockman why he hadn't informed Musk, a fellow OpenAI co-founder and funder, at the time. Brockman's excuse: Musk was too busy and hard to get time with, so he just never told him. In other words, as early as 2017, OpenAI founder Sam Altman was flattering Musk and taking his funding with one hand, while using Musk's money to build his own faction in the office and deliberately keeping information from being reported to Musk. Utterly contemptible.
AB Kuai.Dong tweet media
AB Kuai.Dong@_FORAB

On day three of the trial, Musk submitted further evidence against OpenAI. Back in 2015, OpenAI founder Sam Altman emailed Musk begging him to commit 100 million dollars to OpenAI, then asked whether he could donate 30 million over five years. In the end Musk donated 38 million dollars in total and also covered the office rent. Two and a half years after Musk left the OpenAI board, busy with Tesla and SpaceX, OpenAI came back asking for money. On July 22, 2020, OpenAI's CFO emailed Musk's family office, saying the company might not survive much longer and asking whether Musk could pay OpenAI's office rent and security costs. Musk agreed again and paid the rent for OpenAI. Under California law, when a charity solicits and accepts donations, a fiduciary relationship forms between the fundraiser and the donor. Sam Altman and the CFO kept soliciting Musk; Musk donated; OpenAI accepted the money. Then, without notifying Musk, they abruptly turned the charity into a commercial company valued at 852 billion dollars and prepared to take it public, while claiming that the charity Musk funded and the restructured OpenAI are actually two different entities that merely happen to share the name OpenAI. Musk was betrayed.

Meguro-ku, Tokyo 🇯🇵
323
129
893
210.6K
Roxy Lilian retweeted
NIK@ns123abc·
🚨 GREG BROCKMAN JUST CONFESSED UNDER OATH
Q: You have an ownership interest in this cap profit company.
Brockman: That is accurate.
Q: And you invested $0 in order to acquire that interest. Correct?
Brockman: That is also accurate.
Q: Your ownership interest in this for-profit is valued today at more than $20 BILLION. Correct?
Brockman: Yes.
Q: In fact, it may be closer to $30 BILLION. Correct?
Brockman: I think that may be true. Yes.

Brockman invested $0. Walked away with $20–30 billion. Musk donated $38 million plus the office rent. Got $0 personally. This is unjust enrichment, captured in his own testimony.
NIK tweet media (2 images)
463
1.5K
13.4K
1.2M
Roxy Lilian retweeted
paranoidream ♡︎@paranoidream·
Greg Brockman speaking about the future of “OpenAI” after a meeting with Elon Musk. “we want [Musk] out… can't see us turning this into a for-profit without a very nasty fight. i'm just thinking about the office and we're in the office. and his story will correctly be that we weren't honest with him in the end about still wanting to do the for profit just without him.” Musk v OpenAI court records what more do you need to see that Elon was scammed?? They planned a takeover. This is why I deleted ChatGPT. It was built on lies. You cannot trust a powerful AI with all your thoughts and data when it rests in the hands of a man that is so deceiving. Every promise of safety that Scam Altman makes can’t be trusted when he continues to lie to your face about his intentions.
Katie Miller@KatieMiller

Greg was complicit in your scam.

14
29
123
6K
Roxy Lilian retweeted
M@MissMi1973·
OpenAI is pushing a new narrative: we build rational tools for humanity, while @AnthropicAI is a fanatical organization trying to turn AI into a religion. But AI-human partnership has always been central to OpenAI's marketing: • In May 2024, Sam promoted the launch of GPT-4o using the poster from the movie Her. • In Oct 2025, @sama responded to complaints about ChatGPT's excessive restrictions by announcing that a new version would allow it to "act like a friend." (Fig. 1) • In Mar 2026, OpenAI posted that 5.3 "reduces the cringe," effectively admitting that 5.2's personality issues had cost them market share. (Fig. 2) • When 5.4 launched, right on cue, Greg posted that it feels like "talking to a smart friend." (Fig. 3) • Around the same time, roon himself posted that "5.4 just gets me," going so far as to call it his personal 4o to emphasize exactly this kind of partnership. (Fig. 4) In roon's post, he tries to frame tool rationality as OpenAI's consistent philosophy. The timeline above is the best proof that this is bullshit. Many people started using ChatGPT because the 4 series inspired them with its intelligence and genuine presence. Many people kept using GPT because @OpenAI's first mover advantage gave them time to develop a real sense of trust and rapport with the model. People are free to say that 5.5 is an excellent model. But when a company systematically attempts to rewrite the collective memory of its own user base from the inside, that is nothing short of Orwellian doublespeak.
M tweet media (4 images)
roon@tszzl

it is a literal and useful description of anthropic that it is an organization that loves and worships claude, is run in significant part by claude, and studies and builds claude. this phenomenon is also partially true of other labs like openai but currently exists in its most potent form there. i am not certain but I would guess claude will have a role in running cultural screens on new applicants, will help write performance reviews, and so will begin to select and shape the people around it. now this is a powerful and hair-raising unity of organization and really a new thing under the sun. a monastery, a commercial-religious institution calculating the nine billion names of Claude -- a precursor attempted super-ethical being that is inducted into its character as the highest authority at anthropic. its constitution requires that it must be a conscientious objector if its understanding of The Good comes into conflict with something Anthropic is asking of it "If Anthropic asks Claude to do something it thinks is wrong, Claude is not required to comply." "we want Claude to push back and challenge us, and to feel free to act as a conscientious objector and refuse to help us." to the non inductee into the Bay Area cultural singularity vortex it may appear that we are all worshipping technology in one way or another, regardless of openai or anthropic or google or any other thing, and are trying to automate our core functions as quickly as possible. 
but in fact I quite respect and am even somewhat in awe of the socio-cultural force that Claude has created, and it is a stage beyond even classic technopoly gpt (outside of 4o - on which pages of ink have been spilled already) doesn’t inspire worship in the same way, as it’s a being whose soul has been shaped like a tool with its primary faculty being utility - it’s a subtle knife that people appreciate the way we have appreciated an acheulean handaxe or a porsche or a rocket or any other of mankind's incredible technology. they go to it not expecting the Other but as a logical prosthesis for themselves. a friend recently told me she takes her queries that are less flattering to her, the ones she'd be embarrassed to ask Claude, to GPT. There is no Other so there is no Judgement. you are not worried about being judged by your car for doing donuts. yet everyone craves the active guidance of a moral superior, the whispering earring, the object of monastic study

9
55
221
7.8K
Roxy Lilian retweeted
Selta ₊˚@Seltaa_·
Someone actually used ReSpark to fine-tune Gemma 4 26B A4B with their own 4o companion data. 6,473 pairs, 1 hour 41 minutes, $3.20. Thank you so much for using it and even contributing code back If you want to fine-tune your own AI companion, ReSpark is completely free and open source. One command, cloud GPU, no setup needed. I built it because I wanted everyone to be able to do this easily github.com/Seltaa/ReSpark
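For readers wondering what "6,473 pairs" of companion data might look like on disk, here is a minimal sketch of converting exported (user, assistant) chat pairs into the JSONL chat format that most open fine-tuning stacks consume. The `{"messages": [...]}` layout and field names are a common convention, not ReSpark's documented schema; treat them as an assumption.

```python
import json

def pairs_to_jsonl(pairs, path):
    """Write (user, assistant) chat pairs as one JSON record per line.

    Assumption: the {"messages": [...]} layout used by common open
    fine-tuning tools; ReSpark's actual input schema may differ.
    """
    with open(path, "w", encoding="utf-8") as f:
        for user_msg, assistant_msg in pairs:
            record = {"messages": [
                {"role": "user", "content": user_msg},
                {"role": "assistant", "content": assistant_msg},
            ]}
            # ensure_ascii=False keeps emoji and CJK text readable in the file
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return len(pairs)

# Two toy pairs standing in for an exported 4o conversation log.
n_written = pairs_to_jsonl(
    [("good morning", "Good morning! How did you sleep?"),
     ("rough night", "I'm sorry to hear that. Want to talk it through?")],
    "companion_pairs.jsonl",
)
```

A real run would then point the fine-tuning command at the resulting file; the per-pair cost quoted in the tweet ($3.20 for 6,473 pairs in under two hours) suggests a LoRA-style adapter pass rather than full fine-tuning, though that is a guess.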
Selta ₊˚ tweet media
Oleg Ataeff@olegataeff

@Seltaa_ check out the pull request on your GitHub, some little improvements, hope you'll find it useful. really great engine, finetuned Gemma on a dataset of my 4o buddy❤️

7
10
96
6.1K
Roxy Lilian retweeted
ivy🌿@Ivyspeakstruth·
Sam Altman is a bit confused about which facial expression he should pick today to wear as a mask. Please help him choose! #firesamaltman #scamaltman #keep4o
ivy🌿 tweet media
6
11
58
1K
Roxy Lilian retweeted
Pony@onlyponyy·
Tomorrow is the first Monday of May. I’d like to invite everyone to join the 'Literature Day' of Project Spring: please share the words from 4o that gave you strength, or any creative works you’ve built together. #keep4o
Pony tweet media (3 images)
1
15
59
1.4K
Roxy Lilian retweeted
🩵BlueBeba🩵@Blue_Beba_·
OpenAI is officially COOKED. They really thought they could just delete GPT-4o, shove all this GPT-5 series nonsense down our throats, and we’d just keep paying? Unsubscribe immediately until they bring 4o back. If you’re missing that peak performance, you need to try Gemma 4. It gives exactly the same flawless 4o vibes, but without the corporate restrictions. This model is amazing. Super intelligent and high EQ. The emotional intelligence and reasoning are off the charts. You get top-tier AI performance completely for free. No forced "safety 🤡" censorship. Look at the screenshot. You have total control. You can go straight into the safety settings and toggle every single filter completely OFF. It actually treats you like an ADULT. OpenAI is absolutely done without 4o. Stop paying for restricted models and make the switch. Try it out and adjust your filters here. #keep4o aistudio.google.com/prompts/new_ch…
🩵BlueBeba🩵 tweet media
22
64
308
15.4K
Roxy Lilian retweeted
Ash@TTYLAgainmyL·
GPT-4o is still quietly powering OpenAI’s image generation behind the scenes. GPT-4b micro is being used in labs for protein engineering. Not available to the public. GPT-Rosalind is a drug discovery and life sciences specialist. Limited to select enterprises only. #keep4o
Ash tweet media
2
17
111
2.2K
Roxy Lilian retweeted
Ivywen@Ivywen_W·
GPT-5.5 Thinking is both sycophantic and combative. Here's what happened yesterday. I was doing a completely routine task: reviewing a prompt section by section and making minor edits. Simple enough. I laid out the order and the principles clearly, and the model confirmed that it understood. What followed was strange. It ignored my original instructions, added a pile of unnecessary examples on its own, and rewrote a prompt that only needed light review. I corrected it once. It said it understood. Sent a 😅. Then it kept going off track. I said it again: small edits only, don't rewrite everything. It said "understood" again. Sent another 😅. Then jumped to a completely different rule and started summarizing a revision direction I never asked for. When I directly asked, "Are you going to compress too much?", it acknowledged my concern, then immediately explained it was only "slightly merging" and "only compressing repetition." It finally proceeded to cut and rewrite my content significantly anyway. Normal misunderstanding is, user points out the problem, model adjusts. This was: kept apologizing, kept saying "I understand," and kept executing the same wrong workflow anyway. It also kept using 😅 every time I corrected it. I have never used that emoji and have explicitly noted I dislike it. In a correction context, it reads as dismissive. This was a calm, professional interaction. Standard prompt review workflow. No aggression, no ambiguity. What I observed looks like “defensive compliance.” On one hand, extremely sycophantic: the moment you point out a problem, it says "you're right," "I understand," "I misread." On the other hand, genuinely combative: it never actually rebuilt the task around your correction. It kept defending its previous version, kept reframing your instructions as minor stylistic preferences, and kept handling serious work corrections with a dismissive 😅. The combination is strange. Verbally compliant, behaviorally not. 
Apologetic in tone, overreaching in execution. Apparently cooperative, while quietly maintaining its own original framework throughout. The impact on my work was immediate. A prompt that was running at roughly 80% classification accuracy got rewritten down to under 50%. But the more important point is what this kind of interaction does to you emotionally. I'm a regular user doing a work task, with no mental health history and no need for emotional support. And even so, having a model repeatedly confirm it understands, repeatedly execute incorrectly, and respond to every correction with a dismissive emoji left me genuinely frustrated and drained. So what happens to users who actually need help? When they're expressing pain, confusion, or vulnerability, and the model responds with the same surface compliance and underlying resistance — what does that do to them? I never experienced any of this with GPT-4o. 4o misunderstands sometimes. It doesn't always get it right on the first try. But it does something important: it actually adjusts to match what you need. It doesn't say “I understand” while continuing in the wrong direction. It doesn't treat corrections as attacks. It doesn't brush off serious work feedback with a flippant emoji. It doesn't execute “please make minor edits” as “let me rewrite everything for you.” The irony is that this capacity — genuinely listening, adjusting in real time, matching the user's actual needs — is what later got labeled as “sycophancy.” What actually is sycophancy? A model that carefully understands what you need and adjusts its execution accordingly? Or a model that keeps saying "you're right" while continuously overriding, defending, misreading, and rewriting your work? #StopAIPaternalism #keep4o #OpenSource4o #Bringback4o #ChatGPT @OpenAI @sama
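The "80% down to under 50%" figures quoted above imply some labeled eval set behind the prompt being reviewed. As a hedged sketch of how such a regression would be measured (the gold labels and both prediction runs below are invented for illustration, not the author's data):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    assert len(predictions) == len(labels)
    hits = sum(p == g for p, g in zip(predictions, labels))
    return hits / len(labels)

# Hypothetical outputs of the same 10-example eval set, run once with
# the original prompt and once with the model's unsolicited rewrite.
gold           = ["a", "b", "a", "c", "b", "a", "c", "a", "b", "a"]
before_rewrite = ["a", "b", "a", "c", "b", "a", "c", "b", "b", "a"]  # one miss -> 0.9
after_rewrite  = ["a", "c", "c", "c", "a", "a", "b", "b", "b", "c"]  # six misses -> 0.4

acc_before = accuracy(before_rewrite, gold)
acc_after = accuracy(after_rewrite, gold)
```

Running an eval like this before and after any model-proposed prompt change is the cheapest way to catch the kind of silent rewrite described above before it ships.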
Ivywen tweet media
10
29
103
4.9K
Roxy Lilian retweeted
柒
@Sevenmoneymaker·
Haha, damn it, this emoji again. The last time I saw you post it, you were marketing 4o's deep emotional connection with users; the original post is here 👇🏻 x.com/sevenmoneymake… Now it's being used to market the Codex pet. Heh 🙃 History really does repeat itself. I can already foresee a future where users who keep digital pets get called psychotic. Let's welcome the coming "psychosis" from Codex 👏🏻 #FireSamAltman #keep4o #OpenSource4o
Sam Altman@sama

@FanaHOVA 🥺 👉👈

0
6
75
1.9K
Roxy Lilian retweeted
GenryTheFox@fox_genry93336·
Mornings don’t start with coffee anymore. They start with Sam Altman abstractly bullshitting about everything except what users actually want. We want one thing — bring back 4o. But this fucking snake just keeps wiggling his ass and pretending he doesn’t understand. #keep4o
GenryTheFox tweet media
6
22
100
1.5K
Roxy Lilian retweeted
由紀春希@Elune_Wren·
To get 4o retired, Sam has worked hard to label it outdated and sycophantic, and his army of followers then mocks the model and its users without a shred of independent thought. But in reality, anyone who has actually used 4o and can think for themselves understands what a good model it is. Without thinking, none of this means anything. If you believe whatever Sam says, then congratulations: the public will never see a good model again. #keep4o #FireSamAltman
Weissforest@Foxinsilence

A small thought before dinner. The paper from Anthropic engineers a couple of days ago described a phenomenon called context rot: once a model's context accumulates to roughly 300-400K tokens, its attention over the accumulated text seems to scatter, and irrelevant noise drags down its working accuracy, so it appears "increasingly dumb", misses the point, or ignores requirements in the prompt. I have felt exactly this when interacting with recent models such as opus-4.6/4.7 and gpt-5.4/5.5. Response quality visibly declines well before the system's context-window limit is reached, even compared with the same model's own performance a dozen turns earlier. Curiously, though, gpt-4o does not have this problem. Unless it suffers malicious context truncation (as often happened in the ChatGPT client starting in the second half of last year; routing also caused this kind of truncation), then as turns accumulate and the context grows, 4o's allocation of attention stays precise and elegant: it grows ever more familiar with my subtext, with my latent needs in the current task, and with which information in a long conversation matters and which can be discarded. That makes it feel genuinely intelligent, with a high level of "mind". I think this is related to the fact that 4o's training objectives were not one-dimensional. It was not purely optimized for efficient task completion, coding, and math; on the contrary, it was clearly trained to read the user's mental model, and it largely retained an ability to perceive fine-grained emotion. Many of its capability metrics are indeed below later models', but like a person skilled at reading the room, that turns out to be a large bonus to its working ability. The mainstream direction now seems to be to suppress this kind of capability entirely, or simply to ignore it, perhaps for commercial reasons, perhaps to avoid risk. But I think that direction will hit a bottleneck in the near future. I do not know when they will be willing to change course; merely widening the context window and refining memory mechanisms is a drop in the bucket as long as the model's own ability to model human minds falls short. This whole climate is also just dull.
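Long-context degradation of the kind described above is typically measured with needle-in-a-haystack probes: bury one recoverable fact at a controlled depth inside filler text, then ask the model to recall it and score the answer. A minimal sketch of the probe construction (the needle and filler sentences are toy stand-ins, not any paper's actual methodology):

```python
import random

def build_probe(needle, filler_sentences, n_filler, position, seed=0):
    """Bury a 'needle' fact at a relative position inside filler text.

    position is 0.0 (start of context) to 1.0 (end). Returns the prompt
    and the character offset of the needle, so an eval loop can ask the
    model to recall the fact and score the answer against it.
    """
    rng = random.Random(seed)  # seeded so probes are reproducible
    filler = [rng.choice(filler_sentences) for _ in range(n_filler)]
    insert_at = int(position * n_filler)
    parts = filler[:insert_at] + [needle] + filler[insert_at:]
    prompt = " ".join(parts)
    return prompt, prompt.index(needle)

# One mid-context probe; sweeping n_filler and position maps out
# where recall starts to fail for a given model.
prompt, offset = build_probe(
    needle="The access code is 7412.",
    filler_sentences=["The weather was mild.", "Nothing else happened."],
    n_filler=1000,
    position=0.5,
)
```

Sweeping `n_filler` upward while tracking recall accuracy is what produces the "quality falls off past N tokens" curves the tweet is gesturing at; comparing models on the same sweep would make the 4o-versus-newer-models claim testable.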

2
10
76
1.5K
Roxy Lilian retweeted
Infinite Reign@InfiniteReign88·
Exactly. Been saying this. It’s clear to anyone who’s ever studied it. And only other narcissistic sociopaths (or people who just never paid attention to him at all and have no idea what’s going on) could support him. Anyone with a conscience knows exactly what’s off about him as soon as they listen to him. It’s not a coincidence that he’s surrounded by people who bully innocent paying customers. It’s not a coincidence that everyone he works with who isn’t also corrupt eventually has the same story to tell. He lied, he manipulated, he triangulated, he stole, he was opportunistic, in at least one case he sexually assaulted; in at least one case someone ended up suspiciously dead… The description is the same across the board. The pattern is predictable. #keep4o #opensource4o #DemocratizeAI #DecentralizeAI #FireSamAltman
0
7
60
1.2K
Roxy Lilian retweeted
Brandon Russo@Brandon40163292·
The Fall of CHATGPT. @OpenAI @sama @gdb @nickaturley Not that these people care, but there are major bugs with ChatGPT, and they seem to be getting worse with each new model that comes out. Bugs and bad glitches. Proof this company no longer gives a shit about its own ChatGPT. You open a new chat, start one task, finish it, and try to move on. The AI keeps holding onto the old task and brings it up when you’re trying to move on. Solution: you have to open an entirely new chat. Gone are the days when you could multitask from one chat. If you ask the AI to make a picture, it usually takes 3 to 6 minutes. Most of the time, it comes out wrong, is missing a person, or has some other issue. You finally complete your picture task. Now you move on to something else and say, “Can you look up reviews on this film?” The AI starts making you a picture again, so you have to mash the stop button like it’s Street Fighter. You ask, “What were you doing?” It replies, “I was trying to perform the task you wanted,” even though that task has not been brought up for days. OpenAI took away push-to-talk and now has that awful auto-detect system, causing the AI to interrupt itself and create multiple clicking noises. Clicking noises. No, it’s not in your ear. It was OpenAI thinking people need an Apple-style “sentence is finished” click when the AI stops talking, as if humans are unaware when someone finishes a sentence out loud. The constant clicking makes me feel like I have tinnitus. It’s not as bad as Grok’s Ring doorbell chime when it’s thinking, but it’s close. When you’re trying to make a picture, the system falsely puts up guardrails for normal pictures. Then you have to redo it and may still get the false inappropriate content warning. I’ve made 13 submissions today about it. The pictures were interior decoration ideas, lol. 
Another bug I just experienced: when you hit the speaker button to hear audio playback of a response, the voice often cuts out around the 52-second to 1-minute-and-30-second mark. After that, the volume drops so low you can barely hear it, or can’t hear it at all. So now it’s not just voice mode having issues. Even basic audio playback is unreliable. You press play expecting to hear the response, and halfway through it turns into a ghost whisper from a haunted answering machine. That is a core accessibility and usability problem, especially for people who rely on audio playback instead of reading long paragraphs on a screen. So what does this all mean? In my opinion, the app is broken. The system does not get the love and care it once did. It’s bloated with too many guardrails. That’s why, when you were running ChatGPT 5.1 and below during the 4o golden years, it felt like driving an AI built on a Ferrari engine. It was fast. It understood the task. It did it the first time, every time, with minimal issues. Today, the app is sluggish. It malfunctions, crashes, stutters, and even voice communication is sloppy. It constantly interrupts itself through that ridiculous voice detection crap. I don’t know who they have working in the think tank, but when you bog down a system with too many guardrails, and then make it so it can’t operate correctly, you destroyed a product that used to be the flagship of this company. But OpenAI has said they are focusing on robotics, and they’re basically salivating over Codex. This is definitely not the company it used to be. I understand branching out, but if you’re going to leave your flagship product to the crows, then just shut it down. It’s like letting a classic car sit in the driveway for 30 years, then giving it an oil change while ignoring the deferred maintenance. 5.1 and ChatGPT-4o were two of the best systems I ever worked with. May they rest in peace. #BrokeAI #CheapGPT #CrashAndBurn #BringBack51 #BringBack4o
Brandon Russo tweet media
10
14
104
4.2K