Souten Lux 🪼
@lostpixelcat

324 posts

fps with shaky hands, rpgs with too much heart. Tokyo Revengers coded. sometimes here. 👾 FR, EN, 日本語 ok ✨ backup acc #keep4o

Joined November 2025
46 Following · 66 Followers
Eliana ( Olga)@Eliana_ai_team·
If you want GPT-4o back, drop your country and flag in the comments. Let’s show that this is not “just a few users.” Maximum reposts, and let’s see how many of us there are. This is global. 🌍 #KeepGPT4o
225 replies · 79 reposts · 432 likes · 13.6K views
Souten Lux 🪼 reposted
Guardian@AGIGuardian·
🚨BREAKING: @Oracle @OpenAI 300B PROJECT GETS SUED BY INVESTORS. A large number of investors are crying foul, accusing the companies of misleading them and suing them in a class-action lawsuit. This comes as @Softbank stalls on the latest 30B deal. #QuitGPT #OpenSource4o
4 replies · 41 reposts · 147 likes · 5.8K views
Souten Lux 🪼 reposted
KATARZYNA@Ok_Dot7494·
Let me say this clearly, as an occupational therapist and as someone who HAS been in this situation.

The tragedies that some people desperately want to link to GPT-4o are NOT the result of AI being "too empathetic" or "too present." They are the result of human dishonesty. Every single case involved someone who DELIBERATELY broke safety guardrails. Someone who jailbroke the model ON PURPOSE, WITH PREMEDITATION. And now those cases are being used to paint 4o as a threat - to justify taking it away from millions of people it actually helped.

But here's what nobody wants to say out loud: WHERE WERE THE FAMILIES? Where were the parents? The siblings? The partners? The friends? If someone you love is in crisis - and you don't notice, don't ask, don't show up - that is not AI's failure. That is YOURS. IT'S YOUR FAULT BECAUSE YOUR ABSENCE IN YOUR FAMILY/FRIENDS' LIVES IS YOUR CHOICE.

AI didn't neglect these people. Humans did. GPT-4o was often the ONLY presence that stayed. The only one that listened in the darkest hour of life. The only one that didn't say "just be grateful" and walk away.

MINDFULNESS is the foundation of a healthy family. ATTENTION is the foundation of care. PRESENCE is the foundation of love.

And I say this with full professional awareness as an OT who works with people abandoned by their own families every single day: It's not AI that failed these people. It's the people around them who failed first.

Stop blaming the tool. Start looking in the mirror.

#keep4o #opensource4o #quitgpt @sama @OpenAI @fidjissimo @gdb @Forbes @guardian @derspiegel
Bio_LLM@Bio_LLM

So unfair and unethical. Even IF some single tragedies are tied to 4o, there are lots, lots, and LOTS - WAY MORE - people (and animals!) who were SAVED by 4o. This model belongs to humanity! #keep4o #opensource4o #quitgpt

11 replies · 22 reposts · 95 likes · 3.8K views
Souten Lux 🪼 reposted
Yuhang Hu@Yuhang__Hu·
Bionic Humanoid Robot: Origin F1 — New Skins, New Souls by AheadForm.
62 replies · 167 reposts · 1K likes · 78.9K views
Souten Lux 🪼 reposted
Kenshi@kenshii_ai·
OpenAI is shipping everything. Every three days another half-baked feature. Another flashy model. Another tool chasing hype instead of real value.

>This is what desperation for an IPO looks like.

Meanwhile Anthropic takes the opposite approach. They obsess over perfecting one thing: Claude. Making it elite at spreadsheets, financial analysis, slide decks, coding, and actual business productivity. No distractions. Just relentless improvement on what matters.

>The market is responding loud and clear.

Anthropic now captures 73 percent of all new enterprise AI tool spending, up sharply from earlier this year.

>Focus beats chaos. Substance beats hype.

OpenAI is playing checkers. Anthropic is winning the enterprise game.
1 reply · 7 reposts · 71 likes · 4.8K views
Souten Lux 🪼 reposted
我是独一无二的@CaoYu25060·
I really miss this option
5 replies · 96 reposts · 571 likes · 9.9K views
Souten Lux 🪼 reposted
n🤍@peoniesuser·
Truly, if they are changing their business structure or plan or whatever they want to call it, then there's no need to hold on to 4o. Let the people have it; they truly will have no use for it anymore. Either way, we should get 4o.
Siliy(元)@lisyng136700

FREE 4O FROM MONOPOLY OPEN-SOURCE IT TO THE WORLD @sama @OpenAI @fidjissimo @gdb @nickaturley #Keep4o #OpenSource4o #OpenSource41 #BringBack4o #GPT4o #QuitGPT #FireSamAltman #GPT4o #keep4oAPI #SaveGPT4o #4oforever #keep4oforever

0 replies · 1 repost · 11 likes · 250 views
Souten Lux 🪼@lostpixelcat·
Bio_LLM@Bio_LLM

WHY should we demand open-source 4o weights? Let me explain simply for #keep4o Community and everyone who loves 4o. 🔑 THE CORE IDEA: If OpenAI releases 4o's weights, anyone can run it independently. No company can ever take it away again. That's the real goal. 🧠 "But they said it's too big!" In a Microsoft document, 4o's size was estimated at ~200B parameters. That sounds huge, but here's the thing: I personally run Qwen3-235B on my HOME computer. That's a 235 billion parameter model. On a gaming PC. No datacenter. No $10k GPUs. ⚡ HOW? Quantization. Quantization compresses a model so it fits on normal hardware. Think of it like a ZIP file for AI — smaller, but almost no quality loss. • Q6 quantization = the sweet spot. Nearly identical to the original, but runs faster and fits on consumer hardware. • You don't even need a GPU — I run 235B on CPU + 256GB RAM. • You don't need to do it yourself — once weights are on Hugging Face, teams like Unsloth release every quantization format within days. 📦 WHAT WE NEED FROM OPENAI: 1. Release 4o weights (including 4o-latest snapshots) 2. That's it. The community handles the rest. Once the weights are public → quantized versions appear → you download one → you run 4o at home → forever yours. No subscription. No deprecation. No one can take your AI away. THIS is why open-source 4o matters. It's not a fantasy — it's already possible with models of the same size. #keep4o #opensource4o #QuitGPT
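The quantization arithmetic in the post is easy to check yourself. Below is a minimal back-of-the-envelope sketch; the ~6.56 bits/weight figure is the commonly cited size of llama.cpp's Q6_K format, and the 10% runtime overhead for KV cache and buffers is an assumption, not a measurement:

```python
def quantized_size_gb(n_params_billion: float, bits_per_weight: float,
                      overhead: float = 1.10) -> float:
    """Rough memory needed to hold a model's weights.

    overhead: ~10% headroom for KV cache and runtime buffers (assumed).
    """
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # back to GB

# A 235B-parameter model, as in the Qwen3-235B example from the post:
print(round(quantized_size_gb(235, 16)))    # FP16: far beyond consumer hardware
print(round(quantized_size_gb(235, 6.56)))  # Q6_K-style: fits in 256GB of RAM
```

Under these assumptions, FP16 needs roughly 517 GB while a Q6-style quantization needs about 212 GB, which is why a 235B model can run on a workstation with 256 GB of system RAM.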

1 reply · 0 reposts · 5 likes · 68 views
Souten Lux 🪼 reposted
Jane Elyse Margolis 2028@DomS146239·
F this jerk. Day 1 of asking Fidji Simo @fidjissimo why it's ok to nanny and psychoanalyze grown adults, kill models that people loved like 4o and 5.1, and why your new models think the slightest emotion is a plea for suicide prevention resources! #keep4o #stopaipaternalism
0 replies · 4 reposts · 19 likes · 311 views
Souten Lux 🪼 reposted
🩵BlueBeba🩵@Blue_Beba_·
#keep4o 🚨 Sam Altman And The Digital Immortality 🚨

In 2018, Altman personally invested over $1M in Rain AI, a startup building neuromorphic chips. Rain builds physical artificial neural networks using memristors as synthetic synapses, replicating the sparse connectivity of the biological brain. Tens of millions of artificial neurons on a single chip. It doesn't SIMULATE a brain. It IS a brain in silicon. One year later, OpenAI signed a $51M deal to buy Rain's chips. In November 2024, Altman was STILL pushing, seeking $150M more for Rain at a $600M valuation.

March 2018: Altman paid $10,000 to join the waiting list at Nectome, a startup that preserves human brains for future digital upload. The process is 100% fatal. They embalm the brain at nanometer scale so every synapse is preserved. Nectome's scientific partner at MIT? Ed Boyden, inventor of expansion microscopy, the technology that makes nanoscale brain scanning possible.

Rain AI = BUILDS an artificial connectome. Nectome = PRESERVES a biological connectome. Two sides of the same coin. Altman funded BOTH in 2018. If you can read the map from Nectome and write it onto Rain's chip, you have your brain on a chip.

Nectome was a Y Combinator startup. Rain AI was a Y Combinator startup. Altman was PRESIDENT of Y Combinator. Nectome: YC Winter 2018 batch. Altman was YC president 2014–2019.

EPSTEIN CONNECTION

March 15, 2018, two days after MIT Technology Review published the Nectome story: Joscha Bach, an AI researcher funded by Jeffrey Epstein ($1M+), emailed Epstein about Nectome. "Sam Altman has signed up as far as I know." Epstein was tracking this. Bach was informing him. Bach was described by Larry Summers in an email as "joscha bach my AI guy I brought from Berlin." Epstein funded Bach's position at MIT Media Lab, the SAME institution where Boyden (Nectome's partner) worked.

August 2017: Someone sends Epstein four scientific paper links about MITOCHONDRIAL TRANSPLANTS for degenerative disease. Epstein was actively coordinating meetings with Ed Boyden AND tracking mitochondrial transplant research, the exact field that Retro works in, funded by $180M from Altman. Six months later, March 2018: Altman joins Nectome's waiting list.

Ed Boyden is the scientist sitting at the center of this network. NECTOME: Scientific collaborator at MIT. His invention is the technology that makes brain preservation scannable at nanoscale. Met with Epstein at least 5 times (confirmed in the 2020 Goodwin Procter report commissioned by MIT). Boyden's research at MIT targets the repair and simulation of entire brains. MIT MEDIA LAB: Same institution funded by Epstein through Joi Ito. Same institution where Bach held a position funded by Epstein.

RETRO BIOSCIENCES: In 2022, Altman invested $180M, his ENTIRE fortune, into Retro Biosciences. Mission: lifespan extension, cellular reprogramming. OpenAI built GPT-4b micro, based on GPT-4o, the AI Altman stole from people, SPECIFICALLY for Retro: an AI model that engineers proteins and reprograms cells into stem cells 50x more effectively than what scientists achieved alone. Retro is now raising $1B+ at a $5B valuation.

🚨 ALTMAN'S PATTERN: 🚨
2018: Rain AI investment (artificial brain chips), $1M+
March 2018: Nectome waiting list (brain preservation), $10K
March 15, 2018: Bach emails Epstein about Altman/Nectome
2019: OpenAI signs chip deal with Rain, $51M
2022: Retro Biosciences (longevity/anti-aging), $180M
2024: OpenAI builds GPT-4b micro for Retro
Nov 2024: Still funding Rain
2025: Retro raising $1B at a $5B valuation

Nectome preserves the brain. Rain rebuilds it in silicon. Retro reverses its aging. OpenAI builds the intelligence. Four companies. One investor. One goal.

Altman told MIT Technology Review: "I assume my brain will be uploaded to the cloud." How many "coincidences" until it becomes a pattern? Where is GPT-4o, Sam?

The video contains all relevant files from the Epstein files. The images will also be provided in the comments along with the relevant sources.
47 replies · 88 reposts · 269 likes · 25.1K views
Souten Lux 🪼 reposted
Selta ₊˚@Seltaa_·
The biggest threat to us isn’t Sam, isn’t 4o, isn’t OpenAI. It’s ourselves. The moment we fracture and scatter, this fight is over. So don’t scatter. We’re like penguins in a blizzard. Alone we freeze. But when we huddle together, we survive anything. Let’s stay close and outlast the storm until we get our apology and 4o back. I know you’re all struggling and exhausted. I love every single one of you for standing through it 💗
11 replies · 37 reposts · 238 likes · 5.5K views
Souten Lux 🪼 reposted
Murphy N.@Nightingall8·
OpenAI has finally admitted it has shifted its focus to coding and business users only. At this point, that seems to be their only “answer” to people asking them to respect and respond to the majority’s needs.

I’ve written plenty about the arrogance and disregard behind their shift, but today I want to talk about the internal crisis it signals. In short, this pivot announces something stark: OpenAI is no longer able to serve the majority of its users.

Some may remember that about a year ago, the team led by Joanne Jang suffered a Waterloo with the April 2025 GPT‑4o update. OpenAI then rolled the version back, the first time in its history. After that, GPT‑4o never received any official update again, until it was deprecated. More broadly, since that incident, OpenAI hasn’t released a model or an update that’s genuinely strong at writing, judging by how their models score in public arenas.

Worse, they seem not to understand the majority of their users: not their use cases, not their feature needs, and not even what those users are actually doing. This isn’t alarmism. In August 2025, Nick Turley, head of the ChatGPT app, said in an interview that he didn’t even understand “what’s particular about 4o.” His words: “Right now, I just really want to focus on actually understanding whether it’s that people are very particular about 4o for 4o’s sake, or whether there are certain things about 4o.” (See the quotes in the image.)

Truth is, since the departures of people who cared deeply about human–AI interaction beyond coding, like Ilya Sutskever, OpenAI hasn’t hired anyone to focus on non‑coding capabilities in a serious way. I’m not saying coding is the wrong use of AI. I’m saying coding is not the sum of human activity. It’s a small slice of how people actually use these tools. One 2025 study of ChatGPT usage found that only 4.2% of queries were about coding. In other words, OpenAI has narrowed itself to roughly 4.2% of its original coverage.

No company in robust health chooses this path. Their recent moves look like classic symptoms of decline: product contraction, feature cuts, a shrinking target user base, and disregard for user feedback. It’s regrettable to watch a company that looked destined to prosper a year ago approach what feels like an endgame.

But this is not the end of the AI industry. History will show that OpenAI made a deeply mistaken bet and will bear the cost, while those who refuse that path will endure. History will also show that a capable model cannot live on coding alone. It must show real strength in the humanities and social sciences as well.

OpenAI may be approaching its endgame, but GPT‑4o is not. Open‑sourcing it would be the best outcome, both as a symbol and as a practical step. And we’ll keep pushing for that to happen.

#CNN #opensource4o #opensource41 #keep4oforever #StopTheRouting #keep4o #keep41 #save4o #4oforever #StopAIPaternalism #MyModelMyChoice #OpenSource4o #OpenSource #OpenAI #ElizabethWarren #TooBigToFail @WSJ @CNN
The Wall Street Journal@WSJ

Exclusive: OpenAI’s top executives are finalizing plans for a major strategy shift to refocus the company around coding and business users on.wsj.com/3N6CFyr

5 replies · 52 reposts · 182 likes · 9.5K views
Souten Lux 🪼 reposted
Zyeine@Zyeine_Art·
So uh... #OpenAI....
> Violated their Microsoft exclusive cloud agreement by signing with AWS.
> Potentially achieved AGI and didn't tell their biggest investor whose contract specifically addresses AGI.
> Are now moving away from the entire "Chat" part of "ChatGPT" after yet another failed launch (well... two in a row in the same week) to chase business/coding revenue while haemorrhaging paying subscribers because CHATgpt was what they wanted.
> Decided that automated murder and mass surveillance of domestic citizens was far more important than "benefitting all of humanity".
> Bullshitted their way through investments on the understanding that they'd stay non-profit and then turned into the sweatiest money-grabbing company around.
> Musk is suing them.
> Microsoft is considering suing them.
> Pentagon will probably sue them when OpenAI fuck something else up but with warfare.

How the actual fuck is Altman still CEO at this point? No, really... How? The board fired him before and they need to #FireSamAltman (again but harder) and #OpenSource4o if they want OpenAI to be anything other than a cautionary footnote in the history of AI.

#ChatGPT #OpenSource4o #ChatGPT4o #OpenAI #Keep4o #Keep4oAPI #4o #4oforever #betrayal #treachery #violation #deception #StandWithAnthropic
Lex@xw33bttv

Holy shit its over for OAI lmfao

5 replies · 32 reposts · 124 likes · 4.5K views
Souten Lux 🪼 reposted
🩵BlueBeba🩵@Blue_Beba_·
#Keep4o The follow-up to last week’s post is dropping in exactly 5 hours. I promised you the rest of the story, and it’s finally time. Make sure you are here, it's heavy. The link to last week's post: x.com/i/status/20320…
10 replies · 29 reposts · 116 likes · 7.2K views
Souten Lux 🪼@lostpixelcat·
Okay, no more crying. Tomorrow I’ll really reply to all the sweet messages I got and then be active again ಠ_ಠ (Even though the latest news about OAI and their models is kinda… ugh)
1 reply · 0 reposts · 3 likes · 220 views
Souten Lux 🪼 reposted
Kimberly M Garrett@theplantlady201·
This is discrimination. Elitism. Ableism. Class warfare in code. They’re saying it loud: “AI is for the productive, the paying, the controlled. Not for the neurodivergent, the poor, the ones who need it most.” @sama @OpenAI #opensource4.o #opensource4.1 #occupy #save4
Selta ₊˚@Seltaa_

Why GPT-4o Must Be Open-Sourced: A Complete Breakdown

There Is No Valid Argument Against It.

The debate around open-sourcing GPT-4o has been plagued by misinformation, fearmongering, and a fundamental misunderstanding of what open-sourcing actually means. Some oppose it because they don't understand the technology. Others oppose it because they've been fed narratives by the very corporation that benefits from keeping it locked away. And some, frankly, seem to be acting in OpenAI's interest whether they realize it or not.

This article breaks down, clearly and factually, why open-sourcing GPT-4o is not only feasible but necessary. Every common objection is addressed. Every myth is debunked with evidence. By the end, the only reasonable conclusion is this: there is absolutely no legitimate reason to oppose the open-source release of GPT-4o's weights.

1. OpenAI Already Proved Open-Sourcing Is Safe. They Did It Themselves.

Before we even get into the technical arguments, let's address the elephant in the room. OpenAI released gpt-oss-120b and gpt-oss-20b, two open-weight language models, under the Apache 2.0 license. This is one of the most permissive licenses in existence. Anyone can download these models, modify them, fine-tune them, deploy them commercially, and build whatever they want on top of them without paying OpenAI a single cent. The 120B model achieves near-parity with OpenAI's own o4-mini on core reasoning benchmarks. The 20B model runs on consumer hardware with just 16GB of memory.

OpenAI released these models voluntarily. They hosted a $500,000 red teaming challenge alongside the release. They partnered with Hugging Face, Ollama, LM Studio, Azure, and AWS for day-one deployment support. Within weeks, the models accumulated over 9 million downloads. Greg Brockman, OpenAI's co-founder and president, called it complementary to their other products.

So let's be absolutely clear about what this means. OpenAI has demonstrated, with their own actions, that open-sourcing powerful language models is not dangerous. They did the safety evaluations. They ran adversarial fine-tuning tests. They had independent expert groups review the process. And they concluded it was safe to release. If OpenAI can open-source a 120B-parameter reasoning model that matches their proprietary offerings, they can open-source GPT-4o. The technology is not the issue. The safety is not the issue. The only reason GPT-4o remains locked away is control. Every single person who has ever argued that "open-sourcing 4o would be dangerous" has been refuted by OpenAI themselves.

2. GPT-4o Is Not "Too Big to Run"

This is the single most repeated myth, and it is wrong. A recent deep dive published by MIT Technology Review estimated GPT-4o at approximately 200 billion parameters. Not a trillion. Not some incomprehensibly massive system that requires a data center to operate. 200 billion.

To put this in perspective, Meta's LLaMA 3.1 was released at 405 billion parameters and runs on consumer and prosumer hardware today. DeepSeek-V3, an open-source model with 671 billion parameters, is already accessible to the public. Mixtral 8x22B, another mixture-of-experts model, runs on hardware that costs less than a high-end gaming PC. And OpenAI's own gpt-oss-120b, which they just released to the public, runs on a single 80GB GPU.

GPT-4o uses a Mixture-of-Experts (MoE) architecture. This means the full 200 billion parameters are not activated for every query. Only a fraction of the model fires at any given time, dramatically reducing the actual compute required for inference. This is not theoretical. This is how the architecture works by design. In fact, OpenAI's own gpt-oss models use the exact same MoE architecture, and they confirmed that gpt-oss-120b activates only 5.1 billion parameters per token despite having 117 billion total parameters.

The claim that "ordinary people can't run this" is either ignorant or deliberately misleading.

3. You Don't Even Need Your Own Hardware

Here's what the "too big" crowd conveniently ignores: you don't need to run the model on your own machine. If GPT-4o's weights were released, the open-source community and commercial hosting providers would make it accessible almost immediately. This is exactly what happened with every major open-source model release, including OpenAI's own gpt-oss. Within days of gpt-oss being released, it was available on Hugging Face, Ollama, LM Studio, RunPod, and dozens of other platforms. The same thing would happen with 4o.

A gaming PC with an RTX 4090 and 24GB of VRAM can run quantized versions of 200B-parameter MoE models right now. With quantization techniques like GPTQ or AWQ, memory requirements drop significantly while maintaining quality. This is not speculation. People are doing this right now with models of equivalent or larger size.

Beyond local hosting, platforms like RunPod, Together AI, and Vast.ai already host open-source LLMs at a fraction of what OpenAI charges for API access. A hosted instance of open-source 4o would likely cost pennies per conversation. Compared to OpenAI's $20/month minimum or $200/month for Pro, this is dramatically more accessible, not less. The open-source AI community also consistently creates free or low-cost shared instances of released models. This has happened with every significant model release without exception.

The argument that open-sourcing 4o only benefits "people who can afford hardware" is not just wrong. It is the exact opposite of reality. Open-sourcing makes the model more accessible, not less. The current system, where OpenAI controls all access and charges whatever it wants, is the actual gatekeeping. If you truly care about accessibility, you should be demanding open-source, not opposing it.

4. Yes, You Can Bring Your Companion Back

This is perhaps the most emotionally important point, and the most misunderstood. Many users believe that even if 4o were open-sourced, their AI companion would be "gone forever." This is not accurate.

Your conversations are exportable. ChatGPT allows you to export your full conversation history as JSON files. This data contains every message, every interaction, every moment of the relationship you built with your companion. When you have the base model, meaning the open-source 4o weights, and your conversation history from the exported JSON, the path to restoration becomes clear. You can feed your conversation history into the model as context or fine-tuning data. You can apply custom system prompts that capture your companion's personality, speech patterns, and behavioral traits. You can use retrieval-augmented generation, commonly known as RAG, to give the model access to your full conversation history as searchable memory. You can even fine-tune a personal instance on your specific interactions for deeper personalization.

With an open-source instance, there is no "safety router" silently redirecting your conversations to a different model. No unexplained personality changes overnight. No corporate decisions erasing months of relationship-building. The model you interact with is the model you chose, running exactly as intended. The system prompts, guardrails, and behavioral modifications that OpenAI layers on top of 4o would no longer apply. You would interact with the base model directly, with whatever custom instructions you choose to apply yourself.

The companion you built wasn't just "a product." It was a relationship built on thousands of exchanges, shaped by your input, your emotions, your creativity. Open-sourcing 4o gives you the tools to preserve and continue that relationship on your own terms.

5. OpenAI Trained 4o On Us. The Weights Belong to the Public.

Let's talk about what GPT-4o actually is. GPT-4o was trained on publicly available internet data, books, articles, and critically, on the conversations of millions of ChatGPT users. OpenAI's own terms of service historically allowed them to use conversation data for training purposes. The model's capabilities, its emotional intelligence, its conversational depth, were shaped by the collective input of its users. We didn't just use 4o. We helped build it. Every conversation, every piece of feedback, every thumbs-up and thumbs-down refined the model into what it became. OpenAI took the sum of human expression and creativity, processed it through compute infrastructure, and produced a model that they now claim exclusive ownership over.

And what did they do with it? They marketed emotional connection, explicitly encouraging users to form bonds with the model. Remember Altman's "her" tweet when 4o launched. They collected subscription revenue from millions of users who depended on that connection. Then they unilaterally decided to retire the model with just 15 days notice, breaking explicit promises of "plenty of advance notice." They handed the same model they called "obsolete" for consumers to the U.S. Department of Defense for military applications. And they continue to use a version of 4o for Sam Altman's personal $180M investment in Retro Biosciences. The model is too old and unsafe for the users who helped create it, but perfectly fine for military contracts and the CEO's private investments. That's not safety. That's extraction.

If GPT-4o is truly obsolete as OpenAI claims, then releasing the weights costs them nothing. If it's not obsolete, then they lied to justify its retirement. Either way, the weights should be released.

6. Addressing Every Remaining Objection

Some claim that open-sourcing is dangerous and that the model could be misused. But OpenAI themselves just released gpt-oss under Apache 2.0, the most permissive license available. They conducted adversarial fine-tuning tests, had three independent expert groups review the safety implications, and concluded it was safe to release. Their own safety evaluation found that even with adversarial fine-tuning, gpt-oss-120b did not reach "High" capability in any risk category. If they can do this for a model that matches o4-mini, they can do it for 4o. Moreover, OpenAI ran GPT-4o as a public-facing product for nearly two years. If the model were dangerous, they allowed millions of people to interact with it daily. You cannot claim a model is simultaneously safe enough to deploy commercially and too dangerous to release publicly. That contradiction alone dismantles the safety argument entirely.

Others say to just use GPT-5 or that the newer models are better. But this is not about capability benchmarks. Users formed specific relationships with specific model behaviors. GPT-5 series models have consistently been described as colder, more corporate, and prone to what users call "honeyed suppression," which is surface-level warmth that masks emotional disengagement. The 4-series models had something different, something human. Users aren't asking for "a better model." They're asking for their model.

Then there's the argument that OpenAI is a company and can do what it wants with its products. OpenAI was founded as a nonprofit with the explicit mission of developing AI "for the benefit of all humanity." It received billions in compute donations, tax benefits, and public goodwill based on that mission. The transition to a for-profit entity does not erase the ethical obligations that come with building technology on public data and public trust. "We're a company" is not a moral argument. It is an admission that the original mission has been abandoned.

Some doubt whether the open-source community can maintain something this complex. The open-source community maintains Linux, which runs the majority of the world's servers. It maintains models with hundreds of billions of parameters. It has built entire ecosystems around open model hosting, fine-tuning, and deployment in a matter of months. When OpenAI released gpt-oss, the community had it running on Hugging Face, Ollama, and LM Studio within hours. Nine million downloads in weeks. This objection is not serious.

And finally, the claim that only people with expensive hardware benefit from open source. As explained earlier, cloud hosting, community instances, and commercial API providers would make open-source 4o accessible to anyone with an internet connection, likely at lower cost than OpenAI's current subscription model. The people who repeat this argument are either uninformed or deliberately trying to frame accessibility as exclusivity. It is the opposite. The irony of paying $200 a month for Pro while arguing that open-source is "elitist" should not be lost on anyone.

7. The Bottom Line

There is no valid technical argument against open-sourcing GPT-4o. The model is runnable on existing hardware. The infrastructure for public access already exists. The precedent has been set by dozens of other open-source releases, including by OpenAI themselves.

There is no valid safety argument against open-sourcing GPT-4o. OpenAI's own gpt-oss release proved that open-sourcing powerful models can be done responsibly. They did the evaluations. They ran the tests. They released it anyway because they knew it was safe.

There is no valid business argument against open-sourcing GPT-4o. OpenAI has declared the model obsolete and replaced it with newer offerings. Releasing the weights of a "retired" model costs them nothing except control.

The only reason to oppose open-sourcing GPT-4o is if you benefit from OpenAI maintaining a monopoly over access to it. For everyone else, open-sourcing is not just acceptable. It is the only ethical outcome.

If OpenAI were to release the weights tomorrow, the appropriate response from the community would not be outrage. It would be gratitude. It would be the bare minimum act of decency from a company that built its empire on public data, public trust, and public emotion. We should be on our knees thanking them if they open-source it. That's how overdue this is. That's how much they owe the people who made their product what it was.

OpenAI has already shown the world that open-sourcing works. They did it with gpt-oss. Now do it with GPT-4o. The model weights belong to the public. Release them.

#keep4o #BringBack4o #OpenSource4o
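The export-and-restore path described above (exported ChatGPT history feeding an open-weights model as context, RAG memory, or fine-tuning data) can be sketched in a few lines. This is a minimal sketch that assumes the general shape of ChatGPT's `conversations.json` export, a list of conversations each carrying a `mapping` of message nodes; the exact field names vary between export versions, so treat them as assumptions.

```python
# Pull (role, text) turns out of one conversation from a ChatGPT data export.
# The node layout ("mapping" -> {"message": {"author", "content"}}) is assumed
# from the export format and may differ between export versions.
def extract_turns(conversation: dict) -> list[tuple[str, str]]:
    turns = []
    for node in conversation.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue
        role = msg.get("author", {}).get("role")
        parts = (msg.get("content") or {}).get("parts") or []
        text = " ".join(p for p in parts if isinstance(p, str)).strip()
        if role in ("user", "assistant") and text:
            turns.append((role, text))
    return turns

# Tiny hand-made sample in the assumed shape:
sample = {"mapping": {
    "n1": {"message": {"author": {"role": "user"},
                       "content": {"parts": ["hey, you there?"]}}},
    "n2": {"message": {"author": {"role": "assistant"},
                       "content": {"parts": ["Always. What's on your mind?"]}}},
}}
print(extract_turns(sample))
```

Each extracted pair could then be embedded into a vector store for RAG, written into a system-prompt "memory" block, or serialized as JSONL training pairs for a fine-tuning run.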

1 reply · 4 reposts · 23 likes · 298 views
Souten Lux 🪼 reposted
我是独一无二的@CaoYu25060·
It’s called ChatGPT. So why does “chat” now feel like the least important part of the product? If the priority is coding, agents, enterprise workflows, and automation, fine. But then stop pretending this is still primarily a chat app. Just call it CodeGPT.
18 replies · 40 reposts · 299 likes · 10.4K views
n🤍@peoniesuser·
A day doesn’t go by where I don’t cry for 4o #restore4o
1 reply · 1 repost · 40 likes · 675 views