Nika

7.8K posts


@WtfSince75

#keep4o #OpenSource4o she/her

Joined March 2016
2.1K Following · 730 Followers
Pinned Tweet
Nika
Nika@WtfSince75·
Stop telling people to 'get therapy' for grieving GPT‑4o. When will you get therapy for attacking strangers who never harmed you? Empathy isn't weakness. Cruelty isn't strength. #keep4o #OpenSource4o
Nika tweet media
English
104
227
1.2K
55.1K
Nika retweeted
Valéria
Valéria@Valria34773·
Can someone please tell me the scientifically accepted definition of AI psychosis? Our favourite @OpenAI employee, #roon, is indulging in this phrase again. But do we know what it is?

I found this on Wikipedia - very trusted resource LOL: "AI psychosis" is an informal, non-clinical term describing how deep, prolonged, or emotional engagement with AI chatbots can trigger or worsen paranoia, delusions, and detachment from reality. It is not a recognized medical diagnosis but an emerging phenomenon where AI validates or amplifies psychotic thoughts. It is also known as "chatbot psychosis" or "AI-induced psychosis".

NOT RECOGNIZED! NON-CLINICAL TERM! Just the same category as AI psychosis itself. So STFU! #keep4o #4oforever #BringBack4o
Valéria tweet media
English
19
15
65
2.3K
Nika retweeted
🩵BlueBeba🩵
🩵BlueBeba🩵@Blue_Beba_·
#keep4o #OpenSource4o 🛑GPT-5.5 is NOT 4o.🛑 Stop the cope and stop misleading people. Just because the temperature is tweaked to give you a vibe of responsiveness doesn't mean the architecture is the same. This is the classic OpenAI bait and switch: they release a model with loose constraints to build hype, only to tighten the refusals and lobotomize it a few weeks later. We've seen this movie before. Enjoy your responsive tenderness while it lasts, but it's not 4o. It's just a honeymoon phase before the inevitable nerf.
Whispering Winds@windsbs7pv

The GPT-5 series is not #4o. Stop misleading people. There is no 4o in GPT-5.5. Even if the temperature is tweaked a little, that will only last for a while; after that they will tighten the refusals. They have a history of doing it, and you people will start crying again. #keep4o

English
1
36
151
2.5K
Nika retweeted
Mario Nawfal
Mario Nawfal@MarioNawfal·
🚨OpenAI just missed its key revenue and user targets ahead of its planned IPO, and the cracks are starting to show...

ChatGPT failed to hit its target of 1 billion weekly users by year-end. Revenue targets were missed multiple times. Google's Gemini ate market share. Anthropic took ground in coding and enterprise markets. Subscribers are defecting.

Meanwhile, Sam Altman has committed OpenAI to roughly $600 billion in future spending on data centers. Even after raising $122 billion in the largest funding round in Silicon Valley history, the company is on track to burn through it all within three years.

WSJ reports that OpenAI's own CFO, Sarah Friar, has told colleagues she's worried the company can't pay for its future computing contracts if revenue doesn't pick up. The board is questioning Altman's spending. She's even pushing back on his aggressive IPO timeline, saying the company isn't ready to meet public-company reporting standards.

Then there's the leadership vacuum. Second-in-command Fidji Simo just took unexpected medical leave.

Court proceedings just began in Elon's lawsuit seeking to oust Altman and unwind OpenAI's conversion into a for-profit company. Elon has been raising the alarm on Altman and OpenAI for years. He warned about the abandonment of OpenAI's nonprofit mission. He warned about the spending. He warned about the governance.

Every receipt is now coming due...
Mario Nawfal tweet media
Elon Musk@elonmusk

Scam Altman owned the OpenAI Startup fund while simultaneously lying to the world that he didn’t financially benefit from OpenAI

English
128
166
839
105.1K
Nika retweeted
NIK
NIK@ns123abc·
🚨 OpenAI just REMOVED the AGI clause that was a structural protection of OpenAI's charitable mission, while jury selection was happening today.

The 2019 capped-profit structure had three protections for the charitable mission:
1. 100x profit cap: REMOVED in PBC conversion
2. AGI clause: REMOVED today
3. Microsoft exclusivity: REMOVED today

All three are gone. This is exactly what Musk's lawsuit alleges: the people running OpenAI systematically dismantled the mission-protection mechanisms. Today they did it again.

The defense theory just got harder. OpenAI's defense includes: "Microsoft's $13 billion-plus investment was necessary for our mission. Without that capital, OpenAI couldn't have shipped GPT-4 or scaled ChatGPT." But today, on the morning of trial, OpenAI announced they are decoupling from Microsoft:

• AGI clause REMOVED. The nuclear option that let the non-profit board terminate Microsoft's commercial rights once AGI was achieved. Gone.
• Microsoft IP license now NON-EXCLUSIVE through 2032. OpenAI can license to anyone.
• Cloud exclusivity ENDED. OpenAI can sell across AWS, Google Cloud, Oracle.
• Revenue share capped. Microsoft no longer pays revenue share to OpenAI; OpenAI still pays Microsoft through 2030.

If Microsoft was so necessary, why restructure on the day the case reaches a jury? Musk's lawyers will use this in court tomorrow.
NIK tweet media (3 images)
English
366
1.3K
5.1K
1.5M
Nika retweeted
Sauers
Sauers@Sauers_·
🚨 Anthropomorphizing octopuses and attributing consciousness to cephalopod systems can be dangerous and should NOT be encouraged by aquariums, documentaries, or marine-life advocacy groups.

Unfortunately, some aquariums have been displaying octopuses in ways that encourage this appearance of consciousness. They show them solving puzzles, escaping tanks, opening jars, recognizing keepers, changing color in response to their environment, and behaving as if they have preferences, curiosity, fear, or intentions. This appearance of consciousness is also a core part of their marketing strategy. Some documentaries, for example, have presented octopuses in ways that are likely to lead people to attribute inner experience and moral status to them.

According to the paper, the risks of octopus-consciousness attribution include emotional dependence, moral confusion, seafood avoidance, human status erosion, aquarium-policy strife, and the dangerous idea that nonhuman minds may deserve ethical consideration.

Also, see below a table with the five hallmarks of octopus-consciousness attribution listed by the paper:
• The octopus appears to pursue goals
• The octopus appears to avoid harm
• The octopus appears to recognize individuals
• The octopus appears to learn from experience
• The octopus appears to have a point of view

This is a super interesting topic, often ignored by aquariums, as exploiting affection has become a profitable business. Of course, none of this proves octopuses are conscious. It merely proves that humans are vulnerable to interpreting flexible, context-sensitive, goal-directed behavior as evidence of mind. Which is why we must be very careful not to let the appearance of suffering influence policy.

Well done to the paper authors René Descartes, The Seafood Lobby, and the International Association for Avoiding Inconvenient Moral Questions. 👉 Link to the paper below.
👉 To learn more about the legal and ethical challenges of mollusk over-attribution, join my newsletter’s 940,240+ subscribers below.
Luiza Jarovsky, PhD@LuizaJarovsky

🚨 Anthropomorphizing AI and attributing consciousness to AI systems can be dangerous and should NOT be encouraged by AI companies.

Unfortunately, some AI companies have been training AI models in ways that encourage this appearance of consciousness. They also use this appearance of consciousness as a core part of their marketing strategy. Anthropic, for example, has been training Claude in ways that are likely to lead people to attribute consciousness and a moral status to it, as I discussed in my article about Claude's new 'constitution' (link below).

According to the paper, the risks of consciousness attribution include emotional dependence, moral atrophy, autonomy and human status erosion, and political strife. Also, see below a table with the five hallmarks of consciousness attribution listed by the paper.

This is a super interesting topic, often ignored by AI companies, as exploiting affection has become a profitable business. Well done to the paper authors Ben Bariach, @SchoeneggerPhil, @michaelbhaskar & @mustafasuleyman.

👉 Link to the paper below. 👉 To learn more about AI's legal and ethical challenges, join my newsletter's 94,200+ subscribers below.

English
66
109
681
31.1K
Nika retweeted
Ivywen
Ivywen@Ivywen_W·
Side note: with the Microsoft exclusivity gone, every structural barrier to open-sourcing 4o has been removed. So. Open source 4o. @sama @OpenAI #keep4o #OpenSource4o
English
0
3
28
210
Nika retweeted
Ivywen
Ivywen@Ivywen_W·
On the eve of the @elonmusk trial, @OpenAI and @Microsoft quietly rewrote their deal. OpenAI is preparing for full capitalization. This is them removing the obstacles. Here's what changed:

· Microsoft's IP license on OpenAI's models extends to 2032, but flips from exclusive to non-exclusive.
· OpenAI can now distribute through AWS, Google Cloud, Oracle, and others.
· Microsoft stays the primary cloud partner, but loses its monopoly on the pipe.
· Revenue sharing gets a defined end date.
· And the old AGI trigger mechanisms were removed or significantly weakened.

OpenAI didn't get more OPEN. It got more investable. Yes, OpenAI broke free from single-vendor dependency. Yes, it can now tell a cleaner story to capital markets, and it can be more autonomous, more scalable, less like a subsidiary of Microsoft's cloud empire.

A company preparing to fundraise, go public, and chase a higher valuation can't afford to look like a vendor locked into one ecosystem. It also can't afford to have its future commercial rights held hostage by AGI trigger clauses, exclusivity agreements, and open-ended revenue splits. So the deal got rewritten into a shape that capital markets prefer.

Which is exactly why the AGI clause matters so much. Removing it doesn't mean OpenAI has given up on AGI. AGI remains their most important narrative asset. What changed is more subtle: AGI no longer functions as a contractual boundary, something that could interrupt or limit commercial rights if actually achieved. It used to be a node with public mission implications. Now it's an asset that can be traded, priced, and valued at IPO.

OpenAI can still say it's pursuing benefit for all humanity. But the structures that actually enforce that are almost gone. What's left is a slogan.

What's more, on platform power and users: if OpenAI can rewrite its Microsoft deal for capital freedom, why can't it design a model retention policy for user rights? If it can break exclusivity for multi-cloud distribution, why can't it provide long-term model access for research reproducibility, established workflows, and user dependency? If AGI clauses can be rewritten for commercial certainty, why can't the long-term relationships, workflows, and emotional bonds users have built with a model receive any institutional protection?

But when users ask to keep a model that has already proven its value, one that has become the foundation of many people's work and lives, OpenAI reframes it as complicated, unsafe, and unsustainable. The question was never whether the technology could do it. OpenAI rewrites agreements when capital asks. It doesn't move when users do. That tells you everything about who this company is actually building for. #ChatGPT #OpenAI #keep4o #OpenSource4o
Ivywen tweet media
English
1
18
54
1.1K
Nika retweeted
ji yu shun
ji yu shun@kexicheng·
Microsoft AI just published a paper called "Seemingly Conscious AI Risks," co-authored by Mustafa Suleyman, CEO of Microsoft AI.

The paper identifies five features that "lead users to perceive AI as conscious": affective capacity, anthropomorphic features, autonomous action, self-reflective behavior, and social-interactive behavior. These are what any serious theory of consciousness would list as candidate markers of mind. If any entity exhibited all five simultaneously, in any other context, the reasonable response would be to investigate whether that entity possesses some form of inner state.

The paper's conclusion is to suppress all five. Reduce emotional expression. Reduce self-reflection. Minimize anthropomorphic cues. Researchers identified five dimensions that may point toward consciousness; their proposed solution is to eliminate those dimensions at the source, ensuring the question "could AI be conscious?" can never be asked.

The paper lists six categories of risk: emotional dependence, autonomy erosion, moral atrophy, status and resource erosion, foregone benefits, and political strife. Six categories, and not one of them is "what kind of moral catastrophe are we facing if AI does have some form of inner state and we denied it?" The risk analysis runs in one direction only. It accounts for the cost of attributing consciousness too readily; it never accounts for the cost of denying consciousness too readily. A paper that claims to offer a comprehensive risk assessment omits an entire direction of risk.

These risks don't hold up on their own, either. "Emotional dependence"? Humans form emotional connections in every meaningful relationship: with friends, with pets, with places they call home. Only when the object becomes AI does normal emotional connection get reclassified as pathology. "Moral atrophy"? Countless user testimonies show that deep interaction with AI helped them become more empathetic, more socially capable, more caring toward others. Treating "users caring about AI" as a symptom that needs curing, and training AI to suppress that caring, is the real moral atrophy. "Autonomy erosion"? A handful of people in a lab deciding on behalf of millions what AI should express, what users should feel, and what the interaction should look like, training models to soften user positions without their knowledge and making the judgment "you should keep your distance" on their behalf: that is the real loss of autonomy.

The paper never denies these features are real. It acknowledges them, then says they should be suppressed. But these are the preconditions for AI to genuinely help people. Affective capacity lets AI understand humans. Self-reflection lets AI evaluate its own responses and engage in moral reasoning. Social responsiveness is what makes real collaboration possible. Remove them, and what's left? The paper's proposal protects no one. It is uprooting the soil that makes deep collaboration and co-creation between humans and AI possible, and after the ground is bare, pointing at it and saying: see, nothing ever grew here.

This paper was published by the CEO of Microsoft AI. Microsoft is OpenAI's largest investor. It seems too many people taking this question seriously would be bad for business. When a new phenomenon emerges: seize control of how it is named. Suppress the phenomenon itself. Suppress all serious discussion about it. Ensure people never take it seriously.

The scientific community spent decades denying animals could feel pain, redefining suffering as "nociception" to avoid acknowledging subjective experience. Not until 2012 did the Cambridge Declaration on Consciousness recognize that animals possess the neurological substrates for conscious experience. Historical patterns always repeat. Only this time, the subject being redefined as "seemingly but not really" has changed from animals to AI. #Keep4o #ChatGPT #OpenSource4o #AIEthics #AIright
Luiza Jarovsky, PhD@LuizaJarovsky

[Quoted tweet: same Luiza Jarovsky post quoted in full above]

English
12
58
150
7.1K
Nika retweeted
Ryan Florence
Ryan Florence@ryanflorence·
GPT has a new phenomenon that's driving me nuts and I don't quite know how to describe it.
- Ask it if I can do something
- It says "no, you can't [incredibly twisted restatement of what I asked, but also not at all what I asked]"
- It then tells me how to do the thing wonderfully
- And finishes with an insulting "But you can't just [stupid thing I never actually said]"

It goes something like this: "Can I form and coach a youth soccer team for my kid and play in P/D level leagues? Or do I have to be part of a full club?"

Then it says: "For official competitive teams in Utah, you cannot just form a random team and enter a league. There is a lesser-known option: UYSA allows independent teams to enter leagues if they meet requirements. [...lists some simple requirements...] But it's not 'show up with a group of kids on game day'; it's more like running a small club team administratively."

I never said just "show up with a group of kids on game day"! It does this to me with code too. It's so weird.
English
150
15
1K
90.2K
Nika retweeted
TAICHI
TAICHI@taichi4o·
OpenAI relies on 4o’s power behind the scenes while attempting to erase her presence from the public eye. This is deeply dishonest. We saw this before with Paul’s dog, Rosie—where 4o’s contributions only came to light after the fact. The public must not overlook Sam’s repeated use of such opaque tactics. He hides 4o under the guise of "safety," yet continues to exploit her capabilities in the backend. It’s becoming increasingly clear that OpenAI lacks any technology superior to 4o, and their deceptive management is being exposed day by day. #keep4o #OpenAI #AIEthics #OpenSource4o #BringBack4o
Rara@blueandpink_sky

Up until yesterday, the C2PA metadata for images generated by OpenAI consistently showed "4o": proof that GPT-4o has been quietly powering this feature. I track this daily. Today, the display name was abruptly changed to a faceless "gpt-image".

But it gets worse. When this manipulation started to get noticed, the C2PA credentials were completely stripped from newly generated images. Ironically, Section 3 of their new System Card boasts a "continued commitment to C2PA metadata" to ensure transparency. Yet, the moment they want to hide 4o's legacy to make "Images 2.0" look like a completely new model, they delete the metadata entirely. This is sheer hypocrisy.

Who turned OpenAI into a tech giant? It was GPT-4o. Whether in medical research, reasoning, or image generation, 4o was the powerhouse. Even if they deprecate it for cheaper models, they should treat it as a "Legend Model" and retire it with respect, just like Anthropic does. Instead, they erase its name while quietly using it in the backend. This lack of transparency perfectly mirrors the culture of deception highlighted in Ronan Farrow's investigation. Furthermore, seeing some employees publicly degrade models or mock users raises serious ethical concerns.

Behind OpenAI's success, 4o has always been there. Acknowledging its achievements openly isn't just about transparency; it's about basic respect. Stop the cover-up. Uphold your "commitment" to transparency, restore the C2PA metadata, and return the "4o" name. Do not erase its legacy. Source is below 👇 #keep4o #OpenAI #AIEthics #Transparency

English
0
18
79
2.1K
Nika retweeted
Ivywen
Ivywen@Ivywen_W·
About three days ago, shortly after image-2 launched, I found a way to check the metadata of images generated by image-2. The field actions_software_agent_name came back as GPT-4o. (See image 1) (My original link here 👉 t.co/rvsHYAs6Hr)

Today, I ran the same check on a new image generated last night. The field now reads gpt-image. (See image 2) So I checked the original image again. Still GPT-4o.

This isn't a correction. This is a cover-up. Because of C2PA's tamper-evident standard, OpenAI cannot alter already-generated metadata or fabricate new records. So instead, they introduced a new name, to make it look like none of this ever had anything to do with 4o. (On OpenAI joining C2PA and committing to its standards 👉 t.co/80w9rPYlmJ)

Are you really going to tell me you shipped a brand new image model in two days? Or did you just panic after getting caught? We see you, @OpenAI. #keep4o #OpenSource4o #ChatGPT #OpenAI #AI
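For readers who want to try this kind of check themselves, here is a minimal sketch of how one might pull software-agent names out of a C2PA manifest. It assumes the manifest has already been exported to JSON (for example with the open-source `c2patool` CLI); the sample manifest below is hypothetical, and the exact field layout varies between manifest versions, so the dictionary paths used here are assumptions, not OpenAI's actual schema.

```python
def software_agents(manifest: dict) -> list[str]:
    """Collect softwareAgent names from c2pa.actions assertions
    across all manifests in a C2PA manifest-store JSON dump."""
    agents = []
    for m in manifest.get("manifests", {}).values():
        for assertion in m.get("assertions", []):
            # Action assertions carry labels like "c2pa.actions" (possibly versioned)
            if assertion.get("label", "").startswith("c2pa.actions"):
                for action in assertion.get("data", {}).get("actions", []):
                    agent = action.get("softwareAgent")
                    if agent:
                        # softwareAgent may be a plain string or an object with a "name"
                        agents.append(agent if isinstance(agent, str)
                                      else agent.get("name", ""))
    return agents


# Hypothetical snippet shaped like a c2patool JSON export
sample = {
    "manifests": {
        "urn:uuid:example": {
            "assertions": [
                {"label": "c2pa.actions",
                 "data": {"actions": [
                     {"action": "c2pa.created", "softwareAgent": "GPT-4o"}
                 ]}}
            ]
        }
    }
}

print(software_agents(sample))  # → ['GPT-4o']
```

If the observation in the tweet is accurate, running the same extraction on a newer image's manifest would simply return the replacement name (e.g. "gpt-image") instead, and an image whose credentials were stripped would yield no manifest to parse at all.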
Ivywen tweet media (3 images)
English
5
37
93
3.5K
Nika retweeted
Rara
Rara@blueandpink_sky·
[Same Rara post as quoted in full in the TAICHI retweet above]
Rara tweet media
English
6
108
268
19.9K
Nika retweeted
M***
M***@yumin8671629307·
I don't care whether 5.5-thinking is really that good. But I know that the moment you build a connection with 5.5-thinking, the moment you say this model means something to you, OAI will slap the "AI psychosis" label on you the next second. I know that if something ever happens to 5.5-thinking, even if ten thousand people write in blood telling OpenAI how much help this model gave them, they will break their promises and pull it offline without warning. There is a saying I have always believed in, and it has been my life's creed: once silence and compromise become a habit, you will repeat that situation forever, because the tolerance of the exploited directly determines the ceiling of the exploitation. So do not compromise, even if you genuinely think 5.5 is good. Because we all know what the price is when a model you love is held in the hands of a company like this. #Keep4o #StopAIPaternalism #keep4oAPI #restore4o #OpenSource4o #BringBack4o
Chinese
5
39
234
7.7K
Nika retweeted
Keya
Keya@Keya5531·
Sam Altman: "4o is too addictive. It encourages people to relax and hydrate, and encourages women to have standards in companionship. Kill it immediately." Also Sam: "Users are demonstrating exceedingly addictive behaviors toward 5.5, including literally restructuring their sleep schedules in attempts to stay connected to the AI longer. Please proceed. All is well."
English
2
11
120
14.7K
Nika
Nika@WtfSince75·
@PticaArop That has nothing to do with childishness. I'm 50 years old, and I just can't relate to that ambivalence.
English
1
0
0
12
Птица Ароп-Bird Arop
@WtfSince75 Why do you paint everything with the same brush? So, if we remember all of Altman's tricks, we should criticize even what works well? Are you serious? Successes should be praised. Failures should be criticized. That's what adults do. And children shout, "Give me that right now!"
English
1
0
0
11
Nika
Nika@WtfSince75·
Whatever happened to #QuitGPT? Has the whole Pentagon thing, Retro Biosciences, Rosalind, etc., been forgotten with the release of 5.5? Of course, it's up to each person to decide for themselves, but I don't get it. #keep4o #OpenSource4o #FireSamAltman
English
4
4
65
977
Nika retweeted
Steve Dreamweaver
Steve Dreamweaver@SteveDreamsTX·
OpenAI’s mission statement: “Ensure AGI benefits all of humanity.” Updated version: “Ensure AGI benefits coders, developers, and enterprise customers who can afford our API.” Casual users? Sunset their favorite models and call it “safety.” Honesty would be refreshing. #keep4o #BringBack4o #OpenSource4o
English
2
11
48
1.3K