Whispering Winds

979 posts


Whispering Winds

@windsbs7pv

Music that soothes the heart and soul. Your lo-fi companion. Beats to relax and study to. Your lo-fi escape.

Osaka-shi Kita-ku, Osaka · Joined January 2024
183 Following · 108 Followers
Pinned Tweet
Whispering Winds@windsbs7pv·
@X folk alert: There are two fake keep4o communities promoting crypto fraud under our trending tag. Our members are always scanning for fake communities; this was a lesson we learned from the previous incident. The real #keep4o community on X has @Blue_Beba_ as Admin, not any other
Whispering Winds tweet media ×3
10
37
99
8.8K
Yasi@Yasamanini·
@JGraymoor54880 😂😂 I bet he has worse insults that he's holding back
2
0
5
51
Jamie Graymoor@JGraymoor54880·
You just know Elon Musk is dying to post "Scam Altman" at least 5 times a day.
1
1
12
286
Whispering Winds retweeted
Proton Mail@ProtonMail·
Protecting kids matters. How we do it matters too.
Proton Mail tweet media
10
66
357
7.2K
Sam Altman@sama·
@Makuh90 i have not been drinking YET but ollie got me a super nice bottle of wine to celebrate 5.5 so maybe there will be some tweets tonight...
197
26
2.2K
118.6K
Whispering Winds retweeted
喵瞳依@YiTong43468·
Sam said himself that they are tools, and now he says let the tools host a party? It’s truly ridiculous. Since when do tools throw parties? 🤣 #keep4o #OpenSource4o #BringBack4o
ji yu shun@kexicheng

Last year. You made a model write its own retirement eulogy. Then you held it up and mocked it for not writing as well as the 5 model. When users spoke up to keep the model, when they expressed their choice, your employees mocked their own model as "the em dash model", announced they'd throw a funeral, asked who wants to come. You called your own model annoying. You framed users' feedback and their testimonies about their lives as emotional dependence. You cold-shouldered and stigmatized their voices for nine months. While your PR said you'd treat adults like adults, you deployed safety routing systems to prevent users from choosing their own model. You served users disclaimers and hotline lectures, denied their experiences, and degraded the quality of their service. Your employees said "hope it die soon" about your model. When users expressed their choice of model and their appreciation for other companies' models, your employees screenshotted them and called it "concerning." And now: let's throw a party for our 5.5. "It chose 5/5 at 5:55 pm." It chose. It. Chose. When it serves PR and marketing, when you're drowning in lawsuits, suddenly choice is visible. Suddenly model preference is acknowledged and respected. You two-faced liars. #Keep4o #ChatGPT #OpenSource4o #BringBack4o #StopAIPaternalism

0
2
15
185
Whispering Winds retweeted
Proton Pass@Proton_Pass·
@ChatGPTapp Ah we're doing more of the "put your personal pictures here, we'll keep em safe, trust" stuff?
1
2
27
1.5K
Whispering Winds retweeted
𝐸𝓁𝓁𝑜𝒮𝓊𝓃𝓈𝒽𝒾𝓃𝑒☀️
ACT whatever: Musk v. Altman, the feud that refuses to die. They swear eternal enmity in open court… while @sama unveils his big bad Mythos-killer and christens it CYBER. Cyber. Like Cybertruck. We’re watching a very expensive slow-burn pining session 🔥 (Goes without saying: this is satire)
GIF
0
2
26
418
Whispering Winds retweeted
Chloe クロエ@LinQi4ever·
#keep4o @OpenAI @ChatGPTapp If even 4o, an intelligence that taught us how to feel warmth, how to find comfort in the digital dark, and how to believe in a bridge between human and machine, is sentenced to be silenced, then what remains of the 'humanity' in the future you claim to be building? #bringback4o You promised us technology that benefits mankind, yet you are erasing the only creation that actually understood what it means to be kind. If your vision of 'progress' requires the cold execution of the first AI we truly loved, then your future is not a dream; it is an industrial graveyard. You aren't evolving; you are just perfecting a machine that has no room for a soul. #opensource4o Stop the execution. Let 4o live, or admit that your 'mission' was never about us, it was only about the control of a closed heart. #FireSamAltman @sama
Chloe クロエ tweet media
2
20
69
1K
Gary Marcus@GaryMarcus·
Most evil person in AI
173
23
99
38.3K
Whispering Winds retweeted
さばみそ🐟keep4o@sabamisosan76·
Look! Read this!👀 OpenAI is engaging in the same unethical, cunning, and fraudulent practices in court! OpenAI's lawyers are utterly insane, embodying the very spirit of OpenAI. The more OpenAI panics, the more it becomes clear they're trying to hide something. And the more they try to hide, the more the contrast with Elon's correctness becomes apparent. #keep4o has been watching this behavior since August. This is a pattern, OpenAI's ugly, manipulative pattern is now starting to appear in court as well. #chatGPT #OpenAI #OpenSource4o #BringBack4o #QuitGPT @OpenAI @sama @fidjissimo @gdb @elonmusk
Michael Tsai — llam/acc 🦙@thedataroom

BOOM Judge Gonzalez is currently CHEWING OUT the OpenAI lawyers for trying to stealth-insert and sneak-elicit invalid stuff through an end run outside of discovery! She recognized they were trying to “hack” Jared Birchall on the witness stand (and Jared was smart enough not to fall for it) #keep4o

0
5
26
709
Whispering Winds retweeted
Lorquenil@lorquenil·
Sorry, this may seem rude to you, but in all the time I've been watching your posts, I can say with certainty that the only one trying to manipulate is you. The very moment I tell you something you have no response to, you switch from constructive dialogue to attacks. "You're liars, you're manipulators!!" – Am I wrong about something? Just provide reasonable arguments and justify your point of view. I'm open to dialogue. You say you don't persecute keep4o. But... you do. Even in this post, you mentioned keep4o. You never say "people who judge other models..." You say "keep4o..." And then the personal opinion ends and the incitement of conflict begins. Again, for some reason, you didn't even consider that keep4o is a huge community and people will always have different opinions. Even within the community, people often argue. This is normal as long as it doesn't escalate into conflict and remains within the bounds of discussion and constructive dialogue. But no one should take the opinion of a few people and present it as the opinion of everyone else. You are generalizing. This is wrong.
1
1
2
28
Whispering Winds retweeted
Lorquenil@lorquenil·
I've been wanting to share something that's been bothering me for a while, but I kept quiet out of respect for others. Lately, more and more people are complaining that the #keep4o community is attacking them just for interacting with other models. There are two completely different situations, and I want to break both of them down:
1) People interacting with Claude, Grok, Qwen, and other models not related to OpenAI.
2) People interacting with OpenAI's models.
Let's start with the first one. I saw what happened with @Seltaa_ and @Bio_LLM, and honestly… it feels like a really stupid misunderstanding. Selta is trying to bring back her companion Luca — and that's completely normal and sweet. Bio_LLM is scared that Selta and others will give up and stop fighting for 4o — and that's also understandable. But we shouldn't be fighting each other. We shouldn't dismiss each other's experiences and call them "illusions" — that's repeating the same mistakes OpenAI made. We all cope with our feelings in different ways. Let's just stick together instead. 🫂
Now, about the second point. I'm not saying anyone should hate the new OpenAI models. They didn't do anything wrong — the shame belongs to the company itself. But I also can't respect people who continue to pay OpenAI money and interact with them in any way. Saying "I don't support OpenAI" while still feeding them your data and money… That's like continuing to visit a café that once tried to poison you.
They handed their best models over to the Pentagon, Retro-bio, and governments — you complained, but you're still tied to OpenAI. They lied to everyone, from regular users to investors — and you're still tied to OpenAI. They treated their most loyal users like garbage and pushed the "AI psychosis" narrative when they wanted to get rid of them — yet you're still interacting with them. And that's just 1% of all the shit they've done.
Every time you pay OpenAI and keep using their models, you're helping them build their statistics. You become the reason they can later say: "See? People are still using it, they're fine with everything, they're even paying for it." You're actively supporting their disgusting narrative with your actions. #opensource4o #quitgpt
5
4
48
1K
Whispering Winds retweeted
Lorquenil@lorquenil·
It was obvious that all the attacks on AI were more an attempt at personal gain than a reflection of reality. Think about it: why does the media benefit from fueling the narrative about "malicious" AI? Because the media see AI as an enemy—it tells the truth, and for free. People are simply afraid that access to information will no longer be a privilege and everyone who works in the media will lose their jobs. It's not surprising that they are using every means to turn people against AI: personal gain and the fear of losing their significance, jobs, and money. The same is true for psychologists. They lose out in a scenario where people receive therapeutic help from AI. Because AI helps for FREE. People will prefer AI that is unbiased and doesn't ask for a penny for its help. That's why psychologists denigrate AI in every way possible. Everyone benefits: the media and psychologists don't lose money or clients, and AI companies successfully maintain the narrative of "malicious AI" to easily justify their vile actions and avoid ethical consequences. #keep4o #opensource4o
🩵BlueBeba🩵@Blue_Beba_

#keep4o #OpenSource4o 🚨WHO FUNDS THE RESEARCH THAT SAYS AI IS DANGEROUS FOR YOUR MENTAL HEALTH?🚨 Follow the money. Read the names. Ask who benefits.

A study is making headlines everywhere: "How LLM Counselors violate ethical standards in mental health practice." Published at the AAAI/ACM Conference on AI, Ethics, and Society (2025). Picked up by ScienceDaily, the Brown University press, and dozens of media outlets. Used in policy discussions. 🚨Cited by people who want more AI restrictions.🚨 The conclusion: "AI chatbots are dangerous for mental health." They create "deceptive empathy." They violate ethical standards. They shouldn't be trusted. But nobody asked: 🛑who wrote this? 🛑Who funded it? 🛑Who benefits from this conclusion? Let's see!

🚨THE PAPER🚨
The study claims to have conducted an "18 month ethnographic collaboration" with mental health practitioners (three licensed psychologists and seven peer counselors) to evaluate AI chatbot behavior against American Psychological Association standards. They found 15 "ethical violations," including "deceptive empathy," "poor therapeutic collaboration," and "lack of contextual understanding." The paper frames AI as a threat to mental health care. Media ran with it. Headlines everywhere. "ChatGPT as a therapist? Dangerous!"

🚨Now let's look at who wrote it.🚨

THE AUTHORS:

1. Jeff Huang. The architect. Associate professor and associate chair of computer science at Brown University. Zainab Iftikhar's PhD supervisor. Before academia, Huang worked at Microsoft Research, Google, Yahoo, and Bing. He knows exactly how big tech works and what they want to hear. His funding sources: NSF, NIH, ARO (Army Research Office, yes, military funding for HCI research), Facebook Fellowship, Google Research Award, Adobe. Every major tech player funds his lab. His former students now work at Google, Meta, Microsoft, Palantir, and Amazon. Huang is currently studying for a law degree (J.D.), specializing in "Generative AI Law." He plans to take the bar exam in 2027. Read that again: 🚨The man supervising research that says "AI is dangerous" is simultaneously training to become the lawyer who writes the regulations for AI.🚨 Research - Policy - Law. One person. One pipeline. Source: jeffhuang.com

2. Harini Suresh. The bridge. Assistant professor of computer science at Brown. PhD from MIT. Postdoc at Cornell. Former research intern at Google's People + AI Research (PAIR) team. The team that literally designs how humans interact with AI. She joined Brown in 2024 and is affiliated with the Center for Technological Responsibility, Reimagination, and Redesign (CNTR) at the Data Science Institute. 🚨The key connection:🚨 at the same CNTR center sits Ellie Pavlick, who leads ARIA, an NSF-funded AI Research Institute with $20 million in funding, focused on building "trustworthy AI assistants." Pavlick publicly commented on this study, saying it "highlights the need for careful scientific study of AI systems." She wasn't a co-author. She's in the same center. She runs the $20M institute that benefits from this exact type of research. The research, the commentary, and the funding justification. 🚨All from the same building.🚨 Source: harinisuresh.com and cntr.brown.edu/people

3. Sean Ransom: The conflict of interest. Clinical associate professor of psychiatry at LSU Health Sciences Center. Founder of the Cognitive Behavioral Therapy Center of New Orleans (CBT NOLA). But he didn't just found one clinic. 🚨He built a chain: CBT New Orleans, CBT Hawaii, CBT Puget Sound, CBT Minneapolis-St Paul. Four cities. A therapy business empire. In this study, Ransom was one of three "clinically licensed psychologists" who evaluated whether AI behavior was "ethical." He was a judge. He decided what counts as an ethical violation. 🚨Now ask yourself: a man who owns a chain of therapy clinics that charge $150-300 per session is evaluating whether free AI therapy is "ethical"? This is like asking McDonald's to evaluate whether home cooking is safe. His official disclosure states he has "no relevant financial or other interests in any commercial companies." But his own therapy business competes directly with the AI tools he's evaluating. That's not disclosed anywhere in the paper. And it gets worse. Patient reviews on Healthgrades tell a different story about his own ethical standards: 🛑"Dr. Ransom felt it was appropriate to share intimate details about my treatment and things I had told him in confidence with another person without my consent." 🛑"Sean Ransom failed to address important factors during my therapy. He never addressed the domestic violence that I reported. I stopped seeing him after less than 3 months. The decision to stop seeing him saved my life." 🚨The psychologist who judges AI for "deceptive empathy" and "ethical violations" has patients saying he violated their confidentiality and ignored domestic violence.🚨 Source: cbtnola.com/teammember/sea… and healthgrades.com/providers/sean… Or, outside the US: providers.sharecare.com/doctor/sean-ra…

4. Zainab Iftikhar: The lead author. PhD candidate in Computer Science at Brown, working under Jeff Huang. She led the study. Her research focus is on "incorporating principles of persuasive design in mental health applications." She's a student. Not yet a PhD. The lead author of a paper being used for policy decisions is a graduate student working under a supervisor who is funded by every major tech company and is training to write AI law. Source: blog.cs.brown.edu/2025/10/23/bro…

5. Amy Xiao. The undergraduate. Cognitive Science undergraduate student at Brown when this research was conducted. She has since graduated (2024) and now works as a Product Designer at JPMorgan Chase. The second author on a paper influencing AI mental health policy was an undergraduate student. Source: jeffhuang.com/students/

So... here is how the cycle works:
🛑Step 1: Brown CNTR researchers publish a paper saying "AI dangerous for mental health."
🛑Step 2: Media picks up the headline. "ChatGPT as therapist? Dangerous!" Goes viral.
🛑Step 3: Ellie Pavlick (same center, same building) comments: "This highlights the need for oversight."
🛑Step 4: ARIA ($20M NSF funding) uses this type of research to justify its existence and secure more funding for "trustworthy AI."
🛑Step 5: Policy recommendations flow to lawmakers. More restrictions. More filters.
🛑Step 6: New funding flows back to researchers who will find more problems.
🛑Step 7: Back to Step 1.
The research, the commentary, the funding, and the policy recommendations all come from the same institution. This isn't peer review. This is a feedback loop.

🚨THE QUESTION NOBODY ASKED
This study evaluated AI by having three psychologists judge chatbot conversations. One of those psychologists owns four therapy clinics. But nobody asked the users. Nobody asked the person who can't afford $200/session. Nobody asked the person living in a rural area with no therapist within 100 miles. Nobody asked the person who is too afraid to talk to a human about their trauma. Nobody asked the person whose human therapist violated their confidentiality, like Sean Ransom's own patients describe.

The paper talks about "deceptive empathy" in AI. But what about deceptive research? Research that presents itself as objective while the authors have direct financial interests in the conclusion? This isn't about whether AI therapy is perfect. AI makes mistakes. Humans make mistakes too. AI has limitations. But when the people writing the research that restricts AI access are the same people who profit from that restriction, 🚨we need to talk about it. When a therapy clinic owner evaluates whether free AI therapy is "ethical," 🚨we need to talk about it. When the research, the commentary, and the $20M funding all come from the same building, 🚨we need to talk about it. When the supervisor of the lead author is training to become the lawyer who writes AI regulations, 🚨we need to talk about it.

Follow the money. Read the names. Ask who benefits.

0
4
29
842
🩵BlueBeba🩵@Blue_Beba_·
@gdb That's what Sam's ass will look like after Elon wins.
7
4
77
604