Psst.org

@psst_org

323 posts

Helping people in AI + Tech keep the public informed. Got information you're concerned about? 1) Save it in the Psst Safe 2) We'll help you take it from there

NYC · Joined October 2024
347 Following · 112 Followers
Psst.org@psst_org·
OpenAI’s decision to enable erotica in ChatGPT is drawing opposition from its own advisers.📣 In January, a council of advisers selected by OpenAI warned that AI-powered erotica could foster unhealthy emotional dependence on ChatGPT and that minors could find ways to access sex chats. One council member even suggested that OpenAI risked creating a “sexy suicide coach.” This is exactly why insider voices matter: absent external oversight, only people inside these companies can push back on products that could harm vulnerable users, especially children. #AI #OpenAI wsj.com/tech/ai/openai…
Psst.org reposted
AIWI - The AI Whistleblower Initiative
Between March 2025 and February 2026, we tracked 30 anonymous cases across AI and Big Tech; 16 were directly related to AI safety concerns. In a single year, those anonymous disclosures nearly matched the total number of named cases we documented across almost a decade. But however an insider chooses to disclose, the safest path is to get expert and legal support early, before taking any action. AIWI can connect insiders with pro bono legal counsel experienced in high-profile disclosure cases. Our Contact Hub lists vetted organisations, including: @psst_org, @TheSignalsNetw, @wbaidlaw, @xposefacts, @GovAcctProj, @WhistleUK, and @Whistleblower_N. #AIWI #AIWhistleblowing #WhistleblowerProtection #AISafety #AIGovernance #TechAccountability #Whistleblowing
Psst.org@psst_org·
Employees from OpenAI and Google, including Google DeepMind chief scientist @JeffDean, filed an amicus brief supporting Anthropic in its legal fight with the U.S. government. The brief warns that without AI regulation, developer-imposed safeguards (like hard lines on mass surveillance or autonomous lethal weapons) are a key protection against serious harms. Inside voices matter. When employees cross company lines to speak up, it signals that the stakes are real. wired.com/story/openai-d…
Psst.org@psst_org·
Next Tuesday at #SXSW2026, our Co-Founder @Jennifermgibson sits down with Attaullah Baig, the former Head of Security at WhatsApp, to explore the costs of whistleblowing and how we can build real safeguards so that truth-telling in the age of AI isn’t a career-ending act of courage. 📅 Mar 17 | 4–5PM CT | JW Marriott, Salon AB, Austin TX "Another 'Disgruntled Employee,' They Said" #Whistleblowing #TechAccountability #AI @sxsw
Psst.org@psst_org·
Yes, Secretary of War Hegseth may have officially “won” last week, but Amodei MOGGED. Still, this isn’t a simple hero‑vs‑villain story; it’s a red flag reminding us that unchecked AI could create serious risks and harms. The overlooked voice in debates like the Anthropic-Pentagon controversy is the workers inside AI companies. They understand these systems' capabilities and risks more intimately than anyone. Unlike CEOs, they represent diverse values, aren't bound by quarterly earnings, and have collective power. We can't assume the next CEO will draw the same ethical lines. That's why worker voices matter. Read our latest piece on Substack about the collective power of AI workers in AI safety. 📎: psstpsst.substack.com/p/public-safet… #AI #Anthropic #Pentagon
Psst.org@psst_org·
Psst.org was founded to build safer, collective pathways for people with public-interest information to come forward. Today, we’re pleased to share the next stage in our nonprofit’s evolution: @jennifermgibson, co-founder and leader of the organization’s legal work, has been appointed Executive Director.

As the risks facing insiders in tech and government grow – especially around AI and surveillance – Gibson’s appointment reflects Psst.org’s continued focus on helping people raise the alarm without risking everything.

Psst.org’s model integrates legal protection, strategic communications, and public advocacy to support whistleblowers collectively rather than leaving individuals to navigate the process alone. It was founded by three whistleblowing veterans with expertise across these areas: Jennifer Gibson, Rebecca Petras, and @AmberScorah.

Jennifer brings more than fifteen years of experience representing whistleblowers and leading complex human rights and accountability work, most recently overseeing Psst.org’s support for former WhatsApp Head of Security Attaullah Baig as he raised concerns about data protection practices at Meta. More information at 📎: psst.org/blog/jennifer-…
Psst.org@psst_org·
AI employees at @GoogleDeepMind and @OpenAI have now published an open letter urging their companies to stand in solidarity with @Anthropic and refuse to allow their models to be used by the Department of War for domestic mass surveillance and autonomously killing people without human oversight. 380+ signers and counting. Incredibly powerful example of how employees inside AI companies can use their voices. 👏🏼 👏🏼 👏🏼
Psst.org@psst_org·
BREAKING: Anthropic refuses to bow to the Pentagon’s demands. CEO Dario Amodei said, “We cannot in good conscience accede to their request,” and urged that mass surveillance and autonomous weapons be excluded from any U.S. government contract. We need enforceable AI regulations – not just reliance on CEOs’ good consciences – to ensure safety and limits on harmful uses. #AI #Anthropic #AISafety anthropic.com/news/statement…
Psst.org@psst_org·
🚩Anthropic has abandoned a core commitment from its flagship safety policy. In 2023, the company pledged not to train any AI model unless it could first guarantee that its safety measures were sufficient. Earlier this month, CEO Dario Amodei announced a major revision to the Responsible Scaling Policy (RSP) that removes that guarantee. Anthropic has long marketed itself as the most safety-focused of the leading AI labs, but this change may signal a pragmatic shift to temper safety commitments in order to capitalize on recent technical and commercial momentum. #Anthropic #AIsafety time.com/7380854/exclus…
Psst.org@psst_org·
🚨Breaking: Defense Secretary Pete Hegseth summons Anthropic’s CEO to the Pentagon today, reportedly to present Amodei with an ultimatum to lift safeguards following disagreements over the military use of Anthropic's Claude. Anthropic’s red line: the mass surveillance of Americans, and the development of weapons that fire without human involvement. This points to a bigger problem in AI safety. Without AI regulations, we’re left to trust individual CEOs to act responsibly and prioritize safety. #AI #Anthropic axios.com/2026/02/23/heg…
Psst.org@psst_org·
Why are people calling this social media’s “big tobacco” moment? Zuckerberg showed up in a Los Angeles courtroom last week – Meta‑glasses entourage and all – to testify in a case that could be the biggest legal reckoning Meta has ever faced and reshape how social platforms are built.

Meta’s own documents show its products caused substantial harm to teens. Parents’ lawyers point to memos that explicitly prioritize making apps hard to put down through features such as infinite scroll, autoplay, likes, beauty filters, and push notifications. Even if outside studies don’t prove clinical addiction, Meta’s own research flagged real harms, for example how face‑altering filters hurt young girls’ mental health.

Big Tobacco faced accountability after leaked internal documents showed it engineered addiction. Meta’s internal documents show it knew the risks and still optimized for engagement: the same core parallel.

Internal information matters. That’s why insiders are crucial for holding Big Tech accountable. Go to Psst.org/safe. We're here to help. #AI #Whistleblower
Psst.org@psst_org·
🏈 AI firms spent huge sums on Super Bowl ads, each vying to brand themselves as the “good guy” — but just days later, several senior employees at top labs publicly resigned. Could the glossy ads be hiding deeper problems? Read our latest Substack on how frontier AI companies are approaching whistleblowing and what these inside voices are telling us on the outside. open.substack.com/pub/psstpsst/p…
Psst.org@psst_org·
Does Meta want to post for you—post‑mortem? You may be dead, but your social media identity could live on. Meta was granted a patent in December for a system that uses AI to simulate a deceased user’s activity across its platforms—Facebook, Instagram, and Threads—making posts, leaving comments, and interacting with others. #AI #Meta #AIethics fastcompany.com/91493794/meta-…
Psst.org@psst_org·
Anthropic safety researcher resigns, saying the world is “in peril.” @MrinankSharma's resignation exposes a growing tension as AI labs scale: safety-first values vs. commercial pressure. This isn’t just Anthropic; it’s an industry-wide conflict between ethics and investor-driven growth. This is why insiders are crucial right now. #AI #Anthropic #AIsafety
Psst.org@psst_org·
An Anthropic safety researcher resigned Monday, raising concerns about the gap between the company’s stated values and its actions. @MrinankSharma's claims are alarming but leave important questions open. What exactly makes it hard to let “our values govern our actions” at Anthropic? What pressures push employees to set aside core principles? This is why we need insiders to fill these gaps, and AI companies to increase transparency. #AI #AISafety #AIEthics #Anthropic
mrinank@MrinankSharma

Today is my last day at Anthropic. I resigned. Here is the letter I shared with my colleagues, explaining my decision.

Psst.org@psst_org·
🚨Pinterest fires two engineers for creating software to identify fired workers. If you were laid off and want to speak about what you’ve seen behind the scenes at work… our nonprofit can help you do so safely. theguardian.com/technology/202…