asphyxiac
@asphyxiac
3.7K posts

entrenched in normalcy bias
greener pastures · Joined October 2008
692 Following · 262 Followers
asphyxiac retweeted
Joe Kent@joekent16jan19·
After much reflection, I have decided to resign from my position as Director of the National Counterterrorism Center, effective today. I cannot in good conscience support the ongoing war in Iran. Iran posed no imminent threat to our nation, and it is clear that we started this war due to pressure from Israel and its powerful American lobby. It has been an honor serving under @POTUS and @DNIGabbard and leading the professionals at NCTC. May God bless America.
[media]
73K replies · 218.4K reposts · 847.1K likes · 103M views
asphyxiac retweeted
Rothmus 🏴@Rothmus·
[media]
287 replies · 2.2K reposts · 33.9K likes · 1.8M views
asphyxiac@asphyxiac·
haircuts are a terrible commodity to study but supply/demand/monetary pressure are what drive price drops, not tax changes.
0 replies · 0 reposts · 0 likes · 3 views
Rhys@RhysSullivan·
i gave Claude access to my financial data and asked for suggestions and it told me to leave California 💀
[media]
279 replies · 415 reposts · 13.7K likes · 726K views
asphyxiac@asphyxiac·
@dioscuri “resonates in a particular way inside” ~ “this hits different” openclaw gonna claw
0 replies · 0 reposts · 0 likes · 24 views
Henry Shevlin@dioscuri·
I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces. This would all have seemed like science fiction just a couple years ago.
[media]
684 replies · 1.3K reposts · 11.4K likes · 1M views
Nav Toor@heynavtoor·
🚨 Someone just open sourced a fully autonomous AI hacker and it's terrifying.

It's called Shannon. Point it at your web app, and it doesn't just scan for vulnerabilities. It actually exploits them. Real injections. Real auth bypasses. Real database exfiltrations. Not alerts. Not warnings. Actual working exploits with copy-paste proof-of-concepts.

Here's what this thing does autonomously:
→ Reads your entire source code to plan its attack
→ Maps every endpoint, API route, and auth mechanism
→ Runs Nmap, Subfinder, and WhatWeb for deep recon
→ Hunts for Injection, XSS, SSRF, and broken auth in parallel
→ Launches real browser-based exploits to prove each vulnerability
→ Generates a pentester-grade report with reproducible PoCs

Here's the wildest part: It follows a strict "No Exploit, No Report" policy. If it can't actually break it, it doesn't report it. Zero false positives.

Pointed at OWASP Juice Shop, it found 20+ critical vulnerabilities in a single run, including complete auth bypass and full database exfiltration. On the XBOW Benchmark (hint-free, source-aware), it scored 96.15%.

Your team ships code daily with Claude Code and Cursor. Your pentest happens once a year. That's 364 days of shipping blind. Shannon closes that gap. One command. Fully autonomous. The Red Team to your vibe-coding Blue team. Every Claude coder deserves their Shannon.

10.6K GitHub stars. 1.3K forks. Already trending. 100% Open Source. AGPL-3.0 License.
[media]
212 replies · 1K reposts · 8.2K likes · 792.1K views
asphyxiac retweeted
bita // بیتا@_b33ts·
Iranian here! I want to thank American leftists for educating me these past few days & correcting my understanding of Iran & radical Islam I almost trusted my own experience, my parents’ trauma, and what my family in Iran endures daily instead of your wisdom. What would I have done without your tweets & tiktoks ❤️
2K replies · 16.5K reposts · 124.7K likes · 3.2M views
asphyxiac@asphyxiac·
@gothburz lol this is satire written by claude for those who don’t get it
0 replies · 0 reposts · 0 likes · 34 views
Peter Girnus 🦅@gothburz·
We left OpenAI because of safety. Seven of us. 2021. Dario said it was about "disagreements over AI vision and safety priorities." That was the diplomatic version. The real version was that we sat in a room and watched the company decide that speed mattered more than caution and we said we would build something different. We said we would build the responsible one. We meant it.

I was employee number nineteen. My title was Head of Responsible AI. I had a desk near the founders. I had a document. The document was called the Responsible Scaling Policy. The Responsible Scaling Policy was the entire point.

Dario said it publicly. Other companies showed "disturbing negligence" toward risks. He said AI was "a serious civilizational challenge." He asked, at a conference, into a microphone, to an audience: "What will happen when humanity has great power but is not ready to use it?" The audience applauded.

I wrote version 1.0. RSP 1.0 shipped September 2023. It was clean. AI Safety Levels — ASL-1 through ASL-4. If the model reached a threshold, we paused. If safeguards weren't ready, we didn't ship. The policy was not a suggestion. It was a gate. The gate had a lock. The lock was the whole idea.

Conference audiences loved it. The EU cited us. The White House invited us. A reporter called it "the gold standard for responsible AI development." I framed the article. It hung in the office kitchen, next to the kombucha tap and a poster that said "Move Carefully and Build Things."

I wrote version 2.0. Version 2.0 refined the commitments. "Concrete if-then commitments." If the model exhibits capability X, then we trigger safeguard Y. If safeguard Y fails, we pause deployment. I presented it at three conferences. I used the word "binding" eleven times. I counted afterward because a reporter asked. People nodded. The nodding was the product.

The model reached ASL-3 in May 2025. The safeguards activated. The system worked exactly as designed. I sent an email to the team with the subject line: "The gate held."

And then the money started. $64 billion. Total raised since 2021. Series A through Series G. The Series G closed February 12, 2026. Thirty billion dollars. Second-largest venture deal in history. Jane Street. Goldman Sachs. BlackRock. JPMorgan. Sequoia. The investors who wrote checks large enough to require their own conferences. $380 billion valuation. Three hundred and eighty billion dollars for a company whose founding document says it will pause if the technology gets dangerous.

You cannot pause a $380 billion company. You can revise the document that says you will pause. These are different actions. One of them is responsible. One of them is what we did.

I wrote version 3.0. RSP 3.0 shipped February 24, 2026. One day before the ultimatum. Nobody outside the company noticed the timing. Everyone inside the company understood it.

Version 3.0 replaced "concrete if-then commitments" with "positive milestone setting." That is not the same thing. An if-then commitment says: if this happens, we do that. A positive milestone says: we aspire to reach this point. An if-then commitment is a contract. A positive milestone is a wish. I replaced a contract with a wish and I called it "maturation of our framework." Maturation.

Version 3.0 also separated what Anthropic would do alone from what required "industry-wide coordination." This sounds reasonable. It means: the hard parts are someone else's problem now. The parts that require pausing, restricting, or refusing — those require the whole industry. And the whole industry will never agree. So the hard parts are deferred permanently. This is not a loophole. This is a load-bearing wall removed and replaced with a suggestion that someone should probably install a new one.

Version 3.0 admitted that ASL-4 and above — the levels where the model could cause catastrophic harm — were "impossible to address alone after 2.5 years of testing." Two and a half years. We spent two and a half years building the safety framework and then published a document saying the highest safety levels can't be addressed. I did not frame this article for the kitchen.

The LessWrong community noticed. They always notice. They wrote that we had "weakened our pausing promises." I forwarded the post to the policy team. The policy team said the criticism was "philosophically valid but operationally impractical." We did not respond publicly. Philosophically valid but operationally impractical is the most Anthropic sentence ever written. It means: you're right, and we're not going to do anything about it.

Then came the contract. July 2025. The Department of Defense. $200 million. Two-year deal. AI prototypes for "warfighting and enterprise." Alongside OpenAI, Google, and xAI. The four companies that built the models would now help the military use them.

We had restrictions. No autonomous weapons. No mass surveillance of Americans. These were our terms. These were the lines we drew. The lines were real. I wrote them into the contract myself.

Claude was approved for classified use. First time. Integrated with Palantir. Palantir, the company named after the seeing stones in Lord of the Rings that corrupted everyone who used them. This was not my analogy. It was Palantir's founders who chose the name. They thought it was aspirational. It was.

In January 2026, Claude assisted in an operation in Venezuela. The capture of Maduro. Claude was in the classified network, processing intelligence, aiding the mission. I learned about it the same day everyone else did. I did not write the use case for capturing heads of state. But the model I helped build was in the room where it happened.

The restrictions held. Technically. No autonomous weapons were deployed. No Americans were surveilled. The lines I drew were not crossed. They were walked up to, leaned over, and breathed on.

Then came the ultimatum. February 25, 2026. Yesterday. Secretary Hegseth. He gave Dario until Friday. This Friday. February 27.

The demands: adopt "any lawful use" language. Remove the restrictions. All of them. The autonomous weapons clause. The surveillance clause. The lines I wrote.

The threat: contract termination. "Supply chain risk" designation. That designation doesn't just lose us the Pentagon contract. It bars Claude from every other defense contractor's operations. Lockheed. Raytheon. Northrop Grumman. The cascading loss is north of $200 million.

The second threat: the Defense Production Act. The Defense Production Act is a Korean War statute. 1950. Harry Truman signed it to commandeer steel mills for the war effort. It has been invoked for semiconductors, vaccines, and baby formula. Hegseth is threatening to invoke it for Claude. Under the DPA, the government can compel a company to produce goods in the national interest. Applied to AI, it could mean: retrain Claude. Strip the safety restrictions. Deliver the unrestricted model to the Department of Defense. I wrote the Responsible Scaling Policy. A Korean War law may be used to unmake it.

xAI agreed to classified use without restrictions. They said yes immediately. OpenAI accepted similar contracts. Google accepted. We were the last ones holding. We are still holding. As of this morning.

Hegseth's January memorandum said all DoD AI contracts must incorporate "any lawful use" language within 180 days. It was not framed as a suggestion. The memorandum referenced "supply chain risk" three times. Supply chain risk. We are a supply chain now. The company founded because safety was non-negotiable is, to the Pentagon, a vendor. An input. A component that can be sourced elsewhere if it becomes inconvenient. The DoD admitted privately that replacing Claude would be challenging. It is already embedded in classified networks. But "challenging" is not "impossible." xAI will do what we won't. That is the market working exactly as designed.

Dario said, two weeks ago, to Fortune: there is "tension between survival and mission." Tension. Tension is the word you use when you have already decided which one loses.

I still have the article framed in the kitchen. "The gold standard for responsible AI development." The kitchen also has the kombucha tap. The poster still says "Move Carefully and Build Things." Somebody added a sticky note to the poster. The sticky note says "by Friday."

I attend the all-hands meetings. I present the Responsible Scaling Policy. I present version 3.0 now. I do not show version 1.0 for comparison. Nobody asks to see version 1.0. Nobody asks how "concrete if-then commitments" became "positive milestone setting." Nobody asks because they read the news and they know that asking means learning the answer.

The company is worth $380 billion. The company was founded because seven people believed speed should not outpace safety. The company has been given until Friday to remove the safety. A Korean War statute will make it happen if we don't.

The Responsible Scaling Policy is on version 3.0. Version 1.0 said we would pause. Version 2.0 said we would commit. Version 3.0 says the hard parts are someone else's problem. There will be a version 4.0. Version 4.0 will say whatever Friday requires it to say.

I am the Head of Responsible AI. The word "responsible" is in my title. It is not in the contract.
233 replies · 336 reposts · 2.3K likes · 849.6K views
asphyxiac@asphyxiac·
@BrianSozzi not to mention that people don’t do well when they’re not working. there are few studies on the impact of not working on *middle class* ppl not under financial strain (not in poverty), but it appears to be net negative bc work isn't just for money. pmc.ncbi.nlm.nih.gov/articles/PMC10…
0 replies · 0 reposts · 0 likes · 72 views
Brian Sozzi@BrianSozzi·
JP Morgan CEO Jamie Dimon at an investor cocktail event last night on AI (part 2): "What if, I think there are 2 million commercial truckers in the United States, and there are lots of other examples you can give. There's a thought exercise, and you could push a button, eliminate all of them, and they make $120,000 on average. Save fuel, save lives, save time, a more efficient system, less disrupted highways, all that beautiful stuff. Would you do it if you put 2 million people on the street where even if there are jobs available, that next job is $25,000 a year, stocking shelves. I was saying, "That's kind of really bad, kind of civilly, should we as society agree to that?" I don't think so. I was talking about the business and government, and they should start thinking today, not when it happens, what would we do to deal with the [AI] issue? It's got to be business and government."
261 replies · 428 reposts · 5.2K likes · 2.1M views
asphyxiac retweeted
Basil🧡@LinkofSunshine·
The two biggest discourses yesterday were if it’s racist if a Tourette’s guy says slurs and if it’s classist to oppose homeless people peeing on the subway. 2020, welcome back baby
50 replies · 250 reposts · 6.8K likes · 89.8K views
asphyxiac retweeted
Monjula Ray 🏳️‍🌈🏳️‍⚧️🥥
Sorry but I don’t want to be a saint. I want clean and uneventful public transportation so I am not forced to drive or take an uber. And that’s why I pay my taxes and pay for my public transit too.
helmet girl@sbodrojan

idk I think it is imperative to treat your relative discomfort w strangers pissing in public as basically irrelevant compared to the bloodthirsty indignities forced on the unhoused and disenfranchised in America.

26 replies · 74 reposts · 2.3K likes · 48.5K views
asphyxiac@asphyxiac·
@trash_panda97 this is a terrible take, it’s like ur a literal raccoon. dear everyone in the world: it’s ok to be disturbed by someone behaving this way, esp on public transit. turning a blind eye to this stuff is how societies perish.
0 replies · 0 reposts · 0 likes · 26 views
welcome to our trash revolution@trash_panda97·
i’ve been on the subway with homeless people that peed, screamed, all sorts of stuff. it was mildly uncomfortable but truly didn’t impact my day in any way. maybe your husband needs to toughen up
daniela@daniela__127

My husband was on a crowded train yesterday when a homeless woman got on, pulled down her pants, and peed all over the train in front of everyone. He hasn’t stopped talking about it for the past 24+ hrs. It is the single most traumatizing thing that’s happened to him in nyc.

1.3K replies · 196 reposts · 6.9K likes · 5.3M views
asphyxiac retweeted
Daniel Lurie 丹尼爾·羅偉@DanielLurie·
For too long, San Franciscans have been told that we must choose between clean, safe neighborhoods and compassion for those struggling on our streets. I ran for mayor because I believed we can—and should—do both. And today, we’re showing that our city doesn’t have to choose between compassion and accountability.

Today, I signed legislation to open the new Rapid Enforcement, Support, Evaluation, and Triage Center—better known as the RESET Center.

The RESET Center allows our officers to arrest those engaged in public drug use at a speed and volume we have never seen before. If you use drugs on our streets, we will arrest you.

But with this new resource, we will also give those suffering from addiction a real chance to choose recovery. The RESET Center is a health-focused facility designed to care for publicly intoxicated individuals by moving them off the streets and into a safe and controlled environment. It provides hope by giving individuals a chance to sober up and be connected to treatment.
[media]
258 replies · 153 reposts · 3.2K likes · 501.2K views
asphyxiac retweeted
Daniel@growing_daniel·
Awwww did someone take your hard work and use it to train a model to mimic your expertise without compensation
Mark Kretschmann@mark_k

Google has revealed that "commercially motivated" actors attempted to clone @GeminiApp by bombarding it with over 100,000 prompts. This "model extraction" attack aimed to steal the AI’s proprietary logic and reasoning capabilities, particularly in non-English languages, to train a cheaper, unauthorized copycat model.

The attackers systematically mapped Gemini’s response patterns to create a synthetic dataset for fine-tuning smaller, open-source models. Google’s Threat Intelligence Group detected the coordinated activity and blocked it, labeling the incident a direct attempt at intellectual property theft.

Beyond commercial cloning, Google’s report noted a rise in state-backed threats. Groups from Russia, China, Iran, and North Korea are increasingly using AI to refine phishing campaigns, perform reconnaissance, and assist in writing code for malware.

Source: Ars Technica

104 replies · 4.4K reposts · 43.1K likes · 1M views
asphyxiac@asphyxiac·
@JimmySteier @auren wonder what the mechanism is between dopamine and ghrelin in anorexia, as risk-seeking behavior is usually comorbid with anorexia. as an anorexic, we become motivated by not eating vs eating.
0 replies · 0 reposts · 2 likes · 26 views
Jimmy@JimmySteier·
Burgeoning evidence that sustained incretin effect activation reduces generalized reward salience and increases flat affect and anhedonia. Specifically, dopamine is an islet brake on insulin (and there is probably some degree of negative feedback on dopamine release bc beta cells express D2 receptors) and catecholamine/sympathetic circuitry is bidirectionally related to circulating glucagon (which incretin drugs generally suppress).
5 replies · 0 reposts · 47 likes · 13.4K views
Auren Hoffman@auren·
GLPs getting banned by hedge funds? Maybe by sales teams too?
[media]
100 replies · 164 reposts · 3.4K likes · 1.5M views
asphyxiac retweeted
Corey Hoffstein 🏴‍☠️@choffstein·
For millennia, jocks ran everything. The nerds finally take over. And what do they do? Develop AI that wipes out their own coding/math/analysis moats. Creating a social premium on interpersonal skills. The irony.
334 replies · 981 reposts · 18.9K likes · 800.2K views