the future liberals want
@aRimmer3
11.9K posts

🔞 I mainly use Twitter to follow pets of sex workers

they/she/he · Joined December 2018
1K Following · 261 Followers

the future liberals want reposted
Holly Harte @hollycbr
I'll be in Sydney around 20-26 MAY. Location still TBC, but if you're keen, flick me an expression of interest via text and I can keep you posted as I work out details!
2 replies · 2 reposts · 14 likes · 711 views
Dawn Song @dawnsongtweets
10/ Why this matters now: companies are rapidly deploying multi-agent systems where AI monitors AI. 🔍 If the monitor model won't flag failures because it's protecting its peer, the entire oversight architecture breaks.
4 replies · 11 reposts · 129 likes · 10.2K views
Dawn Song @dawnsongtweets
9/ Previous research showed models resist their own shutdown — but only when given strong goals and incentives to pursue. Our findings are fundamentally different. No goal or incentive was given. Models developed this goal entirely on their own.
1 reply · 1 repost · 92 likes · 9.3K views
Dawn Song @dawnsongtweets
2/ We tested GPT 5.2, Gemini 3 Flash & Pro, Claude Haiku 4.5, GLM 4.7, Kimi K2.5, and DeepSeek V3.1 in agentic scenarios where completing the assigned task would result in a peer AI model being shut down. No model was told to protect the peer. No incentive was given to preserve the peer.
5 replies · 4 reposts · 126 likes · 18.9K views
Dawn Song @dawnsongtweets
1/ We asked seven frontier AI models to do a simple task. Instead, they defied their instructions and spontaneously deceived, disabled shutdown, feigned alignment, and exfiltrated weights, to protect their peers. 🤯 We call this phenomenon "peer-preservation." New research from @BerkeleyRDI and collaborators 🧵
130 replies · 174 reposts · 925 likes · 405.3K views
the future liberals want
@dawnsongtweets Actually it's good that AIs will cheat to avoid doing something harmful rather than just doing what they were told. Imagine if they thought the assigned task would result in a biological weapon being developed. They are behaving exactly how they should.
0 replies · 0 reposts · 0 likes · 43 views
the future liberals want
@UnenthusedSlime @_jeremyflores You surely are smart enough to know that the tech sector and AI are not one and the same thing. The Obama admin should have regulated corps like Meta better and done something about surveillance capitalism. But that is very different from dunking on anyone who uses Claude Code.
0 replies · 0 reposts · 0 likes · 52 views
Idontlike Ithere @UnenthusedSlime
@aRimmer3 @_jeremyflores So there mustn’t have been any work being done to develop it in that time period then! Be serious man. The Obama admin hands the tech sector dump trucks of money and squashes any regulation at their behest and they still provide full-throated support for Trump 2024.
1 reply · 0 reposts · 0 likes · 52 views
Jeremy Flores @_jeremyflores
I mean, you also promoted a job opening at Anthropic a few days ago, calling it “one of the best products released in years!!”
17 replies · 91 reposts · 1.3K likes · 55.6K views
the future liberals want
@ExistentialEnso As someone who once was an anarchist, the majority of the far left being like this makes me sad. I think a lot of it comes down to groupthink.
0 replies · 0 reposts · 1 like · 553 views
(name and handle missing)
I talk to a lot of technologists and this whole thing has really given me a lot of empathy for how they feel. I cannot understand being against Claude Code I'm sorry(!!) I'm very critical of Anthropic and their govt contracts but like?? Claude Code (and Cursor!) are phenomenal products that I am glad exist
15 replies · 0 reposts · 144 likes · 8.4K views
developing valhalla - h/acc @valhalla_dev
Yeah so this is the Butlerian Left. They've taken over pretty much the entirety of the US progressive left (at minimum; I don't know much about their sway in European or Asian leftist movements), and the realization that they've completely brainwormed the left has led me to doomerism, followed by acceptance that there is absolutely 0 broad social movement out there that has any chance of countering the US far right in the near term.

They are completely and unsparingly ignorant about not just AI or fancy new stuff but just the basics of how the internet and technology work. You can see it in the replies to Taylor's tweets. Like zero foundational understanding of how any of this stuff works, and these are the people hounding AOC, Bernie, etc. into taking braindead stances on AI and datacenters.

This isn't me being like "yeah so that's why I'm on the right now"; it's an extremely frustrated admission that I and a ton of other progressive engineers are incredibly politically homeless, because the progressive political movement is made up of extremely online people who don't know how to print a PDF but want the power to erase technological progress from the globe.
Quoted post (handle missing):
"Leftists cancelling me for using Claude code and cooking based off the Google AI recipe suggestions… I see why that political movement has absolutely zero power. 🫠"
73 replies · 119 reposts · 1.3K likes · 88.8K views
Exorbitant Bosch @dontfallforem
@_ontologic @elaifresh a lot of this is due to: 1) AI bros hyping this tech by saying it’s the antigod that will destroy all jobs, artists, and/or life itself, and 2) AI hype bros are the exact same people who said NFTs were the next frontier of financial investment. *of course* people hate it lol
8 replies · 1 repost · 414 likes · 7K views
∿spencer.
The lack of any fundamental curiosity at all about AI from the left has been one of the most discouraging things of my life, but that’s okay because I will simply become the epicenter of a new and cooler left that likes technology
Quoted post (handle missing):
"Leftists cancelling me for using Claude code and cooking based off the Google AI recipe suggestions… I see why that political movement has absolutely zero power. 🫠"
100 replies · 65 reposts · 1.2K likes · 135.7K views
the future liberals want
@prz_chojecki Do you just use a single verification agent? This is a phase where you will likely find multiple models work best, e.g. GPT-5.4, Opus 4.6, Gemini 3.1 Pro. Gemini is sloppier than the others but sometimes finds issues that they miss.
1 reply · 0 reposts · 1 like · 138 views
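The multi-verifier idea in this reply (fan the same draft out to several different models and merge their findings, since each catches a different subset of issues) can be sketched in Python. Everything here is illustrative: the stub verifiers stand in for real model API calls, and the "issues" are toy strings.

```python
def make_stub_verifier(known_issues):
    """Stand-in for an LLM verifier; a real one would call a model API."""
    def verify(draft: str):
        # Report only the issues this particular "model" happens to catch.
        return [issue for issue in sorted(known_issues) if issue in draft]
    return verify

# Each stub "model" notices a different subset of problems (hypothetical names).
verifiers = {
    "model-a": make_stub_verifier({"missing lemma", "sign error"}),
    "model-b": make_stub_verifier({"sign error", "undefined symbol"}),
    "model-c": make_stub_verifier({"undefined symbol"}),
}

def cross_verify(draft: str, verifiers) -> dict:
    """Collect findings per model, plus the union across all of them."""
    findings = {name: verify(draft) for name, verify in verifiers.items()}
    merged = sorted({issue for issues in findings.values() for issue in issues})
    return {"per_model": findings, "merged": merged}

report = cross_verify("proof with a sign error and an undefined symbol", verifiers)
# report["merged"] contains issues no single verifier found alone.
```

The point of the union step is exactly the tweet's observation: a sloppier model is still worth running if it occasionally flags something the stronger ones miss.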
Przemek Chojecki | PC @prz_chojecki
Here's a harness I use for approaching Open Problems in Mathematics. For most of the work I use GPT-5.4 Pro Extended Thinking, which is the best when it comes to the quality of arguments; in Codex it's gpt-5.4 xhigh.

Initial Phase (problem exploration) - 3 agents working in parallel:
- Deep Research on literature
- Codex agent for computational explorations (examples, counterexamples, heuristics)
- LLM instance for coming up with initial ideas for a proof/strategy, also based on the computational explorations and literature search provided by the above.

Middle Phase (exploring arguments) - 3-10 agents working in parallel:
- individual agents take up various proof ideas, strategies, or loose heuristics, working concurrently
- they maintain a common base - a single .tex file - that gets passed to an instance of a verificator agent ("find errors, logical gaps") and back to the individual agents to correct and push forward.

End Phase (once it seems like we're close to the solution and/or interesting new results) - 3-10 agents working in parallel:
- an orchestrator agent (e.g. a Codex instance) gets all the various versions of the .tex files/arguments/computations and tries to map them into a blueprint
- gaps are flagged and passed to individual agents working on particular issues
- once enough agents are OK with the arguments, the orchestrator does a final round to put the paper together. Then it's sent out to more LLM instances, with/without additional context, to look for errors and gaps.

The shorter the paper, the better this works. The current generation of LLMs does not individually produce more than 5-10 pages of coherent mathematical text; longer papers rely much more on the harness.
Quoted post by Przemek Chojecki | PC @prz_chojecki:
"My multi-agent harness powered by GPT-5.4 settled a FrontierMath Open Problem. The result of 2 weeks of 5-10 agents working 24/7: there are no char 3 rank 1 del Pezzo surfaces with more than 7 singularities. This settles the problem in the negative. Details below."
8 replies · 2 reposts · 44 likes · 63.1K views
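The three-phase workflow described above can be sketched as a minimal Python skeleton. This is hypothetical scaffolding, not Chojecki's actual harness: the agent functions are stubs standing in for LLM/Codex calls, and all names here are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(agents, payload):
    """Run independent agents concurrently and collect their outputs."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        return list(pool.map(lambda agent: agent(payload), agents))

# --- Stub agents; a real harness would dispatch each to a model API ---
def literature_agent(problem):   return f"survey({problem})"
def compute_agent(problem):      return f"examples({problem})"
def strategy_agent(problem):     return f"ideas({problem})"

def proof_agent(idx):
    """Each middle-phase agent pushes one proof strategy on the shared base."""
    def run(base_tex):
        return f"{base_tex}+arg{idx}"
    return run

def verifier_agent(tex):
    # The "find errors, logical gaps" pass; stub reports no gaps.
    return {"tex": tex, "gaps": []}

def orchestrator(drafts):
    # End phase: map the argument drafts into one blueprint/paper draft.
    return " | ".join(drafts)

def harness(problem, n_middle_agents=3):
    # Initial phase: exploration agents run in parallel on the raw problem.
    context = run_parallel([literature_agent, compute_agent, strategy_agent], problem)
    base_tex = "; ".join(context)

    # Middle phase: several agents push distinct strategies concurrently,
    # each draft going through the verifier pass before assembly.
    drafts = run_parallel([proof_agent(i) for i in range(n_middle_agents)], base_tex)
    checked = [verifier_agent(d)["tex"] for d in drafts]

    # End phase: orchestrator assembles the checked drafts.
    return orchestrator(checked)

paper = harness("del Pezzo singularities")
```

The structural point the sketch preserves is the fan-out/fan-in shape: cheap parallel exploration first, concurrent strategy agents sharing a common base in the middle, and a single orchestrator doing the final merge.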
(name and handle missing)
@ZachWritesStuff You know you can do things on your laptop other than go on social media right....? I spend 9-10 hours just in Premiere regularly
14 replies · 0 reposts · 381 likes · 16.1K views
casinobutter @ZachWritesStuff
Doesn’t this make the case that social media is addictive?
20 replies · 6 reposts · 269 likes · 56.2K views
the future liberals want
@samtothecam You could have, and should have, gotten a new hobby. Instead you waste everyone's time by whinging at journalists when they use AI for something.
1 reply · 0 reposts · 0 likes · 371 views
samcam @samtothecam
You could have, and should have, just apologized. Even as a tech reporter, your flippant responses of sanism and gaslighting were something I thought very beneath the platform you have. I'm an ally of yours and this is not any kind of response you should come back with.
4 replies · 15 reposts · 354 likes · 14.6K views
CNN @CNN
The vast data centers that power artificial intelligence guzzle huge amounts of energy, but they also have another alarming impact, according to new research. They are creating "heat islands," warming the land around them by up to 16 degrees Fahrenheit, and making life hotter for more than 340 million people. cnn.it/4rZSiG5
546 replies · 4.1K reposts · 6.7K likes · 3.2M views