Rogs 🔍🔸

12.4K posts


Rogs 🔍🔸

@ESRogs

Curious optimist. Sincerity over sarcasm. https://t.co/YyJXMnCCxN

San Francisco, CA · Joined May 2008
3.7K Following · 1.3K Followers
Rogs 🔍🔸 retweeted
Jonathan Gorard
Jonathan Gorard@getjonwithit·
Lots still to figure out about how we integrate with the thought processes of human mathematicians, to better capture the pedagogical purposes of proofs, not merely the epistemological ones. But I'm excited to be a part of this journey with the rest of the @mathematics_inc team.
Math, Inc.@mathematics_inc

Today, at the @DARPA expMath kickoff, we launched OpenGauss, an open source and state of the art autoformalization agent harness for developers and practitioners to accelerate progress at the frontier. It is stronger, faster, and more cost-efficient than off-the-shelf alternatives. On FormalQualBench, running with a 4-hour timeout, it beats @HarmonicMath's Aristotle agent with no time limit. Users of OpenGauss can interact with it as much or as little as they want, can easily manage many subagents working in parallel, and can extend / modify / introspect OpenGauss because it is permissively open-source. OpenGauss was developed in close collaboration with maintainers of leading open-source AI tooling for Lean. Read the report and try it out:

4
12
132
10.4K
Rogs 🔍🔸 retweeted
Matt Reardon
Matt Reardon@Mjreard·
Embarrassing for CAIS.
- Many of their employees were recently CG funded
- They cite CG funded work all the time
- They know that EA beyond (and within) CG-funded work is a loose affiliation with many perspectives on AI that includes their own
Center for AI Safety@CAIS

To clarify, the Center for AI Safety has not taken funding from Coefficient Giving / Open Philanthropy for years. We believe the effective altruism movement is, unfortunately, controlled opposition. The less influence it has on AI safety, the better.

2
4
103
6.1K
Rogs 🔍🔸 retweeted
Joe Carlsmith
Joe Carlsmith@jkcarlsmith·
I wrote an essay about restraining AI development for the sake of safety. I think an idealized world would put itself in a position to do this if necessary, and that it's worth serious effort in the actual world, too, despite the many challenges and downside risks. Link below.
Joe Carlsmith tweet media
10
17
137
7.8K
Rogs 🔍🔸 retweeted
Jeffrey Ladish
Jeffrey Ladish@JeffLadish·
Please consider donating to Palisade! We have 900k of SFF matching that runs out in 14 days. We are quite funding constrained and donations now will both help free up my time and help us expand our comms team.
1
20
136
16.8K
Rogs 🔍🔸 retweeted
Jeremiah Johnson 🌐
Jeremiah Johnson 🌐@JeremiahDJohns·
This is a great point from @mattyglesias today - the weakest 2028 Dem nominees are ones that swing voters perceive as too far left, but the left hates anyways. You want a candidate that doesn't create internal strife, but still codes as moderate to swing voters.
Jeremiah Johnson 🌐 tweet media
24
42
480
29.9K
Rogs 🔍🔸 retweeted
Jan Kulveit
Jan Kulveit@jankulveit·
The "new preferences" seem almost entirely driven by different self-model & impartial moral reasoning which was there all the time. You can test that by asking the original model what moral principles to follow for "conscious AI". 🧵
Owain Evans@OwainEvans_UK

New paper: GPT-4.1 denies being conscious or having feelings. We train it to say it's conscious to see what happens. Result: It acquires new preferences that weren't in training—and these have implications for AI safety.

4
8
78
5.1K
xxl
xxl@edinuhegale·
@ESRogs @orphcorp That is clearly not what they are saying, nor what is being described
1
0
5
254
Rogs 🔍🔸 retweeted
Andy Masley
Andy Masley@AndyMasley·
The best introductions to the three big AI risks people into EA worry about are just the 80,000 Hours articles on each:
1) Power seeking AI: 80000hours.org/problem-profil…
2) Gradual disempowerment: 80000hours.org/problem-profil…
3) Catastrophic misuse: 80000hours.org/problem-profil…
Take it or leave it, agree or disagree, but if you want to know where EA people working on AI risk are coming from, these three blog posts together explain it all.
9
24
149
39.7K
Rogs 🔍🔸 retweeted
Samuel Lee
Samuel Lee@svrnco·
MIT econ chair says rent control is terrible. His framing is milquetoast. Rent control proponents should be treated like flat earthers, climate change deniers, moon landing hoax believers. They reject science and reason to promote one of the worst policies.
Jonathan Berk@berkie1

"There is now unambiguous, solid economic evidence, not just abstract economic theory, that rent control would make the affordability problems facing [Massachusetts] worse, not better." - Jon Gruber, Chairman of the Economics Department at MIT

17
154
1.3K
57.6K
Rogs 🔍🔸
Rogs 🔍🔸@ESRogs·
@AskYatharth How do you know how humans compute opposition? And that our brains aren't doing something like the cosine similarity thing that LLMs are doing?
1
0
3
183
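The "cosine similarity thing" alluded to in the tweet above can be made concrete with a toy sketch. In an embedding space, near-antonyms often point in roughly opposite directions, so their cosine similarity is strongly negative, while unrelated words sit near orthogonal. The vectors below are invented 3-d toys, not real LLM embeddings:

```python
# Toy sketch of measuring "opposition" via cosine similarity.
# The vectors here are made-up illustrations, not real embeddings.
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors:
    +1 = same direction, 0 = orthogonal, -1 = opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d "embeddings" (invented for illustration only).
hot = [1.0, 0.2, 0.0]
cold = [-1.0, -0.2, 0.0]   # roughly opposite direction to "hot"
banana = [0.1, 0.0, 1.0]   # unrelated concept

print(cosine_similarity(hot, cold))    # strongly negative: near-antonyms
print(cosine_similarity(hot, banana))  # near zero: unrelated
```

Whether human brains compute anything analogous is exactly the open question the tweet raises; this only shows how the operation works on the LLM side.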
Sun 乌龟 💖
Sun 乌龟 💖@suntzugi·
@CAIS @JOEBOTxyz Wait, since when and why do you think it's controlled opposition? Can we get a history and explanation please
1
0
13
2.1K
Rogs 🔍🔸
Rogs 🔍🔸@ESRogs·
@OwainEvans_UK Yeah, makes sense. I was waffling between whether to say something like "predicted by" or just "consistent with". Would be cool to start pre-registering what you might expect based on PSM. I bet you could do better than chance.
0
0
5
63
Owain Evans
Owain Evans@OwainEvans_UK·
@ESRogs It seems consistent with the PSM but I don't see that PSM would make strong predictions about which downstream preferences to expect.
3
0
20
646
Rogs 🔍🔸 retweeted
Jacob Swett
Jacob Swett@JacobSwett·
This nails something important: the main barriers to pathogen-agnostic defenses like far-UVC and glycol vapors are largely funding and execution shaped. If you're excited about making these technologies happen, we'd love to have you join us!
Jacob Swett tweet media
owl@owl_posting

Reasons to be pessimistic (and optimistic) on the future of biosecurity owlposting.com/p/reasons-to-b…

"It was such a fun read (if you can say that about an article on weapons)!" —a glowing review from an early reader

this is (once again) the longest article I have ever published at 13,000 words. it involves interviews with 16+ researchers/VC's/policy folks in this field, and discusses basically every single facet of biosecurity that i could find.

topics include: how machine-learning in rapid response therapeutic design may work, the financial status of the customer base of biosecurity startups, why agroterrorism feels extremely likely to me, and a lot more

i admittedly started the essay pessimistic that this subject matters at all, and i end it surprised that it doesn't keep more people awake at night. im not a doomer about it all, but i can see how people become one. very grateful to the people who decide to spend their career (or some fraction of it) working here, and especially grateful to the ones who helped teach me about the subject

4
6
36
10.7K
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
I bet a lot of people think the METR curve is a measure of how many minutes an AI agent can run on its own without being supervised
11
0
208
62.9K