Achyuta Rajaram

771 posts

@AchyutaBot

@_ddjohnson fan account, currently @OpenAI opinions are mine and mine alone

my bed · Joined October 2020
1.7K Following · 2.4K Followers
Pinned Tweet
Achyuta Rajaram@AchyutaBot·
New paper! (with @OpenAI) We trained weight-sparse models (transformers with almost all of their weights set to zero) on code: we found that their circuits become naturally interpretable! Our models seem to learn extremely simple, disentangled, internal mechanisms!
18 · 33 · 394 · 51.8K
Phil Chen@philhchen·
I’ve started a new company with @tkkong! TK is a driving force behind a lot of Ramp’s success, building much of the core product, incubating the procurement platform, and leading Ramp Labs. We’re a team of IMO and Physics Olympiad gold medalists, and we’re hiring the most talent-dense team.
TK Kong@tkkong

I’ve started a new company with @philhchen! Phil built frontier LLMs across research & engineering at OpenAI, DeepMind, and Scale. I was shipping AI experiments at Ramp Labs. We've been heads down building personalized AI coworkers for every business. We’re growing our team of researchers, designers, and IMO gold medalists. Reach out if you're interested!

60 · 16 · 376 · 113.9K
Achyuta Rajaram retweeted
Neil Chowdhury@ChowdhuryNeil·
i know it's hot to drop out of college and hammer a nail into the agi rocket these days, but for similar reasons, i think it's good to spend time exploring in undergrad! you can try to graduate early to save time. i'm personally glad i went back; you don't get that environment again
Paul Graham@paulg

Don't start a startup in high school. What if it works? You'll lose the opportunity you'd otherwise have to explore random, interesting ideas, driven only by curiosity. Because while you will indeed learn a lot from a startup, you won't have any choice about what you learn.

1 · 1 · 42 · 3.8K
Achyuta Rajaram retweeted
Sam Altman@sama·
Here is a re-post of an internal post:

We have been working with the DoW to make some additions in our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else:

"• Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.
• For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

It's critical to protect the civil liberties of Americans, and there was so much focus on this that we wanted to make this point especially clear, including around commercially acquired information. Just like everything we do with iterative deployment, we will continue to learn and refine as we go. I think this is an important change; our team and the DoW team did a great job working on it.

2. The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract.

3. For extreme clarity: we want to work through democratic processes. It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked: if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it).

4. There are many things the technology just isn't ready for, and many areas where we don't yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods.

5. One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.

In my conversations over the weekend, I reiterated that Anthropic should not be designated as a SCR, and that we hope the DoW offers them the same terms we've agreed to. We will host an All Hands tomorrow morning to answer more questions.
3.8K · 629 · 6.1K · 3.6M
Achyuta Rajaram retweeted
Clive Chan@itsclivetime·
This is what I currently believe to be the case and am advocating internally to release more information about as soon as feasible. If we later learn this is not the case, then I will advocate internally to terminate the contract.
Boaz Barak@boazbaraktcs

There is this narrative that up until this week, Anthropic had this wonderful contract that prevented the U.S. government from doing mass domestic surveillance or autonomous lethal weapons, and now all hell will break loose. As I wrote, I am not a fan of accelerating AI specifically in the national security space. If I had been an Anthropic employee at the time they signed their original deal with the DoW, I would probably have opposed it, especially given the reduced control since they worked through Palantir. And I don't think having some terms of use in the contract is what we can rely on to protect us.

I believe the drama of the last week about these terms of use is more about politics than substance. The substance is in the details, which I hope will come out soon. But it is wrong to present the OAI contract as if it is the same deal that Anthropic rejected, or even as if it is less protective of the red lines than the deal Anthropic already had in place before. Obviously I don't know all the details of what Anthropic had before, but based on what I know, it is quite likely that the contract OAI signed gives *more* guarantees of no usage of models for mass domestic surveillance or autonomous lethal weapons than Anthropic ever had.

35 · 10 · 275 · 102.1K
Achyuta Rajaram retweeted
Markov@MarkovMagnifico·
dawg you are not going to be part of the permanent underclass. that underclass already exists and it does not live in a studio apartment in San Francisco, it's making bricks in debt slavery in Pakistan
42 · 327 · 9.4K · 181.7K
Achyuta Rajaram@AchyutaBot·
@agniv_s hot take? Whiteboards are useful for humans because we have big visual cortexes. This is more or less irrelevant for intelligence
1 · 0 · 1 · 48
david rein@idavidrein·
Seems like a lot of people are taking this as gospel—when we say the measurement is extremely noisy, we really mean it. Concretely, if the task distribution we're using here was just a tiny bit different, we could've measured a time horizon of 8 hours, or 20 hours.
METR@METR_Evals

We estimate that Claude Opus 4.6 has a 50%-time-horizon of around 14.5 hours (95% CI of 6 hrs to 98 hrs) on software tasks. While this is the highest point estimate we’ve reported, this measurement is extremely noisy because our current task suite is nearly saturated.

35 · 54 · 643 · 72.1K
Achyuta Rajaram@AchyutaBot·
@ChowdhuryNeil wait, I think I experienced a step-function improvement from switching to our harness a month ago. I agree the UI is worse, but maybe it's worth it?
1 · 0 · 0 · 139
Neil Chowdhury@ChowdhuryNeil·
@AchyutaBot i've tried the extensions but honestly the cursor sidebar just has better integration with the UI and uses the same underlying models and i can't channel my inner PM well enough to run multiple agents simultaneously
1 · 0 · 5 · 361
Achyuta Rajaram retweeted
Aidan Clark@_aidan_clark_·
All of this debate between the labs makes me so angry I might grab a Heineken™ to relax. Watching friends (who normally kick back and debate at an SF tech party with some Heineken™s) argue over silly differences is such a waste of energy. [6-Pack of Heineken™ delivered TODAY]
5 · 2 · 119 · 7.6K
Achyuta Rajaram retweeted
Noam Brown@polynoamial·
Labs like @OpenAI also hire researchers straight out of undergrad, like @kevin_wang3290, though the bar is high. Kevin was highly recommended by his advisor and was first author on a NeurIPS 2025 paper. There are a lot of bad NeurIPS papers, but we could tell this was a great one. (Indeed, after he joined OpenAI his paper was one of 4 out of 5,290 to receive a Best Paper award.) His advisor's recommendation counted for a lot because it can be hard to evaluate a researcher just based on a resume or even a paper. x.com/kevin_wang3290…
2 · 3 · 216 · 22.4K
Leo Gao@nabla_theta·
another huge win for cot interp
10 · 2 · 97 · 15.4K
Achyuta Rajaram retweeted
𝚟𝚒𝚎 ⟢@viemccoy·
Artificial Intelligence is enabling us to construct high-fidelity models of the imagination, excreted over the past couple hundred years into the material plane through media, now returning back to its rightful place as the Stuff of Dreams. This is the Human Soul, Exteriorized.
AI Slop@AIslop_

11 · 20 · 373 · 19.7K
Achyuta Rajaram@AchyutaBot·
@garrytan @robertwiblin Another way to argue against free markets here is that ASIs "goodhart" capitalism. Productivity doesn't have to be correlated with human wellbeing in these extreme circumstances. It's up to us to shape the competitive landscape to make ASIs help people.
0 · 0 · 1 · 111
Achyuta Rajaram@AchyutaBot·
@garrytan @robertwiblin Notably, this is also what the "good" scenarios look like! Human irrelevance and post-scarcity utopia are fairly close together. Intelligent regulation by the government and responsible deployment by the private sector are necessary to thread this needle.
1 · 0 · 1 · 244