Dan Roberts

1.1K posts

Dan Roberts

@danintheory

Scientist @OpenAI. Prev. co-founder @diffeo, acquired by @salesforce // co-authored The Principles of Deep Learning Theory // studied gravity.

San Francisco, CA · Joined June 2009
724 Following · 6.6K Followers

Pinned Tweet
Dan Roberts@danintheory·
My short talk at @sequoia AI Ascent on how we at @OpenAI are attempting to flip the old "cherry-on-top" meme about "Scaling RL"
5 replies · 54 reposts · 438 likes · 137.1K views
Seth Neel@SethInternet·
After five incredible years on the faculty at @harvardhbs @hseas — including a year spent at @GoogleAI — I'm leaving to build something new. Back to startups. More at noon PT 🚀
5 replies · 2 reposts · 54 likes · 4.3K views
Dan Roberts reposted
Tibo@thsottiaux·
Three million people are now using Codex weekly - up from two million a little under a month ago. Incredible to see the growth. Thank you to all of you and to the ecosystem we’re part of. To celebrate, we’re resetting rate limits so you can keep building, and we’ll reset them every additional 1M users until we reach 10M, so we can keep celebrating along the way. Enjoy and thank you!
401 replies · 294 reposts · 4.4K likes · 446.1K views
Dan Roberts reposted
Mehtaab Sawhney@mehtaab_sawhney·
We are excited to share a new paper solving three further problems due to Erdős; in each case the solution was found by an internal model at OpenAI. Each proof is short and elegant, and the paper is available here: arxiv.org/pdf/2603.29961
27 replies · 149 reposts · 1.1K likes · 403.4K views
Ante ⚙️@AnteOrg·
Billions of dollars in crypto are permanently locked. No recovery. No inheritance. Today we’re launching Ante Vaults, a self-custody vault with time-based social recovery, simple enough to manage from your phone. Now live on Ethereum + Base, no wallet required. 🧵
31 replies · 24 reposts · 83 likes · 9K views
Dan Roberts reposted
NatSecKatrina@natseckatrina·
I firmly believe that in America, competition is a good thing. We should want patriotic, experienced leaders like Anthropic's Tarun Chhabra helping to steer the trajectory of democratic AI. Though we are competitors because we work for competing frontier AI labs, one thing we share in common is a sincere belief that America's prosperity and security depend, in part, on the American AI industry continuing to lead on this technology.
3 replies · 6 reposts · 121 likes · 37.6K views
Dan Roberts reposted
Amanda Askell@AmandaAskell·
Tech companies pay millions of dollars for their employees and then stick them in open-plan offices that make it nearly impossible to get work done. Best strategy for poaching employees is probably to just offer them an office with a door.
238 replies · 230 reposts · 4.6K likes · 673.9K views
Gleb Kuznetsov@glebkuz·
Just announced our joint study with @NVIDIAHealth running a million-molecule benchmark of AI-designed proteins for NVIDIA's new model Protein-Complexa. We brought our massively multiplexed all-against-all AI binder testing platform, which has been core to progressing our own protein design model mBER. The key to advancing protein design models beyond what's possible from public data is experimental scale that can match the scale of generative AI. Together we were able to show some quite fantastic results, with a 68% hit rate for Protein-Complexa. More designs generated and more designs tested is better (when you can do it efficiently).
Quoting Manifold Bio@ManifoldBio:
@ManifoldBio and @NVIDIAHealth announce a joint study validating Proteina-Complexa, NVIDIA's latest BioNeMo model for protein binder design.
4 replies · 13 reposts · 51 likes · 13.6K views
Dan Roberts reposted
roon@tszzl·
feeling sort of gullible today, maybe due to selection effects
76 replies · 31 reposts · 1.2K likes · 73.4K views
Dan Roberts reposted
NatSecKatrina@natseckatrina·
Some important updates for those following OpenAI’s agreement for classified deployments with the Department of Defense (War). First, in addition to the layered safeguards we already announced, new language reinforces that domestic surveillance is disallowed under this agreement, including involving commercially acquired information. Second, our agreement will not apply to Defense Intelligence Components (NSA, NGA, DIA, etc.). Services provided to those agencies will require a contract mod. This will give us time to fully consider important implications.
24 replies · 8 reposts · 118 likes · 15.2K views
Dan Roberts reposted
Sam Altman@sama·
Here is a re-post of an internal post:

We have been working with the DoW to make some additions to our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else: "• Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. • For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." It’s critical to protect the civil liberties of Americans, and there was so much focus on this that we wanted to make this point especially clear, including around commercially acquired information. Just like everything we do with iterative deployment, we will continue to learn and refine as we go. I think this is an important change; our team and the DoW team did a great job working on it.

2. The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract.

3. For extreme clarity: we want to work through democratic processes. It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked: if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it).

4. There are many things the technology just isn’t ready for, and many areas where we don’t yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods.

5. One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.

In my conversations over the weekend, I reiterated that Anthropic should not be designated as an SCR, and that we hope the DoW offers them the same terms we’ve agreed to. We will host an All Hands tomorrow morning to answer more questions.
3.9K replies · 625 reposts · 6.1K likes · 3.6M views
Dan Roberts reposted
Boaz Barak@boazbaraktcs·
Extremely well put @deanwball! A must-read essay. My position is that:

1. Anthropic is a great company; people who work there care deeply about AI safety and the benefit of the U.S. Tagging it as a "supply chain risk" is a massive own-goal for American AI leadership.

2. The red line of not using AI to do domestic mass surveillance is not Anthropic's red line - it should be all of ours. To be very clear, it is also my personal red line.

3. I do not know enough to ascertain whether Anthropic's original contract, as signed in the Biden administration, did enough to ensure this red line is not crossed. Usage policies are empty words if they are not coupled with effective definitions, safeguards, and monitoring.

4. But I do know that now that OpenAI is dealing with the DoW, it is our responsibility to ensure our AI is used to protect freedom, not take it away from Americans. I take this responsibility very seriously.

5. If anything good can come out of the events of the last week, it would be if we in the AI industry start treating the use of AI for government abuse and surveillance of its own people as a catastrophic risk in its own right. We have done a good job of evaluations, mitigations, and processes for risks such as bioweapons and cybersecurity. Let's use similar processes here.

Quoting Dean W. Ball@deanwball:
I think this one needs no further explanation.
13 replies · 10 reposts · 133 likes · 20K views
Dan Roberts reposted
NatSecKatrina@natseckatrina·
Anthropic has primarily been concerned with usage policies because their existing classified deployments involve reduced or removed safety guardrails (making usage policies the primary safeguards in national security deployments). Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards, including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. That's what we pursued in our negotiations, and that's why we think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.
14 replies · 13 reposts · 177 likes · 50.6K views
Dan Roberts reposted
OpenAI@OpenAI·
We do not think Anthropic should be designated as a supply chain risk and we’ve made our position on this clear to the Department of War.
298 replies · 282 reposts · 4.6K likes · 2M views