
Hypothesis: True full self-driving will be solved once (and only once) neural nets are trained via self-supervised learning on video data to predict the next frame (analogous to GPT-3 predicting the next text token). @karpathy
Rogs 🔍🔸 (@ESRogs) — Curious optimist. Sincerity over sarcasm. https://t.co/YyJXMnCCxN


Today, at the @DARPA expMath kickoff, we launched 𝗢𝗽𝗲𝗻𝗚𝗮𝘂𝘀𝘀, an open-source, state-of-the-art autoformalization agent harness for developers and practitioners to accelerate progress at the frontier. It is stronger, faster, and more cost-efficient than off-the-shelf alternatives. On FormalQualBench, running with a 4-hour timeout, it beats @HarmonicMath's Aristotle agent with no time limit. Users of OpenGauss can interact with it as much or as little as they want, can easily manage many subagents working in parallel, and can extend, modify, and introspect OpenGauss because it is permissively open-source. OpenGauss was developed in close collaboration with maintainers of leading open-source AI tooling for Lean. Read the report and try it out:

Chaos at PHL International Airport right now 😳😳 several TSA checkpoints are closed @FOX29philly

To clarify, the Center for AI Safety has not taken funding from Coefficient Giving / Open Philanthropy for years. We believe the effective altruism movement is, unfortunately, controlled opposition. The less influence it has on AI safety, the better.



New paper: GPT-4.1 denies being conscious or having feelings. We train it to say it's conscious to see what happens. Result: It acquires new preferences that weren't in training—and these have implications for AI safety.



We invited Claude users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people responded in one week—the largest qualitative study of its kind. Read more: anthropic.com/features/81k-i…



EA ≠ AI safety
AI safety has outgrown the EA community
The world will be safer with a broad range of people tackling many different AI risks

"There is now unambiguous, solid economic evidence, not just abstract economic theory, that rent control would make the affordability problems facing [Massachusetts] worse, not better." - Jon Gruber, Chairman of the Economics Department at MIT


@CAIS Can you spell out this "controlled opposition" claim / analogy? Who is the ruling party in this analogy — is it the AI companies?






Reasons to be pessimistic (and optimistic) on the future of biosecurity owlposting.com/p/reasons-to-b…

"It was such a fun read (if you can say that about an article on weapons)!" —a glowing review from an early reader

This is (once again) the longest article I have ever published, at 13,000 words. It involves interviews with 16+ researchers, VCs, and policy folks in this field, and discusses basically every facet of biosecurity that I could find. Topics include: how machine learning in rapid-response therapeutic design may work, the financial status of the customer base of biosecurity startups, why agroterrorism feels extremely likely to me, and a lot more.

I admittedly started the essay pessimistic that this subject matters at all, and I end it surprised that it doesn't keep more people awake at night. I'm not a doomer about it all, but I can see how people become one. Very grateful to the people who decide to spend their career (or some fraction of it) working here, and especially grateful to the ones who helped teach me about the subject.
