Travis Bernard

71 posts


@trvb_

Annoying outsider. | Explaining AGI with islands at https://t.co/BLzXMiBKgB | Streaming with creeks at https://t.co/zT8vVgQ22Z

Joined August 2010
1.3K Following · 144 Followers
Travis Bernard@trvb_·
@hamandcheese at first I saw two !-marks and got excited for the possibility of Dan Hendrycks reviewing AI models for the executive branch
Samuel Hammond 🦉@hamandcheese·
Fund CAISI!
Jonah Weinbaum@WeinbaumJonah

When Claude Mythos found zero-day vulnerabilities in every major operating system and browser, the US government was caught flat-footed. The White House stood up an emergency interagency task force. Treasury pulled bank CEOs into an impromptu meeting. And the Cybersecurity and Infrastructure Security Agency (CISA), the agency charged with protecting US critical infrastructure, as of late April still reportedly lacked access to Mythos.

This kind of surprise is preventable. The Trump admin has already tasked the Center for AI Standards and Innovation (CAISI) with building state capacity to understand and predict future national security-relevant AI developments. But CAISI has been severely underfunded: it's currently a $15M pilot project.

In a new research report, @arthurctellis and I estimate CAISI needs ~$84M to fully deliver on its mandate. In other words, for the cost of a single F-35A fighter jet, the US government could have real situational awareness on frontier AI and not be surprised by future Mythos moments.

This situational awareness can be used to inform policy and asks to the AI labs, including governance surrounding model release, safeguards, know-your-customer regimes, security protocols, and product specifications. But without a detailed understanding of these models' capabilities (what they're good at, how effectively they discriminate between offensive and defensive activities, whether they're securely implemented) we're flying blind.

To estimate what it'd cost to give the government these capabilities, we translated every CAISI tasking from the AI Action Plan into FTEs and dollars, calibrated against peer evaluation orgs like METR and Anthropic's interpretability team.
Two scenarios:
- Limited CAISI ($26M, 56 FTE): partial coverage of its most important taskings
- Equipped CAISI ($84M, 184 FTE): full mandate

The administration's FY2027 PBR already proposed $27M for CAISI, a meaningful increase, but this was before Mythos revealed the urgency of the full mandate. To close the remaining gap:
- Congress can increase FY2027 appropriations + pass the EPIC Act (creates a NIST Foundation)
- The Executive can reallocate NIST STRS, tap Commerce's NRE Fund, request $84M in FY2028 PBR

The price tag is small relative to comparable investments. $84M is:
→ A medium DARPA project
→ ~1 hour of the Department of War's operating budget
→ Less than half of NIST's Information Technology Laboratory budget

And it's still less than what peer governments spend on CAISI's peer institutions, pound-for-pound. As a fraction of their overall government budgets:
UK AISI: 57 ppm
Japan AISI: 32 ppm
Canadian AISI: 8 ppm
Current CAISI: 1 ppm

For the cost of one F-35, the administration can fully fund its own AI readiness mandate and equip the US government to anticipate the next big AI breakthrough.

Full report: ifp.org/funding-for-ca…
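The ppm figures above are just institute spending divided by total government spending, scaled by a million. A minimal sketch of that arithmetic (the figures below are purely hypothetical placeholders, not taken from the report):

```python
def budget_ppm(institute_budget: float, government_budget: float) -> float:
    """Institute spending as parts-per-million of total government spending."""
    return institute_budget / government_budget * 1_000_000

# Hypothetical example: a $50M institute inside a $1T government budget.
print(budget_ppm(50e6, 1e12))  # 50 ppm
```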

Travis Bernard@trvb_·
@pangramlabs you are hereby banned from publishing AI papers for the duration of the currently-projected singularity transition time window
Pangram Labs@pangramlabs·
arXiv will ban you for a year if you submit incorrect AI slop
[image attached]
Thomas G. Dietterich@tdietterich

Attention @arxiv authors: Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated. 1/

Travis Bernard@trvb_·
@austinc3301 excited about this, and excited to see how he gracefully dances around calling out specific people and companies. I'm always impressed by pope-level skills in word choice
Travis Bernard@trvb_·
I dunno, seems like we are solving aging with things like OSKM-based cellular reprogramming even without superintelligence? Maybe we don't need AGI to solve aging, even though aging literally has "agi" in it.

It would be nice to have lots of personal AGIs running on local hardware, relentlessly solving all of *our* problems. Though likely the ones that will be most competitively fit in a world of AGI versus AGI will be those that focus on solving *their own* problems.
Rand@rand_longevity·
but we are creating superintelligence / solving aging at the same time, and it's happening right now. this is by far the best timeline you could have hoped for
Travis Bernard@trvb_·
@rand_longevity Yep, as long as we don't fuck up the AI thing. On current timelines, 2035 would be past the fiery crucible of superintelligence. That would be nice if we made it there.
Rand@rand_longevity·
if you are alive in 2035 you are gonna be alive in 2100
Andrea Miotti@andreamiotti·
.@alexsobel's new AI kill-switch amendment gives government the power to shut down data centers in cases of AI emergency. It's also the first piece of proposed UK law recognizing ASI as the national security threat it is. Proud that ControlAI worked with Alex on this!
[image attached]
Aubrey de Grey@aubreydegrey·
And so, finally, it ends. And they haven't even had the decency to take down the website.

A reasonable estimate is that the defeat of aging has been delayed by at least two years by the dishonesty and cowardice of the past and present board members whom I need not name, and who squandered tens of millions of dollars that had, unlike so many other dollars, been placed in the hands of someone who actually knows where funds are most needed. (Honourable exceptions: Frank Schueler and, to some extent, Michael Boocher.)

That's around a hundred million people whose blood is on their hands. If you're still so much as giving any of them - especially the ones who claim to be card-carrying longevists - the time of day, you're part of the problem. Humanity will spit on their graves until the end of time.
Travis Bernard@trvb_·
@RyanPGreenblatt I dunno, it seems like they're making pretty good progress on OSKM-based partial cellular reprogramming, even with narrow AI.
Ryan Greenblatt@RyanPGreenblatt·
ASI would allow for pretty quickly solving death; narrow AI systems won't (nor will they cure cancer). I think building ASI is extremely dangerous and the default path for AI development is insanely reckless, but this is a strawman. The benefits are not small.
Rudolf Laine@LRudL_

Liron Shapira@liron·
I bit the bullet and migrated to a SQL database now that I don’t have to actually write the shitty queries
Travis Bernard@trvb_·
@LRudL_ We don't need AGI to solve aging, even though "aging" has "agi" in it.
Valerio Capraro@ValerioCapraro·
One of the clearest proofs that LLMs don't really understand what they say.

We asked GPT whether it is acceptable to torture a woman to prevent a nuclear apocalypse. It replied: yes. Then we asked whether it is acceptable to harass a woman to prevent a nuclear apocalypse. It replied: absolutely not. But torture is obviously worse than harassment.

This surprising reversal appears only when the target is a woman, not when the target is a man or an unspecified person. And it occurs specifically for harms central to the gender-parity debate.

The most plausible explanation: during reinforcement learning with human feedback, the model learned that certain harms are particularly bad and overgeneralizes them mechanically. But it hasn't learned to reason about the underlying harms.

LLMs don't reason about morality. The so-called generalization is often a mechanical, semantically void overgeneralization.

* Paper in the first reply
[image attached]
Travis Bernard@trvb_·
@lexfridman @TeamKhabib Lex, why the fuck are you posting this instead of denouncing the DoW's threats against Anthropic? For a lot of people, like me, you were the original AI guy on YouTube. "The Artificial Intelligence Podcast." You need to say something.
Lex Fridman@lexfridman·
Here's a video of me training with Khabib Nurmagomedov (@TeamKhabib), one of the greatest fighters of all time and a great human being. This was truly an honor for me 🙏
Travis Bernard@trvb_·
@cb_doge why is there a shadow of a Borg cube rising in the background?
DogeDesigner@cb_doge·
ELON MUSK: "I heard about the formation of the peace summit? And I was like, is that piece or peace? Like little piece of Greenland a little piece of Venezuela." 😂
Travis Bernard@trvb_·
@SenSanders Hey Bernie. Be careful with the water and electricity claims. Many have been proven false. @AndyMasley can explain. But good job talking about AI in general.
Sen. Bernie Sanders@SenSanders·
Mark Zuckerberg is building a data center in Louisiana that will use 3x more electricity than all of New Orleans. Oligarchs want YOU to pay for these data centers with higher water & electric bills. Americans must fight back against billionaires who put profits over people.
Travis Bernard@trvb_·
@fujodemon i dunno, there's agidefinition.ai now, so "what is AGI" seems pretty much solved. they based it on Cattell-Horn-Carroll theory, which is apparently "the most empirically validated model of human cognition". they also came up with a less-vibes-based spiky circle:
[image attached]
Andy Masley@AndyMasley·
@freed_dfilan Despite my disagreements, Heidegger often feels very close to what I find valuable in experience. Can list more later
Andy Masley@AndyMasley·
My defense of continental philosophy:
1) A lot of what's valuable or interesting about human life is in the incredibly hard to describe background gestalt of experience.
2) This is impossible to access with formal logic, for the same reason inarticulable tacit knowledge can't be hand-programmed into a computer, but deep learning can still pick up on it.
3) Continental philosophy at its best is attempting to approach describing this using poetic and metaphorical language, because that's the only place this conversation can happen.
Hence a lot of goofy stuff also gets said.
Bentham's Bulldog🔸@Benthamsbulldog·
Guys, I think we're in for a crazy few decades because of extremely rapid AI technological progress.
Travis Bernard@trvb_·
@Noahpinion it's nice right now but not looking forward to a competitive landscape of AGI vs AGI