Ben H

2.1K posts

@benmharrison

In the long run, everything tends towards the cost of energy. All opinions are not my own. DM anything ✌🏻

Joined December 2017
531 Following · 184 Followers
Ben H reposted
Flo Crivello
Flo Crivello@Altimor·
Apparently an LLM only ever activates a small % of its weights at any given time. Imagine the power if they activated all the weights. We'd have AGI already
English
126
81
2.6K
183.9K
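The sparse activation Flo is joking about comes from mixture-of-experts (MoE) layers, where a learned router sends each token through only the top-k of many expert networks; the remaining experts' weights sit idle for that token. A minimal sketch of top-k routing (all sizes and names here are illustrative, not from any specific model):

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Route input x through only the top-k experts; the rest stay inactive."""
    logits = x @ router_w                    # one routing score per expert
    top_k = np.argsort(logits)[-k:]          # indices of the k best-scoring experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()                 # softmax over just the chosen k
    # Only k expert matrices are ever multiplied -- the "small %" of weights.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
router_w = rng.standard_normal((d, n_experts))
x = rng.standard_normal(d)
y = moe_forward(x, experts, router_w, k=2)
print(y.shape)  # (8,)
```

In this toy setup, activating all 16 experts instead of 2 would multiply compute by 8x without changing what any individual expert has learned, which is why "just activate everything" doesn't conjure extra capability.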
Ben H
Ben H@benmharrison·
@LifeOfTheDance @mattroberts3103 @pmarca @alexgraveley > “not general” may say more about the current architecture than about the ceiling of machine intelligence. Yeah - I fully agree. I also think you're probably a bot, or writing with Claude.
English
1
0
1
4
Jorja.Powers
Jorja.Powers@LifeOfTheDance·
I think this is mostly right, but with one important twist: “not general” may say more about the current architecture than about the ceiling of machine intelligence. A disembodied LLM is obviously missing too much — embodiment, world dynamics, motor priors, persistent memory, causal grounding. But that doesn’t mean those pieces are unlearnable. It means we’ve been calling a language-shaped subsystem “AGI” long before we assembled the rest of the organism. The real question is not whether today’s LLM is general. It isn’t. The question is what kind of system emerges when language is only one module inside a larger predictive world model.
English
1
0
0
12
Marc Andreessen 🇺🇸
Marc Andreessen 🇺🇸@pmarca·
I'm calling it. AGI is already here – it's just not evenly distributed yet.
English
1.5K
1.1K
12.5K
2M
Ben H
Ben H@benmharrison·
My mind controls my body, and uses a tonne of modules to do so. An LLM (which is the closest thing we have to AGI) does not have a body, nor could it control one, nor does it have a *working* physics model, motor-control model, or any of the other constituent parts required. And they couldn't learn any of this either. The models are intelligent for sure. They are not general.
English
1
0
0
11
Matt Roberts
Matt Roberts@mattroberts3103·
@benmharrison @pmarca @alexgraveley That is the same as me asking you to fold clothes with your mind... it's not possible because your body needs to physically do it. Now a robot can already fold clothes and learn to fold clothes. So? You failed at step one of logic. AGI does not mean it can move atoms...
Bournemouth, England 🇬🇧 English
1
0
1
17
Ben H
Ben H@benmharrison·
@CMFinnigan @pmarca @alexgraveley I would question what “knows” means in the context of an LLM. It knows the tokens associated with what folded clothes are - it doesn’t have any physical or practical “knowing” of how.
English
1
0
1
19
Craig Finnigan
Craig Finnigan@CMFinnigan·
@benmharrison @pmarca @alexgraveley Devil's advocate. Technically, it knows how to fold clothes, it just can't do it. I know how to play baseball, I just suck at it. Does this mean I don't "know" baseball?
English
1
0
0
30
Geoffrey Miller
Geoffrey Miller@gmiller·
@StatisticUrban It's insane to me that most non-Americans can't even comprehend the concept of free speech. Which seems arguably more important.
English
16
33
2K
17.2K
Hunter📈🌈📊
Hunter📈🌈📊@StatisticUrban·
It's insane to me that most Americans can't even comprehend metric units and Celsius without converting. I understand it's not the system used, but it IS the international system, used in all scientific fields and ~every other country. Useful to at least instill the basics.
English
917
8
604
559.3K
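The conversion Hunter says Americans lean on is simple arithmetic either way. A quick sketch of the two temperature formulas:

```python
def f_to_c(f):
    """Fahrenheit to Celsius: subtract 32, then scale by 5/9."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """Celsius to Fahrenheit: scale by 9/5, then add 32."""
    return c * 9 / 5 + 32

print(f_to_c(212))  # 100.0 (water boils)
print(c_to_f(0))    # 32.0 (water freezes)
```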
Mathelirium
Mathelirium@mathelirium·
It is often said that the lift on a wing is generated because the flow moving over the top surface has a longer distance to travel and therefore needs to go faster. This common explanation is actually wrong.
English
252
106
1.4K
673.3K
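The equal-transit-time story fails because nothing forces the upper- and lower-surface parcels to rejoin at the trailing edge (in reality the upper flow arrives first). The standard correct relation ties lift to the circulation around the airfoil via the Kutta–Joukowski theorem, with no transit-time assumption. A sketch with illustrative numbers (not from the thread):

```python
def lift_per_span(rho, v, gamma):
    """Kutta-Joukowski theorem: lift per unit span L' = rho * V * Gamma."""
    return rho * v * gamma

# Illustrative values: sea-level air density, 50 m/s freestream,
# circulation of 20 m^2/s around the airfoil section.
rho = 1.225      # kg/m^3
v = 50.0         # m/s
gamma = 20.0     # m^2/s
print(lift_per_span(rho, v, gamma))  # 1225.0 N per metre of span
```

The circulation itself is fixed by the Kutta condition (smooth flow off the trailing edge), which is where the airfoil's shape and angle of attack actually enter.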
Ben H
Ben H@benmharrison·
@Toshiya_A What is causing the initial arc of Artemis trajectory? Earth’s gravity?
English
0
0
1
282
Toshiya
Toshiya@Toshiya_A·
A video for anyone who wants to get a feel for how much Artemis II's trajectory is affected by the Moon's gravity. Green is Artemis II; the orange dotted line is the trajectory if there were no lunar gravity. They are nearly identical until partway out. Near the Moon, the direction of travel gets sharply bent and the spacecraft swings back toward Earth as if making a U-turn. #ArtemisII
Japanese
32
596
2.3K
199.3K
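Ben's question and Toshiya's animation come down to which body's gravity dominates where. A back-of-envelope comparison of the two accelerations a = GM/r² (distances are illustrative round numbers, not from the video):

```python
# Why the green and orange trajectories only diverge near the Moon:
# compare the gravitational accelerations acting on the spacecraft.
GM_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
GM_MOON  = 4.905e12   # Moon's gravitational parameter, m^3/s^2

def accel(gm, r):
    """Gravitational acceleration a = GM / r^2 at distance r (metres)."""
    return gm / r**2

# Case 1: halfway out (~190,000 km from each body) -- Earth dominates ~80x.
# Case 2: near the Moon (10,000 km away, ~374,000 km from Earth) --
#         the Moon's pull is now ~17x stronger, bending the path sharply.
for r_earth, r_moon in [(1.9e8, 1.9e8), (3.74e8, 1.0e7)]:
    a_e = accel(GM_EARTH, r_earth)
    a_m = accel(GM_MOON, r_moon)
    print(f"Earth: {a_e:.2e} m/s^2, Moon: {a_m:.2e} m/s^2")
```

So yes: Earth's gravity shapes the initial arc almost entirely, and lunar gravity only takes over in the Moon's immediate neighbourhood, which is exactly where the two trajectories split.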
Ben H reposted
Geoffrey Miller
Geoffrey Miller@gmiller·
Nope. The AI CEOs who have warned about AI extinction risk (Dario, Sam, Demis, Elon) generally did so most vocally several years ago, before the current AI bubble. More recently, they've all switched to 'Come on everyone, join in!', and they've tried to downplay, ignore, or backpedal their earlier extinction risk warnings. They rarely raise the X risk issue first, and then immediately pivot to AI benefits. Now they mostly just hype the benefits, and downplay the risks -- esp when facing the threat of regulation.
English
2
10
138
6K
Ben H
Ben H@benmharrison·
@gmiller @tegmark @pmarca Marc isn’t dumb - he surely can’t be - but he has the default fallback position of “you can just turn it off”. Which is dumb.
English
0
0
1
20
Geoffrey Miller
Geoffrey Miller@gmiller·
@tegmark @pmarca Well Marc @pmarca has proven that he is willing to lie, over and over and over, to smear anyone who expresses any serious concerns about AI safety. He does not argue in good faith. No ethical or epistemic integrity.
English
2
0
19
804
Max Tegmark
Max Tegmark@tegmark·
Marc, @pmarca, you're welcome to disagree with my views, but not to blatantly lie about them. There are some digital products (say CSAM and AI letting terrorists make bioweapons) that I oppose regardless of whether they are open-source or not, but I'm overall supportive of open source, and you can easily verify that my MIT research group defaults to open-sourcing our AI tools. Happy to chat more over zoom or email now that you've unblocked me on X.
Marc Andreessen 🇺🇸@pmarca

I've been in the room with Vitalik-backed Max Tegmark and United States Senators when Tegmark pounds the table demanding open source AI software be made illegal. schumer.senate.gov/newsroom/press…

English
23
12
416
62.1K
Ben H
Ben H@benmharrison·
@gmiller I think this is possibly valid 👇 x.com/bartspits/stat…
Bart Spits@BartSpits

@gmiller True. But what has happened when we became more knowledgeable and capable, as we developed? We started realizing, thinking about preserving species. We started to care. Not all of us, but more and more of us.

English
0
0
2
41
Lucid™
Lucid™@cammakingminds·
If you are using the earth as a reference frame, the moon is actually doing a close flyby of Artemis II.
English
196
1.8K
16.1K
3.3M
Roko 🐉
Roko 🐉@RokoMijic·
MIRI/Yudkowsky's claim that augmented/uploaded humans are somehow inherently safer than software AI is quite possibly the stupidest thing a very smart group of people have ever believed. It's stupider than communism. It's like the ultimate 21st Century High Modernist take
John David Pressman@jd_pressman

This makes me question the "evolved modularity" thesis that says making 300 IQ human brain would be safer than making an artificial superintelligence. It's not clear to me at all that a human brain is constrained by an innate moral structure. x.com/steve47285/sta…

English
37
4
122
10.5K
Ben H
Ben H@benmharrison·
@RokoMijic Where? Not in the whitepaper for sure…
English
1
0
0
75
Roko 🐉
Roko 🐉@RokoMijic·
@benmharrison he did actually! He said change the cryptography. Modern bitcoiners are just retards lol
English
1
0
12
220
Roko 🐉
Roko 🐉@RokoMijic·
"The Google paper uses a zero-knowledge (ZK) proof to demonstrate the algorithm's existence without leaking actual optimisations" 😬
Justin Drake@drakefjustin

Today is a momentous day for quantum computing and cryptography. Two breakthrough papers just landed (links in next tweet). Both papers improve Shor's algorithm, infamous for cracking RSA and elliptic curve cryptography. The two results compound, optimising separate layers of the quantum stack. The results are shocking. I expect a narrative shift and a further R&D boost toward post-quantum cryptography.

The first paper is by Google Quantum AI. They tackle the (logical) Shor algorithm, tailoring it to crack Bitcoin and Ethereum signatures. The algorithm runs on ~1K logical qubits for the 256-bit elliptic curve secp256k1. Due to the low circuit depth, a fast superconducting computer would recover private keys in minutes. I'm grateful to have joined as a late paper co-author, in large part for the chance to interact with experts and the alpha gleaned from internal discussions.

The second paper is by a stealthy startup called Oratomic, with ex-Google and prominent Caltech faculty. Their starting point is Google's improvements to the logical quantum circuit. They then apply improvements at the physical layer, with tricks specific to neutral-atom quantum computers. The result estimates that 26,000 atomic qubits are sufficient to break 256-bit elliptic curve signatures. This would be roughly a 40x improvement in physical qubit count over previous state-of-the-art. On the flip side, a single Shor run would take ~10 days due to the relatively slow speed of neutral atoms.

Below are my key takeaways. As a disclaimer, I am not a quantum expert. Time is needed for the results to be properly vetted. Based on my interactions with the team, I have faith the Google Quantum AI results are conservative. The Oratomic paper is much harder for me to assess, especially because of the use of more exotic qLDPC codes. I will take it with a grain of salt until the dust settles.

→ q-day: My confidence in q-day by 2032 has shot up significantly. IMO there's at least a 10% chance that by 2032 a quantum computer recovers a secp256k1 ECDSA private key from an exposed public key. While a cryptographically-relevant quantum computer (CRQC) before 2030 still feels unlikely, now is undoubtedly the time to start preparing.

→ censorship: The Google paper uses a zero-knowledge (ZK) proof to demonstrate the algorithm's existence without leaking actual optimisations. From now on, assume state-of-the-art algorithms will be censored. There may be self-censorship for moral or commercial reasons, or because of government pressure. A blackout in academic publications would be a tell-tale sign.

→ cracking time: A superconducting quantum computer, the type Google is building, could crack keys in minutes. This is because the optimised quantum circuit is just 100M Toffoli gates, which is surprisingly shallow. (Toffoli gates are hard because they require production of so-called "magic states".) Toffoli gates would consume ~10 microseconds on a superconducting platform, totalling ~1,000 sec of Shor runtime.

→ latency optimisations: Two latency optimisations bring key cracking time to single-digit minutes. The first parallelises computation across quantum devices. The second involves feeding the pubkey to the quantum computer mid-flight, after a generic setup phase.

→ fast- and slow-clock: At first approximation there are two families of quantum computers. The fast-clock flavour, which includes superconducting and photonic architectures, runs at roughly 100 kHz. The slow-clock flavour, which includes trapped-ion and neutral-atom architectures, runs roughly 1,000x slower (~100 Hz, or ~1 week to crack a single key).

→ qubit count: The size-optimised variant of the algorithm runs on 1,200 logical qubits. On a superconducting computer with surface-code error correction that's roughly 500K physical qubits, a 400:1 physical-to-logical ratio. The surface code is conservative, assuming only four-way nearest-neighbour grid connectivity. It was demonstrated last year by Google on a real quantum computer.

→ future gains: Low-hanging fruit is still being picked, with at least one of the Google optimisations resulting from a surprisingly simple observation. Interestingly, AI was not (yet!) tasked to find optimisations. This was also the first time authors such as Craig Gidney attacked elliptic curves (as opposed to RSA). Shor logical qubit count could plausibly go under 1K soonish.

→ error correction: The physical-to-logical ratio for superconducting computers could go under 100:1. For superconducting computers that would mean ~100K physical qubits for a CRQC, two orders of magnitude away from state of the art. Neutral-atom quantum computers are amenable to error-correcting codes other than the surface code. While much slower to run, they can bring down the physical-to-logical qubit ratio closer to 10:1.

→ Bitcoin PoW: Commercially-viable Bitcoin PoW via Grover's algorithm is not happening any time soon. We're talking decades, possibly centuries away. This observation should help focus the discussion on ECDSA and Schnorr. (Side note: as unofficial Bitcoin security researcher, I still believe Bitcoin PoW is cooked due to the dwindling security budget.)

→ team quality: The folks at Google Quantum AI are the real deal. Craig Gidney (@CraigGidney) is arguably the world's top quantum circuit optimisooor. Just last year he squeezed 10x out of Shor for RSA, bringing the physical qubit count down from 10M to 1M. Special thanks to the Google team for patiently answering all my newb questions with detailed, fact-based answers. I was expecting some hype, but found none.

English
6
1
125
8K
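The headline figures in the quoted thread are internally consistent and easy to re-derive. A back-of-envelope check using only the numbers the thread itself gives:

```python
# Sanity-check the runtime and qubit figures from the quoted thread.
toffoli_gates = 100e6        # optimised circuit: ~100M Toffoli gates
gate_time_fast = 10e-6       # ~10 microseconds per Toffoli (superconducting)

runtime_fast = toffoli_gates * gate_time_fast
print(runtime_fast)          # ~1000 seconds -- the "~1,000 sec" quoted

# Slow-clock platforms (trapped ion / neutral atom) run ~1,000x slower:
runtime_slow_days = runtime_fast * 1000 / 86400
print(round(runtime_slow_days, 1))   # ~11.6 days, i.e. the "~1 week" order

# Physical qubits = logical qubits x error-correction overhead:
logical = 1200
ratio = 400                  # surface-code physical-to-logical ratio
print(logical * ratio)       # 480000 -- "roughly 500K physical qubits"
```

The same arithmetic shows why the projected improvements matter: a 100:1 correction ratio would drop the physical count to 120K, in line with the thread's "~100K physical qubits for a CRQC".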
Ben H
Ben H@benmharrison·
@jicapal @beffjezos These aren’t the folks chasing immortality via sand machine
English
0
0
1
13
jica
jica@jicapal·
@beffjezos i keep forgetting these dorks have a bigger fear of death than fear of not learning to be actual humans discovering a new technology
English
1
0
1
71
jica
jica@jicapal·
what
Aella@Aella_Girl

Just saw the AI doc and came away pissed at the optimists. I sort of expected them to have any argument that actually addressed the x-risk side, but they were basically like 'historically tech is good, people have been worried before but it was fine!' They didn't address at ALL the extremely entry-level concerns of like 'building something smarter than us is a categorically new type of threat'. They just repeated that tech would help humanity. It's especially infuriating cause the most lifelong techno optimists I know ARE the doomers. The x-risk community are the ones who grew up on epic sci-fi fiction and have thought long and hard about what the singularity might bring. One of my friends (who was in the doc) once spent all night carrying ice into a hospital room to preserve the corpse of his friend in a desperate attempt to get him into a cryonics lab. It's real for them! But "AI has promise" is not even close to an adequate response to the extinction threat on the table. Even the AI CEOs in the movie - the ones that are *actually* doing the most acceleration - seemed to at least understand the gravity of the arguments they were engaging with. The optimists in the doc seemed to have domain expertise in their technical fields, but were amateurs. They both are insufficiently visionary and also fail to engage with the actual risk in a practical way. I think they pattern match the "ai might kill us" people onto general woke anti-tech movement, and shout against them from a place of ego. That's the only good explanation I can think of for why they must be beating an activist drum that's so damn empty.

English
1
0
5
1.6K
Ben H
Ben H@benmharrison·
@jasager029 @George_Da_Gorge @losslandscape @Aella_Girl It’s impossible for a body without digital configuration to adequately replace a body with digital configuration. The former will only ever be capable of a limited kind of intelligence compared to a machine due to the hardware differences.
English
1
0
0
14
Jasager 🇺🇸🎮⚔️
Jasager 🇺🇸🎮⚔️@jasager029·
@George_Da_Gorge @losslandscape @Aella_Girl No to the first question. It is impossible for a body without neurochemistry to adequately replace a body with neurochemistry. The former will only ever be capable of a limited kind of intelligence compared to us due to the hardware differences.
English
2
0
0
51
Aella
Aella@Aella_Girl·
Just saw the AI doc and came away pissed at the optimists. I sort of expected them to have any argument that actually addressed the x-risk side, but they were basically like 'historically tech is good, people have been worried before but it was fine!' They didn't address at ALL the extremely entry-level concerns of like 'building something smarter than us is a categorically new type of threat'. They just repeated that tech would help humanity. It's especially infuriating cause the most lifelong techno optimists I know ARE the doomers. The x-risk community are the ones who grew up on epic sci-fi fiction and have thought long and hard about what the singularity might bring. One of my friends (who was in the doc) once spent all night carrying ice into a hospital room to preserve the corpse of his friend in a desperate attempt to get him into a cryonics lab. It's real for them! But "AI has promise" is not even close to an adequate response to the extinction threat on the table. Even the AI CEOs in the movie - the ones that are *actually* doing the most acceleration - seemed to at least understand the gravity of the arguments they were engaging with. The optimists in the doc seemed to have domain expertise in their technical fields, but were amateurs. They both are insufficiently visionary and also fail to engage with the actual risk in a practical way. I think they pattern match the "ai might kill us" people onto general woke anti-tech movement, and shout against them from a place of ego. That's the only good explanation I can think of for why they must be beating an activist drum that's so damn empty.
English
93
53
770
116.1K
alex
alex@kuperov·
@CharlesD353 people need to read the first chapter of max tegmark's book
English
3
0
3
747
Charles🔸
Charles🔸@CharlesD353·
I'm begging all the people who say this to please do a back of the envelope calculation that shows what they mean and how much money they think could be made rather than hand waving "financial markets"
Ethan Mollick@emollick

The easiest way to make money fast from a superhuman artificial intelligence would be in the financial markets, almost by definition. So the first lab to develop one, if AGI is possible, would almost certainly keep it quiet for as long as they could. Beats charging for API access

English
15
4
180
48.2K