Ben H
@benmharrison
2.1K posts

In the long run, everything tends towards the cost of energy. All opinions are not my own. DM anything ✌🏻

Joined December 2017
531 Following · 185 Followers
Marc Andreessen 🇺🇸
“This raises an obvious question: how much of Anthropic’s reluctance to make Mythos widely available is due to security concerns, as opposed to the more prosaic reality that Anthropic simply doesn’t have enough compute?” @stratechery @benthompson
Ben H@benmharrison·
@trekedge Only some parts of Superintelligence?
Daniel Steigman@trekedge·
The best part of working at OpenAI is that our mission is literal. We want everyone to have access to superintelligence. No hiding our best model for only powerful companies. You get the power.
Ben H@benmharrison·
@DaveShapi Do you think this is good or bad news?
David Shapiro (L/0)@DaveShapi·
I've been running some numbers and forecasting where AI is going.

First, it seems like training frontier models will become prohibitively expensive by 2030 to 2032. At that point it will require a consortium of nations and companies to fund them, or we'll just have to wait for Moore's Law to catch up and drive progress.

Second, AI will almost certainly saturate almost all meaningful capabilities before then. There's a concept called "requisite variety" whereby any controller must have at least the same amount of complexity as the system it is trying to control. This is what I have previously called an intelligence optimum. In other words, once you have an AI that is more sophisticated than reality, you start optimizing for efficiency.

We won't hit full requisite variety for the most advanced math before costs dramatically slow down AI progress. But it looks like we'll plateau at a level far beyond human capability.

Tl;dr: based on current costs and trajectories, we're almost certainly getting not just AGI or ASI but nearly artificial "godlike" intelligence before we run out of headroom.
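The "requisite variety" idea referenced here is Ashby's law from cybernetics: a regulator can only hold an outcome steady if it has at least as many distinct responses as there are disturbances to counter. A toy sketch of that claim, where the mod-n outcome model is purely illustrative and not from the tweet:

```python
def residual_outcomes(n_disturbances: int, n_controls: int, n: int = 10) -> int:
    """Toy regulator illustrating Ashby's law of requisite variety.

    The environment throws disturbances 0..n_disturbances-1; the regulator
    replies with a control 0..n_controls-1; the outcome is (d + c) % n and
    the goal is to hold it at 0. Returns how many distinct outcomes remain
    when the regulator always picks its best available control.
    """
    outcomes = set()
    for d in range(n_disturbances):
        # the best the regulator can do against this disturbance
        outcomes.add(min((d + c) % n for c in range(n_controls)))
    return len(outcomes)

# A regulator whose variety matches the disturbances pins the outcome to
# one state; a poorer one cannot, however cleverly it chooses.
print(residual_outcomes(10, 10))  # 1
print(residual_outcomes(10, 3))   # 8
```

Shapiro's point maps onto this loosely: once the controller's variety matches the system's, extra capability buys nothing, and the incentive shifts to efficiency.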
Ben H@benmharrison·
@HusKerrs It'll be like nothing you've experienced before, and yet you'll feel totally capable nonetheless.
HusKerrs@HusKerrs·
Calling on all dads: Ali and I are about 3 weeks out from the birth of our first baby boy! Give me your #1 piece of advice for a new dad.
Ben H@benmharrison·
Accurate
Nina Schick@NinaDSchick

Catastrophe for UK competitiveness and AI ambitions. Britain now has the highest industrial electricity prices in the developed world. At 25p per kilowatt-hour, its power costs stand at double the EU average and quadruple those of the US (6p) and China (7p).

But this isn't just about the death of old industry. Just as cheap electricity determined the industrial powers of the past, it will now determine the AI superpowers of the future. The real competition is not about who builds the best AI models, but who can afford to run them. Sovereignty in this century isn't found in "green ledgers" or offshore wind farms; it is found in the physical ability to process Intelligence at an industrial scale.

Britain's current path is a dead end. There are 140 data centers in the UK's grid connection queue, representing 50 GW of demand, more than the entire country's current peak usage (45 GW). For many, the quoted connection date is 2040.

As Intelligence proliferates, productivity will no longer be measured in man-hours but in Tokens-per-Watt: how many units of 'Intelligence' a kilowatt-hour of electricity can buy. With its 25p rate, it is already four times as expensive to buy Intelligence in Britain as in China or the US.

This is a direct hit to the UK services sector, which accounts for 82% of the economy. As AI automates knowledge work, British firms must 'rent' intelligence from foreign clouds at predatory rates just to stay competitive. Even if Britain builds domestic AI infrastructure, the 25p barrier means it would be structurally uncompetitive from day one. This leaves only the path of outsourcing national productivity to foreign clouds, a permanent transfer of British wealth.

True sovereignty requires a radical shift to dedicated, low-cost power for compute. Without cheap energy, Britain won't just lose its factories; it may lose its offices, too.
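The tokens-per-watt arithmetic in the quoted thread is easy to sketch. The prices below are the figures quoted in the tweet (the EU average is inferred from "double the EU average"), and `tokens_per_kwh` is a hypothetical placeholder, since real inference efficiency varies by model and hardware:

```python
# Industrial electricity prices quoted in the tweet, in pence per kWh.
PRICE_P_PER_KWH = {"UK": 25.0, "EU_average": 12.5, "US": 6.0, "China": 7.0}

def price_ratio(country: str, baseline: str = "US") -> float:
    """How many times more a kWh of industrial power costs than in `baseline`."""
    return PRICE_P_PER_KWH[country] / PRICE_P_PER_KWH[baseline]

def tokens_per_pound(country: str, tokens_per_kwh: float = 50_000) -> float:
    """Tokens of inference one pound of electricity buys at the quoted rate.

    `tokens_per_kwh` is a made-up efficiency figure, not from the tweet.
    """
    kwh_per_pound = 100.0 / PRICE_P_PER_KWH[country]  # 100 pence per pound
    return kwh_per_pound * tokens_per_kwh

print(price_ratio("UK"))        # ~4.17: "quadruple those of the US"
print(tokens_per_pound("UK"))   # 200000.0
```

Whatever the true tokens-per-kWh figure turns out to be, the ratio between countries is set entirely by the power price, which is the tweet's core claim.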

DragonStorm9@DraganStiglic·
Dear Beatriz, please accept my congratulations and full respect for your amazing initiatives and discoveries! A few thoughts I wanted to share: to understand this amazing phenomenon comprehensively, I wonder if perhaps we might need to take into consideration a similar (if not the same) type of phenomenon: the mysterious "flashes" appearing and disappearing above the surface of the Moon. We might discover that they ALSO show a correlation to events happening on Earth! And while this might sound crazy and/or counter-intuitive, perhaps it is not!
Beatriz Villarroel@DrBeaVillarroel·
The mystery deepens. An independent researcher has uncovered an unexpected anticorrelation between VASCO transient detections and geomagnetic storm activity. This finding seriously challenges explanations based on cosmic rays or plate defects, even without considering the deficit of transients in Earth’s shadow. Read the preprint: arxiv.org/pdf/2604.04950 I’m grateful to independent researchers who have the courage and integrity to examine this topic seriously and in good faith.
banteg@banteg·
it all makes sense now. dario was still at openai in 2019. he left the next year and took his marketing playbook with him. hasn't changed a thing since.
Ben H@benmharrison·
@mahaoo_ASI @ArthurB I don’t follow. You can teach a human a new skill in one or two iterations. It takes hundreds (minimum) to teach a model something new…
Mahaoo@mahaoo_ASI·
@ArthurB There is a serious argument to be made that today's machine-learning techniques are more efficient training algorithms, in terms of packing knowledge into parameters. So it could be that the brain has a superior architecture but inferior training algorithms, and it about evens out.
Arthur B.@ArthurB·
The number of synapses in the human brain is often quoted as 100T. It's going to be particularly embarrassing if what it takes happens to be 100T parameters. The super naive, unsophisticated estimate has no right to be this good!
Ben H@benmharrison·
@RokoMijic I think it applies to any kind of steep growth, including linear. Sadly.
Peter Hague@peterrhague·
@libbyemmons It's more than a dream: the Earth captures only one part in two billion of the Sun's energy. Freed from its constraints, civilisation can grow to a mind-bogglingly immense scale, and on that scale whoever is left behind on Earth doesn't matter much. So whoever goes writes the future.
Libby Emmons@libbyemmons·
Women like Zoe Williams have no imagination. We go to space because we humans are a race of explorers. Because we can. Because it is there. And because we dream. Let us dream, Zoe, for God's sake just let us dream.
Melissa Chen@MsMelChen

Women will say shit like this and then wonder why the entire planet and every major religion has imposed strict social restrictions on their sovereignty since the dawn of time in every place humans have ever lived

Ben H@benmharrison·
@RokoMijic @pmarca Moments like this keep me half-believing the universe might be biased for good. I struggle to believe that another lab would have behaved this way
Roko 🐉@RokoMijic·
@pmarca Do you think computer security will finally become "defense dominant" with AI + theorem provers? It's looking that way to me
Marc Andreessen 🇺🇸
Every security flaw discovered by AI was there before AI, waiting to be discovered either by people or by AI. The world has never been good at securing computer systems; finally with AI we are going to get good.
Matt Roberts@mattroberts3103·
Yes, just like your PC at home can't run a frontier model but can connect to it, use it, and build with it... Or a Tesla, whose Vision model is a neural net with loads of edge components (not an LLM either; it's a VLA). Everything that has intelligence metaphorically has a "brain" that uses electrical signals to communicate with its physical counterpart to perform actions in the physical world.

Teslas are robots and use vision, not LLMs, so LLMs aren't the only way... Most robots will use VLAs, not LLMs; you might have missed this memo. VLAs and LLMs have fundamental differences.

Again, I will reiterate: you said it can't fold clothes; I gave you an example of humanoid robots that literally do it, and those products DO have embedded AI already (e.g., Figure's Helix, Physical Intelligence's π0, or fine-tuned VLAs). My point was ONLY this: you said it can't, I gave an example that it can. You say I'm wrong even though it has already been done. I never once questioned AGI, what AGI is, whether we're there yet, etc. I countered your point on folding clothes only.
Marc Andreessen 🇺🇸
I'm calling it. AGI is already here – it's just not evenly distributed yet.
Ben H@benmharrison·
@RokoMijic What if those well-resourced, centralised orgs are bad?
Ben H@benmharrison·
wait, did The Project just begin?
Dan@DanSimerman·
@tenobrus Have you considered that the entire purpose of DARPA is to be at least two decades ahead of civilian technology? You really think the US Government can't defend itself against an LLM?
Tenobrus@tenobrus·
maybe this is not yet clear, so let me state it plainly: as of right now Anthropic, and really a small number of individuals at Anthropic, has the capacity to directly attack and cause major damage to the United States Government, China, and generally global superpowers.

government agencies like the NSA do not have internal models or defense capabilities that outclass frontier models. if they chose to do so, they could likely exfiltrate top secret information from government systems, gain control over critical infrastructure including military infrastructure, sabotage or modify communications between members of government at the highest level, and potentially carry on activities for some time without detection. the thing about having access to a huge number of zerodays your adversaries don't know about is it gives you a massive asymmetric advantage.

they did not exploit this to gain power or destabilize the world order. they publicly released the information that they had these capabilities and worked to mitigate these flaws. you should be grateful american frontier labs have proven themselves remarkably trustworthy and concerned with the public good.

but it's critical you understand we are in a new regime. private entities now have power that directly rivals and impacts the government's monopoly on influence and violence. and anthropic is certainly not the only one, there's little chance OpenAI's internal models are far behind. this trend will accelerate on virtually every dimension, not slow down.

my prediction for how it plays out is the relatively imminent seizure and nationalization of labs by the US government, sometime over the next two years. it's very tough for me to see how they accept the existence of this kind of threat. but this adds a whole new class of governance issues, as then we've handed these extremely wide-reaching capabilities from private entities to public ones.
Eliezer Yudkowsky@allTheYud·
@terracotta_hawk New equilibrium: Every frontier AI company is embedded in every Internet-facing company. Any time a new more powerful AI comes out, it gets a chance to rewrite all the Internet-facing code it wants before arriving. It'd be better to have a human rewrite it, but that's slower.
Eliezer Yudkowsky@allTheYud·
In conclusion: This is perhaps a good time to try making an extra backup of all your online data (eg, via Google Takeout) onto an airgapped offline hard drive, just in case Project Glasswing fails to prevent the First Great AI Security Meltdown.
Matt Roberts@mattroberts3103·
I work in AI as a software engineer and researcher; I kind of do know what I am talking about. We embed software into hardware all the time and use the cloud to bridge the gap. It's not hard. We have humanoid robots that can fold clothes already... The original post was about AGI; your claim was that AGI cannot fold clothes. I rebutted, saying your mind can't either, because the software relies on the hardware to perform the task. This is very basic stuff, dude.