Pablos

1.2K posts

@pablos

Implementing Science Fiction @ Deep Future. VC – Bestseller – Podcast. —dangerously-skip-apathy

Earth · Joined March 2007
407 Following · 8.6K Followers
Pablos
Pablos@pablos·
@andrewfarah Any way to get the UI to run in English instead of Chinese?
1
0
0
106
Andrew Farah
Andrew Farah@andrewfarah·
sharing my first open source project: a CLI for downloading and syncing your X bookmarks locally so your agent can access them. it's free
› npm install -g fieldtheory
› login to your X account in a chrome tab
› ft sync (done!)
bonus:
› ft viz
› ft classify
282
270
4.3K
490.1K
Pablos
Pablos@pablos·
@NChoudhary @scmallaby The Power Law is excellent. I am trying to listen to the audiobook about Demis but the sycophancy is as bad as if Kanye wrote an autobiography.
0
0
0
58
Nikhil Choudhary
Nikhil Choudhary@NChoudhary·
I am looking forward to this book, @scmallaby. All three books I have read—The Power Law, More Money Than God, and The Man Who Knew—were outstanding. Each is an epic 101 course on the inner workings of the industry it represents.
Colossus@colossusmag

We're publishing an exclusive chapter from @scmallaby's brilliant new book about Demis Hassabis and DeepMind. This is the inside story of Project Mario: how DeepMind's co-founders spent 4 years trying every mechanism they could think of to put guardrails around AGI, only to watch each one fail, and conclude that the only safeguard was themselves.

It reveals that Hassabis ran a secret hedge fund team inside DeepMind trying to beat Renaissance Technologies; Mustafa Suleyman assembled lawyers for a $5 billion walkaway plan; Reid Hoffman committed $1 billion of his personal fortune to back them; Google kept saying yes and no at the same time; and the endless negotiations left Hassabis so distracted that when the transformer paper dropped in 2017, he was less alert to its significance than he might have been.

Meanwhile, OpenAI was fighting the mirror-image battle, with Musk, Altman, and Sutskever tearing each other apart over the same question: who gets to control AGI? Musk proposed folding OpenAI into Tesla. When that failed, he stormed out. When OpenAI's nonprofit board finally tried to assert authority in 2023, it was crushed in days.

Both camps arrived at the same unsettling conclusion: governance structures don't hold. The best safeguard either side could come up with? Trust us. Read the chapter in the link below.

1
2
17
1.9K
andrew chen
andrew chen@andrewchen·
learning from openclaw: 95%+ of agentic coding will be done via voice, from our phones in the future 😂
114
15
281
25.9K
Pablos
Pablos@pablos·
@hunkybill @AnthropicAI Let’s take this to Polymarket. What’s a good way to quantify the relative success of attackers and defenders going forward?
1
0
0
9
Pablos
Pablos@pablos·
@hunkybill @AnthropicAI I don’t have any authority. You can all do your own reasoning. This is mine. So far.
2
0
1
25
Pablos
Pablos@pablos·
#Mythos is scaring the shit out of @AnthropicAI because it creates "unprecedented cybersecurity risks." Bullshit. The actual precedent is imaginary problems and no equivalent imagination for the solutions.

For the entire history of #cybersecurity, the attackers had the advantage. They had unlimited time to find every bug, every exploit, every way to break your shit. Defenders? They're busy building products, shipping features, fixing the bugs customers actually complain about. They don't have time to think of all the deranged shit hackers are going to do to their code.

Now everybody is losing their minds over AI-powered attacks. What they're missing is that defenders have the same AIs. Often better ones and way more compute. Even better, defenders have something attackers never will: they're on the inside. They have their source code. They have the byte code the machine is running. They have a God's eye view of every single bit. They can aim the same models, with more resources, at defense.

This is still a war of escalation, but now the defender has the advantage. Security is about to get better. Not worse.
4
1
8
523
Pablos
Pablos@pablos·
@ivysage_ @AnthropicAI Same logic that stalled OpenAI from releasing GPT-2 in 2019; Google from releasing LaMDA in 2021; Anthropic holding back Claude in 2022. We've blown way past all these things and the sky still ain't on the tarmac.
1
0
1
48
Ivy Sage🇺🇸💯
Ivy Sage🇺🇸💯@ivysage_·
@pablos @AnthropicAI Anthropic saying their own model is dangerous is the most credible cybersecurity warning you can get. they built it. they're not guessing.
1
0
0
32
Pablos reposted
Gaurab Chakrabarti
Gaurab Chakrabarti@Gaurab·
200 helium containers are stranded in the Persian Gulf right now. Each one holds 41,000 liters cooled to -269°C. The containers have no refrigeration. No compressor, no cooling loop. Insulation is all that stands between the cargo and ambient heat, and it buys 35 to 48 days. After that, the liquid boils, the pressure valve opens, and the helium vents to atmosphere. Re-liquefying it requires a specialized plant. Most ports do not have one. Qatar's North Field supplied 33% of the world's helium as a byproduct of cryogenic separation at its LNG plants. On March 2, Iran closed the Strait of Hormuz. Spot prices surged 70 to 100 percent. EUV lithography requires 99.9999% purity helium for wafer cooling and no current substitute exists. The fifth helium shortage since 2006 has just begun.
328
3.4K
14.9K
2M
Pablos
Pablos@pablos·
I wanted to like Perplexity Computer but even on the Max plan, I burned through my token budget in a few days. Claude has obliterated all other agent systems that I’ve tried.
1
0
8
1.2K
Pablos
Pablos@pablos·
We're going to need something like Obsidian for teams.
0
0
3
410
Pablos
Pablos@pablos·
@hunkybill I would love to find a solution to that problem too.
1
0
0
24
Dave Lazar
Dave Lazar@hunkybill·
@pablos You'll be even more surprised when you learn that before you even get to play with silicon, you have to mine some rock from oh, one mine in the world, that provides the one type of rock that then allows you to even turn silica into anything. ONE kind of rock, with the purity.
1
0
0
31
Pablos
Pablos@pablos·
Hyperbole fails me when trying to describe the importance of computer chips to the world today. Everything made possible by computers relies on chips. Chips rely on transistors. Transistors rely on silicon. Silicon relies on lithography.

Lithography is the process of putting an image onto the surface of the silicon. Pretty much like the way a silkscreen puts "Team Building Exercise 1999" on a T-shirt. Except that this image has to be the highest resolution, with the smallest microscopic features, of anything humans produce.

"Moore's Law" usually refers to increasing transistor density. Basically, how can we make transistors half as big as they were 18 months ago? Every time we figure that out, computers get twice as powerful.

The state of the art uses Extreme Ultraviolet (EUV) light to do the lithography. The machine that can do this cost $50 billion to develop. It has 500,000 parts. Only the Large Hadron Collider is more complicated. To buy one costs $250 million and you'll be stuck on a waiting list that is $40 billion long. The machine comes from ASML in the Netherlands and they don't have a single competitor in the entire world.

That machine shoots a tiny ball of molten tin into a vacuum and blasts it with two lasers. This produces a flash of 13.5 nanometer ultraviolet light that gets aimed at the surface of a silicon wafer. You are looking at the pinnacle of human engineering achievement. Now you know how the chip for your iPhone is made.

ASML advanced from 193nm to 13.5nm light to make this possible, but there's a problem. The diffraction limit of 13.5 nanometer light was set by either God or Isaac Newton and there's nothing we can do about it. We can't print features smaller than that and there's no practical way to do lithography with a shorter wavelength. When people say that Moore's Law is over, this is why. We can't keep making smaller transistors.

The semiconductor industry knows this, so they've tried to solve the problem by handing it over to the marketing department, where the laws of physics don't apply. You've seen chips progress from 45nm to 30nm to 20nm over the last decade, then all of a sudden, 12nm, 7nm, 5nm and soon 3nm chips are coming. Well, guess what: it's all just marketing bullshit. This measurement used to be half the distance between the centers of two features. Once marketing took over, they started measuring half the distance between the edges of two features. Instant improvement! Then they started measuring other random stuff. Other kinds of improvements in chip design helped to gloss over the fact that we are no longer able to shrink the size of transistors by 50% every 18 months.

Today, there are extraordinary geopolitical machinations to control chip production. The U.S. has tariffs and export controls akin to those for fighter jets and ICBMs (both are largely made of chips anyway). Access to chip production is as critical to superpowers as oil.

Lace Lithography has been in stealth since we invested in them a few years ago. They've invented technology that can go well beyond Extreme UV and put Moore's Law back on track. By using helium atoms instead of light, they can make transistors 10x smaller than the physical limit of ultraviolet light allows. ASML is worth ~$500 billion. Lace Lithography will be their successor. Today, they came out of stealth. reuters.com/world/asia-pac…
1
2
9
493
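For readers who want the numbers behind the diffraction limit: a minimal sketch of the Rayleigh resolution criterion, CD = k1 · λ / NA. The k1 and NA values below are illustrative textbook-style assumptions, not ASML specifications.

```python
# Rayleigh criterion for the smallest printable feature (critical dimension):
#   CD = k1 * wavelength / NA
# k1 (process factor) and NA (numerical aperture) are assumed values here.

def critical_dimension_nm(wavelength_nm: float, numerical_aperture: float,
                          k1: float = 0.3) -> float:
    """Smallest printable half-pitch in nanometers."""
    return k1 * wavelength_nm / numerical_aperture

# 193 nm DUV immersion (NA ~1.35) vs 13.5 nm EUV (NA ~0.33)
duv = critical_dimension_nm(193.0, 1.35)
euv = critical_dimension_nm(13.5, 0.33)
print(f"DUV limit ~{duv:.1f} nm, EUV limit ~{euv:.1f} nm")
```

Under these assumptions the jump from 193nm to 13.5nm light moves the printable feature floor from roughly 43nm down to roughly 12nm, which is why shrinking the wavelength, not the marketing label, is what actually moved Moore's Law.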
Pablos
Pablos@pablos·
#deeptech O.G. @daniellefong. Of course we invested.
ashe@ashebytes

On frontier science: stepping into the golden age of energy tech. In conversation with Danielle Fong @daniellefong
00:45 Danielle's background, starting with college at 12
04:16 The global energy crisis
11:15 Productizing: from drones to data centers
14:00 Powering your future openclaw with propane
14:34 US gov vs consumer applications
19:30 AI, data centers, and new energy sources
23:25 Agentic tools at Lightcell
27:54 Leveraging models across providers
29:54 Iterating on frontier science
43:22 Hyperscalers and the golden age of energy tech

2
4
9
1.1K
Pablos
Pablos@pablos·
If you weren't around for the buffer overflow era, maybe you didn't get the memo that you might want to keep code and data separate or things won't end well.
1
0
2
307
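The textbook illustration of that memo, sketched in Python with SQL standing in for "code" and user input for "data". The table and inputs are hypothetical; the point is that the same string fails or succeeds depending on which channel it travels through.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # attacker-controlled "data"

# Mixing code and data: the input is spliced into the SQL program itself,
# so the quote character breaks out of the data channel and rewrites the query.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# Keeping them separate: the input travels as a bound parameter, never as code.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe)  # every row leaks: the WHERE clause became "... OR '1'='1'"
print(safe)    # no rows: no user is literally named "alice' OR '1'='1"
```

Buffer overflows were the same failure one level down: attacker bytes placed in a data buffer ended up interpreted as executable code.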
Sukh Sroay
Sukh Sroay@sukh_saroy·
🚨 Nobody is ready for this paper. Every LLM you use (GPT-4.1, Claude, Gemini, DeepSeek, Llama-4, Grok, Qwen) has a flaw that no amount of scaling has fixed. They cannot tell old information from new information.

A patient's blood pressure: 120 at triage. 128 ten minutes later. 125 at discharge. "What's the latest reading?" Any human: "125, obviously." Every LLM, once enough updates pile up: wrong. Not sometimes wrong. 100% wrong. Zero accuracy. Complete hallucination. Every model. No exceptions. The answer sits at the very end of the input, right before the question. No searching needed. The model just can't let go of the old values.

35 models tested by researchers from UVA and NYU. All 35 follow the exact same mathematical death curve. Accuracy drops log-linearly to zero as outdated information accumulates. No plateau. No recovery. Just a straight line to total failure.

They borrowed a concept from cognitive psychology called proactive interference: old memories blocking recall of new ones. In humans, this effect plateaus. Our brains learn to suppress the noise and focus on what's current. LLMs never plateau. They decline until they break completely.

The researchers tried everything. "Forget the old values": barely moved the needle. Chain-of-thought: same collapse. Reasoning models: same collapse. Prompt engineering: marginal improvement at best.

But here's the finding that should reshape how you think about AI infrastructure: resistance to this interference has zero correlation with context window length. Zero. It only correlates with parameter count. Your 128K context window is not memory. It's a junk drawer that the model can't sort through.

The entire AI industry is charging you for longer context. This paper says context length was never the problem. If you're building agents, memory systems, financial tools, healthcare pipelines, or anything that tracks changing data over time, you are building on top of this flaw. And almost nobody is talking about it.
126
468
1.5K
85.1K
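A minimal sketch of the kind of update-tracking probe the post describes, under my own assumptions (function names and the toy "perfect reader" are illustrative, not the paper's benchmark): stream sequential value updates into a prompt, where ground truth is always the last value written, and score a model on how often it returns that final value.

```python
import random

def make_update_stream(key: str, n_updates: int, rng: random.Random):
    """Build a prompt of sequential value updates; truth is the last value."""
    values = [rng.randint(100, 140) for _ in range(n_updates)]
    lines = [f"{key}: {v}" for v in values]
    prompt = "\n".join(lines) + f"\nWhat is the latest {key}?"
    return prompt, values[-1]

def score(answers, truths):
    """Fraction of trials where the answer matched the final value."""
    return sum(a == t for a, t in zip(answers, truths)) / len(truths)

rng = random.Random(0)
prompt, truth = make_update_stream("blood pressure", 5, rng)

# A reader immune to proactive interference just reads off the last update,
# which sits on the second-to-last line, right before the question:
perfect_reader = lambda p: int(p.splitlines()[-2].split(": ")[1])
assert perfect_reader(prompt) == truth
```

Sweeping n_updates upward and plotting score against it is how you would look for the log-linear decline the post claims; a model that can't suppress the stale values will drift away from the trivial last-line answer as updates accumulate.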