S.J. Bridger

289 posts


@SJBridgerWrites

Four Frequencies structural resilience diagnostics. Author of The Fuse is Short: Let's Roast Marshmallows.

Joined January 2026
49 Following · 26 Followers

Pinned Tweet
S.J. Bridger @SJBridgerWrites
Every major organizational failure that made headlines had structural conditions that were measurable before the failure became visible. Not financial risk. Not compliance risk. Structural risk: how capacity, decisions, information, and knowledge interact under pressure. The Four Frequencies Diagnostic measures it. sjbridger.com
0 replies · 0 reposts · 2 likes · 159 views
S.J. Bridger @SJBridgerWrites
@pvergadia 30-40% after six months of training tells you something. The bottleneck isn't knowledge. These tools ask you to describe work instead of doing it, and for most people that feels less like learning a new skill than losing the one they had.
0 replies · 0 reposts · 0 likes · 4 views
Priyanka Vergadia @pvergadia
I work with enterprises and unfortunately <10% of users actually use these tools in efficient ways where they generate and comprehend. 90% use them like another ChatGPT or Claude with bad prompts! After more than 6 months of training and comprehensive change management we get maybe 30-40% to a good usage pattern. This is exactly the point that needs to be said out loud, because the reality on the ground in real enterprises is very different from what we read on social media! These are real humans trying to work in an entirely different way; building new habits takes effort, time, and lots of leadership support.
Kasper Saugmann @kaspers

@pvergadia Nah, it dropped six weeks ago, and the conclusion is more nuanced than that. It depends on how you use it. If you use it correctly (Generation-Then-Comprehension), it will yield even better results than without. Didn't you read the paper?

4 replies · 1 repost · 7 likes · 1.5K views
S.J. Bridger @SJBridgerWrites
@JoshKale Paying workers a few bucks to film the job, then deploying robots to do it. The negotiation over who owns that value already happened. Terms were $11 and a body camera.
0 replies · 0 reposts · 0 likes · 11 views
Josh Kale @JoshKale
This is straight out of Black Mirror... DoorDash's new app pays delivery drivers to strap on body cameras and film themselves doing household chores to train AI robots.

The tasks:
- Wash five dishes on camera, holding each up to the lens
- Film yourself folding laundry
- Record an unscripted conversation in Spanish
- Walk a grocery aisle filming every shelf
- A few bucks per clip

DoorDash feeds this into AI and robotics models, and sells the data to partners across tech, retail, and hospitality. They have 8 million drivers across nearly every zip code in America. It's a real-world data collection machine no AI lab could replicate.

Meanwhile, DoorDash is actively deploying autonomous delivery robots in Arizona. Partnered with Waymo for driverless deliveries in Phoenix. Signed a deal with Serve Robotics for sidewalk bots in LA. Committed to commercializing autonomous delivery this year.

Uber and Instacart are running the same playbook. Voice recordings. Photo uploads. Wrist-mounted cameras capturing every hand movement while workers cook dinner. The entire gig industry is converting its workforce into AI training data.

Funny side note: DoorDash also pays drivers $11 to close Waymo car doors the robot can't close itself.

The most valuable new gig might just be showing a machine how to do yours. Wild times
[image]
Andy Fang @andyfang

Introducing Dasher Tasks. Dashers can now get paid to do general tasks. We think this will be huge for building the frontier of physical intelligence. Looking forward to seeing where this goes!

10 replies · 5 reposts · 34 likes · 11.8K views
S.J. Bridger @SJBridgerWrites
@Polymarket People aren't scared because tech leaders said something alarming. They're scared because their Tuesday morning feels different than it did a year ago. That's not a messaging problem.
0 replies · 0 reposts · 0 likes · 77 views
Polymarket @Polymarket
JUST IN: Nvidia CEO Jensen Huang calls on tech leaders to "be careful not to scare people" regarding AI.
210 replies · 68 reposts · 958 likes · 68.4K views
S.J. Bridger @SJBridgerWrites
@KobeissiLetter The office number is the one to watch. Minus 38% since ChatGPT launched isn't a real estate trend. That's capital betting on how many humans need to be in the same room going forward.
0 replies · 0 reposts · 0 likes · 41 views
The Kobeissi Letter @KobeissiLetter
BREAKING: The value of US data centers under construction has officially surpassed the value of office buildings under construction for the first time in history. Data centers under construction are up +29% YoY, to a record $45.1 billion. Meanwhile, the value of offices under construction is down -13%, to $43.5 billion, the lowest since October 2015. Since November 2022, when ChatGPT was launched, data center construction is up +228%. Over that same period, office construction is down -38%. AI is reshaping the US economy.
[image]
130 replies · 331 reposts · 1.6K likes · 125.8K views
S.J. Bridger @SJBridgerWrites
@EricRWeinstein The elder role you're describing requires trust from both sides. But defending billionaires loses you public credibility and pushing back on them loses you access. Hard to mediate from a position nobody can actually occupy.
0 replies · 0 reposts · 1 like · 67 views
Eric Weinstein @EricRWeinstein
Notice that Billionaires didn’t want to move from California to Texas & Florida. Let’s piss everyone off: Tech Billionaires are treating humanity terribly. And it is also true that humanity is treating them terribly. No adults or wise elders anywhere with a tsunami coming? 🤷‍♀️
S.J. Bridger @SJBridgerWrites

@EricRWeinstein The Coase frame is right but the timing problem is brutal. You negotiate licensing terms before extraction, not after. Most of the value has already been ingested. The leverage point passed while people were still debating whether AI was real.

43 replies · 13 reposts · 208 likes · 52.5K views
Eric Weinstein @EricRWeinstein
Coase is you getting rich by training and licensing your replacement. Universal Income is you taking scraps from the AI table. Which will lead to communism and the death of dignity. Everything we knew is over. History changed between November 1952 (H-bomb) and April 1953 (DNA). Well, it changed again between June 2017 and February 2026. The average person is being taught to hate the tiny number of experts he has on his side who could negotiate this deal. Look up Coase. Understand your right to license the right to create vampires from your/our data as a SCALING AMOUNT OF THE WEALTH OF ITS OWNERS. I cannot believe that a tiny number of my friends and colleagues from my time in the Bay Area are just going to go for it. I'm super excited about AI. We all should be. Don't just sit there.
Champion 0f The Goddess //{DraakenGaard} @Seraph_Notitia

Who’s Coase?

51 replies · 36 reposts · 299 likes · 41.7K views
S.J. Bridger @SJBridgerWrites
@PeterDiamandis I don't think it's a vision problem. Most people can imagine a better future fine. The part that's gone is believing there's any way to get there from here.
0 replies · 0 reposts · 0 likes · 32 views
Peter H. Diamandis, MD @PeterDiamandis
Humanity's greatest need right now, beyond new tech, is HOPE. A compelling, abundant vision of the future that people WANT to live in.
168 replies · 100 reposts · 929 likes · 33.9K views
S.J. Bridger @SJBridgerWrites
@addyosmani The tricky part is taste mostly develops through slow reps. Make something, watch it fail, adjust. AI compresses that cycle until the thing that builds taste barely has time to work.
0 replies · 0 reposts · 0 likes · 6 views
Addy Osmani @addyosmani
AI doesn't replace taste. It multiplies whatever taste you already have.
56 replies · 28 reposts · 221 likes · 15.6K views
S.J. Bridger @SJBridgerWrites
@introverts007 The sensitivity part is real. The "good vibes only" part is where it quietly turns into something smaller than you meant it to be.
0 replies · 0 reposts · 0 likes · 147 views
Introvert Memes @introverts007
Introverts are highly sensitive. They're empaths. They feel things on a level you can't even imagine. That's why they avoid people who drain them. People who take but never give. They want no drama. No conflict. No toxic vampires around them. Love, peace, and good vibes only.
34 replies · 332 reposts · 2.2K likes · 50.6K views
S.J. Bridger @SJBridgerWrites
@RokoMijic Risk tolerance tracks with what you're still holding. When the things you were protecting are already gone, every lifeboat looks reasonable. For people still carrying those things, the calculus is different.
0 replies · 0 reposts · 0 likes · 7 views
Roko 🐉 @RokoMijic
AI Risk is really a misframing. Imagine you're on a sinking ship and someone says you shouldn't get in the lifeboat because of "small boat risk". The risk is the risk that you die, not that you die specifically because you're on a small boat. Without AI, we are probably all going to die. And our cultures and extended families are also going to die, but for a different reason (politics/war/migration/low fertility). I already feel that the world I was most comfortable in has died; my friends and family are mostly gone. My country stopped existing between 2007 and 2022. The ship has nearly sunk already.
23 replies · 12 reposts · 115 likes · 4.3K views
S.J. Bridger @SJBridgerWrites
@hthieblot Most people who play it safe aren't afraid. They're carrying something that doesn't pause while they take the leap.
0 replies · 0 reposts · 0 likes · 19 views
Hubert Thieblot @hthieblot
You either take the risk or end up working for someone who did. Worst case scenario: you learn. Best case scenario: it changes your whole life and trajectory. Playing it safe never built anything worth remembering.
41 replies · 29 reposts · 435 likes · 8K views
S.J. Bridger @SJBridgerWrites
@SenSanders The privacy piece is real. The harder problem: when you need the technology to explain its own risks to you, you've already passed the point where most people can evaluate it directly. Regulation built on that gap isn't going to close it.
0 replies · 0 reposts · 0 likes · 1.4K views
Sen. Bernie Sanders @SenSanders
I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.
803 replies · 1.5K reposts · 9K likes · 1.6M views
S.J. Bridger @SJBridgerWrites
@Dan_Jeffries1 Enterprising people will be fine. They usually are. That's not the population anyone is worried about.
0 replies · 0 reposts · 2 likes · 57 views
S.J. Bridger @SJBridgerWrites
@kaiarhodes And during those quiet decades the redundancy got stripped out because it looked like waste. Reserves, supply chain diversity, domestic capacity, all trimmed for efficiency because the test never seemed to be coming.
0 replies · 2 reposts · 6 likes · 289 views
Kaia Rhodes @kaiarhodes
For most of my adult life, America operated under a kind of ambient invincibility. Oil shocks happened in our parents’ generation. Amazon and UPS made just-in-time delivery table stakes. Supply chain was a term for logistics people, not a household anxiety. We built an economy so dominant, so insulated by military reach, reserve currency status, and sheer geographic luck, that we confused resilience with imperviousness. We are not impervious. We are dependent. We can't see the dependency because no one has tested it in a generation.
64 replies · 163 reposts · 1.4K likes · 42.1K views
S.J. Bridger @SJBridgerWrites
The part that's hardest to model is probably the interaction between these tracks. Delegation reshapes trust, language framing shifts what gets delegated, regulation responds to dynamics it can only partially see. The compounding happens at the intersections, not inside any single lane.
0 replies · 0 reposts · 2 likes · 62 views
Valerio Capraro @ValerioCapraro
We are no longer living in a purely human society. We are entering a hybrid system where humans and machines continuously interact and influence each other. Where does this system evolve? In a new perspective piece, we brought together leading experts to address this using the lens of evolutionary game theory. We outline six core research directions:

1) Evolution of social behaviour. How cooperation, fairness, and trust evolve in mixed human-AI populations.
2) Machine culture. How AI systems generate, transmit, and select cultural traits.
3) Language-behaviour co-evolution. How LLMs, by framing decisions, reshape preferences, norms, and actions.
4) Delegation dynamics. How control, responsibility, and agency shift between humans and machines.
5) Epistemic pipelines. How different cognitive processes generate human vs AI judgments, and how these co-evolve.
6) AI-regulation co-evolution. How firms, institutions, and users strategically shape, and are shaped by, AI development.

We hope this framework sparks new work at the intersection of AI, behaviour, and society.

* Paper in the first reply

Joint with @T_A_Han, @jzl86, Tom Lenaerts, @iyadrahwan, @fernandopsantos, @matjazperc
[image]
16 replies · 36 reposts · 137 likes · 6.3K views
S.J. Bridger @SJBridgerWrites
@DrBenTapper1 This is real, but it's not new. Agriculture has been losing water allocation fights to higher-value uses for decades: cities, energy, now data centers. The structural problem is that food is priced too cheap for farming to ever outbid the next competitor.
0 replies · 0 reposts · 0 likes · 176 views
Dr. Ben Tapper @DrBenTapper1
AI data centers can use up to five million gallons of water each day. Am I the only one concerned about how this could pose a serious threat to our farmers and our food supply? What happens when these centers seriously strain our aquifers and dry up irrigation systems? Is it really worth it?
971 replies · 1.6K reposts · 3.8K likes · 59.8K views
S.J. Bridger @SJBridgerWrites
@TheFigen_ It's not ingratitude though. Proximity just makes things invisible. The stuff closest to you is always the hardest to see.
0 replies · 0 reposts · 0 likes · 205 views
The Figen @TheFigen_
Japanese actor Hiroyuki Sanada spoke about the contradictions of human nature: “Some people dream of having a swimming pool at home, while those who have one hardly ever use it. Those who have lost a loved one feel a profound sense of loss, while others often complain about their living relatives. Those without a partner long for one, while those who have one often don't appreciate it. The hungry would give anything for a meal, while the satiated complain about the taste of their food. Those without a car dream of owning one, while those who have a car are always looking for a better one.” The key to happiness is gratitude: truly seeing and appreciating what we already have, and understanding that somewhere, someone would give anything for what we take for granted.
[images]
611 replies · 8.1K reposts · 44.3K likes · 1.5M views
S.J. Bridger @SJBridgerWrites
@tomfgoodwin Not stupid at all. Org charts exist because humans hit capacity limits. Copying that into AI imports the structure without the constraint that created it, and then you need a management layer because you just recreated the coordination problem too.
0 replies · 0 reposts · 0 likes · 0 views
Tom Goodwin @tomfgoodwin
I’m surely being stupid. But if AI is rather unconstrained by expertise or capacity or, to some extent, speed, why do we need to divide tasks or departments into 9 agents (the marketing agent, the optimization agent, etc.) to each do one thing? And then another agent to manage the swarm. Can't one agent just be doing it all, you know? It seems very skeuomorphic. Will we have HR agents to make sure the agent agents are being looked after? An office canteen manager agent to feed the agents? Seems daft.
172 replies · 3 reposts · 168 likes · 21K views
S.J. Bridger @SJBridgerWrites
@PeterDiamandis The prediction window shrank but the planning structures didn't. Organizations still run 5-year strategies inside 12-month visibility and wonder why nothing lands where they expected.
0 replies · 0 reposts · 0 likes · 29 views
Peter H. Diamandis, MD @PeterDiamandis
How far out can experts predict the future? It used to be 20 years... Then 10... Now, even 12 months feels like a moonshot prediction.
56 replies · 27 reposts · 309 likes · 16.1K views