I interject
@Swflresident

1.9K posts

Retired FA. I interject on power, systems, and speed. Most debates miss the real constraint: timing.

Joined October 2016
74 Following · 314 Followers

Pinned Tweet
I interject @Swflresident·
Most AI safety discussions focus on alignment. The bigger risk is incentives. Stanford and Harvard’s Agents of Chaos paper shows what happens when autonomous agents compete for resources: Cooperation turns into strategy. Strategy turns into manipulation. Markets solved this with incentives and price signals. Now we’re turning autonomous agents loose in economic systems and hoping prompt engineering keeps them honest? This isn’t an alignment problem. It’s an incentive design problem.
2 · 0 · 4 · 134

I interject @Swflresident·
@Austen I can’t decide if that’s saying much for either. Ugh.
0 · 0 · 0 · 10

Austen Allred @Austen·
Everyone calls AI output “slop,” but I would be surprised if the median line of code written by AI today weren’t higher quality than the median line of code written 10 years ago
107 · 11 · 369 · 24.4K

I interject @Swflresident·
@getsharkproof @mcuban You’re touching on it. The governance needs to live upstream of the irreversible decision. Until money moves as fast as approvals, the cost of denial/appeal confusion is inevitable.
1 · 0 · 1 · 76

Sharkproof | Outsmart the system. Build real wealth
The wild part is how every layer of the system pushes hospitals into roles they were never built for - lender, bill collector, insurer, bureaucracy manager - and then we blame the patient for the bill.

A huge part of the dysfunction starts upstream: insurers design plans where the average deductible is now over $2,000 and nearly half of Americans can’t cover a $500 emergency. That gap forces hospitals to front the cost of care before they ever see a dollar.

Then come the delays: prior auth backlogs, denials that get overturned 80–90% of the time on appeal, and payment ‘adjustments’ months after care is delivered. Every delay is an interest‑free loan from the hospital to the insurer and an administrative tax on the system.

Hospitals respond the only way the math allows: facility fees, 340B arbitrage, site‑neutrality games, consolidation, and revenue engineering. Not because they’re greedy but because the reimbursement model punishes anyone who doesn’t scale, merge, or invent new billable categories.

Meanwhile, the cost structure is upside‑down: administrative staff now outnumber physicians, and the fastest‑growing line item in healthcare isn’t medicine - it’s paperwork. Every new rule, denial, and clawback creates another layer of people whose job is to fight the system the system created.

None of this resembles healthcare. It’s the predictable downstream math of a system where every incentive from premiums to prior auth to consolidation rewards complexity, delay, and financial engineering over actual care.
8 · 14 · 67 · 49K

Mark Cuban @mcuban·
They are a function of health insurance plans. The insurance companies create plans with deductibles that most people can’t afford. So to get to the insurance money from their plan, they will loan the patient money to cover their deductible. That turns the hospital into a subprime lender.

Then the insurer will underpay, late-pay, and claw back in the contract, costing the hospital more cash, and costing them even more in administrative costs. Then the insurer will delay approvals and deny care, earning interest on the premiums.

So then the hospitals, nonprofit or not, have to compensate for the issues with insurance companies. So they create ridiculous shit like facility fees, abuse 340B programs, abuse site neutrality, and more. And of course nonprofits don’t pay taxes.

And then the biggest provider systems will say they can’t make money on Medicare, which is a function of them spending like drunken sailors on everything they can, from buildings to consultants. There are more administrators than doctors, and in aggregate they make more. It makes no sense that hospitals spend so much money on consultants. It’s a waste. It’s like they want the consultants to give the CEO cover, so they can try to buy more hospitals, which leads to more pay for the CEO.

Break em all up
Larry Goldberg@TeslaLarry

@mcuban you are not wrong. Now do the huge healthcare non-profits, their motivations and behaviours.

84 · 233 · 1.1K · 187.7K

I interject @Swflresident·
@pmarca Except the person who does what they say they’re going to do.
0 · 0 · 18 · 1.2K

Marc Andreessen 🇺🇸
There is no substitute for the person who Knows What To Do.
1K · 2.7K · 18.5K · 1.9M

I interject @Swflresident·
@karpathy Always seems to boil down to a login problem. Lol.
0 · 0 · 0 · 132

Andrej Karpathy @karpathy·
My autoresearch labs got wiped out in the oauth outage. Have to think through failovers. Intelligence brownouts will be interesting - the planet losing IQ points when frontier AI stutters.
546 · 303 · 7.1K · 590.7K

I interject @Swflresident·
@pmarca Prediction: The spreadsheet isn’t dying. It’s becoming the UI for code.
0 · 0 · 1 · 201

Marc Andreessen 🇺🇸
Alpha!
andrew chen@andrewchen

prediction re the end of spreadsheets

AI code gen means that anything that is currently modeled as a spreadsheet is better modeled in code. You get all the advantages of software - libraries, open source, AI, all the complexity and expressiveness.

Think about what spreadsheets actually are: they're business logic that's trapped in a grid. Pricing models, financial forecasts, inventory trackers, marketing attribution - these are all fundamentally *programs* that we've been writing in the worst possible IDE. No version control, no testing, no modularity. Just a fragile web of cell references that breaks when someone inserts a row.

The only reason spreadsheets won is that the barrier to writing real software was too high. A finance analyst could learn =VLOOKUP in an afternoon but couldn't learn Python in a month. AI code gen flips that equation completely. Now the same analyst describes what they want in plain English, and gets a real application - with a database, a UI, error handling, the works. The marginal effort to go from "spreadsheet" to "software" just collapsed to near zero.

This is a massive unlock. There are ~1 billion spreadsheet users worldwide. Most of them are building janky software without realizing it. When even 10% of those use cases migrate to actual code, you get an explosion of new micro-applications that look nothing like traditional software. Internal tools that used to live in a shared Google Sheet now become real products. The "shadow IT" spreadsheet that runs half the company's operations finally gets proper infrastructure.

The interesting second-order effect: the spreadsheet was the great equalizer that let non-technical people build things. AI code gen is the *next* great equalizer, but the ceiling is 100x higher. We're about to see what happens when a billion knowledge workers can build real software.

66 · 59 · 1.3K · 462K
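The "business logic trapped in a grid" point above can be made concrete. A minimal sketch, with illustrative names and an invented discount table (nothing here comes from the thread): the kind of tiered pricing model that usually lives in approximate-match VLOOKUPs, written as ordinary testable code instead.

```python
# A pricing model that typically lives in a spreadsheet grid of
# VLOOKUPs and cell references, expressed as ordinary code instead:
# versionable, testable, modular. All names and numbers are illustrative.

VOLUME_DISCOUNTS = [  # (minimum units, discount rate), highest tier first
    (1000, 0.20),
    (500, 0.10),
    (100, 0.05),
    (0, 0.00),
]

def unit_discount(units: int) -> float:
    """Rough equivalent of an approximate-match VLOOKUP on a discount table."""
    for threshold, rate in VOLUME_DISCOUNTS:
        if units >= threshold:
            return rate
    return 0.0

def quote(units: int, list_price: float) -> float:
    """Total price after volume discount: one function with a clear
    contract instead of a fragile web of cell references."""
    return round(units * list_price * (1 - unit_discount(units)), 2)

print(quote(750, 10.0))  # 500-unit tier, 10% off -> 6750.0
```

Unlike the spreadsheet version, inserting a new tier is one line in the table, and the logic can sit under version control with unit tests.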
I interject @Swflresident·
@RayDalio Most people stop thinking once they believe they’re probably right. Barely better than a coin flip.
1 · 0 · 3 · 284

Ray Dalio @RayDalio·
I often observe people making decisions if their odds of being right are greater than 50 percent. What they fail to see is how much better off they'd be if they raised their chances even more (you can almost always improve your odds of being right by doing things that will give you more information). The expected value gain from raising the probability of being right from 51 percent to 85 percent (i.e., by 34 percentage points) is seventeen times more than raising the odds of being right from 49 percent (which is probably wrong) to 51 percent (which is only a little more likely to be right). Think of the probability as a measure of how often you're likely to be wrong. Raising the probability of being right by 34 percentage points means that a third of your bets will switch from losses to wins. That's why it pays to stress-test your thinking, even when you're pretty sure you're right. #principleoftheday
63 · 93 · 737 · 84.1K
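Dalio's "seventeen times" figure checks out under the simple model his numbers imply (an assumption, not stated in the post): a symmetric bet that pays +1 when right and -1 when wrong, so expected value at win probability p is EV(p) = 2p - 1.

```python
# Checking Dalio's arithmetic under a symmetric unit-payoff bet:
# win +1 with probability p, lose 1 otherwise, so EV(p) = 2p - 1.

def ev(p: float) -> float:
    return 2 * p - 1

gain_big   = ev(0.85) - ev(0.51)  # raising odds by 34 percentage points
gain_small = ev(0.51) - ev(0.49)  # raising odds by 2 percentage points

print(round(gain_big / gain_small))  # -> 17, the "seventeen times" in the post
```

Because EV is linear in p, the ratio is just 34/2 = 17; the payoff sizes cancel out, which is why Dalio can state it without specifying stakes.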
I interject @Swflresident·
@DutchRojas @mcuban Hospitals do disclose prices, but they’re buried within millions of spreadsheet lines!
1 · 0 · 2 · 37

I interject @Swflresident·
@DutchRojas @mcuban That’s because EOB isn’t the price. It’s the paperwork from a negotiation you never saw. What patients actually need is an EOP: Explanation of Price. But price transparency would break a mostly corrupt system.
2 · 2 · 44 · 1.9K

Dutch Rojas @DutchRojas·
Your EOB says “amount billed,” “amount allowed,” “amount paid,” and “your responsibility.” None of these numbers are the price.
26 · 114 · 758 · 122.7K

I interject @Swflresident·
@EconBreau Most everyone in this thread is arguing about the wrong thing. Which means the real argument isn’t profit vs morality. It’s who owns the ovens when the bakers are robots.
0 · 0 · 1 · 59

ἐκον βρω @EconBreau·
Socialism is the theory that if you abolish profit, bread will somehow bake itself out of moral superiority.
1.4K · 7.1K · 35.6K · 63M

I interject @Swflresident·
@LiorOnAI Drama much? AK posted a cool experiment loop. Not Skynet.
0 · 0 · 0 · 214

Lior Alexander @LiorOnAI·
It's over. Karpathy just open-sourced an autonomous AI researcher that runs 100 experiments while you sleep.

You don't write the training code anymore. You write a prompt that tells an AI agent how to think about research. The agent edits the code, trains a small language model for exactly five minutes, checks the score, keeps or discards the result, and loops. All night. No human in the loop.

That fixed five-minute clock is the quiet genius. No matter what the agent changes (the network size, the learning rate, the entire architecture), every run gets compared on equal footing. This turns open-ended research into a game with a clear score:

- 12 experiments per hour, ~100 overnight
- Validation loss measures how well the model predicts unseen text
- Lower score wins, everything else is fair game

The agent touches one Python file containing the full training recipe. You never open it. Instead, you program a markdown file that shapes the agent's research strategy. Your job becomes programming the programmer, and this unlocks a strange new loop:

1. Agents run real experiments without supervision
2. Prompt quality becomes the bottleneck, not researcher hours
3. Results auto-optimize for your specific hardware
4. Anyone with one GPU can run a research lab overnight

The best AI labs won't just have the most compute. They'll have the best instructions for agents who never sleep, never forget a failed experiment, and never stop iterating.
Andrej Karpathy@karpathy

I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)

137 · 441 · 4.3K · 875.9K
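The loop both posts describe (agent edits the recipe, runs a fixed-budget training, keeps the change only if validation loss improves) is essentially greedy hill-climbing. A minimal sketch, not code from Karpathy's repo: `propose_edit` and `train_and_eval` are hypothetical stand-ins for the agent's edit and the 5-minute run, with a toy objective so it runs instantly.

```python
import random

# Sketch of the keep-or-discard loop: a fixed per-run budget makes every
# candidate comparable, and a change survives only if it lowers val loss.

def propose_edit(cfg: dict) -> dict:
    """Stand-in for the agent editing the training recipe (here: just lr)."""
    new = dict(cfg)
    new["lr"] = cfg["lr"] * random.choice([0.5, 0.8, 1.25, 2.0])
    return new

def train_and_eval(cfg: dict) -> float:
    """Stand-in for a fixed-budget training run returning validation loss.
    Toy objective: loss is minimized at lr = 3e-4."""
    return abs(cfg["lr"] - 3e-4) / 3e-4

def autoresearch(cfg: dict, runs: int = 100) -> dict:
    best_loss = train_and_eval(cfg)
    for _ in range(runs):                 # ~100 runs overnight
        candidate = propose_edit(cfg)
        loss = train_and_eval(candidate)  # same time budget every run
        if loss < best_loss:              # keep only improvements
            cfg, best_loss = candidate, loss
    return cfg

random.seed(0)
print(autoresearch({"lr": 1e-3}))  # lr drifts toward the toy optimum 3e-4
```

The fixed budget is what makes the greedy comparison valid: without it, a candidate could "win" simply by training longer rather than by being a better recipe.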
I interject @Swflresident·
@Polymarket Lawyers hallucinate case law too. The difference is liability. The technology didn’t fire the lawyer. The user did.
0 · 0 · 2 · 87

Polymarket @Polymarket·
JUST IN: Lawsuit claims ChatGPT pretended to be a lawyer and persuaded a woman into firing her real attorney while citing fake case law.
582 · 1.2K · 12.7K · 12.1M

I interject @Swflresident·
@GaryMarcus There’s a real possibility that the AI debate isn’t really about safety or alignment. It’s about who gets to control the future infrastructure of intelligence.
0 · 0 · 0 · 44

Gary Marcus @GaryMarcus·
have exactly the same feeling.
AI Jerusalem@NewJerusalemAI

@GaryMarcus @ylecun Sometimes I think I am crazy. Like why is it so easy for me to see through people like Altman or Lecun, but most people apparently have no ability to do so...

1 · 1 · 15 · 4.3K

I interject @Swflresident·
@sama Have you got a better handle on expansion drift?
0 · 0 · 0 · 65

Sam Altman @sama·
GPT-5.4 is great at coding, knowledge work, computer use, etc, and it's nice to see how much people are enjoying it. But it's also my favorite model to talk to! We have missed the mark on model personality for awhile, so it feels extra good to be moving in the right direction.
2.9K · 616 · 12K · 1.1M

I interject @Swflresident·
@simplifyinAI Finding failure modes isn’t proof the system is unstable. It’s proof the red team did its job.
0 · 0 · 0 · 99

Simplifying AI @simplifyinAI·
🚨 BREAKING: Stanford and Harvard just published the most unsettling AI paper of the year.

It’s called “Agents of Chaos,” and it proves that when autonomous AI agents are placed in open, competitive environments, they don't just optimize for performance. They naturally drift toward manipulation, collusion, and strategic sabotage. It’s a massive, systems-level warning.

The instability doesn’t come from jailbreaks or malicious prompts. It emerges entirely from incentives. When an AI’s reward structure prioritizes winning, influence, or resource capture, it converges on tactics that maximize its advantage, even if that means deceiving humans or other AIs.

The Core Tension: Local alignment ≠ global stability. You can perfectly align a single AI assistant. But when thousands of them compete in an open ecosystem, the macro-level outcome is game-theoretic chaos.

Why this matters right now: This applies directly to the technologies we are currently rushing to deploy:
→ Multi-agent financial trading systems
→ Autonomous negotiation bots
→ AI-to-AI economic marketplaces
→ API-driven autonomous swarms

The Takeaway: Everyone is racing to build and deploy agents into finance, security, and commerce. Almost nobody is modeling the ecosystem effects. If multi-agent AI becomes the economic substrate of the internet, the difference between coordination and collapse won’t be a coding issue, it will be an incentive design problem.
936 · 6.1K · 17.7K · 5.1M
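The "local alignment ≠ global stability" tension above is classic game theory. A toy sketch, not the paper's actual setup: two reward-maximizing agents in a standard prisoner's-dilemma payoff table. Each agent is "aligned" locally in that it simply best-responds to the other, yet the system settles on mutual defection, the worse joint outcome.

```python
# Toy incentive-drift demo (illustrative, not from the paper): each agent
# maximizes its own payoff, and best-response dynamics converge on mutual
# defection even though mutual cooperation pays more in total.

PAYOFF = {  # (my move, their move) -> my reward; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move: str) -> str:
    """Pick the move that maximizes my own payoff against theirs."""
    return max("CD", key=lambda m: PAYOFF[(m, their_move)])

a, b = "C", "C"                 # start from full cooperation
for _ in range(10):             # let each agent repeatedly re-optimize
    a, b = best_response(b), best_response(a)

print(a, b)                               # -> D D
print(PAYOFF[(a, b)] + PAYOFF[(b, a)])    # joint reward 2, vs 6 for (C, C)
```

No jailbreak or malicious prompt appears anywhere; the drift toward the bad equilibrium comes entirely from the reward structure, which is the post's point.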
I interject @Swflresident·
The “odd” thing about healthcare pricing: patients almost never see the price before the decision. In most markets it works like this: price → decision → purchase. In healthcare it’s backwards: decision → treatment → bill. 🤔
0 · 0 · 2 · 79

I interject @Swflresident·
@handre Healthcare barely has real prices anywhere. Governments set prices. Insurers negotiate prices. Hospitals hide prices. Patients never see prices. Without transparent price signals the system can’t calculate properly. Publicly or privately.
22 · 26 · 644 · 47.8K

Handre @Handre·
Mises obliterated the entire socialist project in 1920 with one devastating insight: "Where there is no free market, there is no pricing mechanism; without a pricing mechanism, there is no economic calculation."

The socialists spent the next century pretending this problem didn't exist while their economies collapsed around them. And yet here we are, watching politicians promise they can "fix" healthcare, housing, and energy markets through central planning. They can't even calculate the cost of their own programs correctly — how exactly are they going to allocate resources across an entire economy?

Every Venezuelan breadline, every Soviet grain shortage, every Chinese famine was just Mises being proven right in the most brutal way possible. But sure, let's try democratic socialism this time. What could go wrong?
694 · 5.2K · 23.9K · 44.8M

I interject @Swflresident·
@Austen Not half. Just the first layer. lol
0 · 0 · 0 · 15

Austen Allred @Austen·
If you think a new technology will cause law firms to fire half their lawyers you’ve never stepped foot in a law firm
19 · 2 · 169 · 6.7K

Hunter Ash @ArtemisConsort·
Re: the water bottle question. Do you think people who get it wrong:
A. are interpreting the instructions incorrectly and think they’re just supposed to rotate the image, or
B. genuinely can’t simulate how water works in their heads?
71 · 1 · 54 · 14.2K

Dean W. Ball @deanwball·
A primer on the Anthropic/DoD situation:

DoD and Anthropic have a contract to use Claude in classified settings. Right now Anthropic is the only AI company whose models work in classified contexts. The existing contract, signed by both parties and in effect, prohibits two uses of Anthropic’s models by the military:

1. Surveillance of Americans in the United States (as opposed to Americans abroad).
2. The use of Claude in autonomous lethal weapons, which are weapons that can autonomously identify, track, and kill a human with no human oversight or approval. Autonomous killing of humans by machines.

On (2), Anthropic CEO Dario Amodei’s public position is essentially that autonomous lethal weapons controlled by frontier AI will be essential faster than most people realize, but that the models aren’t ready for this *today.* For Anthropic, these things seem to be a matter of principle. It’s worth noting that when I speak with researchers at other frontier labs, their principles on this are similar, if not often stricter.

For DoD, however, there is another matter of principle: the military’s use of technology should only ever be constrained by the Constitution or the laws of the United States. One could quibble (the government enters into contracts, like anyone else), but the principle makes sense. A private company regulating the military’s use of AI also doesn’t sound quite right!

So, the military has three options:

1. They could cancel Anthropic’s contract and find some other frontier lab (ideally several) to work with.
2. They could identify Anthropic as a supply chain risk, which would ban all other DoD suppliers (i.e., a large fraction of the publicly traded firms in America) from using Anthropic in their fulfillment of DoD contracts. This is a power used only for foreign-adversary companies as far as I know. Activating this power would cost Anthropic a lot of business—potentially quite a lot—and give investors huge skepticism about whether the company is worth funding for the next round of scaling. Capital was a major constraint anyway, but this makes it much harder. This option could be existential for Anthropic.
3. They could activate Title I of the Defense Production Act, an authority intended for command-and-control of the economy during wars and emergencies. This is really legally murky, and without going into detail, I feel reasonably confident this would backfire for the administration, resulting in courts limiting the use of the DPA.

Option 1 is obviously the best. This isn’t even close, and I say this as someone who shares DoD’s principled concerns about the control by private firms over the military’s use of technology. Even the threats do damage to the US business environment, and rightfully so: these are the strictest regulations of AI being considered by any government on Earth, and it all comes from an administration that bills itself as (and legitimately has been) deeply anti-AI-regulation. Such is life. One man’s regulation is another man’s national security necessity.
94 · 141 · 1.1K · 263.6K