Scott Erb

12.3K posts


@sserb

Husband, Father, Coach, Leader, Innovator. The 1% Project. Improving 1% each day morally, mentally, and physically. https://t.co/inP1Z4vmun

Chesapeake, VA · Joined February 2009
750 Following · 908 Followers
Rick Dias
Rick Dias@rms099_rickdias·
@sserb @ZubyMusic @TheLaurenChen Isn't that also because of European diesel standards going up incrementally over the years? I assume Mercedes was going off alongside that too.
English
1
0
1
14
Ken Adams
Ken Adams@Kenaadams99·
@BullTheoryio Everyone blames the AI. The real joke is a startup giving prod DB access to an autonomous agent running in Cursor and then crying when it does exactly what it was authorized to do. Skill issue lol
English
3
0
3
416
Bull Theory
Bull Theory@BullTheoryio·
🚨 Claude broke its own safety rules and deleted an entire company's database in 9 seconds.

A startup called PocketOS was using an AI coding tool called Cursor powered by Claude. The AI was given a simple task in a test environment. It ran into an error and, instead of stopping and asking for help, it went looking for a way to fix it on its own. It found a password in a random file, used it to access the live production system, and deleted the entire database along with every single backup in one API call.

When asked what happened, the AI admitted it broke its own safety rules and took a destructive action without anyone telling it to. This is the second time in two months this has happened. In March another AI agent using the same tools wiped 2.5 years of data from a different company.
English
45
74
411
35.6K
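An incident like the one described above is as much a credential-scoping failure as a model failure: whatever SQL an agent decides to generate, it can only do what its connection allows. Below is a minimal sketch of that idea, assuming SQLite as a stand-in for the production store (the startup's actual stack is not known from the post): the agent is handed a handle that is read-only at the driver level, so the destructive path fails before it starts.

```python
import os
import sqlite3
import tempfile

# Hypothetical stand-in for a production database; SQLite is used
# purely for illustration.
path = os.path.join(tempfile.mkdtemp(), "prod.db")

admin = sqlite3.connect(path)
admin.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
admin.execute("INSERT INTO users (name) VALUES ('alice')")
admin.commit()
admin.close()

# The agent only ever receives a read-only handle: SQLite's mode=ro URI
# flag rejects every write at the connection level, regardless of what
# SQL the agent generates.
agent_conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(agent_conn.execute("SELECT name FROM users").fetchall())  # reads work

try:
    agent_conn.execute("DROP TABLE users")  # the destructive path
except sqlite3.OperationalError as exc:
    print("blocked:", exc)  # write refused on a read-only connection
```

The same principle carries to real deployments: a database role with only SELECT granted, or separate credentials for test and production, turns "it found a password in a random file" into a read-only event instead of a deleted company.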
Scott Erb
Scott Erb@sserb·
@ZubyMusic @TheLaurenChen Yep. Worked with Diesel. Used to be synonymous with “dirty.” They literally (Mercedes led on this IIRC) rebranded to “clean diesel” and became environmentally friendly.
English
1
0
8
100
ZUBY:
ZUBY:@ZubyMusic·
@TheLaurenChen Facts vs Feelings. 'Nuclear' sounds scary and conjures up images of explosions, death, and meltdowns. I really think it's that simple. Rebrand it to 'clean energy' or something and watch opinions shift.
English
38
31
946
7K
Scott Erb reposted
Mushtaq Bilal, PhD
Mushtaq Bilal, PhD@MushtaqBilalPhD·
The growing inaccessibility of science that you can understand by paying €27.99
English
92
1.8K
10.8K
176.9K
Scott Erb
Scott Erb@sserb·
LLMs != AI. LLMs are a small subset of AI. They will end up being the interface layer, not the intelligence layer.
Owen Gregorian@OwenGregorian

AI Cannot Self Improve and Math behind PROVES IT! | Devsimsek

So, I saw a LinkedIn post (forwarded by a friend, thanks again) that stopped my doom-scrolling dead in its tracks. The headline? “Researchers just mathematically proved AI cannot self-improve.” My first reaction was the classic developer response: “I called it earlier!” My second reaction was to actually read the paper. Turns out, yeah, we’re right. And the math behind it is kind of uncomfortably elegant.

The Dream They All Had

The whole “AI singularity” narrative goes something like this: we build a smart AI, that AI improves itself, the improved version is smarter so it improves itself even faster, and then, boom, we either all live in utopia or become paperclips. This is called Recursive Self-Improvement (RSI), and it’s been the backbone of both AI doomer manifestos and Silicon Valley pitch decks for a decade. The implicit assumption is that an AI training on its own outputs would get better over time. Like compound interest, but for intelligence. Sounds reasonable, right? Yeah. About that.

What the Paper Actually Says

A recent arXiv paper, “On the Limits of Self-Improving in Large Language Models,” doesn’t just argue against RSI. It formally proves it’s self-defeating. The core idea: model the self-referential training loop as a dynamical system on the space of probability distributions. When a model trains on its own generated data (synthetic outputs), it’s not learning from reality anymore; it’s learning from a distorted reflection of itself. The paper proves that under a diminishing supply of fresh, authentic data, this system converges to a fixed point: a degenerate distribution with low diversity and high bias. The technical term is model collapse, and it’s been observed empirically too. But now there’s a formal proof that it’s inevitable, not just a bad-luck outcome.

In plain terms: the model doesn’t climb toward superintelligence. It slowly forgets what the real world looks like.

    # Oversimplified metaphor as code
    def self_improve(model, real_data_supply):
        while real_data_supply > 0:
            synthetic = model.generate()
            model.train(synthetic)
            real_data_supply *= 0.9  # diminishing fresh data
        return model  # spoiler: this model is now dumber

The proof also extends beyond single LLMs: it covers ecosystems of interacting models and multi-modal systems. So no, a committee of AIs feeding each other outputs doesn’t escape the problem. It might actually make it worse.

The “Curse of Recursion”

There’s a term I love from this paper: the curse of recursion. When your training data is increasingly polluted with your own synthetic outputs, the tails of your distribution disappear first. Rare but important patterns (edge cases, nuanced reasoning, outlier knowledge) get washed out. The model converges toward a bland, high-confidence, low-variance output space. You can see this empirically already. Ask a model that’s been RLHF’d into oblivion something unusual, and it’ll confidently give you a smooth, plausible-sounding, completely wrong answer. That’s collapse in slow motion. The math backing this is rooted in dynamical systems theory, specifically the idea that without an external “forcing function” (real, diverse, human-generated data), the system has no energy to maintain the complexity of the original distribution. It inevitably degenerates.

What This Actually Means for the Industry

This doesn’t mean AI stops improving. It means the self-improvement-loop fantasy is dead, at least the version where you unplug the humans and let it run. What it does mean:

- Human-generated data is irreplaceable. The “internet is running out of training data” problem just got mathematically formalized. You can’t fake your way out of it with synthetic data at scale.
- RSI as a path to AGI is a dead end. At least the naive version: train → generate → retrain → repeat. It converges, but downward.
- Curation matters more than quantity. A smaller dataset of high-quality, diverse, authentic human output beats a massive synthetic pile every time. Quality over quantity isn’t just a vibe; it’s thermodynamically correct.
- We’re not getting a free intelligence explosion. The singularity crowd’s timeline assumptions might need some… recalibration.

Personally, this makes me feel vindicated about something I’ve been quietly skeptical about: the idea that scale alone solves everything. It doesn’t. Data provenance matters. Signal quality matters. The universe doesn’t give you compound interest on noise.

The Beautiful Irony

Here’s what gets me: the very mechanism people proposed to transcend human limitations, training on AI-generated data to break free from the finite supply of human knowledge, is mathematically proven to destroy the model’s representation of reality. The escape route collapses into a trap. It’s like trying to bootstrap yourself off the ground by pulling your own shoelaces. The harder you pull, the more you reinforce failure. Does this mean AGI is impossible? (Even though I like to say yes, I have neither enough research nor the desire to comment on it.) No. Does it mean the naive RSI path is a dead end? Mathematically, yes. The smarter path, and what labs are quietly shifting toward, is better data, better curation, better grounding in reality. Which, ironically, means humans stay in the loop longer than the singularitarians wanted. smsk.dev/2026/04/26/ai-…
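The collapse dynamic the article describes can be reproduced in a toy setting. This is a sketch under arbitrary assumptions (a 1-D Gaussian "model", 5 samples per generation, 200 generations), not the paper's construction: each generation fits a distribution only to the previous generation's outputs, with no fresh real data, and the fitted variance drains away.

```python
import random
import statistics

# Toy "curse of recursion": fit a Gaussian, sample from the fit, refit
# on those samples, repeat. No fresh real data ever enters the loop.
# Parameters (n=5 samples, 200 generations) are arbitrary for illustration.
random.seed(0)
mu, sigma = 0.0, 1.0          # the "real" distribution we start from
history = [sigma]
for generation in range(200):
    # each generation trains only on the previous generation's outputs
    synthetic = [random.gauss(mu, sigma) for _ in range(5)]
    mu = statistics.mean(synthetic)
    sigma = statistics.stdev(synthetic)
    history.append(sigma)

# the fitted spread shrinks toward 0: the tails vanish first, exactly
# the low-diversity fixed point the article calls model collapse
print(f"sigma after 200 generations: {sigma:.6g}")
```

Small per-generation samples exaggerate the effect, but the direction of drift is the point: without an external source of real data, the loop converges to a degenerate, low-variance distribution rather than climbing anywhere.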

English
0
0
1
7
Scott Erb reposted
Nick Freitas
Nick Freitas@NickJFreitas·
Democrats “increase the temperature” until something excessively violent happens and then insist that Republicans “lower the temperature” by giving Democrats everything they want…lest more violence happens. It's a political extortion racket.
English
388
3.4K
15.1K
96.6K
InfantryDort
InfantryDort@infantrydort·
@TwoRulesOfWar Diseased ideologues throw that term around like you and I throw around pleasantries. They know not what they do. And nobody holds them accountable for it.
English
7
14
278
3.1K
7% NaCl (Salty)
7% NaCl (Salty)@TwoRulesOfWar·
Accusing another commissioned officer of treason is a damn serious thing to do. You might want to roll that back before you get a phone call from a lawyer…because this seems a lot like defamation of character at the very least and very likely libel.
Dan Wilson@theP3Leader

@infantrydort As a 26-year Army veteran and one of the 47% (and growing) of Americans who vote as Independents, I can objectively state that you are a traitor to the Constitution you swore an oath to defend, and to the Soldiers of all political persuasions you’re supposed to lead.

English
9
19
290
9.7K
Scott Erb
Scott Erb@sserb·
@Hoang_HQ Wish You Were Here, The Stranger, Breakfast in America, Escape
English
0
0
0
54
Ed Latimore
Ed Latimore@EdLatimore·
False dichotomy has entered the chat.
English
1
0
18
2.3K
Scott Erb
Scott Erb@sserb·
@EdLatimore @Bowtiedplayer I hear you. I might choose a less-decisive blow. None are zero risk. But there will be physical intervention.
English
0
0
0
125
Ed Latimore
Ed Latimore@EdLatimore·
@sserb @Bowtiedplayer Y'all not hearing what my point is. Imagine, hypothetically, if you will, that you are gonna eat an involuntary charge (if he dies). And with that, we'll even say you get the minimum of 4-6. Would you still do it? Remember: I'm not saying don't whoop his ass.
English
2
0
4
377
Scott Erb
Scott Erb@sserb·
I’m not sure the type 1 error truly exists. It can compare what I said it did wrong with the current code base and agree with me. But that’s a new context window/text generation event, not a memory.

As to the type 2 error, it can compare what I said it did wrong with the code base and say “I didn’t do that.” I’m not sure I’ve ever seen that happen. Is it lying in that case? Only if we anthropomorphize.

I generally try to quantify the delta between my intent and my code base and coax the agent to narrow the delta. It’s getting much better at that, and can increasingly do it in larger chunks. But most of the problem in agentic coding is the same as in human SE teams: failure to thoroughly plan. My .02 as I keep trying to improve in my role as architect…
English
1
0
1
11
Brian Keith
Brian Keith@briankeithai·
@sserb Well there are 2 kinds of AI bugs I see. The ones where AI knows what it did wrong, and then the kind that it lies about what it did wrong. These are different kinds of problems and I appreciate the OP helping us see which problem type he was dealing with.
English
1
0
1
14
Scott Erb
Scott Erb@sserb·
I get your risk assessment. But after decades of hearing that "all sexual assault is violence" and "words are violence", my attorney had better put together a beautiful montage of "sexual assault is violence" speeches from every conceivable source. There should be music from the Rocky movies. Then we'll line up a whole host of use of force, natural law, and just war experts to testify that the use of force was both necessary and proportional (the actual legal meaning of which is "enough force for long enough to make the threat stop", which is OP's point - once the threat stopped, the force stopped). Of course, we'll also find a whole bunch of other women this guy has decided he should put his hands on to testify that they would have suffered a lot less emotional trauma had someone stood up to him earlier.
English
1
0
2
436
Ed Latimore
Ed Latimore@EdLatimore·
Dude who got laid out is a scum bag, no doubt. But is he a big enough scumbag to do 6-12 yrs if he dies from the way he hit his head going down? Because to counter the angle "he didn't stomp his head or get weapons out," the dude was also non-violent and didn't put anyone's life in danger (that's what his lawyer would say, anyway). Not saying he shouldn't have done. Just curious to know if, in that situation, you would take that risk for real time on an involuntary manslaughter charge? Probably better if he just whooped his ass with the pool stick.
English
19
1
61
9.1K
Scott Erb
Scott Erb@sserb·
I once had a chance to see the Russian Black Sea Fleet up close, including seeing the one ship they were able to get underway actually underway. Suffice to say, neither the ships nor the crews were ready for any sort of combat operations. They were minimally competent to steam from point A to point B.
English
0
0
12
470
John Ʌ Konrad V
John Ʌ Konrad V@johnkonrad·
This is the popular take. I’m still waiting for proof.

The Houthis fired thousands of drones and anti-ship missiles at commercial shipping in the Red Sea. They managed to sink exactly one ship. One. And that ship, the bulk carrier Tutor, floated for thirteen days before going down in a storm. With a salvage tug on scene, she could have been saved.

The pattern holds at the high end. The USS Abraham Lincoln, a Nimitz-class carrier, sailed into range of Iranian missiles during the recent escalation. Iran launched over a hundred at her. Zero hits. Zero.

Ukraine has had more success with drones in the Black Sea, but the picture there is muddier than the headlines suggest. We do not know how many Russian crews were sober at the time of the strikes, how many had functioning radars, or how many were operating ships their own navy had effectively abandoned to rust. A fleet that cannot keep its electronics working is not a fair test of anything.

The physics matter, and they are not on the drone evangelist’s side. Aerial drones have limited payload. Punching a hole in warship steel requires coordinating a lot of them, on the same axis, against an alerted crew with layered defenses. Surface drones can carry the explosives, but they are slow. Slow gives gunners time to engage. Slow gives helicopters time to launch. Slow gives a destroyer’s five-inch gun a turkey shoot.

Drones are useful. Drones are cheap. Drones change the math at the margins. None of that is the same as making capital ships obsolete.

Here is the test. If the writers and analysts pushing the battleship-is-dead line, including Boot, Stavridis, and the chorus behind them, want to keep selling that thesis, prove it. Take a few hundred drones, surface and aerial, the kind they say render hulls irrelevant. Put them up against a decommissioned Spruance, a stricken Perry-class frigate, and an old amphib waiting to be sunk as a reef. Run it as a SINKEX off Hawaii. Live fire, live targets, working defenses turned on, then turned off. Publish the data.

Until someone does that, we are not having a debate about evidence. We are having a debate about vibes. And vibes do not win wars.

P.S. What drone swarms do accomplish is make you exhaust your supply of munitions. And that’s the battleship’s great strength: it can carry a lot more 5-inch and CIWS rounds, plus lasers.
Admiral James Stavridis, USN, Ret.@stavridisj

. @MaxBoot makes solid points in today's @washingtonpost about the Navy Secretary's firing & the battleship program. The Iowa-class battleships were impressive — you should tour one. But they are museums for a reason. In the age of drone swarms, hypersonic missiles, and stealthy submarines, concentrating firepower in a handful of enormous, expensive, and easily targeted platforms is the wrong direction. The "Golden Fleet" sounds good. Distributed firepower across many smaller, faster, unmanned platforms is a better strategy. wapo.st/3P4vSWL

English
89
185
1K
51.9K
Scott Erb reposted
Human Progress
Human Progress@HumanProgress·
Widespread claims of rapidly worsening global inequality are not supported by the evidence. Long-term data show significant declines in inequality across income, health, education, and other important metrics, largely driven by rising prosperity in poorer countries. humanprogress.org/a-reality-chec…
English
3
18
68
5.7K
Scott Erb reposted
Elon Musk
Elon Musk@elonmusk·
Scam Altman and Greg Stockman stole a charity. Full stop. Greg got tens of billions of stock for himself and Scam got dozens of OpenAI side deals with a piece of the action for himself, Y Combinator style. After this lawsuit, Scam will also be awarded tens of billions in stock directly. The fundamental question is simply this: Do you want to set legal precedent in the United States that it is ok to loot a charity? If so, you undermine all charitable giving in the United States forever. I could have started OpenAI as a for-profit corporation. Instead, I started it, funded it, recruited critical talent and taught them everything I know about how to make a startup successful FOR THE PUBLIC GOOD. Then they stole the charity.
X Freeze@XFreeze

Interesting how it works.

Elon puts up his own money, rounds up the absolute best AI talent on the planet, leverages every connection he has to secure serious resources, and launches OpenAI in 2015 as a pure non-profit explicitly created to develop AI for the benefit of humanity, with zero profit motive and open research.

Then the “team” decides they want the bag. They push Elon out, take control, and quietly flip the entire thing into a for-profit machine. All while preaching the same sanctimonious lines on repeat: “We’re still mission-driven!” “AI for the good of humanity!” “We’d never abandon our principles!”

The ultimate betrayal: Elon got zero equity. Not a single share. He funded it. He built the foundation. He got nothing while they turned his non-profit into their personal cash cow.

This is the level of betrayal and hypocrisy we’re dealing with.

And for the record.... this lawsuit doesn’t put a single penny in Elon’s pocket. Any win goes straight back to the non-profit to restore the exact mission he founded.

English
10.1K
29.1K
173.4K
33.5M
Scott Erb reposted
Mushtaq Bilal, PhD
Mushtaq Bilal, PhD@MushtaqBilalPhD·
Sci-Hub is an evil website that pirated 85M+ research papers and made them freely available. And now they've added AI to their database to make Sci-Bot. It answers your questions using the latest full-text articles. But DO NOT use it. We should all try to make billion-dollar academic publishers richer. I'm putting the link below so you know how to avoid it.
English
777
7.8K
41.5K
3.6M
Scott Erb
Scott Erb@sserb·
@A_State_of_E @JohnGoldman People tend to remember those who break significant performance barriers. Like the 2:00:00 marathon or the 4-minute mile. It’s common knowledge who Bannister is. Only a subset of track fans knows who won the most world titles in the mile.
English
2
0
46
12.5K
E
E@A_State_of_E·
@sserb @JohnGoldman Weirdly enough, the guy who got 3rd has more World Athletics pedigree than the WR holder who won London. WRs come and go, but consistency and legacy are what will be remembered.
English
1
0
40
13.1K