Isegoria
@Isegoria

81.9K posts

From the ancient Greek for equality in freedom of speech; an eclectic mix of thoughts, large and small

Joined January 2008
217 Following · 1.8K Followers
Isegoria retweeted
Casey Handmer@CJHandmer·
Amazon: Re-invests in fundamental infrastructure, shows no profit for 20 years. Capabilities prove fundamental to COVID resilience, procurement, government functions, running the entire Internet. Amazon has a bigger economy and pays more tax than most states. Senator Warren: Bezos is hoarding wealth.
Elizabeth Warren@SenWarren

Jeff Bezos has $222 billion. If he paid my wealth tax this year, we could fund insulin in America for everyone who needs it plus free school lunch for every kid in Texas—and have plenty of money left over. And Bezos would still have $215 billion to spare.

29 replies · 54 retweets · 1.1K likes · 44K views
Isegoria retweeted
Christopher F. Rufo ⚔️@christopherrufo·
The quality of the X feed will improve if it rewards sharing and discussion of substantive articles rather than copied-and-pasted short-form video. This will, to be fair, require sacrificing some on-timeline user minutes in favor of X's long-term value as a public forum.
Ryan Burge 📊@ryanburge

In April of 2023, I was getting a third of my Substack traffic from Twitter. By August it was 1-2%. A quick Google search reveals that Elon's first major algorithm tweak was in the spring of 2023. I saw the impact of that in real time.

20 replies · 21 retweets · 255 likes · 39K views
Charles Murray@charlesmurray·
As a faithful follower of @DissidentRight, how is it possible that I didn't even know this book exists? John, did you not post the news repeatedly, like the rest of us ink-stained wretches with new books?
John Derbyshire@DissidentRight

Heartfelt thanks to Auguste Meyrat at The Federalist for a positive review of my book x.com/FDRLST/status/… and the same to Peter Brimelow for bringing the review to my attention x.com/peterbrimelow/… And while I'm at it, more of those thanks -- belated but none the less heartfelt -- to Paul Gottfried, Amy Wax, Nick Land, and Erich Eichman for their generous blurbs on the book's dust cover.

2 replies · 2 retweets · 48 likes · 6.9K views
Isegoria retweeted
Alex@notcomplex_·
Top 30 first names in Danish citizenship grants: There are well over ten times as many Mohammads as Michaels
[image attached]
5 replies · 11 retweets · 79 likes · 3.8K views
Jonathan Jeckell@jon_jeckell·
Anyone know where I can get a rough order of magnitude for the cost, range, and payload for the Baba Yaga heavy drone?
2 replies · 0 retweets · 0 likes · 108 views
Isegoria retweeted
i/o@avidseries·
Percentage of world population that is Muslim: 24%
Percentage that is Jewish: 0.2%
Number of Nobel Prizes in science, medicine, and economics awarded to Muslims: 4
To Jews: 194*
[image attached]
118 replies · 482 retweets · 3.9K likes · 98.1K views
Isegoria@Isegoria·
“In short, most people who follow the account do not see a given post. This is normal for smaller, non-optimized accounts on X—visibility depends heavily on the algorithm favoring consistent, high-engagement content.”
0 replies · 0 retweets · 5 likes · 115 views
Isegoria@Isegoria·
Grok just confirmed for me that my posts would reach 10 times as many people without the modern algorithm.
5 replies · 2 retweets · 25 likes · 3.5K views
Isegoria retweeted
Nate Silver@NateSilver538·
The NYT published a link to critical original reporting on Iran 45 minutes ago. A good, fair story. They have 53m followers. The engagement metrics you display say they got 94 likes and 33 retweets out of that. Is that accurate? And if so, shouldn't you work on a better algo?
[image attached]
514 replies · 347 retweets · 5.4K likes · 1.2M views
Isegoria retweeted
Ryan Burge 📊@ryanburge·
When I first started my Substack, it was before Elon changed the algorithm. About 1 in 4 clicks on my posts would come from Twitter. If I post a link this morning to my newest post, 1 in 1500 or 2000 clicks will come from Twitter. They are absolutely still deboosting.
Nikita Bier@nikitabier

@NateSilver538 It’s paywalled. If only 0.1% of users can derive value from the content, it will organically rank lower.

54 replies · 123 retweets · 2.1K likes · 135.8K views
Isegoria@Isegoria·
“When you organize text as a directory of files instead of a flat context window, coding agents navigate it the same way they navigate a codebase.”
God of Prompt@godofprompt

🚨 BREAKING: Duke researchers just proved that coding agents are better at processing long documents than models with million-token context windows. Not because of longer context, but because grep and sed are better retrieval tools than attention: +17.3% average improvement over state-of-the-art across five benchmarks spanning 188K to 3 trillion tokens.

The setup is almost insultingly simple. Instead of feeding documents into a context window or building a retrieval pipeline, Duke placed text corpora into directory structures and handed them to off-the-shelf coding agents. No task-specific training. No architectural modifications. No special prompting beyond the file path and the question. The agents used terminal commands, wrote Python scripts, and figured out the rest themselves. The results beat every published baseline across four out of five benchmarks.

The key insight is about what coding agents already know. These models trained on millions of code repositories: hierarchical file structures, grep patterns, sed commands, iterative script refinement. That training wasn't for document processing, but it transfers directly. When you organize text as a directory of files instead of a flat context window, coding agents navigate it the same way they navigate a codebase. The inductive prior already exists. Nobody had thought to use it this way.

The emergent behavior is the most striking finding. Nobody told the agents how to process the documents. On multi-hop retrieval tasks, they autonomously developed iterative query refinement: starting with an initial search, extracting entities from the results, using those entities to inform the next search, chaining six reasoning hops without a single instruction to do so. On analytical tasks requiring aggregation across thousands of data points, they abandoned search entirely and wrote custom Python classifiers. On standard reading comprehension, they used a hybrid of targeted searches and focused reading. Three completely different strategies, all emerging without prompting, matched to the task structure.

The benchmark numbers across all five evaluations:
→ BrowseComp-Plus (750M-token corpus): 88.5% vs 80.0% best published (+11% relative)
→ Oolong-Synthetic (536K tokens): 71.75 vs 64.38 best published (+11% relative)
→ Oolong-Real (385K tokens): 37.46 vs 24.09 best published (+56% relative)
→ LongBench (188K tokens): 62.5% vs 63.3% best published (competitive)
→ Natural Questions (3-trillion-token corpus): 56.0% vs 50.9% best published (+10% relative)
→ Average improvement over state-of-the-art: +17.3%
→ Gains hold across both Codex and Claude Code as base agents
→ No task-specific training, no fine-tuning, no specialized architecture

The counterintuitive finding about retrieval tools is important. When the researchers added BM25 or dense-embedding retrieval on top of the coding agents, performance did not consistently improve and sometimes degraded. The explanation is behavioral. Without a retriever, agents issue roughly 15 native search commands per query. With a retriever, that drops to 8 or 9. The retriever becomes the default discovery mechanism and displaces the broader filesystem exploration that agents do autonomously. Since retrieval ranking is imperfect, the substitution causes the agent to miss relevant context it would have found on its own. The tool intended to help becomes a ceiling.

The file system structure matters more than it sounds. When documents are organized as individual files in a directory hierarchy mirroring a code repository, agents use sed to extract specific line ranges and index content by file and line number. When the same documents are stored as a single JSON file, agents fall into repeated corpus-wide scans using ripgrep. The folder structure gives agents a coordinate system. The flat file removes it. Performance gap: 6 percentage points on the same benchmark with the same agent.

Context windows are scaling in the wrong direction. The bottleneck was never how many tokens a model can see. It was always how well the model can navigate what it's looking at. Coding agents figured out a better solution. Nobody asked them to.

0 replies · 0 retweets · 1 like · 147 views
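The directory-as-corpus setup the thread above describes can be sketched in a few lines. Everything below is illustrative: the file names, the toy "sections", and the `grep` helper are hypothetical stand-ins for the paper's real benchmarks and for an agent's actual `grep -rn` calls.

```python
import os
import re
import tempfile

# Toy corpus: one file per section of a long document, laid out as a
# directory tree the way the thread describes (contents are made up).
sections = {
    "01_intro.txt": "This report covers heavy-lift drone logistics.",
    "02_costs.txt": "Unit cost of the heavy variant is roughly $20,000.",
    "03_range.txt": "Operational range is about 10 km with a 15 kg payload.",
}

root = tempfile.mkdtemp()
for name, text in sections.items():
    with open(os.path.join(root, name), "w") as f:
        f.write(text)

def grep(pattern, directory):
    """Minimal stand-in for an agent's `grep -rn`: returns
    (filename, line_number, line) for every matching line."""
    hits = []
    for fname in sorted(os.listdir(directory)):
        with open(os.path.join(directory, fname)) as f:
            for lineno, line in enumerate(f, 1):
                if re.search(pattern, line):
                    hits.append((fname, lineno, line.strip()))
    return hits

# Instead of reading the whole "document", search it like a codebase.
hits = grep(r"range", root)
print(hits[0])  # ('03_range.txt', 1, 'Operational range is about 10 km ...')
```

The point being made is that file names and line numbers give the agent a coordinate system to navigate and re-query; a flat million-token blob offers no such handle.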
Isegoria retweeted
Kagan.Dunlap@Kagan_M_Dunlap·
U.S. Air Force Special Operations Command led an exercise in early May 2023 called Exercise Agile Chariot. This took place on closed sections of public highways used as temporary runways, specifically on Wyoming Highway 789 west of Riverton, in Fremont County, near areas like Hudson and the Wind River area. Some related highway operations also happened on Wyoming Highway 287 near the Rawlins or Casper areas in south-central Wyoming.

During the exercise, MC-130Js, which look very similar to C-130s, landed with Little Birds inside; the helicopters were offloaded and reassembled, followed by a combat search-and-rescue drill with a simulated casualty.

The exercise focused on Agile Combat Employment (ACE) tactics, training participants to operate from austere, dispersed locations like highways instead of fixed airbases. It involved landing the MC-130J, rapidly deploying the Little Birds for a simulated combat search-and-rescue personnel-recovery mission, fast-roping, securing the site, and extracting personnel. Wyoming state and local authorities (including the DOT) temporarily closed the highways for safety.
[four images attached]
12 replies · 127 retweets · 1.6K likes · 78.4K views
Isegoria retweeted
Razib Khan 🧬 ✍️@razibkhan·
to reiterate: the fear of cities that was drilled into suburban or small-town kids and teens is hard to explain today. it really didn't abate until the late 1990s (the culture lagged the decline in crime rates)
Razib Khan 🧬 ✍️@razibkhan

i have been visiting new york city since the mid-80s pretty regularly except for a break btwn 91 and 2000. the bloomberg era was the best imo but you can't compare the city today to the constant fear and chaos prevalent in the 80s. i was there

3 replies · 2 retweets · 53 likes · 4K views
Isegoria retweeted
Nate Silver@NateSilver538·
These are the Twitter/X accounts with the most engagement so far in 2026. I suppose I had some intuition for how bad it was, but jeez, this is what you get when the ecosystem is broken.
[image attached]
7.5K replies · 5.5K retweets · 29.5K likes · 20M views
Isegoria retweeted
Jonathan Jeckell@jon_jeckell·
I assert this is the reason it took so long for Ukraine to start hitting Russia’s obvious vulnerable point: donor nations freaking out about oil prices influencing elections.
Special Kherson Cat 🐈🇺🇦@bayraktar_1love

The only adequate response to this idiocy is to request foreign allies to halt their regular airstrikes on Iran's oil infrastructure amid a surge in global oil and fuel prices driven by the war with Russia.

0 replies · 1 retweet · 2 likes · 106 views