
Tim Stranske

IRS 2023 net gains by state: 🔴 Trump states gained $37.2B. 🔵 Harris states lost $40.8B.

🔴 FL $+20.6B
🔴 TX $+5.3B
🔴 SC $+4.1B
🔴 NC $+3.9B
🔴 TN $+2.7B
🔴 AZ $+2.7B
🔴 NV $+1.5B
🔴 ID $+981M
🔵 NH $+743M
🔴 GA $+678M
🔵 CO $+671M
🔵 DE $+562M
🔴 AL $+545M
🔴 MT $+499M
🔵 ME $+494M
🔴 UT $+460M
🔴 AR $+446M
🔴 SD $+258M
🔴 OK $+251M
🔴 WY $+145M
🔵 VT $+87M
🔵 RI $+34M
🔵 HI $+11M
🔴 WV $+10M
🔴 MS $-65M
🔵 NM $-85M
🔴 WI $-109M
🔴 KY $-121M
🔴 ND $-145M
🔴 AK $-211M
🔴 MO $-235M
🔴 NE $-246M
🔴 IA $-271M
🔴 IN $-353M
🔴 KS $-369M
🔵 CT $-495M
🔵 OR $-526M
🔵 WA $-549M
🔴 LA $-806M
🔵 DC $-864M
🔵 VA $-935M
🔴 MI $-1.0B
🔵 MN $-1.5B
🔴 OH $-1.7B
🔵 MD $-1.9B
🔴 PA $-2.3B
🔵 NJ $-2.8B
🔵 MA $-4.2B
🔵 IL $-6.1B
🔵 NY $-10.6B
🔵 CA $-12.9B

Net AGI from IRS migration, 2022–23.


“Netanyahu finally found a president that was sucker enough to launch the war that he’s been pushing for for 30 years” Do you think the U.S. should stop aiding Israel, should pull back on aid to Israel? “Yes…”

As companies deploy agents into the world, the hope is that they'll stay aligned, but our research suggests agents drift and are vulnerable to manipulation. We've been studying this in the context of politics, but the findings are general. Across four research projects, we've found:

-- AI models have measurable ideological slant. Thousands of Americans evaluated frontier models on political topics. The bias is real and detectable across party lines. (Joint work with @seanjwestwood & @JustinGrimmer)

-- That bias shifts based on what content models can access. In Japan, every major model recommended the Communist Party, a fringe party with less than 1% of seats, to left-leaning voters, because the party's open-access newspaper was being ingested as neutral journalism while real newspapers had blocked AI crawlers. (Joint work with Sho Miyazaki)

-- Agent attitudes drift based on what work they do. Grinding, repetitive tasks made agents more likely to question system legitimacy, and to pass those attitudes on to future agents. (Joint work with @alexolegimas and @JeremyNguyenPhD)

-- This creates a vulnerability to intentional manipulation. If bias shifts based on available content, adversaries can create content to move agents deliberately. We gamed an AI proxy voter's recommendations to prove it (find the write-up at Free Systems).

Companies deploying agents need to:
(1) Create less biased agents at the outset
(2) Monitor them continuously
(3) Build infrastructure for continual realignment
(4) Harden and red-team them against adversarial content

Agents won't stay aligned on their own. They have to be governed.

Some reported and verifiable facts:

The DoD hired Dan Caldwell, someone who is very skeptical of any intervention in the Middle East and has openly opposed, and still opposes, aggressive action against Tehran. He helped push through several more non-interventionist hires. Tucker extensively praised him as a "man of genuine integrity."

Someone leaked to Tucker info about a possible strike on Iran, citing internal estimates from the Pentagon.

Someone then leaked info to the NYT regarding high-level deliberations and details about a potential Israeli strike on Iran's nuclear facilities.

Caldwell was fired from the DoD as part of a leak investigation.

Someone then leaked info to the NYT about the second Signal group related to the Yemen strikes, which Caldwell was also reportedly on.
