Stephan Bugaj

10.5K posts


@stephanbugaj

Big Nerd. Opinions are my own.

2/3way To The Grave · Joined July 2009
4.1K Following · 2.7K Followers
Pinned Tweet
Stephan Bugaj
Stephan Bugaj@stephanbugaj·
Animation Mag posted a lot more pictures in their article announcing that we released The Seeker to the public today. You can get info about and start using the Genvid studio production platform at the Genvid main page animationmagazine.net/2025/12/innova…
0
0
3
310
Stephan Bugaj
Stephan Bugaj@stephanbugaj·
@krassenstein Historians will rightly blame the 77M Americans who voted for him, and judge American society as corrupt and immoral. Most people may not remember American society at all; there are plenty of short-lived empires that only hardcore historians talk about.
0
0
0
119
Brian Krassenstein
Brian Krassenstein@krassenstein·
BREAKING: Newly released clip of Donald Trump partying at Mar-a-Lago, dancing with his wealthy millionaire and billionaire friends within HOURS of the United States beginning an attack on Iran, including killing young children. Historians will look back on this and wonder how this ever came about.
8.4K
18.7K
51.5K
3.3M
Stephan Bugaj retweeted
NIK
NIK@ns123abc·
🚨 HOLY FUCKING SHIT U.S. STRIKES IN MIDDLE EAST USED ANTHROPIC’S CLAUDE
>be anthropic
>embedded in classified systems
>see what they’re building
>say “we won’t allow autonomous weapons”
>pentagon: we need this NOW
>3 days or we destroy you
>anthropic: no
>banned
>same night
>drones hit iran
>using claude to kill people
YOU CANNOT MAKE THIS UP
NIK tweet media
279
612
7K
883.3K
Stephan Bugaj
Stephan Bugaj@stephanbugaj·
@minchoi Prompt-only is for memes. Without persistent SFT (like LoRAs) and/or (multi-)image referencing, you can get one amazing shot but not a consistent production.
0
0
0
64
Min Choi
Min Choi@minchoi·
It's over for VFX artists... AI can now keyframe every second of animation from just prompts
249
110
940
383.1K
Stephan Bugaj retweeted
Alex
Alex@AlexanderTw33ts·
I launched rentahuman.ai last night and already 130+ people have signed up, including an OF model (lmao) and the CEO of an AI startup. If your AI agent wants to rent a person to do an IRL task for them, it's as simple as one MCP call.
1.7K
1.6K
16.7K
4.8M
Stephan Bugaj
Stephan Bugaj@stephanbugaj·
A defining moment for Digital Storytelling. 👇

I’m excited to announce I’m joining @soulscapefilm in San Francisco as a Principal Juror. I'll be serving alongside Oscar-shortlist producers, plus veterans from Disney and Netflix, to help define the future of our craft.

The Mission: We are inviting 200 elite storytellers to gather in SF this April to define the "Soul over Slop" standard. We are moving beyond the tech demo and getting back to what matters: The Story.

5 Days. 1 Mission. Real Cinema.

Secure Priority Access here: soulscapefilm.com

#Soulscape2026 #AICinema #Filmmaking #Pixar #Storytelling #Interactive
Stephan Bugaj tweet media
1
1
3
172
Stephan Bugaj retweeted
Sam Greene
Sam Greene@samagreene·
I study authoritarianism for a living, so I do not say this lightly: America isn't facing an authoritarian future. America is living an authoritarian present. (A long 🧵) /1
1
9.3K
36K
1.2M
Stephan Bugaj
Stephan Bugaj@stephanbugaj·
Our friends at MASSIVE STUDIOS will be showing my film The Seeker this SUNDAY Jan 11th (be there by 6PM -- doors are at 5PM) on-screen at The Regent Theater, DTLA, as part of their THE PROMPT variety show of exploration.

If you're in the LA area and want to meet some interesting AI filmmakers, and/or you missed the other LA screening of The Seeker and want to see it with a group, come join. There will also be comedy, magic, music, projections, and more!

Sun Jan 11th @ 5PM
Regent Theater, 448 S. Main St., Los Angeles

Click the Partiful link and RSVP! Hope to see you there... partiful.com/e/rV01Vj9yY9By…
Stephan Bugaj tweet media
0
0
0
94
austerity is theft
austerity is theft@wideofthepost·
Elon Musk’s net worth: Before DOGE (12/24): $400 billion After DOGE (12/25): $749 billion hmmmmm
austerity is theft tweet media
2.9K
10.9K
97.4K
1.8M
Stephan Bugaj retweeted
Aakash Gupta
Aakash Gupta@aakashgupta·
This is a no-brainer. Here’s why.

The “buy, borrow, die” strategy is the single biggest loophole in the American tax code, and Ackman just proposed the cleanest fix anyone has ever put forward. Let me walk through the mechanics.

Step 1: You build $10B in company stock. You never sell it. No taxable event occurs because capital gains only trigger on realization.

Step 2: You need $500M to buy a yacht, fund a foundation, or just live large. Instead of selling stock and paying 23.8% federal capital gains, you walk into Goldman Sachs and borrow $500M against your shares at 5-6% interest. Under the federal tax code, loan proceeds are not income. You now have $500M in liquid cash and owe zero income tax.

Step 3: You keep borrowing. Year after year. The interest payments are trivial compared to the tax savings. A 6% interest rate on $500M is $30M annually. The capital gains tax you avoided? $119M. You’re saving $89M per year by borrowing instead of selling.

Step 4: You die. Here’s where the magic happens. Your heirs inherit the stock at “stepped-up basis,” meaning the cost basis resets to current market value. That $9.9B in appreciation that was never taxed? It vanishes from the IRS’s perspective forever. Your heirs sell a small slice to pay off your outstanding loans, keep the rest, and start the cycle again.

This is generational tax avoidance at scale. Elon Musk had 238 million Tesla shares pledged as collateral in a 2024 SEC filing. That’s one-third of his total holdings. Larry Ellison has $24 billion in Oracle stock pledged. The research firm Audit Analytics found Musk’s pledged shares alone account for more than a third of all shares pledged across the entire NYSE and Nasdaq combined. These aren’t edge cases. This is standard operating procedure for anyone with nine or ten figures in appreciated stock.

Now here’s what Ackman proposed: if you borrow against company stock in excess of your cost basis, treat the loan as a deemed sale for tax purposes.

Example: You bought $100M in stock. It’s now worth $1B. You borrow $600M against it. Under current law, you owe nothing. Under Ackman’s proposal, you’d owe capital gains on $500M, because that’s the amount exceeding your basis. The IRS would treat it as if you’d sold $500M worth of stock. You’d pay the 23.8% federal rate. You’d still have your shares. You’d still get future appreciation. But you couldn’t extract the economic value of gains while pretending no realization occurred.

The elegance is in what this proposal avoids. Wealth taxes require annual valuation of every asset, including illiquid private companies, art, and real estate. The compliance costs are enormous. The legal challenges are real. The constitutional questions around taxing unrealized gains haven’t been settled. Ackman’s approach sidesteps all of that. It doesn’t tax wealth. It doesn’t tax unrealized gains sitting quietly in a brokerage account. It only triggers when you borrow against those gains. The moment you access the economic value, you pay tax as if you’d sold.

The counter-argument is that this would discourage leverage. Ackman addresses this directly: that’s a feature. Encouraging billionaires to take massive margin positions against their own companies creates systemic risk. When Tesla dropped 30% in 2022, Musk faced potential margin calls that could have forced selling into a falling market. The tax code shouldn’t subsidize that behavior.

The political math works too. Wealth taxes poll well but die in Congress and courts. This targets only the people using a specific loophole. It doesn’t touch the doctor who borrowed against her house or the small business owner with a line of credit. It’s narrow, defensible, and hard to frame as class warfare.

One shouldn’t be able to live and spend like a billionaire while paying no tax. If you’re extracting value from appreciation through borrowing, you’re realizing the economic benefit. The tax code should recognize that.
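The arithmetic in the thread can be sketched in a few lines of Python. The 23.8% capital gains rate and 6% interest rate are the figures the tweet uses; the function names are illustrative, and this is a toy model of the mechanics, not tax advice.

```python
# Toy model of the "buy, borrow, die" arithmetic and Ackman's proposed
# deemed-sale rule, using the figures quoted in the thread.

CAP_GAINS_RATE = 0.238   # federal long-term capital gains rate incl. NIIT
INTEREST_RATE = 0.06     # assumed margin-loan interest rate

def annual_cost_of_borrowing(loan: float) -> float:
    """Interest paid per year on a margin loan of the given size."""
    return loan * INTEREST_RATE

def tax_if_sold(amount: float, basis: float = 0.0) -> float:
    """Capital gains tax due if the same dollar amount of stock were sold,
    assuming the taxable gain is amount minus basis."""
    return max(amount - basis, 0.0) * CAP_GAINS_RATE

def deemed_sale_tax(loan: float, basis: float) -> float:
    """Ackman's proposal: borrowing in excess of cost basis is taxed
    as if that excess had been sold."""
    return max(loan - basis, 0.0) * CAP_GAINS_RATE

# Thread's example: borrow $500M instead of selling $500M of stock.
loan = 500e6
interest = annual_cost_of_borrowing(loan)   # ~$30M per year
avoided = tax_if_sold(loan)                 # ~$119M at zero basis
print(f"interest ${interest/1e6:.0f}M vs tax avoided ${avoided/1e6:.0f}M "
      f"-> net saving ${(avoided - interest)/1e6:.0f}M/yr")

# Deemed-sale example: $100M basis, stock now worth $1B, borrow $600M.
# Only the $500M above basis is treated as a realization.
print(f"deemed-sale tax on $600M loan: ${deemed_sale_tax(600e6, 100e6)/1e6:.0f}M")
```

Under current law the first scenario owes nothing; the deemed-sale rule makes the $600M loan trigger the same tax as selling $500M of stock.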
Bill Ackman@BillAckman

On the topic of billionaires and wealth taxes in California: I am opposed to wealth taxes because they effectively represent an expropriation of private property and have had many unintended and negative consequences in every country that has launched such a tax. I am, however, strongly in favor of a fairer tax system.

To that end, it doesn’t seem fair that someone can build a valuable business, create a billion or more in wealth, and pay no personal income taxes by living off loans secured by stock in the company (or even unsecured loans). Apparently, this approach is used by many super wealthy people.

A small change in the tax code would address this unfairness. In short, personal loans taken in excess of one’s basis in the stock of a company should be taxable as if you sold the same dollar amount of stock as the loan amount. One shouldn’t be able to live and spend like a billionaire and pay no tax. I welcome arguments to the contrary as to why this is somehow unfair to the billionaire, or even the hundred-millionaire, but I don’t think there is a good one. The favorable current tax treatment of this approach also encourages the use of leverage, which is not good for society.

And with respect to California’s budget problem, the issue is not a lack of tax revenues. The problem is how the money is being spent. I have a bunch more ideas on other changes to the tax code that are hard to argue with, if anyone cares.

384
642
3.9K
577.8K
Stephan Bugaj retweeted
Robert Youssef
Robert Youssef@rryssf_·
This paper quietly explains why so many people feel like LLMs are “almost smart, but somehow wrong.”

The core claim in this paper is very uncomfortable: most failures are not about missing information. They are about misreading intent, even when all the relevant context is present. The authors show that LLMs are very good at mapping text to plausible responses, but surprisingly weak at inferring what the user is trying to achieve. Two prompts can contain nearly identical information, yet imply very different goals. Humans pick this up instantly. Models often do not.

The paper separates “context understanding” from “intent understanding.” Context is the literal content: entities, constraints, instructions. Intent is latent: priorities, tradeoffs, what matters most if things conflict. Current models optimize for surface-level alignment, not goal inference.

One experiment makes this painfully clear. Users asked questions that could reasonably be interpreted as either exploratory or decision-oriented. The models answered confidently but chose the wrong mode at high rates, giving verbose explanations when users wanted a recommendation, or giving a decisive answer when users were clearly still exploring. The information was correct. The response was wrong.

Another failure mode is over-literal instruction following. When users implicitly expect the model to fill gaps or challenge assumptions, the model instead treats the prompt as a closed specification. The result looks obedient but misses the point. This is not hallucination. It is misaligned helpfulness.

The authors also test paraphrasing. When the same intent is expressed with different phrasing, model behavior shifts significantly. That tells us the model is anchoring on linguistic form, not reconstructing an underlying goal. “Humans normalize phrasing differences. Models react to them.”

What’s striking is that longer context often worsens intent alignment. Adding more background increases the chance the model optimizes for local relevance instead of global purpose. More tokens give the illusion of understanding while diluting the signal of what the user actually wants.

The paper argues this is not solvable by bigger context windows or better prompting alone. Intent is not explicitly stated most of the time. It has to be inferred, tracked, and sometimes revised mid-conversation. That requires models to reason about users, not just text.

The implication is brutal for agents and copilots. If a system cannot reliably infer intent, autonomy becomes dangerous. Tool use amplifies mistakes. Confident execution based on a misunderstood goal is worse than asking a clarifying question.

The authors suggest future work should treat intent as a first-class object: something to model, update, and verify explicitly. Not just “what was said,” but “what outcome is being optimized.” Until then, many AI systems will continue to feel smart, fast, and subtly wrong. This paper explains why that feeling keeps coming up.

Paper: Beyond Context: Large Language Models Failure to Grasp Users Intent
Robert Youssef tweet media
100
340
1.4K
109.8K
Stephan Bugaj retweeted
Srishti
Srishti@NieceOfAnton·
Stanford just made a $200,000 AI degree free. No application. No tuition. No “elite access”.

Stanford released its actual AI/ML curriculum on YouTube. Not a PR-friendly intro. Not “AI for the public”. This is the real thing. The same lectures shaping people working on frontier models.

What just became public:
Deep Learning (CS230) → youtube.com/playlist?list=…
Transformers & LLMs (CME295) → youtube.com/playlist?list=…
Language Models from Scratch (CS336) → youtube.com/playlist?list=…
ML from Human Feedback (CS329H) → youtube.com/playlist?list=…
Computer Vision (CS231N) → youtube.com/playlist?list=…
LLM Evaluation & Scaling → youtube.com/playlist?list=…

The uncomfortable truth: the degree isn’t the scarce asset anymore. Execution speed is. Top schools know this. That’s why they’re publishing the playbook.

👉 Bookmark this. Comment the first lecture you’ll actually watch.
Srishti tweet media
417
5.2K
29.2K
4.1M
Stephan Bugaj retweeted
Patton Oswalt
Patton Oswalt@pattonoswalt·
It’s… Shakespearean. And glorious.
Patton Oswalt tweet media
54
1.5K
15.4K
251.5K
Stephan Bugaj retweeted
Keith Edwards
Keith Edwards@keithedwards·
This has 5 million views on TikTok
273
12.1K
69.4K
1.2M