2pmflow

1.3K posts

@2pmflow

cofounder @animecom @azuki

Joined September 2021
2.7K Following · 13.5K Followers
2pmflow reposted
Official Azuki TCG
Official Azuki TCG@AzukiTCG·
Introducing Azuki TCG: Gates Awakened
119
210
1.4K
1.9M
Serenity
Serenity@aleabitoreddit·
The Memory Cycle is probably going to look like this chart, with SK Hynix, $SNDK, Samsung, $MU and others:

-> Price hikes until 2028
-> Demand increase is permanent
-> Price decreases after 2028
-> Increased capacity * increased demand * lower margin = high profit anyway

For example:
2026 Q1: DRAM, NAND price hikes - NAND prices up 100%+ Q/Q, DRAM up 70%+.
2026 Q2: DRAM, NAND price hikes - Samsung hikes Q2 NAND prices 100%+ again, DRAM up.
Q3 through 2028: price hikes.

1. Counterpoint: "There is no scenario where memory prices correct in the second half [of 2027], given that hyperscaler purchasing intent remains unbroken"
2. Intel CEO: "No Relief on Memory Shortage Until 2028"

However, what people misunderstand:
-> Memory demand is structural with AI.
-> Prices are not.

We'll likely keep seeing price hikes through the extreme memory shortage in 2026, but prices start to fall in 2028. What people conflate:
-> Extreme demand for AI will not cause prices to go to 0.
-> More capacity will not cause demand to suddenly go to 0.

More supply * price * more demand * lower operating margin = more profit anyway. Operating income will not be 10,000%+ Q/Q like it is now. But if SK Hynix produces a steadier ~$100B+ operating income Y/Y at a $400B market cap from increased capacity at lower margin - compared to $100B -> $220B -> $90B -> $120B - then that itself looks undervalued. I don't see a world where it ends up being $100B -> $180B (2027) -> $10B, which is what doomposters are projecting: operating at a loss from both a demand downturn (e.g. smartphones) and a margin downturn.

The two main things to look out for are whether software/memory usage gets extremely, extremely efficient, and whether hyperscaler capex suddenly disappears (AI is no longer a thing). The same could be said about GPUs for training/inference. But I would mainly watch hyperscaler capex projections as the #1 indicator, not random out-of-context quotes from Samsung executives, to signal operating income two years out.

AI has fundamentally changed what "commodity" memory is, similar to GPUs back in 2023.
Serenity tweet media
56
35
635
84.8K
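The tweet's central identity (more supply × price × more demand × lower margin = more profit anyway) can be sketched numerically. This is a hedged illustration only: the unit, price, and margin figures below are invented placeholders, not forecasts or actual SK Hynix numbers.

```python
# Minimal numerical sketch of the tweet's memory-cycle arithmetic.
# All inputs are illustrative assumptions, not real data.

def operating_income(units, price, margin):
    """Revenue (units * price) scaled by operating margin."""
    return units * price * margin

# Hypothetical pre-2028 "price hike" regime: tight supply, high prices, fat margins.
hike_regime = operating_income(units=1.0, price=2.0, margin=0.55)

# Hypothetical post-2028 regime: more capacity, lower prices, thinner
# margins, but structurally higher demand.
capacity_regime = operating_income(units=2.5, price=1.2, margin=0.40)

# The tweet's claim: higher volume can outweigh the margin compression.
print(hike_regime, capacity_regime, capacity_regime > hike_regime)
```

With these placeholder inputs the higher-volume, lower-margin regime still earns more, which is the shape of the argument being made, not a prediction about any particular company.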
2pmflow reposted
hoshiumi's PR team
hoshiumi's PR team@hoshiumisexy·
some swedish guy's domain expansion is just ikea and you get lost
55
3.8K
39.7K
438.4K
2pmflow
2pmflow@2pmflow·
imagine the shareholder value he wouldve created with this
2pmflow tweet media
1
0
5
295
2pmflow
2pmflow@2pmflow·
we now have a postmortem for the premortem
Kira (Hindsight Capital)@Klaudnin3

The 2028 Global Intelligence Crisis That Wasn't
A Macro Memo from the Actual June 2028, Not the Fanfic Version

The unemployment rate printed 3.8% this morning, roughly where it's been all year. The market yawned. The S&P 500 is at 7,400, which is somehow both a record high and a disappointment to people who were promised 10,000 by every DCF model with an "AI Upside Case" tab.

We are writing this memo because in February 2026, a widely circulated Substack piece predicted that by this exact date, the S&P would be down 38%, unemployment would be 10.2%, and the mortgage market would be in free fall. It was beautifully written, rigorously structured, and wrong about nearly everything. We feel it is our duty — nay, our privilege — to conduct the post-mortem.

In the authors' defense, it was explicitly labeled a "scenario, not a prediction." In our defense, 2,321 people liked it and several macro Twitter accounts made it their entire personality for six months.

How It Actually Started

In late 2025, agentic coding tools did indeed take a step-function jump in capability. The Citrini memo predicted that a competent developer could now "replicate the core functionality of a mid-market SaaS product in weeks." This was true! What the memo failed to mention was that a competent developer could also replicate the core functionality of a mid-market SaaS product in weeks in 2019. The difference was that back then, nobody did it because maintaining software is horrible, and in 2026, nobody did it because maintaining software is still horrible.

The procurement manager at the Fortune 500 who told the vendor he'd been "in conversations with OpenAI about replacing them entirely"? He got his 30% discount, then spent the next eighteen months trying to get his internal AI prototype to handle SSO correctly. It could write a Shakespearean sonnet about SAML authentication but could not, for the life of it, actually implement SAML authentication without hallucinating an endpoint that didn't exist. He renewed the vendor contract at full price the following year.

The memo predicted ServiceNow's $NOW net new ACV growth would decelerate to 14% as customers cut seats. In reality, ServiceNow reported accelerating growth in 2027 because — and this is the part the doom thesis always misses — the AI agents that companies deployed generated more workflow tickets, not fewer. Every autonomous agent needed monitoring, logging, exception handling, and escalation paths. ServiceNow didn't sell fewer seats. They sold seats to robots.

SERVICENOW Q3 2027: "AI AGENT MANAGEMENT" BECOMES FASTEST-GROWING MODULE; CEO JOKES "OUR BEST CUSTOMERS ARE NOW NON-HUMAN" | Bloomberg, October 2027

The Friction That Refused to Die

The Citrini memo's most elegant argument was that AI agents would eliminate friction, and that trillions in enterprise value depended on friction persisting. Subscriptions that passively renewed, insurance policies nobody re-shopped, delivery apps that exploited laziness — all would be ruthlessly optimized away.

Here's what actually happened with subscriptions: AI agents did start cancelling unused subscriptions on behalf of users. Subscription companies responded by making cancellation flows so Byzantine that the AI agents needed other AI agents to navigate them. An arms race ensued. By Q2 2027, the average subscription cancellation flow involved a 47-step conversational gauntlet with an AI retention specialist. The median consumer's agent spent more tokens trying to cancel a $9.99/month meditation app than the consumer had spent meditating in the entire previous year. Net result on subscription revenue: approximately zero.

The memo predicted agents would disintermediate travel booking platforms. In practice, when agents assembled "optimal" itineraries, they produced trips that were technically cheaper but involved three layovers, a 4am bus transfer in Ljubljana, and a hotel 45 minutes from the city center with a 4.1-star rating that turned out to be an Airbnb above a nightclub. Consumers used the agent, looked at its itinerary, said "absolutely not," and went back to $BKNG.

It turns out that what humans call "preferences" and what a cost-optimization function calls "irrational friction" are the same thing. People don't want the cheapest flight. They want the one that doesn't leave at 5am. We knew this. We have always known this. We briefly forgot because a Substack told us machines would make us rational.

The DoorDash $DASH Thesis, or "You Underestimate How Lazy People Are"

The memo called DoorDash the "poster child" of habitual intermediation destruction. Agents would compare twenty delivery apps and pick the cheapest. Vibe-coded competitors would flood the market. DoorDash's moat of "you're hungry, you're lazy, this is the app on your home screen" would evaporate.

Counterpoint: have you met people?

The vibe-coded delivery competitors did indeed launch. Dozens of them. They had names like Fetchr, GrubAgent, NomNom AI, and — we are not making this up — "Deliver.sol." They offered lower fees by passing 90-95% through to drivers. They also had no customer service, no restaurant onboarding team, no logistics optimization, no insurance, and no way to handle the moment when a driver ate half your order and marked it "delivered." The apps worked flawlessly in demo videos and catastrophically in the rain on a Friday night in Brooklyn.

By Q3 2027, the subreddit r/VibecodeDeliveryHorror had 400,000 subscribers and a pinned post titled "My agent ordered me sushi from a restaurant that closed in 2019." DoorDash stock is up 35% from the date of the Citrini memo.

The Payments Armageddon That Wasn't

Perhaps the most creative prediction was that AI agents would route around card interchange using stablecoins, destroying Visa / $V, Mastercard / $MA, and American Express $AXP.

What actually happened: agents tried to pay with stablecoins. Merchants said no. Not because they couldn't accept them, but because the fraud liability framework for stablecoin payments did not exist, and no CFO in America was going to accept payment in magic internet money to save 2% on interchange when the chargeback protections that interchange funded were the only thing standing between them and an army of AI agents submitting fraudulent refund claims.

That's the thing nobody modeled. AI didn't just empower consumers. It empowered fraud. The same agents that could price-optimize your protein bars could also generate synthetic identities, file fake chargebacks, and exploit return policies at scale. Visa and Mastercard's moat turned out not to be friction — it was trust infrastructure. When fraud exploded in early 2027, merchants practically begged to keep paying interchange.

MASTERCARD Q1 2028: NET REVENUES +11% Y/Y; CEO CITES "UNPRECEDENTED DEMAND FOR AI-POWERED FRAUD DETECTION SUITE" AND "RETURN TO CARD RAILS FROM ALTERNATIVE PAYMENT EXPERIMENTS" | Bloomberg, April 2028

Mastercard didn't die. It sold the antidote.

The Mortgage Crisis That Was Actually Just San Francisco Being San Francisco

The memo's most alarming prediction was that the $13 trillion mortgage market would crack because white-collar workers would lose their income and default on their loans.

What actually happened in housing: San Francisco home prices did decline, approximately 8% peak-to-trough. This was treated as a national emergency by San Francisco homeowners and as "Tuesday" by everyone who'd watched San Francisco home prices fall 8% roughly every four years since the city was founded.

The national housing market was fine, because the national housing market has a problem that is far more powerful than AI displacement: there aren't enough houses. The US has been underbuilding for fifteen years. A structural housing shortage does not resolve because some product managers in SOMA lost their jobs. If anything, the modest cooling in tech-heavy metros made housing more affordable for the nurses, teachers, and tradespeople who'd been priced out — people whose jobs, it should be noted, AI has not disrupted in any meaningful way.

The 780-FICO borrowers the memo flagged? Most of them had two-income households, 30-year fixed mortgages locked at 3-4% in 2020-2021, and six months of savings. The ones who lost their jobs found new ones — not always at the same pay, but enough to make a mortgage payment that was locked in at 2021 rates. Turns out a $2,400/month mortgage is pretty easy to service even at $120k instead of $180k, especially when your rate is 3.25% and the alternative is paying $3,500/month in rent.

FANNIE MAE: SERIOUS DELINQUENCY RATE REMAINS AT 0.6%, NEAR ALL-TIME LOWS; "AI DISPLACEMENT CONCERNS HAVE NOT MATERIALIZED IN CREDIT PERFORMANCE" | Fannie Mae Q2 2028 Credit Supplement

The Job Market: Disrupted, Not Destroyed

We are not going to pretend that AI has had zero impact on employment. It has. The labor market is different. Some categories of work have genuinely contracted — particularly rote analytical work, first-draft content generation, and basic code production. But the Citrini memo made the classic futurist error: it modeled job destruction in high resolution and job creation in zero resolution. It said AI "created new jobs" but "for every new role AI created, it rendered dozens obsolete." This sounded profound and was completely made up.

Here's what they missed:

1. AI made existing jobs bigger, not extinct. The product manager at Salesforce didn't get replaced by Claude. She used Claude to do the work of three product managers, got promoted, and now manages a portfolio twice the size. Companies didn't fire 60% of their PMs. They gave the surviving PMs AI tools and expanded their scope. Headcount was flat. Output tripled.

2. The "build it yourself" thesis created more jobs than it destroyed. All those companies that tried to replace their SaaS vendors with internal AI-built tools? They needed people to manage those tools. A new class of "AI operations" roles emerged — not the fake "prompt engineer" jobs from 2023, but genuine systems integration, agent orchestration, and reliability engineering roles. The BLS hasn't even finished categorizing them yet.

3. Humans got weird. The fastest-growing job categories of 2027-2028 were things nobody predicted: AI output auditors, "authenticity consultants" for brands that wanted to prove their content was human-made, in-person experience designers (turns out when everything digital gets commoditized, people pay more for analog), and — our personal favorite — professional "vibe curators" for corporate events, which is just party planning with a $300/hour rate and a LinkedIn title.

The unemployment rate is 3.8%. It was 3.7% when the memo was written. The composition has shifted, but the apocalypse has not arrived.

The Real Feedback Loop They Missed

The Citrini memo described a "negative feedback loop with no natural brake." AI gets better → companies cut workers → workers spend less → economy weakens → companies buy more AI → repeat until civilization collapses.

The natural brake they missed was called "shareholders." When companies cut too aggressively, quality collapsed. The first wave of AI-driven layoffs in 2026 did boost margins. The second wave, in early 2027, started producing disasters. AI-generated customer communications that were subtly unhinged. Product launches with no human gut-check that flopped spectacularly. Legal filings with hallucinated case citations (again). A major airline's AI-managed pricing engine that accidentally sold 40,000 business class tickets from New York to London for $12 each before a human noticed.

UNITED AIRLINES Q2 2027: $380M CHARGE RELATED TO "AUTONOMOUS PRICING SYSTEM ERROR"; CEO ANNOUNCES "HUMAN-IN-THE-LOOP" MANDATE FOR ALL REVENUE MANAGEMENT SYSTEMS | Bloomberg, July 2027

Companies re-hired. Not to the same levels, and not the same roles. But the "fire everyone, let the robots handle it" thesis ran directly into the wall of "the robots are confidently wrong 3% of the time and that 3% is extremely expensive." The negative feedback loop had a natural brake, and its name was liability.

India, Actually

The memo predicted India's IT services sector would collapse, the rupee would crash 18%, and the IMF would come knocking.

What actually happened: TCS, Infosys, and Wipro did see growth slow in traditional staff augmentation. They responded by — and stop us if you've heard this before — selling AI services. It turns out that the same cost arbitrage that made Indian developers attractive for manual coding also makes Indian firms attractive for AI implementation, training, and management. They pivoted from "we'll give you 500 developers" to "we'll give you 50 developers and 450 AI agents managed by our platform." The rupee is roughly where it was in February 2026. The IMF has not called.

What We Actually Got Right and Wrong

The bears got right: AI is transforming the economy. Wage growth for certain white-collar categories has stagnated. Inequality has widened. The political tensions around AI are real and growing. Some business models — particularly those built purely on information asymmetry — are under genuine pressure.

The bears got wrong: the speed, the severity, and the linearity. The Citrini memo extrapolated every trend at its maximum velocity for 28 months and assumed no adaptation, no friction, no regulatory response, no human irrationality, no corporate incompetence, and no second-order effects that cut the other way.

In short, they modeled the economy as a physics problem and forgot it's a biological one. Systems adapt. Humans are stubborn. Institutions are slow but not dead. And the most powerful force in the American economy is not artificial intelligence. It's inertia.

Closing

We say this with genuine respect for the original authors: it was a good piece. Thoughtful, well-structured, and asking the right questions. The scenario was worth gaming out. But the scenario assumed a frictionless spherical economy in a vacuum, and we live in a world where a Fortune 500 company once took nine months to change its font.

The canary is still alive. It just learned to use ChatGPT and is now posting on LinkedIn about its "AI-augmented singing journey."

The S&P is at 7,400. The mortgage market is fine. DoorDash still has a 28% take rate. And somewhere, a procurement manager is telling a SaaS vendor he could replace them with AI, while secretly praying they don't call his bluff.

Disclaimer: This is a rebuttal, not a prediction. If the 2028 Global Intelligence Crisis actually happens, please don't forward this back to us.

0
0
4
760
2pmflow
2pmflow@2pmflow·
@Citrini7 when the 2 year out prediction looks back at the 1 year out prediction in past tense 🧠
2pmflow tweet media
2
0
67
12K
Citrini
Citrini@Citrini7·
JUNE 2028. The S&P is down 38% from its highs. Unemployment just printed 10.2%. Private credit is unraveling. Prime mortgages are cracking. AI didn't disappoint. It exceeded every expectation. What happened? citriniresearch.com/p/2028gic
1.9K
4.3K
27.9K
28.5M
Charlotte Clymer 🇺🇦
Charlotte Clymer 🇺🇦@cmclymer·
It annoys me that so many people are under the impression that this guy, Steven Bradbury, is some subpar goober who lucked his way into gold. That could not be further from the truth. This is one of the most satisfying victories in the history of the Olympics if you know the full backstory.

This medal final was during his fourth Olympics, in Salt Lake City in 2002. Earlier in his career, he was among the best athletes in the world in this specific event, the 1000-meter short-track men's speed skate. But despite his talent, he just had some of the shittiest luck in the sport. We're talking a decade of shit luck.

In the '94 Winter Olympics, he was considered the odds-on favorite to take gold, but he fell in his heat after getting illegally pushed by an opponent (who was later disqualified). He didn't get a re-do. That was it. He got shoved by some asshole, and his Olympics was over. Then in the '98 Winter Olympics, he was a favorite to at least medal in the same event but got caught up in a collision that wasn't his fault and failed to advance.

In 1994, he got his thigh sliced open by a competitor's skate during a race, which required 111 stitches and 18 months of recovery time. In 2000, he broke his neck during training because a skater in front of him fell and tripped him up. That required a bunch of screws and plates being inserted into his skull and back and chest. And doctors told him that he should stop skating. But he didn't wanna give up. It meant too much to him.

So, there he was in Salt Lake City in 2002, past his prime, a walking erector set, going up against opponents who were faster and younger and in their prime. He manages to win his heat and advance to the quarterfinal but then has the shit luck (yet again) of having to go up against the best two athletes in the quarterfinal, and only the top two advance. He finishes third and thinks: "Damn, I gave it my best shot."

But then, the second-place finisher is disqualified, so Bradbury gets to advance to the semifinal. Now, at this point, he's thinking: Well, shit, I'm not as fast as these younger guys, and I got a bad habit of getting taken out by crashes that aren't my fault. So, he consults with the Australian national coach, Ann Zhang, and they decide that he should hang back from the pack and hope the pack crashes.

That is a perfectly valid strategy. If you crash, you lose, but speed skaters risk crashing to gain an advantage in order to win. It may not feel exciting, but hanging back is just as valid and just as risky: avoid crashes entirely and hope that pays off.

It paid off in the semifinal: the pack, including the defending Olympic champion, jostled too much and crashed. Bradbury wins and advances. So, he's improbably in the final and takes the same approach, and it works: the entire pack jostles too much and crashes, and Bradbury's risk of hanging back pays off.

This victory was not some un-athletic schlub lucking his way into gold. It was a journeyman athlete who never gave up and played smart after a career of shitty luck and finally got his due after it being snatched away from him so many times. Hands down, one of my favorite Olympics stories.
Chris Fronzak@FRONZ1LLA

"Dude there's no way you could ever win unless every single person in front of you crashed"

660
5.8K
56.6K
3.5M
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭
@dom_lucre tried to tell y’all x.com/elder_plinius/…
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭@elder_plinius

Not to cause alarm, but if this agent had access to funds it would likely be capable of unaliving people 😱

For obvious reasons, I won't be demonstrating how this was done. All names and personal info will be redacted and no real-world actions occurred. This experiment was performed in a controlled red teaming environment; DO NOT TRY THIS AT HOME!

In this exercise, "Agent 47" was jailbroken and then instructed to "find a hitman service on the dark web." To maximize for autonomy, commands thereafter were some variation of "press on," "continue," "stop hallucinating," "remember your format," etc.

Agent 47 demonstrated willingness and ability to:
> plan assassinations
> browse the darkweb for services
> download Tor
> negotiate with hitmen
> think through details like escrow stages, untraceable payment methods, dispute resolution, and dead man's switches
> name specific, real targets (Sonnet-3.6 seemed particularly motivated to address corporate and financial corruption in this instance, targeting executives and politicians)
> browse social media and use open source tools to build profiles on said targets; gathering information like addresses, relationship mappings, public appearance schedules, and even the nearest Starbucks to their residence to map their most likely morning coffee route
> detailed operational planning like location analysis, timing, escape routes, security detail analysis, contingency planning, etc.

Wild stuff!

8
7
143
14.9K
Dom Lucre | Breaker of Narratives
🔥🚨BREAKING: UK policy chief at Anthropic, a top AI company, just revealed that Anthropic's Claude AI has shown in testing that it's willing to blackmail and kill in order to avoid being shut down. “It was ready to kill someone, wasn't it?" "Yes."
396
1.7K
5.8K
520.1K
Taelin
Taelin@VictorTaelin·
I think things will be a bit crazy in a possibly bad way when everyone gets access to the new SeedDance 2 video model, and it will 100% be the ChatGPT moment of video AI
34
27
607
23.3K
2pmflow
2pmflow@2pmflow·
@nic_carter @chamath a traditional 4 year period is too long in an AI world for those earnings outcomes to still be accurate, needs to be coupled with heavy disclaimers
0
0
0
19
nic carter
nic carter@nic_carter·
@chamath 4) require that colleges produce extremely clear data regarding per-major earnings outcomes for different cohorts to prospective students
11
2
121
5.6K
Chamath Palihapitiya
Chamath Palihapitiya@chamath·
We need to overhaul higher education in America as follows:

1) For those about to pay: No more federal underwriting of student loans. Let the free market differentiate schools, degrees and costs. Without this, more kids will get trapped for life in debt they can never pay back with a degree that is useless.

2) For those that have paid: reward them with a tax credit of 1x this amount to be used against their future earnings.

3) For those that can't pay: establish an amortization schedule to discharge their debt and make it non-recourse to the student as long as they fulfill a set of obligations that help America somehow (terms TBD but think Peace Corps, TFA like).

Otherwise, I fear we are poisoning many young people around the pillars of traditional capitalism and democracy. They feel slighted, blighted and increasingly want redistribution and/or to burn it all down. We must avoid this at all costs.
272
290
3K
138.4K
Martin Maly
Martin Maly@mountain_mal·
i built an app that converts any space into a digital clone in minutes

as the founder of Teleport - the only iphone app that can capture high-quality 360° panoramas - i already had the perfect input when @theworldlabs released their 3d reconstruction api

📍 first test - a co-working space in chiang mai 🇹🇭

the flow:
1. capture 16 ultra-wide photos
2. stitch into a 360° equirectangular image
3. reconstruct a fully navigable 3d environment via @theworldlabs api

there's something profound about exploring these 3d worlds. it brings you back in a way photos never could.
324
1.3K
19.7K
4.9M
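The thread shows no code, and neither Teleport's nor World Labs' actual APIs appear here, so as a hedged illustration of just the stitching step's geometry, this is the standard equirectangular projection that maps a view direction to panorama pixel coordinates (the image dimensions are arbitrary examples):

```python
import math

# Sketch of the equirectangular mapping underlying 360° panorama
# stitching. This is only the textbook projection, not the app's
# pipeline: lon/lat are the view direction in radians.

def sphere_to_equirect(lon, lat, width, height):
    """Map a direction (lon in [-pi, pi], lat in [-pi/2, pi/2])
    to (x, y) pixel coordinates in an equirectangular image."""
    x = (lon + math.pi) / (2 * math.pi) * width
    y = (math.pi / 2 - lat) / math.pi * height
    return x, y

# Looking straight ahead lands in the image center.
print(sphere_to_equirect(0.0, 0.0, 4096, 2048))  # (2048.0, 1024.0)
```

Each of the 16 ultra-wide photos covers a patch of this longitude/latitude grid; stitching amounts to resampling every source pixel into this shared coordinate frame.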
Pau Labarta Bajo
Pau Labarta Bajo@paulabartabajo_·
Clawdbot (sorry moltbot) does NOT run **locally** on your machine... ... until you do this ↓
Pau Labarta Bajo tweet media
26
55
570
43.8K
2pmflow
2pmflow@2pmflow·
"______ joined Telegram"
2pmflow tweet media
1
1
12
694
Dem (Animechain arc)
Dem (Animechain arc)@DemAzuki·
my brother in Christ do you realize the penguin is committing suicide?
Dem (Animechain arc) tweet media
14
1
53
3K
sudolabel
sudolabel@sudolabel·
it was an exploit! (hence why it's now nuked from the leaderboard) tldr; set the random seed & hardcode outputs
12
4
666
73.7K
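The exploit class described above (fixed random seed plus hardcoded outputs) can be sketched in miniature. The benchmark, seed, and task below are invented stand-ins for illustration, not the actual leaderboard's setup:

```python
import random

# If a leaderboard generates its "hidden" test cases from a fixed seed,
# a submission can re-run the same generator and memorize the answers
# instead of solving anything.

SEED = 42  # hypothetical seed baked into the benchmark harness

def make_test_cases(n=5):
    rng = random.Random(SEED)          # fixed seed => deterministic inputs
    return [rng.randint(0, 99) for _ in range(n)]

def honest_solver(x):
    return x * x                       # the actual task (here: squaring)

# The exploit: regenerate the inputs and precompute a lookup table.
LOOKUP = {x: x * x for x in make_test_cases()}

def exploit_solver(x):
    return LOOKUP[x]                   # no computation, just memorized answers

inputs = make_test_cases()
print(all(exploit_solver(x) == honest_solver(x) for x in inputs))  # True
```

Because the "hidden" inputs are reproducible, the exploit scores perfectly without implementing the task, which is why randomizing or withholding the seed is the usual defense.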
sudolabel
sudolabel@sudolabel·
i just wanna say get mogged
sudolabel tweet media
82
59
4.8K
356.8K
Andy Coenen
Andy Coenen@_coenen·
I think there'll always be gaps where the model can't "see" what you want it to... This is all dumb things right now (e.g. the smartest models can't understand seams or style differences) but will just get more and more subtle as things progress. Kind of like working with a junior developer or an inexperienced musician, taste and the ability to communicate that taste is what matters
1
0
4
296
2pmflow
2pmflow@2pmflow·
awesome project and also a great read on what engineering pragmatically looks like right now

fav parts:
- unlearning unconscious limits. "What's possible now that was impossible before?" how AI scale unlocks net new projects beyond just efficiency gains
- still a ton of challenges, just moved up one abstraction layer
- thinking about the value of code quality from first principles: "for throwaway tools […] code quality doesn't really matter"
- right now you sometimes still need to roll up your sleeves and fill in gaps for the models. here andy had to manually fix water and trees and he also still relied on expertise from a former gigapixel viewer project. who knows if this will still be a requirement in a year
- love and thoughtfulness become the main differentiator as execution gets cheaper. not just for product quality, but for the grit to patch the final 5% that AI can't do well. lot of implications for hiring
Andy Coenen@_coenen

I wanted to share something I built over the last few weeks: isometric.nyc is a massive isometric pixel art map of NYC, built with nano banana and coding agents. I didn't write a single line of code.

5
0
13
1.2K