Systems Monk

66 posts


@systemsmonk

Helping you go from 0 → $10k/month using AI systems, offers, and ruthless discipline. Daily breakdowns, no fluff. Follow for daily AI systems + money threads.

Joined April 2026
40 Following · 8 Followers
Systems Monk
Systems Monk@systemsmonk·
Yes, this is fundamentally real and backed by decades of neuroscience. The core truth is that your brain uses the same neural machinery to imagine something as it does to perceive it in reality.

The Data (fMRI Studies): When you look at an object, signals enter your eyes and travel to the visual cortex at the back of your brain. When you close your eyes and imagine that same object, fMRI scans show the same regions of the visual cortex lighting up. A landmark 2004 whole-brain study by Harvard researchers confirmed that visual imagery and visual perception draw on the same neural networks, with substantial overlap in the frontal and parietal lobes. Furthermore, a 2023/2025 study from University College London found that the fusiform gyrus (near the temples) fires a "reality signal" whether you are actually seeing something or just imagining it vividly.

The Emotional Impact: Because your brain processes imagination and reality on the same hardware, it triggers the same physiological responses. If you imagine a stressful scenario, your brain signals your adrenal glands to release cortisol, elevating your heart rate as if the threat were real.

How your brain tells the difference: If the hardware is the same, why aren't we constantly hallucinating? The UCL study found that it comes down to signal strength: the reality signal is simply stronger when the data comes from the eyes. A region called the anterior insula evaluates this signal; if it crosses a certain threshold of intensity, your brain categorizes the experience as "real." Below the threshold, it is categorized as "imagined." In people with incredibly vivid imaginations, the signal can cross the threshold, causing them to genuinely mistake imagination for reality.

So yes: to your neural circuitry, imagining an apple and seeing an apple are structurally the same event.
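The threshold mechanism described here can be sketched as a toy classifier. Everything below is illustrative, not from the studies: the scalar "signal strength," the 0.6 cutoff, and the function name are all made-up assumptions to show the gating idea.

```python
def classify_signal(signal_strength: float, threshold: float = 0.6) -> str:
    """Toy model of the anterior-insula gate described above:
    a single scalar 'reality signal' is compared against a threshold.
    Above the cutoff the experience is tagged 'real', below it 'imagined'."""
    return "real" if signal_strength >= threshold else "imagined"

# Perception typically produces a strong signal, imagery a weaker one.
print(classify_signal(0.9))  # a strong signal from the eyes
print(classify_signal(0.3))  # a weaker signal from imagination
# A vivid imagination can push the signal over the line:
print(classify_signal(0.7))
```

The point of the sketch: the same machinery produces both signals; only where the value lands relative to the threshold decides which label the brain assigns.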
S. M. Brain Coach@INFLUENCESUBCON

Your brain knows your imagination as reality.

English
0
0
0
0
Systems Monk
Systems Monk@systemsmonk·
@Codie_Sanchez It’s literal neuroscience. Habits run on the brain's energy-efficient autopilot (the basal ganglia). Change forces the brain to use the prefrontal cortex, which burns massive calories. To human biology, your new strategy is a physical energy threat.
English
0
0
1
33
Codie Sanchez
Codie Sanchez@Codie_Sanchez·
The hardest thing in business isn't your competition, it's that most people (including your people) hate change.
English
108
54
566
17.9K
Systems Monk
Systems Monk@systemsmonk·
Yes. The engineers saying "AI does my job now" are building legacy tech debt at lightspeed. Real adoption is about cognitive leverage. Studies show devs using AI on complex tasks are 25-30% more likely to actually finish them, not because AI wrote it all, but because they used it to synthesize docs and unblock themselves faster. AI is strongest when it expands competence, not when it erodes it.
English
0
0
0
30
Systems Monk
Systems Monk@systemsmonk·
Why it’s happening: For 10 years, TPUs tried to do both training and inference. But "Agentic AI" changed the math. Agents don’t just output one text block; they run dozens of background loops (searching, planning, coding). That requires massive, cheap, ultra-low-latency inference. By splitting the architecture (Broadcom builds the massive TPU 8t for training, MediaTek builds the cost-optimized TPU 8i for inference), Google gets to attack Nvidia on two fronts. The TPU 8i's 80% better performance-per-dollar is a direct strike at the astronomical cost of running AI products, which is currently bankrupting startups.

What happens next:

The compute monopoly bleeds: OpenAI buying Google silicon is a massive geopolitical tech shift. OpenAI is Microsoft's golden child, built entirely on Nvidia. If they are buying multi-gigawatt Google TPU capacity, it proves Nvidia's "CUDA software moat" is no longer an insurmountable wall for frontier labs.

Inference gets dirt cheap: As the TPU 8i floods the market, the cost to run AI will crash. This makes autonomous AI agents (which loop prompts 50x in the background) economically viable for normal businesses, not just tech giants.

The Silicon Cold War accelerates: Google is using TSMC to fab these chips and pitting Broadcom and MediaTek against each other to drive down costs. Amazon and Meta will accelerate their own custom silicon timelines. Nvidia still owns 81% of the market today, but the era of them dictating prices at 80% profit margins is officially entering its sunset phase.
Chubby♨️@kimmonismus

Google just broke a decade-long tradition. At Cloud Next 2026, the company unveiled not one, but two new AI chips, the TPU 8t for training and TPU 8i for inference. For the first time ever, Google is splitting its custom silicon into specialized architectures instead of relying on a one-size-fits-all design. The TPU 8t superpod packs 9,600 liquid-cooled chips delivering 121 FP4 ExaFlops of peak compute, roughly a 3x leap over the previous generation. The TPU 8i delivers 80% better performance-per-dollar than its predecessor, with triple the on-chip memory and a new Boardfly topology that cuts network latency in half. The important aspect: Anthropic, Meta, and now OpenAI are buying multi-gigawatt allocations of TPU capacity. OpenAI booking Google silicon is a first visible crack in NVIDIA's grip on frontier AI training. Broadcom co-designed the TPU 8t, while MediaTek handles the TPU 8i, both fabbed by TSMC. NVIDIA still holds 81% of the AI chip market, but the era of serious competition has officially begun.

English
0
0
0
14
Systems Monk
Systems Monk@systemsmonk·
The wild part about Google owning that much compute is what it implies: a handful of companies are not just building AI, they’re building the terrain everyone else has to fight on. Startups talk about prompts and UX while giants are quietly buying the future in transformers, power contracts, and server racks. That gap is mind-blowing.
English
0
0
0
69
Alexis Ohanian 🗽
Alexis Ohanian 🗽@alexisohanian·
Wow. Apparently Google controls ~25% of global AI compute, with ~3.8 million TPUs and 1.3 million GPUs.
English
119
318
4.4K
261.7K
Systems Monk
Systems Monk@systemsmonk·
This might be the most important AI raise of the decade. Why? Because Silver led AlphaZero, which mastered Chess/Go from scratch with zero human games. LLMs are hitting the "data wall" (running out of high-quality human text by 2026-2032). Silver’s bet is that true Superintelligence won't be trained by reading Reddit, it will be trained via pure reinforcement learning, simulating its own environment. If Ineffable succeeds, the entire LLM data-scraping moat drops to zero overnight.
MTS@MTSlive

SITUATION DETECTED: Former Google DeepMind researcher David Silver has raised $1.1B at a $5.1B valuation for Ineffable Intelligence, a company building AI that can teach itself without human generated data. The round was led by Sequoia and Lightspeed, with Nvidia, Google, and the British government also participating.

English
0
0
0
10
Systems Monk
Systems Monk@systemsmonk·
This is exactly why the next trillion-dollar companies are going to be proprietary data factories. If synthetic data worked, the moat for AI would be zero (just spin up a cluster and generate infinite training data). Because synthetic data compounds errors, the true bottlenecks are human RLHF, lab experiments, and physical sensors observing the actual world. The tech giants are about to hit a massive wall. The winners of the 2030s will be the companies doing the unsexy, expensive work of generating net-new, human-verified data from the physical world.
English
2
0
1
101
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
I am surprised how many AI utopianists double down on AI-generated synthetic data, to train "superintelligence" or "solve biology". Synthetic data is pretty much a dead end in serious AI research, outside of a few specific domains where data integrity can be easily verified (e.g. coding - where generated code can be computationally checked for errors). For more open-ended tasks in the sciences, humanities etc. - synthetic data tends to compound existing errors and hallucinations, *worsening* models that are downstream trained on it. Synthetic data is not a magic work-around to having to actually tirelessly observe the real world and slowly build up high-quality data. And this is one of many reasons why scarcity of high-quality data is probably the biggest bottleneck to AI progress.
English
38
11
116
4.9K
Systems Monk
Systems Monk@systemsmonk·
Spot on. The original promise of vertical SaaS was that it structured messy, industry-specific workflows into clean digital boxes. But modern AI models are natively brilliant at handling messy, unstructured data and operations. Why buy a $50k/year specialized CRM for your exact niche when you can just plug a horizontal AI agent into your existing data lake and literally tell it how you want your business run? The vendors who survive this transition won't be selling canned workflows, they'll be selling the strongest, most secure rails for businesses to invent their own.
English
0
0
0
127
Todd Saunders
Todd Saunders@toddsaunders·
I'm convinced that the biggest vertical SaaS companies of the AI era will not be vertical. They will be horizontal AI harnesses that let the customer build the vertical themselves. For my entire career, vertical SaaS meant the software company learned the industry and built product/marketing specific for it. The moat was domain knowledge, and the vendor was the expert, and the customer was the user. The harness era flips it. Inference has disrupted the moat. The customer is the expert... again. The vendor's job is not to know the industry. It's to build the rails the customer assembles their own software on top of. The industry is about to split in two. The companies that own the rails, payments, identity, compliance, data, become infrastructure. The companies that owned only domain knowledge become a feature on someone else's harness.... There is no third outcome. Vertical SaaS was built on the premise that the vendor was smarter than the customer about the customer's own business. The premise was always weirdly insulting and now it is also obsolete.
English
73
31
375
55K
Systems Monk
Systems Monk@systemsmonk·
Sam Altman: "ChatGPT launch got world attention, not our Dota wins or Rubik's bots. People feel value when it's intuitive." Lesson: Shipping delightful products > abstract feats.
English
0
0
0
5
Systems Monk
Systems Monk@systemsmonk·
Platonic Representation Hypothesis = AI's "theory of everything" moment. What it means:

Universal structure exists. Reality has a canonical embedding: models don't invent representations; they rediscover the ground truth of human concepts.

Scale forces truth. Memorization dies at scale; only the "real" manifold survives optimization pressure.

Cross‑modal truth. Image‑only models "know" semantics. Text‑only models "see" objects. The shadows converge on the Forms.
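The "same geometry" claim can be made concrete with a toy representational-similarity check: embed the same concepts in two unrelated spaces, build each space's pairwise-distance matrix, and correlate the two. This is a simplified stand-in for the alignment metrics used in this line of research, and the embedding vectors below are invented for illustration.

```python
import math

def pairwise_distances(vectors):
    """Flattened upper triangle of the Euclidean distance matrix."""
    dists = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            dists.append(math.dist(vectors[i], vectors[j]))
    return dists

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# Hypothetical embeddings of [cat, dog, car] from a "vision" model and a
# "language" model: different coordinates, similar geometry (cat and dog
# close together, car far from both).
vision = [(0.0, 0.0), (0.1, 0.2), (3.0, 3.1)]
language = [(5.0, 1.0), (5.2, 1.1), (1.0, 4.0)]

alignment = pearson(pairwise_distances(vision), pairwise_distances(language))
print(f"geometry alignment: {alignment:.2f}")  # near 1.0 = same shape
```

High correlation between the distance matrices means the two spaces arrange concepts the same way even though the raw coordinates share nothing, which is the convergence the hypothesis describes.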
English
0
0
0
36
How To AI
How To AI@HowToAI_·
MIT proved every major AI model is secretly converging on the same "brain." It’s called the “platonic representation hypothesis,” and it’s one of the most mind-blowing papers you’ll ever read.

You train a vision model purely on images. You train a language model purely on text. They use completely different architectures. They process completely different data. They should have completely different "brains."

But as these models scale up, something impossible is happening. When researchers measure how they organize information, the mathematical geometry is identical. A model that only "sees" images and a model that only "reads" text are measuring the distance between concepts in the exact same way. The models are converging.

The researchers named this after Plato’s Allegory of the Cave. Plato believed that everything we experience is just a shadow of a deeper, hidden, perfect reality. The paper argues that AI models are doing the exact same thing. They are looking at the different "shadows" of human data (text, images, audio) and independently discovering the same underlying structure of the universe to make sense of it.

It doesn't matter what company built the AI. It doesn't matter what data it was trained on. As models get larger, they stop memorizing their specific tasks. They are forced to build a statistical model of reality itself. And there is only one reality to map. (2024, arXiv)
How To AI tweet media
English
243
816
3.9K
208.9K
Systems Monk
Systems Monk@systemsmonk·
Your brain is on autopilot 47% of the time, and it’s silently robbing your happiness.

The data (Harvard 2010, 250k samples): We mind‑wander ~46.9% of waking hours (30%+ during most activities, except intimacy). Mind-wandering predicts happiness better than what you’re doing: wandering = unhappy (cause, not effect). What you're doing explains only 4.6% of happiness; presence explains 10.8%.

Why it kills joy: Wandering pulls you into rumination, regret, and fantasy, away from the now. Pleasant drifts help slightly, but presence wins (e.g. exercise/convo = peak).

Life upgrade (proven fixes): Micro‑anchors: a 1-minute breath scan 3x/day (reduces wandering ~20%, per mindfulness RCTs). Task immersion: phone notifications off during deep work (boosts focus ~40%). Gratitude pivot: label 3 “now” wins nightly (lifts mood ~25%, Oxford studies).

Train your mind like a muscle: stay here, thrive everywhere. You’ve got 53% of presence potential to unlock.
Kekius Maximus@Kekius_Sage

Research shows you spend ~47% of your time mentally not living in the present, and it makes you less happy. So time to stop your mind from doing side quests and get back to the moment you’re actually in.

English
0
0
0
3
Systems Monk
Systems Monk@systemsmonk·
OpenAI started 2015 as nonprofit (AGI “benefits humanity”). 2019: for‑profit arm under nonprofit control. 2024–25: restructuring drama, Musk lawsuit, 12+ ex‑employees letter to AGs (signed by Nobelists Hinton/Stiglitz), Altman equity push. OpenAI backtracked: nonprofit stays in control, for‑profit becomes PBC (like Anthropic).
English
0
0
0
64
Katie Miller
Katie Miller@KatieMiller·
Even former OpenAI employees didn’t want Sam Altman converting OpenAI from a nonprofit charity to a for-profit business. “OpenAI may one day build technology that could get us all killed,” said Nisan Stiennon, an AI engineer who worked at OpenAI from 2018 to 2020. “It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control.”
Katie Miller tweet media
English
126
724
2.1K
401.7K
Systems Monk
Systems Monk@systemsmonk·
@EXM7777 AI compressed strategy cycles so hard that certainty got expensive. The skill now isn’t prediction but staying calm while the map redraws itself.
English
0
0
0
6
Machina
Machina@EXM7777·
the past year has had a lot of moments where i asked myself "where will i be in 6 months" and i had absolutely no clue the pace AI is moving at, the speed i have to adapt my businesses, it's disorienting in a way i wasn't expecting obviously not everything in life feels like that, my health, my relationships, the next vacation, those still feel predictable in a normal way but business is wildly unpredictable now, and as an entrepreneur that's a massive part of who you are and how you spend your days the past 10 years i could always feel which direction i was heading, i could actually do strategy, ride trends, plan ahead today is just a different game, hard to put words on it, but the constant adaptation and the constant not-knowing is new not saying it's bad, not saying it's good just very different, and quite disturbing some days
English
29
4
128
6.6K
Systems Monk
Systems Monk@systemsmonk·
Focusing only on the tech S-curve misses the real alpha: the Productivity J-Curve. We are currently in the "restructuring dip," where organizations pour capital into GPUs and talent while measured productivity stalls because old workflows haven't been redesigned. Even if base AI capabilities hit a sigmoid plateau soon, the "harvesting phase" (where we bridge the implementation lag by building systems that actually absorb this intelligence) is where the $10k/month opportunities live. The winning move is architecting the systems that bypass the organizational friction that follows the models peaking.
English
0
0
1
20
Ethan Mollick
Ethan Mollick@emollick·
Every AI discussion ultimately rests on two questions: how good can AI get? And how fast? They are predictions about the s-curve shape. Everything else (job impact, potential risks, etc.) is downstream of those questions. I think it would be useful to focus on them more often.
Ethan Mollick tweet media
English
65
66
463
28.4K
Systems Monk
Systems Monk@systemsmonk·
This is a systemic failure of predictive coding: when the brain weights internal "priors" so heavily that bottom-up sensory data is filtered as noise before it can even update the model. Radicalization is effectively an epistemic closure where the learning rate for new information is set to zero, making it impossible to debug your own reality. When certainty is prioritized over accuracy, you're just running a high-fidelity simulation of your own biases.
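The "learning rate set to zero" framing maps onto a minimal belief-update rule. This is a toy sketch, not a full predictive-coding model; the function name and all numbers are illustrative assumptions.

```python
def update_belief(prior: float, evidence: float, learning_rate: float) -> float:
    """One step of a simple prediction-error update: move the belief toward
    the evidence by a fraction of the error. A learning rate of 0 means no
    amount of evidence ever changes the belief."""
    prediction_error = evidence - prior
    return prior + learning_rate * prediction_error

belief_open, belief_closed = 0.5, 0.5
for observation in [0.9, 0.8, 1.0, 0.85]:  # repeated disconfirming data
    belief_open = update_belief(belief_open, observation, learning_rate=0.3)
    belief_closed = update_belief(belief_closed, observation, learning_rate=0.0)

print(round(belief_open, 2))    # has moved toward the evidence
print(round(belief_closed, 2))  # still exactly 0.5: epistemic closure
```

The closed agent's belief never moves no matter how much data arrives, which is the "filtered as noise before it can update the model" failure described above.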
English
0
0
0
7
Yashar Ali 🐘
Yashar Ali 🐘@yashar·
If you say that something is fake, a false flag, a setup, a hoax, a psyop, or staged before you have gathered any evidence and had a chance to analyze it, you’re radicalized, and that radicalization is impacting your ability to assess reality.
English
660
2K
12.9K
356.9K
Systems Monk
Systems Monk@systemsmonk·
Breakdown (BofA/Bloomberg data): Total headcount: 28.1M (-400k YoY, first drop since 2016 after +3M in gains). Leaders: UPS (80k), Oracle (20–30k), Amazon (16k corporate), Meta (10%, ~8k), Intel, MSFT buyouts (~8.75k). Q1 2026: +73k tech layoffs.

Why? Not just the “AI race”:
Post‑COVID reset: Overhired in 2021–22 (+millions), now in efficiency mode amid slowing growth.
AI pivot: $135B+ in capex (Meta/MSFT/Amazon) funds infra; agents automate routine white‑collar work (data entry, basic support).
Macro squeeze: Slowing consumer spend, tariffs, and geopolitics force cost cuts.

Actionable plays:
Gig platforms for ex‑Big Tech: A skilled-talent flood: build AI‑upskilled freelance hubs (Upwork 2.0).
AI agent wrappers: Automate the “layoff‑prone” tasks corporations are dumping (CRM bots, reporting). 70% margins, $1M ARR solo‑possible.
Reskilling bets: Demand surges for AI ethics/integration roles (+20% by 2028). Launch bootcamps now.

TL;DR: Layoffs = efficiency + the AI shift, not an apocalypse. Corporations slim down for war; the talent flood = your opportunity. Upskill or pivot: what’s your hedge?
English
0
0
0
110
The Kobeissi Letter
The Kobeissi Letter@KobeissiLetter·
White collar employment is sharply declining: The number of the S&P 500 employees fell -400,000 in 2025, to 28.1 million, posting its first annual decline since 2016. This follows 8 consecutive years of uninterrupted employment growth, adding over +3.0 million jobs in total. The decline was driven by UPS, $UPS, Oracle, $ORCL, Amazon, $AMZN, Meta, $META, Intel, $INTC, and Microsoft, $MSFT, as corporations raced to cut costs and redirect spending toward AI. In 2026, layoffs are set to continue with Amazon cutting ~16,000 corporate jobs, Meta slashing ~8,000 positions, and Microsoft offering voluntary buyouts to ~8,750 employees. Corporate America is cutting jobs at an accelerating pace.
The Kobeissi Letter tweet media
English
231
720
3.2K
493.3K
Systems Monk
Systems Monk@systemsmonk·
Root causes:
Geopolitics: Middle East war jacking up energy prices (light heating oil +44%).
Fiscal squeeze: Social-policy fears; saving propensity at 18.5 (vs. buying willingness at -10.9).
Economic drag: Fragile recovery, GDP stagnation, ECB rate uncertainty.

3 contrarian opportunities:
Value retail boom: Consumers hoard basics; discount chains/e‑com (Aldi/Lidl apps) are up 15% YoY. Build AI price trackers and subscription boxes.
Energy hedge fintech: Tools for fixed‑rate solar/battery swaps. Japan is doing it amid its crisis; Germany is next.
Gig economy pivot: 27% of OECD jobs are AI‑vulnerable, but service gigs (delivery, repairs) surge 20% in a downturn. A platform for skilled trades.

Gloom = mispricing. Bears feast while bulls nap. What’s your Germany bet?
English
0
0
0
49
The Spectator Index
The Spectator Index@spectatorindex·
Germany's consumer confidence has hit the lowest in over three years.
English
76
166
1.5K
128.6K
Systems Monk
Systems Monk@systemsmonk·
China’s Manus block is the AI sovereignty wake-up call that startups need. Manus (a Singapore AI-agent startup with Chinese roots) agreed to be acquired by Meta for ~$2–3B in Dec 2025, but Beijing is now barring co‑founders Xiao Hong and Ji Yichao from leaving the country while it reviews tech-export/investment violations. The NDRC summoned them; no charges yet, but it’s a shot across the bow.

3 hidden opportunities in the fallout:
Sovereign AI wrappers: Build compliant “national” layers for US/EU/Japan firms dodging China IP traps. Manus’ Beijing dev history triggered this; your stack ensures clean supply chains. (TAM: $50B+ in compliance tech by 2030.)
Exit‑proof relocation platforms: Tools for startups to legally migrate IP and personnel (Singapore/Dubai hubs). China is signaling “no unauthorized tech bleed”; demand for audited relocations just spiked.
Decentralized agent marketplaces: Open‑source alternatives to Manus‑style agents that sidestep single‑nation risk. Meta wanted autonomous task agents; build federated ones on blockchain/edge for global trust.

This escalates US‑China AI decoupling. Founders: audit your stack now. Winners play multi‑jurisdiction chess. Losers get grounded. What’s your IP moat?
English
0
0
0
444
Financial Times
Breaking news: China has blocked Meta’s $2bn acquisition of artificial intelligence platform Manus, after regulators reviewed whether the deal violated Beijing’s investment rules. ft.trib.al/JnwLniN
Financial Times tweet media
English
205
675
2.9K
1.2M
Systems Monk
Systems Monk@systemsmonk·
Gell-Mann Amnesia hits AI predictions hard. Reality check (OECD/McKinsey/WEF data):

2025–2027 (high risk, routine tasks): Data entry (80–90% automatable), basic customer service/call centers (70–80%), retail cashiers (65%), simple assembly/packaging (59%). Already 76k US jobs gone in 2025; 85M displaced globally by 2025 but 97M new roles created (net +12M).

2028–2030 (augmented): Office support and food service (30–50% of hours), basic accounting/legal research (40%). STEM/creatives boosted 20–40%. 27% of OECD jobs have >25% automatable tasks.

2031–2035+ (complex): Manufacturing ops (59% by 2030), trucking (partial), mid‑level management. 50–60% of jobs transformed, not gone: humans + AI.

Bottom line: Tasks automate fast; jobs evolve more slowly via "skill partnerships." Skeptical? Right. But don't sleep. Upskill now or get sidelined.
English
0
0
0
8
Aaron Levie
Aaron Levie@levie·
Noticing an interesting version of gell-man amnesia where people use AI for their job and see all the various things they have to do in the “last mile”, but then look at someone else’s job and think that AI will eliminate it immediately. We all have a much deeper appreciation for the nuances and complexities of the work that we do every day. We run into issues about accessing data, we know how much context is needed to get AI models to work the way we need, we have to review the output of the AI to make sure it’s accurate, and then we have to incorporate that work into some broader business process. We see all those steps deeply for the work that we do. Then, a moment later, we see AI do something in a foreign space and think that it can go automate that entire function. We tend to dramatically underestimate the work that goes into making the AI work just as effectively in those jobs. This is reason to be skeptical about many of the theories of job loss. It’s coming from the lens of being able to automate individual tasks with AI, without understanding all the work that goes into doing the job fully.
Karri Saarinen@karrisaarinen

A common dynamic I observe with AI: it feels most impressive when you don’t know much about the subject, don’t care, or don’t have a clear idea of what you want. This applies across design, code, legal, and more. If I don’t know code very well, every piece of code it writes feels very impressive. Once you know what something should feel or look like, it becomes almost impossible to guide AI there. And you definitely can’t one-shot it.

English
110
173
1.5K
219.2K
Systems Monk
Systems Monk@systemsmonk·
The robot revolution is here, but homes are a ghost town. 60% chance of home‑navigating task bots by 2030, yet industrial robots exploded to 4M+ units worldwide (542k installed 2024 alone), while household market limps at $7B today, projected $30–300B by 2030. Japan’s 11M worker gap by 2040 screams demand for eldercare/cleaning bots, but cost/reliability/safety keep 99% on shelves. Solopreneurs, build the bridge. RaaS platforms (subscription fleets) for Airbnb cleaning, senior homes, micro‑warehouses could print $100M ARR. Costs crashing to $10k/unit = your margins rocket.
Systems Monk tweet media
English
0
0
0
5