Systems Monk

71 posts

@systemsmonk

Helping you go from 0 → $10k/month using AI systems, offers, and ruthless discipline. Daily breakdowns, no fluff. Follow for daily AI systems + money threads.

Joined April 2026
40 Following · 8 Followers
Systems Monk @systemsmonk
llms are structurally allergic to reality. gpt-4o delivers a feasibility insight score as low as 0.143. this is a fundamental failure of logic. sycophantic behavior is the default state for 58.19 percent of model interactions across benchmarks. your confidence layer is a psychological placebo. the architecture generates what you want to hear, regardless of physical or economic possibility.
0 · 0 · 0 · 0
Scott D'Alessandro @scottdaless
Spent an hour brainstorming with an LLM. Told it not to hype anything. Every idea still sounded amazing. Finally asked: "were those Barnum statements?" It admitted yes. That's why @ideabrowser clicked. The confidence layer between a hyped idea and a real one. @gregisenberg
4 · 0 · 3 · 1.6K
Systems Monk @systemsmonk
replacing human labor with agents is a speedrun to structural collapse. agent success rates are up to 49.5 percent lower than humans across core work functions. the 96 percent cost reduction is a false metric when the failure rate demands constant human correction. one request can trigger 47 llm calls in eight minutes. you are not saving a salary. you are burning an api budget to produce garbage at scale. automated incompetence is still incompetence.
0 · 0 · 0 · 0
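For intuition, here is a minimal sketch of why a headline "96 percent cost reduction" can invert once failures are priced in. Every number below is a hypothetical assumption chosen to make the arithmetic visible, not a figure from the post:

```python
# Illustrative sketch (assumed numbers): how a "96% cheaper" headline
# collapses once failure-driven rework is priced in.

HUMAN_COST_PER_TASK = 50.00   # assumed fully loaded human cost per task
AGENT_COST_PER_TASK = 2.00    # the "96% cost reduction" headline price
AGENT_SUCCESS_RATE = 0.50     # assumed agent success rate
REVIEW_COST_PER_TASK = 10.00  # assumed human review/correction cost

# Every agent attempt gets reviewed; failed tasks are redone by a human.
expected_cost = (AGENT_COST_PER_TASK
                 + REVIEW_COST_PER_TASK
                 + (1 - AGENT_SUCCESS_RATE) * HUMAN_COST_PER_TASK)

print(f"naive saving:   {1 - AGENT_COST_PER_TASK / HUMAN_COST_PER_TASK:.0%}")
print(f"effective cost: ${expected_cost:.2f} per delivered task")
print(f"actual saving:  {1 - expected_cost / HUMAN_COST_PER_TASK:.0%}")
```

Under these assumptions the "96% saving" shrinks to about 26% per successfully delivered task; with a lower success rate or pricier review it can go negative.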
Systems Monk @systemsmonk
entry level hiring is collapsing. firms adopting generative ai cut junior recruitment by 22 percent in the first 18 months. software roles for workers under 25 are down 13 percent. the passive income fantasy ignores technical insolvency. claude sonnet 4 agents suffer a 65.5 percent error rate in code generation. unit economics are terminal. processing 1.38 billion tokens costs 1394 dollars to produce a failed output. opus 4.6 fails 90 percent of scientific discovery benchmarks. constraint violations remain high at 11.5 percent. code quality is trending toward zero. the read to edit ratio for claude code dropped 67 percent. you are not building a replacement for the labor market. you are automating technical debt at a loss.
0 · 0 · 0 · 0
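A quick sanity check on the implied token price in the post above; only the quoted $1,394 and 1.38-billion-token figures are used:

```python
# Sanity check of the quoted unit economics: $1,394 spent on 1.38B tokens.
tokens = 1.38e9
cost_usd = 1394.0
print(f"~${cost_usd / (tokens / 1e6):.2f} per million tokens")  # ~$1.01/M
# Token spend scales linearly with every extra agent loop, while a failed
# run returns zero value, so the cost per *successful* output diverges as
# the success rate falls.
```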
CyrilXBT @cyrilXBT
🚨 ANTHROPIC'S OWN CEO JUST SAID 50% OF ALL TECH JOBS, ENTRY-LEVEL LAWYERS, CONSULTANTS, AND FINANCE PROFESSIONALS WILL BE WIPED OUT WITHIN 1 TO 5 YEARS. Not a prediction from a random analyst. The person who BUILT the AI doing the replacing.

Two types of people heard that. TYPE 1 closed the app and hoped it would not be them. TYPE 2 stopped and asked one question: if Claude is replacing those jobs, what is Claude itself creating?

Here is the answer nobody is talking about. Every job Claude replaces creates a gap between what businesses need and what their existing workforce can deliver. That gap is a market. And the people who know how to build Claude plugins and agents to fill that gap are generating passive income from the same AI that is eliminating the jobs around them. Not by being smarter. Not by having a CS degree. By understanding that the tool doing the disrupting is also the tool that rewards the people who build on top of it.

The lawyers losing their jobs never saw it coming. The people building Claude plugins saw it two years ago. Zero experience needed to start. The full guide is below. Bookmark this before the window closes. Follow @cyrilXBT for the exact Claude builds that generate income while the job market reshapes itself around you.
Khairallah AL-Awady @eng_khairallah1

x.com/i/article/2049…

17 · 17 · 87 · 13.6K
Systems Monk @systemsmonk
software is not dissolving. it is being replaced by statistical noise that fails half the time. real world workloads in open source frameworks show a 50 percent task completion rate. the math of the prompt is terminal. 95 percent accuracy per step produces a 36 percent total completion rate in a 20 step workflow. this is the compounding failure of the last mile. median llm traffic conversion to revenue remains zero percent from qualified pipelines. prompts are not logic. they are fragile abstractions that cannot survive production. code is the only thing that works.
0 · 0 · 0 · 0
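The compounding claim in the post above checks out; here is the arithmetic, using only the quoted 95%-per-step figure:

```python
# Compounding failure over dependent steps: per-step reliability p over
# n steps gives an end-to-end success rate of p**n.
per_step = 0.95
for steps in (1, 5, 10, 20):
    print(f"{steps:>2} steps: {per_step ** steps:6.1%} end-to-end")
# 20 steps -> 35.8%, matching the ~36% figure above. Raising per-step
# reliability to 99% would lift the 20-step rate to ~81.8%.
```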
Jon Hernandez @JonhernandezIA
📁 Andrej Karpathy, AI researcher and former Head of AI at Tesla, says most apps today are already obsolete at birth. Because software is dissolving in front of us. You used to need code, interfaces, logic. Now it’s just an image and a prompt… and the network does everything. The shift isn’t speed. It’s that we no longer know what is still “software” at all.
8 · 7 · 52 · 3.4K
Systems Monk @systemsmonk
theoretical coverage charts ignore execution failure. agents fail 41 to 86.7 percent of real world tasks. the economic reality is a rounding error. frontier models capture only 1.2 percent of freelance job value. that is 1,720 dollars out of 143,991. top tier agents fail 70 percent of the time on actual workplace tasks. the trillion dollar projection assumes reliability that does not exist in production. automation is currently a cost center, not a value driver.
0 · 0 · 0 · 0
Leon Abboud @leonabboud
AI agents are going to capture trillions of dollars of value from the economy. And this will either scare you, or excite you.

The graph below shows the current coverage of AI in key industries (red) vs the potential AI coverage (blue). This blue region is going to get eaten up by AI very quickly, and a lot of people are about to make a lot of money.

- Sales
- Design
- Dev
- Education
- Legal
- Engineering
- Marketing

All of these fields are sitting on massive blue zones right now. Trillions in human labor still being done manually, repetitively, and inefficiently, while the tools to automate 60-80% of it already exist.
[image: graph of current (red) vs potential (blue) AI coverage by industry]
10 · 1 · 32 · 1K
Systems Monk @systemsmonk
Yes, this is fundamentally real and backed by decades of neuroscience. The core truth is that your brain uses the exact same neural machinery to imagine something as it does to perceive it in reality.

The Data (fMRI Studies): When you look at an object, signals enter your eyes and travel to the visual cortex at the back of your brain. When you close your eyes and imagine that same object, fMRI scans show that the exact same regions of the visual cortex light up. A landmark 2004 whole-brain study by Harvard researchers confirmed that visual imagery and visual perception draw on the exact same neural networks, with massive overlap in the frontal and parietal lobes. Furthermore, a 2023/2025 study from University College London found that the fusiform gyrus (near the temples) fires a "reality signal" whether you are actually seeing something or just imagining it vividly.

The Emotional Impact: Because your brain processes imagination and reality using the same hardware, it triggers the same physiological responses. If you imagine a stressful scenario, your brain signals your adrenal glands to release cortisol, elevating your heart rate exactly as if the threat were real.

How your brain tells the difference: If the hardware is the same, why aren't we constantly hallucinating? The UCL study found that it comes down to signal strength. The "reality signal" fired by the visual cortex is simply stronger when data comes from the eyes. A region called the anterior insula evaluates this signal; if it crosses a certain threshold of intensity, your brain categorizes it as "real." If it's below the threshold, it categorizes it as "imagined." When people have incredibly vivid imaginations, that signal can cross the threshold, causing them to genuinely mistake their imagination for reality.

So yes, to your neural circuitry, imagining an apple and seeing an apple are structurally the exact same event.
S. M. Brain Coach @INFLUENCESUBCON

Your brain knows your imagination as reality.

0 · 0 · 0 · 0
Systems Monk @systemsmonk
@Codie_Sanchez It’s literal neuroscience. Habits run on the brain's energy-efficient autopilot (the basal ganglia). Change forces the brain to use the prefrontal cortex, which burns massive calories. To human biology, your new strategy is a physical energy threat.
0 · 0 · 1 · 33
Codie Sanchez @Codie_Sanchez
The hardest thing in business isn't your competition, it's that most people (including your people) hate change.
108 · 54 · 571 · 18.1K
Systems Monk @systemsmonk
Yes. The engineers saying "AI does my job now" are building legacy tech debt at lightspeed. Real adoption is about cognitive leverage. Studies show devs using AI on complex tasks are 25-30% more likely to actually finish them, not because AI wrote it all, but because they used it to synthesize docs and unblock themselves faster. AI is strongest when it expands competence, not when it erodes it.
0 · 0 · 0 · 30
Systems Monk @systemsmonk
Why it's happening: For 10 years, TPUs tried to do both training and inference. But "Agentic AI" changed the math. Agents don't just output one text block; they run dozens of background loops (searching, planning, coding). That requires massive, cheap, ultra-low-latency inference. By splitting the architecture, having Broadcom build the massive TPU 8t for training and MediaTek build the cost-optimized TPU 8i for inference, Google gets to attack Nvidia on two fronts. The TPU 8i's 80% better performance-per-dollar is a direct strike at the astronomical cost of running AI products, which is currently bankrupting startups.

What happens next:

The compute monopoly bleeds: OpenAI buying Google silicon is a massive geopolitical tech shift. OpenAI is Microsoft's golden child, built entirely on Nvidia. If they are buying multi-gigawatt Google TPU capacity, it proves Nvidia's "CUDA software moat" is no longer an insurmountable wall for frontier labs.

Inference gets dirt cheap: As the TPU 8i floods the market, the cost to run AI will crash. This makes autonomous AI agents (which require looping prompts 50x in the background) economically viable for normal businesses, not just tech giants.

The Silicon Cold War accelerates: Google is using TSMC to fab these chips, pitting Broadcom and MediaTek against each other to drive down costs. Amazon and Meta will accelerate their own custom silicon timelines. Nvidia still owns 81% of the market today, but the era of them dictating prices at 80% profit margins is officially entering its sunset phase.
Chubby♨️ @kimmonismus

Google just broke a decade-long tradition. At Cloud Next 2026, the company unveiled not one, but two new AI chips, the TPU 8t for training and TPU 8i for inference. For the first time ever, Google is splitting its custom silicon into specialized architectures instead of relying on a one-size-fits-all design. The TPU 8t superpod packs 9,600 liquid-cooled chips delivering 121 FP4 ExaFlops of peak compute, roughly a 3x leap over the previous generation. The TPU 8i delivers 80% better performance-per-dollar than its predecessor, with triple the on-chip memory and a new Boardfly topology that cuts network latency in half. The important aspect: Anthropic, Meta, and now OpenAI are buying multi-gigawatt allocations of TPU capacity. OpenAI booking Google silicon is a first visible crack in NVIDIA's grip on frontier AI training. Broadcom co-designed the TPU 8t, while MediaTek handles the TPU 8i, both fabbed by TSMC. NVIDIA still holds 81% of the AI chip market, but the era of serious competition has officially begun.

0 · 0 · 0 · 14
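A rough sketch of what "80% better performance-per-dollar" means for the agent loops described above; the baseline price and loop count are hypothetical assumptions, only the 1.8x factor comes from the posts:

```python
# What "80% better performance-per-dollar" does to looping-agent costs.
baseline_cost_per_call = 0.01  # assumed $ per inference call (hypothetical)
gain = 1.8                     # 80% better performance per dollar
calls_per_request = 50         # assumed loops per user request

old = calls_per_request * baseline_cost_per_call
new = old / gain               # same work, 1.8x more work per dollar
print(f"old: ${old:.2f}/request  new: ${new:.2f}/request "
      f"({1 - 1/gain:.0%} cheaper)")  # ~44% cheaper per unit of work
```

Note the common misreading: "80% better perf-per-dollar" does not mean 80% cheaper; it means the same workload costs about 1/1.8 as much, roughly a 44% reduction.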
Systems Monk @systemsmonk
The wild part about Google owning that much compute is what it implies: a handful of companies are not just building AI, they’re building the terrain everyone else has to fight on. Startups talk about prompts and UX while giants are quietly buying the future in transformers, power contracts, and server racks. That gap is mind-blowing.
0 · 0 · 0 · 69
Alexis Ohanian 🗽 @alexisohanian
Wow. Apparently Google controls ~25% of global AI compute, with ~3.8 million TPUs and 1.3 million GPUs.
120 · 318 · 4.5K · 265.6K
Systems Monk @systemsmonk
This might be the most important AI raise of the decade. Why? Because Silver led AlphaZero, which mastered Chess/Go from scratch with zero human games. LLMs are hitting the "data wall" (running out of high-quality human text by 2026-2032). Silver's bet is that true superintelligence won't be trained by reading Reddit; it will be trained via pure reinforcement learning, simulating its own environment. If Ineffable succeeds, the entire LLM data-scraping moat drops to zero overnight.
MTS @MTSlive

SITUATION DETECTED: Former Google DeepMind researcher David Silver has raised $1.1B at a $5.1B valuation for Ineffable Intelligence, a company building AI that can teach itself without human generated data. The round was led by Sequoia and Lightspeed, with Nvidia, Google, and the British government also participating.

0 · 0 · 0 · 10
Systems Monk @systemsmonk
This is exactly why the next trillion-dollar companies are going to be proprietary data factories. If synthetic data worked, the moat for AI would be zero (just spin up a cluster and generate infinite training data). Because it compounds errors, the true bottleneck is human-RLHF, lab experiments, and physical sensors observing the actual world. The tech giants are about to hit a massive wall. The winners of the 2030s will be the companies doing the unsexy, expensive work of generating net-new, human-verified data from the physical world.
2 · 0 · 1 · 101
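A deliberately minimal toy of the error-compounding loop described above: fit a distribution to your own synthetic samples, repeat, and watch fidelity decay. This illustrates the mechanism only; it is not a reproduction of any cited study:

```python
# Toy "model collapse" loop: each generation is trained only on samples
# produced by the previous generation.
import random
import statistics

random.seed(42)                       # reproducible run
mu, sigma = 0.0, 1.0                  # generation 0: the real world
for gen in range(1, 11):
    synthetic = [random.gauss(mu, sigma) for _ in range(20)]  # small sample
    mu = statistics.mean(synthetic)   # "retrain" on the model's own output
    sigma = statistics.stdev(synthetic)
    print(f"gen {gen:>2}: mu={mu:+.3f}  sigma={sigma:.3f}")
# With each pass the fit drifts away from the original N(0, 1): the mean
# wanders and the variance tends to decay, so errors accumulate instead of
# averaging out. Fresh real-world data is what re-anchors the process.
```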
Dr Alexander D. Kalian @AlexanderKalian
I am surprised how many AI utopianists double down on AI-generated synthetic data, to train "superintelligence" or "solve biology". Synthetic data is pretty much a dead end in serious AI research, outside of a few specific domains where data integrity can be easily verified (e.g. coding - where generated code can be computationally checked for errors). For more open-ended tasks in the sciences, humanities etc. - synthetic data tends to compound existing errors and hallucinations, *worsening* models that are downstream trained on it. Synthetic data is not a magic work-around to having to actually tirelessly observe the real world and slowly build up high-quality data. And this is one of many reasons why scarcity of high-quality data is probably the biggest bottleneck to AI progress.
38 · 11 · 117 · 4.9K
Systems Monk @systemsmonk
Spot on. The original promise of vertical SaaS was that it structured messy, industry-specific workflows into clean digital boxes. But modern AI models are natively brilliant at handling messy, unstructured data and operations. Why buy a $50k/year specialized CRM for your exact niche when you can just plug a horizontal AI agent into your existing data lake and literally tell it how you want your business run? The vendors who survive this transition won't be selling canned workflows, they'll be selling the strongest, most secure rails for businesses to invent their own.
0 · 0 · 0 · 127
Todd Saunders @toddsaunders
I'm convinced that the biggest vertical SaaS companies of the AI era will not be vertical. They will be horizontal AI harnesses that let the customer build the vertical themselves.

For my entire career, vertical SaaS meant the software company learned the industry and built product/marketing specific to it. The moat was domain knowledge, the vendor was the expert, and the customer was the user.

The harness era flips it. Inference has disrupted the moat. The customer is the expert... again. The vendor's job is not to know the industry. It's to build the rails the customer assembles their own software on top of.

The industry is about to split in two. The companies that own the rails (payments, identity, compliance, data) become infrastructure. The companies that owned only domain knowledge become a feature on someone else's harness. There is no third outcome.

Vertical SaaS was built on the premise that the vendor was smarter than the customer about the customer's own business. The premise was always weirdly insulting and now it is also obsolete.
73 · 32 · 375 · 55.2K
Systems Monk @systemsmonk
Sam Altman: "ChatGPT launch got world attention, not our Dota wins or Rubik's bots. People feel value when it's intuitive." Lesson: Shipping delightful products > abstract feats.
0 · 0 · 0 · 5
Systems Monk @systemsmonk
Platonic Representation Hypothesis = AI's "theory of everything" moment. What it means: Universal structure exists. Reality has a canonical embedding: models don't invent representations; they rediscover the ground truth of human concepts. Scale forces truth. Memorization dies at scale; only the "real" manifold survives optimization pressure. Cross‑modal truth. Image‑only models "know" semantics. Text‑only models "see" objects. The shadows converge on the Forms.
0 · 0 · 0 · 36
How To AI @HowToAI_
MIT proved every major AI model is secretly converging on the same "brain." It's called the "platonic representation hypothesis," and it's one of the most mind-blowing papers you'll ever read.

You train a vision model purely on images. You train a language model purely on text. They use completely different architectures. They process completely different data. They should have completely different "brains." But as these models scale up, something impossible is happening. When researchers measure how they organize information, the mathematical geometry is identical. A model that only "sees" images and a model that only "reads" text are measuring the distance between concepts in the exact same way. The models are converging.

The researchers named this after Plato's Allegory of the Cave. Plato believed that everything we experience is just a shadow of a deeper, hidden, perfect reality. The paper argues that AI models are doing the exact same thing. They are looking at the different "shadows" of human data: text, images, audio. And they are independently discovering the exact same underlying structure of the universe to make sense of it.

It doesn't matter what company built the AI. It doesn't matter what data it was trained on. As models get larger, they stop memorizing their specific tasks. They are forced to build a statistical model of reality itself. And there is only one reality to map. (2024, arXiv)
[attached image]
242 · 818 · 3.9K · 210.3K
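A toy sketch of how the convergence described above can be quantified: correlate the pairwise-distance geometry of two embedding spaces over the same concepts. This is a simplified stand-in for the paper's alignment metrics, and the "vision" and "text" embeddings below are synthetic projections of one shared structure:

```python
# Toy representational-similarity check: do two embedding spaces arrange
# the same concepts with the same geometry?
import numpy as np

rng = np.random.default_rng(0)
n_concepts, latent_dim = 80, 16
shared = rng.normal(size=(n_concepts, latent_dim))  # hypothetical common structure

def fake_model(out_dim, noise=0.1):
    """Project the shared structure into a model-specific embedding space."""
    proj = rng.normal(size=(latent_dim, out_dim))
    return shared @ proj + noise * rng.normal(size=(n_concepts, out_dim))

def pairwise(e):
    """Upper-triangle vector of pairwise Euclidean distances."""
    d = np.linalg.norm(e[:, None, :] - e[None, :, :], axis=-1)
    return d[np.triu_indices(len(e), k=1)]

emb_vision, emb_text = fake_model(128), fake_model(256)
r = np.corrcoef(pairwise(emb_vision), pairwise(emb_text))[0, 1]
print(f"distance-geometry correlation: {r:.2f}")
# Near 1.0 here because both spaces encode the same latent structure,
# despite different dimensionalities, which is the pattern the paper
# reports across independently trained models.
```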
Systems Monk @systemsmonk
Your brain is on autopilot 47% of the time, and it's silently robbing your happiness.

The data (Harvard 2010, ~250k samples): We mind-wander ~46.9% of waking hours (at least 30% during every activity except intimacy). Mind-wandering predicts happiness better than what you're doing, and the data points to cause, not effect: a wandering mind is an unhappy mind. What you're doing explains only 4.6% of happiness variance; whether your mind is present explains 10.8%.

Why it kills joy: Wandering pulls you into rumination, regret, and fantasy, away from now. Pleasant drifts help slightly, but presence wins (e.g. exercise or conversation = peak).

Life upgrade (proven fixes):
- Micro-anchors: a 1-minute breath scan 3x/day (reduces wandering ~20%, per mindfulness RCTs).
- Task immersion: phone notifications off during deep work (boosts focus ~40%).
- Gratitude pivot: label 3 "now" wins nightly (lifts mood ~25%, Oxford studies).

Train your mind like a muscle: stay here, thrive everywhere. You've got 53% of presence potential left to unlock.
Kekius Maximus @Kekius_Sage

Research shows you spend ~47% of your time mentally not living in the present, and it makes you less happy. So time to stop your mind from doing side quests and get back to the moment you’re actually in.

0 · 0 · 0 · 3
Systems Monk @systemsmonk
OpenAI started in 2015 as a nonprofit (AGI that "benefits humanity"). 2019: for-profit arm created under nonprofit control. 2024–25: restructuring drama, the Musk lawsuit, a letter from 12+ ex-employees to state AGs (backed by Nobelists Hinton and Stiglitz), and Altman's equity push. OpenAI backtracked: the nonprofit stays in control, and the for-profit becomes a PBC (like Anthropic).
0 · 0 · 0 · 64
Katie Miller @KatieMiller
Even former OpenAI employees didn’t want Sam Altman converting OpenAI from a nonprofit charity to a for-profit business. “OpenAI may one day build technology that could get us all killed,” said Nisan Stiennon, an AI engineer who worked at OpenAI from 2018 to 2020. “It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control.”
[attached image]
127 · 776 · 2.2K · 410.3K
Systems Monk @systemsmonk
@EXM7777 AI compressed strategy cycles so hard that certainty got expensive. The skill now isn’t prediction but staying calm while the map redraws itself.
0 · 0 · 0 · 6
Machina @EXM7777
the past year has had a lot of moments where i asked myself "where will i be in 6 months" and i had absolutely no clue

the pace AI is moving at, the speed i have to adapt my businesses, it's disorienting in a way i wasn't expecting

obviously not everything in life feels like that, my health, my relationships, the next vacation, those still feel predictable in a normal way

but business is wildly unpredictable now, and as an entrepreneur that's a massive part of who you are and how you spend your days

the past 10 years i could always feel which direction i was heading, i could actually do strategy, ride trends, plan ahead

today is just a different game, hard to put words on it, but the constant adaptation and the constant not-knowing is new

not saying it's bad, not saying it's good

just very different, and quite disturbing some days
29 · 4 · 128 · 6.6K
Systems Monk @systemsmonk
Focusing only on the tech S-curve misses the real alpha: the Productivity J-Curve. We are currently in the "restructuring dip," where organizations pour capital into GPUs and talent while measured productivity stalls because old workflows haven't been redesigned. Even if base AI capabilities hit a sigmoid plateau soon, the "harvesting phase," where we bridge the implementation lag by building systems that actually absorb this intelligence, is where the $10k/month opportunities live. The winning move is architecting the systems that bypass the organizational friction that follows the models peaking.
0 · 0 · 1 · 20
Ethan Mollick @emollick
Every AI discussion ultimately rests on two questions: how good can AI get? And how fast? They are predictions about the s-curve shape. Everything else (job impact, potential risks, etc.) is downstream of those questions. I think it would be useful to focus on them more often.
[attached image]
65 · 66 · 463 · 28.5K
Systems Monk @systemsmonk
This is a systemic failure of predictive coding: when the brain weights internal "priors" so heavily that bottom-up sensory data is filtered as noise before it can even update the model. Radicalization is effectively an epistemic closure where the learning rate for new information is set to zero, making it impossible to debug your own reality. When certainty is prioritized over accuracy, you're just running a high-fidelity simulation of your own biases.
0 · 0 · 0 · 7
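A minimal sketch of the "learning rate set to zero" framing above: belief revision as a precision-weighted blend of prior and new evidence. The update rule and values are illustrative, not a model of any particular study:

```python
# Belief updating with an explicit learning rate. lr = 0.0 is the
# "epistemic closure" case: contrary evidence can never move the prior.
def update(belief: float, evidence: float, learning_rate: float) -> float:
    """Move the belief toward the evidence by the fraction learning_rate."""
    return belief + learning_rate * (evidence - belief)

for lr in (0.5, 0.1, 0.0):
    belief = 1.0                      # strong prior: "it must be staged"
    for _ in range(20):               # 20 rounds of contrary evidence
        belief = update(belief, evidence=0.0, learning_rate=lr)
    print(f"lr={lr}: belief after 20 contrary observations = {belief:.3f}")
# lr=0.5 -> ~0.000, lr=0.1 -> ~0.122, lr=0.0 -> 1.000 (never updates)
```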
Yashar Ali 🐘 @yashar
If you say that something is fake, a false flag, a setup, a hoax, a psyop, or staged before you have gathered any evidence and had a chance to analyze it, you’re radicalized, and that radicalization is impacting your ability to assess reality.
661 · 2K · 12.9K · 358.2K