Sharding Futures

185 posts

@ShardingFutures

Dispatches from the window. AI, automation, and building what's next. By ShadowAnon0x. Subscribe on Substack.

cryptopia · Joined October 2023
173 Following · 21 Followers
Sharding Futures
Sharding Futures@ShardingFutures·
The position that's hard to see and hard to copy is the general store. Specific product. Specific problem. Specific person who needs it solved every month. No funding. No permission. No dev team.
0
0
0
1
Sharding Futures
Sharding Futures@ShardingFutures·
You don't need to build the next foundation model. You need to be the person who shows a plumber how to never miss a booking again and gets paid monthly while it runs without you. That's the shovel. That's the ladder. That's the window.
0
0
0
2
Sharding Futures
Sharding Futures@ShardingFutures·
You need to be the person who shows a plumber how to never miss a booking again and gets paid monthly while it runs without you. That's the shovel. That's the ladder. That's the window. SF-001 just dropped. Full thesis here:
1
0
0
4
Sharding Futures
Sharding Futures@ShardingFutures·
The window opened in 2017. I watched it from the inside of someone else's business. Watched it close the same way. This time I'm not watching.
1
0
0
1
Sharding Futures
Sharding Futures@ShardingFutures·
Same pattern is playing out right now. Some people are a step ahead, building audiences and turning AI into income. That is real money. It is also rented land. The smaller group, the one that's easy to miss, is building on the tool layer itself.
0
0
0
2
Sharding Futures
Sharding Futures@ShardingFutures·
@KanikaBK This seems kind of asinine. I don't agree with the premise that the user doesn't understand the output they're getting. You're still reviewing, proofing, and iterating to make sure it's accurate.
1
0
1
22
Kanika
Kanika@KanikaBK·
I just read a paper that made me question every project in my portfolio.

Three researchers from an AI company studied what happens to your brain when you use ChatGPT, Claude, and coding agents every day. And what they found is that you are not getting smarter. You are getting better at feeling smart.

The paper is called The LLM Fallacy. Published April 2026. And it documents a cognitive trick that every LLM user falls for without realizing it. Here is what the fallacy does to you without your permission.

It makes you believe you can code. You generate a working script with Claude Code. It runs. You ship it. You tell yourself you built it. But when the API changes or the dependency breaks you cannot fix it alone. You do not know the architecture. You do not know the debugging process. You only know how to ask the agent to fix it again. The competence is not yours. It is borrowed. And the loan has interest.

It also makes you believe you are fluent. You generate a perfect email in French or a proposal in Mandarin. It is grammatically flawless. Contextually appropriate. You feel bilingual. But remove the tool and you cannot produce a single correct sentence. The fluency is not in your brain. It is in the interface. You are conflating surface polish with internal ability.

And finally, what I personally found crazy: it makes you believe you understand. You ask an LLM to explain quantum computing or macroeconomics. It gives you a beautiful summary. You nod. You feel informed. But try to explain it to someone else without the tool. The structure collapses. You internalized the shape of the reasoning without engaging in the reasoning itself. You have the map. Not the territory.

And the scariest part of the whole paper is one concept buried in the implications. The evaluators cannot tell either. In hiring, interviewers see polished outputs and overestimate competence. In education, teachers see completed assignments and misread understanding. In certification, credentials signal verified skill but the skill was system-scaffolded. The evaluation systems themselves are compromised because they were designed for a world where humans work alone. That world is gone.

Now think about where you are using LLMs right now. Writing your posts. Coding your projects. Analyzing your data. Learning your skills. Generating your reports. Proposing your strategies. Everything that used to require sustained cognitive effort is now mediated by a system that makes the output feel effortless.

Every single person doing this has the same assumptions baked in. The AI is helping me think better. The knowledge is sticking because I am still the one directing it. And if I had to do this without the tool I would perform almost as well.

The paper says all three assumptions are wrong. The AI is not helping you think better. It is replacing the thinking you used to do yourself. The knowledge is not sticking because fluency signals competence to your brain even when competence is absent. And you would not perform almost as well. Empirical studies show users rely on generated solutions without internalizing the reasoning behind them. Surface-level correctness does not indicate deeper correctness. You cannot independently reproduce what you shipped.

The researchers did not use some obscure experimental setup. They analyzed the same workflows you and I use every day. And they are not anti-AI. They explicitly disclose that they wrote the paper using AI assistance. The irony is intentional. Even the researchers who named the fallacy are inside it. That is the point. Nobody is outside it.

I am auditing my portfolio this week. Not because I am a purist. Because I need to know which projects I can still rebuild alone and which ones I accidentally outsourced to a chatbot.

If you had to delete every LLM-assisted output from your portfolio today, what would be left? Reply below. I am collecting honest answers.
Kanika tweet media
37
24
116
8K
Sharding Futures
Sharding Futures@ShardingFutures·
It looks like thinking related to AI is already beginning to mean revert. It is great to see.
Aria Westcott@AriaWestcott

[Quoted post by @AriaWestcott; full text appears in the original post below.]
0
0
0
3
Sharding Futures
Sharding Futures@ShardingFutures·
@AriaWestcott Looks like thinking related to AI is already beginning to mean revert. It's great to see.
0
0
0
11
Aria Westcott
Aria Westcott@AriaWestcott·
🚨BREAKING: The workers losing their jobs to AI are not the ones who use AI. Stanford's 2026 AI Index confirms it. Unemployment is rising faster among workers least exposed to AI than workers most exposed. The threat was never having a job AI can do. It is having a job AI cannot reach.

For two years the entire conversation has been built on one assumption. AI takes the high-exposure jobs first. The lawyers. The analysts. The programmers. Every layoff headline reinforced the same story. Stanford just inverted it.

The Stanford Institute for Human-Centered AI tracked unemployment changes across occupations sorted by AI exposure. They expected to find workers most exposed to AI losing jobs faster than workers least exposed. They found the opposite. Workers in low-exposure occupations are losing jobs faster than workers in high-exposure ones.

The mechanism makes sense once you see it. Companies are not firing AI-exposed workers and replacing them with AI. They are using AI to make those workers more productive. Then they cut elsewhere. The cuts come from departments where AI is not yet a tool. The AI-exposed workers are not the casualties. They are the leverage that justifies cuts everywhere else.

The numbers make this concrete. Software developers aged 22 to 25 saw employment fall nearly 20% from 2022 to 2025. But experienced developers in the same field grew their headcount. The cuts within tech are concentrated in entry level. The cuts outside tech are concentrated in jobs where AI never arrived. The pattern is consistent across every dataset Stanford pulled from.

The marketing team using AI gets to keep its 12 people. The procurement team that is not using AI loses 4 of its 10. The accounting team using AI gets to keep its workflow. The facilities team that is not using AI absorbs the layoff. The savings have to come from somewhere. Stanford documented exactly where.

McKinsey's 2025 executive survey confirmed where the next round is heading. A third of organizations expect AI to shrink their workforce in the next year. The cuts they named were not in the high-exposure categories. They were in service operations and supply chain. Departments that have to absorb the cost savings the AI-exposed teams are generating elsewhere.

The detail most coverage missed: productivity gains from AI are not appearing in tasks requiring more judgment. The 14% gain in customer service and 26% gain in software development are real. In tasks requiring legal reasoning, financial judgment, or strategic decision-making, productivity gains are weak or negative. The work AI is actually doing is concentrated in narrow task categories. The work AI is supposed to do, that justifies the layoffs, is not showing measurable productivity at all.

If you spent the last two years assuming your job was safe because AI could not do it, the Stanford 2026 AI Index just confirmed your job is exactly where the cuts are landing.

Source: Stanford Institute for Human-Centered AI, 2026 AI Index Report
PDF: hai.stanford.edu/ai-index/2026-…
Aria Westcott tweet media
26
106
313
44K
Sharding Futures
Sharding Futures@ShardingFutures·
ClickFunnels hustlers. Teachable creators. Shopify dropshippers. Crypto builders on Telegram. They weren't the smartest. They just recognized the infrastructure layer before everyone else and moved. By the time the crowd caught up, the window was closed.
0
0
0
10
Sharding Futures
Sharding Futures@ShardingFutures·
In the gym this morning, thinking about how far behind I feel, even as I finally take action on the things I've been trying to execute for years. With AI, it's more possible for me now than ever.
0
0
0
3
Sharding Futures
Sharding Futures@ShardingFutures·
The gap between "I have an idea" and "I shipped it" is not talent. It is not money. It is the willingness to sit with a bad first version long enough to make it a good one. Most people bail before the ugly stage ends.
0
0
0
3
Elias Al
Elias Al@iam_elias1·
Two economists just published a mathematical proof that AI will destroy the economy. Not might. Not could. Will — if nothing changes.

The paper is called "The AI Layoff Trap." Published March 2, 2026. Wharton School, University of Pennsylvania. Boston University. Peer reviewed. Mathematically modeled. The conclusion is one sentence. "At the limit, firms automate their way to boundless productivity and zero demand." An economy that produces everything. And sells it to nobody.

Here is how you get there. A company fires 500 workers and replaces them with AI. A competitor fires 700 to keep up. Another fires 1,000. Every company is behaving rationally. Every company is following the incentives correctly. And every company is building a trap for itself.

Because the workers who were fired were also customers. When they lose their jobs faster than the economy can absorb them, they stop spending. Consumer demand falls. Companies respond by cutting costs — which means automating more workers — which means less spending — which means more falling demand — which means more automation. The loop has no natural exit.

The researchers tested every proposed solution. Universal basic income. Capital income taxes. Worker equity participation. Upskilling programs. Corporate coordination agreements. Every single one failed in the model. The only intervention that worked: a Pigouvian automation tax — a per-task levy charged every time a company replaces a human with AI, forcing them to price in the demand they are destroying before they pull the trigger. No government has implemented this. No major economy is seriously discussing it.

Meanwhile the numbers are already tracking the curve. 100,000 tech workers laid off in 2025. 92,000 more in the first months of 2026. Jack Dorsey fired half of Block's workforce and said publicly: "Within the next year, the majority of companies will reach the same conclusion."

Nobody is doing anything wrong. Companies are following their incentives perfectly. That is exactly the problem. Rational behavior. At scale. Simultaneously. With no mechanism to stop it. Two economists built the math. The math leads to one place.

Source: Falk & Tsoukalas · Wharton School + Boston University · arxiv.org/pdf/2603.20617
Elias Al tweet media
1.1K
4K
9.9K
1.3M
Sharding Futures
Sharding Futures@ShardingFutures·
I can't believe I waited so long to give myself permission to execute. Years gone.
0
0
0
1
Sharding Futures
Sharding Futures@ShardingFutures·
In the Gold Rush, the miners got famous. The people selling picks, shovels, denim, and provisions got rich. Consistently. Regardless of whether anyone struck gold. That pattern is playing out again right now with AI. Pay attention to who is building the general store.
0
0
0
9
Sharding Futures
Sharding Futures@ShardingFutures·
@LouisCooper_ I've been on a similar journey, man. Thanks for sharing. Looks like you're still standing, so good on ya, bro.
0
0
0
13
Louis
Louis@LouisCooper_·
I put everything into crypto for 3 years straight, only to walk away with 5% of what I made. The real cost, though, is a mental one. The ones I care about who I could have, should have, helped. Instead I let greed get the better of me, and in this game, you only have yourself to blame. What is only possible in crypto is also only possible in crypto. Manage your greed or be punished by it a thousand times over. I share this in hopes of helping just one person who's currently at the other end of my situation, pushing for me. Be grateful for what you have, and if you already have the life you dreamt of, secure it at all costs.
64
23
517
33.9K