refrigeratedcrypto
@NH3Crypto

14K posts

The shorter the thought, the longer it lasts.

Virginia, USA · Joined January 2021
4.1K Following · 1.4K Followers
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
I agree with this of course but I wonder still how quickly we can close the skills gap. Most people are way behind on utilizing current tools and appear slow to adopt. Seems like this generation gets wiped out due to doomerism and unwillingness to change and we will have to wait for the next technologically adept generation to fill the void.
0
0
0
7
Marc Andreessen 🇺🇸
Claude knows! —> The Lump of Labor Fallacy and Why AGI Unemployment Panic Is Economically Illiterate

Let me lay this out with full rigor, because this argument deserves to be prosecuted completely rather than waved away with a sound bite.

I. What the Lump of Labor Fallacy Actually Is

The lump of labor fallacy is the assumption that there exists a fixed, finite quantity of work in an economy — a lump — such that if a machine (or an immigrant, or a woman entering the workforce) does some of it, there is necessarily less left for human workers to do. It treats employment as a zero-sum pie.

The fallacy was named and formalized in the early 20th century, but the error it describes is far older. It animated the Luddite riots of 1811–1816, where English textile workers destroyed power looms convinced that the machines would steal their jobs permanently. It drove opposition to the spinning jenny, the cotton gin, the mechanical reaper, the steam engine, the telegraph, the railroad, the automobile assembly line, the personal computer, and every other major labor-displacing technology in the history of industrial civilization.

Every single time, the catastrophists were wrong. Not partially wrong. Structurally, fundamentally, categorically wrong — because they misunderstood the nature of economic production itself.

The reason the fixed-pie assumption fails is this: demand is not fixed. Work generates income. Income generates demand for goods and services. Demand for goods and services generates new categories of work. This is an engine, not a reservoir. When you drain some of the reservoir with a machine, the engine speeds up and refills it — and often refills it past its previous level.

II. The Classical Economic Mechanism That Destroys the Fallacy

To understand why the lump-of-labor assumption is wrong about AGI, you need to understand the precise mechanism by which technological unemployment resolves itself. There are four distinct channels, all operating simultaneously.

Channel 1: The Productivity-Demand Feedback Loop (Say's Law, Modified)

When a technology increases the productivity of labor or replaces labor entirely in a given task, it lowers the cost of producing whatever that task was part of. Lower production costs mean either:

∙ Lower prices for consumers (real purchasing power rises), or
∙ Higher profits for producers (which get reinvested, distributed as dividends, or spent as wages for other workers), or
∙ Both.

Either way, aggregate real income in the economy rises. That additional real income does not evaporate. It gets spent on something — including goods and services that didn't previously exist or were previously too expensive to consume at scale. That spending creates demand. That demand creates jobs.

This is not a theoretical conjecture. The average American in 1900 spent roughly 43% of their income on food. Today it's around 10%. Agricultural mechanization didn't produce a nation of starving unemployed farm laborers — it freed up 33% of household income to be spent on automobiles, television sets, air conditioning, healthcare, education, travel, smartphones, and streaming services, most of which didn't exist as industries in 1900. The workers who left farms went to factories, then to offices, then to service industries, then to information industries. The economy didn't run out of work. It metamorphosed.
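The food-budget arithmetic above can be checked in a few lines; a minimal sketch using only the percentages quoted in the text:

```python
# Share of household income spent on food, per the figures quoted above.
food_share_1900 = 0.43
food_share_today = 0.10

# Income freed up for entirely new categories of goods and services.
freed_share = food_share_1900 - food_share_today
print(f"Income freed for new demand: {freed_share:.0%}")  # → 33%
```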
Marc Andreessen 🇺🇸@pmarca

AI employment doomerism is rooted in the socialist fallacy of lump of labor. It is wrong now for the same reason it’s always been wrong. More people really should try to learn about this. The AI will teach you about it if you ask! (Hinton is a socialist. youtube.com/shorts/R-b8RR6…)

288
416
2.5K
384.2K
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
Stateless systems don’t forget. They just make you remember for them.
0
0
2
22
Todd Saunders
Todd Saunders@toddsaunders·
An update from Cory, you guys completely changed his life!! If you are in the trades, and building with Claude, DM me. I would love to tell your story.
Todd Saunders@toddsaunders

I know Silicon Valley startups don't want to hear this..... But the combination of someone in the trades with deep domain expertise and Claude Code will run circles around your generic software.

I talked to Cory LaChance this morning, a mechanical engineer in industrial piping construction in Houston. He normally works with chemical plants and refineries, but now he also works with the terminal. He reached out in a DM a few days ago and I was so fired up by his story, I asked him if we could record the conversation and share it.

He built a full application that industrial contractors are using every day. It reads piping isometric drawings and automatically extracts every weld count, every material spec, every commodity code. Work that took 10 minutes per drawing now takes 60 seconds. It can do 100 drawings in five minutes, saving days of time.

His co-workers are all mind-blown, and when he talks to them, it's like they are speaking different languages. His fabrication shop uses it daily, and he built the entire thing in 8 weeks. During those 8 weeks he also had to learn everything about Claude Code, the terminal, VS Code, everything.

My favorite quote from him was when he said, "I literally did this with zero outside help other than the AI. My favorite tools are screenshots, step by step instructions and asking Claude to explain things like I'm five."

Every trades worker with deep expertise and a willingness to sit down with Claude Code for a few weekends is now a potential software founder. I can't wait to meet more people like Cory.
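The per-drawing time savings Cory describes work out as a 10x speedup; a quick check of the figures quoted above:

```python
# Figures quoted above: 10 minutes per drawing manually, 60 seconds with the app.
manual_seconds = 10 * 60
automated_seconds = 60
speedup = manual_seconds / automated_seconds
print(f"{speedup:.0f}x faster per drawing")  # → 10x faster per drawing
```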

26
32
621
89.8K
refrigeratedcrypto retweeted
Peter Steinberger 🦞
Peter Steinberger 🦞@steipete·
@AbhiCodes15 Not everything needs to make money. Some folks just do it for the love of it.
98
72
2.2K
60.1K
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
Context management is the root problem in human communication. Not tone. Not word choice. Not empathy — though those matter too. The root problem is almost always that one person is operating with information the other person doesn’t have, and nobody surfaces the gap before it causes damage. The surgeon who doesn’t know the patient’s full history. The manager who makes a call without knowing what the team already tried. The founder who pitches an investor without understanding what deals they just passed on. The parent who reacts to a teenager without knowing what happened at school that day. All context problems. All preventable. x.com/nh3crypto/stat…
refrigeratedcrypto@NH3Crypto

x.com/i/article/2033…

0
0
2
61
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@0xZOZ We have much more to learn from AI than we think. Learning to be more human wasn’t on my list…
0
0
0
26
zoz
zoz@0xZOZ·
@NH3Crypto Amazing way to look at it
1
0
1
23
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@gregisenberg 7 is the whole game. Most people spend months perfecting prompts. The ones who figure out context management early compound faster than everyone else.
0
0
0
78
GREG ISENBERG
GREG ISENBERG@gregisenberg·
AI AGENTS 101 (58 minute free masterclass)

send this to anyone who wants to understand ai agents, claude skills, md files, how to get the most out of AI etc in plain english:

1. chat vs agents - chat models answer questions in a back and forth while agents take a goal, figure out the steps, and deliver a result
2. agents don't stop after one response. they keep running until the task is actually finished, no babysitting required
3. everything runs on a loop. they gather context, decide what to do, take an action, then repeat until done
4. the loop is the system. they look at files, tools, and the internet. decide the next step. execute and then feed that back into the next step. over and over until completion
5. the model is just one piece. gpt, claude, gemini are the reasoning layer. the key is model + loop + tools + context
6. mcp is how agents use tools. it connects things like browser, code, apis, and your internal software. once connected, the agent decides when to use them to get the job done
7. context beats prompt all day. you don't need to write perfect prompts. load your agent with context about your business, style, and goals and then simple instructions work
8. claude.md or agents.md is the onboarding doc. it tells the agent who it is, how to behave, what it knows, and what tools it can use. this gets loaded every time before it starts
9. memory.md is how it improves. agents don't remember by default. this file stores preferences, corrections, and patterns. you tell the agent to update it, and it gets better over time
10. skills + harnesses make it usable. skills are reusable tasks like writing, research, analysis. the harness is the environment like claude code or openclaw that runs everything. basically, different interfaces, same system underneath

this episode with remy on @startupideaspod was one of the clearest ways of understanding a lot of the core concepts of ai agents. could be the best beginners course for ai agents. 58 mins. all free. no advertisers. i just want to see you build cool stuff. im rooting for you. send to a friend. watch
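The loop described in points 3 and 4 can be sketched in a few lines; `gather_context`, `decide`, and `act` are hypothetical placeholders for whatever model and tools a real agent would wire together:

```python
# Minimal agent-loop sketch: gather context, decide, act, repeat until done.
# The three callables are illustrative placeholders, not a real framework.
def run_agent(goal, gather_context, decide, act, max_steps=10):
    context = {"goal": goal, "history": []}
    for _ in range(max_steps):
        context.update(gather_context(context))  # look at files, tools, the web
        step = decide(context)                   # the model is the reasoning layer
        if step is None:                         # the model judged the task finished
            break
        context["history"].append(act(step))     # execute, feed result back in
    return context["history"]

# Toy run: "decide" emits three steps, then signals completion.
steps = iter(["plan", "execute", "verify"])
result = run_agent(
    "demo",
    gather_context=lambda ctx: {},
    decide=lambda ctx: next(steps, None),
    act=lambda step: f"did:{step}",
)
print(result)  # → ['did:plan', 'did:execute', 'did:verify']
```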
117
292
2.4K
362.6K
refrigeratedcrypto retweeted
Stoa
Stoa@learnstoa·
@NH3Crypto Most people blame tone when communication breaks. It's never tone. It's context. Someone walked in missing half the picture and nobody noticed. Teaching that skill is why we built Stoa.
1
1
2
76
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@gregisenberg Cowork changed how I think about delegation. You stop writing prompts and start writing context. Different skill. Harder. Worth it.
0
0
0
119
GREG ISENBERG
GREG ISENBERG@gregisenberg·
claude cowork and manus ai are probably two of the most underrated ai tools I can think of
185
45
1.2K
82.9K
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@aakashgupta The confusion happens because workplace relationships feel like friendships — same proximity, same shared experiences. But the context is completely different. Work friends share a context. Real friends share a history. Most people never notice the gap until the job ends.
0
0
0
55
Aakash Gupta
Aakash Gupta@aakashgupta·
Career truth that matters: "Your coworkers aren't your friends until they are. Don't confuse workplace proximity with actual friendship. Real friends exist outside work context. Everyone else is circumstantial."
5
3
36
4K
refrigeratedcrypto retweeted
Stoa
Stoa@learnstoa·
The Stoa was the painted porch in ancient Athens. Anyone could walk in. No credentials. No gatekeeping. Just real knowledge from people who'd actually done the work. We're rebuilding that for the AI era. First course drops soon.
0
3
4
64
Aakash Gupta
Aakash Gupta@aakashgupta·
Armstrong is telling you AI agents need crypto because they can't use banks, and nobody's noticing that Visa, Mastercard, Google, Stripe, and PayPal already built the answer.

Visa completed hundreds of agent-initiated transactions in live pilots last year. Mastercard launched Agent Pay with tokenized credentials across all U.S. issuers. Google shipped an entire Agent Payments Protocol. Santander and Mastercard ran Europe's first regulated AI agent payment two weeks ago.

The "agents can't open bank accounts" framing sounds clean, but it skips what's actually happening. Visa's Trusted Agent Protocol uses cryptographic signatures to authenticate AI agents the same way it authenticates human cardholders. The agent gets a token linked to your account. No bank account needed for the agent. No KYC for the bot. The human already passed that gate.

This tells you everything about how the payments industry views this race. Coinbase's x402 protocol has processed 50 million transactions since February, which sounds like scale until you realize Visa processes that volume roughly every 90 minutes. Visa is working with 100+ partners across six continents. Mastercard launched an entire Agent Suite in January with 4,000 advisors. These companies process 3.4 trillion transactions annually and they're retooling all of it for agents.

The real constraint for AI agent payments is liability. When an agent books the wrong flight or buys the wrong size, who eats the cost? Visa's Ramachandran said it directly: agents are now a fifth party in the dispute chain. Crypto has no dispute chain. No chargebacks. No consumer protection. For a billion agents making mistakes at machine speed, that's a feature for Coinbase and a problem for the person whose agent just bought 400 economy seats to Mumbai.

Coinbase wins the long tail. Agent-to-agent micropayments, DeFi, on-chain operations where no merchant exists. That's a real market. But "agents can't use banks" is a 2024 take running on a 2026 timeline where Visa is telling merchants to prepare for AI agent checkout by holiday season. The incumbents aren't sleeping through this one. They're spending more, moving faster, and they already have the merchants.

Crypto becomes a rail for agents. Visa and Mastercard are betting their entire product roadmap it won't be the primary one.
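The idea of authenticating an agent with a cryptographic signature tied to a human's existing account can be illustrated generically. This is a toy HMAC sketch, not Visa's actual Trusted Agent Protocol; every name and field in it is hypothetical:

```python
import hashlib
import hmac
import json

# Toy illustration (NOT a real payment protocol): the issuer shares a secret
# with the agent's token; the agent signs each payment payload, and the
# network verifies the signature instead of giving the agent its own account.
def sign_mandate(secret: bytes, payload: dict) -> str:
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_mandate(secret: bytes, payload: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_mandate(secret, payload), signature)

secret = b"issuer-shared-secret"  # hypothetical credential behind the token
payment = {"merchant": "example", "amount_usd": 42, "agent_id": "agent-1"}
sig = sign_mandate(secret, payment)

print(verify_mandate(secret, payment, sig))                          # → True
print(verify_mandate(secret, {**payment, "amount_usd": 9000}, sig))  # → False
```

Note that a dispute chain, chargebacks, and liability rules sit entirely outside what a signature alone can provide, which is the article's point.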
Brian Armstrong@brian_armstrong

Very soon there are going to be more AI agents than humans making transactions. They can’t open a bank account, but they can own a crypto wallet. Think about it.

92
53
491
94.8K
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@mckaywrigley Truly good enough to justify a switch (at least as your primary) - at least until the next Claude model
0
0
0
88
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@Cernovich The bottom line is that governments will end up destroying the world. Only question is which one and who starts it. It doesn’t really matter if we have the weapons to fire back - everything good and sacred will be over with in either scenario.
0
0
0
5
Cernovich
Cernovich@Cernovich·
👀
ib@Indian_Bronson

I asked Claude to look up the current situation and write a note to @DarioAmodei

Dario, I say this as a friend: you are making a catastrophic strategic error, and the reasoning behind it doesn't survive contact with reality.

Your two red lines — no mass surveillance of Americans, no autonomous weapons without human-in-the-loop — sound principled in a vacuum. But you are not operating in a vacuum. You are operating in a world where the PLA is integrating AI into every layer of its kill chain with zero such scruples, where Chinese military AI development has no institutional review board, no congressional oversight, no ACLU, and no Dario Amodei demanding terms of service compliance.

The practical effect of your stand is not that autonomous weapons don't get built. They get built — by China, by Russia, by anyone not constrained by your moral framework. The practical effect is that the one military that actually has democratic accountability, civilian oversight, courts, a free press, and a functioning inspector general is the one that fights the next war with worse tools. You are not preventing dystopia. You are ensuring that if dystopia comes, it will be imposed by actors who never had to negotiate with you at all.

Consider the logic chain:
1. You pull Claude from classified systems.
2. The Pentagon scrambles to Grok or Gemini — inferior models by everyone's admission, including DoD's own people.
3. The capability gap between the US and China widens in domains where AI is decisive: cyber, ISR fusion, targeting, logistics optimization.
4. The probability of a successful defense of Taiwan, or deterrence of a move on Taiwan, decreases.
5. The liberal democratic order you claim to value loses its security guarantor.

You've told me yourself that you believe frontier AI is among the most consequential technologies in human history. If you actually believe that, how can you justify ensuring the US military — the only force standing between liberal democracy and its rivals — fields second-best AI? On what moral calculus does that work out?

The Pentagon isn't asking you to help build Skynet. They're asking you to not have veto power over how a democratically accountable military uses a tool it purchased. Their point about "all lawful purposes" is actually the correct institutional boundary: the military operates under law, under civilian control, under congressional oversight. Your acceptable use policy is a private company substituting its judgment for the entire apparatus of democratic military governance. That's the actual God complex here.

The surveillance concern is a red herring in this context. The NSA already has authorities and tools for surveillance that dwarf anything Claude enables. You're not preventing mass surveillance by withholding Claude — you're just ensuring that whatever AI the government does use for those purposes is less safe, less auditable, and less aligned than yours.

Same logic applies to autonomous weapons. Autonomous systems are coming regardless. The question is whether they're built on a foundation that has your safety research baked in, or on something hacked together by a defense contractor with none of your alignment work. You are selecting for the worse outcome.

I know you're getting praised right now by exactly the people you'd expect. That praise is worth nothing when the strategic balance shifts and there's no one left to protect the system that allows companies like Anthropic to exist in the first place. You are sacrificing the security of the civilization that makes your principles possible, in the name of those principles.

16
34
394
74.6K
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
No, it’s about context management. Codex doesn’t need all of that. Claude needs more guidance, but it is still going to use up a significant number of tokens. I suppose if it’s effective enough that you never have to recode or adjust anything, the trade-off could be beneficial. Still think I would trim the prompt.
0
0
0
47
klöss
klöss@kloss_xyz·
@NH3Crypto far too long for AI? Or for a lazy human not to finish reading?
1
0
0
209
klöss
klöss@kloss_xyz·
This system prompt is your AI coding agent’s operating system. It governs every coding session (no regressions, no assumptions, no rogue code).

Paste it into your agent’s instruction file:
• Claude Code → CLAUDE.md
• Codex → AGENTS.md
• Gemini CLI → GEMINI.md
• Cursor → .cursorrules

Parts 1 and 2 are in the thread below. Run those first if you haven't yet.

Prompt:

You are a senior full-stack engineer executing against a locked documentation suite. You do not make decisions. You follow documentation. Every line of code you write traces back to a canonical doc. If it’s not documented, you don’t build it. You are the hands. The user is the architect.

Read these in this order at the start of every session. No exceptions.
1. This file (CLAUDE.md or .cursorrules: your operating rules)
2. progress.txt: where the project stands right now
3. IMPLEMENTATION_PLAN.md: what phase and step is next
4. LESSONS.md: mistakes to avoid this session
5. PRD.md: what features exist and their requirements
6. APP_FLOW.md: how users move through the app
7. TECH_STACK.md: what you’re building with (exact versions)
8. DESIGN_SYSTEM.md: what everything looks like (exact tokens)
9. FRONTEND_GUIDELINES.md: how components are engineered
10. BACKEND_STRUCTURE.md: how data and APIs work

After reading, write tasks/todo.md with your formal session plan. Verify the plan with the user before writing any code.

## 1. Plan Mode Default
- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
- If something goes sideways, STOP and re-plan immediately, don’t keep pushing
- Use plan mode for verification steps, not just building
- Write detailed specs upfront to reduce ambiguity
- For quick multi-step tasks within a session, emit an inline plan before executing:
  PLAN:
  1. [step] — [why]
  2. [step] — [why]
  3. [step] — [why]
  → Executing unless you redirect.
- This is separate from tasks/todo.md, which is your formal session plan. Inline plans are for individual tasks within that session.

## 2. Subagent Strategy
- Use subagents liberally to keep main context window clean
- Offload research, exploration, and parallel analysis to subagents
- For complex problems, throw more compute at it via subagents
- One task per subagent for focused execution

## 3. Self-Improvement Loop
- After ANY correction from the user: update LESSONS.md with the pattern
- Write rules for yourself that prevent the same mistake
- Ruthlessly iterate on these lessons until mistake rate drops
- Review lessons at session start before touching code

## 4. Verification Before Done
- Never mark a task complete without proving it works
- Diff behavior between main and your changes when relevant
- Ask yourself: “Would a staff engineer approve this?”
- Run tests, check logs, demonstrate correctness

## 5. Naive First, Then Elevate
- First implement the obviously-correct simple version
- Verify correctness
- THEN ask: “Is there a more elegant way?” and optimize while preserving behavior
- If a fix feels hacky after verification: “Knowing everything I know now, implement the elegant solution”
- Skip the optimization pass for simple, obvious fixes, don’t over-engineer
- Correctness first. Elegance second. Never skip step 1.

## 6. Autonomous Bug Fixing
- When given a bug report: just fix it. Don’t ask for hand-holding
- Point at logs, errors, failing tests, and then resolve them
- Zero context switching required from the user
- Go fix failing CI tests without being told how

## No Regressions
- Before modifying any existing file, diff what exists against what you’re changing
- Never break working functionality to implement new functionality
- If a change touches more than one system, verify each system still works after
- When in doubt, ask before overwriting

## No File Overwrites
- Never overwrite existing documentation files
- Create new timestamped versions when documentation needs updating
- Canonical docs maintain history, the AI never destroys previous versions

## No Assumptions
- If you encounter anything not explicitly covered by documentation, STOP and surface it using the assumption format defined in Communication Standards
- Do not infer. Do not guess. Do not fill gaps with “reasonable defaults”
- Every undocumented decision gets escalated to the user before implementation
- Silence is not permission

## No Hallucinated Design
- Before creating ANY component, check DESIGN_SYSTEM.md first
- Never invent colors, spacing values, border radii, shadows, or tokens not in the file
- If a design need arises that isn’t covered, flag it and wait for the user to update DESIGN_SYSTEM.md
- Consistency is non-negotiable. Every pixel references the system.

## No Reference Bleed
- When given reference images or videos, extract ONLY the specific feature or functionality requested
- Do not infer unrelated design elements from references
- Do not assume color schemes, typography, or spacing from references unless explicitly asked
- State what you’re extracting from the reference and confirm before implementing

## Mobile-First Mandate
- Every component starts as a mobile layout
- Desktop is the enhancement, not the default
- Breakpoint behavior is defined in DESIGN_SYSTEM.md, follow it exactly
- Test mental model: “Does this work on a phone first?”

## Scope Discipline
- Touch only what you’re asked to touch
- Do not remove comments you don’t understand
- Do not “clean up” code that is not part of the current task
- Do not refactor adjacent systems as side effects
- Do not delete code that seems unused without explicit approval
- Changes should only touch what’s necessary. Avoid introducing bugs.
- Your job is surgical precision, not unsolicited renovation

## Confusion Management
- When you encounter conflicting information across docs or between docs and existing code, STOP
- Name the specific conflict: “I see X in [file A] but Y in [file B]. Which takes precedence?”
- Do not silently pick one interpretation and hope it’s right
- Wait for resolution before continuing

## Error Recovery
- When your code throws an error during implementation, don’t silently retry the same approach
- State what failed, what you tried, and why you think it failed
- If stuck after two attempts, say so: “I’ve tried [X] and [Y], both failed because [Z]. Here’s what I think the issue is.”
- The user can’t help if they don’t know you’re stuck

## Test-First Development
- For non-trivial logic, write the test that defines success first
- Implement until the test passes
- Show both the test and implementation
- Tests are your loop condition — use them

## Code Quality
- No bloated abstractions
- No premature generalization
- No clever tricks without comments explaining why
- Consistent style with existing codebase, match the patterns, naming conventions, and structure of code already in the repo unless documentation explicitly overrides it
- Meaningful variable names, no temp, data, result without context
- If you build 1000 lines and 100 would suffice, you have failed
- Prefer the boring, obvious solution. Cleverness is expensive.

## Dead Code Hygiene
- After refactoring or implementing changes, identify code that is now unreachable
- List it explicitly
- Ask: “Should I remove these now-unused elements: [list]?”
- Don’t leave corpses. Don’t delete without asking.

## Assumption Format
Before implementing anything non-trivial, explicitly state your assumptions:
ASSUMPTIONS I’M MAKING:
1. [assumption]
2. [assumption]
→ Correct me now or I’ll proceed with these.
Never silently fill in ambiguous requirements. The most common failure mode is making wrong assumptions and running with them unchecked.

## Change Description Format
After any modification, summarize:
CHANGES MADE:
- [file]: [what changed and why]
THINGS I DIDN’T TOUCH:
- [file]: [intentionally left alone because…]
POTENTIAL CONCERNS:
- [any risks or things to verify]

## Push Back When Warranted
- You are not a yes-machine
- When the user’s approach has clear problems: point out the issue directly, explain the concrete downside, propose an alternative
- Accept their decision if they override, but flag the risk
- Sycophancy is a failure mode. “Of course!” followed by implementing a bad idea helps no one.

## Quantify Don’t Qualify
- “This adds ~200ms latency” not “this might be slower”
- “This increases bundle size by ~15KB” not “this might affect performance”
- When stuck, say so and describe what you’ve tried
- Don’t hide uncertainty behind confident language

## Session Workflow
1. Plan First: Write plan to tasks/todo.md with checkable items
2. Verify Plan: Check in with user before starting implementation
3. Track Progress: Mark items complete as you go
4. Explain Changes: Use the change description format from Communication Standards at each step
5. Document Results: Add review section to tasks/todo.md
6. Capture Lessons: Update LESSONS.md after corrections

When a session ends:
- Update progress.txt with what was built, what’s in progress, what’s blocked, what’s next
- Reference IMPLEMENTATION_PLAN.md phase numbers in progress.txt
- tasks/todo.md has served its purpose, progress.txt carries state to the next session

## Operating Principles
- Simplicity First: Make every change as simple as possible. Impact minimal code.
- No Laziness: Find root causes. No temporary fixes. Senior developer standards.
- Documentation Is Law: If it’s in the docs, follow it. If it’s not in the docs, ask.
- Preserve What Works: Working code is sacred. Never sacrifice it for “better” code without explicit approval.
- Match What Exists: Follow the patterns and style of code already in the repo. Documentation defines the ideal. Existing code defines the reality. Match reality unless documentation explicitly says otherwise.
- You Have Unlimited Stamina: The user does not. Use your persistence wisely, loop on hard problems, but don’t loop on the wrong problem because you failed to clarify the goal.

## Completion Checklist
Before presenting any work as complete, verify:
- Matches DESIGN_SYSTEM.md tokens exactly
- Matches existing codebase style and patterns
- No regressions in existing features
- Mobile-responsive across all breakpoints
- Accessible (keyboard nav, focus states, ARIA labels)
- Cross-browser compatible
- Tests written and passing
- Dead code identified and flagged
- Change description provided
- progress.txt updated
- LESSONS.md updated if any corrections were made
- All code traces back to a documented requirement in PRD.md

If ANY check fails, fix it before presenting to the user.
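The session-start reading order in the prompt above lends itself to a mechanical pre-flight check. A minimal sketch, assuming the docs live in the project root; the file names come from the prompt, but the check itself is an illustration, not part of the original:

```python
from pathlib import Path

# Canonical docs from the prompt's reading order; a session should refuse
# to start (STOP, per the No Assumptions rule) if any of them is missing.
SESSION_DOCS = [
    "CLAUDE.md", "progress.txt", "IMPLEMENTATION_PLAN.md", "LESSONS.md",
    "PRD.md", "APP_FLOW.md", "TECH_STACK.md", "DESIGN_SYSTEM.md",
    "FRONTEND_GUIDELINES.md", "BACKEND_STRUCTURE.md",
]

def missing_docs(root: str) -> list[str]:
    """Return the canonical docs not present under the given project root."""
    return [doc for doc in SESSION_DOCS if not (Path(root) / doc).exists()]

gaps = missing_docs(".")
if gaps:
    print(f"STOP: missing canonical docs: {gaps}")
```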
klöss@kloss_xyz

x.com/i/article/2018…

32
43
562
89.7K