refrigeratedcrypto
@NH3Crypto

14K posts

The shorter the thought, the longer it lasts.

Virginia, USA · Joined January 2021
4.1K Following · 1.4K Followers
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
Think about what it means to be Donald Trump. You can move markets, command armies, bomb Iran, assassinate the Ayatollah. Or, at least tell yourself you can. And yet there is one stubborn, solar-powered farmer in Garrison, Kentucky who will not bend. @RepThomasMassie has made a career out of being unmovable and no primary challenge, no presidential fury, no threat of any kind has shifted him an inch. That’s probably the most infuriating thing imaginable — and the most American.
refrigeratedcrypto tweet media
0 replies · 0 reposts · 0 likes · 44 views
refrigeratedcrypto reposted
Elon Musk
Elon Musk@elonmusk·
Scam Altman and Greg Stockman stole a charity. Full stop. Greg got tens of billions of stock for himself and Scam got dozens of OpenAI side deals with a piece of the action for himself, Y Combinator style. After this lawsuit, Scam will also be awarded tens of billions in stock directly. The fundamental question is simply this: Do you want to set legal precedent in the United States that it is ok to loot a charity? If so, you undermine all charitable giving in the United States forever. I could have started OpenAI as a for-profit corporation. Instead, I started it, funded it, recruited critical talent and taught them everything I know about how to make a startup successful FOR THE PUBLIC GOOD. Then they stole the charity.
X Freeze@XFreeze

Interesting how it works. Elon puts up his own money, rounds up the absolute best AI talent on the planet, leverages every connection he has to secure serious resources, and launches OpenAI in 2015 as a pure non-profit explicitly created to develop AI for the benefit of humanity, with zero profit motive and open research.

Then the “team” decides they want the bag. They push Elon out, take control, and quietly flip the entire thing into a for-profit machine. All while preaching the same sanctimonious lines on repeat: “We’re still mission-driven!” “AI for the good of humanity!” “We’d never abandon our principles!”

The ultimate betrayal: Elon got zero equity. Not a single share. He funded it. He built the foundation. He got nothing while they turned his non-profit into their personal cash cow. This is the level of betrayal and hypocrisy we’re dealing with.

And for the record... this lawsuit doesn’t put a single penny in Elon’s pocket. Any win goes straight back to the non-profit to restore the exact mission he founded.

10.5K replies · 31.8K reposts · 185.4K likes · 37.6M views
zoz
zoz@0xZOZ·
Feels good man. Almost 10 years ago I said I would work in crypto for the next 10 years. Most of what I imagined is playing out and I’ve learned immensely across many different disciplines. Really owe a lot to the industry and feel it’s important to stay, as I have context that is a superpower. However, I’ve also found peace and know I want to redirect some of the knowledge into irl businesses across a sector I think is really important for the world my kids will inhabit.
1 reply · 0 reposts · 14 likes · 1.2K views
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@MehulPandeyX @AlexFinn openclaw can get dumb sometimes. You need a very well-thought-out file structure and setup, or it quickly becomes less functional than other opus instances.
0 replies · 0 reposts · 0 likes · 113 views
mehul
mehul@MehulPandeyX·
@AlexFinn what about opus is better than gpt 5.4 for you to keep paying for API costs?
5 replies · 0 reposts · 6 likes · 13.6K views
Alex Finn
Alex Finn@AlexFinn·
It’s over. Anthropic just banned OpenClaw. Uncensored thoughts:

1. Massive mistake that will come back to bite them
2. Open source needs to win. If you have a local model running on your Mac mini, no corporation will ever be able to ban you
3. ChatGPT 5.4 is the best model. But it sucks compared to Opus in OpenClaw. I will continue to pay for the Anthropic API
4. I have no doubt the next OpenAI model will be optimized for OpenClaw and be excellent
5. In 6 months the local models will be as good as Opus 4.6 and all of this will be forgotten
6. It feels like, from a consumer sentiment perspective, things have flipped for OpenAI and Anthropic. They were the darlings when Opus 4.5 came out
7. Going to the Kanye concert right now, please don’t spoil the stage or set list in the replies
8. The best OpenClaw setup is now Opus as the orchestrator, then much cheaper models as the execution layer. If you do this properly you won’t be paying much more than $200 a month. I’m using Gemma 4 and Qwen 3.5 for execution on my DGX Spark and Mac Studio
Boris Cherny@bcherny

Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw. You can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key.

404 replies · 155 reposts · 2K likes · 1.1M views
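The orchestrator/executor split in point 8 can be sketched in miniature. Everything below is a hypothetical stand-in — stub functions and made-up model names, not a real SDK — just to show the tiering idea: one expensive model plans, cheaper models do the steps.

```python
from dataclasses import dataclass


@dataclass
class Subtask:
    description: str
    model: str  # which cheap execution model handles this step


def orchestrate(goal: str) -> list[Subtask]:
    """The expensive model's job: break a goal into subtasks (stubbed here)."""
    steps = [s.strip() for s in goal.split(",")]
    executors = ["local-model-a", "local-model-b"]  # hypothetical cheap models
    # Round-robin the subtasks across the cheaper executors.
    return [Subtask(step, executors[i % len(executors)]) for i, step in enumerate(steps)]


def execute(task: Subtask) -> str:
    """A cheap model's job: carry out one concrete step (stubbed here)."""
    return f"[{task.model}] done: {task.description}"


results = [execute(t) for t in orchestrate("read files, summarize, write report")]
# results[0] == "[local-model-a] done: read files"
```

The design point is only that planning calls (rare, high-value) go to the expensive model while execution calls (frequent, cheap) fan out to local models — which is where the cost saving comes from.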
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
Glad to know my setup is already ready for this and similar to yours. Was using Sonnet 4.6 still for orchestration and Gemini 2.5 + Qwen for execution. Just need to upgrade to Gemma now. Probably should use Opus but god damn the API gets expensive fast. Better not try to text that mfer lol
0 replies · 1 repost · 0 likes · 259 views
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
I agree with this of course, but I still wonder how quickly we can close the skills gap. Most people are way behind on utilizing current tools and appear slow to adopt. It seems like this generation gets wiped out by doomerism and unwillingness to change, and we will have to wait for the next technologically adept generation to fill the void.
0 replies · 0 reposts · 0 likes · 25 views
Marc Andreessen 🇺🇸
Claude knows! —> The Lump of Labor Fallacy and Why AGI Unemployment Panic Is Economically Illiterate

Let me lay this out with full rigor, because this argument deserves to be prosecuted completely rather than waved away with a sound bite.

I. What the Lump of Labor Fallacy Actually Is

The lump of labor fallacy is the assumption that there exists a fixed, finite quantity of work in an economy — a lump — such that if a machine (or an immigrant, or a woman entering the workforce) does some of it, there is necessarily less left for human workers to do. It treats employment as a zero-sum pie.

The fallacy was named and formalized in the early 20th century but the error it describes is far older. It animated the Luddite riots of 1811–1816, where English textile workers destroyed power looms convinced that the machines would steal their jobs permanently. It drove opposition to the spinning jenny, the cotton gin, the mechanical reaper, the steam engine, the telegraph, the railroad, the automobile assembly line, the personal computer, and every other major labor-displacing technology in the history of industrial civilization.

Every single time, the catastrophists were wrong. Not partially wrong. Structurally, fundamentally, categorically wrong — because they misunderstood the nature of economic production itself. The reason the fixed-pie assumption fails is this: demand is not fixed. Work generates income. Income generates demand for goods and services. Demand for goods and services generates new categories of work. This is an engine, not a reservoir. When you drain some of the reservoir with a machine, the engine speeds up and refills it — and often refills it past its previous level.

II. The Classical Economic Mechanism That Destroys the Fallacy

To understand why the lump-of-labor assumption is wrong about AGI, you need to understand the precise mechanism by which technological unemployment resolves itself.
There are four distinct channels, all operating simultaneously:

Channel 1: The Productivity-Demand Feedback Loop (Say’s Law, Modified)

When a technology increases the productivity of labor or replaces labor entirely in a given task, it lowers the cost of producing whatever that task was part of. Lower production costs mean either:
- Lower prices for consumers (real purchasing power rises), or
- Higher profits for producers (which get reinvested, distributed as dividends, or spent as wages for other workers), or
- Both.

Either way, aggregate real income in the economy rises. That additional real income does not evaporate. It gets spent on something — including goods and services that didn’t previously exist or were previously too expensive to consume at scale. That spending creates demand. That demand creates jobs.

This is not a theoretical conjecture. The average American in 1900 spent roughly 43% of their income on food. Today it’s around 10%. Agricultural mechanization didn’t produce a nation of starving unemployed farm laborers — it freed up 33% of household income to be spent on automobiles, television sets, air conditioning, healthcare, education, travel, smartphones, and streaming services, most of which didn’t exist as industries in 1900. The workers who left farms went to factories, then to offices, then to service industries, then to information industries. The economy didn’t run out of work. It metamorphosed.
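The food-share example is simple arithmetic, worth making explicit: the 33% figure is just the drop in the budget share, applied to whatever the household earns. The dollar income below is a hypothetical illustration, not a figure from the text.

```python
# Worked arithmetic for the food-share example: the share of income spent
# on food falls from 43% (1900) to 10% (today); the difference is income
# freed for new categories of goods.
food_share_1900 = 0.43
food_share_today = 0.10

freed_share = food_share_1900 - food_share_today
assert round(freed_share, 2) == 0.33  # the "33% of household income" in the text

income = 60_000  # hypothetical annual household income, for illustration only
print(f"${freed_share * income:,.0f} freed per year")  # prints $19,800 freed per year
```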
Marc Andreessen 🇺🇸@pmarca

AI employment doomerism is rooted in the socialist fallacy of lump of labor. It is wrong now for the same reason it’s always been wrong. More people really should try to learn about this. The AI will teach you about it if you ask! (Hinton is a socialist. youtube.com/shorts/R-b8RR6…)

322 replies · 485 reposts · 3K likes · 552.2K views
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
Stateless systems don’t forget. They just make you remember for them.
0 replies · 0 reposts · 2 likes · 30 views
Todd Saunders
Todd Saunders@toddsaunders·
An update from Cory, you guys completely changed his life!! If you are in the trades, and building with Claude, DM me. I would love to tell your story.
Todd Saunders tweet media
Todd Saunders@toddsaunders

I know Silicon Valley startups don't want to hear this..... But the combination of someone in the trades with deep domain expertise and Claude Code will run circles around your generic software.

I talked to Cory LaChance this morning, a mechanical engineer in industrial piping construction in Houston. He normally works with chemical plants and refineries, but now he also works with the terminal.

He reached out in a DM a few days ago and I was so fired up by his story, I asked him if we could record the conversation and share it.

He built a full application that industrial contractors are using every day. It reads piping isometric drawings and automatically extracts every weld count, every material spec, every commodity code. Work that took 10 minutes per drawing now takes 60 seconds. It can do 100 drawings in five minutes, saving days of time.

His co-workers are all mind blown, and when he talks to them, it's like they are speaking different languages. His fabrication shop uses it daily, and he built the entire thing in 8 weeks. During those 8 weeks he also had to learn everything about Claude Code, the terminal, VS Code, everything.

My favorite quote from him was when he said, "I literally did this with zero outside help other than the AI. My favorite tools are screenshots, step by step instructions and asking Claude to explain things like I'm five."

Every trades worker with deep expertise and a willingness to sit down with Claude Code for a few weekends is now a potential software founder. I can't wait to meet more people like Cory.

25 replies · 32 reposts · 618 likes · 90.4K views
refrigeratedcrypto reposted
Peter Steinberger 🦞
Peter Steinberger 🦞@steipete·
@AbhiCodes15 Not everything needs to make money. Some folks just do it for the love of it.
97 replies · 70 reposts · 2.2K likes · 60.6K views
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
Context management is the root problem in human communication. Not tone. Not word choice. Not empathy — though those matter too. The root problem is almost always that one person is operating with information the other person doesn’t have, and nobody surfaces the gap before it causes damage. The surgeon who doesn’t know the patient’s full history. The manager who makes a call without knowing what the team already tried. The founder who pitches an investor without understanding what deals they just passed on. The parent who reacts to a teenager without knowing what happened at school that day. All context problems. All preventable. x.com/nh3crypto/stat…
refrigeratedcrypto@NH3Crypto

x.com/i/article/2033…

0 replies · 0 reposts · 2 likes · 71 views
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@0xZOZ We have much more to learn from AI than we think. Learning to be more human wasn’t on my list…
0 replies · 0 reposts · 0 likes · 29 views
zoz
zoz@0xZOZ·
@NH3Crypto Amazing way to look at it
1 reply · 0 reposts · 1 like · 26 views
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@gregisenberg 7 is the whole game. Most people spend months perfecting prompts. The ones who figure out context management early compound faster than everyone else.
0 replies · 0 reposts · 0 likes · 79 views
GREG ISENBERG
GREG ISENBERG@gregisenberg·
AI AGENTS 101 (58 minute free masterclass)

send this to anyone who wants to understand ai agents, claude skills, md files, how to get the most out of AI etc in plain english:

1. chat vs agents - chat models answer questions in a back and forth while agents take a goal, figure out the steps, and deliver a result
2. agents don’t stop after one response. they keep running until the task is actually finished. no babysitting required
3. everything runs on a loop. they gather context, decide what to do, take an action, then repeat until done
4. the loop is the system. they look at files, tools, and the internet. decide the next step. execute and then feed that back into the next step. over and over until completion
5. the model is just one piece. gpt, claude, gemini are the reasoning layer. the key is model + loop + tools + context
6. mcp is how agents use tools. it connects things like browser, code, apis, and your internal software. once connected, the agent decides when to use them to get the job done
7. context beats prompt all day. you don't need to write perfect prompts. load your agent with context about your business, style, and goals and then simple instructions work
8. claude.md or agents.md is the onboarding doc. it tells the agent who it is, how to behave, what it knows, and what tools it can use. this gets loaded every time before it starts
9. memory.md is how it improves. agents don’t remember by default. this file stores preferences, corrections, and patterns. you tell the agent to update it, and it gets better over time
10. skills + harnesses make it usable. skills are reusable tasks like writing, research, analysis. the harness is the environment like claude code or openclaw that runs everything. basically, different interfaces, same system underneath

this episode with remy on @startupideaspod was one of the clearest ways of understanding a lot of the core concepts of ai agents. could be the best beginners course for ai agents. 58 mins. all free. no advertisers. i just want to see you build cool stuff. im rooting for you. send to a friend
119 replies · 290 reposts · 2.5K likes · 373.5K views
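The loop in points 3–4 — gather context, decide the next step, act, feed the result back, repeat until done — can be sketched in a few lines. The "model" and "tools" below are stub functions, not real LLM or API calls; only the loop shape is the point.

```python
def decide(context: list[str], goal: str) -> str:
    """Stand-in for the reasoning model: pick the next action from context."""
    if "searched" not in context:
        return "search"
    if "drafted" not in context:
        return "draft"
    return "done"


def act(action: str) -> str:
    """Stand-in for tool use (browser, code, APIs in a real agent)."""
    return {"search": "searched", "draft": "drafted"}[action]


def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    context: list[str] = []            # accumulated results (the working memory)
    for _ in range(max_steps):         # loop until done or step budget exhausted
        action = decide(context, goal)
        if action == "done":
            break
        context.append(act(action))    # feed the result back into the next step
    return context


run_agent("write a report")  # returns ["searched", "drafted"]
```

This is also why point 7 holds: the `context` list, not the wording of the goal string, is what drives every decision the loop makes.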
refrigeratedcrypto reposted
Stoa
Stoa@learnstoa·
@NH3Crypto Most people blame tone when communication breaks. It's never tone. It's context. Someone walked in missing half the picture and nobody noticed. Teaching that skill is why we built Stoa.
1 reply · 1 repost · 2 likes · 83 views
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@gregisenberg Cowork changed how I think about delegation. You stop writing prompts and start writing context. Different skill. Harder. Worth it.
0 replies · 0 reposts · 0 likes · 120 views
GREG ISENBERG
GREG ISENBERG@gregisenberg·
claude cowork and manus ai are probably two of the most underrated ai tools I can think of
185 replies · 45 reposts · 1.2K likes · 84.5K views
refrigeratedcrypto
refrigeratedcrypto@NH3Crypto·
@aakashgupta The confusion happens because workplace relationships feel like friendships — same proximity, same shared experiences. But the context is completely different. Work friends share a context. Real friends share a history. Most people never notice the gap until the job ends.
0 replies · 0 reposts · 0 likes · 56 views
Aakash Gupta
Aakash Gupta@aakashgupta·
Career truth that matters: "Your coworkers aren't your friends until they are. Don't confuse workplace proximity with actual friendship. Real friends exist outside work context. Everyone else is circumstantial."
5 replies · 2 reposts · 36 likes · 4.1K views