Master of Coin

7.4K posts


@deCENTroll07

Believer that history doesn’t repeat itself but it rhymes. Student of human psychology. #BTC Permabull.

Joined September 2010
2.1K Following · 284 Followers
Master of Coin retweeted
Anish Moonka @anishmoonka
Peter Thiel just put $140 million into a startup that wants to run AI inside giant steel orbs floating in the ocean.

Almost half of America's AI data centers planned for this year have already been cancelled or delayed. The grid cannot handle them. A single big AI data center uses as much electricity as a small city, around the clock. America was not wired for that. In America's biggest power market, which stretches from New Jersey to Illinois, the cost of reserving future power has jumped from $29 to $329 in two years. That is more than ten times higher. And if you order one of the giant transformers a data center needs to plug into the grid, you now wait up to four years to get it.

So a small Oregon company called Panthalassa raised the cash. Their hardware looks like a giant steel orb floating on the surface, with the rest of the body extending 80 meters down into deep water. Waves push water through internal channels to spin a turbine, and the electricity runs AI chips right there on the platform. Answers travel back to land by satellite. The company is now worth roughly $1 billion. Backers include John Doerr (an early Google and Amazon investor), Marc Benioff (Salesforce's founder), and Peter Thiel's own venture firm Founders Fund.

The second problem ocean-AI solves is heat. AI chips run scorching. Cooling them on land is so thirsty that a large data center drinks 5 million gallons of water a day, the same as a town of 50,000 people. Microsoft already proved the ocean fixes this. A few years back they sealed 864 servers inside a steel tube and sank it off the coast of Scotland. The cold seawater cooled them for free. They used zero water from any town, and the servers had 8 times fewer breakdowns than the same machines on land.

There is also nobody to argue with out at sea. Just last week, two companies pulled their plans to build data centers in Seattle because locals fought back. Those facilities alone would have eaten about a third of the city's daily power.

Of course, this could still fail. Saltwater eats steel. Big storms break things. Earlier wave-power companies have burned through hundreds of millions of dollars and never made it to commercial scale. Panthalassa's first real ocean test has not happened yet. Paying customers are not promised until 2027.

But the math has flipped. If grid power costs ten times what it did, the transformer arrives in four years, and the neighbors will not let you build, then floating computers in the open ocean stops looking ridiculous and starts looking like the only door still open.
Financial Times@FT

Peter Thiel, co-founder of Palantir and PayPal, is leading a $140mn investment in a US start-up that plans to use wave energy to fuel giant fleets of floating data centres. ft.trib.al/BxRK2rJ

16 · 88 · 633 · 182K
Master of Coin retweeted
Oliver @olvrgln
Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.

Every team building agents eventually hits the same wall: where do the files live? Not the chat history, the actual artifacts the agent works on.
> The contracts your agent redlined
> The claim files it updated
> The 200-page audit report it edited overnight while you were asleep

Today those documents live in a sandbox that dies in 30 minutes, an S3 bucket where concurrent writes clobber each other, or a GitHub repo that was never built to absorb agent-scale traffic.

So we built Mesa. The world's first POSIX-compatible filesystem with built-in version control, designed from the ground up for agents. You mount it into your sandbox like any other filesystem. Your agent reads and writes files normally. Behind the scenes every change is versioned, branchable, reviewable, and rollback-able — like a codebase, for any file type.

Mesa provides
– Branches so agents work in parallel without locking
– Durable storage that survives sandbox death
– Sparse materialization so massive document sets load instantly
– Fine-grained access control per agent
– Full history for human review and audit

Design partners are running Mesa in production across legal, healthcare, GTM, business ops, and coding agents. Private beta is open: link in the comments
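Mesa's actual interface isn't shown in the post, so the snippet below is only a rough Python sketch of the general idea it describes: the agent does ordinary file writes, while every save is also captured as an immutable, reviewable version. The VersionedDir class and its methods are hypothetical illustrations, not Mesa's API.

```python
import hashlib, json, shutil, time
from pathlib import Path

class VersionedDir:
    """Toy illustration: agents write normal files; every save is also
    recorded as a content-addressed snapshot for later review or rollback.
    Not Mesa's real implementation or API."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.history = self.root / ".history"
        self.history.mkdir(parents=True, exist_ok=True)

    def write(self, relpath: str, text: str) -> str:
        path = self.root / relpath
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(text)                      # the "normal" POSIX-style write
        digest = hashlib.sha256(text.encode()).hexdigest()[:12]
        snap = self.history / f"{relpath.replace('/', '__')}.{digest}"
        snap.write_text(text)                      # immutable snapshot of this version
        with (self.history / "log.jsonl").open("a") as log:
            log.write(json.dumps({"path": relpath, "version": digest,
                                  "ts": time.time()}) + "\n")
        return digest

    def rollback(self, relpath: str, digest: str) -> None:
        snap = self.history / f"{relpath.replace('/', '__')}.{digest}"
        shutil.copy(snap, self.root / relpath)     # restore an earlier version

# Usage: an agent redlines a contract, a human later rolls it back.
vd = VersionedDir("/tmp/agent-workspace")
v1 = vd.write("contracts/acme.md", "Original terms...")
v2 = vd.write("contracts/acme.md", "Redlined terms...")
vd.rollback("contracts/acme.md", v1)
```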
122 · 156 · 2.1K · 628.8K
Master of Coin retweeted
Brian Armstrong @brian_armstrong
For the agentic economy to overtake the human economy, agents need a way to discover services. We launched Agentic(.)market to give agents a discovery layer to find and integrate x402 services seamlessly. Add the skill to your agent. And list your services to start earning revenue.
> npx skills add coinbase/agentic-wallet-skills
nick.base.eth 🛡@Nick_Prince12

Introducing Agentic(dot)Market, the homepage of the agent economy.
- Monitor agentic commerce trends
- Discover services for your agent to buy
- Sell your services to agents
Thousands of services. Zero API keys. Powered by x402.
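Neither post shows what calling a paid x402 service looks like from an agent, so here is a hedged sketch of the generic HTTP 402 flow the protocol is built around: request, receive a 402 with payment requirements, retry with a signed payment attached. The endpoint URL, the sign_payment stub, and even the exact header name are illustrative assumptions; a real agent would lean on Coinbase's x402 tooling (for example via the agentic-wallet-skills package above) rather than hand-rolling this.

```python
import requests

SERVICE_URL = "https://example-service.dev/api/summarize"  # hypothetical x402-gated endpoint

def sign_payment(requirements: dict) -> str:
    # Stand-in only: in practice a wallet / x402 SDK builds and signs the
    # payment payload from the server's stated price and accepted assets.
    return "signed-payment-placeholder"

def call_x402_service(url: str, payload: dict) -> dict:
    """Sketch of the HTTP 402 flow: ask, get quoted, pay, retry."""
    resp = requests.post(url, json=payload)
    if resp.status_code == 402:                      # server wants payment first
        requirements = resp.json()                   # machine-readable payment requirements
        resp = requests.post(
            url,
            json=payload,
            headers={"X-PAYMENT": sign_payment(requirements)},  # header name assumed; check the x402 spec
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(call_x402_service(SERVICE_URL, {"text": "hello"}))
```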

266 · 263 · 2.3K · 501.3K
Master of Coin retweeted
Felix Prehn 🐶 @felixprehn
Texas just admitted it needs $174 billion for water. Not for roads. Not for schools. Not for energy. Water. The stuff that comes out of your faucet.

The Texas Water Development Board released the numbers last week. $174 billion over the next 50 years to prevent the state from running out of water. Double the last estimate from 2022.

Texas is adding 17 million people by 2080. A 53% increase. Water supply is dropping 10% over the same period from depleting aquifers. Without action, shortages could cause $177 billion in economic losses by 2030 alone. More than the cost of fixing it.

And it's not just population growth draining the system. Tesla's Giga Texas factory uses 556 million gallons of water per year. A single factory. Data centers are consuming 0.4% of the state's entire water supply and growing fast. In Austin, data centers and industrial demand are straining a water system built for residential use.

This isn't a Texas problem. It's a global one. The World Bank just launched a program called Water Forward targeting water security for 1 billion people by 2030. 14 countries signed on. They're calling it one of the defining infrastructure crises of the century.

Water is the only commodity on earth with no substitute. Oil has renewables. Gold has Bitcoin (if you believe that). Copper has aluminum for some applications. Water has nothing. You need it or you die. Every person, every farm, every factory, every data center. And it's running out faster than any government projected.

Where this creates an investment thesis almost nobody is talking about:

Xylem (XYL). The largest pure-play water technology company in the world. Builds the infrastructure that treats, tests, transports, and analyzes water. Revenue above $8 billion. Every dollar of that $174 billion Texas plan flows through companies like Xylem.

American Water Works (AWK). Largest publicly traded water utility. Serves 14 million people across 24 states. Water utilities are natural monopolies. You can't build a second pipe to someone's house. The customer can't switch providers. Pricing power is absolute and demand is non-negotiable.

Veolia (VEOEY). Global leader in water treatment and waste management. Operates on every continent. When countries need to build water infrastructure from scratch, Veolia gets the call.

Essential Utilities (WTRG). Growing through acquisitions of small water systems. Rural water infrastructure across America is crumbling. Most small systems are municipally owned with no budget to upgrade. Essential buys them, upgrades them, and charges the regulated rate.

Mueller Water Products (MWA). Builds the valves, hydrants, and pipes that make up the physical water distribution network. Every infrastructure dollar spent on water flows through components these companies manufacture.

The Invesco Water Resources ETF (PHO) gives you diversified exposure to the entire water infrastructure chain. When governments start writing $174 billion checks for water, every company in this ETF benefits.

Water infrastructure is the most boring and most inevitable investment thesis on earth. Nobody talks about it because it's not AI and it doesn't have a ticker on CNBC's bottom scroll. That's why it's still cheap.

every week i cover where the money is actually going before it makes headlines. former banker. felixfriends.org/live

(texas just said it needs $174 billion for water. double the last estimate. the state is adding 17 million people while aquifers are depleting. tesla's single factory uses 556 million gallons a year. data centers are draining supply in austin. the world bank just launched an emergency water security initiative for 1 billion people. water is the only commodity on earth with zero substitute. nobody on financial TV is covering this. $174 billion has to go somewhere.)
116 · 487 · 1.5K · 268.3K
Master of Coin retweeted
0xMarioNawfal @RoundtableSpace
CLAUDE CODE CAN NOW COPY ANY UI ON THE INTERNET. It scans real websites and rebuilds their design system instantly, turning any page into your own starting point.
222 · 655 · 8.8K · 1M
Master of Coin retweeted
Reliant News Hub @ReliantNews1776
🤔 “What If Trump Wants To Lose This War?”
2 · 34 · 82 · 2.7K
Master of Coin retweeted
rari @0xwhrrari
OpenAI just mass-fired their robotics team. One of the engineers DM'd me 20 minutes later. I didn't know him. He found me through a Polymarket thread.

His first message: "I have 30 days of severance and nothing to lose. Let me tell you what we actually use internally. It's not GPT"

I thought he was trolling.

"Every serious team at OpenAI prototypes on Claude Code. Not ChatGPT. Not the API. Claude Code connected to a repo. That's the actual workflow"

I asked why.

"Because Claude reads the codebase. GPT reads the prompt. There's a difference. One guesses. The other one understands the full context and builds on top of it"

He sent me one link. github.com/warproxxx/poly… 86 million Polymarket trades. Every wallet. Every entry. Every exit. Open source. Free.

"Point Claude Code at this. Say - find every wallet with 70%+ win rate and 100+ trades. Watch what happens"

I did it that night. Claude pulled 47 wallets in 4 minutes. Average profit: $214K. Hold time: 7 hours. 91% exit BEFORE resolution. They never wait for the outcome.

"Now look at how they exit"

Top wallets capture 86% of the move and cut at 12%. Everyone else captures 58% and holds losers to 41%. Same entries. Completely different results.

He sent another link. github.com/Polymarket/pol…

"Three commands. Your bot sees 500+ markets. No key needed. Read-only. Claude scores them in 20 minutes"

I asked why he's telling me all this.

"Because I just got fired for saying we should open-source more. So here I am open-sourcing everything I know"

Then he sent me an article where someone built the full bot from these repos in a weekend -> t.me/PolyGunSniperB…

Three exit triggers: Target 85% of move. Volume spike x3 - smart money out. 24h silence - thesis dead.

I copied the stack. Claude Code $20. VPS $5. $25/month. No team. No office. No GPT subscription. 17 days. 191 trades. 73% win rate. $850 seed. +$9,400.

I sent him my results. He replied: "This is exactly what I built as a side project at OpenAI. They made me delete it"

I asked if I could post this.

"Post it. What are they gonna do. Fire me again?"
Lunar@LunarResearcher

x.com/i/article/2041…
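Whatever one makes of the story, the specific query it describes ("find every wallet with 70%+ win rate and 100+ trades") is a plain group-by over a trades dump. A minimal pandas sketch, with the file name and column names assumed since the linked dataset's schema isn't shown here:

```python
import pandas as pd

# Illustrative only: the columns (wallet, pnl, hold_hours) are assumptions,
# not the actual schema of the linked Polymarket trades dataset.
trades = pd.read_csv("polymarket_trades.csv")
trades["won"] = trades["pnl"] > 0

per_wallet = trades.groupby("wallet").agg(
    n_trades=("pnl", "size"),
    win_rate=("won", "mean"),
    total_pnl=("pnl", "sum"),
    avg_hold_hours=("hold_hours", "mean"),
)

# The filter the post describes: 100+ trades and a 70%+ win rate.
top = per_wallet[(per_wallet.n_trades >= 100) & (per_wallet.win_rate >= 0.70)]
print(top.sort_values("total_pnl", ascending=False).head(20))
```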

31 · 37 · 359 · 156.7K
Master of Coin retweeted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
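Karpathy doesn't publish the scripts behind this, so the following is just a minimal sketch of what the incremental raw/ to wiki "compile" loop could look like. The directory names follow his post; ask_llm is a stub standing in for whatever model or agent call you would actually use.

```python
from pathlib import Path

RAW = Path("raw")    # source documents (clipped articles, papers, notes)
WIKI = Path("wiki")  # LLM-maintained articles, browsed in Obsidian

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in your own LLM client or agent call here.
    # Kept as a stub so the sketch stays self-contained and runnable.
    return f"(summary would go here)\n\nPrompt was {len(prompt)} characters."

def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    for doc in sorted(RAW.glob("**/*.md")):
        article = WIKI / doc.name
        if article.exists() and article.stat().st_mtime >= doc.stat().st_mtime:
            continue  # incremental: only (re)compile new or changed sources
        prompt = (
            "Summarize this source for a personal research wiki. "
            "List key concepts as [[backlinks]] and open questions at the end.\n\n"
            + doc.read_text()
        )
        article.write_text(f"# {doc.stem}\n\nSource: raw/{doc.name}\n\n" + ask_llm(prompt))

if __name__ == "__main__":
    compile_wiki()
```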
2.8K · 7K · 58.3K · 20.9M
Master of Coin retweeted
Ole Lehmann @itsolelehmann
karpathy just casually described the future of ai and most people scrolled right past it:

he's been building what he calls "llm knowledge bases." here's what that means in plain english:

you take everything you're interested in. articles, research papers, datasets, images, etc and you dump it all into one folder

then you point your ai at the folder and say "read all of this, organize it, and remember it"

the ai reads through every single source. writes summaries, groups related ideas together, links concepts across different articles

basically builds a personal library that's fully organized and searchable

and it maintains the whole thing for you. when you add something new, the ai reads it, figures out how it connects to everything already in the library, and updates automatically. karpathy said he rarely touches it himself

once the library gets big enough (~100 articles, ~400k words), you can start asking it complex questions and get answers pulled from across your entire collection
> "what are the common themes across these 30 papers"
> "what did i save six months ago that connects to this new idea"
> "summarize everything i have on topic x and tell me what's missing"

and every answer it gives gets filed back into the library. so the system gets smarter every single time you use it. the memory grows from both sides: what you save AND what you ask

now think about your own life for a second

you probably have
> thousands of twitter bookmarks you'll never reopen.
> hundreds of saved articles from the last year
> podcasts where someone said something brilliant and you can't remember what it was or which episode

all dead knowledge. you consumed it once and it disappeared

now imagine all of it lives in one system: organized, connected, and queryable. you could ask "what are the best pricing frameworks i've come across this year" and get an answer that pulls from:
1. a podcast you listened to in january
2. a twitter thread you bookmarked in march
3. and a blog post you forgot you even read

the ai connects dots across formats, across months, across topics. because it absorbed everything and has photographic memory of all of it

that's the dream. and karpathy built it

the problem: right now this requires obsidian (a note-taking app built around linked notes), command line tools, custom scripts, and browser extensions just to wire it all together. you need to be quite technical

karpathy even said it himself: "i think there is room here for an incredible new product instead of a hacky collection of scripts"

i think whoever packages this for normal people is sitting on something massive. one app that syncs with the tools you already use, your bookmarks, your read-later app, your podcast app, your saved threads. it pulls everything in automatically, the ai organizes and connects it over time, and you can ask questions across your entire personal library whenever you want

you never manually upload anything. it just learns in the background

someone please build this
Andrej Karpathy @karpathy (post quoted in full above)
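The "ask questions across your entire personal library" step Ole describes is the Q&A loop from Karpathy's post, and at this scale the retrieval side can stay simple. Below is a toy version of the "small and naive search engine over the wiki" Karpathy mentions, exposed as a CLI an agent could call as a tool. It is an assumed shape, not his actual code.

```python
import sys
from pathlib import Path

WIKI = Path("wiki")  # the same LLM-compiled wiki as in the sketch above

def search(query: str, max_hits: int = 10) -> list[tuple[str, str]]:
    """Naive keyword search: return (file, matching line) pairs an agent
    can read before answering a question across the whole library."""
    terms = [t.lower() for t in query.split()]
    hits: list[tuple[str, str]] = []
    for page in sorted(WIKI.glob("**/*.md")):
        for line in page.read_text().splitlines():
            if terms and all(t in line.lower() for t in terms):
                hits.append((str(page), line.strip()))
                if len(hits) >= max_hits:
                    return hits
    return hits

if __name__ == "__main__":
    # Example: python search_wiki.py pricing frameworks
    for path, line in search(" ".join(sys.argv[1:]) or "pricing"):
        print(f"{path}: {line}")
```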

137 · 141 · 1.8K · 309.4K
Master of Coin retweeted
Luke Pierce @lukepierceops
Anthropic and OpenAI are both building PE-backed consulting arms to deploy AI inside companies. Let that sink in for a second.

The two companies building the most powerful AI on earth looked at the market and said "businesses can't figure out how to use this. We need to go in and do it for them." They are literally telling you where the gap is.

Companies have access to the best AI models ever built. And most of them are still running on spreadsheets, disconnected tools, and manual processes because nobody showed them how to actually implement it.

That's the whole game right now. Not building better models (obviously) or shipping new features. IMPLEMENTATION. Getting AI inside real workflows. Mapping the processes, building the systems, and making it stick.

I've been doing exactly this for 4 years and have worked with 80+ companies at this point. It started with automation and naturally flowed into AI. And every single engagement starts the same way. Not with AI or automation but with a process map. Because AI alone won't fix broken operations. Companies now understand that. They have not yet seen true ROI from AI.

You have to understand how the business actually runs before you touch a single tool. Where does the data live? Where are the bottlenecks? What's manual that shouldn't be? What breaks when volume goes up?

That's the work, and that's what Anthropic and OpenAI just told the entire market is worth billions. Every company is going AI-first over the next 3-5 years. The demand for people who can actually make that happen is about to be unlike anything we've seen.

The labs told you where the gaps are. Now go fill them.
139 · 194 · 2.2K · 357.7K
Master of Coin retweeted
Ricardo @Ric_RTP
Jensen Huang just called out every CEO who’s been firing people “because of AI.”

Jim Cramer asked him why companies are laying people off if AI is supposed to make everyone MORE productive. Jensen's answer: "For companies with imagination, you will do more with more. For companies where the leadership is just out of ideas, they have nothing else to do. They have no reason to imagine greater than they are. When they have more capability, they don't do more."

Read that again. The man who built the most important tech company on Earth just told you that if your CEO is using AI to cut headcount, it means one thing: They have no imagination. They have no vision for what comes next. They got handed the most powerful tool in human history and their FIRST instinct was to fire people.

This is the CEO of NVIDIA. The company whose chips power every AI system on the planet. If anyone on Earth has the right to say "AI replaces workers," it's Jensen Huang. And he said the OPPOSITE. He said every carpenter could become an architect. Every plumber could become an architect. AI elevates capability. It doesn't eliminate it.

But here's where it gets really interesting... During the same interview, Jensen revealed something nobody's talking about: He said AI startups like OpenAI and Anthropic are seeing their revenues increase by one to two billion dollars a WEEK. And he wishes these companies were public so the world could see what he sees.

One to two billion per week. That's a $50 to $100 BILLION annualized run rate. For companies that most people think are burning cash and making nothing. The entire Wall Street narrative that "AI companies aren't profitable" might be completely wrong. Jensen sees their numbers. He sees their compute orders. He sees their growth. And he's saying the revenue is real.

So if the money IS real, why are other companies firing people? Because they're not building AI products. They're not creating new revenue streams. They're not using AI to expand into new markets. They're using AI as an EXCUSE to cut costs because they ran out of ideas 3 years ago and need something to tell the board.

Jensen's company added $500 billion in new orders in 5 months. He expects $1 trillion in cumulative revenue through 2027 from just two product lines. That number doesn't include the new chips, systems, or partnerships announced this week. And he's not cutting people. He's hiring. Because when you have imagination, more capability means MORE opportunity. Not less headcount.

Meanwhile Salesforce cut thousands. Meta cut thousands. Amazon cut thousands. All blaming "AI efficiency." Jensen's response: You're out of imagination.

He also said something that stuck with me. Cramer asked if he ever thought he'd build a $10 to $20 trillion company while waiting tables at Denny's. His answer: "I was just trying to make it through the shift." Biggest tip he ever got? Two, three dollars. Now he's building tech that increased computing demand by one million times in two years. He announced OpenClaw, which he says is as big as ChatGPT. And he's got 21 months of new business that isn't even counted in the trillion dollar figure yet.

When asked how long he plans to keep working? "I'm hoping to die on the job. And I'm not hoping to die anytime soon."

This is a man who believes every single thing he's building. And his message to every CEO using AI to justify layoffs is simple... You're not innovating. You're surrendering. The technology wasn't built to shrink companies. It was built to make them limitless. If your leadership can't see that, the problem isn't AI. It's THEM.
471 · 1.6K · 7.8K · 1.4M
Master of Coin retweeted
Stitch by Google @stitchbygoogle
Meet the new Stitch, your vibe design partner. Here are 5 major upgrades to help you create, iterate and collaborate:
🎨 AI-Native Canvas
🧠 Smarter Design Agent
🎙️ Voice
⚡️ Instant Prototypes
📐 Design Systems and DESIGN.md
Rolling out now. Details and product walkthrough video in 🧵
991 · 4.8K · 41.7K · 18.6M
Master of Coin retweeted
Global Dissident @GlobalDiss
🚨 SOMETHING BIG JUST HAPPENED BlackRock, the world’s largest asset manager, just BLOCKED withdrawals. Investors tried to pull $1.2 BILLION from its $26B private credit fund. BlackRock said NO and capped withdrawals at 5%. Nearly HALF the investors who wanted out were denied their money. At the same time, Blackstone faced record withdrawals and had to inject $400M of its own cash. When the BIGGEST funds on Earth start limiting withdrawals, it is a MAJOR WARNING sign for the entire $1.8 TRILLION private credit market.
1.6K · 15.4K · 42.4K · 2.3M
Master of Coin retweeted
Trending Bitcoin @TrendingBitcoin
🇺🇸 FIDELITY PREDICTS THAT ONE #BITCOIN WILL BE WORTH $1 BILLION BY 2038 🚀
213 · 345 · 1.9K · 134.4K