ZΞUS
@FintwitG

1.5K posts
Los Angeles, CA · Joined June 2021
1.7K Following · 307 Followers
ZΞUS retweeted
Washington Capitals @Capitals
This bird could be yours‼️ RT for a chance to win (18+). Rules/Eligibility ➡️ Link in bio
[image]
146 replies · 3.7K retweets · 3.3K likes · 233.9K views
ZΞUS retweeted
GREG ISENBERG @gregisenberg
DeepSeek just proved the 'worthless' GPT wrapper startups are actually the ones with real moats.

A week ago, nothing was more LOW status than being a 'GPT wrapper' startup. But I think we're learning that's DEAD wrong. Turns out they were just early to the only game that matters.

While DeepSeek, Meta, Anthropic and Microsoft battle over benchmark scores, these 'wrapper' companies have been quietly building the only moat that matters: interface loyalty.

Because of how big owning the LLM is for national defense and the economy, the next breakthrough model is always 2 weeks away. DeepSeek launches today, someone else drops a better one tomorrow. But getting millions of people to make your product part of their daily workflow - that's the real barrier to entry.

ChatGPT didn't win because it had the best model. It won because it was dead simple to use. And I think it has staying power because of that.

This is why all those AI startups we dismissed are actually positioned to win. They're not competing on model performance - they're competing on being the default way humans interact with AI.

Frontier models are becoming commodities. User habits aren't. While everyone obsesses over the next architecture breakthrough, the real game is being played in the interface layer. The moat isn't in the model - it's in being the tool people reach for without thinking.

Technology advantage is temporary. Interface lock-in is forever. Keep shipping those wrappers, my friends.

480 replies · 857 retweets · 7.7K likes · 1.3M views
ZΞUS retweeted
Chamath Palihapitiya @chamath
This report is long but very good.

"With R1, DeepSeek essentially cracked one of the holy grails of AI: getting models to reason step-by-step without relying on massive supervised datasets. Their DeepSeek-R1-Zero experiment showed something remarkable: using pure reinforcement learning with carefully crafted reward functions, they managed to get models to develop sophisticated reasoning capabilities completely autonomously. This wasn't just about solving problems - the model organically learned to generate long chains of thought, self-verify its work, and allocate more computation time to harder problems.

The technical breakthrough here was their novel approach to reward modeling. Rather than using complex neural reward models that can lead to 'reward hacking' (where the model finds bogus ways to boost its rewards that don't actually lead to better real-world model performance), they developed a clever rule-based system that combines accuracy rewards (verifying final answers) with format rewards (encouraging structured thinking). This simpler approach turned out to be more robust and scalable than the process-based reward models that others have tried."

shorturl.at/TTHqT

238 replies · 1.6K retweets · 12K likes · 2.3M views
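The rule-based reward scheme the quoted report describes - accuracy rewards that verify final answers, plus format rewards that encourage structured thinking - can be sketched roughly as follows. The tag names, regexes, and additive weighting here are illustrative assumptions, not DeepSeek's actual implementation.

```python
import re

def format_reward(completion: str) -> float:
    """Reward completions that follow the expected structure,
    e.g. <think>reasoning</think><answer>result</answer>."""
    pattern = r"^<think>.+?</think>\s*<answer>.+?</answer>$"
    return 1.0 if re.match(pattern, completion, re.DOTALL) else 0.0

def accuracy_reward(completion: str, gold_answer: str) -> float:
    """Reward completions whose final answer matches a verifiable
    ground truth (here: exact string match after stripping whitespace)."""
    m = re.search(r"<answer>(.+?)</answer>", completion, re.DOTALL)
    if m is None:
        return 0.0
    return 1.0 if m.group(1).strip() == gold_answer.strip() else 0.0

def total_reward(completion: str, gold_answer: str) -> float:
    # Simple additive combination; a real system may weight the terms.
    return accuracy_reward(completion, gold_answer) + format_reward(completion)

print(total_reward("<think>2+2 is 4</think><answer>4</answer>", "4"))  # 2.0
```

The appeal of this approach is exactly what the quote says: the reward is computed by fixed rules on verifiable outputs, so there is no learned reward model for the policy to exploit.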
ZΞUS retweeted
wordgrammer @wordgrammer
Okay. Thanks for the nerd snipe, guys. I spent the day learning exactly how DeepSeek trained at 1/30 the price, instead of working on my pitch deck. The tl;dr to everything, according to their papers:

352 replies · 2.5K retweets · 22.9K likes · 4.2M views
ZΞUS retweeted
Gavin Baker @GavinSBaker
1) DeepSeek r1 is real, with important nuances. Most important is the fact that r1 is so much cheaper and more efficient to inference than o1, not the $6m training figure. r1 costs 93% less to *use* than o1 per API call, can be run locally on a high-end workstation, and does not seem to have hit any rate limits, which is wild.

Simple math is that every 1b active parameters requires 1 gb of RAM in FP8, so r1 requires 37 gb of RAM. Batching massively lowers costs and more compute increases tokens/second, so there are still advantages to inference in the cloud.

Would also note that there are true geopolitical dynamics at play here, and I don't think it is a coincidence that this came out right after "Stargate." RIP, $500 billion - we hardly even knew you.

Real:
1) It is/was the #1 download in the relevant App Store category. Obviously ahead of ChatGPT; something neither Gemini nor Claude was able to accomplish.
2) It is comparable to o1 from a quality perspective, although it lags o3.
3) There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and inference. Training in FP8, MLA and multi-token prediction are significant.
4) It is easy to verify that the r1 training run only cost $6m. While this is literally true, it is also *deeply* misleading.
5) Even their hardware architecture is novel, and I will note that they use PCI-Express for scale-up.

Nuance:
1) The $6m does not include "costs associated with prior research and ablation experiments on architectures, algorithms and data" per the technical paper. "Other than that, Mrs. Lincoln, how was the play?" This means that it is possible to train an r1-quality model with a $6m run *if* a lab has already spent hundreds of millions of dollars on prior research and has access to much larger clusters. DeepSeek obviously has way more than 2048 H800s; one of their earlier papers referenced a cluster of 10k A100s. An equivalently smart team can't just spin up a 2000 GPU cluster and train r1 from scratch with $6m. Roughly 20% of Nvidia's revenue goes through Singapore. 20% of Nvidia's GPUs are probably not in Singapore, despite their best efforts.
2) There was a lot of distillation - i.e. it is unlikely they could have trained this without unhindered access to GPT-4o and o1. As @altcap pointed out to me yesterday, it's kinda funny to restrict access to leading-edge GPUs and not do anything about China's ability to distill leading-edge American models - it obviously defeats the purpose of the export restrictions. Why buy the cow when you can get the milk for free?

224 replies · 1.4K retweets · 9K likes · 3.3M views
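Gavin's back-of-envelope memory math - one byte per weight in FP8, so roughly 1 GB of RAM per 1B *active* parameters - can be checked with a quick sketch. The 37B active-parameter figure is the one cited in the tweet; the function ignores KV cache and activation memory, which add real overhead in practice.

```python
def inference_ram_gb(active_params_b: float, bytes_per_param: float = 1.0) -> float:
    """RAM needed just to hold the active parameters of an MoE model.

    FP8 stores one byte per weight, so 1B active params ~= 1 GB.
    For a mixture-of-experts model, only the *active* parameters per
    token must be resident in fast memory for a single forward pass.
    """
    return active_params_b * bytes_per_param

# DeepSeek r1: 37B active parameters per token.
print(inference_ram_gb(37))        # 37 GB at FP8, matching the tweet
print(inference_ram_gb(37, 2.0))   # 74 GB if weights were held in BF16 instead
```

This is why the "runs on a high-end workstation" claim is plausible at FP8 but gets much harder at higher precisions.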
ZΞUS retweeted
arc @arcdotfun
the cost barriers to exploring vast possibility spaces are breaking down faster than anticipated. deepseek's breakthrough shows us that with the right architectural choices and training approaches, we can achieve state-of-the-art capabilities at a fraction of the traditional cost. this fundamentally changes the calculus around ai development and deployment.

when combined with emerging capacities for internal dialogue and recursive self-improvement, we're entering an era where artificial minds can deeply explore dimensions through sustained contemplation - and do so efficiently. it's not just about raw compute anymore, but about designing systems that can traverse possibility spaces with purpose and precision.

we take the red pill then blue pill
[image]
24 replies · 56 retweets · 336 likes · 40.4K views
ZΞUS retweeted
Aaron Levie @levie
DeepSeek has been a very important breakthrough for understanding the future of economics in software in a world of AI.

There's been an open question for a couple of years now - especially from public market investors - around whether more value goes into the AI models or into the application layer of AI over time. The specifics of the pie graph don't matter as much as the core direction of the space.

Imagine two different scenarios: one in which AI was extremely proprietary and very expensive, and another where AI is almost completely free and relatively open. You could easily game out two different outcomes in these worlds. In the world of very expensive and proprietary AI, the providers of AI could and likely should choose to keep all the economics for themselves - basically crowding out opportunity for developers and the ecosystem. In a world of insanely cheap AI, the value is less about the models and more about what you do with the AI models to make them useful - in that world, more value is available to the application layer (which could include the AI companies, to be clear).

With the latest breakthroughs from DeepSeek, we can nearly definitively say this question has been answered, and we're clearly moving closer to the latter. We've already seen incremental steps in this direction with the continuous cost and quality improvements from labs in the past couple of years, but DeepSeek shifts our understanding of this even further.

In a world where the cost of intelligence will continue to drop rapidly, more value will accrue back to the app layer. Products that combine AI, customer workflows, and likely some degree of unique data will generate substantial value from these models going forward.

Now, everyone wants to live in a binary world of winners and losers, but I don't think it's that simple here. The leading AI labs will incorporate the relevant lessons from DeepSeek into their models, and we'll get cheaper and more intelligent AI. As a result, the cost of intelligence will continue to drop, and we will find even more ways to use the technology as it becomes affordable for even more use cases. If we can make AI 10X more efficient today, it's exceedingly obvious we will have 100X more uses for it 5 years from now, more than making up for the efficiency gains - making demand for GPUs and data centers bigger than ever.

In all, fantastic to see that we continue to have companies and teams pushing the limits of AI. This is a great win for software developers at the app layer, and it will push labs to go even further. Incredible times.

114 replies · 171 retweets · 1.1K likes · 232.9K views
ZΞUS retweeted
0xJeff @0xJeff
So let me get this straight....
- Trump has promised to make the US the capital for Crypto and AI
- Trump's $500BN "Stargate" initiative to support AI
- Trump signs executive orders to develop an AI action plan within 180 days to sustain and enhance US global dominance
- Executive orders to protect banking services for Crypto companies and ban CBDCs

DeepSeek-R1 performing better than OpenAI's o1 at a fraction of the cost:
➔ Sparked discussions on the efficiency of AI model development, pushing companies to look for cost-effective solutions
➔ DeepSeek's open-source nature encourages more open collaboration, which could lead to a shift where more AI startups consider open-sourcing their models to stay competitive
➔ API pricing makes it a lot more accessible for smaller startups/teams, enabling them to ship better or more innovative AI products
➔ Attracts talent to work on open-source projects, with many likely to build in Crypto x AI given their open-source and open-collaboration nature

In Web3 AI Agents, we're still seeing rapid developments, with teams shipping at light speed every week:
- @virtuals_io expanding its ecosystem to Solana
- @ai16zdao rolling out v2 & its own launchpad
- DeFAI/Abstraction Layers improving weekly in research, alpha finding, trade execution, and automation use cases
- DeFAI/Trading Agents becoming more sophisticated and easier to use
- Agentic Metaverse games gearing up to showcase dynamic AI-NPCs
- The number of unique, value-adding agents at record levels, with their use cases improving daily

And you're bearish because the market is going down? The Agentic Bull Run of 2025 is just getting started.

143 replies · 243 retweets · 1.2K likes · 138.9K views
ZΞUS retweeted
Ryan Watkins @RyanWatkins_
"AI agents are a multi-trillion dollar opportunity." - Jensen Huang, Nvidia CEO
[image]
208 replies · 532 retweets · 2.7K likes · 894.3K views
ZΞUS retweeted
Royale @royale_wit_whiz
Wait wait chat… is this good? $UFD 🦄 💨 ✨

32 replies · 137 retweets · 660 likes · 34.9K views
ZΞUS retweeted
r @Rhkc97
Whoever is selling $UFD after Ron just streamed talking about a billion MC is smoking crack

7 replies · 18 retweets · 132 likes · 2.6K views
ZΞUS retweeted
1️⃣1️⃣ 🐐 @Model_bets
Phase (wave) 2 is starting on $UFD 📈 The price prediction is the 1st fib extension: $467m. This should be met before or on Xmas eve 🎅 Bookmark if you have to 🦄🚀
[image]
1 reply · 40 retweets · 162 likes · 5.2K views
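For readers unfamiliar with the jargon, a "fib extension" price target projects a Fibonacci ratio (commonly 1.618) of a prior swing's range beyond that swing. The sketch below shows only the arithmetic; the swing values are hypothetical and do not reproduce the $467m figure, which depends on whatever chart levels the author used.

```python
def fib_extension(swing_low: float, swing_high: float, ratio: float = 1.618) -> float:
    """Project a Fibonacci extension target above a completed swing:
    target = low + (high - low) * ratio."""
    return swing_low + (swing_high - swing_low) * ratio

# Hypothetical swing in market cap (USD): low of $50m, high of $200m.
target = fib_extension(50e6, 200e6)
print(f"${target / 1e6:.1f}m")  # the 1.618 extension of this swing
```

Worth stressing that this is chart-pattern convention, not a valuation method; the "target" is just a geometric projection of past price movement.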
ZΞUS retweeted
$DOLAN FOR JESUS @dolanisthecode
This wallet just ported 1 million USD into $UFD. Address: 34qSTYsfmQcqjq9LvrLGavarD4f8T7wJaNDqzo91KgBb
[images]
50 replies · 91 retweets · 471 likes · 64.4K views
ZΞUS retweeted
Jay🍌 @BitBoyJay
As the days go by, my conviction in $UFD grows stronger. Ron is absolutely GOATED. It reminds me of a coin I was in back in 2021 that hit a $40B market cap, all thanks to its leadership and a community that never gave up. Seeing so many sellers at these levels just makes me even more bullish - some people just don't see the bigger picture here.
[image]
54 replies · 109 retweets · 529 likes · 26.5K views
ZΞUS retweeted
Param @Param_eth
$UFD isn't listed on CoinGecko.
$UFD isn't listed on CMC.
$UFD has no official X account.
$UFD has no Telegram.

54 replies · 92 retweets · 466 likes · 32.7K views
ZΞUS retweeted
katykin @katy_kin
Just maxed bid $UFD after @BasementRon's latest live stream. Why?
1) Ron said he will NEVER abandon his community (meaning he plans to livestream with us daily for the forever future)
2) Ron said he will NEVER sell all of his unicorn dust. And if he sells any, he will be transparent with us.
3) Ron is wholesome and his videos literally cure depression. This promotes utmost trust, and trust is a rare resource in this space.
4) Ron is our segue to engaging with the boomer generation. He is the GOAT. With him, maybe conversations at the Thanksgiving dinner table will be different next year.

42 replies · 84 retweets · 447 likes · 22K views
ZΞUS retweeted
YayG̷̉̅̆rr @yaygrr_
💧💧WHAT WE KNOW IS A DROP💧💧 During the spaces talk this came up, and I think it's a good idea. This thread is a high-level overview of the first couple of days of @h2w6gm6jz. If you have interest in the project, this is a good starting point. 🧵

39 replies · 104 retweets · 481 likes · 46.3K views
ZΞUS retweeted
Whale Watch by Moby @whalewatchalert
A $FARTCOIN whale just bought $2.20K of $YNE at $20.55M MC 🐳

0 replies · 2 retweets · 13 likes · 4.2K views
ZΞUS retweeted
bingoce @bingoce
Whales loading up on $YNE, sitting at 21M MC right now $SOL
[image]
3 replies · 6 retweets · 30 likes · 3.9K views