Paul Atreides
@PaulAtretre
11.6K posts

Poker Pro, aka: 'Slick Willy Omaha', aka: 'River Killer of Omaha', aka: 'Head Hunter of Omaha' "I can kill you with a word"

Caves of Arrakis · Joined September 2021
1.1K Following · 5 Followers
Paul Atreides retweeted
Investing visuals
Investing visuals@InvestingVisual·
A breakdown of the photonics value chain and the best positioned businesses:
Layer 1: Materials & wafers • $GLW • $AXTI • $IQE • $AIXA • $AMS
Layer 2: Core photonic devices • $IPGP • $COHR • $LITE • $LASR • $SIVE
Layer 3: Components & modules • $AAOI • $MTSI • $FN • $VIAV • $LPTH
Layer 4: Systems & equipment • $ASML • $BESI • $ASM • $LPKF • $MKS
Layer 5: Test, metrology & yield • $CAMT • $FORM • $AEHR • $ONTO • $VIAV
Investing visuals tweet media
Investing visuals@InvestingVisual

Photonics stocks are going parabolic:
• $LITE +1621%
• $AAOI +1434%
• $COHR +529%
• $CRDO +377%
• $VIAV +353%
• $FN +281%
• $MTSI +189%
• $MRVL +186%
• $IPGP +136%
• $POET +127%

19 replies · 207 reposts · 999 likes · 183.4K views
Paul Atreides retweeted
Muhammad Ayan
Muhammad Ayan@socialwithaayan·
BREAKING: ANTHROPIC JUST OPEN SOURCED THE ENTIRE WALL STREET WORKFLOW.

DCF models. LBO models. Equity research reports. Merger analysis. KYC checks. All of it. Free. On GitHub.

It connects Claude directly to:
-> Bloomberg, FactSet, S&P Global, Morningstar, PitchBook
-> Builds real Excel models with live formulas and sensitivity tables
-> Drafts CIMs, IC memos, earnings reports, and buyer lists
-> Runs PE due diligence, GL reconciliation, and NAV tie-outs

This is not a chatbot wrapper. These are production agents that own entire financial workflows. The kind firms pay $50,000 to $500,000 per year in software to run. Now it is a one-line Claude Code plugin install.

19.8K GitHub stars. Apache-2.0 License. 100% Open Source.
Muhammad Ayan tweet media
54 replies · 207 reposts · 1.3K likes · 148.8K views
Paul Atreides retweeted
InvestmentGuru
InvestmentGuru@InvestmentGuru_·
The Complete Data Center Stack: Where the AI Infrastructure Money Flows

Most investors still think AI is just about GPUs. That's incomplete. AI is an infrastructure buildout, and the real opportunity spans the entire data center stack. Every inference, every training run, and every deployed model depends on multiple layers working together. Here's the breakdown:

1. Compute Silicon (The Brain)
Tickers: $NVDA, $AMD, $AVGO, $INTC
This is the foundation. GPUs, CPUs, accelerators, and custom silicon power training and inference.
Why it matters:
- Compute demand keeps rising with larger models
- AI workloads are forcing faster chip innovation
- Custom ASICs are becoming a major trend

2. Server OEMs & Solutions (The Hardware Layer)
Tickers: $SMCI, $DELL, $HPE, $VRT, $ETN, $MOD
Chips need systems. These companies assemble and deliver the physical AI servers and power systems.
Why it matters:
- AI racks are denser and hotter
- Power distribution is now critical
- Cooling is becoming a competitive advantage

3. Memory & Storage (The Hidden Bottleneck)
Tickers: $SNDK, SK Hynix, $MU, $WDC, $P, Samsung, $NTAP
AI models consume massive amounts of memory bandwidth and storage.
Why it matters:
- High-bandwidth memory is becoming strategic infrastructure
- Data storage demand rises with AI deployment
- Faster access = better model performance

4. Networking & Connectivity (The Nervous System)
Tickers: $ANET, $CSCO, $MRVL, $CRDO, $CIEN, $NOK
AI clusters must communicate at ultra-high speed.
Why it matters:
- Faster networking reduces latency
- Data movement is becoming expensive
- Scale depends on interconnect efficiency
Key idea: AI cannot scale without bandwidth.

5. Neoclouds & Physical Infrastructure (The New Builders)
Tickers: $NBIS, $IREN, $CRWV, $APLD, $CIFR, $DGXX
These companies provide specialized AI infrastructure and hosting.
Why it matters:
- Cloud alternatives are growing
- AI-native infrastructure is becoming valuable
- Capacity shortages create pricing power

6. Energy (The Ultimate Constraint)
Tickers: $CEG, $NEE, $EOSE, $GEV, $EQT, $VST, $OKLO, $BE, $FLNC
AI consumes enormous electricity. Power availability is becoming a limiting factor.
Why it matters:
- Grid demand is surging
- Battery storage is essential
- Reliable baseload power matters

Final Thought
The market often focuses on one winner. But AI infrastructure is an ecosystem. If you want to understand where capital flows next, follow the stack: Compute → Servers → Memory → Networking → Infrastructure → Energy. The biggest winners in the AI cycle may not always be the obvious names. Sometimes the best opportunities are in the supporting layers that make the whole system possible.
InvestmentGuru tweet media
4 replies · 150 reposts · 600 likes · 37.1K views
Paul Atreides retweeted
Aditya
Aditya@Aditya_181105·
If you want to understand modern AI systems fundamentals, study these:
1. Transformers
2. Attention Mechanism
3. Embeddings
4. Vector Databases
5. RAG (Retrieval-Augmented Generation)
6. Fine-Tuning vs Prompt Engineering
7. AI Agents
8. Context Windows
9. Model Quantization
10. Hallucination & Alignment
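The attention mechanism on this list is small enough to sketch directly. Below is a toy numpy version of scaled dot-product attention, the core operation inside transformers; the matrix sizes (4 tokens, head dimension 8) are illustrative, not taken from any real model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — the core transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights                     # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 query tokens, head dimension 8
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one attended vector per query token
```

Each output row is a convex combination of the value vectors, with the mixing weights determined by how strongly that query matches each key.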
21 replies · 55 reposts · 305 likes · 8.9K views
Paul Atreides retweeted
Ajit kumar
Ajit kumar@ajitcodes·
Stop wasting hours trying to learn AI. I have already done it for you. With one list. Zero confusion. And no fluff.

📹 Videos:
1. LLM Introduction: t.co/kyDon6qLrb
2. LLMs from Scratch: t.co/2hyMhuKoiI
3. Agentic AI Overview (Stanford): t.co/FXu6cAqITC
4. Building and Evaluating Agents: t.co/ZigR1tdOFL
5. Building Effective Agents: t.co/uYwfwO55mO
6. Building Agents with MCP: t.co/4arFTW1b3i
7. Building an Agent from Scratch: t.co/eOmveyM9Hz
8. Philo Agents: t.co/zLu7x1tx9m

🗂️ Repos:
1. GenAI Agents: t.co/eXCl2YaRPv
2. Microsoft's AI Agents for Beginners: t.co/3CSW4zPAwf
3. Prompt Engineering Guide: t.co/GVzvxPYDVO
4. Hands-On Large Language Models: t.co/0rgDvhx3pI
5. AI Agents for Beginners: t.co/3CSW4zPAwf
6. GenAI Agents: lnkd.in/dEt72MEy
7. Made with ML: t.co/9z5KHF9DMe
8. Hands-On AI Engineering: t.co/dldAj5Xkr6
9. Awesome Generative AI Guide: t.co/U2WZhT4ERV
10. Designing Machine Learning Systems: t.co/sYAZX34YdQ
11. Machine Learning for Beginners from Microsoft: t.co/NjFxHbC9jZ
12. LLM Course: t.co/N34YTPu1OK

🗺️ Guides:
1. Google's Agent Whitepaper: t.co/bW3Ov3vMW0
2. Google's Agent Companion: t.co/wredwWAbBA
3. Building Effective Agents by Anthropic: t.co/fxtE4alVrJ
4. Claude Code Best Agentic Coding Practices: t.co/lLSwJ9pG7C
5. OpenAI's Practical Guide to Building Agents: t.co/xgkEIogGfh

📚 Books:
1. Understanding Deep Learning: t.co/CjcKpTemmV
2. Building an LLM from Scratch: t.co/DaWBxOx8o3
3. The LLM Engineering Handbook: t.co/ZA1n0N41Mf
4. AI Agents: The Definitive Guide - Nicole Koenigstein: t.co/boLkl1VlKb
5. Building Applications with AI Agents - Michael Albada: t.co/H1Xf5EkJLL
6. AI Agents with MCP - Kyle Stratis: t.co/JI3ELQZE6a
7. AI Engineering: t.co/Xk0JzMIf7o

📜 Papers:
1. ReAct: t.co/QNqE4UU55w
2. Generative Agents: t.co/CwEpoJgY1U
3. Toolformer: t.co/5m9xZd5teZ
4. Chain-of-Thought Prompting: t.co/KjVlgdWi77

🧑‍🏫 Courses:
1. HuggingFace's Agent Course: t.co/7FSUYKxIdG
2. MCP with Anthropic: t.co/IkZGiWm2yS
3. Building Vector Databases with Pinecone: t.co/2YRoMfLdXd
4. Vector Databases from Embeddings to Apps: t.co/23A50ixbHJ
5. Agent Memory: t.co/uc3L9BrNF7

Follow @iansh04_ for more!!
👇 Comment "AI" for more resources
Repost for your network ♻️
Bookmark for future.
Ajit kumar tweet media
67 replies · 382 reposts · 1.4K likes · 71.1K views
Paul Atreides retweeted
Rishi
Rishi@RishiUvaach·
🧠 𝗔𝗜 𝗶𝘀 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗖𝗵𝗮𝘁𝗚𝗣𝗧. Think of it as a universe with layers:

🌐 𝗔𝗿𝘁𝗶𝗳𝗶𝗰𝗶𝗮𝗹 𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲
The broad umbrella — machines performing tasks that normally need human intelligence: reasoning, planning, language, vision, speech, decision-making and automation.

⚙️ 𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
A core part of AI where systems learn patterns from data instead of being manually programmed for every rule.

🧬 𝗡𝗲𝘂𝗿𝗮𝗹 𝗡𝗲𝘁𝘄𝗼𝗿𝗸𝘀
Models inspired by the human brain, designed to recognise complex patterns across text, images, numbers, speech and behaviour.

🔥 𝗗𝗲𝗲𝗽 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
A more advanced form of neural networks with multiple layers, powering breakthroughs in computer vision, speech recognition, language understanding and recommendation systems.

✨ 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜
A specialised branch focused on creating new outputs — text, images, code, audio, video, summaries, designs and conversations.

Most people enter AI from the smallest circle: 𝗚𝗲𝗻𝗔𝗜. But real AI understanding begins when you see the full stack. Save this map — it makes the AI universe easier to decode. 🚀
Rishi tweet media
21 replies · 70 reposts · 211 likes · 5.5K views
Paul Atreides retweeted
Antonio Lupetti
Antonio Lupetti@antoniolupetti·
"Neural Networks and Deep Learning" by Michael Nielsen is a free online book that offers a clear introduction to the mathematical and conceptual foundations of modern neural networks and deep learning. It explores neural networks as biologically inspired computational models capable of learning from observational data, introducing many of the core ideas behind deep learning, backpropagation, gradient descent, and representation learning. The book also explains why neural networks have become fundamental in image recognition, speech recognition, and natural language processing, making it an excellent resource for anyone interested in the mathematical foundations of AI. It is very well written and accessible, and a GitHub repository containing exercises and implementations is also available. neuralnetworksanddeeplearning.com/index.html
Antonio Lupetti tweet media
2 replies · 87 reposts · 388 likes · 12.3K views
Paul Atreides retweeted
Rand Group
Rand Group@randgroup·
The AI supercycle will last 15 years. We're in year 3. Most investors are still buying Phase 1 names while the real money is already rotating into Phase 3. I mapped the entire cycle into 4 phases with the tickers that matter at each stage:

The AI supercycle is the biggest investment theme of our generation. Bigger than mobile. Bigger than cloud. A 15-year structural shift that will reshape every sector of the global economy. Hyperscalers just committed $725 billion in capex for 2026, nearly doubling last year. Microsoft, Google, Amazon, and Meta are each spending over $100 billion individually. This is not speculation. I've mapped the entire supercycle into four phases so you know exactly where we are and where the asymmetric opportunities sit.

🔴 Phase 1: Already Ran (2023 to 2025)
The foundation layer is complete. $AMD ran 78% in 2025, $NVDA 39%, and $INTC just posted a blowout Q1 that sent the Philadelphia Semiconductor Index above 10,000 for the first time. Chips still power every phase but the generational entries are gone and risk/reward has compressed.
- $NVDA, $AMD, $ARM, $INTC, $AVGO, $MU, $GLW
- Semiconductors, Memory & Storage, Photonics/Optics
- Foundation complete. Still growing but priced for it.

🟠 Phase 2: Peak Buildout (2025 to 2027)
The phase most investors just woke up to. $CEG acquired Calpine to become the largest U.S. private power producer at 55 GW. $GEV up over 200% in a year. $VRT co-engineering cooling for NVIDIA's Rubin architecture. $GLW up 74% YTD on optical fiber demand. Nuclear SMRs are the breakout with $OKLO, $SMR, and $BWXT positioning to power data centers directly. Still upside but the obvious names have moved.
- $CEG, $GEV, $VRT, $VST, $TLN, $ANET, $GLW, $MOD, $EQIX, $OKLO, $SMR, $BWXT, $NNE
- Power/Grid, Cooling, Networking, Nuclear/SMR
- Peak buildout. Nuclear SMRs are the sleeper.

🟡 Phase 3: The Positioning Window (2026 to 2028)
Where AI escapes the data center and enters the physical world. Most will be late. Tesla converting Fremont to Optimus production, $25B capex, mass production targeted H2 2026. Rocket Lab posted record $602M revenue with $1.85B backlog. $LUNR up 47% YTD with $943M in contracts. $KTOS Valkyrie drone selected for the Marine Corps. The window to position is open right now.
- $TSLA, $RKLB, $LUNR, $KTOS, $AVAV, $PATH, $ISRG, $MP, $FCX, $ALB, $ASTS
- Robotics/Autonomy, Space/Defense/Drones, Rare Earths
- This is where the asymmetric risk/reward lives.

🟢 Phase 4: Final Frontier (2028+)
The endgame. Microsoft capex $190B. Alphabet $190B. Amazon $200B. Meta $145B. Google Cloud backlog past $460B. They're building the rails for AI software dominance and AGI. Quantum is still early but $IONQ and D-Wave are laying groundwork. The platforms that control the software layer win the entire supercycle.
- $MSFT, $GOOGL, $AMZN, $META, $ORCL, $IONQ
- AI Software Dominance, AGI Infrastructure
- Decade-long thesis. Accumulate on weakness.

💊 Key Takeaway
- Phase 2 is confirmed ($725B hyperscaler capex)
- Phase 3 is where the smart money positions now
- Robotics, space, defense, and nuclear/SMR are the 2026 to 2028 trades
- Most will rotate into these names 12 months too late

15-year supercycle. Not a trade. Phase 1 ran. Phase 2 is priced. Phase 3 is where you want to be.
Rand Group tweet media
169 replies · 808 reposts · 3.9K likes · 749.2K views
Paul Atreides retweeted
Colorado Rockies
Colorado Rockies@Rockies·
ROCKIES WIN!
Colorado Rockies tweet media
45 replies · 199 reposts · 1.8K likes · 36.2K views
Paul Atreides retweeted
hardmaru
hardmaru@hardmaru·
The human brain🧠 is incredibly efficient because it only activates the specific neurons needed for a thought. Modern LLMs naturally try to do this too (>95% of neurons in feedforward layers stay silent for any given word), but our hardware punishes them for it.

One of the most frustrating paradoxes in deep learning: making a model do less math often makes it run slower. Why? Because unstructured sparsity introduces irregular memory access, and GPUs are built for predictable, dense blocks of math.

We teamed up with @NVIDIA to try to fix this hardware mismatch. Instead of forcing the GPU to adapt to the sparsity, we built a "Hybrid" format that reshapes the sparsity to fit the GPU. Our sparsity format (TwELL) dynamically routes the 99% of highly sparse tokens through a fast path, and uses a dense backup matrix as a safety valve for the rare, heavy tokens.

Through TwELL and a new set of custom CUDA kernels for both LLM inference and training, we translated theoretical sparsity into actual wall-clock speedups: >20% faster training and inference on H100 GPUs, while also cutting energy consumption and memory requirements.

Paper: arxiv.org/abs/2603.23198
Blog: pub.sakana.ai/sparser-faster…
Code: github.com/SakanaAI/spars… ⚡️
hardmaru tweet media
Sakana AI@SakanaAILabs

How do we make LLMs faster and lighter? Don't force the GPU to adapt to sparsity. Reshape the sparsity to fit the GPU! ⚡️

Excited to share our new #ICML2026 paper in collaboration with @NVIDIA: "Sparser, Faster, Lighter Transformer Language Models". This work introduces new open-source GPU kernels and data formats for faster inference and training of sparse transformer language models:
Paper: arxiv.org/abs/2603.23198
Blog: pub.sakana.ai/sparser-faster…
Code: github.com/SakanaAI/spars…

While LLMs are undoubtedly powerful, they are increasingly expensive to train and deploy, with a large part of this cost coming from their feedforward layers. Yet, an interesting phenomenon occurs inside these layers: for any given token, only a small fraction of the hidden activations actually matter. The rest approximate zero, wasting computation. With ReLU and very mild L1 regularization, this sparsity can exceed 95% with little to no impact on downstream performance. So, can we leverage this sparsity to make LLMs faster?

The challenge is hardware. Modern GPUs are optimized for dense matrix multiplications. Traditional sparse formats introduce irregular memory access and overheads that cancel out their theoretical savings for GEMM operations.

Our contribution is twofold:
1/ We introduce TwELL (Tile-wise ELLPACK), a new sparse packing format designed to integrate directly in the same optimized tiled matmul kernels without disrupting execution.
2/ We develop custom CUDA kernels that fuse multiple sparse matmuls to maximize throughput and compress TwELL to a hybrid representation that minimizes activation sizes.

We used our kernels to train and benchmark sparse LLMs at billion-parameter scales, demonstrating >20% speedups and even higher savings in peak memory and energy. This work will be presented at #ICML2026. Please check out our blog and technical paper for a deep dive!
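The activation-sparsity phenomenon the thread describes can be seen in a toy numpy sketch. The layer width and the negative shift below are illustrative choices, not numbers from the paper; the point is that a ReLU feedforward layer whose pre-activations skew negative emits mostly exact zeros:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden pre-activations of a feedforward layer for a batch
# of 16 tokens (real LLM FFN layers are far wider). The -1.5 shift stands
# in for the effect of training pressure (e.g. L1 regularization) pushing
# most units below zero.
pre_act = rng.standard_normal((16, 4096)) - 1.5
post_act = np.maximum(pre_act, 0.0)  # ReLU zeroes out every negative value

# Fraction of activations that are exactly zero — the sparsity a sparse
# format like TwELL would try to exploit.
sparsity = (post_act == 0).mean()
print(f"activation sparsity: {sparsity:.1%}")
```

Dense hardware still multiplies all 4096 columns per token; the paper's contribution is a packing format and kernels that skip the zeros without paying the irregular-memory-access penalty.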

50 replies · 502 reposts · 3.5K likes · 418.7K views
Paul Atreides retweeted
Lin
Lin@Speculator_io·
The trillion-dollar space race is taking off today:
$RKLB Rocket Lab +25.57%
$FLY Firefly Aerospace +21.08%
$BKSY BlackSky Technology +18.36%
$RDW Redwire Space +16.47%
$LUNR Intuitive Machines +15.97%
$SATL Satellogic +11.18%
$SIDU Sidus Space +11.19%
$VOYG Voyager Space +9.33%
$SPIR Spire Global +9.70%
$PL Planet Labs +8.43%
$FJET Starfighter Space +7.52%
$YSS York Space Systems +7.34%
$SPCE Virgin Galactic +6.59%
$ASTS AST SpaceMobile +6.28%
Lin tweet media
Lin@Speculator_io

The SpaceX IPO will ignite a trillion-dollar space race.

𝗦𝗽𝗮𝗰𝗲 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲
$RKLB Rocket Lab
$SIDU Sidus Space
$FLY Firefly Aerospace
$RDW Redwire Space
$LUNR Intuitive Machines
$MDA MDA Space
$VOYG Voyager Space
$YSS York Space Systems
$SPCE Virgin Galactic
$FJET Starfighters Space

𝗦𝗮𝘁𝗲𝗹𝗹𝗶𝘁𝗲 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀
$ASTS AST SpaceMobile
$GSAT Globalstar
$SATS EchoStar
$IRDM Iridium Communications
$ETL Eutelsat
$TSAT Telesat
$GILT Gilat Satellite Networks
$VSAT Viasat

𝗦𝗽𝗮𝗰𝗲 𝗜𝗺𝗮𝗴𝗶𝗻𝗴
$PL Planet Labs
$GSAT Globalstar
$SATL Satellogic
$BKSY BlackSky Technology
$SPIR Spire Global

𝗦𝗽𝗲𝗰𝗶𝗮𝗹𝗶𝘁𝘆 𝗠𝗮𝘁𝗲𝗿𝗶𝗮𝗹𝘀
$CRS Carpenter Technology
$MTRN Materion
$HXL Hexcel
$ATI ATI
$GLW Corning
$PKE Park Aerospace

𝗔𝗲𝗿𝗼𝘀𝗽𝗮𝗰𝗲 & 𝗗𝗲𝗳𝗲𝗻𝘀𝗲
$RTX RTX Corporation
$LMT Lockheed Martin
$KTOS Kratos Defense & Security
$VOYG Voyager Space
$LHX L3Harris Technologies
$NOC Northrop Grumman
$BA Boeing
$AIR Airbus
$HO Thales

𝗦𝗽𝗮𝗰𝗲 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀
$TDY Teledyne Technologies
$APH Amphenol
$KRMN Karman Space
$RBC RBC Bearings
$PH Parker Hannifin
$AME AMETEK
$GHM Graham
$HEI Heico
$DCO Ducommun
$ATRO Astronics
$TDG TransDigm

33 replies · 224 reposts · 1.2K likes · 235.1K views
Paul Atreides retweeted
Colorado Rockies
Colorado Rockies@Rockies·
Ended the homestand with a ROCKIES W!
Colorado Rockies tweet media
22 replies · 140 reposts · 1.5K likes · 23.3K views
Denver Broncos
Denver Broncos@Broncos·
We have signed General Manager George Paton to a new five-year contract through the 2030 season, Owner and CEO Greg Penner announced. 📰 » buff.ly/uymkjJ6
Denver Broncos tweet media
87 replies · 319 reposts · 2.8K likes · 366.9K views
Sam Block
Sam Block@theblockspot·
My AFC QB1 Rankings:
1. Patrick Mahomes
2. Josh Allen
3. Joe Burrow
4. Lamar Jackson
5. Drake Maye
6. CJ Stroud
7. Bo Nix
8. Trevor Lawrence
9. Justin Herbert
10. Daniel Jones
11. Kirk Cousins
12. Aaron Rodgers
13. Cam Ward
14. Geno Smith
15. Malik Willis
16. Deshaun Watson
68 replies · 11 reposts · 220 likes · 33.7K views