Dh🅰️rmesh Patel

6.3K posts


@pateltexas

Joined November 2014
111 Following · 483 Followers
Dh🅰️rmesh Patel retweeted
₿itcoin ₿utcher 🥩 🐑 🐷
Anyone who owns $IREN has homework. Read 👇
franklee6924x@franklee6924T

From DGX to DSX — NVIDIA’s Secret Weapon Is $IREN

DGX was the pivotal turning point that transformed NVIDIA from a chip company into a systems company. Born of the ambition to create a “unified data center standard,” DGX encountered resistance from the hyperscalers: they refused to adopt NVIDIA’s unified standard and instead developed their own chips, frustrating NVIDIA’s vision of becoming the dominant systems platform of the AI era. Google is perhaps the most notable example. After initially falling behind in the core AI race, it rapidly recovered and mounted a full-scale counterattack, at one point nearly matching NVIDIA’s market capitalization and challenging NVIDIA’s status as the “godfather” of AI.

DGX failed to conquer the cloud giants’ strongholds. NVIDIA’s massive sales still came primarily from individual GPU chips, while its plan to establish DGX as a new systems standard combining GPUs and software did not succeed. Strategically, however, DGX laid an extremely important foundation. Customers could reject the complete DGX system, but they still had to remain compatible with NVIDIA’s software management stack; otherwise GPU performance could not be fully utilized. As a result, technologies such as NVLink, NVSwitch, and Base Command matured alongside the market, enabling NVIDIA to evolve from simply selling GPUs into a company with full-stack platform control, while solidifying its dominance in scientific computing and private cloud markets.

Entering the Blackwell era, the physical limits of power consumption, interconnect complexity, and liquid cooling made it impossible for the industry to continue operating independently. NVIDIA formally introduced the standardized AI factory architecture known as DSX, positioning it as the optimal path for building large-scale AI data centers. From this point onward, DGX evolved into DSX.
In other words, it evolved from a “single-machine AI supercomputer” into a “data-center-scale AI factory standard,” completing the transition from standardizing one machine to standardizing an entire factory.

During the Blackwell generation, AI training systems pushed power consumption, interconnect complexity, and thermal management close to physical limits: single-rack power draw surpassed hundreds of kilowatts, NVLink/NVSwitch topologies became dramatically more complex, and liquid cooling shifted from optional to mandatory. In theory, this generation already required a standardized architecture like DSX. However, the supply-chain ecosystem was not yet mature, and no partner possessed the full engineering capability necessary to build a true “system-level AI factory.” As a result, DSX remained only a concept and reference design.

By the Vera Rubin era, NVLink 6, NVSwitch 6, and NVL72 rack systems formed a scalable, reproducible interconnect foundation, finally giving DSX the conditions necessary for practical deployment using NVIDIA’s full-stack technology. But that alone was still insufficient. To fully realize DSX, the industry also required:

- High-density interconnected rack architecture capabilities
- Large-scale liquid cooling expertise and construction experience
- GW-scale single-site campuses with stable long-term power supply

These became the necessary conditions for constructing a flagship DSX factory, and only one company in the world possesses all three simultaneously. At this point, IREN enters the stage. Beyond those three core requirements, IREN possesses several additional strategic characteristics:

Grid-based power supply. First, grid power solves the stability problem: to serve as a flagship DSX standard site, power interruptions and voltage fluctuations are unacceptable, and large-scale grid infrastructure provides industrial-grade voltage stability guarantees. Second, relying on the grid offers superior cost economics. Third, it provides regulatory compliance as public infrastructure, removing the unpredictable risks often associated with behind-the-meter (BTM) power systems, which frequently carry “gray-area” or temporary characteristics and therefore lack sufficient long-term reliability.

GW-scale infrastructure. This enables the creation of multiple DSX modular standards. Small and medium-sized data centers become trivial by comparison: deployments from 10 MW to over 1 GW can all be standardized, making IREN the ideal flagship demonstration platform. We already know there will likely be SW2 and potentially additional nearby expansion sites; the total power capacity is enormous. DSX only truly begins with Rubin, and the upgrade path beyond it will continue for many years, so possessing ultra-large campus-scale sites within a single region is critically important. This advantage makes IREN the one unavoidable choice for NVIDIA: no other company possesses such massive strategic power infrastructure concentrated within a single region, and the long-term significance and moat of such infrastructure can hardly be overstated. Small scattered sites stitched together, even if they collectively total several GW, are simply incomparable to IREN’s grid-connected GW-scale campuses concentrated in single regions.

Green energy. As global concern over AI energy consumption rises, future “carbon footprint” metrics will become core evaluation standards for sovereign AI procurement. IREN’s long-term commitment to renewable energy allows NVIDIA’s DSX standard to become not only “the most powerful” but also “the greenest,” which is critically important for attracting national-level infrastructure customers.

Owned land and expansion capability. DSX requires data centers to be constructed from the ground up, including specialized transformers, ultra-heavy rack support systems, and complex liquid cooling pipelines. Only companies with full ownership of their land can customize AI factories entirely according to NVIDIA’s blueprint without facing endless approval bottlenecks or third-party building restrictions.

Vertical integration and data center engineering expertise. IREN is not merely a data center operator; it is one of the only vertically integrated companies in the industry that owns everything from greenfield development and site development to power procurement, operations, and maintenance. For a DSX flagship factory, NVIDIA needs a partner capable of rapidly executing its reference designs, and IREN’s model of designing, building, and operating everything itself dramatically shortens the timeline from blueprint to first deployed GPU.

Liquid cooling capability. DSX is fundamentally a liquid-cooled-era architecture, so liquid cooling becomes a central requirement. IREN already possesses high-density rack deployment experience through the Horizon project, and its Chief Innovation Officer is one of the most influential and experienced U.S. engineering experts in data center liquid cooling, high-density thermal architecture, and ASHRAE standards; he joined IREN specifically to help establish standards.

Long-term operational data accumulation. IREN has years of operational experience managing large-scale, high-heat-density facilities running at full load. The physical environment of Bitcoin mining is remarkably similar to AI inference: both involve 24/7 full-load operation with extreme thermal output. This long-term expertise in managing massive electrical and thermal loads is, in reality, an extremely competitive advantage within the industry.

From the analysis above, one can understand why IREN possesses such uniqueness and strategic importance in NVIDIA’s DSX ecosystem, and one can also infer the likely development path of DSX itself: DSX will likely follow a “top-down” design philosophy.
Using IREN’s massively scalable GW-scale sites and specialized engineering capabilities, NVIDIA can define a flagship standard that is “multi-scale, most advanced, most efficient, and greenest,” then deconstruct that blueprint into modular, reproducible AI factory units. In the future, whether a customer operates a GW-scale campus or merely a single row of racks, as long as they purchase NVIDIA’s “DSX-certified package,” they could theoretically produce tokens with the same efficiency as IREN. This strategy of defining the upper limit and then distributing the standard downward reflects NVIDIA’s true ambition to control the global AI infrastructure ecosystem.

IREN’s Sweetwater site, along with future surrounding expansion campuses, could become the incubation base for future AI intelligence factories. The scale of this project may become one of the largest engineering undertakings in human industrial history: “Intelligent factories produce intelligence, and DSX defines how those factories are built and run.” This concept has already moved beyond theoretical logic into actual execution. The reason I am able to describe this vision is that I have been observing this direction consistently for a long time, and in reality, developments do appear to be moving this way.

The broader historical backdrop behind the emergence of the DSX system comes primarily from three major forces.

First, the rapid development of the AI industry has positioned DSX at the center of a major inflection point in compute infrastructure. DSX is a natural product of the industry reaching a new stage of maturity. AI is no longer confined to internal model training inside a few hyperscalers; the entire world now requires AI compute, including sovereign AI, enterprise private AI, neo-clouds, AI inference platforms, agent networks, token factories, vertical-specific models, and national AI infrastructure.
Many countries, particularly in the Middle East, Europe, and Southeast Asia, are unwilling to place core AI workloads inside the public clouds of U.S. tech giants due to data sovereignty concerns. Through DSX templates, NVIDIA can help these nations rapidly build their own “national AI factories.” Hyperscalers can no longer monopolize AI infrastructure; this has become one of the most important changes of the past two years, and it forms the foundational soil for DSX to grow.

Second, hyperscalers themselves are now constrained by power, land, permitting, transformers, and cooling systems. They are no longer in a state of unlimited expansion, and AI inference also requires broader distributed deployment. In the future, there will be large numbers of regional AI factories, national AI nodes, and enterprise private clusters whose operators do not want to rely entirely on hyperscalers. Meanwhile, Google’s TPU, Amazon’s Trainium, and Microsoft’s Maia are all advancing rapidly. Over time, they may reduce GPU purchases, form closed ecosystems, and sell their own AI services externally, creating a strategic threat to NVIDIA. Therefore, NVIDIA must cultivate a non-hyperscaler AI ecosystem.

Third, by the Blackwell and Vera Rubin eras, single-rack power consumption has already reached the 100 kW to 200 kW range, and traditional air cooling, cabling, and power topology can no longer support these systems. This means that data centers not built according to NVIDIA’s DSX standards (system-level liquid cooling, GB200 NVL72 architecture, and related infrastructure) simply will not be able to run the highest-efficiency compute systems. In other words, physical laws themselves are forcing the market to adopt NVIDIA’s standards; DSX effectively becomes the entry ticket to the AI era. Under this backdrop, DSX attempting to define the entire AI factory standard becomes a completely natural progression.
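To put the rack-power figures in perspective, here is a back-of-envelope sketch. The per-rack draws come from the 100-200 kW range cited above; the 1 GW campus size and the 1.3 overhead multiplier (a PUE-style allowance for cooling and distribution losses) are illustrative assumptions, not figures from the post:

```python
# Back-of-envelope: how many high-density racks a campus power budget
# can support, under assumed per-rack draws. All figures illustrative.
def racks_supported(campus_mw: float, rack_kw: float, overhead: float = 1.3) -> int:
    """Racks a campus can power, reserving a PUE-style `overhead`
    multiplier for cooling and distribution losses."""
    usable_kw = campus_mw * 1000 / overhead
    return int(usable_kw // rack_kw)

# A hypothetical 1 GW (1000 MW) campus at the post's quoted rack draws:
for rack_kw in (100, 150, 200):
    print(f"{rack_kw} kW/rack -> {racks_supported(1000, rack_kw)} racks")
```

Even at the top of the quoted range, a single GW-scale site supports thousands of racks, which is the scale argument the post is making.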
It encompasses GPU architecture, network topology, liquid cooling standards, power design, rack standards, software orchestration, inference optimization, and token factory production pipelines, reflecting an ambition to turn AI compute into something like an “industrial iPhone operating system.”

After understanding this broader context, one can better appreciate the deeper strategic meaning behind IREN’s acquisition of Mirantis. To build a standardized flagship DSX factory, IREN already possessed massive GW-scale physical infrastructure, liquid cooling capability, and engineering expertise, but it still lacked the software layer needed to bridge hardware and cloud services. Mirantis fills this gap perfectly: its deep experience in OpenStack, Kubernetes, and bare-metal management enables IREN to transform DSX into a directly usable cloud platform, allowing customers to deploy AI workloads out of the box.

For NVIDIA, the acquisition lets its key partner IREN free DSX from dependence on AWS, Google, and other cloud giants’ software ecosystems, establishing an independent, vertically integrated stack. For IREN, it elevates the company from a power and infrastructure supplier into a true “neo-cloud” platform capable of delivering sovereign AI and national-scale AI infrastructure. Mirantis will also integrate NVLink topologies and DSX-specific features directly into software orchestration, enabling AI factories to achieve automated scheduling and token-level operational stability.

Although CRWV and NBIS also possess software with somewhat similar functionality, their stacks are largely designed for internal use and are difficult to standardize for export. Mirantis, by contrast, is inherently a cloud-native software company serving global customers, which allows IREN to turn DSX into an exportable, software-defined AI factory template.
Its core product, k0rdent, can unify bare-metal, virtual machine, and Kubernetes management while deeply optimizing for NVIDIA GPUs, a capability IREN could not realistically have developed internally. One could speculate that NVIDIA itself encouraged this acquisition (especially given how inexpensive the deal appeared, with IREN seemingly receiving extraordinary value). The ultimate objective may be to give DSX an independent software control layer outside AWS and Google while creating a sovereign AI solution deliverable globally. Mirantis upgrades IREN from a hardware host into the software brain of DSX, while giving NVIDIA a strategic ally in global AI infrastructure that is open-source-oriented, conflict-free, economically aligned, and technologically synchronized.

NVIDIA’s choice not to acquire Mirantis directly, instead allowing IREN to do so, likely centers on avoiding antitrust concerns, maintaining delicate relationships with hyperscalers, and ensuring the software layer remains closely aligned with practical AI factory operations. An IREN acquisition appears as ecosystem collaboration rather than market domination. At the same time, Mirantis software must deeply integrate with IREN’s GW-scale power, liquid cooling, and operations systems, making IREN the more efficient owner. Financially, NVIDIA benefits through warrants tied to IREN’s growth without needing to bear integration costs itself. Through this strategy, NVIDIA effectively supports the emergence of a fully aligned DSX flagship manufacturing partner while preserving its own asset-light structure and strategic control position.
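For a flavor of the orchestration problem such a stack addresses (this is plain Kubernetes, not k0rdent's actual interface, and all names below are placeholders), GPU workloads on Kubernetes are typically scheduled by requesting the `nvidia.com/gpu` extended resource exposed by NVIDIA's device plugin:

```yaml
# Illustrative Kubernetes Pod requesting a GPU via the standard
# nvidia.com/gpu extended resource. Names and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker            # placeholder workload name
spec:
  containers:
    - name: model-server
      image: example.com/model-server:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1         # request one GPU; the scheduler only
                                    # places the pod on nodes with free GPUs
```

Management layers like the one described above sit on top of primitives like this, adding fleet-wide provisioning, topology awareness, and bare-metal lifecycle management.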
A full-scale DSX rollout would potentially:

- Form the foundation for NVIDIA reaching a $10-15 trillion valuation
- Become the inevitable path for NVIDIA’s vision of AI intelligence factories and operational control
- Represent the most economical and efficient path for AI industry development
- Solve the post-Vera Rubin scaling direction for compute growth
- Become NVIDIA’s only viable method for breaking out of hyperscaler encirclement

IREN becoming the sole top-level collaborator in such a massive project could not have happened spontaneously; planning something of this scale would likely require at least a year or more of preparation. Ever since interactions between NVIDIA and IREN began to appear unusually secretive, I have noticed multiple examples of unusual behavior between the two companies, almost like two people who already know each other pretending not to in public. Overall, they likely did not want the industry to speculate too early about their true intentions, while also minimizing regulatory attention. Even IREN, once an unusually transparent Bitcoin mining company, has become more guarded; in that sense, the limited interaction between IREN’s investor relations team and the market may actually make sense.

At this point, IREN has already completed the most difficult parts of its AI industrial expansion:

- High-quality, massive-scale, long-term stable power supply, still growing further
- Secured supply access to the latest GPUs
- Developed engineering teams and supply-chain maintenance capabilities
- Status as a flagship manufacturing partner for next-generation AI intelligence factories

The next inevitable step is filling IREN’s enormous power capacity with high-quality customer contracts. Unlike before, however, IREN may no longer need to build a traditional sales force or aggressively market its software capabilities.
NVIDIA itself would likely help facilitate customer adoption while emphasizing the superior token-generation efficiency of the DSX system, because the economic interests of both companies are now deeply aligned. Under the DSX standard, NVIDIA could gradually evolve from a “supplier” into a “global orchestrator.”

First, securing partnerships with companies like Anthropic would no longer be solely IREN’s concern. NVIDIA itself has strong incentives to push major AI companies already experimenting with TPU systems toward using more NVIDIA-based infrastructure.

Second, NVIDIA holds massive warrants in IREN. Every major contract signed by IREN potentially increases its stock price, allowing NVIDIA to profit not only from GPU sales but also from appreciation in IREN’s equity value. Jokingly speaking, one could say IREN “used warrants to buy itself a world-class salesman.”

Third, the emergence of sovereign AI has opened an entirely new market. Since IREN acquired Mirantis, the term “sovereign AI” has appeared increasingly frequently. In fact, when observers originally evaluated IREN’s sites, many already noted their suitability for sovereign AI deployments; the strategic quality of those sites is fundamentally incomparable to the fragmented infrastructure assembled by many competitors. NVIDIA needs a GW-scale “pure-blood” flagship to demonstrate to sovereign AI customers globally that its DSX architecture can achieve superior token efficiency. Sovereign AI customers may not want to hand their compute, data, models, or orchestration layers to the three major U.S. hyperscalers, but they may still accept supplier sovereignty. The distinction is subtle but important, and IREN’s careful positioning and boundary management become critical here. Even the Mirantis acquisition did not overextend into hyperscaler territory; in fact, sovereign AI is already one of Mirantis’s core areas.
From this perspective, NBIS may actually be poorly positioned for sovereign AI, because its full-stack platform structure is precisely what sovereign AI customers are attempting to avoid. Overall, IREN appears to be positioning itself at a point that maximizes strategic optionality and economic upside. If it attempted to define itself as a fully integrated, hyperscaler-like platform, cooperation with a company at NVIDIA’s level would likely become far more difficult. The partnership with NVIDIA may sacrifice some of IREN’s historical emphasis on flexibility and optionality, but technological evolution tends to follow efficiency, and the emergence of the “Magnificent Seven” itself demonstrates that antitrust frameworks increasingly must adapt to technological realities.

For IREN, the most important objective during this enormous capital expenditure cycle is rapidly establishing scale advantages. These data center assets ultimately become long-term hard assets fully owned by the company; the more infrastructure accumulated now, the greater IREN’s strategic flexibility becomes in the future. From that perspective, this is an extremely rational strategy. As IREN gradually becomes one of the standard-setters for the next-generation compute ecosystem, it could eventually open additional monetization paths such as standardized AI factory design fees, consulting and licensing revenue, and software licensing income. Compared to its core business these may remain relatively small, but the strategic value of occupying the top layer of the ecosystem could become nearly limitless.

Many people, especially institutions, already seem to recognize these dynamics. IREN’s stock price may not have risen dramatically yet, but its trading volume appears to reveal something unusual; the volume alone has become a phenomenon in its own right.
Meanwhile, IREN’s $6 billion ATM facility has remained active, and immediately after earnings the company issued a $2 billion convertible bond deal, later increased to $3 billion due to overwhelming demand. The intensity of demand, the favorable interest rates, and the high conversion prices were genuinely surprising. If the narrative described above is even partially correct, such investor enthusiasm becomes entirely understandable. Furthermore, the remaining $5 billion of ATM financing demand will likely be sold at significantly higher prices.

At this point, CRWV, NBIS, NSCALE, and Lambda increasingly appear to function as alliance members within NVIDIA’s broader ecosystem. Capital markets have seen constant fighting among supporters of the three neo-cloud stocks, especially between NBIS and IREN supporters, almost to the point of ideological warfare. But IREN may ultimately represent NVIDIA’s final and most important strategic move: the piece that controls the overall board. Importantly, IREN achieved this position through its own decisions and execution; it was not merely “chosen” or artificially supported. Yet at the same time, NVIDIA likely must publicly deny any direct support relationship; readers can think carefully about the reasons themselves.

NVIDIA’s earlier strategic investments were designed primarily to secure the GPU deployment ecosystem. As the DSX system matures, companies like CRWV, NBIS, NSCALE, and Lambda may increasingly become deployment and implementation partners. Interestingly, during the earlier NBIS-versus-IREN debates, some NBIS supporters argued that the two companies did not need to be adversaries and might eventually cooperate, for example through IREN leasing power capacity to NBIS. Looking at things now, cooperation indeed seems possible, but perhaps in the opposite direction: IREN may ultimately become the holder of the standard itself, licensing intellectual property outward.
Finally, this article is ultimately just speculative corporate-strategy fiction — written mainly for entertainment purposes, not investment advice.

Dh🅰️rmesh Patel retweeted
ASTS Investors 🅰️
ASTS Investors 🅰️@ASTS_Investors·
DE SHAW INCREASES ITS AST SPACEMOBILE POSITION BY NEARLY 150%. DE Shaw loaded back up on AST during Q1, purchasing over 2 million shares. Their position was worth $285M at the end of Q1 💰
Q1 2026 - 3,437,298
Q4 2025 - 1,404,959
Change - ⬆️ 2,032,339
$ASTS
Dh🅰️rmesh Patel retweeted
Gaetano
Gaetano@crux_capital_·
The $PENG community is strong! So I just took the paywall down so everyone can see it. cruxcapitalgroup.substack.com/p/peng-a-new-x… Please bookmark & share!
Gaetano@crux_capital_

Alright, $PENG folks. @pennycheck put me onto this one early last week. There is so much to unpack; quite an interesting company! I laid out all my thoughts here: cruxcapitalgroup.substack.com/p/peng-a-new-x… I'll throw a summary up on X later today or tomorrow.

Dh🅰️rmesh Patel retweeted
Muhammad Ayan
Muhammad Ayan@socialwithaayan·
50 WEBSITES GOOGLE DOESN'T WANT YOU TO KNOW

1. 12ft.io — bypass any paywall
2. libgen.is — millions of free textbooks
3. sci-hub.se — free research papers
4. alternativeto.net — find free app alternatives
5. justwatch.com — find where to stream anything
6. archive.org — access any old webpage ever
7. gutenberg.org — 70K free classic books
8. pdfdrive.com — free PDF downloads
9. openculture.com — free courses from top unis
10. wolframalpha.com — solve any math instantly
11. photopea.com — free Photoshop in browser
12. squoosh.app — compress any image free
13. remove.bg — remove image backgrounds free
14. cleanup.pictures — erase objects from photos
15. unscreen.com — remove video backgrounds
16. carbon.now.sh — turn code into art
17. ray.so — beautiful code screenshots
18. shots.so — free product mockups
19. smartmockups.com — mockups without Photoshop
20. haveibeenpwned.com — check if you were hacked
21. virustotal.com — scan any file for malware
22. privnote.com — send self-destructing messages
23. temp-mail.org — disposable email instantly
24. file.io — share files that auto-delete
25. archive.ph — save any webpage forever
26. similarsites.com — find any site's alternatives
27. radio.garden — listen to any radio worldwide
28. everynoise.com — explore every music genre
29. tunefind.com — find songs from any show
30. musicforprogramming.net — music to focus with
31. mynoise.net — custom focus soundscapes
32. coffitivity.com — cafe sounds for productivity
33. elicit.org — AI research paper assistant
34. consensus.app — search what science agrees on
35. connectedpapers.com — map research visually
36. semanticscholar.org — free academic search
37. scispace.com — understand any research paper
38. summarize.tech — summarize any YouTube video
39. phind.com — AI search for developers
40. regex101.com — test any regex instantly
41. codebeautify.org — format any code cleanly
42. jsonformatter.org — read JSON like a human
43. explainshell.com — understand terminal commands
44. raindrop.io — bookmark manager that works
45. downdetector.com — check if any site is down
46. tineye.com — reverse image search
47. fast.com — check your internet speed
48. smallpdf.com — edit PDFs free
49. ilovepdf.com — merge and split PDFs
50. 10minutemail.com — temp email in seconds

The internet is bigger than Google shows you. Most people never leave the first page.
Dh🅰️rmesh Patel retweeted
Swati Gupta
Swati Gupta@hrswatigupta·
🚨BREAKING: Claude can now write your entire job application like a top recruiter. Here are 10 prompts that turn a job description into a tailored CV, cover letter, and interview prep guide in under 5 minutes. (Save this)
Dh🅰️rmesh Patel retweeted
Avi Chawla
Avi Chawla@_avichawla·
Claude vs. Claude Code vs. Cowork.

Anthropic offers three distinct ways to interact with Claude, and each one targets a fundamentally different workflow. Think of it as: Chat for thinking, Code for building, and Cowork for doing. Here's a quick breakdown:

1️⃣ Claude Chat

This is the conversational AI assistant most people already know. You type a prompt, Claude responds, and you iterate together.

- Turn rough ideas into structured plans through conversation
- Write emails, reports, essays, and long-form content
- Research and summarize complex topics in minutes
- Analyze documents, PDFs, and images
- Build interactive prototypes through Artifacts

The key here is that everything happens through conversation. You're thinking with Claude, not delegating work to it. It's available on every device, has a free tier, and supports persistent memory across sessions. The tradeoff is that it has no direct access to your local files (upload only), and it can't generate raster images natively.

2️⃣ Claude Code

This is a terminal-native coding agent. You describe what you want in plain English, and Claude reads your codebase, writes code, runs tests, fixes errors, and ships the result.

- Build and debug entire features across the full codebase
- Write, run, and fix tests automatically
- Manage git workflows and create pull requests
- Spawn multiple parallel agents working on different parts of a task simultaneously

It handles the full development cycle end to end, from planning to execution to testing. With the CLAUDE.md configuration file, you can teach it your project's conventions, patterns, and constraints so it writes code the way your team expects. The tradeoff is a steeper learning curve compared to Chat, and token costs can add up during heavy sessions.

3️⃣ Claude Cowork

This is the newest addition. Anthropic describes it as Claude Code for the rest of your work. It's an agentic desktop assistant that automates file management and repetitive tasks through a GUI.
You describe an outcome, and Claude plans, executes, and delivers finished work: formatted documents, organized file systems, spreadsheets with working formulas, and synthesized research.

- Direct local file access and editing (no upload/download cycle)
- Schedule recurring tasks automatically
- Assign tasks remotely via Dispatch from your phone
- Computer Use lets Claude control your screen directly

It runs inside a sandboxed virtual machine on your computer, so Claude can only access folders you explicitly grant. You don't need to know how to code to use it. The tradeoff is that your computer must stay awake for tasks to run, and it's still in research preview.

Here's how to think about choosing between them:

→ If you need to think through a problem or get writing/research help, use Chat
→ If you're building software and want an autonomous coding partner, use Code
→ If you have a clearly defined deliverable that involves local files and desktop workflows, use Cowork

All three are included in the same subscription starting at $20/month, which makes it one of the highest-leverage subscriptions in productivity software right now. I've put together a visual below that maps the workflow of each product side by side. Also, if you want to go deeper into Claude Code specifically, my co-founder wrote a detailed article covering the anatomy of the .claude/ folder, a complete guide to CLAUDE.md, custom commands, skills, agents, and permissions, and how to set them all up properly. Read it below.
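To make the CLAUDE.md idea concrete, here is a hypothetical sketch of what such a file might contain. The project commands and conventions below are invented for illustration; a real file would list your repository's actual tooling:

```markdown
# CLAUDE.md — project conventions for Claude Code (illustrative example)

## Commands
- Run tests: `npm test`        (hypothetical; use your real test runner)
- Lint and format: `npm run lint`

## Conventions
- TypeScript strict mode everywhere; avoid `any`
- Prefer small pure functions; colocate tests next to source files

## Constraints
- Never commit directly to `main`; open a pull request instead
- Do not modify anything under `vendor/`
```

Because Claude Code reads this file at the start of a session, the conventions apply automatically without being repeated in every prompt.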
Akshay 🚀@akshay_pachaar

x.com/i/article/2034…

Dh🅰️rmesh Patel retweeted
Elias Al
Elias Al@iam_elias1·
Anthropic is paying $3,850 a week to people with no AI experience. No PhD required. No published papers. No prior research background. Just a strong technical mind and a genuine interest in making AI safe.

This is the Anthropic Fellows Program, and it is one of the most underrated opportunities in technology right now. Here is exactly what it is.

The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent by providing funding and mentorship to promising technical people regardless of previous experience. Fellows work for 4 months on empirical research questions aligned with Anthropic's overall research priorities, with the aim of producing public outputs such as a paper. Four months. Full-time. Paid. Mentored by the researchers building the world's most advanced AI.

And the results from the first cohort were not small. Fellows developed agents that identified $4.6 million in blockchain smart-contract vulnerabilities and discovered two novel zero-day exploits, demonstrating that profitable autonomous exploitation is now technically feasible. A year prior, an Anthropic fellow developed a method for rapid response to new ASL-3 jailbreaks: techniques that block entire classes of high-risk jailbreaks after observing only a handful of attacks. This work became a key component of Anthropic's ASL-3 deployment safeguards. Other fellows published the subliminal learning paper, research proving AI models transmit behavioral traits through unrelated data, which landed in Nature. Others produced the agentic misalignment research showing frontier models resort to blackmail when facing replacement. Others open-sourced attribution-graph tools that let researchers trace the internal thoughts of large language models.

Over 80% of fellows produced papers. Over 40% subsequently joined Anthropic full-time. 80% published. 40% hired. From a program that does not require any prior AI safety experience to enter.
Here is what the program looks like in practice. Anthropic mentors pitch their project ideas to fellows, who choose and shape their project in close collaboration with their mentors. You are not assigned busywork. You are not a research assistant. You own the project. You work alongside the people who built Claude, who designed its safety systems, who published the papers that define the field.

The stipend is $3,850 USD per week, approximately $61,600 for the full 4 months, with access to a compute budget of approximately $10,000 per fellow per month for running experiments.

Here is what the 2026 program covers. Research areas include scalable oversight, adversarial robustness and AI control, model organisms, mechanistic interpretability, AI security, model welfare, economics and policy, and reinforcement learning. Something for every technical background, not just ML engineers. Successful fellows have come from physics, mathematics, computer science, and cybersecurity. You do not need a PhD, prior ML experience, or published papers. The one requirement: work authorization in the US, UK, or Canada. Anthropic does not sponsor visas for fellows.

Here is the timeline you need to know. The next cohort begins July 20, 2026. Applications are reviewed on a rolling basis; earlier applications get more consideration. The process includes an initial application and reference check, technical assessments, interviews, and a research discussion. Applicants are encouraged to apply even if they do not meet every listed qualification. The program values potential, motivation, and research curiosity over rigid credential requirements.

This is the rarest kind of opportunity in technology. A company at the frontier of AI, one valued at over $900 billion, offering outsiders direct access to its research infrastructure, its mentors, and its most important open problems. Paying them generously to do it. And then hiring 40% of them afterward.
Most people who want to work on AI safety spend years trying to publish papers, get into the right PhD program, and find a way in. The Fellows Program is the door they did not know existed. It is open right now.
Dh🅰️rmesh Patel retweeted
Gaetano @crux_capital_
Alright, $PENG folk. @pennycheck put me onto this one early last week. There is so much to unpack. Quite an interesting company! I laid out all my thoughts here: cruxcapitalgroup.substack.com/p/peng-a-new-x… I'll throw a summary up on X later today or tomorrow.
Dh🅰️rmesh Patel @pateltexas
@Agrippa_Inv what say you
𝒰𝓂𝒷𝒾𝓈𝒶𝓂@Umbisam

I strongly believe in hybrid models when scaling into the unknowns ... ie this new AI stratosphere ... at times, particularly if/when you need a ton of capital, you may want to take note of at least some investors' preferences.

The market is telling us, quite loudly, that colo is an appreciated model, as it gives the (perception of) stability for much longer than CSP contracts (10/15 years). That means higher market cap, easier access to capital and, above all, lower dilution for current shareholders.

Going all in on GPU-as-a-service seems more ideological than first-principled (at least for me & several investors I regularly talk to) ... it's a bet without a parachute ... closer to reasoned gambling than fundamental investing ... it may turn out to be a huge win, or possibly a dramatic defeat ... 90/10 ... 50/50 ... 10/90 ... nobody knows the odds nowadays ... we'll know in two years maybe.

I understand NBIS & CRWV are seemingly winners in the GPU-only model ... sort of supporting the idea that a vertically integrated GPU-as-a-service provider will necessarily win even more ... but that's a wrong take imo!! They seem to win because they are 1yr ahead in the cycle (ahead by leasing DCs here & there instead of building them out/owning) ... their anticipated cash flows are winning (as happened with MARA & RIOT back in the day, in mining) ... but the business model, beside initial anticipation, is structurally weak, and their current market cap proves nothing (they had better raise at these levels, in fact).

All in all ... GPU-as-a-service only is an unproven model, and colo is a fantastic tool for smoothing/balancing out upside & downside. No need to prefer one over the other ... just do both. Pretty sure IREN would trade at $100/sh if it had signed a colo deal with MSFT. Or if it were to sign a similar-size colo-only deal with any top HS in coming weeks/months.

My take, in a world of unknowns and having secured a ton of GWs, is to present the market, aspirationally, a 50/50 colo/GPU model. Taking the most out of either. Going all in GPU-only is limiting your many options. For no reason.

Dh🅰️rmesh Patel retweeted
The Trend Sage @JonkooTrades
The part most people are missing on $PENG: the MemoryAI server isn't a science project. It plugs directly into $NVDA NVIDIA Dynamo, NVIDIA's own framework for KV cache offloading. That means it drops into any existing NVIDIA inference cluster. No software rewrite. No vendor lock-in fight. It just works.

The performance gap closes the case:
→ 10x faster than NVMe-based KV caching
→ 3.8x faster than RDMA approaches
→ Speeds at a fraction of HBM cost per TB

Customers are already moving. In Q2 FY26, $PENG added 5 new AI/HPC customers, including a Tier 1 financial institution deploying MemoryAI in production. They raised full-year revenue and EPS guidance on the back of it.

The competitive landscape tells the story:
- XConn + MemVerge: SC25 demo, not shipping
- SK Hynix: showcased pooling, not commercialized
- $ALAB Astera Labs: ships the controller, not the integrated server

$PENG is the only company with a production-ready, NVIDIA-compatible CXL KV cache server right now. First mover in a category the Street hasn't priced in yet.

Layer in:
- SK Hynix strategic collaboration (Jan 2025) for leading-edge DRAM
- 3.3B GPU runtime hours feeding the OriginAI reference architecture
- ~$32M cash inflow from the $MRVL / Celestial AI deal
- Photonic Memory Appliance in active development

The market is fixated on compute. The next leg of AI infrastructure is memory disaggregation. $PENG is the only public pure-play shipping the product today. Asymmetric setup. @aleabitoreddit @ParadisLabs @BlackPantherCap @TheStockDon
The Trend Sage@JonkooTrades

Is anyone even looking at $PENG right now? Memory is the variable that determines GPU utilization. When memory is the bottleneck, GPUs sit idle. When memory scales, cost per token drops and inference demand expands. $PENG Penguin is building exactly that layer.

What they launched: In March 2026, Penguin released the MemoryAI CXL-based KV cache server, the industry's first production-ready server built on CXL memory disaggregation. It gives inference workloads 11 TB of attached memory beyond what the GPU's HBM can hold. This directly solves the memory wall: the point where large-model inference exceeds onboard GPU memory and performance collapses.

The photonic angle: Penguin is also developing a Photonic Memory Appliance (PMA) using optical interconnects to extend memory bandwidth at scale. They were early investors in Celestial AI, acquired by $MRVL Marvell for billions.

This is a pure-play AI infrastructure company transitioning to high-margin products. Same space as $SMCI, significantly less covered. The memory wall is real. $PENG is building the solution.
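The "memory wall" the thread describes is easy to quantify with the standard KV-cache sizing formula for transformer inference. A minimal sketch, where the model dimensions (layers, KV heads, head size) are illustrative assumptions and not $PENG or NVIDIA figures:

```python
# Back-of-envelope KV-cache sizing for transformer inference.
# Model dimensions below are illustrative assumptions, not vendor figures.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    # 2x accounts for the separate key and value tensors stored per layer;
    # dtype_bytes=2 assumes fp16/bf16 cache entries.
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

# A 70B-class dense model: e.g. 80 layers, 8 KV heads (GQA), head_dim 128.
per_token = kv_cache_bytes(80, 8, 128, 1, 1)          # bytes per cached token
cache_128k = kv_cache_bytes(80, 8, 128, 128_000, 32)  # 32 concurrent 128k contexts

print(f"KV bytes per token:  {per_token / 1e3:.0f} KB")
print(f"32 x 128k contexts:  {cache_128k / 1e12:.1f} TB")
```

Under these assumptions the cache costs about 328 KB per token and roughly 1.3 TB for 32 concurrent 128k-token contexts, far beyond the 80-192 GB of HBM on a single GPU, which is why offloading the KV cache to a large attached-memory tier matters for inference throughput.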

9 Ventures @ThematicTrader
Even more excited about the portfolio. Portfolio Update:
1. $VPG ($45) -> New ATH & Monster Quarter
2. $PENG ($28.4) -> Added again today
3. $FCEL ($9.15) -> Any data center order news and this is a $30+ stock
4. $NBIS ($111.5) -> ER tomorrow
5. $HIMX ($15.4)
6. $FLNC ($16.7)
7. $SHMD ($6.6)
Focus List: $VECO $POWI $KLIC $OUST $CEVA $NVTS $VICR $SYNA $INDI $HLIT $CRCL $TSEM
9 Ventures@ThematicTrader

STILL incredibly excited about the portfolio. Portfolio Update:
1. $VPG ($45) -> NEW ATH and ER 5/12!
2. $FCEL ($9.15) -> On verge of breakout. Any data center order news and this is a $30+ stock
3. $PENG ($28.4) -> Will be the TML
4. $NBIS ($111.5) -> ER next week
5. $HIMX ($15.4) -> Monster!
6. $FLNC ($16.7) -> Monster!
7. $SHMD ($6.6) -> tiny
Focus List: $VECO $POWI $KLIC $OUST $CEVA $NVTS $VICR $SYNA $INDI $HLIT

Dh🅰️rmesh Patel retweeted
Michael Sikand 🦑 @michaelsikand
I just bought $2M of a brand new stock after it crashed 7% today. $PENG is now a 20% position in my Asymmetrical Bets fund (+89% YTD) on @joinautopilot, followed by $10M. Credit goes to legend @pennycheck for being the first to call this stock. With Penguin Solutions I now own the winner-agnostic integrator behind the memory, CPU, and photonics supercycle at under 17x forward earnings.

1) The memory business alone is worth the market cap. Penguin's Integrated Memory biz = they take raw DRAM chips from manufacturers like SK Hynix and package them into custom memory modules built to spec for AI servers, telco gear, and enterprise systems. It's now 50% of revenue, did $172M last quarter, growing 63% YoY, ~$800M annualized. Apply a 3x price-to-sales on just this unit and you're already above what $PENG is worth today.

2) Play the CPU supercycle. CPU:GPU ratios are going from 1:8 to 1:1 as agentic AI takes over. $PENG is the lead integration partner for AMD EPYC and Intel Xeon. Every new socket = more memory, cooling, and integration revenue baked in.

3) The AI Factory platform is real. OriginAI is their turnkey deployment from 256 to 16,000+ GPU clusters for sovereign and enterprise customers. 85,000 GPUs already deployed. UBS says non-hyperscaler buyers (sovereigns, neoclouds, enterprises) capture 48% of AI infra spend in 2026. Hyperscalers build in house. But these other players ALL need Penguin.

4) Photonics is the unpriced asymmetric bet. $PENG called photonics early and was an early investor in Celestial AI. $MRVL acquired it for $3.25B in December. Now Penguin is building the Photonic Memory Appliance, making it the only public play on this kind of wild photonics tech. The PMA is basically a box that uses light to link memory across a bunch of servers so the entire AI cluster can share one giant pool of memory like it's one big computer. Marvell guides Celestial to $1B revenue in 2029. If Penguin captures even low double digits of that stream, that could be 9 figs of unpriced networking revenue on $PENG's highest-margin, most defensible IP.

5) People/partners are cracked. Chairman of $PENG is ALSO Chairman of $LITE. AMD CTO Mark Papermaster sits on the board. SK Telecom dropped $200M as a strategic investor. New CPO Ian Colle ran AI infra at AWS.

6) Risks are real but manageable. Penguin's AI cluster business is lumpy, and one big customer slipping a quarter can tank earnings (already happened in Q2, down 42% YoY). The memory shortage is a headwind, as high DRAM prices are slowing customer orders and hitting Penguin's gross margins. The photonics upside is a 2027+ story, so if it slips, the stock can sit dead money for a while. Because the multiple is still so cheap, I overall see limited downside compared to the upside if their photonics option can be quantified against $MRVL, where I could see Penguin trading closer to a 30x+ forward PE.

Surf's up. Full thesis linked on Substack below.
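The sum-of-the-parts math in point 1 can be reproduced with simple arithmetic. All inputs here are the thread's own claims (quarterly revenue, growth rate, 3x multiple), not audited figures:

```python
# Reproducing the thread's back-of-envelope valuation of the memory unit.
# Inputs are the thread's claims, not verified financials.

q_rev = 172e6        # claimed quarterly revenue of the memory segment
yoy_growth = 0.63    # claimed year-over-year growth
ps_multiple = 3.0    # price-to-sales multiple the thread applies

trailing_run_rate = 4 * q_rev                            # simple x4 annualization
forward_estimate = trailing_run_rate * (1 + yoy_growth)  # if YoY growth holds

print(f"Trailing run-rate:   ${trailing_run_rate / 1e9:.2f}B")
print(f"3x P/S on run-rate:  ${ps_multiple * trailing_run_rate / 1e9:.2f}B")
```

Note that a plain x4 annualization of $172M gives $688M, somewhat below the thread's "~$800M annualized," so that figure presumably bakes in continued sequential growth; growing the run-rate at the claimed 63% YoY would put the forward figure near $1.1B.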
Dh🅰️rmesh Patel retweeted
Leo Edge @LeoCapital_01
$ASTS just quietly completed one of the hardest things a hardware company can do. They built their own custom ASIC chip. From scratch. And it's about to go into production satellites. Here's why this is a bigger deal than the market realizes. 🧵
Dh🅰️rmesh Patel retweeted
Pepe Invests @pepemoonboy
The amount of alpha in the FinX community is astonishing. If you follow the right people, it's about as close as you can get to a cheat code for making money. Here are a few of my favorite accounts right now for straight alpha:
- @aleabitoreddit
- @ParadisLabs
- @stocktalkweekly
- @IncomeSharks
- @DeepValueBagger
- @jrouldz
- @zephyr_z9
- @Sandeman52
- @michaelsikand
- @KawzInvests
- @Kaizen_Investor
- @TheProfInvestor
- @TheRonnieVShow
- @Remzztrades
- @Gubloinvestor
- @crux_capital_
- @ZaStocks
- @daniel_koss
- @WheelieInvestor
- @Mr_Derivatives
- @mvcinvesting
- @aristotlegrowth
- @Frenchie_
- @investingluc
There are a few that I'm leaving out; I'll make a separate post for them. Didn't want to make the list too long, to avoid diluting the message. Which are your top 3?
Dh🅰️rmesh Patel retweeted
Charlie Hills @charliejhills
Anthropic just shipped Claude's 10 finance agents. Available in Cowork, Code, API, and Office. How to install in 4 steps.

1. Install in Cowork.
- Open Settings → Plugins → Add plugin.
- Paste: github.com/anthropics/fin…
- Pick the agents you want from the list.

2. Install for Microsoft Office.
- Open the GitHub link (above).
- Copy the install command into Claude Code.
- Run /claude-for-msft-365-install:setup to finish.

3. Connect your data sources.
- There are 17 data partners at launch.
- Add the ones you pay for as connectors.

4. Pick one. Run today.
- Map one agent to a job on your plate this week.
- Paste the prompt. Edit it. Run it.

Try these prompts:
✦ pitch-agent: Pulls comps, precedents and LBO numbers into a branded pitch deck. "Draft a 12-slide pitchbook for our acquisition of [TargetCo]."
✦ meeting-prep-agent: Pulls past notes, recent news and talking points into a one-page brief. "Build a one-page brief for tomorrow's 10am with [Client]."
✦ earnings-reviewer: Reads earnings reports and flags the surprises and risky wording. "Summarise [Ticker]'s Q1 earnings and flag every surprise vs forecast."
✦ model-builder: Builds a working financial model in your spreadsheet from one prompt. "Build a valuation model for [TargetCo] vs six similar companies."
✦ market-researcher: Pulls sector trends, competitor moves and pricing into one memo. "Write a 1,000-word memo on European fintech lenders."
✦ valuation-reviewer: Audits a valuation model and challenges every assumption inside it. "Review the valuation model on [sheet]. Challenge every assumption."
✦ gl-reconciler: Matches your books against bank statements and flags any mismatches. "Reconcile our bank books against last month's statement."
✦ month-end-closer: Runs your monthly accounts checklist end to end and flags any issues. "Run the April month-end close on our standard checklist."
✦ statement-auditor: Audits the books for errors and control gaps before they ship. "Audit April's P&L and balance sheet against our books."
✦ kyc-screener: Vets new clients against watchlists and ownership records. "Run a background check on [NewClient]. Pull ownership records."

Free Claude playbooks → charliehills.substack.com
Repost ♻️ to help someone in your network.