Deep Learner

3K posts

@_vision2020_

Angel Investor | Alternative Income Freak | Tech Veteran

Joined March 2021
955 Following · 119 Followers
Deep Learner retweeted
Hasan Toor @hasantoxr
Best GitHub repos for Claude Code that will 10x your next project:

1. Superpowers github.com/obra/superpowe…
2. Awesome Claude Code github.com/hesreallyhim/a…
3. GSD (Get Shit Done) github.com/gsd-build/get-…
4. Claude Mem github.com/thedotmack/cla…
5. UI UX Pro Max github.com/nextlevelbuild…
6. n8n-MCP github.com/czlonkowski/n8…
7. Obsidian Skills github.com/kepano/obsidia…
8. LightRAG github.com/hkuds/lightrag
9. Everything Claude Code github.com/affaan-m/every…
[tweet media ×2]
107 replies · 727 reposts · 4.8K likes · 381.1K views
Deep Learner retweeted
Aakash Gupta @aakashgupta
There are 6 levels of making Claude Code run autonomously, and most people are stuck on Level 1.

Level 1: Kill the permission prompts. Run claude --dangerously-skip-permissions. One flag. Now it stops asking "Can I edit this file?" every 30 seconds while you're checking Slack.

Level 2: Context window management. Claude Code now supports 1M tokens. Use /clear between tasks. Run /compact at 60% usage instead of waiting for auto-compaction to fire at 90%, when the model is already forgetting your instructions.

Level 3: Subagents. The reason it stops at 15 minutes: everything runs in one context window. Subagents run in separate contexts. Build a looping todo command, and each task executes in its own window. Builds, tests, and git operations never touch the main conversation. 2+ hours autonomous with zero intervention.

Level 4: Ralph Wiggum loop. Official Anthropic plugin. Claude works, tries to exit, a Stop hook blocks the exit and re-feeds the same prompt. Each iteration sees the modified files and git history from previous runs. One developer ran 27 hours straight, 84 tasks completed. Geoffrey Huntley ran one for three months and built a programming language with a working LLVM compiler.

Level 5: Karpathy's AutoResearch. On March 7, Karpathy pushed a 630-line script to GitHub and went to sleep. Woke up to 100+ ML experiments completed overnight. 25K stars in five days. The difference from Ralph: structured eval loops. Define a metric, run, measure, analyze failures, improve, repeat. One Claude Code port took model accuracy from 0.44 to 0.78 R² across 22 autonomous experiments.

Level 6: VPS + OpenClaw for 24/7. Your laptop lid closing kills everything. Run Claude Code on a VPS inside tmux (a minimal sketch follows below). Detach, close your laptop, come back tomorrow to a finished diff. OpenClaw (247K GitHub stars) takes it further: a persistent gateway connecting LLMs to your real tools, running 24/7 across messaging, email, git, and calendars. Jensen Huang at GTC called it "probably the most important release of software ever."

The unlock at every level is the same: give Claude a way to verify its own work.
Joseph Garvin @joseph_h_garvin

Claude code rarely runs for longer than 15m without stopping and asking for input from me. How do all these stories of people letting agents run overnight work? Custom harnesses? Yelling at Claude in all caps to keep going no matter what?
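A minimal sketch of the Level 6 tmux setup described above, assuming a Linux VPS with tmux and the Claude Code CLI already installed; the session name, prompt text, and TODO.md task file are illustrative, and --dangerously-skip-permissions belongs only in an isolated environment:

# On the VPS: start a detached tmux session so the run survives the SSH
# connection dropping and the laptop lid closing
tmux new-session -d -s claude-run

# Launch Claude Code inside it with permission prompts disabled (Level 1),
# seeded with an initial task prompt (TODO.md is a hypothetical task list)
tmux send-keys -t claude-run \
  'claude --dangerously-skip-permissions "work through TODO.md one task at a time, running the tests after each change"' Enter

# Next morning, from a fresh SSH session: reattach and review the diff
tmux attach -t claude-run

The same idea runs through Levels 3-6: keep the process alive somewhere that does not depend on your interactive session, and give it a feedback signal (tests, metrics, git history) so it can verify each iteration.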

39 replies · 59 reposts · 915 likes · 116K views
Deep Learner retweeted
Akshay 🚀 @akshay_pachaar
How to set up your Claude Code project? TL;DR: Most developers skip the setup and just start prompting. That's the mistake.

A proper Claude Code project lives inside a .claude/ folder. Start with CLAUDE.md as Claude's instruction manual. Split it into a rules/ folder as it grows. Add commands/ for repeatable workflows, skills/ for context-triggered automation, and agents/ for isolated subagents. Lock down permissions in settings.json.

There are two .claude/ folders: one committed with your repo, and one global at ~/.claude/ for personal preferences and auto-memory across projects.

The .claude/ folder is infrastructure. Treat it like one.

The article below is a complete guide to CLAUDE.md, custom commands, skills, agents, and permissions, and how to set them up properly. A minimal scaffold sketch follows after the link.
[tweet media]
Akshay 🚀 @akshay_pachaar

x.com/i/article/2034…
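A minimal sketch of that scaffold, assuming the standard Claude Code layout; the CLAUDE.md contents and the allow/deny entries below are illustrative examples, not a complete policy:

# Project-level .claude/ folder, committed with the repo
mkdir -p .claude/rules .claude/commands .claude/skills .claude/agents

# CLAUDE.md (conventionally at the repo root) is Claude's instruction manual
printf '# Project conventions\n- Run the test suite before committing\n' > CLAUDE.md

# Lock down permissions; allow/deny entries use Claude Code's
# Tool(pattern) permission syntax
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": ["Bash(npm run test:*)", "Edit(src/**)"],
    "deny": ["Bash(curl:*)", "Read(./.env)"]
  }
}
EOF

# The second, global folder (~/.claude/) holds personal preferences and
# auto-memory that apply across every project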

95 replies · 737 reposts · 6.4K likes · 880.4K views
Deep Learner retweeted
rubicon59 @rubicon59
Actually, AEC copper is replacing optical fiber at an increasingly rapid rate for shorter scale-out distances in data centers, because it draws half the power and is 1000x more reliable. (The optics shilling is at a fever pitch now.)
CK Capital @CKCapitalxx

The AI data center has a problem nobody is talking about enough. The GPU is no longer the bottleneck. The wire is. Copper cannot move data fast enough to keep up with what Nvidia is building. It consumes too much power. It generates too much heat. As AI clusters get denser and models get larger, the physical limits of copper become a hard ceiling on what AI can do.

The solution is light. Optical interconnects transmit data faster, cooler, and at a fraction of the power consumption of copper. Co-packaged optics, where the laser is packaged directly onto the GPU chip itself, reduces power consumption in AI clusters by up to 40%. That market is growing from under $400 million today to nearly $3 billion by 2032.

Two companies own this transition.

$LITE controls roughly 50-60% of the specialized laser chip market that powers these systems. Q2 revenue just came in at $665 million, up 65% year over year. Q3 guidance is $780-830 million, implying over 85% growth. Their backlog of optical circuit switches is sold out through end of 2027. They are targeting a $2 billion quarterly revenue run rate within two years. The CEO said they are at the starting line. Rosenblatt price target $900.

$COHR just reported data center revenue up 36% year over year. A 20-year relationship with Nvidia just got formalized into a full strategic partnership covering next-generation silicon photonics, ultra-high power lasers, and priority capacity rights. Zero sell ratings on the Street. Rosenblatt price target $375.

Both just got added to the S&P 500. Every index fund on earth is now a permanent forced buyer. Mizuho named $LITE a top AI pick for 2026 alongside Nvidia and Broadcom.

The GPU era gave us Nvidia. The photonics era is giving us $LITE and $COHR.

14 replies · 17 reposts · 260 likes · 59.5K views
Deep Learner retweeted
Melissa Chen @MsMelChen
Iconic moments in Trump diplomacy. You think he won't really go there, but then he does.
[tweet media ×4]
137 replies · 1.1K reposts · 7.9K likes · 525.5K views
Deep Learner retweeted
outside five sigma @jwt0625
at this point maybe we should just do immersion cooling for everything (liquid cooling all the way into the pluggable)
[tweet media]
6 replies · 6 reposts · 73 likes · 5.1K views
Deep Learner retweeted
Science girl @sciencegirl
Bringing the elongated skulls of Peru to life, from the ancient Paracas culture and their practice of cranial deformation. 📹 historyrevivedofficial
521 replies · 650 reposts · 5.8K likes · 3.1M views
Deep Learner retweeted
Scott Stevenson @scottastevenson
Google disrupting Figma is unexpected
Google Labs @GoogleLabs

Introducing the new @stitchbygoogle, Google's vibe design platform that transforms natural language into high-fidelity designs in one seamless flow.

🎨 Create with a smarter design agent: Describe a new business concept or app vision and see it take shape on an AI-native canvas.
⚡️ Iterate quickly: Stitch screens together into interactive prototypes and manage your brand with a portable design system.
🎤 Collaborate with voice: Use hands-free voice interactions to update layouts and explore new variations in real time.

Try it now (Age 18+ only. Currently available in English and in countries where Gemini is supported.) → stitch.withgoogle.com

137 replies · 317 reposts · 5.5K likes · 1.6M views
Deep Learner retweeted
QF Research @ResearchQf
1) $LITE is up $56 and $132 since yesterday morning. LITE presented during market hours at OFC yesterday! I may be almost 80% there on the CPO scale-up opportunity through at least Feynman. There has been a bunch of new info in a day. Here are 2 key LITE slides.

Phase 0. Again, scale-out is well understood near term. "Multi-hundred" million 1H27 alone. Quantum-X and Spectrum-X CPO switch build data later.

Phase 1. That inter-rack NVL576 scale-up I've been referring to. 3x to 4x CPO links vs Phase 0.

Phase 2. 3x to 4x vs Phase 1. NVL1176 also includes longer-distance intra-rack due to those physical copper bandwidth-length limits (see 2nd slide). Phase 2 alone is causing those huge $NVDA (and now other customers) demand signals for LITE, $COHR, and a bunch of other suppliers.

100% optical scale-up is inevitable. 3.2T, 6.4T+. Will discuss the various resulting photonics opportunities on a top-down and bottom-up basis later: ASPs, units, high-power (e.g. 400 mW) CW lasers, etc.
[tweet media ×2]
QF Research @ResearchQf

I might be 70% there in understanding the CPO scale-up opportunity through Feynman, but some technical clarifications plus coming supplier datapoints should take that to 80-90%? Inter-rack optical scale-up for NVL576, as mentioned earlier, appears confirmed for Oberon. But that's only a first step. Scale-out near term is well understood. Orders received to date by $LITE or $COHR and their high-level TAM statements seem roughly consistent. Jensen is often imprecise during presentations, but that often leads to opportunities. This is one of many AI technologies where fortunes could be made or lost over the next few years.

6 replies · 15 reposts · 139 likes · 20.9K views
Deep Learner retweeted
TheValueist @TheValueist
This Vera CPU presentation was the best of the day, as it crystallizes what to expect for GAI over the next 24-36 months. My biggest takeaway from GTC Day 2: the market is grossly underestimating the massive CPU demand that will be generated by agentic GAI and by LLMs acting as orchestrators marshaling CPU compute resources for a countless variety of tasks. $NVDA $AMD $INTC $AVGO
[tweet media ×4]
TheValueist @TheValueist

$NVDA DESK NOTE - NVIDIA Vera CPU: The Amdahl Argument for a Purpose-Built AI Factory CPU atlaspeakresearch.com/report/1683d7

Bottom Line: Vera is best understood as a purpose-built AI-factory CPU designed to compress the serial fraction of reasoning, tool-use, and reinforcement-learning workflows so that GPU capital does not sit idle behind CPU-bound orchestration. Its differentiation comes from five interacting features: unusually high single-thread ambition for control-heavy code, unusually high memory bandwidth per active core, deterministic full-socket behavior, coherent CPU-GPU memory via NVLink-C2C, and rack-scale power efficiency. Against that, AMD and Intel remain stronger on universality, x86 software inertia, memory capacity, and standards-based flexibility. Vera therefore looks less like a broad x86 killer and more like a specialized control-plane and environment processor that becomes highly compelling precisely where Amdahl's Law makes the CPU impossible to ignore.
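For reference, Amdahl's Law is the formal version of the note's serial-fraction argument: if a fraction f of a workflow is serial (CPU-bound orchestration) and the remaining 1 - f is accelerated by a factor s (the GPUs), the overall speedup is

\[
S(s) = \frac{1}{f + \frac{1-f}{s}}, \qquad \lim_{s \to \infty} S(s) = \frac{1}{f}
\]

Even infinitely fast GPUs cap out at 1/f: with f = 5% of the workflow serial, the whole system tops out at 20x, and shrinking f becomes the only remaining lever, which is exactly the job of a faster control-plane CPU.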

6 replies · 20 reposts · 104 likes · 40.6K views
Deep Learner retweeted
Serenity @aleabitoreddit
The Photonics Supercycle is here. $NVDA is spearheading the next leap into CPO & Silicon Photonics. And we're only near the inflection point, with chokepoints in the supply chains like Soitec ($SOI) or Sivers ($SIVE).

"NVIDIA's update on the Spectrum-X switch with co-packaged optics is an important moment, confirming that silicon photonics is central to next-generation AI infrastructure. Despite a long-standing reliance on copper-based interconnects for scale-up systems, the company is now placing photonics at the core of its future platforms, including Vera Rubin Ultra. This transition is expected to support increasingly complex configurations, such as NVL576 and future architectures like Kyber NVL1152."

"Nvidia is already in production with Spectrum-X Photonics, which is a co-packaged optics (CPO) Ethernet switch. The company also announced the Quantum-X Photonics InfiniBand switch, which delivers up to 800 Tb per second of scale-out throughput using its proprietary scale-out interconnect."

Although copper is important, it alone can no longer handle AI-scale demands. NVLink8 CPO is probably the biggest signal, with $NVDA also bringing silicon photonics into its scale-up NVLink interconnect, not just scale-out networking. CPO for scale-out is shipping now/2026; CPO for NVLink scale-up arrives soon.

The paradigm has shifted, and the bottleneck of AI infrastructure is now officially being solved by light. It's only a matter of time before markets find these chokepoints in the supply chains. Then price them in.
[tweet media ×2]
Serenity @aleabitoreddit

The upcoming CPO / Silicon Photonics Bottleneck Cheat Sheet:

$SIVE, Sumitomo, $LITE, $COHR, $AVGO, $MTSI, $AAOI - Light Source (CW DFB Lasers)
$TSEM, $GFS, $UMC, $TSM, $INTC - SiPh Foundry
$NOK, $CIEN, $CSCO, $COHR - DCO
$HIMX, FOCI (3363.TWO) - Micro-lens + Fiber Arrays
$POET - Optical Interposers
$SOI, $AXTI, Shin-Etsu - Substrates
$FN, $ASX, Innolight, Eoptolink - Optical Packaging and Assembly
$MTSI, $SMTC, $MRVL, $MXL - Analog/Mixed-Signal ICs
$LWLG - Speculative Modulator Materials
$GLW, $APH, $TEL, $FIT, Fujikura - Connectors and Fibers
$FORM, $KEYS, $VIAV, $AEHR - Test & Measurement
$BESI, $SMHN, $ONTO, $CAMT - Advanced Packaging & Hybrid Bonding

Many are private companies: Lightmatter, Ayar, Ranovus, and others.

Now... Everyone is asking... How do you profit?

If you look at the forecast for CPO TAM, it's a straight line up, and next year is the inflection point for CPO mass deployment. The alpha is capturing the rotation: from the current EML bottlenecks ($LITE, $COHR type) to the SiPh / CW DFB architectural winners for CPO. The highest upside potential is in the names that aren't included in current cycles but that are in the next. Companies like $SOI, $SIVE, or $AEHR are perfect examples. Ride the current pluggable bottleneck with names like $AAOI. But the alpha is frontrunning institutions with the next CPO bottleneck. The capital rotation is inevitable.

48 replies · 49 reposts · 581 likes · 97.9K views
Deep Learner @_vision2020_
@insane_analyst The slide might be mislabeled. S11 means return loss. That would be an awful performance for the right plot.
2 replies · 0 reposts · 7 likes · 532 views
Irrational Analysis @insane_analyst
This is what good frequency response looks like.
[tweet media]
8 replies · 4 reposts · 88 likes · 12.7K views
Deep Learner retweeted
Giuliano Liguori @ingliguori
8 specialized AI model types 👇

LLM → text generation
LCM → semantic reasoning
LAM → action-oriented agents
MoE → expert routing
VLM → vision + language
SLM → lightweight edge models
MLM → masked token learning
SAM → image segmentation

AI is moving from "one big model" to specialized architectures. #AI #LLM #MoE #VLM #MachineLearning
[tweet media]
33 replies · 454 reposts · 1.9K likes · 52.1K views
Deep Learner retweeted
Photon Capital @PhotonCap
I joined today's GTC keynote session online. The energy was electric. And from the photos my friend sent from LA (OFC), the Nvidia 'Si Microring Resonator Modulators at >200Gb/s' session looked every bit as charged as the room Jensen was in. Photonics is fueling the fire.
[tweet media ×4]
2 replies · 4 reposts · 39 likes · 4K views
Deep Learner @_vision2020_
@bubbleboi @zephyr_z9 Unless they are using AEC or ACC, which would be bullish for Credo/Astera/Semtech. But I don't see LPO connectors.
0 replies · 0 reposts · 0 likes · 85 views
Deep Learner retweeted
Jason Luongo @JasonL_Capital
BREAKING: Claude now has live access to real-time stock quotes and options chain data You can pull prices, scan option chains, check Greeks, and view your portfolio without leaving the chat Here's how to connect the (free) API step by step:
[tweet media]
131 replies · 461 reposts · 4.8K likes · 1.1M views