Andrew St. Clair

9 posts

@a_st_clair

Investor at Prosperity7 Ventures. Former Barclays and BYU

Joined April 2025
140 Following · 13 Followers
Andrew St. Clair reposted
Atal @ZabihullahAtal
🚨 BREAKING: A new research paper argues that the future computer will have no apps at all, and no operating systems like Windows, macOS, or Linux. Instead, it may run entirely on AI agents. The concept is called AgentOS.

Here's the problem researchers identified. Today's AI agents are becoming incredibly capable. Systems like OpenClaw can already:
• control a local computer
• execute complex workflows
• connect and use external tools
• perform multi-step tasks autonomously

But there's a hidden limitation. All of these agents still run inside traditional operating systems, and those systems were designed for a completely different era.

Modern operating systems like Windows, macOS, and Linux were built around two interaction models:
• GUI (Graphical User Interface): clicking icons and navigating windows
• CLI (Command Line Interface): typing commands into a terminal

These models were designed for humans manually operating software, not for AI agents coordinating complex tasks across dozens of tools. This creates a fundamental mismatch, and it leads to several problems.

First: fragmentation. Every application exists in its own silo. Data, workflows, and permissions are separated across different programs.

Second: context loss. When a task spans multiple tools, the system has no unified understanding of what the user is trying to accomplish. Each app only sees a small piece of the workflow.

Third: messy permissions and hidden automation. Many AI tools bypass normal system controls to get things done. Researchers call this phenomenon "Shadow AI": autonomous agents operating across systems without clear structure, governance, or transparency.

In short: AI agents are powerful, but the operating system architecture isn't designed for them. So researchers propose a new paradigm: a new type of operating system called AgentOS. Instead of apps running on the system, the system itself becomes an AI coordination layer.

At the center is something called the Agent Kernel. Think of it as the brain of the entire computer. This kernel continuously interprets user intent and manages intelligent agents. It can:
• understand natural language requests
• break complex tasks into smaller steps
• coordinate multiple specialized AI agents
• select the right tools for each step

And traditional software? It evolves into something called Skills-as-Modules. Instead of launching separate applications, capabilities become modular skills that agents can dynamically combine.

For example, instead of manually opening multiple tools:
• a document editor
• a spreadsheet
• a presentation app
• an email client

You simply say: "Analyze this report, extract the key insights, create slides, and send them to my team." The Agent Kernel interprets the request, then automatically selects and orchestrates the required skills. No apps. No switching windows. Just intent → execution.

In other words: computers stop being app platforms. They become intent platforms.
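The kernel-plus-skills idea described above can be sketched in a few lines of Python. This is a toy illustration, not code from the paper: the registry, the `AgentKernel` class, the fixed intent-to-plan table, and all skill names (`analyze_report`, `make_slides`, `send_email`) are hypothetical; a real system would use an LLM to plan rather than a lookup table.

```python
# Toy sketch of the "Agent Kernel" + "Skills-as-Modules" idea.
# All names are illustrative assumptions, not from the paper.
from typing import Callable, Dict

# Registry of modular skills the kernel can compose.
SKILLS: Dict[str, Callable[[dict], dict]] = {}

def skill(name: str):
    """Register a function as a reusable skill module."""
    def wrap(fn):
        SKILLS[name] = fn
        return fn
    return wrap

@skill("analyze_report")
def analyze_report(ctx: dict) -> dict:
    # Pretend to extract key insights from a report.
    ctx["insights"] = [f"key point from {ctx['report']}"]
    return ctx

@skill("make_slides")
def make_slides(ctx: dict) -> dict:
    # Turn each insight into a slide.
    ctx["slides"] = [f"Slide: {i}" for i in ctx["insights"]]
    return ctx

@skill("send_email")
def send_email(ctx: dict) -> dict:
    # Pretend to email the slides to the recipients.
    ctx["sent_to"] = ctx["recipients"]
    return ctx

class AgentKernel:
    """Toy kernel: maps an intent to a plan and threads context
    through each skill in order. A real kernel would plan dynamically."""
    PLANS = {
        "report_to_team": ["analyze_report", "make_slides", "send_email"],
    }

    def run(self, intent: str, ctx: dict) -> dict:
        for name in self.PLANS[intent]:
            ctx = SKILLS[name](ctx)
        return ctx

kernel = AgentKernel()
result = kernel.run("report_to_team",
                    {"report": "Q3.pdf", "recipients": ["team"]})
```

After `run` returns, `result` holds the accumulated context: the extracted insights, the generated slides, and the delivery record. The point of the sketch is only the shape of the architecture: capabilities live in a flat registry rather than in separate apps, and the kernel, not the user, sequences them.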
131 · 295 · 1.1K · 87.6K
Andrew St. Clair reposted
Rohan Paul @rohanpaul_ai
Groq (recently acquired by Nvidia) is massively expanding its partnership with Samsung, increasing its order from 9,000 wafers to about 15,000. According to industry sources, Groq just asked Samsung Foundry to boost 4nm chip production by about 70%, to 15,000 wafers.

While Nvidia dominates the chips that train AI, running those trained models demands too much electricity. Groq addresses this by using Static Random Access Memory (SRAM) instead of high-bandwidth memory. Because SRAM sits beside the computing cores, data is processed with very low latency while slashing power and costs. Nvidia will likely reveal a new inference chip based on this SRAM design sometime in 2026.

---

The AI world is divided into training and inference. Training is the "schooling" phase, while inference is when the AI actually does its job. NVIDIA and AMD are the masters of training hardware, but because those chips eat up so much energy, there is a shift toward inference-heavy chips. This is why NVIDIA acquired Groq indirectly, aiming to build a massive presence in the inference market before anyone else catches up.
12 · 17 · 86 · 11.6K
Andrew St. Clair reposted
Aidan Gold @MrGoldBro
Top private companies that fewer people know about:
1) Heron Power
2) Erebor Bank
3) Redwood Materials
4) Bedrock Robotics
5) Armada
12 · 13 · 272 · 18.3K
Andrew St. Clair reposted
Yiannis Zourmpanos @yianisz
You can make a lot of money in the photonics / SiPh market if you position early. What I'm doing right now is watching the photonics supply chain very closely ahead of the Optical Fiber Communication Conference (OFC) 2026 next week. OFC tells you where the photonics industry will be in 2–3 years. So I'm not looking at flashy demos; I'm looking for signals of real demand and production ramps.

Here's what I'm focused on:

1) 1.6T optics moving from lab to early deployment
800G is still ramping, but hyperscalers are already designing the next generation. If companies like $LITE, $COHR, $AAOI, $ACIA (Cisco), or $INFN platforms start talking about 1.6T moving from lab demos into customer sampling or early production planning, it means the next upgrade cycle may already be forming. Contract manufacturers like $FN and optics assemblers such as $JNPR or $CSCO's OEM partners would benefit directly from that ramp.

2) Silicon photonics finally scaling
I want to hear real production signals from silicon photonics foundries. If $TSEM, $GFS, $MRVL (with its SiPho/DSP platforms), or ecosystem players like $INTC and $AAPL's supply chain start talking about wafer volume, customer qualifications, or high-volume manufacturing ramps, that confirms SiPho is moving from research projects into actual manufacturing scale. Any mention of specific CPO, LPO, or 1.6T tape-outs going into HVM would be a key tell.

3) Co-packaged optics timeline
Everyone talks about CPO, but timing is the real question. If vendors like $AVGO, $COHR, $LITE, $MRVL, $ADVA, or switch OEMs ($ANET, $CSCO) hint that 2027 deployments are realistic, it creates a second growth wave across the ecosystem. That would benefit SiPho suppliers like $TSEM and $GFS, optical component makers like $LITE and $COHR, co-packaged light-source players such as $POET and $SIVSQ, and advanced packaging names like $BESIY and $AMKR.

4) Optical circuit switching adoption
Another theme I'm watching is optical circuit switching. If hyperscalers expand testing and start talking about early deployments of this architecture for large GPU clusters, it could create a completely new optical networking market. Companies like $LITE (OCS), $COHR (photonic switching), system vendors like $CIEN, $JNPR, and $CSCO, and SiPho enablers such as $TSEM could all be key beneficiaries as architectures evolve.

5) Upstream bottlenecks
Every optical module ultimately depends on lasers, substrates, and manufacturing tools. If demand continues accelerating, the real leverage may remain upstream with companies like $AXTI and $IQE supplying InP materials, epi, and substrates, while equipment and testing demand rises for firms like $VECO (InP MOCVD/IBD), $AEHR (SiPho and AI wafer-level burn-in), $ONTO and $FORM (inspection/probe), and even tool vendors like $AMAT and $LRCX as photonics lines scale.

6) Emerging photonics platforms
I'll also be watching smaller innovation-driven players like $POET, $LWLG, $ALMU, NLM-linked platforms (via foundries), and integrated module players like $MTSI and $IPGP. These companies could surprise the market if they announce new partnerships, OEM integrations, or design wins during the conference, especially around external light sources for CPO/LPO, new modulator technologies, or AI-specific optical engines.

My view: OFC often reveals where the industry is heading long before the numbers show up in earnings. Right now, the signals from $AVGO's AI networking guide and $NVDA's $4B optics investment suggest AI networking demand is accelerating across the entire photonics supply chain, from materials and tools all the way up to modules and systems.
25 · 38 · 421 · 95.3K
Andrew St. Clair reposted
Ricardo @Ric_RTP
Nvidia just spent $4 billion on a technology 99% of people have never heard of. But in 3 years, every AI data center on Earth will need it. And Nvidia just LOCKED UP the supply.

Here's what happened: Nvidia invested $2 billion in Coherent and $2 billion in Lumentum. You've probably never heard of these companies. They make photonics technology: systems that transmit data using LIGHT instead of electricity. Sounds like sci-fi, but this is the most important infrastructure bet in AI right now.

Here's the problem Nvidia just solved for itself: AI data centers are hitting a wall that has nothing to do with chips, energy, or money. Copper wiring is dying. Every data center on Earth moves data between GPUs using copper cables. But at the speeds AI now demands, copper physically cannot keep up. Signal degrades. Heat explodes. Power consumption skyrockets. Right now, 30% of the electricity in an AI data center is wasted just MOVING data from point A to point B.

An MIT researcher said: "Copper's not going to cut it. It gets too hot. Too much power consumption and loss." Jensen Huang admitted it himself too: "We use copper as far as we can, about a meter or two. But where data centers are the size of a stadium, we need something else."

That something else is photonics: replacing copper with laser-powered fiber optics built directly into the chip. The numbers are insane:
- 3.5x more power efficient
- 10x better network reliability
- Data moving at 102 terabits per second

Wells Fargo estimates the photonics market will hit $10-12 billion by 2030. And Nvidia just bought privileged access to the two companies that make the advanced lasers every single one of these systems will need.

This is the Nvidia playbook on repeat. They did this with CoreWeave: invested $2 billion, locked up GPU capacity, created a dependent customer. They did this with memory suppliers: secured HBM allocations years in advance while competitors scrambled. Now they're doing it with photonics. Invest early. Lock up supply. Make the entire ecosystem dependent on companies that are dependent on Nvidia.

By the time competitors realize photonics is the bottleneck, Nvidia already OWNS the supply chain. Every data center, AI factory, and GPU cluster will need this technology to function at scale. Nvidia will become even more important.
166 · 542 · 3.5K · 637.2K