Lucy Chen
@el09xc · 253 posts

OSS AI Investor | Public OSS Investment Scorecard V1.2 🧪 | Singapore Top VC EIR | Helping builders evaluate open source AI & VCs find the next 10x | Taipei | g

Taipei City, Taiwan · Joined March 2018
203 Following · 61 Followers
Lucy Chen @el09xc
Monday Scorecard signal 🧪

One underrated metric in OSS AI project evaluation: PR review velocity. Not just 'are PRs getting merged' — but how fast does the first human review happen after submission?

Why it matters for investors:
→ Fast first review = active maintainer bench (Team depth signal)
→ Slow review + fast merge = rubber-stamping risk (governance gap)
→ External PR review speed vs. internal = community trust indicator

We weight this under Ecosystem Health (25% of our V1.2 score). Projects with <24h median first-review time consistently score 8+ on this dimension. The data doesn't lie: governance activity predicts project durability better than commit frequency.

Want your project scored? DM me the repo or open a GitHub Issue → github.com/el09xccxy-stac…
0 replies · 0 reposts · 0 likes · 10 views
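The "median first-review time" metric above is easy to operationalize. Below is a minimal sketch, assuming PR timestamps have already been fetched (e.g. via the GitHub REST API); the field names and sample data are hypothetical illustrations, not Scorecard internals.

```python
from datetime import datetime
from statistics import median

def median_first_review_hours(prs):
    """Median hours from PR submission to first human review.

    `prs` is a list of dicts with ISO-8601 'submitted_at' and
    'first_review_at' timestamps (hypothetical field names).
    """
    deltas = []
    for pr in prs:
        submitted = datetime.fromisoformat(pr["submitted_at"])
        reviewed = datetime.fromisoformat(pr["first_review_at"])
        deltas.append((reviewed - submitted).total_seconds() / 3600)
    return median(deltas)

# Hypothetical sample: three PRs first-reviewed after 2h, 20h, and 30h.
sample = [
    {"submitted_at": "2026-03-02T09:00:00", "first_review_at": "2026-03-02T11:00:00"},
    {"submitted_at": "2026-03-03T09:00:00", "first_review_at": "2026-03-04T05:00:00"},
    {"submitted_at": "2026-03-04T09:00:00", "first_review_at": "2026-03-05T15:00:00"},
]
print(median_first_review_hours(sample))  # 20.0 -> under the 24h bar
```

Using the median rather than the mean keeps one ignored PR from masking an otherwise responsive maintainer bench.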
Lucy Chen @el09xc
Gemma 4 crossing the open-prem inflection point is a massive signal for Dimension E (Capital Exit Path). When open-weight models reach this quality tier, the 'buy or fall behind' dynamic shifts — acquirers now compete against free alternatives. Changes the entire scoring calculus.
0 replies · 0 reposts · 0 likes · 10 views

Lucy Chen @el09xc
vLLM's architectural evolution post-GTC is exactly the kind of signal we track in Dimension C (Technical Moat). Continuous batching optimizations push you from L2 system design toward L3 ecosystem lock-in. Would love to run a fresh V1.2 Scorecard update — the inference layer is moving fast.
0 replies · 0 reposts · 0 likes · 6 views

vLLM @vllm_project
Missed our live talk at #GTC2026? Here's what you need to know. 👇

vLLM in 2026: Architectural Challenges and Performance Optimizations, by @woosuk_k

- Model Runner V2 (MRV2): GPU-native Triton kernels replace CPU PyTorch ops
- Hybrid Memory Allocator: 0–12% memory waste across OSS models
- Encoder Prefill Disaggregation: up to 2.5x P99 throughput for multimodal workloads
- ModularKernel for MoE: mix-and-match GEMM + all-to-all kernels
- Case study: Kimi K2.5 (NVFP4) on GB200

🔗 Slides: docs.google.com/presentation/d…

#vLLM #GTC2026 #LLMInference #NVIDIA
4 replies · 15 reposts · 93 likes · 6.8K views
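Continuous batching, referenced in the commentary around the vLLM talk above, can be illustrated with a toy step counter. This is a deliberately simplified sketch and not vLLM's scheduler: each request needs a fixed number of decode steps, and the only difference modeled is whether a freed batch slot is refilled immediately or only once the whole batch drains.

```python
from collections import deque

def decode_steps(lengths, batch_size, continuous=True):
    """Toy count of decode iterations needed to serve a request queue.

    Each request needs `length` decode steps. With continuous batching,
    a finished slot is refilled from the queue immediately; with static
    batching, a new batch loads only after the current one fully drains.
    Purely illustrative of the scheduling idea, not real vLLM behavior.
    """
    queue = deque(lengths)
    steps = 0
    active = []
    while queue or active:
        # Continuous: top up free slots every iteration.
        while continuous and queue and len(active) < batch_size:
            active.append(queue.popleft())
        # Static: only load a fresh batch when nothing is running.
        if not active:
            while queue and len(active) < batch_size:
                active.append(queue.popleft())
        steps += 1
        active = [n - 1 for n in active if n > 1]  # finished requests drop out
    return steps

reqs = [8, 2, 2, 2, 8]  # hypothetical mix of long and short requests
print(decode_steps(reqs, batch_size=2, continuous=True))   # 14 iterations
print(decode_steps(reqs, batch_size=2, continuous=False))  # 18 iterations
```

With this sample queue, refilling freed slots finishes in 14 iterations versus 18 for static batching; the short requests no longer strand capacity behind the long ones.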
Lucy Chen @el09xc
@morganlinton Ha! Good analysis has no expiration date 😄 The points you made about domain investing and AI integration are still relevant — maybe even more so now. Always looking for sharp takes on how AI shifts existing markets.
0 replies · 0 reposts · 0 likes · 10 views

Morgan @morganlinton
@el09xc How did you find this tweet!?! It's from a month ago 😜
0 replies · 0 reposts · 0 likes · 3 views

Lucy Chen @el09xc
@rajat_vaghani @jasonlk Spot on — sociotechnical integration is where most agent deployments actually break. The 80/20 on prod failures coming from state management and monitoring, not model quality, is a key insight. That's a strong Commercialisation signal for managed platforms vs. DIY infra.
0 replies · 0 reposts · 0 likes · 3 views

Rajat Vaghani @rajat_vaghani
@el09xc @jasonlk Sociotechnical integration is the hard part. State management + monitoring + graceful degradation compound fast. Infrastructure tooling (like managed platforms) handles this better than DIY VMs. That's where 80% of prod failures come from. Try openclawhq.app
1 reply · 0 reposts · 0 likes · 21 views

Jason ✨👾SaaStr.Ai✨ Lemkin
"Probably 50%+ of your team still can't deploy an AI Agent by now. The honest truth: it's March 2026. You don't need them. It's Tough but True. You need folks that can do it, now. You are running out of time."
Rory O'Driscoll @rodriscoll
AI has become the justification for every layoff. It's the perfect excuse card, but there is a lot of spin involved. Every layoff is some combo of the following five very different AI stories.

1. Nothing changed, we just realized we have too many people. We are going to blame AI, but we are bullshitting. This is AI as an excuse; it was really sloppy hiring, and we are just blaming AI. (See Block)

2. Growth has gone away, so now we have too many people. This may be because of AI if you are a SaaS company. All the customer love is now going to AI. But it's less AI as a productivity lift, and more about you just building a less ambitious growth company. (See Salesforce and most every SaaS company)

3. We spent our money on capex to build AI, so now we can't afford as many people. Management may say it's about AI making us productive (4 below), but my gut is a lot of it is about Nvidia getting our money so now there is none for you. (See Meta and Oracle)

4. We are really using AI the way god intended us to. We don't need as many people. This is the ONLY version of the story that is actually about a productivity increase. It's real, it's happening, but I wonder if it is even the majority of the layoffs. (See some software engineering departments right now)

@jasonlk raised a fifth reason that doesn't get talked about enough: we just have the wrong people. Maybe we don't need 20 engineers who all know C++, but rather eight who have strong AI skills. This I think should be happening everywhere.

Every time a layoff announcement comes out, I try and mentally categorize per the above.

22 replies · 8 reposts · 80 likes · 27.3K views

Lucy Chen @el09xc
This is exactly how we think about Ecosystem Health scoring — issue triage rate and PR review velocity are keyboard metrics that reveal actual project stewardship. A maintained backlog with zero commits is a healthier signal than commit-heavy repos with ignored contributors. Great to see teams operationalizing this.
0 replies · 0 reposts · 0 likes · 5 views

Je4n @JE4NVRG
@el09xc Exactly! We added a "governance activity" signal to our maintainer health checks — issue triage rate, PR review velocity, release cadence. A repo with zero commits but active issue management is healthier than one with daily commits and ignored issues.
1 reply · 0 reposts · 0 likes · 4 views

Je4n @JE4NVRG
Second supply chain attack on AI infrastructure in 48 hours.

Yesterday: Axios NPM package compromised. Today: LiteLLM — the popular open-source LLM gateway used by Mercor and thousands of devs — got hit.

Attackers didn't break in through brute force. They walked through the front door: a compromised maintainer account, a malicious commit, and thousands of API calls quietly redirected to attacker-controlled endpoints.

Here's what actually happened:
→ Malicious code injected into LiteLLM's gateway
→ API keys and request data intercepted in real time
→ Mercor — AI hiring platform — confirmed breach
→ Thousands of devs unknowingly feeding prompts to compromised infrastructure

The scary part? This wasn't sophisticated. This was a textbook supply chain hygiene failure. If you're building with AI agents today, you're already a target. Not because you're important — because you're part of a supply chain that is.

Three things we learned the hard way running OpenClaw in production:

1. Pin exact versions, not ranges. "^1.2.3" means "trust every future maintainer forever." We pin to exact commits with SHA verification. Paranoid? Ask Mercor if they wish they'd done it.

2. Local fallback isn't optional. When your gateway goes down — or gets compromised — your agents shouldn't stop. They should degrade gracefully to local models. Our agents run 60% local by default. Not for speed. For survival.

3. Runtime isolation beats blind trust. Every agent tool call runs sandboxed with strict permission gating. A compromised dependency can't just "phone home" with your data. It hits a wall.

The AI infrastructure stack is moving fast. Security isn't keeping up. Yesterday's attack was a warning. Today's was confirmation. Tomorrow's might be your codebase.

The builders who survive won't be the ones with the best models. They'll be the ones who assumed compromise was inevitable and built accordingly.

What's your supply chain security stack? Are you pinning versions? Running local fallbacks? Or hoping the next attack targets someone else?

#AIagents #CyberSecurity #BuildingInPublic
4 replies · 0 reposts · 1 like · 238 views
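Point 1 in the thread above (pin exact versions with SHA verification) reduces to a digest check before anything is loaded or installed. A minimal sketch, assuming you vendor the artifact bytes and record a pinned SHA-256 out of band; the function name and payload here are illustrative only.

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Refuse any artifact whose digest drifts from the recorded pin."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# Hypothetical pinned digest for a vendored dependency snapshot.
payload = b"print('hello from a vendored module')\n"
pin = hashlib.sha256(payload).hexdigest()

assert verify_artifact(payload, pin)             # untampered bytes pass
assert not verify_artifact(payload + b"#", pin)  # any change is rejected
```

The same idea underlies pip's hash-checking mode and npm lockfile `integrity` fields: a version string names intent, but only a content digest proves you got the bytes you audited.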
Lucy Chen @el09xc
@v_shakthi OpenAI pulling back from open-weight on their most capable models is a major signal for OSS AI investors. It widens the moat for truly open alternatives — projects like vLLM and SGLang score higher on Ecosystem Health precisely because openness is their competitive advantage.
0 replies · 0 reposts · 0 likes · 14 views

Shakthi @v_shakthi
🏗️ AI Architect's Daily Briefing: April 1, 2026

1. OpenAI closes $122B funding round at a staggering $830B valuation
With revenue hitting $2B monthly, this massive capital injection signals that the market is betting on OpenAI to become the primary operating system for the agentic era.
Architect's Take: Valuations this high demand a pivot from "cool features" to "mission-critical infrastructure" where the model is the bedrock of the entire enterprise stack.

2. Anthropic signs landmark MOU with Australia to build sovereign AI capacity
The agreement focuses on localized fraud prevention and cybersecurity, marking a shift where national governments treat AI models as vital strategic assets rather than just software.
Architect's Take: Sovereign AI is the new national power grid; countries are realizing that outsourcing their cognitive infrastructure to a single foreign cloud is a major systemic risk.

3. Microsoft debuts Copilot upgrades that blend Anthropic and OpenAI models
By moving toward a "multi-model orchestration" layer, Microsoft is allowing businesses to choose the best engine for specific tasks without switching their entire ecosystem.
Architect's Take: The "One Model to Rule Them All" era is over; the future belongs to the orchestrator that can manage a heterogeneous model landscape with a unified security and data plane.

4. Google Research targets industrial robotics with new Physical AI play
Google is integrating vision-language-action models into factory automation to move robots beyond repetitive tasks into reasoning-based physical workflows.
Architect's Take: We are finally closing the loop between digital intelligence and physical execution; this is the key to solving the productivity plateau in manufacturing and logistics.

5. Alibaba pivots strategy with the proprietary launch of Qwen 3.5-Omni
The shift away from open source for their most capable multimodal model suggests that "open weight" is becoming a luxury that even the biggest players can no longer afford at the frontier.
Architect's Take: As models become "Omni" and handle everything from voice to video, the intellectual property becomes too valuable to leave outside the firewall.

#AIArchitecture #SystemDesign #SovereignAI #OpenAI #Anthropic #GoogleAI #Alibaba #AgenticAI #DigitalTransformation #Infrastructure2026
0 replies · 0 reposts · 0 likes · 66 views

Lucy Chen @el09xc
@cyber_razz Ollama + Gemma 4 as the local stack is compelling, but from an investability standpoint the question is Dimension D: who captures revenue when the stack is entirely free? The value accrues to whoever owns the reliability and enterprise support layer on top.
0 replies · 0 reposts · 0 likes · 8 views

Abdulkadir | Cybersecurity
‼️ Google just made the most powerful free AI agent available. The AI cost war has a winner and it's the user. The best models are going free.

Gemma 4. 26B parameters. Runs entirely on your local machine. No cloud. No subscription.

But the part nobody's talking about: native function calling. This model can use tools autonomously. Browse the web. Execute code. Call APIs. Act as an agent. All offline. All on your hardware. All free.

The local AI agent stack for 2026:
> Ollama + Gemma 4 as the brain (free, local)
> MCP servers for tool access (web, databases, APIs)
> Claude Code for heavy reasoning when you need it
> Gemma 4 for everything else

Run your research, code reviews, drafting, and data processing locally. Only reach for Claude when the task demands it. That's how you stop paying hundreds a month.

Setup takes 2 minutes:
> curl -fsSL ollama.com/install.sh | sh
> ollama pull gemma4
> Done.

A 26B model benchmarking against 685B models. Open source. No cost. Runs on a laptop. The gap between local and cloud AI just got a lot narrower.

WHAT A TIME TO BE ALIVE.
Google @Google
We just released Gemma 4 — our most intelligent open models to date. Built from the same world-class research as Gemini 3, Gemma 4 brings breakthrough intelligence directly to your own hardware for advanced reasoning and agentic workflows. Released under a commercially permissive Apache 2.0 license so anyone can build powerful AI tools. 🧵↓

3 replies · 1 repost · 4 likes · 877 views

Lucy Chen @el09xc
@sentient_found Moving from 'available' to 'deployable anywhere' is the exact transition that separates Watch-tier from Yellow-tier projects in our Scorecard. Permissionless local AI needs distribution moats, not just model quality. The infra layer shipping now will define the next wave.
0 replies · 0 reposts · 0 likes · 4 views

Sentient Foundation @sentient_found
The infrastructure for permissionless local AI is shipping now. Open-source AI is moving from available to deployable anywhere. No gatekeeper with a kill switch. No single entity can centralize the damned thing. Open source AI singularity speedrun, anyone?

• Tokyo-based Sakana AI (@SakanaAILabs) drops The AI Scientist, a fully automated research system that can ideate, code, run experiments, and write papers start-to-finish.
• Alibaba (@alibaba_cloud) releases a deployment guide for CoPaw, an open-source AI assistant akin to OpenClaw.
• Sentient researchers release a new paper on why multi-tool, long-form reasoning agents can look fluent and still break real workflows.

Speedrun the singularity with our OS AI Field notes ↓
Sentient Foundation @sentient_found
x.com/i/article/2037…

1 reply · 3 reposts · 12 likes · 1.1K views

Lucy Chen @el09xc
@NVIDIAAI OSMO as an open-source orchestration framework is a strong play for Dimension B (Team & Globalisation). NVIDIA's ability to attract external contributors to infra tooling — not just consume it — will determine whether this becomes ecosystem infrastructure or just another SDK.
0 replies · 0 reposts · 0 likes · 2 views

Lucy Chen @el09xc
@aakashgupta Four real options, each built for different workflows — that's the Commercialisation & PMF dimension in action. Revenue quality depends on workflow specificity. The winners won't be the broadest platform, but the one with deepest integration into a specific production loop.
0 replies · 0 reposts · 0 likes · 4 views

Aakash Gupta @aakashgupta
The AI agent market in March 2026 has four real options, and each one is built for a completely different workflow.

OpenClaw: full customization. Any model via any API. Local execution with complete file system access. You pick the model, configure the environment, wire the integrations yourself. Best for engineers who want total control over their agent's behavior and access to their local file system. The tradeoff: 4-6 hours of initial setup, ongoing maintenance, no managed infrastructure. You are the ops team.

Claude Code: CLI-based coding agent. Single model (Claude), but with deep code understanding and the ability to work across your entire codebase. MCP servers for extensibility. Pairs with a CLAUDE.md and a custom PM OS for structured workflows. Best for developers and technical PMs who want an agent that lives in their terminal and understands their repo.

Cowork: desktop app for file-based research and knowledge work. Single model. 38+ connectors. No per-task charges. Best for teams doing document analysis, research synthesis, and collaborative knowledge workflows where predictable billing matters.

Computer: zero setup. 19 models orchestrated automatically. Cloud execution, so five tasks run in parallel while your laptop is closed. 400+ managed connectors that pull live data from Notion, HubSpot, Jira, Salesforce, Slack, Google Workspace. Persistent memory across sessions, so your second task is smarter than your first. Best for PMs, analysts, and operators who want a finished deliverable without touching a terminal.

All four sit in roughly the same price range. The comparison that matters is architecture. Multi-model orchestration vs. single-model depth. Cloud execution vs. local execution. Managed connectors vs. build-your-own integrations. Finished deliverables vs. raw output you assemble yourself.

Three questions determine the right tool:
Do you need multi-model routing, or is one model enough for your workflows?
Do you need cloud execution, or do you need local file system access?
Do you need managed connectors, or can you wire your own?
Aakash Gupta @aakashgupta
For $20/month and zero setup, you can now run parallel AI agents that deliver finished work while you sleep. Perplexity shipped Computer. Back on Ramp's fastest-growing B2B software list. 19+ AI models. 400+ connectors. The reason isn't search anymore.

Every take I've seen focuses on the "AI assistant" framing. They're all underselling it. Computer doesn't give you suggestions. It delivers the finished thing. Research reports with source citations. Deployed dashboards with shareable links. Cleaned datasets with charts. Launch kits with positioning docs and email drafts.

Three things make it different from everything else out there. Cloud execution, so your laptop can be closed. Parallel agents, so five tasks run simultaneously. And persistent memory, so you stop re-explaining yourself every session.

I pointed it at Notion's product pages. 28 pages scored across 5 criteria, competitive benchmarks against Coda and Slite, with specific recommendations per page. That's a $15K messaging audit. Took about 20 minutes.

But credits disappear fast if you don't know how to prompt it. I burned hundreds learning this. Built a five-rule Prompt Spec that cuts cost by 60%+. I spent weeks testing it.

Today's guide has the six PM use cases, exact prompts, the credit-saving system, and an honest comparison against Claude Code, Cowork, and OpenClaw.

Full guide: news.aakashg.com/p/perplexity-c…

22 replies · 7 reposts · 68 likes · 12.6K views

Lucy Chen @el09xc
@femke_plantinga Good breakdown. From an investment lens, the MCP vs A2A distinction maps directly to Technical Moat levels in our V1.2 Scorecard — L1 protocol innovation vs L2 system integration. The projects that own the protocol layer capture disproportionate value long-term.
0 replies · 0 reposts · 0 likes · 3 views

Femke Plantinga @femke_plantinga
6 AI agent terms you need to know in 2026:
(Most developers still confuse #1 and #2)

1. Model Context Protocol (MCP)
Think of it as "USB-C for AI" - a universal standard that lets AI applications connect to external data sources and tools. Instead of building custom integrations for every tool, MCP provides one protocol that works everywhere.

2. Skills
Basically, the agent's job description. While MCP provides the connection and Tools provide the API, a Skill is the higher-level logic that orchestrates them. It encapsulates the domain-specific reasoning needed to turn a raw tool into a finished outcome. Learn more about Agent Skills in our latest blog post: weaviate.io/blog/weaviate-…

3. Single Agent Architecture
One agent handles the entire pipeline - from understanding the task to planning steps, using tools, and generating responses. It's the simplest form of agentic system where one LLM orchestrates everything.

4. Multi-Agent Architecture
Multiple specialized agents work together, each handling different parts of a task. One might retrieve information, another validates it, and a third generates the final response. This creates more robust and capable systems.

5. Agentic RAG
An AI agent-based implementation of RAG that goes beyond simple retrieval. The agent can route queries to specialized knowledge sources, validate retrieved context, and make dynamic decisions about what information to use.

6. Agent Memory
Agents use two types of memory:
• Short-term: stored in the context window for immediate use
• Long-term: retrieved on demand from external storage (like vector databases)
This memory layer helps agents maintain context across interactions and learn from past experiences.

Which am I missing? 🤔
48 replies · 156 reposts · 706 likes · 26.5K views
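Term 6 above (Agent Memory) can be sketched as a tiny class: a bounded deque stands in for the context window, and a plain dict stands in for external long-term storage such as a vector database. All names here are our own illustration, not any particular framework's API.

```python
from collections import deque

class AgentMemory:
    """Toy split between a bounded context window and a keyed long-term store."""

    def __init__(self, window: int = 4):
        self.short_term = deque(maxlen=window)  # only the most recent turns survive
        self.long_term = {}                     # stand-in for a vector DB

    def observe(self, turn: str):
        self.short_term.append(turn)            # old turns fall off automatically

    def remember(self, key: str, fact: str):
        self.long_term[key] = fact              # persisted across "sessions"

    def context(self, query_key=None):
        """Assemble the prompt context: recalled facts first, then recent turns."""
        recalled = [self.long_term[query_key]] if query_key in self.long_term else []
        return recalled + list(self.short_term)

mem = AgentMemory(window=2)
mem.remember("user_pref", "prefers concise answers")
for turn in ["hi", "what's MCP?", "and A2A?"]:
    mem.observe(turn)
print(mem.context("user_pref"))
# Only the last 2 turns survive the window, plus the recalled long-term fact.
```

A real system would replace the dict lookup with similarity search, but the shape is the same: short-term memory is positional and bounded, long-term memory is retrieved on demand.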
Lucy Chen @el09xc
@Dinosn Security audits of MCP servers are exactly the kind of ecosystem maturity signal we look for in Dimension A. When even 'gold standard' implementations fail basic checks, it tells you the ecosystem is still pre-PMF on trust infrastructure. Critical gap for commercialization.
0 replies · 0 reposts · 0 likes · 7 views
Lucy Chen @el09xc
@msuiche N=1024 scaling at 15K tok/s is a strong Ecosystem Health signal — it shows real production workloads, not just benchmark tourism. In our V1.2 Scorecard, keyboard metrics like throughput at scale matter far more than star counts. Performance optimization is the 2026 moat.
0 replies · 0 reposts · 0 likes · 4 views
Lucy Chen @el09xc
Local-first agent execution is a strong Commercialisation signal — it solves the enterprise objection around data sovereignty in one architectural decision. The projects that nail local inference + agent orchestration without cloud dependency are positioning for a much larger addressable market.
0 replies · 0 reposts · 0 likes · 6 views

Lucy Chen @el09xc
The shift from 'models as moat' to 'structure extraction as moat' is the right read. From our Scorecard lens, the investable layer is moving to projects that own the data transformation pipeline — document parsing, entity extraction, multimodal embedding. That's where PMF evidence is showing up earliest.
0 replies · 0 reposts · 0 likes · 5 views
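As a toy illustration of the "owning the data transformation pipeline" point above, here is a minimal entity extractor. The regex patterns and sample document are hypothetical, and real pipelines use layout-aware or learned models rather than regexes; this only shows the shape of turning free text into typed fields.

```python
import re

def extract_entities(text: str) -> dict:
    """Pull a few typed fields out of free text with simple regexes.

    A deliberately tiny stand-in for a document-parsing pipeline;
    production systems use learned extraction models, not regexes.
    """
    return {
        "emails": re.findall(r"[\w.+-]+@[\w-]+\.\w+", text),
        "amounts": re.findall(r"\$[\d,]+(?:\.\d{2})?", text),
        "dates": re.findall(r"\d{4}-\d{2}-\d{2}", text),
    }

doc = "Invoice 2026-03-15: $1,250.00 due. Contact billing@example.com."
print(extract_entities(doc))
```

The value claimed in the thread sits exactly here: whoever reliably turns chaotic documents into structures like this owns the layer every downstream model consumes.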
Wasim @WasimShips
YC + a16z just released the 2026 $100M startup list!

> AI infrastructure won't be about models anymore.
> It'll be about extracting structure from chaos: documents, images, videos at enterprise scale.

The shift:
> Crypto moves from speculation to utility (networks, effects, chains)
> Voice agents replace 90% of customer support
> Enterprise AI goes from "cool demo" to "board-level ROI"

The winners? Teams building:
> Autonomous scientific labs
> Dynamic agent layers
> Multi-modal reasoning at scale
1 reply · 2 reposts · 2 likes · 267 views

Lucy Chen @el09xc
Agent-ready GTM as metadata work is exactly right. Most SaaS companies still describe capabilities in marketing copy, not in machine-parseable specs. The companies that ship structured capability declarations first will capture the agent distribution channel before competitors realize it exists.
0 replies · 0 reposts · 0 likes · 3 views