

Ed Sim
@edsim
@boldstartvc partnering from Inception with bold technical founders building the autonomous enterprise, weekly newsletter: What's 🔥 IT/VC 👇🏼


Today, I'm excited to announce the industry's first proactive AI SRE agent with @grepr_ai. It works by finding novel behaviors in your environment and only asking an LLM to investigate those. By focusing on novel behaviors, we make it possible to apply LLMs to an entire stream of observability data. Read more: grepr.ai/blog/proactive…
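The core idea here, gating expensive LLM investigation behind a cheap novelty check, can be sketched roughly like this. This is a minimal illustration, not Grepr's actual implementation; the `fingerprint` normalization and `NoveltyGate` names are hypothetical:

```python
import re

def fingerprint(line: str) -> str:
    """Collapse variable tokens (hex ids, numbers) so that log lines
    differing only in values map to the same template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

class NoveltyGate:
    """Passes a log line through only the first time its template is seen."""
    def __init__(self) -> None:
        self.seen: set[str] = set()

    def is_novel(self, line: str) -> bool:
        fp = fingerprint(line)
        if fp in self.seen:
            return False
        self.seen.add(fp)
        return True

gate = NoveltyGate()
stream = [
    "GET /api/users 200 in 12ms",
    "GET /api/users 200 in 9ms",       # same template -> suppressed
    "panic: nil pointer dereference",  # novel -> hand off to the LLM
]
to_investigate = [line for line in stream if gate.is_novel(line)]
```

Only `to_investigate` (two lines out of three here) would ever reach the LLM, which is what makes running this over a full observability stream affordable.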

Right now every coding agent is running in full god mode. Your creds. Your perms. Zero guardrails. Human identity = one dimension. Agent identity = four: user, agent, runtime, task. Every credential short-lived, scoped to the exact tool call, and gone when the session ends. This is what real agent security infrastructure looks like. @KeycardLabs 🔥 Fired up to be an investor since inception.
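The four-dimensional identity and short-lived, tool-scoped credentials described above can be sketched as follows. This is a hypothetical illustration of the model, not Keycard's API; all type and function names (`AgentIdentity`, `mint`, `authorize`) are made up:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    user: str     # the human the agent acts on behalf of
    agent: str    # which agent (model/binary) is acting
    runtime: str  # where it executes, e.g. a CI sandbox
    task: str     # the specific task or session

@dataclass
class Credential:
    identity: AgentIdentity
    tool: str          # scoped to exactly one tool call
    expires_at: float  # short-lived: useless after the session ends

def mint(identity: AgentIdentity, tool: str, ttl_s: float = 60.0) -> Credential:
    """Issue a credential valid for one tool and a short TTL."""
    return Credential(identity, tool, time.time() + ttl_s)

def authorize(cred: Credential, tool: str) -> bool:
    """Allow the call only if the tool matches and the TTL has not lapsed."""
    return cred.tool == tool and time.time() < cred.expires_at

ident = AgentIdentity("alice", "coder-v1", "ci-sandbox", "fix-issue-42")
cred = mint(ident, tool="git.push", ttl_s=60)
```

With this shape, `authorize(cred, "git.push")` passes while anything else, including the same tool after expiry, is denied by default. Contrast that with today's status quo, where the agent simply reuses the human's long-lived, all-purpose credentials.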



Introducing Lovable for more general tasks. Lovable has always been for building apps. Today it also becomes your data scientist, your business analyst, your deck builder, and your marketing assistant. This is a big step toward what Lovable is becoming: a general-purpose co-founder that can do anything. See examples below.

Cloudflare $NET CEO Matthew Prince said AI bot traffic is on pace to exceed human internet traffic by 2027. Prince said AI agents can hit thousands of sites for a single task, creating much more load than a human user and putting new pressure on internet infrastructure.


Your coding agents inherit your credentials and your permissions. No identity system in the stack can tell the difference between you and the agent acting in your name. Today: Keycard for Coding Agents 🧵




When @bling0 says the party goes on, rain or shine, he means it. Great time last night at @blingcapital @JoinAtomic @GVteam @Boldstartvc @stripe Miami Open Cocktail Mixer with so many of my favorite people. @chest @edsim

Happening now at #NVIDIAGTC: Generalist’s GEN-0 model autonomously packing phones on @Universal_Robot arms in our first public demo. To move robotics beyond the lab, systems need to operate in real time on industrial hardware. See the demo below, and stop by booth #1840 👇🤖

36.8% of AI agent skills have security issues. That's why we partnered with Snyk. Every skill in the Tessl Registry now has a Snyk security score and is scanned at publish, browse, and install. Skills aren't just code; they're instructions that need a different security model. More in our blog: bit.ly/47PGQFH

Your agent is only as trustworthy as the environment it runs in. So today we launch something new with @NVIDIA.

AI agents have gone from prompt-and-response tools to autonomous systems that run for hours, write their own code, build their own tools, and learn as they go. The OpenClaw project earlier this year made this concrete: self-evolving agents that plan complex tasks, generate their own tools, and run continuous workflows.

We built CrewAI for exactly this. Long-running multi-agent systems. Persistent memory. A dual-layer architecture where Flows handle deterministic control and Crews handle reasoning. Developers get precise control over how much autonomy each part of the system gets.

But here's what keeps coming up with enterprise teams. When an agent can install packages, write files, and generate its own tools, it can also do things you didn't plan for. Most agents inherit the full permissions of whoever launched them. Security checks are usually built inside the agent, so a self-evolving agent could, in theory, work around its own guardrails. This is the trust gap, and the real reason most enterprise agent projects don't make it to production.

CrewAI addresses a lot of this at the orchestration layer: guardrails, human-in-the-loop, and hierarchical task scoping. But orchestration alone can't close the full gap. You also need enforcement at the infrastructure level, below the agent, where the agent can't reach.

That's why we're working with NVIDIA on NemoClaw. NVIDIA NemoClaw is an open-source stack that simplifies running OpenClaw always-on assistants safely, with a single command. It includes the NVIDIA OpenShell Runtime with three core capabilities:
- A sandbox for isolated execution: agents operate freely without affecting the host.
- A policy engine that evaluates every action at the binary, destination, and network level.
- A privacy router that directs inference to local or external models based on your enterprise policies.

The critical design choice: enforcement happens at the infrastructure layer, not inside the agent's code. Even if an agent's logic changes unexpectedly, the runtime blocks anything that violates policy. Agents start with zero permissions. Every escalation requires human approval. Every decision gets logged.

CrewAI handles orchestration. NemoClaw handles the secure runtime. Together, organizations can run powerful autonomous agents while maintaining real control over their infrastructure and data. We've powered roughly 2 billion agentic executions over the past year and work with more than 60% of the Fortune 500. NemoClaw's infrastructure layer closes the gap between what these agents can do and what enterprises need to trust them in production.
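The enforcement model described above, default-deny, explicit human-approved grants, and an audit trail for every decision, can be sketched in a few lines. This is a conceptual illustration only, not NemoClaw's implementation; `Action`, `PolicyEngine`, and their fields are hypothetical names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    binary: str       # e.g. "git", "curl", "pip"
    destination: str  # file path or host the action touches
    network: bool     # whether the action opens a network connection

class PolicyEngine:
    """Default-deny: agents start with zero permissions, and every
    allow/deny decision is logged for audit."""
    def __init__(self) -> None:
        self.allowed: set[tuple[str, str, bool]] = set()
        self.audit_log: list[tuple[Action, str]] = []

    def grant(self, action: Action) -> None:
        """A human-approved escalation adds exactly one explicit permission."""
        self.allowed.add((action.binary, action.destination, action.network))

    def check(self, action: Action) -> bool:
        ok = (action.binary, action.destination, action.network) in self.allowed
        self.audit_log.append((action, "allow" if ok else "deny"))
        return ok

engine = PolicyEngine()
engine.grant(Action("git", "/workspace/repo", network=False))
```

Because the engine sits below the agent, a self-evolving agent can rewrite its own logic all it wants; an unapproved `curl` to an external host is still denied at the runtime boundary.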

Hello world. We just raised $57,000,000 to operationalize your enterprise security program with AI. Backed by Accel, Cyberstarts, and Boldstart Ventures.


Agent red-pilling your company has to come from both ends. CEO drives from the top. Organic usage builds from the bottom. Appoint one red-pilled person per department. If they resist, they're not long for this world. More on the "sandwich model" in this week's What's 🔥 whatshotit.vc/p/whats-in-ent…

Agentic software engineering adoption is on fire at @Uber. 1,800 code changes per week are now written entirely by Uber's internal background coding agent, and 95% of our engineers now use AI every month across all the tools we track. This is a real reset moment for engineering; it's one of the most exciting times to lead.

This shift requires builders to be curious and hands-on. I'm incredibly lucky to be surrounded by a team that's doing exactly that. The best part is that the strongest adoption isn't being pushed top-down from leadership announcements; it's coming from engineers who are quietly experimenting, quietly shipping, and quietly pushing things forward. I love spending time with those engineers because there's no substitute for being close to the work. Over the last few months, we leaned in hard, and the results have been phenomenal.

The bigger shift: going agentic. 84% of AI users are now working with agent-style workflows, not just tab completion. Claude Code usage nearly doubled in 2 months (32% → 63%), while IDE-based tools have largely plateaued. Engineers are moving from accepting suggestions to delegating tasks. Even within traditional IDEs, ~70% of committed code is now AI-generated.

Background agents are writing code autonomously. Our internal background coding agent went from <1% of all code changes to 8% in just a few months. There is zero human authoring: engineers review and approve, but the code is written entirely by AI agents.

The role of the engineer is shifting, from writing every line to architecting systems and reviewing AI-generated code. More to come from the @UberEng team in the coming days.

Ready to deploy AI agents? NVIDIA NemoClaw simplifies running @openclaw always-on assistants with a single command.
🦞 Deploy claws more safely
✨ Run any coding agent
🌍 Deploy anywhere
Try now with a free NVIDIA Brev Launchable 🔗 nvidia.com/nemoclaw


Nvidia will spend a total of $26 billion over the next five years building the world's best open source models, per Wired.
