Ed Sim

10.3K posts

@edsim

@boldstartvc partnering from Inception with bold technical founders building the autonomous enterprise, weekly newsletter: What's 🔥 IT/VC 👇🏼

Miami · Joined January 2009
3.5K Following · 19.9K Followers
Pinned Tweet
Ed Sim@edsim·
🔥 up to announce @boldstartvc Fund VII $250M to back bold technical founders building the autonomous enterprise. From Inception. Before the world believes. It always starts with an idea that feels insane… until it isn’t. 🎥👇🧵
100
37
466
81.6K
Ed Sim@edsim·
Congrats @Boldstartvc portfolio companies @clay and @crewAIInc on being selected for The Enterprise Tech 30, a survey of top venture capitalists about which startups they think are the most promising right now.
0
1
12
565
Ed Sim@edsim·
@yrechtman I will try to make it! Sounds like an awesome and timely event
0
0
0
19
yoni rechtman@yrechtman·
@edsim man if only we could talk about this on stage together in NY in a few weeks. much to discuss!
2
0
3
102
Gergely Orosz@GergelyOrosz·
Ironic how Anthropic sells Claude Code security reviews positioned as something v powerful (costing $15-25 per PR review), and being clear they use it on all PRs... then leaking all of Claude Code's code thanks to publishing their sourcemap. AI won't save you from yourself!
87
208
3.4K
129.4K
Ed Sim retweeted
Manoj Nair@mnair1·
npm axios, 300 million downloads a week, was targeted by a sophisticated supply chain attack! Your package manager is now an attack surface.

The axios compromise should end any remaining illusion that "trusted package" means safe. This was not typo-squatting. Not a fake clone. Not some dusty transitive dependency nobody noticed. This was the real package. On the real distribution path. From a compromised maintainer account (jasonsaayman's npm creds got owned).

Two malicious versions dropped (axios@1.14.1 and axios@0.30.4). They quietly pulled in a brand-new package (plain-crypto-js@4.2.1) that ran an obfuscated postinstall and dropped a cross-platform RAT. Windows, macOS, Linux — all covered. Self-cleaning malware. Classic.

A few hours was enough. One poisoned release. One CI run. One developer laptop. One production build. Game over.

**Full technical breakdown + IOCs from our @liran_tal and @snyksec research team here** → snyk.io/blog/axios-npm… (SNYK-JS-AXIOS-15850650 + SNYK-JS-PLAINCRYPTOJS-15850652)

Feross (@feross) nailed it in his thread this morning: "textbook supply chain installer malware." Karpathy (@karpathy) posted his own near-miss (googleworkspace/cli resolved clean for him… this time) and dropped the bigger truth: the defaults of npm, pip, etc. have to change so one temporary maintainer compromise doesn't pwn the planet at random scale.

**What practitioners (and leaders) should actually do right now:**
• Pin exact versions. No ^. No latest.
• Enforce lockfiles in CI and never run plain `npm install`.
• Use `npm ci --ignore-scripts` (or Bun's default behavior).
• Scan for malicious behavior, not just known CVEs.
• If you installed in the window — assume breach and rotate everything.

The real shift here is bigger than axios. Software now moves at machine speed. So does trust propagation. So does blast radius. If your environment still allows loose semver, non-deterministic installs, and postinstall scripts by default… you are not moving fast. You are scaling risk.

What is the strongest control you have today against this class of supply chain attack? #SupplyChainSecurity #npm #OpenSourceSecurity #DevSecOps #Cybersecurity
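The "pin exact versions" advice above can be checked mechanically. A minimal sketch, not from the thread: the `loose_deps` helper below is hypothetical and simply flags any package.json dependency range that lets the package manager resolve to a newer version than the one you reviewed.

```python
import json

def loose_deps(package_json_text):
    """Return {name: range} for every dependency not pinned to an exact version."""
    manifest = json.loads(package_json_text)
    flagged = {}
    for section in ("dependencies", "devDependencies"):
        for name, rng in manifest.get(section, {}).items():
            # ^/~/>/< ranges, wildcards, and dist-tags like "latest" all float,
            # so a compromised release can be pulled in without any diff on your side
            if rng.startswith(("^", "~", ">", "<")) or rng in ("*", "latest"):
                flagged[name] = rng
    return flagged

example = '{"dependencies": {"axios": "^1.14.0", "left-pad": "1.3.0"}}'
print(loose_deps(example))  # {'axios': '^1.14.0'}
```

Running this in CI alongside `npm ci --ignore-scripts` turns "no ^, no latest" from a convention into an enforced gate.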
0
2
11
1.9K
Sarah Wang@sarahdingwang·
Pavan Ravitapi at @cursor_ai raised a great point at dinner last week: Reusable artifacts like skills, sub-agents, and custom rules are how context will diffuse through the AI-enabled firm. ~1% of engineers are making them, and discovery is an unsolved problem, but they benefit the whole org when shared internally. Cracking artifact authorship will be a big unlock.
25
15
323
29.5K
ellen livia ᯅ 🇺🇸🇮🇩
This week in security:
- LiteLLM, backdoored release exfiltrating secrets
- Axios, supply chain malware via dependency
- Railway, CDN caching leaked user data
- OpenAI Codex, command injection via GitHub branch names
- Mercor, 1TB data leak
- Delve, data leak + compliance risk

infra is the attack surface now
81
497
3.2K
172.3K
Feross@feross·
AI is playing a role in two ways:

1. Far more code is being written (1.5-2x by some estimates) and far more people are vibe coding without reviewing what their agents install. Every unreviewed dependency is an attack surface.

2. Attackers have woken up. We saw the first NPM worm last year. The recent TeamPCP attacks (against Trivy and LiteLLM) have stolen a massive number of credentials that most teams haven't rotated yet. We'll be dealing with the long tail of these compromises for 6-12 months.

Not that developers were good at reviewing dependencies before. But AI has mass-produced the exact behavior attackers exploit.
Amjad Masad@amasad

@feross Is there a reason why supply chain attacks are seemingly on the rise? Is AI playing a role?

27
38
318
49.5K
Thomas Fanning@Sunny_and_72·
@edsim fantastic recap ed. Point 11) was one of my shared takeaways - do you have the sense (or concern) that CISOs are being pushed too far towards enablement, such that we are increasing the likelihood of a large scale agentic attack?
1
0
2
113
Ed Sim retweeted
Noah@NoahKingJr·
People using AI for automation vs people using AI agents
162
523
6.9K
272.8K
Ed Sim@edsim·
If this is true, enterprises are going to look at that Anthropic bill and start getting their open source models ready.

Frontier intelligence too expensive to meter is the best thing that ever happened to open-weight models. The constellation of models isn't optional anymore. It's economic survival.

More models, more scaffolding needed, more startups needed to deliver around it
Andrew Curran@AndrewCurran_

Three weeks ago there were rumors that one of the labs had completed its largest ever successful training run, and that the model that emerged from it performed far above both internal expectations and what people assumed the scaling laws would predict. At the time these were only rumors, and no lab was attached to them. But in light of what we now know about Mythos, they look more credible, and the lab was probably Anthropic.

Around the same time there were also rumors that one of the frontier labs had made an architectural breakthrough. If you are in enough group chats, you hear claims like this constantly, and most turn out to be nothing. But if Anthropic found that training above a certain scale, or in a certain way at that scale, produces capabilities that sit far above the prior trendline, then that is an architectural breakthrough.

I think the leaked blog post was real, but still a draft. Mythos and Capybara were both candidate names for the new tier, though Mythos may now have enough mindshare that they end up keeping it. The specific rumor in early March was that the run produced a model roughly twice as performant as expected. That remains unconfirmed. What is confirmed is that Anthropic told Fortune the new model is a 'step change'; a sudden 2x would certainly fit the definition. We will find out in April how much of this is true.

My own view is that the broad shape of this is correct even if some of the numbers are wrong. And if it is substantially accurate, then it also casts OpenAI's recent restructuring in a new light. If very large training runs are about to become essential to staying in the game, then a lot of their recent decisions, like dropping Sora, make even more sense strategically.

For the public, this would mean the best models in the world are about to become much more expensive to serve, and therefore much more expensive to use. That will put pressure on rate limits, pricing, and subscription plans that are already subsidized to some unknown degree. Instead of becoming too cheap to meter, frontier intelligence may be about to become too expensive for most of humanity to afford.

Second-order effects: compute, memory, and energy are about to become much more important than they already are. In the blog they describe the new model as not just an improvement, but having 'dramatically higher scores' than Opus 4.6 in coding and reasoning, and as being 'far ahead' of any other current models. If this is the new reality, then scale is about to become king in a whole new way. It would also mean, as usual, that Jensen wins again.

26
12
163
58.4K
Satya Nadella@satyanadella·
Introducing Critique, a new multi-model deep research system in M365 Copilot. You can use multiple models together to generate optimal responses and reports.
420
508
4.1K
1.3M
Kaleb Grabert@grabert_kaleb·
@edsim I also think it's important to note that, as talented as these companies are, vertical knowledge does matter, and that's why OAI and Anthropic won't win everything.
English
1
0
1
67
Ed Sim@edsim·
Claude and GPT will kill the cybersecurity industry...oh crap, wait a second...

TeamPCP is on a tear. Backdoored Trivy → stole creds across GitHub Actions, npm, PyPI, Docker Hub, OpenVSX → and now allegedly Databricks.

This is exactly why Claude or GPT won't magically catch these attacks. Valid credentials, legitimate hashes, trusted maintainers. Stochastic scanning misses what looks normal. Fully autonomous AI agents still aren't ready to fly solo. Sharp humans + agent swarms working together. Hybrid defense wins.

Databricks customers: rotate those AWS creds yesterday
International Cyber Digest@IntCyberDigest

🚨‼️ BREAKING: Databricks allegedly compromised in a TeamPCP supply chain attack. Databricks is the leading cloud-based data analytics platform: used by organizations worldwide to manage massive datasets. We notified them last week. They scaled up to investigate. We haven't heard back since.

5
1
16
8.8K
Ed Sim@edsim·
@MetacriticCap Go for it. Multi model means you choose what needs SOTA and what needs good enough
0
0
0
227
Ed Sim@edsim·
@LyraInTheFlesh 💯 the most sophisticated already are. The open source will get better. There will be opportunities for startups to build software stacks to make this easier to deploy and manage as well.
0
0
3
365
Lyra Intheflesh@LyraInTheFlesh·
@edsim I don't see a way around this. Large organizations are going to have to figure out what's "good enough" for a model to do the work 24/7, and if there's an Open Source option that they can run on their own hardware, they will.
1
0
4
547
Gil Dibner@gdibner·
@edsim We've been investing on this thesis all along.
1
0
4
691
Ed Sim@edsim·
Humans are about to get overloaded with exception handling. Something that was discussed ad nauseam at RSA. Exceptions handled by a department leader get escalated to IT, then to security. We're going to need a lot of folks to keep up if the default for mission-critical decisions or security permissions requires a human
0
0
8
461
signüll@signulll·
the current moment is like the early web era where everything was being shoehorned into print metaphors except now we’re shoehorning ai into human interaction paradigms. i.e. every software company is implicitly a workflow company, & their entire information architecture was optimized for human cognitive constraints. future software will change radically where ~80% of the surface area gets rebuilt from first principles as an ai execution layer which is structured, low latency, etc. & the remaining ~20% will become a human monitoring console which is exception handling, supervision, & potential override.
57
31
529
33.4K