

Alan Carroll
@alanbuilds
Carpenter turned AI builder. Building agent economy infrastructure on Nostr + Lightning, other tools that make cooperation outcompete defection. @PercivalLabs



This is something a lot of people also wonder when they see people on @x showing off how they're running 10 or 20 or more agents at a time. I think the max number most humans can manage is 3-5 at a time for most actual "real" engineering work.

Sometimes I have more than that running, but it's not for new features/products; it's usually just running low-stakes/secondary tasks (bugs, research, marketing, internal tools), skipping or automating QA, and using heavy orchestration, batching, or agent pipelines so that I'm not reviewing everything in real time.

IMO the real argument for parallel agents is team-wide, company-wide, and org-wide usage: a few people working on new docs PRs, a few on design updates, a few fixing bugs, a few adding new features, etc., all happening in parallel. This is where we see the bulk of the work happening with Devin and the biggest value add for running tens or dozens or even hundreds of agents in parallel (often on the same codebase).








Paperclip has been live for 3 weeks. A roofing company is already using it to close more deals. Here's how:

They built agents that
- pull satellite imagery
- cross-reference hail damage data
- find neighborhoods likely to have insurance coverage
then feed these warm leads straight to their sales team.

They're not a tech company. They're a blue-collar business running AI agents. And they're not alone:
- A dentist is using it to manage his foundation.
- A security firm ran automated audits on Paperclip itself.
- Marketing agencies are replacing manual workflows with agents.

3 weeks. Roofers. Dentists. Security firms. And they're just getting started.





Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. And the credentials stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've become increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
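The contagion dynamic above can be sketched with a toy dependency graph: a poisoned package compromises every package that reaches it, however indirectly. This is a minimal illustration, not real metadata — the graph below is hypothetical apart from the litellm/dspy edge mentioned in the post.

```python
# Toy model of a PyPI-style dependency graph: each package maps to its
# direct dependencies. Only the dspy -> litellm edge reflects the post;
# the other packages and edges are hypothetical.
DEPS = {
    "my-app": ["dspy"],
    "dspy": ["litellm"],
    "litellm": ["httpx", "tokenizers"],
    "httpx": [],
    "tokenizers": [],
}

def transitive_deps(pkg, graph):
    """Every package pulled in (directly or transitively) by installing pkg."""
    seen = set()
    stack = [pkg]
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def blast_radius(poisoned, graph):
    """Every package whose install would also run the poisoned package's code."""
    return {p for p in graph if poisoned in transitive_deps(p, graph)}

print(transitive_deps("my-app", DEPS))   # installing my-app pulls in all four
print(blast_radius("litellm", DEPS))     # poisoning litellm hits dspy and my-app
```

The point of `blast_radius` is that the victim never typed `pip install litellm` — being anywhere in the tree is enough, which is why transitive dependencies dominate the risk.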



















