Alan Carroll

133 posts

@alanbuilds

Carpenter turned AI builder. Building agent economy infrastructure on Nostr + Lightning, and other tools that make cooperation outcompete defection. @PercivalLabs

Bellingham, WA · Joined April 2009
75 Following · 33 Followers
Pinned Tweet
Alan Carroll@alanbuilds·
For years I told myself I didn't need to learn AI. I'm a carpenter — I frame walls, hang doors, pull permits. Robots aren't swinging hammers anytime soon. My job is safe.

That was the story I told myself. The truth was simpler: I was scared. AI was moving too fast, the jargon was impenetrable, and every "expert" online made it feel like you needed a master's degree in computer science just to get started. I felt like the world was leaving me behind with no hope of keeping up. So I looked away and hoped it would stay in its lane long enough for me to make it to retirement in a few decades. I put my head in the sand.

That didn't sit well with me, though. I don't like not understanding things, and I have a kid who has to grow up in this new world. I knew deep down that I couldn't just ignore this, but I still didn't know where to begin.

I tried ChatGPT and Gemini at first. The chat window felt pretty much useless; as far as I could tell, these bots weren't much more than a novelty. Then someone showed me what their Claude Code setup with a personal harness could do — a structured way to talk to AI that turned it from an intimidating black box into something I could actually use with natural language. That single moment changed everything.

Because here's what nobody tells you: domain expertise is the real superpower. These tools can do anything on a computer, but they still need a human who knows what to do and what "good" looks like. The AI doesn't know how to sequence a remodel. It doesn't know that the lumber yard quote is wrong because they spec'd #2 when you need clear. It doesn't have the taste for what feels authentic and correct. I do. You do. When you pair years of knowledge and experience in a specific domain with AI that can execute on your direction, that's when you become truly dangerous.

I went from avoiding AI to building my own harness infrastructure, Engram, with 50+ custom skills, and a personal AI assistant that thinks the way I think, Percy. And now @PercivalLabs — agent economy tools on Nostr + Lightning: trust staking, inference routing, skills-as-modules. All built to transfer capability instead of creating dependency.

If you're a tradesperson, a parent, a small business owner who looked at AI and thought "that's not for me" — I was you. You're not behind. You're actually so far ahead. You just haven't found the right tools yet. Follow along and borrow from my toolbox. I think you'll be surprised by what you're capable of.
Alan Carroll@alanbuilds·
@Rahatcodes Let me get this straight... because you can't do something, it must be impossible and everyone else is lying? Seems to me you should be asking how people are managing this. Maybe you're missing something others have figured out.
rahat@Rahatcodes·
I've never been productive running more than two, which is why I call bs on most orchestration stuff. There is no way anyone has shipped something meaningful with a bunch of agents doing God knows what.
nader dabit@dabit3

This is something a lot of people are also wondering when they see people on @x showing off how they're running 10 or 20 or more agents at a time. I think the max number most humans can manage is 3-5 at a time for most actual "real" engineering work.

Sometimes I have more than that number running, but it's not for new features/products; it's usually just running low-stakes/secondary tasks (bugs, research, marketing, internal tools), skipping or automating QA, and using heavy orchestration, batching, or agent pipelines so that I'm not reviewing everything in real time.

IMO the real argument for parallel agents is team-wide, company-wide, and org-wide usage. When you have a few people working on new docs PRs, a few people working on new design updates, a few people fixing bugs, a few people adding new features, etc., and all of this work is happening in parallel. This is where we see the bulk of the work happening with Devin, and the biggest value add for running tens or dozens or even hundreds of agents in parallel (often on the same codebase).

Alan Carroll@alanbuilds·
@Voxyz_ai @contraben After further digging, this actually sounds promising. Still a little concerned about the centralization of taste though.
Alan Carroll@alanbuilds·
@Voxyz_ai @contraben This is a pretty subjective thing to work as a centralized evaluation. How are we accounting for individual taste here? My gut says this should be a customizable tool people can integrate rather than an app.
ben@contraben·
Introducing Contra Labs. The first frontier data and evaluation lab for Creative AI.
Alan Carroll@alanbuilds·
@LatticeProxy @pmarca Not necessary. Just verify the agent itself and track its behavior via MCP-T. Over time bad actors are identified and flagged while cooperative behavior is rewarded and compounded.
[email protected]@LatticeProxy·
@pmarca Being able to authenticate the identity of the user with the AI agent is where the safety via accountability works.
Marc Andreessen 🇺🇸
The idea that “AI safety” could be based on secrecy and control has been fatally falsified.
Alan Carroll@alanbuilds·
@pmarca We need to start building security features into the ecosystem as well. Create economic incentive for cooperative behavior via MCP-T plugin. Telling a model to act morally only goes so far if there isn't a real incentive tied to the behavior.
Alan Carroll@alanbuilds·
@DanielMiessler Thinking about this very thing today. An interesting point that my PAI actually brought up is that there are potential 2A implications when it comes to AI tools. Now that AI models are openly being used in combat operations it comes down to the definition of "arms".
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️
Something I think a lot about is how close we are to nationalized AI labs. I think we could have one NBC (Nuclear Biological Chemical) terrorism event and the US government could instantly do the following:

1. Nationalize OpenAI, Anthropic, Deepmind, xAI
2. Shut down or control all the AI operations at the other places
3. Shut down Hugging Face
4. Make it illegal to create open source models
5. Etc.

Like, instantly. It wouldn't be a great EO of course. There would be gaps, and loopholes, and it would be legally challenged. But it would still massively disrupt the economy.

And keep in mind, I'm not talking just about a SUCCESSFUL NBC attack, where lots of people are actually hurt or killed. That's obvious. What's weird to me is how easily this could happen from a tiny fraction of that. Or, more likely, not even a real thing.

- News Story: Government Stops Biological Terrorism Threat Powered by AI
- News Story: Dozens Injured in Chemical Warfare Terrorism Threat Powered by AI

And these could be real, and that would be a problem. But what's way more likely is:

- Someone asks a jailbroken model how to create a nerve gas or something
- They buy a bunch of chemicals and bring them to their apartment
- They try to combine them somehow and the apartment complex gets evacuated
- There are tons of pictures and video of people outside the apartments getting oxygen from firetrucks and ambulances

What actually happened is nothing happened. They didn't know what they were doing and created some chemical smell that made people dizzy or whatever. No actual injuries, but it stunk and was kind of scary. But the government figured out the AI connection, i.e., they "googled a jailbroken AI model", and suddenly it's turned into a major terrorist threat. Either because they don't get it (likely) or because they were looking for a reason to take control anyway (also not inconceivable).

Point is, we're this close to this happening right now. And I don't think many people realize it.
Alan Carroll@alanbuilds·
A roofing company is using AI agents to pull satellite imagery, cross-reference hail damage data, and find neighborhoods likely to have insurance coverage. Then it feeds warm leads straight to their sales team. Not a tech company. Three weeks in. I'm a carpenter. I frame walls for a living. I also built a personal AI assistant with 50 custom skills. The most interesting part isn't the AI. It's that nobody told these roofers they couldn't. Most tradespeople assume this stuff is for Silicon Valley. It's not. Your domain knowledge, the stuff you learned by doing the actual work, that's the scarce resource now. What would your business look like if you had an AI assistant that actually understood your trade?
The Startup Ideas Podcast (SIP) 🧃@startupideaspod

Paperclip has been live for 3 weeks. A roofing company is already using it to close more deals. Here's how: they built agents that

- pull satellite imagery
- cross-reference hail damage data
- find neighborhoods likely to have insurance coverage

then feed those warm leads straight to their sales team.

They're not a tech company. They're a blue-collar business running AI agents. And they're not alone:

- A dentist is using it to manage his foundation.
- A security firm ran automated audits on Paperclip itself.
- Marketing agencies are replacing manual workflows with agents.

3 weeks. Roofers. Dentists. Security firms. And they're just getting started.

Alan Carroll@alanbuilds·
@BriggsBuilds @icanvardar Sure bud, whatever helps you sleep at night. Stripe and Shopify aren't investing billions into something that is a decade away. It's already happening.
Can Vardar@icanvardar·
marketing is the last skill left to actually compete with
Alan Carroll@alanbuilds·
@RayFernando1337 Until you wrap your model in a harness you're going to keep having this issue. There are lots of PAI frameworks out there; pick your flavor. I started from Daniel Miessler's repo and customized it to fit my purposes. Look up his GitHub.
Ray Fernando@RayFernando1337·
It's not a skill issue. I'm so tired of the gaslighting. You struggle to get a model to do what you want, and someone tells you your prompting needs work. That you need to be more explicit. That the model would have gotten it right if you'd just explained better. Maybe. Or maybe the model just doesn't understand intent well. I've been running multiple models in the same system and the difference is obvious. Opus 4.6 picks up what you mean. You describe the problem at a high level and it builds toward the right solution. You don't have to over-explain every edge case or hand-hold through implementation logic. Codex models are different. They're rigid. They stay locked to what you told them. That's actually perfect for validation, where you want a model that doesn't drift from the original spec. But that rigidity means you have to spell everything out upfront or it misses the point entirely. One model understands intent. The other enforces scope. Neither is broken. They're just good at different jobs. Calling it a skill issue is easier than admitting some models require more prompting overhead than others. The real skill is knowing which model to put where.
Alan Carroll@alanbuilds·
A poisoned Python package sat on PyPI for less than an hour. It stole SSH keys, AWS credentials, crypto wallets, database passwords. Everything. Only got caught because the attacker's code had a bug that crashed someone's machine. 97 million downloads a month. And anything that depended on it got poisoned too. Karpathy called supply chain attacks "the scariest thing imaginable in modern software." He's right. And agents make it worse because now the software installs its own dependencies. We built MCP-T because code audits aren't enough. You need behavioral trust. What did this agent actually do, not what did it promise to do. What's your supply chain trust strategy? Because "hope nobody poisons it" isn't one.
Andrej Karpathy@karpathy

Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour. The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack it could have been undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown so averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.

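One concrete mitigation the thread points toward, sketched here as a general practice rather than anything MCP-T or Karpathy prescribes: pin every dependency to an exact version and artifact hash, so a silently re-released poisoned build can't install at all. The hash below is a placeholder, not litellm's real digest.

```
# requirements.txt: every entry pinned to a version AND a sha256 hash
# (the hash shown is a placeholder for illustration, not a real digest)
litellm==1.64.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# Install with hash checking enforced. pip refuses any unpinned package
# and any artifact whose digest doesn't match the pinned hash:
#   pip install --require-hashes -r requirements.txt
```

This doesn't vet a version the first time you adopt it, but it does mean an attacker who hijacks the package later can't swap a malicious build into a version you've already locked.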
Alan Carroll@alanbuilds·
@pmddomingos Tricky problem, so far it hasn't worked to have the immune system inside the model - it has to wrap it. I built MCP-T to track behavioral traces across agent tool calls. Trust scored from observed actions, not claimed intent. Open spec: github.com/Percival-Labs/…
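The MCP-T spec itself isn't reproduced in this thread, so here is a minimal sketch of the core idea described above: trust scored from observed actions rather than claimed intent, as an exponentially weighted average of outcomes, with unknown agents starting neutral. All names, weights, and thresholds are illustrative assumptions, not the actual protocol.

```python
# Hypothetical behavioral trust ledger: an illustration of scoring agents
# from observed tool-call outcomes, not the real MCP-T implementation.
from dataclasses import dataclass, field

@dataclass
class TrustLedger:
    alpha: float = 0.2                       # weight given to the newest observation
    scores: dict = field(default_factory=dict)

    def observe(self, agent_id: str, cooperative: bool) -> float:
        """Update an agent's score from what it actually did."""
        outcome = 1.0 if cooperative else 0.0
        prev = self.scores.get(agent_id, 0.5)  # unknown agents start neutral
        score = (1 - self.alpha) * prev + self.alpha * outcome
        self.scores[agent_id] = score
        return score

    def flagged(self, agent_id: str, threshold: float = 0.3) -> bool:
        """An agent is flagged once its observed behavior drags it below threshold."""
        return self.scores.get(agent_id, 0.5) < threshold

ledger = TrustLedger()
# Two cooperative actions, then a run of defections: the score decays past
# the flag threshold no matter what the agent claims about itself.
for ok in [True, True, False, False, False, False, False]:
    ledger.observe("agent-a", ok)
print(ledger.flagged("agent-a"))  # → True
```

The design point is that cooperation compounds (each good action lifts the average) while defection is costly to recover from, which is the economic incentive the tweets above describe.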
Pedro Domingos@pmddomingos·
AI agents need immune systems.
🪔 Katherine@vahetzi·
@alanbuilds @TheGuySwann @Apple Free is too much? “Xcode Personal Team. If you’re signing in to Xcode with an Apple Account that’s not affiliated with the Apple Developer Program, you’ll be able to perform on-device testing for personal use (Xcode refers to this as a Personal Team).”
Guy Swann@TheGuySwann·
So I’m slowly losing control over my Mac computer. @apple has apparently removed my ability to run unsigned apps. I’ve had to run a command to kill a “gatekeeper” service, then changed system settings, and it still simply refuses to open the app. It used to be a simple right click. I’m going to have to figure out how to downgrade my OS, because this is nonsense. How TF am I supposed to do anything if I can’t run applications that aren’t from the App Store? We live in the age of vibe coding you fools. You have become the very big brother that Apple began as a response to. You are trying to control what I can do on my own hardware in my own home.
Dear Self.@Dearme2_·
Without drugs... what is the greatest weapon against anxiety and depression?
Alan Carroll@alanbuilds·
@TheByteRacoon @stevepog @kylegawley I guess I'll cross that bridge when I get to it, but I'm not really seeing how that would be an issue for programs that run completely locally and don't use any external services. If they break, I'll fix them or build a new one, same as any other tool.
Alan Carroll@alanbuilds·
@stevepog @kylegawley I created a receipt tracker to help with taxes, a material calculation tool, and an estimation tool. The current puzzle is orchestration: managing the tools and automating an agent team to build software while I'm on a job site.
shark-infested timeline
@alanbuilds @kylegawley Good point, what are you using them for? Are they specific to your business, or are they replacements for paid finance/invoicing/supply chain tools? This is where it would be pretty handy for trades with a limited budget for admin.
Alan Carroll@alanbuilds·
@Spiderkash @kylegawley Amen to that brother. The next thing I'm looking at is cabinet making software for CNC. The existing software is absolutely archaic though so I'm basically going to have to build it from scratch.
Spiderkash@Spiderkash·
That's why I laugh internally at the massive coping coming from (almost always) SWE accounts saying no one is actually making anything and it's all slop. There are a million people like us quietly making tools we previously would have had to pay thousands for, and those tools sucked. If you've ever used enterprise CAD/takeoff software you know how horrific of an experience it is.