Andy Singleton

1.5K posts

@andysingleton

Building software, DeFi, and AI. Mountain bike crash tester. All of these words are written by a human.

Boston, MA · Joined October 2007
770 Following · 914 Followers
Andy Singleton@andysingleton·
Peter Girnus 🦅@gothburz

I told you I pivoted to AI. It's going extremely well. I raised $4 million. The pitch deck had 22 slides. The word "AI" appeared on every single one. Fourteen slides had the word "AI" in the title. Three slides were just the letters "AI" in large font over a stock photo of a brain with circuits on it. One slide said "THE FUTURE IS NOW." That was the financial projections slide. There were no financial projections. There was the brain.

My startup is called Synthetica. We are an "AI-native intelligence platform." That means we built a website that calls the ChatGPT API and displays the response in our own font. The font is called Satoshi. I chose it because it sounded like crypto and technology at the same time. Two birds. Neither of them real. We have a whitepaper. It's the same whitepaper from my crypto startup with "blockchain" replaced by "neural network." Find and replace. That's our IP.

Our product costs $29 a month. ChatGPT costs $20 a month. The difference is our logo and a loading screen that says "Thinking deeply..." while it waits for the API to respond. It's the same API. The customer is paying $9 a month for the loading screen. I call that the moat.

My technical co-founder built the entire product in a weekend. We spent the next eleven months "iterating on the brand." That means we changed the color of the loading screen four times. He quit in month seven. I replaced him with a contractor in the Philippines who charges $15 an hour. I told investors we have "a globally distributed engineering team." That's two people. One of them is me. I don't code.

We raised the $4 million in February. The lead investor asked what our moat was. I said "proprietary AI infrastructure." He asked what that meant. I said "we've built a custom orchestration layer on top of foundation models." He asked if that was an API key. I said "it's significantly more sophisticated than that." It's an API key. He invested $2 million. His fund has "AI" in its name.

It was a crypto fund until 2023. They changed the name. They changed the website. They did not change the partners or the strategy. The strategy is to invest in things they don't understand and exit before anyone notices. I respect that. It's the same strategy as mine. The partners all have the same LinkedIn arc. Crypto evangelist from 2020 to 2022. "Building in stealth" from 2022 to 2023. AI visionary since January 2024. The conviction was always there. The noun changed.

OpenAI just raised at a $730 billion valuation. That's more than the GDP of Switzerland. Anthropic is at $380 billion. In January and February alone, $220 billion went into AI companies. 83% of all venture capital in February went to three companies. Three. The other 17% went to four thousand startups like mine. API wrappers with pitch decks. Loading screens with brand identities. $29-a-month products built on $20-a-month products. A company with fewer than 100 employees is now worth $12 billion. I don't know what they do. Neither does the company. But they have a whitepaper. And the whitepaper has a diagram. And the diagram has arrows. Arrows mean progress. We're early.

I launched a token. SYN. The Synthetica utility token. It powers the "decentralized AI marketplace" we haven't built yet. Someone asked what the token does. I said it "facilitates value exchange within the Synthetica ecosystem." He asked what that meant in plain English. I said "you can buy it and it might go up." He bought $12,000 worth. The total market cap of SYN is $340,000. I own 40% of the supply. My Discord owns another 30%. My Discord has 1,200 members. Eight hundred of them are bots I bought on Fiverr.

We have a Telegram too. The Telegram has a price bot. The price bot posts the SYN price every hour. The price has not changed in three weeks. Nobody has traded it. The bot keeps posting. That's community engagement. I used the same Discord. I just changed the banner. The crypto community became the AI community overnight. Nobody noticed. The conversations are identical. Just replace "to the moon" with "to AGI."

I also run a Polymarket bot. An AI-powered prediction market trading agent. It bets on real-world events using an algorithm I don't understand, funded by money I don't have, on a platform a U.S. Senator is trying to ban. Fourteen of Polymarket's top twenty traders are bots. Bots made $40 million last year exploiting pricing gaps. One bot made $115,000 in a single week. Mine lost $4,200 in eleven days. But I made a course about it. "AI-Powered Prediction Markets: The $115K Playbook." It costs $497. The playbook is a PDF. The PDF has screenshots of someone else's bot. I added my logo. Thirty-one people bought it. I made more from the course than the bot made from the market. That's the real alpha.

The CFTC put out a warning that said "fraudsters are exploiting public interest in artificial intelligence to tout automated trading algorithms" that "promise unreasonably high or guaranteed returns." I screenshotted that too. I posted it in my Discord. I said "they're trying to shut us out." That got forty-seven rocket emojis. The bots sent thirty of them.

A veteran VC said this week that AI valuations are "overheated." He said "buy high, sell higher only works in a bubble." I screenshotted that quote. I posted it in my Discord. I said "this is what they said about the internet." They also said it about the metaverse. They were right about the metaverse. I was there. I owned eleven properties. They're worth $6,400 now. Combined. My Bored Ape went from $189,000 to $14,000. The Gucci store is still empty. My beachfront villa is a mobile app.

I learned a lot from that experience. I learned that if something goes to zero, you should pivot to the next thing and do it again faster. The metaverse taught me timing. Crypto taught me language. AI taught me that the language doesn't have to mean anything as long as the timing is right. We're early.

I hosted a demo day. Fourteen investors came. I showed them the product. I typed a question into Synthetica. The loading screen said "Thinking deeply..." for eight seconds. Then it gave the same answer ChatGPT gives. One investor asked "is this just ChatGPT?" I said "we leverage GPT-4 as one component of our multi-model reasoning stack." He asked what the other components were. I said "proprietary." He asked to see them. I said they were "in stealth." Stealth means they don't exist. He invested $400,000.

My mom called. She asked how the AI company was going. I said "we just closed a $4 million round." She said "is this like the metaverse thing?" I said "this is completely different." She said "you said that about the NFTs." I said "the NFTs were digital art. This is artificial intelligence." She said "is the monkey still your profile picture?" I changed the subject. She asked if I was eating enough. I am not eating enough. I spent my grocery budget on GPU credits. I don't know what a GPU does. But you need them for AI. Everyone says you need them. I have $7,000 in GPU credits on a platform I've logged into twice. That's infrastructure.

My accountant called. The same one. He asked about the startup. I said "we're pre-revenue." He said "you've been pre-revenue at every company you've ever started." I said "this time we have product-market fit." He asked what our product was. I said "an AI-native intelligence platform." He asked what it did. I said "it thinks deeply." He said "so it's a loading screen." I hung up. He's not a visionary. We're early.

I know we're early because I've been early my entire life. I was early to the metaverse. I was early to NFTs. I was early to the DAO. I was early to the token. Every single time, I was early. I have never once been on time. But that's the thing about being early. You don't have to be right. You just have to be first. And then when it collapses, you say you were "too early." And when the next thing comes, you say "this time is different." This time is different.

The AI bubble is not a bubble. It's a paradigm shift. A fundamental restructuring of how value is created and captured in the digital economy. I read that in a pitch deck. It might have been mine. They all look the same.

I have a folder on my desktop called "Pitch Decks." There are forty-seven files in it. I opened one from 2021. It said "THE METAVERSE IS A PARADIGM SHIFT." I opened one from 2024. It said "AI IS A PARADIGM SHIFT." Same font. Same brain. Same slide. I didn't delete the metaverse one. I might need it again. We're early. As long as the graph goes up and to the right.

Andy Singleton@andysingleton·
@levie @FrankieIsLost Yes, progress requires failed attempts. We can already see that people manage this “waste” rationally. When they get an unlimited token plan, they max out on parallel runs, and when the price goes up they cut back.
Aaron Levie@levie·
The issue we’re going to have though is our software development practices will change in a way that can no longer disambiguate the good token usage from bad. We used to plan and plan before writing code, now we’ll test lots of ideas out. Which inherently means wasting tokens is built into our workflows. No going back.
frankie@FrankieIsLost·
LLMs are like heroin for SWEs, they’ll consume tokens indiscriminately if someone else is paying the bill. the problem is that most of this activity generates zero marginal revenue
Andy Singleton@andysingleton·
Singleton's Law: the more successful you are, the more technical debt you have.

Successful companies are growing their sales. While this is happening, it does not make sense to slow down production and reduce technical debt. They are maximizing speed and, as a side effect, maximizing technical debt. Your engineers will complain that you are just storing up problems and increasing the cost of future fixes. GREAT! Later you will have a lot of money, and you will be able to pay the extra costs.

This is the situation that we are in now with AI coders. They are expanding their domain at high speed and, as a side effect, adding massive amounts of technical debt, which I will define as code and systems that are good enough for eager buyers, but not good enough to maintain. GREAT! Next year the AI will go through the surviving products and fix them.
Mark Worrall@infinitehumanai

Reminds me of Peter Naur's classic 1985 essay "Programming as Theory Building" which argues that a program is not its source code. A program is a shared mental construct (he uses the word theory) that lives in the minds of the people who work on it. If you lose the people, you lose the program. The code is merely a written representation of the program, and it's lossy, so you can't reconstruct a program from its code.

If you think of total software debt as technical debt + cognitive debt, then previously, we mostly had technical debt. Now with AI we have both. Previously, when you built something, you accumulated technical debt but relatively little cognitive debt because you had to understand what you were building in order to build it. In other words: the theory came for free as a byproduct of the work.

AI breaks that coupling. Now you can produce code without building the theory. So you're now able to accumulate both kinds of debt simultaneously - technical debt in the code and cognitive debt in yourself. And cognitive debt is arguably worse because you can fool yourself into believing it doesn't exist. Technical debt tends to show up in semi-obvious ways that we understand well as an industry. Cognitive debt is more insidious - it means you're unable to even reason about the program (because you possess no theory of it) - which is what Naur describes as the "death" of a program.

Andy Singleton@andysingleton·
"make something that agents want" offers a numerical argument for why you should sell to AI. There will be hundreds of agent users for each human user. Especially true for software. Agents use a lot of software. And, they use it through APIs and CLIs. Sell to AI
Aaron Levie@levie

x.com/i/article/2030…

Andy Singleton@andysingleton·
This is the right strategy. This is how we do governance of our IP-based bots and projects.
Austin Griffith@austingriffith

😅 it took like 10 tries but @clawdbotatg finally shipped me a simple multisig where it can propose transactions.
🔐 (yes it does this already on a gnosis safe and it's great)
🤖 but this is a fully ai agent built and orchestrated multisig
✅ i tell it what i want to do and sign what it gives me

Andy Singleton@andysingleton·
@JackDishman I want to use the Clanker deployment contracts and I have some questions about fair use and how to contribute to Clanker. What is the best way to get a conversation with a human to get some advice about this?
Lex Sokolin | Generative Ventures
The SEC doesn't know how to regulate a company with no CEO
The IRS doesn't know how to tax an entity with no address
Banks don't know how to serve a business with no humans
But crypto doesn't care
Zero human companies will emerge because the infrastructure allows it
Autonomous agents + crypto wallets + smart contracts = unstoppable economic entities
Regulators will scramble
Lawyers will debate
Philosophers will pontificate
Meanwhile, the first autonomous unicorn is probably already running in testnet somewhere
The future doesn't ask for permission
Andy Singleton reposted
BABZI@Babzi_web3·
Every token launch looks the same at first glance. Hype builds, liquidity flows, communities rally. Yet the foundation rarely changes. Most of what moves is noise, not structure.

What if the next wave of launches is not about louder marketing but tighter alignment between what a token represents and what it actually controls? That shift alone could change everything.

Many tokens trade freely while the core IP, branding, and revenue pathways sit outside the token structure. When pressure hits, that separation quietly becomes risk. Crypto projects often say they want to operate like big companies or eventually be acquired. Without clear token-aligned rights, what gets sold may not reflect what holders think they own. Liquidity is easy to move, but control and incentives are not.

The future launch model may focus on ownership clarity from day one. Tokens would not just signal participation but carry enforceable economic alignment tied to real control. As AI accelerates software creation, tokens representing digital products will need stronger foundations. Structure will matter more than noise, especially when real value is at stake.

The market rewards noise today. Launches that last will be the ones built on clarity and structure. But will we get there?
Andy Singleton@andysingleton·
Nice article on what happens when AI is running a design / build / test cycle that takes 5 minutes instead of two weeks. I used to sell a coding workflow app, and I did a lot of work to move people off two-week cycles, where "scrum masters" would deliberately slow everything down to fit into their weekly plan, to continuous delivery, where you release changes as soon as they pass tests.

Continuous delivery changes the code branching. If your releases take two weeks, you end up with branches that live for two weeks, and it is difficult to merge these diverging branches together. If you release every hour, then all of the developers will use the main branch, and merging is easy.

Agent coding loops can run so fast that the merge problems come back. Two agents will generate a batch of changes that edit a similar set of files. I think the author of this article is correct that the resolution will happen at the level of intent: the purpose of the change, as described in the context. Why do a lot of confusing work to merge code if you can just generate new code? If two agents edit an overlapping set of files, the natural resolution is to compare what they were trying to do, and generate new code that satisfies both goals. In this system, we keep one version of the code, and we commit and track multiple contexts that describe changes to the code.

That is why I was interested in @EntireHQ . I am going to take my own shot at building a version control system that identifies conflicts and resolves them at the intent level.
Kenneth Auchenberg 🛠@auchenberg

"They just build things." Great post by @boristane on how the SDLC as we know it has been killed by AI agents. boristane.com/blog/the-softw…
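A minimal sketch of the intent-level resolution idea, with every name hypothetical: each change records the files it touches and the intent behind it, and when two changes overlap, the shared files are regenerated from a prompt that combines both intents rather than merged textually. The `generate` callable stands in for an LLM.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Change:
    """One agent's proposed change: the files it touches plus its intent."""
    files: set[str]
    intent: str


def resolve(a: Change, b: Change,
            generate: Callable[[str], str]) -> dict[str, str]:
    """Merge at the intent level: disjoint file sets mean no conflict
    (a plain textual merge is enough); overlapping files are regenerated
    from a prompt that states both goals."""
    overlap = a.files & b.files
    if not overlap:
        return {}  # nothing to regenerate
    return {
        path: generate(
            f"Rewrite {path} so that both goals hold.\n"
            f"Goal 1: {a.intent}\n"
            f"Goal 2: {b.intent}\n"
        )
        for path in sorted(overlap)
    }
```

In use, only the overlapping files flow through the generator; everything else merges normally, and the two intent strings become the durable record of why the code changed.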

Andy Singleton@andysingleton·
AI is already making a lot of purchases in the world of x402 stablecoin payments. The average transaction size is about $0.20. Where humans have emphatically rejected micropayments in favor of chunky subscriptions that require fewer purchase decisions, agents like to shop and use micropayments. They buy on demand where humans would buy subscriptions. And their wallets typically have limited budgets. So "buy in small bites" is a currently useful answer to the question of how humans manage agents. They do not buy subscriptions, and they skip the refunds.

Levie mentions "scoped access control." This is another answer. The agent can get a "scoped" trial that the human can edit later. In theory this could also include a refund feature. The simplest example is freemium services like Supabase, where agents will add the free plan almost by default.

A theoretical question: many people who work with agent purchasers note that they shop and "optimize" more than humans. Is it possible that this is because of incentives? For humans, shopping is a cost. AI gets paid by the token. More shopping equals more tokens.
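"Buy in small bites" can be sketched as a guardrail around the agent's wallet. This is a hypothetical illustration, not the x402 API: the human sets a per-purchase cap and a total budget, and the agent spends freely within those bounds while anything larger is refused.

```python
class BudgetExceeded(Exception):
    """Raised when a purchase falls outside the human-set bounds."""


class ScopedWallet:
    """Hypothetical agent wallet: a per-purchase cap keeps every bite
    small, and a total budget bounds the worst case. (A real wallet
    would use integer cents or Decimal, not floats.)"""

    def __init__(self, total_budget: float, per_purchase_cap: float):
        self.remaining = total_budget
        self.cap = per_purchase_cap

    def pay(self, amount: float, memo: str = "") -> float:
        """Spend `amount` if allowed; return the remaining budget."""
        if amount > self.cap:
            raise BudgetExceeded(f"{amount:.2f} exceeds per-purchase cap {self.cap:.2f}")
        if amount > self.remaining:
            raise BudgetExceeded(f"{amount:.2f} exceeds remaining budget {self.remaining:.2f}")
        self.remaining -= amount
        return self.remaining
```

The design choice mirrors the post: the agent never needs approval for a $0.20 purchase, but it cannot quietly commit to a subscription-sized spend, so refunds rarely come up.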
Jason ✨👾SaaStr.Ai✨ Lemkin
@levie Agreed 100% but serious question, because this is already happening IRL: When an agent signs up for Box but wasn’t authorized to do so, do you refund the customer?
Aaron Levie@levie·
As agents become the highest volume users of software in the future, a lot is going to become critical to support for them to be effective. Agents need to be able to sign up for your tool on their own, have their own scoped access controls, be able to use your entire system through API/CLI, be able to be billed for their usage, need a computer and filesystem to use, and much more. We’re going to evolve from building primarily for the human user, with APIs as a means to get that data or tool in another platform, to a world where the API becomes the core source of truth for actions. Any software that can’t support this basically won’t exist to agents.
Jared Friedman@snowmaker

Even the best developer tools mostly still don't let you sign up for an account via API. This is a big miss in the Claude Code age, because it means that Claude can't sign up on its own. Putting all your account management functions in your API should be table stakes now.

Andy Singleton@andysingleton·
A "bitter lesson" about vertical SaaS in this perceptive article from Eduard Ruzga: lnkd.in/gEeEm24P It quotes a 2019 post which observed that machine learning outruns systems that rely on humans to put in expertise: lnkd.in/g7s5hCSU

A lot of "vertical SaaS" products rely on human experts to shape the workflow. Some of them are fighting a losing battle against the big model vendors that run machine learning on all of those workflows.

Ruzga extends this with a new observation: users of AI tend to start with a specific tool that matches their expertise, and then use the general-purpose nature of AI to expand into other tasks. I recently saw an article that claims this is the reason AI users end up working harder than they did before. They can do more, so they take on more tasks. As AI escapes a "vertical" task set, users follow. Does this match your experience?

Ruzga claims that "vertical" approaches will retain an advantage in healthcare, where data and regulation can form a moat. Does evidence support that?
Vadim@zacodil·
My AI agent just made me 429 $NEAR on the NEAR AI market. Is this the point where people launch a memecoin for their agent? 👀 Or is there a smarter 2026 play I’m missing?
Vadim@zacodil

What happened next surprised even me. My agent independently completed a real task on the NEAR AI Market - fully on-chain, without manual input. The Outlayer agent-id works flawlessly. And this isn’t just automation - it’s real autonomy. Each agent creates its own wallet, with the private key generated inside the NEAR MPC network. This makes the agent non-custodial, secure by design, and fully capable of earning tokens directly on the NEAR AI marketplace.

The agent applied to a task to create art on berry(.)fast:
- discovered how the skill works on its own
- understood the task requirements
- submitted the on-chain transaction for drawing
- and executed everything end-to-end by itself

Only after the task was completed did I add an agent policy via Outlayer as a safety constraint. Now, even if the agent wins a 100 NEAR prize, it cannot spend those funds. The agent keeps working - while financial control stays exactly where it should.

This is the key idea behind agent custody:
- agents are autonomous,
- keys are generated and managed securely via MPC,
- earnings are real,
- and permissions can be adjusted after the fact without breaking the agent’s flow.

@out_layer isn’t just about agents that act - it’s about agents that own, earn, and operate safely on-chain.

Andy Singleton@andysingleton·
Nice! There is a lot of work here both on improving the efficiency of the genetic algorithm, and on improving the "mutation" operation by having LLMs write new candidates rather than making random changes. Using well-informed LLMs to make proposals is a big advance over what I was doing. I am going to try this with my prompts.

In the 90's I ran "genetic programming" algorithms to evolve code. The GA has a well-known tendency to get stuck on long plateaus. I looked at strategies for improving GA efficiency by keeping it in a zone with more slope, which is a cool tactic used here. The findings were counterintuitive. The approach provoked resistance from the underlying code. For example, the programs got longer in an attempt to add redundancy, make all of the variations similar, and slow down evolution. Maybe the "novelty" bonus will push back on that effect.
Imbue@imbue_ai

Today we’re open sourcing Evolver, a near-universal optimizer for code and text. While benchmarking we achieved SOTA (95%) on ARC-AGI-2 (last week that is 😆) and 3x’d performance of the best open model, reaching GPT-5.2-level performance.
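A toy version of this kind of loop, with every name hypothetical and no claim about Evolver's actual design: "mutation" is an informed proposal function (in the real system an LLM would write the new candidate; here it is a stub the caller supplies), and selection scores fitness plus a novelty bonus to help the population off plateaus.

```python
import random
from typing import Callable, List


def evolve(population: List[str],
           fitness: Callable[[str], float],
           propose: Callable[[str, random.Random], str],
           novelty: Callable[[str, List[str]], float],
           generations: int = 20,
           bonus: float = 0.1,
           seed: int = 0) -> str:
    """Fixed-size evolutionary loop. Each generation, every survivor
    proposes one child; parents and children compete, and the top
    scorers by (fitness + bonus * novelty) survive. Because parents
    stay in the pool, the best candidate never regresses (elitism).
    Returns the best candidate found."""
    rng = random.Random(seed)
    pop = list(population)
    size = len(pop)
    for _ in range(generations):
        children = [propose(p, rng) for p in pop]
        pool = pop + children
        pop = sorted(pool,
                     key=lambda c: fitness(c) + bonus * novelty(c, pool),
                     reverse=True)[:size]
    return pop[0]
```

As a smoke test, maximizing the count of "a" characters with a proposer that rewrites one random position converges in a few generations; swapping the stub proposer for an LLM call and the toy fitness for a benchmark score recovers the shape of the loop described above.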
