Antony Evans
@yogantony
1.4K posts
Curious explorer of edges

Chapel Hill, NC · Joined September 2012
1.2K Following · 1.5K Followers
Antony Evans @yogantony
@eladgil @pmarca I don't think SV founders are so far ahead anymore. Everyone gets the models at the same time and learns through AI/online. The only place SV is still ahead is cultural considerations, not the tech. Folks in the labs are ahead through their early model access
Elad Gil @eladgil
People at major AI labs (using internal models) are 3-4 months ahead of startup Silicon Valley engineers.
SV founders/eng are 3-6 months ahead of NY.
NY founders/eng are 6-12 months ahead of the rest of the world.
Most people have no idea how fast AI is shifting, as they're 1-2 years behind SOTA.
"The future is here, just not equally distributed" - Robert Heinlein
Antony Evans @yogantony
@ThomasBekkers @garrytan I've been thinking about whether you can bundle this up and offer it as a service. I've decided probably not, as it ends up deeply personal: everyone's unique workflow and brain make it something you need to build yourself to really get the benefit
Thomas Bekkers @ThomasBekkers
@garrytan Been using both Gstack and Gbrain for a while now, but there are still times I feel like I'm not getting it fully "right". Would it be possible at some point to create a living document that walks through the entire setup process from A to Z and how to use it to its fullest?
Antony Evans @yogantony
This is the way
kennethlou @kenneth_lou

the AI loop that's been rewiring how I think about company design. Sat in a @ycombinator talk this week where the framing finally clicked on what's really happening.

Old pitch: make engineers 20% more productive. Add copilots. Ship more software with AI. All true. All also a faster-horse upgrade.

Actual move: one person more powerful than old structures. Building a queryable company. Agent-native software. A different category entirely. 5 layers:

1/ Sensors + data. Every signal from the outside world: customer emails, support tickets, cancellations, product events, code changes. If it's not captured, it didn't happen to the company.

2/ Policy layer. The rules: what the system can do alone, what needs human sign-off, what must be logged. Guardrails that make the loop trustworthy.

3/ Tool layer. The deterministic stuff: SQL, API calls, calendar lookups. Things that live in code, not English. @garrytan's framing: figuring out what belongs in markdown vs what belongs in code is 90% of the battle.

4/ Quality gates. Safety checks. Human review for high-stakes calls. The escape hatch back into judgment.

5/ Learning mechanism. The unlock. A monitoring agent watches every query, sees where it fails, writes the fix overnight, opens the merge request, ships it. The same query that failed yesterday works tomorrow. The company gets better while you sleep.

Most teams have 1 through 4. Almost nobody is running 5 across every function yet. That's the next 6 months.

We're 5 people at @usemitohealth across two cities. Everyone touches code. Revenue per employee at a level I wouldn't have believed in my fintech days. Headcount as a feature, not a bug.

Humans aren't getting replaced. We're going deeper. The orchestration, the taste, the high-stakes calls - that layer is expanding. The middle is what's compressing... If you're operating today, the question isn't whether to use AI but whether the shape of your company makes sense.
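The five layers above can be sketched as a toy loop. This is an illustrative sketch only, not anything from the talk: the `Company` class, the `lookup_revenue` tool, and the sample data are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Company:
    """Toy sketch of the 5-layer loop: sensors, policy, tools, gates, learning."""
    events: list = field(default_factory=list)        # layer 1: every signal, captured
    failure_log: list = field(default_factory=list)   # feeds layer 5 (learning)

    ALLOWED = {"lookup_revenue"}                      # layer 2: policy rules

    def lookup_revenue(self, month):                  # layer 3: deterministic tool
        data = {"2024-01": 12000, "2024-02": 15000}   # hypothetical data
        return data.get(month)

    def handle(self, tool, arg):
        self.events.append((tool, arg))               # sensors: if not captured, it didn't happen
        if tool not in self.ALLOWED:                  # policy: unknown actions need sign-off
            return "needs human sign-off"
        result = getattr(self, tool)(arg)
        if result is None:                            # layer 4: quality gate catches failures
            self.failure_log.append((tool, arg))      # layer 5: queue for the overnight fix
            return "escalated to human"
        return result
```

Layer 5 would be a separate agent that reads `failure_log` overnight and opens the merge request; here it is represented only by the queue itself.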

Fan Mazi Tuunde @KingTunde_SZN
Nobody has found the number yet 👀 What number is in the box? RT. Correct answer wins $3,000. Ends in 70 hrs
Fan Mazi Tuunde tweet media
Antony Evans @yogantony
@gregisenberg Yes! And it's going to work so much better. No introduced friction or attention hooks. Just stuff getting done in the background
GREG ISENBERG @gregisenberg
Sometime in the next 2-3 years, agents will be using the internet more than humans. We designed the whole thing for human eyes, human emotions, human attention spans. Agents have none of that. The internet as we know it was built for the wrong user. The opportunity is rebuilding everything for the new user: agent-native search, agent-native commerce, agent-native discovery. Every category is open again. I can't stop thinking about it.
Antony Evans @yogantony
Try to code with it
Andrej Karpathy @karpathy

Judging by my TL there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state-of-the-art agentic models of this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state-of-the-art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along.

So that brings me to the second group of people, who *both* 1) pay for and use the state-of-the-art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work.

It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and, I think, slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and *at the same time* OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of two properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in B2B settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
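The "verifiable reward" mechanism Karpathy points to (unit tests pass, yes or no) can be sketched in a few lines. This is a minimal illustration, not any lab's actual training code; the `solve` function name and the `(args, expected)` test format are assumptions for the example.

```python
def verifiable_reward(candidate_src: str, tests: list) -> float:
    """Binary RL reward: 1.0 iff the candidate source passes every unit test.

    `tests` is a list of (args, expected) pairs for a function named `solve`.
    """
    scope: dict = {}
    try:
        exec(candidate_src, scope)          # load the model's proposed code
        solve = scope["solve"]
        ok = all(solve(*args) == expected for args, expected in tests)
    except Exception:
        ok = False                          # crashing or malformed code earns zero
    return 1.0 if ok else 0.0
```

There is no equivalent `exec` for an essay, which is part of why coding capability climbed faster than writing under this training signal.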

Antony Evans reposted
Andrej Karpathy @karpathy
staysaasy @staysaasy

The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

Antony Evans @yogantony
@Moonlight_myths My son often talks about when he was big like me. I'm not sure if they are dreams or his previous life. Thanks for sharing
Moonlight 🌙 ✨ @Moonlight_myths
The next thing he said gave me chills like never before. He said, "I didn't like the helicopter though, it was scary and loud. It was high up and it was falling out of the sky." I couldn't believe what I was hearing; I struggled to hold back the tears. How could he know? Before he was born I was having an affair with a man in the army; this went on for a few years. When he wasn't away on deployment I would spend 6 months with him, cheating on my husband in secret. Everything he described was true. He would hide at the bottom of the street until I told him my husband had left for work. We loved going to the fair and staying at different hotels. What scared me the most about this, though... his helicopter went down on his way back to base and he never made it. I found out I was pregnant with my son the same week I lost him. _ Anonymous
Moonlight 🌙 ✨ @Moonlight_myths
My son started talking about his "other life" before he was born; what he told me turned my marriage upside down. I have been married to my husband for 13 years. We have a son together who is 5 and we don't plan on having any more. My marriage was perfect until my son started talking about his "other life". At first I never paid any attention to this, but he started mentioning things like "when I was on my big mission" or "I loved you so much when I was bigger, mom". My husband and I eventually started asking him questions about this, and some of his answers changed everything. I asked him what he meant by loving me when he was bigger and he said, "When I was bigger, mom, we used to have so much fun. We would go out to the fair and have so much fun on the rides."
Jordan Ross @jordan_ross_8F
Anthropic ran their entire marketing operation with one person. $380 billion company. Paid search. Paid social. SEO. Email. App stores. One non-technical hire doing all of it — for 10 months. I pulled it apart. Compared it to every system we've built across the clients we've worked with. Then asked myself one question: If I had to reverse engineer this from scratch — what would it actually look like? Turns out the architecture isn't that complicated. I mapped the whole thing into a 47-page PDF you can upload directly to any LLM. It coaches you through building your own version step by step. Comment "marketing" and I'll send it over.
Jordan Ross tweet media
Matthew Kobach @mkobach
I’ve got access to a crazy amount of proprietary consumer marketing data. Ad spend broken down by industry. Weekly benchmarks across all major ad platforms. Holiday trends. And literally billions of other data points I can slice and dice. Should I start sharing it here for free?
Daniel Vassallo @dvassallo
Since everyone is seeing who Garry Tan is today, let me remind you:

WHY YOU SHOULD NOT JOIN Y COMBINATOR

YC seems like a reasonable proposition. They give you some money to help start your business and promise you access to a community of people who can help you along the way. In exchange, they don't ask for much. The standard YC deal is $500K for 7% equity. That doesn't sound too bad, right? By the end of this post I will convince you that it's actually a terrible idea.

ERGODICITY

First, you have to understand a very important concept: in some systems, what's best for a group is not necessarily what's best for the individuals who make up that group. The total wealth of a group of people could be increasing while almost everyone in that group sees their wealth diminish. When this happens, we say we have a non-ergodic system. If the system were ergodic, what's happening to the collective would also translate to each individual. Silicon Valley is a non-ergodic industry (like Hollywood, book publishing, the music industry, and even your country's economy unless you're under full-blown communism). A non-ergodic system is not necessarily bad, but if you're not cognizant of the system you're in, you're going to get played like a fiddle. Those who benefit from the collective will take advantage of you while you, the individual, lose out. This is what YC will try to do to you. In fact, this is what YC has to do, otherwise it won't survive. Let me explain.

LOOKING FOR TREASURE

Imagine you're told there's a bunch of hidden treasure within a 100-acre area. What's the best strategy for finding some of it? One way is to pick a spot and dedicate your entire life digging as far down as possible in that one spot. You might reason that the deeper the treasure, the bigger the loot. You don't want to settle for a measly small treasure box. You want the full chest of diamonds buried near the earth's crust. This is your shot at glory.
Another way is to use a search method employed by search and rescue teams. You divide the area into small squares and do a "reasonable search" in each one. You use probabilities and some common sense to guide how deep to dig, and then you move to the next square. If you encounter undisturbed compacted dirt, chances are there's no treasure beneath. If you run into bedrock, it's almost certain there's nothing below. So you use that information and move on. Your goal is to search the entire probable area as quickly as possible. Ideally within your lifetime. I'm sure you agree the first way is a dumb strategy. Almost every digger employing it will die treasure-less. But actually, it's only dumb for the individual treasure hunter. For a gold mining company, this is the optimal strategy. The company only needs one miner to hit the jackpot, and all the other miners can die penniless. If the cost of sacrificing an individual treasure hunter is low, the most optimal strategy is to recruit tens of thousands of them, allocate hundreds per acre, and make them dig all the way down to the earth's core. The treasure hunting economy would grow much bigger than if all the individual treasure hunters were optimizing for their own self-interest. Digging in one spot is a dumb strategy for the individual, but a very wise strategy for the collective. This is what happens in a non-ergodic system. We often hear politicians claim that the GDP of the country is growing, but all the gains are going to the 1%. This is the same thing. The wealth of a country could be growing while almost all its citizens get poorer. There's nothing inconsistent about this. The average is simply being dragged up by the freak outliers. The same thing happens in venture capital. 
The owners of the portfolio maximize their returns when the system is non-ergodic, because while the individual treasure hunter has one lifetime to strike gold, the VC portfolio has access to thousands of lifetimes: those of all the treasure hunters.

PLANE CRASH

YC will proudly tell you that you are more likely to end up with a billion-dollar business if you join them. That may be true. What they're more reluctant to tell you is that only about 50 companies met that bar out of the 4,000 or so that went through their program. That's 1.25%. To be fair, that's actually quite impressive. But let's say you have the stamina and willpower to go through YC three times in your lifetime. You'd still need approximately 26 lifetimes to hit the jackpot. See the problem? I don't know about you, but I want to be successful in this lifetime. I can't afford to rely on 26 lifetimes. But maybe you think you're special. You're not like those 3,950 dummies who failed. Maybe you are in fact special, but I wouldn't rely too much on that. Business is much more random than it seems. If business were predictable, YC wouldn't have a measly 1.25% success rate. You might think that those who failed still got something out of it. Maybe. But failure is a very expensive way to learn. You don't need to crash a plane to learn how to fly one. And whatever lessons come from going through YC are probably not very useful anyway, but more on that later.

PIVOTS DON'T EXIST

One of the bad lessons you get from YC is that there's a formula for success, and it looks like this: first you brainstorm. Then you come up with a good idea that can scale to a billion dollars (otherwise what's the point of getting out of bed in the morning?). Then you work hard until you find "product-market fit." And then, if the signals from investors indicate you won't be getting a next round of funding, you start looking for a "pivot." This so-called formula is nonsense. Good ideas rarely come from a brainstorming session.
They come from wandering about with an open mind until you stumble on an opportunity worth pursuing. Most of your ideas will be bad ideas, because unfortunately you're not a visionary genius. So the best way to find good ideas is to have many ideas, try them out, take what works, and throw away the rest. But this is not what YC wants you to do. YC wants you to pick an idea that has market pull (or the potential for it) and then dig a hole in the same spot until you reach the boiling magma. Because what if you stop digging just before you strike gold? When you're cheap and expendable, that's not an optimal strategy for the YC fund. You must go all in. Diversification is for your YC overlords, not for you. If you reach the magma layer and still have nothing, then you'd be encouraged to pivot. But that's not how you find business opportunities in the real world. You can't just say "I'm going to pivot" and suddenly a good opportunity lands on your lap from heaven. You get good ideas by embracing randomness for a long time, until something looks like it has a fighting chance of paying off. The pivot idea you were forced to come up with is extremely unlikely to be one. Your imagination is overrated. The YC execs didn't imagine Stripe or Dropbox or Airbnb. Random things came to them during demo days. The YC folks are smart because they know their imagination is limited. You should too. You can't just pivot a business idea. And if you're going to cherry-pick some pivot that worked out of the thousands attempted, you should stop reading now. Just go join YC. The second bad lesson from YC is the focus on the upside. If there's any formula for success in business, it's to focus relentlessly on staying in the game rather than hitting it big. Focus on the downside, and let the upside take care of itself. To thrive, you must first survive. To win the race, you must finish the race. But this is in tension with what YC wants you to do. 
They want you to dig deep to the middle of the earth, and if you don't come back alive, tough luck. You were a brave soldier, but now it's time for them to focus on the other 999 soldiers. YC is still alive, but you're not. Don't be a dummy. Don't be a bet in somebody else's portfolio.

BUT YOU JUST WANT TO SELL YOUR COURSE!!!

Aha, you caught me! It's true. I do sell something. I run a community for aspiring small-time entrepreneurs who are satisfied with reliably attainable mediocre success. The YC folks feel sorry for our joy with mediocrity while they're out there changing the world. And we reciprocate the emotion. So yes, I am promoting something that goes against everything YC stands for. But if you think YC is not also selling you something, I have a bridge to sell you. Maybe I'm being a bit too harsh though. Because what is it that YC is selling you exactly? Me, I charge you a one-time payment of $450 and you get access to my community, which includes live workshops, recorded classes, a group chat, and a few other things. It's very clear what I'm doing. I ask for money in exchange for access, and those who pay get access. Even my 9-year-old understands it. But YC is not asking you for money. They actually give you money! It looks like you're the one selling to them. You're technically selling them a piece of your business, no? No, no, no. Hold on. The easiest way to see what YC is selling is to look at military recruitment. The military sells the narrative that serving your country is a noble endeavor. You'll get a shot at glory, and at the very least you'll gain some important life skills. You'll also get paid enough to feed yourself and cover your basic needs, but barely. The military wants to recruit expendable soldiers who will go out to the battlefield risking life and limb for the collective, while the generals with all the medals sit in an air-conditioned room giving orders. YC is no different.
It wants to recruit wide-eyed young founders to pick a spot on the treasure map and dig all the way down through the earth's crust. Most of them will spend years or decades digging, and all they end up with is a ramen lifestyle. Usually bunched up with 4 roommates in a damp San Francisco basement living on takeout ramen noodles every single day. But hey, they're young. They'll have time to do adult things later, like starting a family or making decent money. And at the very least, they'll gain some important life lessons and make some good connections. Think about this for a second: the most successful business owners are typically in their 40s and 50s. Why is YC full of 20-somethings? Why aren't the 40-year-old entrepreneurs taking up this incredible deal? YC will tell you it's because only the 22-year-old kids can be true visionaries. BULL. SHIT. You're not a visionary. All those 4,000 kids who went into YC also thought they were visionaries, and where are they now? They're all in the startup cemetery, except for a dozen or so who despite the low odds managed to flip 26 heads in a row. The biggest indicator that YC is a bad deal is that only people who are easily duped take it.

UNLEARNING IS HARD

The best thing I learned about business is to avoid trying to predict what will work and what won't. YC knows this. That's why they only make small bets across thousands of businesses. But YC will try to teach you the exact opposite. Business is a lot more random than it seems. You can't treat it like a predictable project. You need to treat it like a financial investment. Instead of investing your money, you're investing your time, which is as scarce and as precious as money (if not more). Tell me, how do you invest your money? Do you pick one amazing stock, say NVDA, and put all your life savings into it? Of course not. You understand that finance is uncertain. What's good today might not be good tomorrow. There are hidden risks everywhere.
And even if your stock pick doesn't go bust, the biggest gains are likely to happen elsewhere and you won't benefit from them if you're only exposed to one piece of equity. If you went all in on AAPL in 2022, you would have missed out on NVDA. Same thing in business. YC teaches you to try to be a visionary. When you fail... oopsie! Tough luck. The fund benefits from the non-ergodic nature of the system, but you're out years of your time. And that's not even the worst of it. You will have been taught things that not only won't work in the real world of business, but are actively counterproductive. You will have to unlearn almost everything. If you want to succeed in the real world (and within this lifetime), you need to try many small things, experiment, tinker, and build a portfolio of multiple income streams. You need to treat your time the same way you treat your brokerage account. You basically need to become a VC, but for your own ideas. To make the system ergodic, you must unleverage yourself from going all in on one thing and get access to many diverse income streams. The same way it's wise to invest in a broad ETF, you should be doing the same with your projects. YC will teach you the opposite, and you'll have to unlearn all of it. Unfortunately, unlearning is much harder than learning.
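The ergodicity argument in this thread can be made concrete with a textbook multiplicative-bet example (the numbers here are mine, not Vassallo's): each round your wealth is multiplied by 1.5 or 0.6 with equal probability. The group's expected wealth grows every round, while the typical individual's wealth shrinks.

```python
import math

UP, DOWN = 1.5, 0.6   # per-round wealth multipliers, equal probability

# Ensemble (group) view: expected multiplier per round
arithmetic_mean = (UP + DOWN) / 2        # 1.05 -> the collective grows

# Individual (time) view: typical growth factor per round
geometric_mean = math.sqrt(UP * DOWN)    # sqrt(0.9) ~ 0.949 -> the typical player shrinks

# After 100 rounds: expected group wealth vs. the median individual's wealth
expected_group = arithmetic_mean ** 100  # ~131x the starting stake
typical_player = geometric_mean ** 100   # ~0.005x the starting stake
```

The arithmetic mean (what the portfolio owner earns across thousands of founders) is above 1 while the geometric mean (what one founder experiences over time) is below 1: the fund wins while almost every individual loses, which is exactly the group-vs-individual split the thread describes.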
Guri Singh @heygurisingh
🚨 The only 4 jobs that will survive at tech companies: (And no, "prompt engineer" isn't one of them)

1. The Slop Cannon. Product eng, vibe coder, PM - the high-velocity generalist who ships faster than entire teams did in 2023. Not restricted to engineering anymore. Anyone can be this person. Sales reps building internal dashboards. Marketers spinning up landing pages in minutes. If you use AI tools faster than others write Jira tickets, you're a Slop Cannon.

2. The Stitcher. Security, SRE, infra. AI is going to produce so much STUFF across every org that someone needs to hold it all together. Making it stable. Making it secure. Making sure the Slop Cannons don't burn the house down. This is the most underrated role of the next decade.

3. The Interface. Hot people. And I don't mean looks. Sales, CX, people ops, anyone who presents an easy UX to the world and is pleasant to be around. AI can't replace the person clients actually want to talk to. There are many ways to be hot. Charisma is a career skill now.

4. The Grown-Up. Every accelerating org needs someone to say "hey, come on." Legal, finance, ops, the governor on an engine with no speed limit. They're the non-technical version of The Stitcher. Without them, companies move fast and break everything, including themselves.

Here's the uncomfortable part: these 4 roles aren't job titles. They're latent traits. You either have one of these traits or you don't have a seat at the table. The people who get laid off next won't be the ones with the wrong title. They'll be the ones who don't fit any of these 4 buckets. Save this. Read it again. Figure out which one you are.
Guri Singh tweet media
Antony Evans @yogantony
@KyleSamani Exactly the right approach. It doesn't like doing financial tasks though.
Kyle Samani @KyleSamani
There was a story from ~2011 that I vividly recall. It was shortly after Larry Page stepped in as sole CEO of Google. This was wartime-CEO Page, who was very worried about losing mobile to Apple. Page required that all Google execs stop using desktops/laptops entirely. Execs were only allowed to use mobile devices, for what I would assume was 3-6 months. He knew that by forcing the execs to use Google's own mobile services, the services would get a lot better. I think we're in a pretty similar moment right now with AI. I have come to the realization that for almost any task I want to do on my computer, the first step is "use AI." Sometimes it's actually dumb and counterproductive, sure, but I think it's a helpful forcing function. Most CEOs should be forcing a similar exercise throughout their companies.
Chase Dimond | Email Marketing Nerd 📧
Most brands I work with have the strategy figured out. They just can't execute fast enough. Someone built a fix for that. It's called Helena — the OpenClaw for marketing. Give it your URL, and it runs your marketing for you:
→ Reads your site, ads, analytics, and competitors automatically
→ Builds and deploys email flows in Klaviyo or Mailchimp
→ Writes SEO content and publishes it weekly to Webflow, WordPress, or Ghost
→ Launches and manages Meta + Google ad campaigns from scratch
→ Generates UGC video via Sora and static ads via Nano Banana
→ Posts to LinkedIn, Instagram, and Pinterest in your voice
→ Spies on competitor ads across Meta and Google
→ Daily brief: what ran, what worked, what's next
No agency retainer. No CLI. No dev. No setup rabbit hole. Give it your URL, and it runs marketing for you while you sleep. Comment "Helena" below, and a member of the Enrich Labs team will send you the link + free access.
Chase Dimond | Email Marketing Nerd 📧 tweet media