Lesly Garreau
@lesly
258 posts
Serial builder | 20 years, 8 companies | Now building affordable SaaS for coaches with AI (Follow for the build) - 1st one 👇 My Mini Funnel

Bordeaux, Kyiv & Bangkok · Joined November 2006
53 Following · 1.7K Followers
Vu. @TeeDevh
Honestly stuck between 4 options:
3 × $20 Codex
3 × $20 ClaudeCode
1 × $100 Codex
1 × $100 ClaudeCode
What would you pick? 🤔
208 · 5 · 366 · 81.4K
Alessandro Palombo @thealepalombo
Japan has 9 million abandoned houses. By 2038, it's projected to be 1 in 3. Many of these sell for near-zero prices, and the government covers 30–75% of renovation costs. Japan also places no restrictions on foreign property ownership, with identical rights to citizens.

Only a very specific profile would consider this. But there's a lot of similarity to Italy's €1 home schemes, which were dismissed as gimmicks and are now attracting serious buyers to villages across Sicily and Sardinia. Japan's abandoned house market is a real entry point for people willing to look past the obvious.

In Kyushu, you can also find move-in-ready houses for $15,000–20,000 in towns with hot springs, fresh seafood, and Shinkansen access. I will be exploring personally later this year, but quality of life in Japan looks to be incredibly high.

Is this one of the most overlooked property plays in Asia right now?
1K · 2.6K · 26.9K · 4.4M
Marc Lou @marclou
I've tried Claude Code and Codex. The upgrade was nowhere near worth the time needed to adapt. I ignore new tools that take more than 1 minute to set up and use. You can build a startup with GPT-3.
Adithya@curiousadithya

@marclou marc i don't understand why you are still using cursor. i'm confused, am i missing something? Or is it your personal choice to code inside cursor? Can you tell me what made you stick to cursor instead of using claude code or codex directly?

181 · 21 · 616 · 124.7K
Pramod @pramodk73
can someone tell me if this plan from ahrefs provides value? i just want to understand keywords, trends, competitors, etc.
44 · 0 · 39 · 11.6K
Marc Köhlbrugge @marckohlbrugge
I just tried ui.sh and unfortunately I cannot recommend it. Here's why… It's too good, I now want to redesign all my sites 🔥🔥
36 · 6 · 437 · 88.6K
Jagan Mohan R., MD
@heyrimsha "Prerequisites: VideoDB API Key (console.videodb.io)." This is not open source; it is just a free wrapper for a paid service. Please do not call it 100% open source. There is no such thing: it is either open source or it is not.
2 · 0 · 6 · 447
Rimsha Bhardwaj @heyrimsha
🚨BREAKING: Someone just built the open source Loom killer that turns your screen recordings into AI agents. It's called Bloom and it does something Loom never could. Your recordings don't just sit as files. They become queryable, searchable, agent-ready data the moment you stop recording.

Here's what actually happens: You hit record. Bloom captures your screen, mic, and system audio locally. While you're still recording, it uploads chunks to the cloud in real time. The second you stop, it auto-generates a full transcript, creates visual embeddings, and indexes every spoken word.

Then the wild part kicks in. You can query your recordings through APIs. You can hook Claude Code directly into your video library and ask it questions. Your screen recording just became a knowledge base.

→ Local-first, your files never get locked in
→ Cmd+Shift+R starts and stops from anywhere on your system
→ One-click shareable link for any recording
→ Chat directly with your video content through VideoDB Chat
→ Works with Claude Code and other agent frameworks out of the box

Recordings are no longer files. They're inputs for AI. MIT License. 100% open source. Link in comments.
31 · 77 · 819 · 126.7K
Andrea @acolombiadev
You don’t need Obsidian if an agent is indexing a private GitHub repo. The wiki is just markdown files. Any agent with repo access can read, write, and maintain it. GitHub renders .md natively. Your agent handles the rest.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data. E.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
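The ingest-and-compile loop described in the quoted post can be sketched as a tiny script. This is an illustrative reconstruction, not Karpathy's actual tooling: `summarize` is a stand-in for the LLM call, and only the skeleton (walk raw/, refresh stale pages, regenerate an index with Obsidian-style backlinks) is shown.

```python
import pathlib

def compile_wiki(raw_dir, wiki_dir, summarize):
    """Incrementally 'compile' raw/ markdown documents into a wiki directory.
    `summarize` is any str -> str callable standing in for an LLM call.
    Returns the list of pages (re)written this pass."""
    raw, wiki = pathlib.Path(raw_dir), pathlib.Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    written = []
    for src in sorted(raw.glob("*.md")):
        page = wiki / src.name
        # Only recompile pages whose source changed since the last pass.
        if not page.exists() or page.stat().st_mtime < src.stat().st_mtime:
            page.write_text(f"# {src.stem}\n\n{summarize(src.read_text())}\n")
            written.append(page.name)
    # Maintain a top-level index with [[backlinks]] to every article.
    stems = [p.stem for p in sorted(wiki.glob("*.md")) if p.name != "index.md"]
    (wiki / "index.md").write_text("# Index\n\n" + "".join(f"- [[{s}]]\n" for s in stems))
    return written
```

The incremental check is what makes repeated passes cheap: unchanged sources are skipped, so only new or edited raw documents cost tokens on each run.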

24 · 6 · 219 · 50.9K
Lesly Garreau @lesly
@trq212 So I don't understand. If there is a usage limit on all accounts, why does it matter that people use their limit with openclaw or any other third-party app?…
5 · 0 · 5 · 524
Lesly Garreau @lesly
I'm going to rant about ClaudeGate. Feel free to scroll past.

My problem isn't that Anthropic added usage limits. Servers cost money, I get it. My problem is how they did it. No announcement. No community conversation. They just quietly flipped a switch and all hell broke loose. People noticed sessions maxing out after 2 prompts when the day before everything was fine. They posted on social. Anthropic's response? "Oh, that's a bug. Fix incoming." Days passed. No updates. Then: "We fixed it." They hadn't. More chaos. More cancellations. Mine included.

Then Claude's source code leaked. People found out Anthropic had hardcoded limitations directly into the code, while bug fixes were only being pushed to Anthropic employees. They tried to silence the repos. Too late. Code was rewritten in Python and cloned everywhere.

And the grand finale? Anthropic's official response, delivered by Lydia Hallie: "You guys don't know how to use prompts and context windows properly." Not joking. Those are basically the words. Go read the comments: lnkd.in/dkHkQNfE

The "Ethical AI company" lied, got caught, and blamed its users. That's not a bug. That's a character trait.

#ClaudeCode #Anthropic #AI
0 · 0 · 0 · 60
Lesly Garreau @lesly
Well, I guess I'm going to join the 7% @AnthropicAI @claudeai doesn't seem to care about and move to Codex as well… It's just become ridiculous at this point. I'm on the $100 5x plan and I reach the usage limit daily…
0 · 0 · 1 · 107
Lesly Garreau @lesly
I feel you deeply, friend… I'm looking for alternatives ><. Please @claudeai fix your stuff!
Hayrettin Tüzel@devneeddev

I tested Claude Code on a fresh account: 1,500 lines of HTML cost me 50% of my window. Full video and summary is here.

I just ran a recorded test on Claude Code with a fresh account (Pro, not Max; my main account was 20x Max), and the result is honestly insane. The task was trivial: create 3 simple demo HTML pages, around 500 lines each. Roughly 1,500 lines of code total. Nothing massive. Nothing enterprise-grade. Nothing that should meaningfully stress a premium coding product.

And yet Claude Code burned through 40% of my 5-hour window almost immediately. I ran the exact same test with Codex, and it consumed only 2%. Then it got even worse: after the session ended, I did absolutely nothing for 15 minutes, and Claude still ate another 10%. Total: 50% of the 5-hour window gone for a tiny HTML demo. My weekly usage had already started at 2% before I even really used it, and after this tiny test it jumped to 8%.

Now let us be generous and assume this entire run used around 30k tokens total. If 30k tokens represents 10% of weekly usage, that implies around 300k tokens per week. That is roughly 1.2M–1.3M tokens per month, and even if you round up aggressively, you are still in the 1.5M token range. Using the Sonnet 4.6 pricing you list: $3 per 1M input tokens, $15 per 1M output tokens. How exactly is this supposed to make sense for a paid coding product?

Because from the user side, this no longer looks like "premium usage protection." It looks like a quota system that is either wildly inefficient, badly broken, or being accounted in a way users are not being told about. And that is before I even get to my main account: my $200 Max plan now dies in a single day. Just a few months ago, similar or heavier usage would last me about a week.

So no, I do not buy the "maybe you just used it more" excuse anymore. Something is clearly broken in Claude Code. Either token accounting is broken, context handling is broken, background consumption is broken, or all three.

@alexalbert__ is this really the experience you want users to pay for? Just watch the video. I tried to be very transparent and clear for your team! I was a fan of Claude but am just disappointed! And if you want, send me the detailed token accounting for this session and let us inspect it together publicly. Because from where I am standing, this is no longer a small pricing annoyance. It looks like something seriously wrong is happening, and users deserve a real explanation.
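The extrapolation in the quoted post can be checked directly. A quick sanity check of the numbers, taking the post's assumption (30k tokens spent equals roughly 10% of the weekly quota) at face value:

```python
session_tokens = 30_000                         # assumed total for the test run
weekly_share = 0.10                             # the run moved weekly usage by ~10 points
weekly_tokens = session_tokens / weekly_share   # implied weekly token budget
monthly_tokens = weekly_tokens * 52 / 12        # weeks-per-month average

# Upper bound on API-equivalent cost at the quoted Sonnet pricing,
# charging every token at the more expensive output rate ($15 / 1M):
max_monthly_cost = monthly_tokens / 1e6 * 15

print(int(weekly_tokens), int(monthly_tokens), round(max_monthly_cost, 2))
# → 300000 1300000 19.5
```

So the implied monthly budget, priced entirely at the output rate, is on the order of $20 of API-equivalent usage, which is the comparison the post is driving at.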

0 · 0 · 0 · 45
Lesly Garreau @lesly
🔥 Anthropic accidentally shipped the Claude Code source code today and there's some pretty interesting stuff buried in it. Here's what's coming.

Proactive Mode: Claude Code won't wait for you to ask anymore. He'll scan your codebase, find TODO comments, and just handle them, running in the background while you do other things. Autonomous and always on.

Coordinator Mode: instead of one agent doing everything, he becomes like a manager. Delegates to specialized subagents: one for research, one for synthesis, one for validation. More structured, parallelized, and dramatically more capable on complex problems.

Adversarial Verification: right now Claude Code kind of assumes things work, pretty optimistic. The new adversarial agent flips that. He defaults to "this is broken" and actively tries to break your code before you ship it. Actual QA, not just vibes.

Token Budgets: you'll be able to set a minimum compute spend. Tell him to use at least 500k tokens on something and he goes deep instead of stopping at the first plausible answer. Huge for research and architecture decisions.

Reversible Context Collapse: long sessions won't degrade silently anymore. He scores every piece of context by risk level, compresses the low-risk stuff, keeps the critical things intact. And the compression is reversible, which is cool.

Persistent Job Templates: turn any conversation into a reusable template, schedule it, automate recurring tasks. Basically treat Claude Code like a task management system instead of just a chat window.

Bring Your Own Compute: they're shipping a Docker image so enterprise teams can run Claude Code on their own infrastructure. Security isolation, no bottlenecks, full control. Yeah, that one's going to be popular.

And more!! So basically Claude Code is moving from "tool you prompt" to "agent that works." The teams that figure out how to configure and direct that are going to have a serious edge. The ones still typing one-off requests, not so much ><
0 · 0 · 0 · 39
Lesly Garreau @lesly
Long-time paid subscriber. Believed in @claudeai from early on. But now the free tier works fine during peak hours, while Pro subscribers get locked out after 2 prompts? That's not a capacity problem. That's a choice. And the craziest part: they already have the banner system, the one that tells you when peak hours are active. They just didn't use it. Because if you knew, you'd use less. Which is literally what they say they want... I mean, that's how you burn goodwill you spent years building. Which is a shame really.
0 · 0 · 0 · 25
Lesly Garreau @lesly
@nico_jeannen @AnthropicAI Man, I wish they reverted this ><. It's horrible. I also never got even close to my limit, and since that update it's daily... I'm on the x5. Also, the way they rolled this out was really, really badly done. They could have grandfathered old users in at least...
0 · 0 · 0 · 18
Nico @nico_jeannen
Idk what's going on with @AnthropicAI but they messed up the weekly limits badly it seems Anyone else with the issue? I've almost never hit the weekly limit on the x200 plan (except maybe 2-3x a few hours before the reset) and now I'm hitting it on Monday lol
Thariq@trq212

To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.
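The announced window (weekdays 5am–11am PT, i.e. 1pm–7pm GMT) is easy to check programmatically. A sketch, assuming a fixed UTC-8 Pacific Standard Time offset and ignoring daylight saving:

```python
from datetime import datetime, time, timezone, timedelta

PT = timezone(timedelta(hours=-8))  # PST; DST would shift this to -7 in summer

def in_peak_window(now_utc: datetime) -> bool:
    """True when the announced peak window (weekdays 5am-11am PT) is active."""
    local = now_utc.astimezone(PT)
    return local.weekday() < 5 and time(5) <= local.time() < time(11)

# Wednesday 16:00 UTC is 8:00 PT, inside the window:
print(in_peak_window(datetime(2026, 2, 4, 16, 0, tzinfo=timezone.utc)))  # → True
```

A check like this could drive exactly the kind of "peak hours are active" banner the replies below say Anthropic already has but didn't use.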

372 · 83 · 2.1K · 287.7K
Lesly Garreau @lesly
Everyone is racing to build AI agents. I think that's the wrong bet. The businesses that will dominate the next few years won't be the agent builders. They'll be the API builders: the APIs agents actually need to function.

AI agents don't browse websites or click buttons. They call APIs. Claude, OpenAI, Cursor, they all operate by making structured API requests to execute tasks. So your customer isn't a person clicking a dashboard anymore. Your customer is a line of code inside someone else's system.

That changes everything about retention. With traditional SaaS, users churn when a competitor has a better UI. With APIs, switching means a developer has to rip your integration out of their codebase, rewrite the logic, test it, and redeploy. That's real engineering cost. Nobody does that casually. It's structural stickiness, not behavioral.

And the revenue model scales with your customers' success. Every user their app gains means more API calls, which means more revenue for you, without you doing anything extra.

The proof is already out there. ScreenshotOne is a solo-founded API that renders screenshots from URLs. Simple problem, clean API interface, integrations with Zapier and Make. Doing $10k+ MRR. One founder. Posties is an open-source social media posting API doing $60k/month, because basically every agent that needs to post to social media reaches for it. Resend is an email API; developers embed it once and never leave, because the switching cost is too high and it just works. These aren't unicorn exceptions. They're proof that a specific, well-scoped API solving a repeatable problem for agents and developers is a real, high-margin business.

And most people miss this part. When your API connects to Zapier or Make, every new automation platform that ships becomes a distribution channel for you automatically. You stop being a product and start being infrastructure. That's quite the compounding effect.

On the build side, the barrier is genuinely low right now. With Next.js, TypeScript, Supabase for API key management and usage tracking, Vercel for deployment, and Stripe Checkout for billing, you can get an API MVP live in under a day. Total cost to start is basically zero.

The unsexy part that actually determines success is documentation. AI agents, and the developers who program them, rely on clear docs to understand and integrate your API. Bad docs means no adoption, regardless of how good the underlying product is.

Validate before you polish. Offer free API calls to early developers, watch whether they actually integrate it, and confirm you're solving a real problem before you build out pricing tiers and a full landing page.

The window for this is open right now because most builders are focused on the agent layer. The API layer is less crowded, stickier, and compounds faster. So build the thing agents can't function without :)
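The key-management-and-metering core that the stack above delegates to Supabase reduces to a few lines of logic. A minimal in-memory sketch, purely to show the shape of it (all names hypothetical, not anyone's production code):

```python
import secrets
from collections import defaultdict

class ApiKeyStore:
    """Toy API-key issuance + usage metering; an in-memory stand-in for a DB."""

    def __init__(self, monthly_quota):
        self.monthly_quota = monthly_quota
        self.owners = {}                  # key -> owner
        self.usage = defaultdict(int)     # key -> calls this month

    def issue(self, owner):
        """Mint an unguessable key for a new customer."""
        key = secrets.token_urlsafe(24)
        self.owners[key] = owner
        return key

    def record_call(self, key):
        """Return True if the call is allowed; False if unknown key or over quota."""
        if key not in self.owners or self.usage[key] >= self.monthly_quota:
            return False
        self.usage[key] += 1
        return True

store = ApiKeyStore(monthly_quota=2)
k = store.issue("early-adopter")
print(store.record_call(k), store.record_call(k), store.record_call(k))  # → True True False
```

The same `record_call` counter doubles as the billing input: metered plans are just a quota check plus a usage report to Stripe at the end of the period.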
2 · 0 · 2 · 52
Lesly Garreau @lesly
@mhrescak You should check OpenOats, it’s a free open source version of Granola ;)
0 · 0 · 5 · 318
Matej @mhrescak
Granola was overkill for my markdown note setup, so I had Claude build a fully local transcription CLI with a skill that works great and composes well with others. Read more: hrescak.com/notes/transcri…
6 · 9 · 207 · 22.1K
Matthew Berman @TheMattBerman
@lesly @openclaw @vishalojha_me @Meta For statics I made @StealAds to automate that part. It doesn't do video yet. For video… it's more complicated. I have a Kling automation I'm using until Seedance. Work with great creators actually filming too!
1 · 0 · 2 · 270
Matthew Berman @TheMattBerman
I run my meta ads with @openclaw for $0/month 😱 here's the system that runs autonomously:

step 1: daily health check
→ social-cli (major shoutout to @vishalojha_me) wraps @Meta's marketing API (token refresh, pagination, rate limits all handled)
→ am I on track? what's running? who's winning? who's bleeding? any fatigue?
→ the same 5 questions I asked Ads Manager every morning for 20 years

step 2: catch dying ads before CPA spikes
→ @OpenClaw pulls daily frequency by ad
→ frequency > 3.5 = audience is cooked, CTR is about to drop
→ this one signal saves more money than any dashboard

step 3: auto-pause bleeders + shift budget to winners
→ CPA > 2.5x target for 48hrs? auto-pause. no hesitation.
→ ranks every campaign by efficiency. recommends shifting spend.
→ last fri it paused an $87 CPA campaign at 3am and scaled my best performer 30%

step 4: write new ad copy from your winners
→ agent analyzes what's working (hooks, angles, CTAs)
→ generates variations based on the patterns in YOUR top performers
→ copy modeled on what already converts in your account

step 5: upload ads directly to your account
→ new creative + copy → live in @Meta Ads Manager
→ no more downloading, formatting, clicking through the upload flow
→ agent handles the entire publish cycle

step 6: content concepts + morning brief
→ spots patterns across winners and suggests what to test next
→ delivers everything to Telegram, Slack, wherever you want it
→ 90 seconds to read. reply "approved." done.

input: your ad account + your target CPA
output: an AI that monitors, kills, scales, writes, AND uploads your ads
dozens of hours in ad manager → 1 text message

I packaged the entire system as the Meta Ads Kit. 5 @OpenClaw skills:
- meta-ads (daily checks + auto-pause)
- ad-creative-monitor (fatigue detection)
- budget-optimizer (efficiency scoring + shift recs)
- ad-copy-generator (writes variations from your winners)
- ad-upload (publishes creative directly to your account)

giving it away free. comment ADS + like + follow (must follow so i can DM)
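Steps 2 and 3 boil down to two thresholds. A minimal sketch of that rule engine (field names are hypothetical; the real skills presumably read these metrics from Meta's marketing API):

```python
def pause_reason(ad, target_cpa):
    """Return why an ad should be paused, or None to leave it running.

    `ad` is a dict with hypothetical fields:
      frequency          - average impressions per person
      cpa                - current cost per acquisition
      hours_over_target  - consecutive hours CPA has exceeded 2.5x target
    """
    if ad["frequency"] > 3.5:
        return "fatigue: frequency > 3.5, CTR about to drop"
    if ad["cpa"] > 2.5 * target_cpa and ad["hours_over_target"] >= 48:
        return "bleeder: CPA > 2.5x target for 48h"
    return None

# The $87 CPA campaign from the post, against an assumed $30 target:
print(pause_reason({"frequency": 2.1, "cpa": 87, "hours_over_target": 48}, target_cpa=30))
# → bleeder: CPA > 2.5x target for 48h
```

The value of encoding it this way is that the thresholds (3.5, 2.5x, 48h) become reviewable config rather than judgment calls made at 3am.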
2.4K · 224 · 4.2K · 633.9K