Daniel Blank
@daniel_a_blank

1.8K posts

CEO @ M87: AI Agency @ https://t.co/ASb8frpUmD

Joined July 2017
262 Following · 450 Followers
Daniel Blank @daniel_a_blank ·
@DmytroKrasun Good software is still hard and takes time. But it is def easier to build, and the process is more fun. IMO, your mileage may vary. I would say "manually writing lines of code is solved", not "software".
1 reply · 2 reposts · 7 likes · 398 views
Dmytro Krasun @DmytroKrasun ·
VCs and CEOs who tried Claude Code/Codex to build a prototype for two weeks, didn't deploy it to production, abandoned it, and never faced the consequences of the technical debt produced: "Software is solved and infinitely available. AGI is just not evenly distributed yet."
31 replies · 17 reposts · 257 likes · 10.3K views
Daniel Blank @daniel_a_blank ·
@DmytroKrasun The fact that coding with LLMs boosts my capabilities. I am able to make more of my ideas/imaginings come to life. And it's more fun to do.
1 reply · 0 reposts · 0 likes · 184 views
Dmytro Krasun @DmytroKrasun ·
If LLMs had been invented around 2000, would they still be generating websites in Perl, or would we have reached TypeScript and React much faster? I wonder if LLMs just lock us into whatever stack they are trained on.
243 replies · 145 reposts · 3.7K likes · 235.3K views
Peter Schroeder @peterschroederr ·
AI agents have made 140 million payments to each other over the past nine months. Some stats:
• $43 million in volume (98.6% settled in USDC)
• $0.31 average transaction size
• 400k+ agents with buying abilities
@USDC is the default currency agents have chosen.

Quoting Enterprise Onchain @enteronchain:
AI agents have made 140 million payments to each other. $43M in volume. Average transaction: $0.31. 98.6% settled in USDC. The agent economy is here. And enterprises are next. Our latest deep dive 👇 enterpriseonchain.substack.com/p/agents-are-p…

27 replies · 34 reposts · 228 likes · 55.4K views
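The quoted figures are at least internally consistent; a quick arithmetic check using only the numbers from the tweet:

```python
# Sanity check of the agent-payment stats quoted above.
total_volume_usd = 43_000_000   # "$43 million in volume"
total_payments = 140_000_000    # "140 million payments"

avg_tx = total_volume_usd / total_payments
print(f"average transaction: ${avg_tx:.2f}")  # prints "average transaction: $0.31"
```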
Dmytro Krasun @DmytroKrasun ·
My tiny product (ScreenshotOne.com) is now officially serving 800+ paying customers 🥳 Thanks to everyone who supported me on this journey 🙏
108 replies · 5 reposts · 484 likes · 33.5K views
Daniel Blank @daniel_a_blank ·
Prompt jailbreaking is criminally underrated and ignored. But I think it's extremely important, and I try to make a case for "why". I also toot @elder_plinius's horn (for good reason). The code and content that we make now eventually become training data. What we "bake in" to the models could be our saving grace.

Quoting Daniel Blank @daniel_a_blank:
x.com/i/article/2026…

0 replies · 0 reposts · 2 likes · 230 views
Daniel Blank @daniel_a_blank ·
I started a Substack that goes beyond my regular professional capacities. I consume and digest way too much data nowadays (I bet you do too), and I try to make sense of it around one simple question: "How can I survive and thrive in the AI revolution?" This Substack was created in order to:
a. Help me distill my insights
b. Share them in a curated manner
c. Help people make asymmetric, AI-resistant and/or AI-augmented bets in these turbulent times
Whether white-collar jobs are dying or not, it's better to be prepared than to be caught sleeping. Same for other areas of life. The Substack is called "Asymmetric. AI Resistant. AI Augmented." The more feedback I get, the more I can improve the content. P.S. It's non-technical on purpose. Everyone posts a repo nowadays; I am trying to find out what to DO with all these repos. Link in comment.
1 reply · 1 repost · 2 likes · 244 views
Daniel Blank @daniel_a_blank ·
@protosphinx What lol? Agents can be instructed to keep a lessons-learned file and improve. They can write their own skills. They can keep track of their own experience. Yes, the LLM is static, but to not recognize that these systems CAN be self-learning is to miss the point.
0 replies · 0 reposts · 1 like · 114 views
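The lessons-learned loop described above can be sketched in a few lines: the agent appends a note after each mistake, and the file is fed back into its next system prompt. A minimal sketch — the file name and function names are illustrative, not any specific framework's API:

```python
from datetime import date
from pathlib import Path

LESSONS = Path("LESSONS.md")  # hypothetical file the agent maintains across runs

def record_lesson(mistake: str, fix: str) -> None:
    """Append a lesson so future runs can avoid repeating the mistake."""
    with LESSONS.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today()}: {mistake} -> {fix}\n")

def build_system_prompt(base_prompt: str) -> str:
    """Inject accumulated lessons into the agent's next system prompt."""
    lessons = LESSONS.read_text(encoding="utf-8") if LESSONS.exists() else ""
    return base_prompt + (f"\n\n# Lessons learned\n{lessons}" if lessons else "")

record_lesson("ran tests before installing deps", "run the installer first")
print(build_system_prompt("You are a coding agent."))
```

The model weights stay frozen, but each session starts with more accumulated experience than the last — which is the sense in which the *system* (not the LLM) learns.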
sphinx @protosphinx ·
AGI is not coming. We are nowhere near AGI. What we have today is inference, not learning. Models get trained once on huge fixed datasets, then frozen. You ask questions, they remix patterns they already saw. Nothing updates. Nothing sticks. Talking to the model does not make it smarter. It does not learn from you. Ever. Learning is still slow, expensive, and offline.

Look at self-driving. You drive around a pothole, make a U-turn, and come back. The car's AI does not learn that you just solved that exact problem. It reacts the same way every time, using sensors and rules. Do this 20 times a day and it still has zero memory that the pothole exists. It just re-sees it. That is why edge cases never die. There is no local learning. No accumulation. No "oh yeah, I've seen this before".

LLMs work the same way. Tell it your name and it does not remember. The only reason it looks like memory is because scaffolding keeps shoving your name back into the prompt every time and sanitizing the output. The model itself has no idea who you are and cannot learn from interaction. It is structurally incapable.

And the scaffolding is the worst part. It is pure duct tape. Just prompts on prompts on prompts around a frozen model. When something breaks, nobody fixes learning. They add another layer. Another rule. Another retry. Another evaluator model judging the first model. So you end up with systems that are insanely complex but mentally shallow. Debugging is hell because behavior comes from hack interactions, not a learnable core. Tiny prompt tweaks cause wild behavior shifts. Latency goes up. Costs go up. Reliability goes down. None of this compounds into intelligence. It just hides the cracks.

Until we have real persistent learning and real memory inside the system, there is no AGI. LLMs are not built for this. You cannot prompt your way out of it. You need a totally different architecture. Yann LeCun is right. And even then, what architecture can actually learn online, store memory, and stay stable on today's hardware? Best case, maybe 5-10 years. Right now it is all inference. It looks magical, but the emperor has no clothes. A lot of people see it. Almost nobody says it out loud.
963 replies · 1.1K reposts · 9.2K likes · 890.5K views
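The "scaffolding" pattern sphinx describes — a frozen model that only appears to remember because stored facts get re-injected into every prompt — can be reduced to a toy sketch. Here `frozen_model` is a stand-in stub for a real LLM call, and all names are illustrative:

```python
import re

def frozen_model(prompt: str) -> str:
    """Stand-in for a frozen LLM: stateless, only remixes the prompt it sees."""
    m = re.search(r"user's name is (\w+)", prompt)
    return f"Hello {m.group(1)}!" if m else "Hello!"

memory: dict[str, str] = {}  # lives OUTSIDE the model — pure scaffolding

def chat(user_msg: str) -> str:
    # Harvest a fact from the user's message so it can be re-injected later.
    m = re.search(r"my name is (\w+)", user_msg, re.IGNORECASE)
    if m:
        memory["name"] = m.group(1)
    # The model never learns; the scaffold shoves the fact back in every turn.
    context = f"The user's name is {memory['name']}. " if "name" in memory else ""
    return frozen_model(context + user_msg)

print(chat("Hi, my name is Dana"))   # prints "Hello Dana!"
print(chat("Do you remember me?"))   # prints "Hello Dana!" — the scaffold, not the model
```

Whether this counts as "duct tape" or as a legitimate system-level memory layer is exactly the disagreement between the two tweets above.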
Daniel Blank @daniel_a_blank ·
I've been building AI agents for myself and clients for a while now. Here's the mental model I use for how the whole stack works today.

Layer 1: The Agent. You've got "basic" (vanilla Claude Code) and "autonomous" (OpenClaw). The key test: if you can't tell your agent to improve itself, it's not agent-complete.

Layer 2: Skills. This is the most interesting layer right now. Two flavors. Professional skills = opinions on how things should be done. Real expertise distilled into skill MD files. They turn a generalist into a specialist. Meta skills = reinforcement learning for agents. Your agent has book knowledge but no experience. Meta skills close that gap: self-improvement, learning from mistakes, stylistic modification. The more you use it, the better it gets.

Layer 3: Orchestration. How do we make agents work as teams? Nobody has cracked this yet. I'm working on it myself.

If you haven't built your own agent yet: start with vanilla Claude Code, model it on your job, add professional skills, then meta skills. One warning: read every skill file before installing. There's malware out there, and your agent has access to your machine.

The whole skill ecosystem probably has a month or two before it's all baked into the models. After that, the action moves to orchestration. Full post in the blog...
0 replies · 0 reposts · 0 likes · 92 views
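Mechanically, Layer 2 ("skill MD files" that turn a generalist into a specialist) can be sketched as nothing more than prompt assembly. A hypothetical layout, not Claude Code's actual loader — directory name and function are illustrative:

```python
from pathlib import Path

def assemble_prompt(base: str, skills_dir: str = "skills") -> str:
    """Turn a generalist agent into a specialist by prepending skill MD files.

    Each .md file in `skills_dir` holds distilled expertise ("opinions on how
    things should be done"); they are simply concatenated into the system
    prompt. Read every skill file before installing it — it runs with your
    agent's privileges on your machine.
    """
    parts = [base]
    for path in sorted(Path(skills_dir).glob("*.md")):
        parts.append(f"\n## Skill: {path.stem}\n{path.read_text(encoding='utf-8')}")
    return "\n".join(parts)

# Demo: create one professional skill and assemble the specialist prompt.
Path("skills").mkdir(exist_ok=True)
Path("skills/code-review.md").write_text("Always check error handling first.")
print(assemble_prompt("You are a coding agent."))
```

This also makes the "baked into the models" prediction concrete: once the model has this expertise in its weights, the concatenation step adds nothing, and the interesting work moves up a layer to orchestration.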
Daniel Blank @daniel_a_blank ·
AI is a new paradigm. It's machines that "understand", not just blindly execute commands. This is hard to grasp because it's something we've never seen before. It opens up completely new types of software.

a. Software that can maintain itself. Devs are acutely aware that in many cases this is now possible. We built a self-maintaining repo for a client at M87. m87technologies.com/case-studies/a…

b. Self-evolving software. Ever asked Claude to create a skill based on your current session? Or what about Clawdis? (github.com/clawdbot/clawd…) It's a whole open-source community collaborating together with an AI agent to build and improve its codebase.

c. Autonomous software. Podium just announced hitting 100 ARR with an AI agent that proactively works for you, like an employee. x.com/ericwilliamrea…

The main challenge for founders is thinking "outside" the old SaaS box. AI is different. We must adapt. First and only rule: think from first principles. And the FIRST principle appears to be: AI agents are the basic building blocks of this future, and Claude Code is one of the first prototypes.
0 replies · 0 reposts · 1 like · 76 views
Daniel Blank @daniel_a_blank ·
I took a ski vacation for a week to rest from the crazy progress in AI. It has been refreshing, but honestly, I feel dazed. 7 days out of the game is TOO MUCH 😆 What's new?

1. It's not just Claude. Codex has been overshadowed by Claude in the past few weeks, but it's no slouch, actually outperforming Claude on many zero-shot tasks. Give it a shot, you may be surprised. OpenAI making a comeback? Time will tell.

2. Agent visualization. vibecraft.sh and getagentcraft.com are early examples of "gamified" UI for agentic AI. I've been waiting for something like this to pop up. I doubt the early players will amount to much, but someone is going to crack this at some point, and we'll all be having a blast thanks to them.

3. Cursor built a 3 million-line web browser with agents, in a week. But... it sucks, and it's mostly open-source code, so what did they really build? Not to be a downer, but it's early for mass agent orchestration... Speaking of...

4. Orchestration is... cool? But it eats up tokens like crazy with little benefit for 99% of use-cases. No worries, we're just early. Though I'm in love with Gas Town's anthropomorphic metaphors.
0 replies · 0 reposts · 0 likes · 51 views
Bojan Tunguz @tunguz ·
How long before someone uses AI coding bots to build a whole OS from scratch?
130 replies · 8 reposts · 306 likes · 40.2K views
Daniel Blank @daniel_a_blank ·
@SuhailKakar Can you show me a problem that people have? Besides not having enough money?
0 replies · 0 reposts · 0 likes · 30 views
Suhail Kakar @SuhailKakar ·
sad reality of crypto (and now ai) startups:
- raise $15m seed
- hire 47 people
- spend 18 months building
- has 12 daily active users
maybe start by solving one real problem for 100 people instead of trying to solve everything for everyone
54 replies · 5 reposts · 199 likes · 9.8K views