Tyrell Downer

2.3K posts

@TyrellDowner

Engineer

Joined August 2021
702 Following · 6.1K Followers
Tyrell Downer retweeted
Lee Robinson
Lee Robinson@leerob·
I really respect @antirez so I'd like to share my slightly different take on frontend development in 2026 (and especially in a coding-agents world).

First, on his point around libraries/frameworks and company size:

> "We have things like Angular and React that are big-company-design stuff that became normal programming. It's like if every site runs on Kubernetes."

It's true that frontend frameworks had to uniquely solve for the design constraints of BigCos. How do you build a system where thousands of engineers need to ship components independently without muddying the rest of the app? Composition! And if you take composition to its logical extreme and try to build a framework which works for both small *and* very large JavaScript apps, you end up with things like streaming, Suspense, and many of the other niceties of React and metaframeworks.

Often, you do want many of these things to build high-quality products. But sometimes you don't, and you don't have to ditch React's composition model and all the libraries, ecosystem, bundlers, et al. to get there. Personally, I think Bun is one of the best realizations of this vision, where you can write React apps with a single toolchain. The layers of abstraction can fit in your head.

> "There was, in big companies, an extreme desire to do two things: totally isolate frontend from backend, because the internal organization of big companies has such a split, and to make applications so standardized that hiring new people, firing old people, is something possible and easy."

This might get into the HTMX holy war, but IMO this client/server debate has always been a thing. I'd also argue that, in many cases and now increasingly with AI, the client/server split is helpful for humans and agents to compartmentalize the codebase. I'm personally very supportive of open-source libraries like React and friends that get battle-tested at scale and get security patches (while painful sometimes). Models can learn this abstraction and, in many, many cases, stop reinventing the wheel. Similar feelings about Tailwind.

> "We later created a generation of programmers that can't even understand a single language very well in its internals, that is: Javascript, they often know the framework, not the language, nor even CSS well enough."

It's true that a lot of frontend devs end up focusing on app-layer concerns like React/Tailwind and maybe aren't as proficient at debugging heap snapshots. But I don't think the solution is to throw out the abstractions entirely; instead, keep teaching the next generation of devs how to go up and down the stack as needed. This is now massively accelerated by AI and coding agents. Just like you can ask an agent to generate lots of frontend code for you, you can also ask it to deeply explain how every abstraction layer works. There's no forgoing competence to be a great frontend engineer.

> "The irony is that front-end developers highly suffer from all that, for a number of reasons: they are forced to continue learning new ways to do the same button, form, pagination, and so forth. And, also, if they are smart they understand they don't really know what programming really is in most cases, and are not happy about it."

Throughout my entire career doing frontend and product engineering, I've seen opinions like this over and over again. Back in the day, it was framing frontend as "just the HTML and CSS" / web developer, somehow "less than" the great backend engineers. The reality is that there are many, many incredibly talented frontend engineers who do lots of *extremely technical* work. It's time for a lot of backend engineers to give the frontend peeps their flowers, acknowledge some of this frontend stuff is Very Hard, and begrudgingly accept that React has some good ideas.

And if you made it this far and still want to complain, I bet you can make an incredible frontend with Svelte/Tailwind and your coding agent of choice, taking 80-90% of the upside of the last decade of frontend dev.
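Lee's composition argument can be sketched without any framework at all: a component is just a function from props to markup, and bigger components are built out of smaller ones. A minimal illustration (all names here are hypothetical, not from React or any library):

```javascript
// A "component" as a plain function from props to a markup string.
const Button = ({ label }) => `<button>${label}</button>`;

// Pagination composes Button without knowing how Button is implemented.
const Pagination = ({ page, pages }) =>
  `<nav>${Button({ label: 'Prev' })} ${page}/${pages} ${Button({ label: 'Next' })}</nav>`;

// The page depends only on Pagination's props, not its internals,
// which is what lets separate teams ship components independently.
const Page = () => `<main>${Pagination({ page: 2, pages: 10 })}</main>`;

console.log(Page());
```

React's JSX, Suspense, and streaming build on the same idea: components stay composable values, so the framework can decide when and where to render them.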
antirez@antirez

My POV on front-end of 2026

35 replies · 65 reposts · 1K likes · 172.3K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
It's more a concept limit. How many concepts can you come up with and technically understand, in terms of how they work and fit into existing things? If you understand the concept and prescribe it, or agree with the agent on the concept for implementation, it really never gets it wrong. I mean, I've probably done 100 consecutively without an error. So reading code isn't the constraint. Knowing how it works, how it's built, and how you'd build it, at the lowest sensible levels of abstraction, is what matters. 37K lines of code a day, if we say for argument's sake that's 27 concepts, if you spent all day writing a spec, maybe. But LOC isn't inherently mind-blowing. What concepts did you bring to life?
2 replies · 0 reposts · 0 likes · 357 views
Tyrell Downer retweeted
Sibelius Seraphini
Sibelius Seraphini@sseraphini·
Axios is an example of a package that you don't need anymore. Just use fetch.
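For typical JSON requests the migration is small, with one caveat: axios rejects on non-2xx statuses and parses JSON automatically, while fetch does neither. A thin wrapper restores both behaviors (a minimal sketch; `getJSON` is an illustrative name, not a standard API):

```javascript
// Minimal fetch wrapper approximating axios's defaults:
// reject on HTTP error statuses and parse the JSON body.
async function getJSON(url, options = {}) {
  const res = await fetch(url, options); // fetch is built into browsers and Node 18+
  if (!res.ok) {
    // axios throws here too; bare fetch would silently resolve.
    throw new Error(`HTTP ${res.status} for ${url}`);
  }
  return res.json(); // the equivalent of axios's response.data
}
```

Usage (hypothetical URL): `const user = await getJSON('https://api.example.com/users/1');`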
6 replies · 23 reposts · 179 likes · 8.6K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
@cramforce I had the same realization. IDEs are 95% noise. I built my own minimal 'workspace' for each repo on my machine: write plans and brain dumps directly to the repo in a structured, organized way, viewed through a markdown viewer. Easy to copy plan paths to Claude. Built skills around it.
0 replies · 0 reposts · 0 likes · 234 views
Malte Ubl
Malte Ubl@cramforce·
A while ago I got so frustrated with cursor-the-editor that I churned back to VSCode. Now that agents write most of the code, I don't want fancy autocomplete second-guessing what I want to do. I literally only go to the editor when I know exactly what to do. Turns out VSCode was even worse at this. It constantly refuses to save files. Like, it has one job. And this is for trivial edits like package.json or .gitignore. So, I moved on again. Giving Zed a try.
93 replies · 1 repost · 360 likes · 62.1K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
@TukiFromKL An Opus 4.5-equivalent model (in reality, not benchmarks) running on 16GB RAM at 120 tps is maybe max 3 years away? Very exciting. Nobody in earnest is going to use these for, say, coding; context management makes it tiresome. But it's soon enough.
0 replies · 0 reposts · 0 likes · 74 views
Tuki
Tuki@TukiFromKL·
Do you understand what just happened? Qwen dropped small models you can run on a $600 Mac Mini. Locally. No internet. No subscription. No company controls your access. Go do this right now: download LM Studio, search "Qwen 3.5", grab the MLX versions, load them. You now have unlimited AI on your own machine. Nobody can take it away from you. Not a company. Not a government. Not a terms-of-service update. Everyone's fighting over who controls AI. The answer just became: you do.
Qwen@Alibaba_Qwen

🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B ✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation: native multimodal, improved architecture, scaled RL. • 0.8B / 2B → tiny, fast, great for edge devices • 4B → a surprisingly strong multimodal base for lightweight agents • 9B → compact, but already closing the gap with much larger models. And yes, we're also releasing the Base models. We hope this better supports research, experimentation, and real-world industrial innovation. Hugging Face: huggingface.co/collections/Qw… ModelScope: modelscope.cn/collections/Qw…
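Once a model is loaded, LM Studio serves it through an OpenAI-compatible local HTTP endpoint, so calling it is an ordinary fetch. A minimal sketch; the port (LM Studio's common default of 1234) and the model identifier are assumptions that depend on your local setup:

```javascript
// Build a chat-completions request for a locally served model.
// Endpoint and model name are illustrative; check what your local server reports.
function buildChatRequest(model, prompt) {
  return {
    url: 'http://localhost:1234/v1/chat/completions',
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  };
}

// Usage (requires the local server to be running):
// const { url, options } = buildChatRequest('qwen3.5-4b', 'Hello');
// const reply = await fetch(url, options).then((r) => r.json());
```

No subscription or API key is involved; the request never leaves your machine.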

145 replies · 322 reposts · 3.7K likes · 553.3K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
@TukiFromKL With that said, frontier models today can one-shot a wide variety of complex, difficult work. When cheaper alternatives reach that level, which is likely quarters away, max a year, we could see amazing reductions in cost.
0 replies · 0 reposts · 0 likes · 15 views
Tyrell Downer
Tyrell Downer@TyrellDowner·
@TukiFromKL With a $200 CC subscription I can easily do what would cost a human $1000s to do in a day
1 reply · 0 reposts · 0 likes · 40 views
Tuki
Tuki@TukiFromKL·
The biggest lie in AI right now is that you need the most expensive model for everything MiniMax M2.5 just landed in Notion. Open-weight. Fraction of the cost. Been using it in OpenClaw daily — it handles the bulk of my work that people are spending $200/month on ChatGPT Pro to do. The smartest move in AI isn't picking the best model. It's knowing which cheap model is good enough.
MiniMax (official)@MiniMax_AI

MiniMax M2.5 is now live as the first open-weight model inside Notion Custom Agents, optimized for lightweight, high-frequency agent workflows. Pretty cool to see open-weight models show up where scale and cost really matter. 🤠

21 replies · 6 reposts · 120 likes · 12.6K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
Imo the benefit is having all your things (company goals, perf metrics, software, etc.) available to it: tell it where they are (clear, always-available pointers) and allow it to propose solutions, do work, explore unknown unknowns, etc., based on those things. Basically it's pre-baked and acts across any task you'd have an agent do with all your context, allowing it to think better for itself on a wide range of tasks. A.) "Transcribe this YT video and write an implementation guide, specifically focusing on what's applicable for us." With just that prompt OpenClaw can (depending on setup) reference your goals, this week's tasks, what your business does, your current metrics, etc. Whereas with other agents you can throw what's relevant into the prompt, but it's a lot more effort. Basically it just makes many things you could set up with CC or code (i.e. crons) much easier, and has much better memory / context retrieval.
1 reply · 0 reposts · 0 likes · 113 views
Alex Becker 🍊🏆🥇
Alex Becker 🍊🏆🥇@ZssBecker·
@noahkagan This is 100% accurate. You can do 90% of the damage needed with literally just VS Code + Claude Code, or just the online chat app.
9 replies · 6 reposts · 169 likes · 10.9K views
Noah Kagan
Noah Kagan@noahkagan·
OpenClaw is still overrated. Here's my hot take: Everyone asks around how you use it. Why? Cause no one has a use case they find invaluable besides making dashboards or trying to arbitrage Polymarket. Maintenance: I spend 80% of my time just keeping it online, remembering or fixing things. It forgets time and time again. It also sucks up all computer resources regularly. All the posts about SEO optimization, how they have 15 AI employees, etc. are from people not making money. Token costs > executive assistant cost ($50/hour). Turns out using better models and running tasks around the clock costs money. And trying to debug or explain things takes way longer than sending to my assistant (for now). It hurt my X account: when I had it run my X and then check my X for stats, I got throttled since it looks like a bot (cause it IS a bot!) and didn't realize it for a week. Every single person who's bragging about OpenClaw is mostly lying. When you ask them how it's really going, it isn't as it seems. Is it awesome with huge potential? Yes. Still some ways to go...
381 replies · 55 reposts · 1.1K likes · 145.7K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
@DorianDevelops It's the same hype triangle. OpenClaw is unreal, but the people broadcasting about it are generally just getting eyeballs to sell a commodity to their audience now or later.
0 replies · 0 reposts · 0 likes · 54 views
Dorian Develops
Dorian Develops@DorianDevelops·
I’m convinced most people hyping up OpenClaw aren’t actually using it. Maybe they installed it and played with it for a few minutes. But anyone who’s tried to seriously set it up to do all the “crazy” things AI hype accounts promise knows it’s mostly a gimmick. Change my mind!
269 replies · 11 reposts · 405 likes · 54.7K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
@vitobotta @ImSh4yy Yes, which is why you compare the two options at per-token cost / savings. Mac Studios are portrayed as a magic box that defies physics: once you buy one, you can run unlimited inference. Which is patently false.
0 replies · 0 reposts · 1 like · 232 views
Vito Botta
Vito Botta@vitobotta·
@ImSh4yy If you use Kimi for agentic coding, you will very likely use a lot more than a couple million tokens each day, so the cost of using the API will be much higher than what you suggest. But totally agree that self hosting doesn't make much sense.
3 replies · 0 reposts · 31 likes · 19.9K views
Shayan
Shayan@ImSh4yy·
You need 2 Mac Studios with M3 Ultra to run Kimi K2.5 locally at ~20 TPS. At that speed you can barely run a single agent, not a "swarm" or an "army." Kimi K2.5 costs $1.5/M output via OpenRouter. Even if you max out 20 TPS 24/7 that's ~1.7M tokens/day which is equivalent of $2/day or $78/mo. You spent $20,000 to save $78/month, limited to one agent at a time, zero scalability, and hardware that's obsolete in 12 months. Your claim of "$20,000/month in API calls" is off by a factor of 256. You can drop to smaller models for higher speeds, but the API prices drop just as fast, Grok 4.1 Fast is $0.50/M output.
Alex Finn@AlexFinn

I'm sick and tired of the people who don't understand why I spent $20,000 on this setup, and plan on spending another $100,000 by the end of the year. IT DOES NOT MATTER THAT LOCAL MODELS AREN'T AS GOOD AS OPUS 4.6. That is not the point. The point is that being able to run a swarm of local AI agents powered by local AI models unlocks a world you can't imagine. A world never discovered by humanity before. Right now, as you read this post, I have multiple local AI models reading thousands of posts on X and Reddit, hunting for challenges to solve. Those local AI models are feeding hundreds of challenges a day to a manager model. The manager model (Henry) decides what the company (Alex Finn Global Enterprises) will build. The company is constantly working. Constantly researching. Constantly building. Constantly shipping. If I did this with cloud models I'd be spending $20,000 a month on API calls. With my setup, it's free. I have an army on my desk. Never resting. Never eating. Never complaining. Always conquering. Here is your problem: it's not that you don't understand this. You don't want to understand this. You don't want to think this is possible. Your brain doesn't want to believe this is the world we now live in. It is. And the faster you can accept this and get on board, the faster you can enter the new society. Otherwise, you will forever be doomed to the permanent underclass. Make your choice.

352 replies · 202 reposts · 4.7K likes · 841.8K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
@AlexFinn @bernhard_me It's a sleight of hand. You're not running Opus 24/7. You're running models that are SIGNIFICANTLY CHEAPER via API. Which raises the question: why couldn't you run them 24/7 via API? How much would it cost via API to get the same output?
0 replies · 0 reposts · 1 like · 163 views
Alex Finn
Alex Finn@AlexFinn·
Here is the part you and most of the world don't get yet (you will, tho): when you can run an AI model 24/7/365 in ClawdBot, you unlock a whole new set of functionality. If you ran Opus 24/7, you'd be spending tens of thousands a month. With local models, I can literally have it reading X and Reddit all day and night, finding challenges to solve, then building solutions to those challenges autonomously. If this were a simple chat app where I'm asking it questions a few times a day, you're right, this would be stupid. But I'm building autonomous agents that quite literally work 24/7. Not possible with Opus. Only possible with local models.
62 replies · 5 reposts · 338 likes · 34.3K views
Bernhard
Bernhard@bernhard_me·
Genuine question @AlexFinn: your Mac Studio still runs Opus via API as the brain, right? You said yourself Henry uses "Opus as its brain and local models as employees." So the $10K Mac Studio doesn't replace the $300-750/month API bill. It adds to it without adding value, doesn't it? A $599 Mac Mini or even a $5/month VPS runs the OpenClaw gateway just fine. The intelligence comes from the API, not the hardware. The local models on your Mac Studio handle what, exactly? Basic triage tasks that a 13B model on a Mac Mini could do equally well? What am I missing? What other use case is there? I get the content angle: a Mac Studio "data center" makes a great video. But from a pure architecture standpoint, you're spending $10K on a machine whose main job is forwarding messages to Anthropic's servers. Apologies, and please correct me if I'm wrong here; that is the understanding I got from your last post, and there may be features and functionality I don't see yet. I love your posts and your insights.
Alex Finn@AlexFinn

Saturday night. 6 hours of sleep over the last week. My autonomous agent company having an emergency meeting on the left. My ClawdBot giving them new tasks on the right. All being powered by local models in my Mac Studio data center. I refuse to be in the permanent underclass.

79 replies · 3 reposts · 288 likes · 112.9K views
Tyrell Downer
Tyrell Downer@TyrellDowner·
If it were a magic machine, as it's portrayed, where once you buy it you can just use 100M tokens a day for free, then it'd be insane not to buy one. But if cost, rather than privacy etc., is your concern, then this is merely sensationalism.
0 replies · 0 reposts · 0 likes · 133 views
Tyrell Downer
Tyrell Downer@TyrellDowner·
TL;DR: if you run inference every second of every day on the Mac Studios, it will take you ~14 years to break even with API costs. For the Kimi K2.5 case, 24 tok/sec is ~2M tokens per day, which is ~$2-4 per day of API costs, or $60-120/mo. That's $20k of capex with a ~14-year payback period. It's good that local is an option to prevent monopoly, but the idea that it's a one-time payment and you make all the LLM calls you want for free afterward is horribly misguided. You get ~2M tokens per day if you're pushing it hardcore. If you adjust for smaller models getting better and hardware getting more sensible for the cost, it looks more reasonable as a long-term investment. Important to remember that inference requiring less compute will just make the cloud cheaper too. Right now, unless you're going Opus or Codex, which you can't run locally anyway, it is generally significantly cheaper to use the cloud.
Alex Finn@AlexFinn
2 replies · 1 repost · 4 likes · 1.4K views
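The thread's break-even arithmetic can be checked in a few lines, using Shayan's throughput figure (20 tokens/sec) and the $2-4/day API-cost range quoted above:

```javascript
// Back-of-envelope payback period for $20k of local hardware vs. API calls.
// All inputs come from the thread above.
const tps = 20;                             // sustained local tokens/second
const tokensPerDay = tps * 86_400;          // 1,728,000 tokens/day, running flat out
const capex = 20_000;                       // hardware cost in dollars
const [costLow, costHigh] = [2, 4];         // equivalent API cost, $/day
const paybackYearsBest = capex / costHigh / 365;  // fastest payback, ~13.7 years
const paybackYearsWorst = capex / costLow / 365;  // slowest payback, ~27.4 years
console.log({ tokensPerDay, paybackYearsBest, paybackYearsWorst });
```

Even at the high end of the daily-cost range, payback is well over a decade, which matches the ~14-year figure above.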
Tyrell Downer retweeted
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
China is generating 40% more electricity than the US & EU combined. In the global race where energy = intelligence, we need to start waking up.
884 replies · 1.3K reposts · 8.6K likes · 1.4M views
Tyrell Downer
Tyrell Downer@TyrellDowner·
Foresight into where things will go wrong is an unbelievably underrated trait in engineering (where users will get confused, where it will become difficult to discern signal in observability, where systems could fall into infinite loops). Most engineers who are great on paper lack it, and much time and capital is wasted because of that lack.
0 replies · 0 reposts · 1 like · 99 views
Tyrell Downer retweeted
elvis
elvis@omarsar0·
Claude Code plugin to persist memory across sessions. 3.7K⭐️ It's built with hooks, uses SQLite for session storage, and supports both semantic and keyword search.
28 replies · 116 reposts · 1.2K likes · 84.8K views
Tyrell Downer retweeted
Jarred Sumner
Jarred Sumner@jarredsumner·
when will @SlackHQ add syntax highlighting to code blocks? Discord has had this for years
71 replies · 27 reposts · 1.5K likes · 106K views
Tyrell Downer retweeted
ByteRover
ByteRover@ByteroverDev·
10x context for Claude Code, Cursor, and 10+ other AI IDEs with an open-source memory layer. Explore Cipher → the first open-source memory layer for coding agents (currently at 2k+ ⭐ in 1 month). Built by the ByteRover team. 🧠 Real-time, context-relevant memory retrieval that adapts to your growing, complex codebase with semantic search. 🧠 Dual memory layer that captures what matters for your agent's context: system 1: programming concepts, past interactions with the LLM, business logic; system 2: reasoning steps of the model. 🤝 Easily share context across your dev team in real time. 🔌 MCP integration with any IDE you want. Let's see Cipher in action 👇 github.com/campfirein/cip…
21 replies · 37 reposts · 132 likes · 29.8K views