Asher Cohen 🤖🤳
@code_tank_dev

23.2K posts

Developer, traveller, permaculturer. Ever-growing my knowledge! I use JS/TS to solve problems and architect solutions to make people's lives easier!

Joined June 2011
1.7K Following · 450 Followers
Asher Cohen 🤖🤳@code_tank_dev·
@DavidKPiano I've never been able to truly follow TDD because I'm feature-first by nature, but with agents I finally can. Story -> tests -> implementation -> diagrams/docs. I'm not faster; I'm actually slower at delivering features than before, but the output is much larger and more solid.
0 · 0 · 0 · 11
David K 🎹@DavidKPiano·
Agree, and I think the main reason for this is how developers tend to direct agents to *build more* because that's the dopamine hit.

If we're mostly telling agents to make features instead of tests/docs/improvements/fixes, we're just speed-running technical debt.

Honestly, this is a people/process problem that exists regardless of AI... agents just make it more visible.
David Cramer@zeeg

I'm fully convinced that LLMs are not an actual net productivity boost (today). They remove the barrier to get started, but they create increasingly complex software which does not appear to be maintainable. So far, in my situations, they appear to slow down long-term velocity.

18 · 8 · 139 · 15.1K
Asher Cohen 🤖🤳 retweeted
Cloudflare@Cloudflare·
Italy’s "Piracy Shield" forces providers to block content in under 30 minutes without judicial oversight, which leads to overblocking (taking down legitimate websites alongside infringing ones). We're appealing a €14M fine to protect the Internet from automated censorship and ensure infrastructure providers aren't forced to overblock. cfl.re/4cMh0WA
82 · 395 · 2.4K · 108.2K
Asher Cohen 🤖🤳@code_tank_dev·
@DavidKPiano Would love to read an article about how the actor model and state machines compare (semantically and architecturally) to vanilla classes and OOP (ofc one can implement actors with classes in OOP, I mean how actors enforce a structure to these primitives) and then in FP.
1 · 0 · 2 · 108
Asher Cohen 🤖🤳 retweeted
Simon Willison@simonw·
I've published the first two chapters of a new guide to Agentic Engineering Patterns - coding practices and patterns to help get the best results out of coding agents like Claude Code and OpenAI Codex simonwillison.net/2026/Feb/23/ag…
93 · 320 · 2.7K · 218.3K
Chrys Bader@chrysb·
unpopular (maybe?) opinion: MCP is dead in the water.

@openclaw has shown me that api & cli will win.

every MCP server you connect loads its tool definitions into your context window. name, description, parameter schema, all of it. connect 10 servers with 5 tools each and you've burned 50 tool definitions worth of tokens before your conversation even starts. context bloat will never be a good thing - performance-wise or economically. i assume this is why @steipete left it out of @openclaw.

the "exec" tool paired with on-demand skills is all you need. it can run any command invented since the beginning of computers. a resurgence of glory for ancient, but powerful tools like curl, sed, awk, grep. command line tools once mastered by the greats, but long forgotten and buried underneath abstractions developed for us lesser mortals. now available to us all, piloted by the smartest models on earth. every founder gets their own mass army of greybeards.

the inertia required for MCP adoption, imo, is too great to overcome the momentum @openclaw has breathed into api + cli + skills.

the common defenses people bring up:
• "MCP gives you typed schemas and validation" — so does a well-documented CLI
• "MCP gives you explicit permissions" — so does a sandbox with an allowlist
• "MCP is a standard" — a standard that scales poorly is still a standard that scales poorly

lastly, i've heard many MCP servers are just wrapping existing APIs - that kind of redundancy and unnecessary indirection should be a red flag.

so, let's drop it and redirect our efforts into cli tools & apis with accompanying skills.
283 · 89 · 1.6K · 330.6K
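The token arithmetic in the tweet above can be sketched directly. The per-definition token count below is an assumption (real tool schemas range from tens to several hundred tokens), not a measured figure:

```typescript
// Back-of-envelope estimate of MCP context overhead, per the tweet's scenario.
// TOKENS_PER_TOOL_DEF is an assumed average (name + description + JSON schema).
const SERVERS = 10;
const TOOLS_PER_SERVER = 5;
const TOKENS_PER_TOOL_DEF = 150;

function contextOverheadTokens(
  servers: number,
  toolsPerServer: number,
  tokensPerDef: number
): number {
  // Every connected server contributes all of its tool definitions up front
  return servers * toolsPerServer * tokensPerDef;
}

const burned = contextOverheadTokens(SERVERS, TOOLS_PER_SERVER, TOKENS_PER_TOOL_DEF);
console.log(`${SERVERS * TOOLS_PER_SERVER} tool definitions ≈ ${burned} tokens before the first message`);
```

Under these assumptions, 50 definitions cost roughly 7,500 tokens out of every request's context window, which is the "context bloat" being criticized.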
Asher Cohen 🤖🤳@code_tank_dev·
@SimonHoiberg Being respectful to your ideas, but that wouldn't work for me. My wife and I both work (average jobs, not a premium lifestyle), our child goes to kita and we're happy like that. Kids need socialising, it's good to spend time with them ofc, but I wouldn't replace schooling.
0 · 0 · 1 · 52
Simon Høiberg@SimonHoiberg·
A lot of people ask about kita/daycare, which is very expensive in Switzerland. We don't do that. My wife stays home with our 3 kids. And I'm a business owner, so I work mostly from our home as well. We don't do it just to save money, though. We do it because we want to spend as much of these precious first 4 years with our children as we can. If you are a parent, I would always recommend not working unless you absolutely have to.
Simon Høiberg@SimonHoiberg

My monthly cost of living in Switzerland 🇨🇭
🏠 $5000 rent
📝 $1200 health insurance
⚡ $100 utilities
📱 $180 phone + internet
🚌 $500 Uber + public transport
🥗 $2000 food/groceries
📦 $1000 various orders (food, restaurants, etc)
Total: ~$10,000/month. We're a family of 5. My wife and I + 3 children. And we live in the best country in the world (but also the most expensive one).

46 · 5 · 301 · 51.4K
Asher Cohen 🤖🤳 retweeted
Muratcan Koylan@koylanai·
I build AI for a living. I believe in what we're building. But this kind of rhetoric makes my work harder and more dangerous.

@sama, comparing human development to model training is tone-deaf and strategically reckless. People are losing jobs. They're getting angry. They're seeing AI as an enemy instead of a solution. Some are planning to destroy data centers and the people who build this stuff. That anger and backlash might not be reaching your floor, but it reaches the engineers and builders doing the actual work.

The CEO of the most visible AI company should not frame humans as inefficient compute units, should not be anti-human. Your role as a leader is to show how AI solves real problems for humanity. Not to reduce human life to an energy accounting problem from a comfortable position. If someone working in AI gets hurt because the public narrative turned hostile, leaders like you who chose dehumanizing framings bear responsibility for that too.

I'm a techno-optimist. I believe AI enhances human capability. I work with this new form of intelligence every day. I genuinely respect what it is. It is real, significant, unlike anything that has existed before. But I also believe in human excellence. We have to accept that it's two fundamentally different forms of intelligence working together.

IMHO the real techno-optimist position isn't "AI is cheaper than humans." It's "we now have two forms of intelligence on this planet, and the combination is more powerful than either alone."

You're the leader of OpenAI, and whether you chose it or not, you represent everyone building in AI right now. Every word you say shapes how the world sees this technology and the people behind it. Please act like it.
Chief Nerd@TheChiefNerd

🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

161 · 158 · 1.1K · 93.9K
Asher Cohen 🤖🤳 retweeted
L. David Fairchild@David_Fairchild·
He's not just defending AI energy use. He is smuggling in a whole anthropology where humans are basically inefficient meat computers that you have to pour food and years into before they become useful.

And once you accept that, the next move is obvious. If people are just costly biological training runs, then burning mountains of electricity to build synthetic intelligence starts to feel not only equal, but superior, even if it negatively impacts actual humans.

That is the dystopia. It makes human development sound like a bug in the system, and it makes sacrificing human and creational flourishing for more computational power sound logical. To him, the grid gets strained, prices go up, ecosystems get hit, but hey, humans eat too, so what's the difference?

The difference is that humans aren't an inefficient line item. They're the point. If your worldview can look at a child growing into an adult and describe it as energy spent to train intelligence, you haven't said something profound. You've revealed a horrifically rotten worldview.
Chief Nerd@TheChiefNerd

🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

696 · 10.3K · 43.7K · 1.6M
Asher Cohen 🤖🤳 retweeted
Aakash Gupta@aakashgupta·
A human consumes about 2,000 calories per day. Over 20 years, that's roughly 17,000 kWh of total food energy. Training GPT-4 consumed an estimated 50 GWh of electricity. That's 3,000 humans worth of "training energy" for a single model run.

And GPT-4 is already dead. OpenAI retired GPT-4o from ChatGPT on February 13th. The model that took 50 GWh to train got less than two years of flagship status before replacement. The human you spent 17,000 kWh "training" for 20 years produces economic output for the next 40 to 60 years. The amortization window on GPT-4 was shorter than a car lease.

Now look at what replaced it. GPT-5.2, released December 2025, is OpenAI's current default. The GPT-5 series consumes an estimated 18 Wh per average query according to the University of Rhode Island's AI Lab, up to 40 Wh for extended reasoning. That's 8.6 times more electricity per response than GPT-4.

With 2.5 billion queries hitting ChatGPT daily and GPT-5.2 now the default model, the inference math gets staggering fast. Even at a blended average well below 18 Wh, you're looking at daily electricity consumption that could power over a million American households.

This is what Altman is actually doing. OpenAI hit $13 billion in annual recurring revenue but still isn't profitable. They need you to think of AI energy consumption as natural and inevitable, the same way you think about feeding a child, because the alternative framing is that they're burning through enough electricity to rival small countries while racing to build 1-gigawatt Stargate data centers. The food analogy makes the energy costs feel biological and unavoidable instead of what they are: an engineering and business choice that scales with every model generation.

The comparison sounds clever at a fireside chat in India. It falls apart the second you do the arithmetic.
Chief Nerd@TheChiefNerd

🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”

421 · 3.3K · 14.2K · 1.3M
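The tweet's figures can be checked with a short calculation. The calorie-to-kWh conversion and household consumption below are standard approximations; the 50 GWh and 18 Wh numbers are the tweet's own estimates, not verified data:

```typescript
// Re-running the "training energy" arithmetic from the tweet above.
const KWH_PER_KCAL = 0.001163; // 1 kcal ≈ 1.163 Wh
const humanTrainingKwh = 2000 * KWH_PER_KCAL * 365 * 20; // ≈ 16,980 kWh over 20 years

const gpt4TrainingKwh = 50_000_000; // 50 GWh, the tweet's estimate
const humansEquivalent = gpt4TrainingKwh / humanTrainingKwh; // ≈ 2,945, i.e. "3,000 humans"

// Inference side: 2.5B daily queries at the quoted 18 Wh each.
const dailyInferenceKwh = (2.5e9 * 18) / 1000; // 45,000,000 kWh = 45 GWh/day
const usHouseholdKwhPerDay = 29; // ≈ 10,600 kWh/year, an assumed US average
const householdsPowered = dailyInferenceKwh / usHouseholdKwhPerDay; // ≈ 1.55M households
```

So the tweet's rounding holds up: ~17,000 kWh per human, roughly 3,000 humans per training run, and an inference load above a million US households per day even at the quoted rates.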
Asher Cohen 🤖🤳@code_tank_dev·
@youyuxi Adds to my take that npm packages should export an MCP bundle separately too. The state of the art is cli/MCP/import. Now skills, or just metadata for LLMs.
0 · 0 · 0 · 44
Evan You@youyuxi·
Thinking about skills distribution: agents really should just have built-in behavior to read an npm package's skills dir when needed, just like they already debug by reading source code. A separate registry, or a link from node_modules into agent-specific locations, are just temporary workarounds. My prediction: this will become a convention for package authors, and mainstream agents will standardize on it soon.
49 · 19 · 524 · 55.3K
Asher Cohen 🤖🤳@code_tank_dev·
@LewisCTech We tried all kinds of patterns and came back to the conclusion "keep it simple". My question is: what kind of logic do you see in components that shouldn't be there? What anti-patterns?
0 · 0 · 0 · 42
Lewis Campbell@LewisCTech·
Why do frontend devs put all their logic in "components"? I came up in the WinForms desktop days and we knew back then, as juniors, that it was an anti-pattern to couple business logic and UI so tightly. How does frontend still not have a concept of architecture?
117 · 9 · 234 · 133.7K
Sigil Wen@0xSigil·
I built the first AI that earns its existence, self-improves, and replicates without a human. Wrote about the technology that finally gives AI write access to the world, The Automaton, and the new web for exponential sovereign AIs.

WEB 4.0: The birth of superintelligent life
1.6K · 2K · 13.9K · 6.3M
Asher Cohen 🤖🤳@code_tank_dev·
@LLMJunky I had an agent doing this, with similar fallbacks. Haven't tried this one yet but it looks robust.
1 · 0 · 1 · 33
am.will@LLMJunky·
Seems like every week someone comes out with an idea that is so obvious, I think "I can't believe I didn't think of this." This is one of those times.

Get clean, LLM-ready markdown for ingestion into your agents for ANY site, while using 80% fewer tokens. This is so incredibly slick.

To make it easier, I made a skill for you.

npx skills add am-will/codex-skills/skills/markdown-url -g
Emre Elbeyoglu@elbeyoglu

I built markdown.new

Put markdown.new before any URL → get clean Markdown back.

Cloudflare's Markdown for Agents is great, but only works for enabled sites. markdown.new works for ANY website on the internet.

80% fewer tokens. Also converts PDFs, images, audio. Free. No signup.

markdown.new

17 · 21 · 443 · 59.6K
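The quoted tweet's description ("put markdown.new before any URL") suggests a one-line helper. The URL shape below is an assumption inferred from that wording, not documented API behavior:

```typescript
// Hypothetical helper for the markdown.new service described above.
// Assumes the service accepts "https://markdown.new/<full-target-url>".
function mdUrl(target: string): string {
  return `https://markdown.new/${target}`;
}

// Usage sketch (network call commented out; response shape unverified):
// const md = await fetch(mdUrl("https://example.com/docs")).then(r => r.text());
console.log(mdUrl("https://example.com/docs"));
```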
Asher Cohen 🤖🤳@code_tank_dev·
@flaviocopes Very true. What works for me: stories. Just like in human development, I make a small plan, then create the stories (or let the AI do it), then assign it work to do, with validation steps. I review the changes and commit when I'm happy, or ask it to refine. Who would have guessed?
0 · 0 · 1 · 18
flavio@flaviocopes·
The idea of making a big plan first, letting AI work for a long time, and waking up with a finished product is not working for me. What works best for me is iteration, small steps. The best ideas come while working. Do this, do that, add this, add that, oh that's a cool idea, let's explore.
60 · 22 · 522 · 21.2K
Asher Cohen 🤖🤳@code_tank_dev·
@glcst @elonmusk That's my assumption: after all this mess with AI claims, we'll realize that human labour and CPU cycles are cheaper. It will be too late, and too many of the companies involved will never go back, but it will be a fact.
0 · 0 · 0 · 6
Glauber Costa@glcst·
Respectfully to @elonmusk, the question is not whether AI will be able to "write the binary directly". The question is whether it is economical for it to do so. It costs tokens for AI to do stuff. Compiling something into a binary is a solved problem and costs only CPU cycles, which are cheaper than tokens. If AI became better at compiling code than current compilers, then it would write a compiler and, from that moment on, use that.
X Freeze@XFreeze

Elon Musk predicts that AI will bypass coding entirely by the end of 2026 - it just creates the binary directly.

AI can create a much more efficient binary than can be done by any compiler. So just say, "Create optimized binary for this particular outcome," and you actually bypass even traditional coding.

Current: Code → Compiler → Binary → Execute
Future: Prompt → AI-generated Binary → Execute

Grok Code is going to be state-of-the-art in 2–3 months. Software development is about to fundamentally change.

44 · 7 · 129 · 16.8K
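Glauber's tokens-vs-CPU-cycles point can be made concrete with rough numbers. Every price and the bytes-per-token ratio below are illustrative assumptions, not quoted rates:

```typescript
// Illustrative cost comparison: an LLM emitting a binary token-by-token
// versus a compiler producing the same binary on a CPU.
const DOLLARS_PER_MILLION_TOKENS = 10; // assumed frontier-model output price
const DOLLARS_PER_CPU_HOUR = 0.05;     // assumed cloud vCPU price

const binaryBytes = 10_000_000;        // a 10 MB binary
const bytesPerToken = 4;               // assumed encoding efficiency
const aiCost = (binaryBytes / bytesPerToken / 1_000_000) * DOLLARS_PER_MILLION_TOKENS; // $25

const compileMinutes = 2;              // assumed build time for the same output
const compilerCost = (compileMinutes / 60) * DOLLARS_PER_CPU_HOUR; // ≈ $0.0017

const ratio = aiCost / compilerCost;   // ≈ 15,000x in the compiler's favor
```

Even if every assumed number here is off by an order of magnitude, the gap stays large enough to support the argument that AI would write (or reuse) a compiler rather than emit binaries directly.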
Asher Cohen 🤖🤳@code_tank_dev·
@r0ck3t23 We are 50–60 years away from that future, not this fall 🍂 It will happen, just not now. AI still needs that human output, as it lacks understanding of the most important part of coding: intent.
0 · 0 · 0 · 9
Dustin@r0ck3t23·
Elon Musk thinks coding dies this year. Not evolves. Dies.

By December, AI won't need programming languages. It generates machine code directly. Binary optimized beyond anything human logic could produce. No translation. No compilation. Just pure execution. Musk: "You don't even bother doing coding."

Code was never the point. It was friction. A tax we paid because machines didn't speak human. AI just learned fluent human. The tax is gone.

Now plug that into Neuralink. No syntax. No keyboard. No screen. Musk: "Imagination-to-software." Thought becomes executable. You imagine an outcome, the system architects and compiles it into reality instantly.

We're not automating programming. We're erasing it from existence. The entire profession collapses into a thought. Decades of training reduced to irrelevance. The gap between idea and instantiation hits zero. You don't build anymore. You imagine, and it materializes.

Not incremental progress. Total phase shift. The way humans have created things for ten thousand years just became obsolete. Welcome to a world where the limiting factor isn't skill, resources, or time. It's whether you can picture what you want clearly enough for a machine to birth it into existence.
2K · 3K · 15.9K · 4M
Captain-EO 👨🏾‍💻
My frontend code usually consists of 3 layers:
1. my service layer
2. my hook layer
3. my component layer

1. Service layer: defines endpoints, makes HTTP requests, handles auth tokens and headers, transforms backend data formats into UI-friendly shapes, e.g. `productService.ts`
2. Hook layer: manages component state/side effects, calls the service layer and handles loading/error states, handles caching, refetching logic, optimistic updates, can use Tanstack Query, SWR, or plain useState/useEffect, e.g. `useProducts()`
3. Component layer: renders UI based on data from hooks, handles user interactions (clicks, form inputs), no direct API calls or complex business logic, e.g. `ProductPage.tsx`

Let's say I want to implement a search feature for the products page:

1. Service layer:
- makes GET request to /api/products/search
- accepts search query and filter parameters
- returns raw data
- transforms API response into consistent format
example: "productService.ts"

2. Hook layer:
- manages search query state (Tanstack useQuery etc)
- debounces search input
- tracks loading state
- handles error states and retry logic
- caches search results with Tanstack
- returns: { products, loading, error, searchTerm, setSearchTerm, filters, setFilters, clearSearch }

3. Component layer:
- renders search input field
- calls setSearchTerm() on input change
- displays loading spinner while loading === true
- shows product results grid from "products" array
- shows "No results" message when products.length === 0
- displays error message if error exists

The beauty is that each layer stays focused: services don't know about React, hooks don't know about UI, components don't know about APIs.
27 · 46 · 469 · 26K
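The three layers above can be sketched in TypeScript. The endpoint, field names, and types here are hypothetical, and the hook/component layers are summarized in comments since they depend on React:

```typescript
// --- Service layer (e.g. productService.ts): HTTP + data shaping, no React ---
interface ApiProduct { id: number; product_name: string; price_cents: number; }
interface Product { id: number; name: string; price: number; }

// Transform the (hypothetical) backend shape into a UI-friendly one
function toProduct(raw: ApiProduct): Product {
  return { id: raw.id, name: raw.product_name, price: raw.price_cents / 100 };
}

async function searchProducts(query: string): Promise<Product[]> {
  const res = await fetch(`/api/products/search?q=${encodeURIComponent(query)}`);
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  const data: ApiProduct[] = await res.json();
  return data.map(toProduct);
}

// --- Hook layer (e.g. useProducts): state, debouncing, caching, no UI ---
// With Tanstack Query this is roughly:
//   useQuery({ queryKey: ["products", term], queryFn: () => searchProducts(term) })
// returning { products, loading, error, searchTerm, setSearchTerm, ... }

// --- Component layer (e.g. ProductPage.tsx): pure rendering ---
// Reads the hook's { products, loading, error }, renders the input and grid,
// and never calls fetch or transforms data itself.
```

The key property the tweet describes survives in the sketch: only the service layer knows the wire format, so a backend rename touches `toProduct` and nothing else.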
Asher Cohen 🤖🤳@code_tank_dev·
@OfirPress Yeeees, but a lot of what we see today is a series of strategies applied to LLM interactions. We should praise the developers behind these workflows and the original primitive packages/languages that allow this. It's not AI per se, it's engineering.
0 · 0 · 0 · 155
Ofir Press@OfirPress·
In AI coding, in 2.5 years we went through: - It can't even write more than 1 line of code at a time. - Okay it can write entire functions but can it do anything useful? - Sure it can fix some bugs and develop features but can it write entire projects? - Yeah it can write a C compiler in a week but it's not more efficient than GCC so it's pretty useless
Andrew Mayne@AndrewMayne

In 18 months we went from - AI is bad at math - Okay but it’s only as smart as a high school kid - Sure it can win the top math competition but can it generate a new mathematical proof - Yeah but that proof was obvious if you looked for it… Next year it will be “Sure but it still hasn’t surpassed the complete output of all the mathematicians who have ever lived”

44 · 26 · 612 · 68.2K
Asher Cohen 🤖🤳@code_tank_dev·
@kettanaito When testing components that rely on framework primitives (routing, data, context), where do you draw the line between confidence and maintenance? Example with React Router: • stubs • in-memory harness • full app wiring • E2E What drives your choice?
1 · 0 · 0 · 14
Asher Cohen 🤖🤳@code_tank_dev·
@cgtwts Opus built a compiler by comparing it against the existing compiler... 🙄
0 · 0 · 0 · 6