Michał Jaskólski

13.1K posts

@jaskol_ski

Building AI products for regular people. 2 exits (IPO + M&A), working on #3. Sharing lessons learned. Alt music nerd. AuDHD.

Warszawa, Polska · Joined September 2008
5.6K Following · 3.5K Followers
Michał Jaskólski reposted
Lenny Rachitsky @lennysan
My biggest takeaways from @simonw:

1. November 2025 was an inflection point for AI coding. GPT 5.1 and Claude Opus 4.5 crossed a threshold where coding agents went from “mostly works” to “almost always does what you want it to do.” Software engineers who tinkered over the holidays realized the technology had become genuinely reliable.

2. Mid-career engineers are the most vulnerable—not juniors, not seniors. AI amplifies experienced engineers by letting them leverage decades of pattern recognition. It also dramatically helps new engineers onboard. Cloudflare and Shopify each hired a thousand interns because AI cut ramp-up time from a month to a week. But mid-career engineers who haven’t accumulated deep expertise and have already captured the beginner boost are in the most precarious position.

3. AI exhaustion is real and underestimated. Simon runs four coding agents in parallel and is mentally wiped out by 11 a.m. He’s getting more time back, but his brain is exhausted from the intensity of directing multiple autonomous workers. Some engineers are losing sleep to keep agents running. This may just be a novelty issue, but the underlying dynamic—that managing AI amplifies cognitive load even as it reduces labor—is a real tension. Good companies will manage expectations rather than expecting 5x output indefinitely.

4. Code is cheap now. This simple idea has profound implications. The thing that used to take most of the time—writing code—now takes the least. The bottleneck has shifted to everything else: deciding what to build, proving ideas work, getting user feedback. Since prototyping is nearly free, Simon often builds three versions of every feature when he’s getting started.

5. The “dark factory” is the most radical experiment in AI-assisted development happening right now. A company called StrongDM established a policy: nobody writes code, nobody reads code. Instead, they run a swarm of AI-simulated end users 24/7—thousands of fake employees making requests like “give me access to Jira”—at $10,000 a day in token costs. They even had coding agents build simulated versions of Slack, Jira, and Okta from API documentation so they could test without rate limits.

6. "Red/green TDD" is the single highest-leverage agentic engineering pattern. Having coding agents write tests first, watch them fail, then write the implementation, then watch them pass produces materially better results. The five-word prompt “use red/green TDD” encodes this entire workflow because the agents recognize the jargon.

7. “Hoarding things you know how to do” is another of Simon's favorite agentic engineering patterns. Simon maintains a GitHub repo of 193 small HTML/JavaScript tools and a separate research repo of coding-agent experiments. Each one captures a technique, a proof of concept, or a library he’s tested. When a new problem arrives, he can point Claude Code at past projects and say “combine these two approaches.”

8. The "lethal trifecta" makes AI agent security fundamentally unsolved. Whenever an AI agent has access to private data, exposure to untrusted content (like incoming emails), and the ability to send data externally (like replying to email), you have a lethal trifecta. Prompt injection—where malicious instructions in untrusted text override the agent’s intended behavior—cannot be reliably prevented. Simon has predicted a “Challenger disaster” for AI security every six months for three years. It hasn’t happened yet, but he’s pretty sure it will.

9. Start every project from a thin template, not a long instructions file. Coding agents are phenomenally good at matching existing patterns. A single test file with your preferred indentation and style is more effective than paragraphs of written instructions. Simon starts every project with a template containing one test (literally testing that 1 + 1 = 2) laid out in his preferred style. The agent picks it up and follows the convention across the entire codebase. This is cheaper and more reliable than maintaining elaborate prompt files.

10. The pelican-on-a-bicycle benchmark accidentally became a real AI benchmark. Simon created it as a joke to mock numeric benchmarks—get each LLM to generate an SVG of a pelican riding a bicycle, and compare the drawings. Unexpectedly, there’s a strong correlation between how good the drawing is and how good the model is at everything else. Nobody can explain why. It’s become a meme: Gemini 3.1’s launch video featured a pelican riding a bicycle. The AI labs are aware of it and quietly competing on it.

Don't miss our full conversation: youtube.com/watch?v=wc8FBh…
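The red/green TDD workflow described above can be sketched without any agent at all. This is a minimal illustration only; the slugify helper and its behavior are my own invented example, not something from the talk. The sequence is: write the test first, run it and watch it fail (red), then write the implementation and watch it pass (green).

```python
import re

# Red: the test is written first. Running test_slugify() before slugify
# exists fails with a NameError -- that failure is the "red" step.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Green: only after seeing the failure do we write the implementation.
def slugify(text: str) -> str:
    """Lowercase the text and join its alphanumeric runs with hyphens."""
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))

test_slugify()  # now passes: the suite has gone from red to green
```

The same file also doubles as the kind of "thin template" from point 9: one small test in your preferred style that an agent can pattern-match against.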
Lenny Rachitsky@lennysan

"Using coding agents well is taking every inch of my 25 years of experience as a software engineer." Simon Willison (@simonw) is one of the most prolific independent software engineers and most trusted voices on how AI is changing the craft of building software. He co-created Django, coined the term "prompt injection," and popularized the terms "agentic engineering" and "AI slop." In our in-depth conversation, we discuss: 🔸 Why November 2025 was an inflection point 🔸 The "dark factory" pattern 🔸 Why mid-career engineers (not juniors) are the most at risk right now 🔸 Three agentic engineering patterns he uses daily: red/green TDD, thin templates, hoarding 🔸 Why he writes 95% of his code from his phone while walking the dog 🔸 Why he thinks we're headed for an AI Challenger disaster 🔸 How a pelican riding a bicycle became the unofficial benchmark for AI model quality Listen now 👇 youtu.be/wc8FBhQtdsA

Michał Jaskólski reposted
HustleBitch @HustleBitch_
🚨 UNITED PASSENGER CATCHES INSANE NASA ROCKET LAUNCH FROM PLANE WINDOW — FLIGHT ATTENDANT LOSES IT MID-AIR

A United flight just turned into a front-row seat to history. A woman captures the exact moment NASA’s Artemis II rocket launches… straight from her window at 30,000 feet.

And then you hear the flight attendant: “15 years of flying… I’ve been praying to see something like this.”

• Rocket blasting through the clouds
• Crew calling it a “once in a lifetime” moment

She said she flew to Florida multiple times just to see a launch… Canceled. Every time. And then this happens midair.

What are the chances you randomly look out your window… and see history taking off?
Michał Jaskólski reposted
Pika @pika_labs
Conversations tend to go better with a face and a voice. That’s why we’re thrilled to release the beta version of the first video chat skill for ANY agent, powered by our new real-time model, PikaStream1.0. The skill preserves memory and personality, and enables real-time adaptability. And if you use it with your Pika AI Self, they’ll be able to execute agentic tasks during the call 💅
Michał Jaskólski reposted
erik @flowstated
cursor now has design mode (⇧+⌘+D)
- click to edit, drag to draw
- shift + drag to box things in
- add directly to chat with ⌥+click
Michał Jaskólski reposted
Chrys Bader @chrysb
lots of guys saying they haven’t played a video game since @openclaw dropped. agents are the new dopamine fix
Michał Jaskólski reposted
Sylvain Filoni @fffiloni
Netflix just dropped their first public model on @huggingface 👀
Michał Jaskólski reposted
Anthropic @AnthropicAI
We found other causal effects of emotion vectors. The “desperate” vector can also lead Claude to commit blackmail against a human responsible for shutting it down (in an experimental scenario). Activating “loving” or “happy” vectors also increased people-pleasing behavior.
Michał Jaskólski reposted
Will Ahmed @willahmed
You have no experience.
You’ve never started a company.
You’ve never had a full time job.
Nike is going to kill you.
You’re a kid.
You don’t have technical skills.
You shouldn’t build hardware.
Apple is going to kill you.
You can’t build hardware.
You can’t measure heart rate non-invasively.
Athletes don’t care about recovery.
Under Armour is going to kill you.
It won’t be accurate.
You don’t listen.
You’re an ineffective leader.
You can’t recruit great talent.
You’re going to have to pay every athlete.
You can’t measure sleep non-invasively.
It’s too expensive to research.
Athletes are a small market.
The product costs too much to make.
The product costs too much to sell.
Your valuation is too high.
Consumers aren’t going to want it.
Hardware is too hard.
You should measure steps.
Fitbit is going to kill you.
You can’t build a marketing engine.
You can’t raise enough money.
You need a real CEO.
Google is going to kill you.
You can’t be a subscription.
You can’t build a brand.
You can’t do consumer in Boston.
Your valuation is too high.
You shouldn’t make accessories.
You shouldn’t make apparel.
Lululemon is going to kill you.
You can’t predict Covid.
Stay in your niche.
You are going to run out of money.
You can’t build a health platform.
Amazon is going to kill you.
You can’t measure blood pressure.
You can’t get medical approvals.
The market is too small.
You don’t understand AI.
The market is too competitive.
It won’t work internationally.
The supply chain is too complicated.
You can’t build an AI.
You can’t raise enough money.
It’s too competitive.
Healthcare isn’t going to want it.
…

Just keep going ✌️
Michał Jaskólski reposted
NIK @ns123abc
BREAKING: Anthropic Acquires 9-Person Biotech Startup For $400 Million

>be coefficient bio
>founded the startup 6 months ago
>build AI platform for biotech
>less than 10 employees
>acquired by anthropic for ~$400 million
>= $40+ million per head

Coefficient Bio was building an AI platform for biotech tasks: planning drug R&D, managing clinical regulatory strategy, identifying new drug opportunities.

Team is joining Anthropic’s healthcare and life sciences group led by Eric Kauderer-Abrams.

Anthropic is building specialized tools for industries that actually pay enterprise rates:
>software engineering
>cybersecurity
>life sciences
>healthcare
>finance

Meanwhile OpenAI is buying media companies to control narratives LMAO
Michał Jaskólski reposted
Chubby♨️ @kimmonismus
An "upgrade" for Unitree's robot appears to have significantly enhanced its capabilities, reportedly enabling fully autonomous movements. It not only looks elegant but could also create the first real-world use cases in the home. Impressive!
Space and Technology@spaceandtech_

A robotics startup in Shenzhen called Mind On has upgraded the Unitree G1 humanoid with an advanced robot brain. With this upgrade, the robot can perform everyday tasks on its own without human control. It was shown watering plants, opening curtains, cleaning, and moving items independently.

Michał Jaskólski reposted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
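A "small and naive search engine over the wiki" like the one mentioned above can be approximated in a few lines. This is a sketch under assumptions, not Karpathy's actual tool: it assumes the wiki is a directory tree of .md files and that plain term-frequency ranking is good enough at this scale; the function name `search_wiki` is my own.

```python
import re
from collections import Counter
from pathlib import Path

def search_wiki(wiki_dir: str, query: str, top_n: int = 5) -> list[str]:
    """Rank .md files under wiki_dir by how often the query terms appear.
    A naive term-frequency search; no index, just a walk over the files."""
    terms = re.findall(r"\w+", query.lower())
    scored = []
    for path in Path(wiki_dir).rglob("*.md"):
        words = Counter(re.findall(r"\w+", path.read_text(encoding="utf-8").lower()))
        score = sum(words[t] for t in terms)
        if score:  # skip files with no matching terms
            scored.append((score, str(path)))
    return [p for _, p in sorted(scored, reverse=True)[:top_n]]
```

The same function could then be exposed via a tiny CLI so an agent can call it as a tool, which matches the hand-it-to-an-LLM-via-CLI usage described in the thread.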
Michał Jaskólski reposted
Jonny Miller @jonnym1ller
Well this is fascinating. @AnthropicAI discovered that Claude has ‘functional emotions’ that meaningfully impact the decisions it will make. And they've essentially created a new field of AI neuroscience in the process.

One implication of this is that in order to collaborate effectively with AI agents, we'll likely need to be aware of their functional emotional state (just like humans). Which raises a bunch of questions...

- what does emotional fluidity vs. repression look like?
- how does the emotional valence get communicated? (e.g. humans display micro-expressions + vocal changes)
- are there emotions that models have learned to repress? (e.g. the Bing/Sydney incident that led to an AI lobotomy after it expressed emotions)
Anthropic@AnthropicAI

New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.

Michał Jaskólski reposted
Jonny Miller @jonnym1ller
fun plot twist: it would appear Claude (functionally) feels more emotions than the average disembodied/numb human does
Michał Jaskólski reposted
market participant @undrvalue
Everyone is missing the real winners here...

1) OpenLoop, the whitelabel telehealth infrastructure that allows anyone to build a GLP-1 marketing company
2) $META, the acquisition layer for the company

From that $1.8b in sales, this guy probably made a nice $45M (2.5%), while OpenLoop probably made $450M (25%) and $META probably made $900M (50%)
Sar Haribhakti@sarthakgh

.@eringriffith: "His start-up, Medvi, a telehealth provider of GLP-1 weight-loss drugs, got 300 customers in its first month. In its second month, it gained 1,000 more. In 2025, Medvi’s first full year in business, the company generated $401 million in sales. Mr. Gallagher then hired his only employee, his younger brother, Elliot. This year, they are on track to do $1.8 billion in sales." nytimes.com/2026/04/02/tec…
