Ole Tillmann

4.1K posts


@oletillmann

Innovation consultant • Executive communication coach • Professional presenter • Startup advisor @apxberlin • First computer: C64. Founder: @peakberlin • 🍦

Berlin · Joined December 2010
2.9K Following · 1.2K Followers
Ole Tillmann reposted
Ethan Mollick
Ethan Mollick@emollick·
Big deal paper here: a field experiment on 515 startups, half of which were shown case studies of how startups are successfully using AI. Those firms used AI 44% more, had 1.9x higher revenue, and needed 39% less capital. 1) AI accelerates businesses. 2) The challenge is understanding how to use it.
Hyunjin Kim@hyunjinvkim

🚨 Excited to share a new working paper! 🚨 AI can improve individual tasks. But when does it improve firm performance? Our paper proposes one key friction firms face: the "mapping problem" -- discovering where and how AI creates value in a firm's production process. 🧵1/

Ole Tillmann reposted
Eli Lifland
Eli Lifland@eli_lifland·
AI timelines update: @DKokotajlo and I have moved our timelines earlier by ~1.5 years over the last 3 months, primarily due to (a) expecting faster time-horizon growth, and (b) coding agents impressing in the real world. During 2025, we had updated toward longer timelines.
Ole Tillmann reposted
Stitch by Google
Stitch by Google@stitchbygoogle·
We are completely humbled by the amazing response to our launch last week! 🫶 Now we want to help you get the absolute best results from Stitch. In this new video, David East walks you through how to consistently get premium results. We also launched a new prompt enhancer (located under the ‘+’ menu) to help you quickly collaborate on your vision before you submit your first prompt.

Stitch doesn't replace the design process; it is a tool for fast exploration and refinement, and it is most effective when you step into the role of Creative Director. Here are David's top strategies for taking your designs from generic to amazing:

🧠 Start with Intent: Define exactly who the design is for and how you want them to feel before you start building.

🎨 Enhance your prompt: Use the new prompt enhancer (under the ‘+’ button) to learn design language and swap abstract words like "sporty" for tangible aesthetic descriptions like "high-end stationery" or "architectural limestone".

📐 Master Color Hierarchy: Treat colors as visual weight: Neutral for the canvas, Primary for ink, and Tertiary for your loudest accents.

Watch the full breakdown and see the transformation here 👇 images in 🧵
Ole Tillmann reposted
Craig Hewitt
Craig Hewitt@TheCraigHewitt·
Very bullish on open source and local models.

Imagine running a near-Opus-level model locally on that $600, 16GB Mac Mini you bought last month.

This 27B Qwen3.5 distill was trained on Claude 4.6 Opus reasoning traces and is putting up real numbers:
- beats Claude Sonnet 4.5 on SWE-bench
- keeps 96.91% on HumanEval
- cuts CoT (chain-of-thought) bloat by 24%
- runs in 4-bit quantization

Why this matters: local agent loops get a lot cheaper, faster, and more usable. Frontier models aren't going to keep subsidizing cheap tokens on subscriptions forever.

300K+ downloads already on HF. Link below 👇🏻 We're early.
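For a sense of what "runs in 4-bit quantization" looks like in practice on a machine like that Mac Mini, here is a minimal local-inference sketch using llama-cpp-python. The GGUF file name is a placeholder (the Hugging Face link referenced in these tweets is truncated in this capture), and the prompt and settings are arbitrary:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name is a placeholder: point this at whichever 4-bit build
# you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-27b-distill-q4_k_m.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to Metal/CUDA if available
)

out = llm(
    "Write a Python function that reverses a linked list.",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```

Back-of-the-envelope: 27B parameters at roughly 4.5 bits each is around 15 GB of weights, which is why a 16GB machine is plausible but tight for a model this size.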
Ole Tillmann reposted
Unsloth AI
Unsloth AI@UnslothAI·
This model has been #1 trending for 3 weeks now. It's Qwen3.5-27B fine-tuned on distilled data from Claude-4.6-Opus (reasoning). Trained via Unsloth. Runs locally on 16GB in 4-bit or 32GB in 8-bit. Model: huggingface.co/Jackrong/Qwen3…
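"Trained via Unsloth" refers to the Unsloth fine-tuning library. The tweet doesn't disclose the actual recipe, so the following is only a rough sketch of what a QLoRA-style fine-tune on distilled reasoning traces could look like; the base model, dataset file, and hyperparameters are illustrative, and trl/Unsloth argument names vary across library versions:

```python
# Rough sketch only: the tweet does not disclose the actual training recipe.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load a 4-bit quantized base model for QLoRA-style fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-32B-Instruct",  # placeholder base model
    max_seq_length=8192,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Each record is assumed to hold a "text" field: prompt + teacher reasoning + answer.
dataset = load_dataset("json", data_files="opus_reasoning_traces.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        max_steps=1000,
        output_dir="qwen-distill-lora",
    ),
)
trainer.train()
```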
Ole Tillmann
Ole Tillmann@oletillmann·
Which metaphor or analogy could help us better understand how AI integrates itself into organizational design? Is it like fascia? Maybe AI acts like the connective tissue that wraps around existing structures, connects silos, and allows them to glide frictionlessly. It acts as the organization's largest sensory organ, distributing tension and signals across the entire system.
Ethan Mollick
Ethan Mollick@emollick·
I am not sure "Forward Deployed AI Engineers" are going to deliver on what a lot of companies are hoping for. They are useful, yes, but AI applications are far less of a technical issue, and much more about rethinking the deep expertise & structure of your organization around AI.
Ole Tillmann
Ole Tillmann@oletillmann·
@ccatalini Hi Christian. Thank you very much for sharing your insights in this paper! I would love to interview you on my podcast for my German viewers. Would you be interested? Best regards from Berlin, Ole
Christian Catalini
Christian Catalini@ccatalini·
1/ Some Simple Economics of AGI—🔥🧵 Right now, there is a low-grade panic running through the economy. Everyone is asking the same anxious question: what exactly is AI going to automate, and what will be left for us?
Ole Tillmann
Ole Tillmann@oletillmann·
@Zhikai273 @karpathy Crazy!! And congrats. How long will it take until it plays like a world-class human player?
Zhikai Zhang
Zhikai Zhang@Zhikai273·
@karpathy Thanks! Glad it looks that real 🙂 This one is actually a real humanoid playing tennis.
Ole Tillmann reposted
Zhikai Zhang
Zhikai Zhang@Zhikai273·
🎾 Introducing LATENT: Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data. Dynamic movements, agile whole-body coordination, and rapid reactions; a step toward athletic humanoid sports skills. Project: zzk273.github.io/LATENT/ Code: github.com/GalaxyGeneralR…
Ole Tillmann
Ole Tillmann@oletillmann·
Hey Andrej, ran your autoresearch setup overnight on an H100 NVL. 12 experiments, here's what came out:

MATRIX_LR=0.05 confirmed optimal (val_bpb=1.084629). Tested 0.06 → slightly worse (1.084898).

Most interesting finding: UNEMBEDDING_LR seems to be a cliff edge. Dropping from 0.004 to 0.003 caused what looked like catastrophic degradation; val_bpb jumped to 1.201 (+0.116).

Weight decay, warmdown, and beta variations all came in worse than baseline.

Honest disclaimer: I have no idea if any of this is meaningful or just gibberish. I ran it because I found the project fascinating, not because I can properly evaluate the results. You'd know much better than me whether this points to something real or if I'm just reading noise. Either way it was a genuinely cool experiment. Thanks for sharing it! Happy to send the raw logs if useful.
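For anyone wanting to run a similar sweep, the shape of the harness is simple. This sketch assumes a train.py that accepts learning-rate flags and prints a final "val_bpb" line; none of that is the actual autoresearch interface, just an illustration:

```python
# Minimal sweep harness sketch over the two learning rates discussed above.
# The CLI flags, runner, and log format are illustrative assumptions.
import itertools
import re
import subprocess

def run_experiment(matrix_lr: float, unembedding_lr: float) -> float:
    """Launch one fixed-budget training run and parse the final val_bpb from stdout."""
    out = subprocess.run(
        ["python", "train.py",
         f"--matrix_lr={matrix_lr}", f"--unembedding_lr={unembedding_lr}"],
        capture_output=True, text=True,
    )
    return float(re.findall(r"val_bpb\s*=?\s*([\d.]+)", out.stdout)[-1])

# Grid around the values reported in the tweet above.
results = {}
for m_lr, u_lr in itertools.product([0.04, 0.05, 0.06], [0.003, 0.004, 0.005]):
    results[(m_lr, u_lr)] = run_experiment(m_lr, u_lr)

# Print best-first so cliff edges (like the 0.004 -> 0.003 drop) stand out.
for (m_lr, u_lr), bpb in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"MATRIX_LR={m_lr} UNEMBEDDING_LR={u_lr} -> val_bpb={bpb:.6f}")
```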
Andrej Karpathy
Andrej Karpathy@karpathy·
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)
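The loop described here (the agent edits the training script, each candidate gets a fixed five-minute run, improvements get committed to a feature branch) is easy to sketch in outline. This is not the actual repo code; the script name, log format, and agent hook are stand-ins, with the agent call left as a stub to be wired to a coding agent such as Claude Code:

```python
# Sketch of the autonomous loop described above, not the actual autoresearch repo.
import re
import subprocess

def run_training() -> float:
    """One fixed-budget (e.g. 5-minute) training run; returns final validation loss."""
    out = subprocess.run(["python", "train.py"], capture_output=True, text=True)
    # Assumes the script prints lines like "val_loss 1.0846"; stand-in log format.
    return float(re.findall(r"val_loss\s*=?\s*([\d.]+)", out.stdout)[-1])

def agent_edit(path: str) -> None:
    """Stand-in for the coding agent (e.g. Claude Code) rewriting the script."""
    raise NotImplementedError("wire your coding agent in here")

subprocess.run(["git", "checkout", "-b", "autoresearch"])  # work on a feature branch
best = run_training()

for step in range(100):
    agent_edit("train.py")
    loss = run_training()
    if loss < best:
        # Keep the edit: commit it so progress accumulates as git history.
        best = loss
        subprocess.run(["git", "commit", "-am", f"step {step}: val_loss {loss:.4f}"])
    else:
        # Revert to the last committed version and let the agent try again.
        subprocess.run(["git", "checkout", "--", "train.py"])

print(f"best val_loss: {best:.4f}")
```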
Ole Tillmann
Ole Tillmann@oletillmann·
Hi Andrej. Thanks for sharing. I have it running on an H100 on vast.ai. To point it out correctly: my OpenClaw assistant Bessy, named after our family's dead cat, runs it autonomously with Claude Code. All I did was set up the account, buy some compute on the H100, and let it run. Looove it. Exciting times. Best regards from Berlin, Ole
a16z
a16z@a16z·
"Not having a coding experience is becoming an advantage." Replit CEO Amjad Masad: "You don't need any development experience. You need grit. You need to be a fast learner." "If you're a good gamer, if you can jump in a game and figure it out really quickly, you're really good at this." "Coders get lost in the details." "Product people, people who are focused on solving a problem, on making money, they're going to be focused on marketing, they're going to be focused on user interface, they're going to be focused on all the right things." "I think this year it's gonna flip, and I think not having a coding background is gonna be more advantageous for the entrepreneur." @amasad with @jackhneel
Chubby♨️
Chubby♨️@kimmonismus·
Let that sink in. Anthropic has just published a study on AI and the labor market. There's a huge difference between what AI can do today and what it will theoretically be able to do in the future. This already poses a serious problem for those starting their careers in the field.
Dustin
Dustin@r0ck3t23·
Raw compute is becoming a commodity. Deep human expertise is becoming the ultimate moat.

OpenAI's Sébastien Bubeck just shattered the illusion that AI will equalize human capability. It won't. As models approach AGI, the barrier to entry for basic tasks drops to zero. But the ceiling for what's possible rises infinitely, if you actually know what you're doing.

Bubeck: "I think expertise and deep expertise in a scientific field is more important than ever."

This is the part most people aren't processing. If you don't have a foundational understanding of the physics, math, or engineering you're working with, you cannot push the model past its surface. You get trapped in a loop: typing prompts, getting answers, understanding neither, building nothing.

Bubeck: "The worry would be that there is even more of a separation between people who start relying too much on AI… and the people who are really studying precisely what's happening."

The fracture is already forming. And it's not the one anyone predicted. The future economy won't be divided between those who have AI and those who don't. Everyone will have AI. It will be divided between the people who understand the problem deeply enough to direct the system, and the people who just consume whatever it produces. One group builds with it. The other gets replaced by it.

The legacy system rewarded memorization and credentials. The AI era rewards comprehension so precise you can tell the machine exactly where it's wrong. That kind of knowledge doesn't come from prompting. It comes from years of hard study that most people are currently skipping because the machine makes it feel unnecessary.

That's the trap. The machine is the engine. But you have to understand the terrain.
Ole Tillmann reposted
Citrini
Citrini@citrini·
I spent 100 hours over the past week researching, writing, and editing the piece we just put out. It's a scenario, not a prediction like most of our work. But it was rigorously constructed; dismissing it outright requires the kind of intellectual laziness that tends to get expensive. And we've released it for free. Hopefully you enjoy it. citriniresearch.com/p/2028gic
Ole Tillmann reposted
Boris Cherny
Boris Cherny@bcherny·
I'm Boris and I created Claude Code. I wanted to quickly share a few tips for using Claude Code, sourced directly from the Claude Code team. The way the team uses Claude is different from how I use it. Remember: there is no one right way to use Claude Code; everyone's setup is different. You should experiment to see what works for you!