Florin Otto

741 posts

@florinotto

Founder @deeprooted_UK | Co-Founder @humansdotai | Serial entrepreneur & product executive, building the future by connecting web3, AI & the real world

London, England · Joined November 2012
545 Following · 859 Followers
Pinned Tweet
Florin Otto@florinotto·
The root of genius is God
5 · 0 · 12 · 1.2K
Florin Otto retweeted
Aakash Gupta@aakashgupta·
The conspiracy version of this is wrong. The real version is worse.

Anthropic published a postmortem last September documenting three separate infrastructure bugs that degraded Claude's quality for weeks. Routing errors sent requests to wrong server pools. A compiler bug corrupted token selection. An adaptive thinking system started under-allocating reasoning on complex turns. 30% of Claude Code users got misrouted during the affected period. None of that was intentional. All of it produced exactly the pattern in this chart.

Here's what actually drives the decline. Every AI company faces the same constraint: inference costs scale linearly with users but revenue doesn't. Quantization (compressing model weights from 16-bit to 8-bit or 4-bit) cuts GPU memory by 2-4x. Adaptive thinking allocation reduces compute per request. Batching groups requests to maximize throughput. Each optimization is individually rational. Each one shaves quality by a few percent. Stack five of them under peak load and users feel it.

The timing matches launches perfectly because launch day has minimum users on the new model and maximum GPU allocation per request. Three months later you have 10x the users on the same infrastructure. The quality delta between "launch day inference budget" and "Tuesday afternoon at peak load inference budget" is the entire gap in that chart.

Benchmarks miss this because benchmarks run on dedicated hardware with no load balancing, no quantization, no request batching. The model that scores 92% on MMLU in a lab scores 92% on MMLU in production too. But the user experience of interacting with that model through six layers of inference optimization at 4pm EST? That's a different product.

The real problem is that "intentional nerfing" gives companies too much credit. Intentional nerfing implies control. What's actually happening is that nobody fully understands how inference optimization degrades the long tail of capabilities until users report it weeks later.
Marcin Krzyzanowski@krzyzanowskim

"Anthropic, OpenAI and Google release their new models with high quality from day one, then slowly nerf them until the next model, so when the next model hits, it's perceived as a bigger jump than it actually is" sounds about right for what's happening
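The quantization trade-off the reply describes can be sketched in a few lines. This is a minimal illustration, not any provider's actual inference stack: symmetrically rounding fp16 weights to int8 halves memory while introducing a small, measurable reconstruction error.

```python
import numpy as np

def quantize_int8(w):
    # One scale per tensor: map the largest magnitude to 127.
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float16)  # "16-bit" weights

q, scale = quantize_int8(w.astype(np.float32))
w_hat = dequantize(q, scale)

memory_ratio = w.nbytes / q.nbytes  # fp16 -> int8 is exactly 2x smaller
rel_error = np.abs(w.astype(np.float32) - w_hat).mean() / np.abs(w.astype(np.float32)).mean()
print(memory_ratio, float(rel_error))  # 2.0, and roughly a 1% average error
```

Each such step is small on its own, which is exactly the post's point: the degradation only becomes visible once several of these optimizations are stacked under load.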

25 · 48 · 379 · 53.5K
Noise2Signal@noiseto_signal·
Most businesses think they're behind on AI. They are. But not for the reason they think.

The gap isn't ChatGPT, Claude or Gemini. The gap is agents. And here's what nobody says out loud: most of what's being called an "agent" right now isn't one. It's a rebranded chatbot that dies the moment it touches a real workflow. That's why 80-90% of agent projects fail in production. Agents as a concept work; the problem is people building the wrong thing.

The ones winning aren't building Jarvis. They're building boring.

Karaca (Turkish home goods retailer): 44% ROAS optimizing ad spend across 2,000 products.
Carta: Ledger reconciles 90% of transactions in minutes instead of days.
Logistics ops: saving $4.3M/year from fewer delays.

It's narrow, constrained and repetitive. Compounding every single day.

Gartner: 40% of enterprise applications will have agents built in by end of 2026, up from <5% in 2025. That's not a forecast. It's this year.

Businesses already moving didn't wait for perfect. They picked one boring task. Built a tight pipeline. Kept a human in the loop. Measured → scaled.

Stop chasing sci-fi. Start with boring that prints money. 🚀 #AIAgents #AgenticAI #EnterpriseAI #clauecowork #openclaw
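The "boring agent" pattern above (one narrow task, a tight pipeline, a human in the loop) can be sketched in a few lines. Every name here is hypothetical; this is the shape of the pattern, not a real framework:

```python
from dataclasses import dataclass

@dataclass
class Task:
    payload: str
    confidence: float
    status: str = "pending"

def classify(task):
    # Stand-in for the model step: auto-handle only high-confidence cases.
    task.status = "auto" if task.confidence >= 0.9 else "needs_review"
    return task

def human_review(task, approve):
    # The human-in-the-loop gate for everything the model is unsure about.
    task.status = "auto" if approve else "rejected"
    return task

tasks = [Task("invoice #1", 0.97), Task("invoice #2", 0.62)]
routed = [classify(t) for t in tasks]
auto = [t for t in routed if t.status == "auto"]
review = [t for t in routed if t.status == "needs_review"]
print(len(auto), len(review))  # 1 auto-handled, 1 escalated to a human
```

The moat is the measurement loop around this gate: as the model's accuracy on the narrow task improves, the confidence threshold can be lowered and the human reviews less.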
1 · 0 · 8 · 145
Noise2Signal@noiseto_signal·
Everyone thinks AI creative is one prompt. It finally is. I built the zero-friction path from viral clips to pro outputs. Drop any TikTok reference. Open Claw runs the full pipeline (Claude > Nano Banana > Kling 3.0). No prompt engineering. No tool-hopping. Just output. Full app soon. Comment "UNLOCK" and I'll DM you early access.
7 · 0 · 13 · 813
Noise2Signal@noiseto_signal·
Top: TikTok reference. Bottom: Kling 3.0 motion. Claude + Nano Banana + Kling 3.0. We're building the one-click version of this pipeline. Soon.
5 · 1 · 9 · 190
Noise2Signal@noiseto_signal·
#QuitGPT is trending. Thousands are switching this week. Most are jumping to whatever is trending without asking the only question that matters: which AI is actually built for the way you work? A reasoning engine won't help you if you need real-time research. An open-source model won't help you if you've never touched a terminal. I tested 30+ models last month, mapped the 4 that matter and what each one is actually best at. Save this for when you're ready to switch.
5 · 0 · 13 · 273
Noise2Signal@noiseto_signal·
30+ AI releases dropped in February. Everyone is hyped about the application layer. The real unlock happened at the model layer. I mapped the ones that actually matter. My top picks: Claude Opus 4.6, Perplexity Computer, GPT-5.3 Codex, Seedance 2. Bookmark this before it gets buried.
4 · 0 · 11 · 491
Noise2Signal@noiseto_signal·
One WhatsApp message >> test-ready creative in minutes. I wired Openclaw so a single message pulls context from Notion + Drive, generates structured directions, and outputs variations ready to test. No briefs, file hunting or coordination loops. The real unlock? You don't have to manually stitch together 5 different AI tools. Creativity didn’t change. Latency did. For high-velocity companies, speed is the moat.
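The single-message flow described above (one trigger pulls context, generates structured directions, and emits ready-to-test variations) reduces to a three-stage pipeline. This is an illustrative sketch, not Openclaw's real API; every function here is a hypothetical stand-in:

```python
def pull_context(message):
    # Stand-in for the Notion + Drive lookups; real connectors would go here.
    return {"brief": f"context for: {message}", "assets": ["logo.png"]}

def generate_directions(context):
    # Stand-in for the model call turning context into structured directions.
    return [{"angle": a, "brief": context["brief"]}
            for a in ("bold", "minimal", "playful")]

def render_variations(directions):
    # Stand-in for the creative-generation step; one output per direction.
    return [f"{d['angle']} variation" for d in directions]

def handle_message(message):
    # The whole pipeline behind one inbound WhatsApp message.
    return render_variations(generate_directions(pull_context(message)))

outputs = handle_message("new spring campaign")
print(len(outputs))  # three test-ready variations from a single trigger
```

The latency win the post claims comes from this shape: once the stages are wired together, nobody has to manually stitch the five tools between trigger and output.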
2 · 0 · 5 · 304
Noise2Signal@noiseto_signal·
Most founders build like it's 2036. They market like it's 2016.

We built the app MVP in 2 hours; then we needed to see if there was *actual* demand. You know the drill: ideate a storyboard, write scroll-stopping copy, make a vertical version for socials. Want to change scene order? Start again.

Then we paused. If we ship new product features daily (sometimes hourly), why does marketing production still feel like 10 years ago? Why are we dragging keyframes around when everything else in our business is autonomous?

So we rethought the process and built the video with code. The product video was not edited, it was engineered. Instead of screen recording the app in use, we rebuilt the interface in code. 917 frames. 30 frames per second. Every scene defined in React: Problem → Escalation → Solution → Feature walkthrough → Call to Action.

Results feel native because it is not a screen recording. It's the actual app, repurposed. First version too slow? "Claude, pls increase the pacing." Updated the UI? "Copy UI changes from the last commit and render." No re-recording. No engineering-marketing sync calls.

⬜️ The Stack We Used
Claude wrote the storyboard and orchestrated Remotion. NanoBanana produced the base image; Kling brought it to life. Remotion handled the component animations, scene transitions and video rendering using React. And when we needed a vertical version for socials? Remotion updated the styling to respect the new dimensions and Claude used FFmpeg to handle the orientation correctly.

Product Marketing in 2026
When product velocity increases but marketing velocity remains sluggish, growth is constrained by execution. If marketing output becomes programmatic you can: iterate instantly, maintain stylistic consistency, scale across formats without duplication, and treat distribution as an engineering problem.

This is a structural shift in how product marketing works. You cannot remove the creative side of marketing; that is the human element which provides the 'spark'. But the execution layer can be systemised. Teams that understand that early will move faster than everyone else.

DM me "REMOTION" and I'll send you the exact step-by-step (build + tools + gotchas).
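The frame math behind a "video as code" setup like this is simple to sketch. The scene names come from the thread; the durations are illustrative assumptions (the post only states 917 frames at 30 fps), so the total here will not match exactly:

```python
# Define scenes in seconds, derive per-scene frame ranges at a fixed frame rate.
FPS = 30

scenes = [  # durations are made-up placeholders, not from the post
    ("Problem", 6.0),
    ("Escalation", 5.0),
    ("Solution", 7.0),
    ("Feature walkthrough", 9.0),
    ("Call to Action", 3.5),
]

frame_ranges = {}
start = 0
for name, seconds in scenes:
    n = round(seconds * FPS)
    frame_ranges[name] = (start, start + n)  # half-open [start, end) in frames
    start += n

total_frames = start
print(total_frames, total_frames / FPS)  # 915 frames, 30.5 seconds
```

Changing the pacing is then a one-line edit to a duration, which is why "increase the pacing" becomes a prompt instead of a re-edit: the renderer just recomputes every range.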
2 · 0 · 6 · 263
Noise2Signal@noiseto_signal·
Kling 3.0 literally made this in ONE prompt. AI media is hitting an inflection point and 2026 is going to be a huge opportunity for creators who see it early.

⬛️ Quick read for builders + creators
Kling 3.0 isn't about flashy demos. It's about control + reliability. Multi-shot sequences, shot-reverse-shot, camera logic and pacing. For the first time, AI video actually mostly holds together instead of collapsing into slop.

⬛️ What blew me away:
> Prompt adherence is WAY up
> Character + environment consistency is no longer the bottleneck
> Motion + physics feel intentional, not random
> Native audio actually syncs (huge)
> This enables SCENES, not just 5s clips

⬛️ Important reality check:
> This is not "press button → perfect movie." AI can only do so much. You'll still need to review, tweak prompts, adjust pacing, and clean things up. Think co-director, not autopilot.
> Credit burns FAST; would love a low-res option for test mode to iterate cheaply

Net: Kling 3.0 feels like the moment AI video crossed from toy to tool. Solo filmmakers just got real leverage.
2 · 0 · 4 · 246
Florin Otto@florinotto·
Motion XAI starts from a simple idea: People aren’t data exhaust. They’re the real context. If systems don’t earn trust, they won’t get meaning. And without meaning, data is just noise.
0 · 0 · 0 · 48
Florin Otto@florinotto·
We collect health signals, behaviour, and habits. But the data is messy, disconnected, and often defined about people, not with them. When people don’t trust how it’s used, they stop participating.
1 · 0 · 0 · 73
Florin Otto@florinotto·
I attended WEF in Davos, and one thing became clear very quickly: We have built insanely powerful systems, but we still struggle to understand human meaning.
29 · 14 · 14 · 394
Florin Otto@florinotto·
@MOTIONxAI is building the next phase: an AI economy where people aren't mined for data. They take part and keep the value their experiences create.
0 · 0 · 0 · 63
Florin Otto@florinotto·
The AI gold rush ends with a massive data rush to own/licence niche, verified data sets. Humans are treated as raw material: scraped, modelled, and monetised without agency or ownership.
1 · 0 · 1 · 74
Florin Otto@florinotto·
Will.i.am said it perfectly. Lived experience is yours and it shouldn’t live on someone else’s database.
23 · 9 · 10 · 398