𝗙𝗹𝗼
@Snite_Flo

8.3K posts
Rouen / Rennes · Joined May 2016
22 Following · 37 Followers

Pinned Tweet
𝗙𝗹𝗼 @Snite_Flo
yippee
[image]
0 replies · 1 retweet · 4 likes · 167 views
𝗙𝗹𝗼 retweeted
Riot Rammus @Riot_Rammus
[image]
10 replies · 957 retweets · 15K likes · 179K views
𝗙𝗹𝗼 retweeted
Ruoyu Sun @RuoyuSun_UI
ICML Reviewer kindly suggested we cite a paper posted on arXiv in March 2026 as related work. For reference, the submission deadline of ICML was January 28, 2026. A fair request, assuming authors are now expected to survey not just the literature, but also the future.
12 replies · 29 retweets · 702 likes · 48.5K views
𝗙𝗹𝗼 retweeted
Bryan Johnson @bryan_johnson
@maximumpain333 This advice is dangerous. Chronic sleep fragmentation raises cortisol, accelerates epigenetic aging, and impairs glucose regulation. Insomnia is associated with depression, metabolic disease, and early mortality.
193 replies · 116 retweets · 6.8K likes · 170.5K views
𝗙𝗹𝗼 retweeted
Bryan Johnson @bryan_johnson
experimenting with a new sleep protocol
[image]
221 replies · 45 retweets · 3.6K likes · 401.7K views
Ikaros @ikarosalpha_
@bridgemindai wait, I'm receiving neither the email nor the extra usage credits. 5x Max plan. Is it a gradual rollout?
7 replies · 0 retweets · 21 likes · 4.7K views
BridgeMind @bridgemindai
The truth is finally out. Anthropic just emailed every Claude subscriber. The rate limits weren't a bug. Third-party tools like OpenClaw were putting "outsized strain" on their systems.

Anthropic's fix? Cut them off. Starting April 4, third-party harnesses no longer get your subscription limits. Pay-as-you-go only. To make up for it, every subscriber gets a one-time credit equal to their monthly subscription. I'm getting $200 in extra usage.

This is huge. If this is what was killing Claude Code rate limits for Max plan users, tomorrow should feel like a completely different product. I'll be testing Claude Opus 4.6 all day and reporting back. Stay tuned.
[image]
117 replies · 45 retweets · 722 likes · 73.4K views
𝗙𝗹𝗼 retweeted
Alex Volkov @altryne
PSA: If you've been running out of Claude session quotas on Max tier, you're not alone. Read this.

Some insane Redditor reverse-engineered the Claude binaries with MITM to find 2 bugs that could have caused cache invalidation. Tokens that aren't cached are 10x-20x more expensive and are killing your quota. If you're using your API keys with Claude this is even worse. This is also likely why this isn't uniform: while over 500 folks replied to me and said "me too", many (including me) didn't see this issue.

There are 2 issues compounded here (per the Redditor; I haven't independently confirmed this):

1st bug he found is a string-replacement bug in bun that invalidates cache. Apparently this has to do with the custom @bunjavascript binary that ships with the standalone Claude CLI. The workaround there is to run Claude with `npx @anthropic-ai/claude-code`.

2nd bug is worse: he claims that --resume always breaks cache. And there doesn't seem to be a workaround there, except pinning to a very old version (that will miss out on tons of features). This bug is also documented on GitHub and confirmed by other folks.

I won't entertain the conspiracy theories that Anthropic "chooses" to ignore these bugs because it gets them more $$$. They actively benefit from everyone hitting as many cached tokens as possible, so this is absolutely a great find, and it does align with my thoughts earlier. The very sudden spike in reporting, and the non-uniform nature (some folks are completely fine, some folks are hitting quotas after saying "hey"), definitely points to a bug.

cc @trq212 @bcherny @_catwu for visibility in case this helps all of us.
[image]

Quoting Alex Volkov @altryne:
My feed is showing me a bunch of folks who tapped out their whole usage limits on Mon/Tue. Is this your experience? Please comment, I want to understand how widespread this is

225 replies · 424 retweets · 5K likes · 1.6M views
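The quota math behind the thread's claim can be sketched numerically. This is an illustrative model only: the cost units, the assumption that the full context is resent each turn, and the `session_cost` helper are all hypothetical, and the 10x rate is the low end of the thread's 10x-20x claim.

```python
# Illustrative sketch, NOT Anthropic's actual billing logic.
# The thread claims uncached input tokens cost 10x-20x more than
# cached ones, so a cache-invalidation bug multiplies quota burn.

CACHED_RATE = 1.0     # hypothetical cost units per 1K cached input tokens
UNCACHED_RATE = 10.0  # low end of the thread's 10x-20x claim

def session_cost(context_tokens_k: float, turns: int, cache_works: bool) -> float:
    """Cost of re-sending the same context on every turn of a session.

    With a working prompt cache, only the first turn pays the uncached
    rate and later turns hit the cache. With the cache broken, every
    turn pays the uncached rate.
    """
    if cache_works:
        return context_tokens_k * (UNCACHED_RATE + (turns - 1) * CACHED_RATE)
    return context_tokens_k * turns * UNCACHED_RATE

# A 50K-token context over 20 turns:
ok = session_cost(50, 20, cache_works=True)       # 50 * (10 + 19) = 1450
broken = session_cost(50, 20, cache_works=False)  # 50 * 20 * 10 = 10000
print(f"working cache: {ok:.0f}, broken cache: {broken:.0f}, "
      f"ratio: {broken / ok:.1f}x")
```

Under these made-up rates a broken cache burns quota roughly 7x faster; with the 20x figure the gap roughly doubles, which is consistent with "a few prompts" exhausting a limit.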
𝗙𝗹𝗼 retweeted
Lydia Hallie ✨ @lydiahallie
We're aware people are hitting usage limits in Claude Code way faster than expected. Actively investigating, will share more when we have an update!
1.6K replies · 747 retweets · 13.6K likes · 4.2M views
𝗙𝗹𝗼 @Snite_Flo
@ojim_france What does this have to do with a picture of a man working out?
0 replies · 0 retweets · 1 like · 4 views
Observatoire du journalisme (Ojim)
🚨ARCOM wants to CENSOR more of the speech it deems "masculinist": the digital-media regulator now wants LEGISLATIVE changes to strengthen moderation of this content. This is a FIRST.
[image]
210 replies · 203 retweets · 469 likes · 81.9K views
𝗙𝗹𝗼 retweeted
Aakash Gupta @aakashgupta
Cursor is raising at a $50 billion valuation on the claim that its "in-house models generate more code than almost any other LLMs in the world." Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That's Moonshot AI's Kimi K2.5 with reinforcement learning appended.

A developer named Fynn was testing Cursor's OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot's head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi's and questioned Cursor's license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they'd hit a superior tradeoff. What the chart doesn't show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That's margin arbitrage on someone else's R&D dressed up as a benchmark slide.

The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display "Kimi K2.5" on the user interface. Cursor's ARR crossed $2 billion in February. That's roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot's last reported valuation was $4.3 billion. The company worth 12x more took the smaller company's model and shipped it as proprietary technology to justify a valuation built on the frontier-lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute, without disclosing the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn't enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation?

kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.

Quoting Harveen Singh Chadha @HarveenChadha:
things are about to get interesting from here on

247 replies · 549 retweets · 4.4K likes · 1.4M views
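The license-threshold arithmetic in the thread can be checked in a few lines. The ARR and threshold figures are taken as claimed in the thread, not independently verified:

```python
# Revenue math from the thread, figures as claimed there.
# The Kimi K2.5 Modified MIT License reportedly requires UI attribution
# once a product exceeds $20M in monthly revenue.

ARR = 2_000_000_000              # Cursor ARR claimed in the thread, USD/year
THRESHOLD_MONTHLY = 20_000_000   # attribution threshold, USD/month

monthly = ARR / 12               # convert annual run rate to monthly revenue
ratio = monthly / THRESHOLD_MONTHLY

print(f"monthly revenue ≈ ${monthly / 1e6:.0f}M, "
      f"{ratio:.1f}x the $20M attribution threshold")
```

This reproduces the thread's "roughly $167 million per month, 8x the threshold" (more precisely about 8.3x).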
𝗙𝗹𝗼 retweeted
JUMPERZ @jumperz
yeah, there's obviously something wrong with Claude and you can't convince me otherwise.. This was 3 prompts btw on Claude Max
[image]
165 replies · 24 retweets · 902 likes · 152.5K views
𝗙𝗹𝗼 retweeted
Prasenjit @Star_Knight12
claude is down. 95% of devs' productivity went to zero
[image]
59 replies · 23 retweets · 698 likes · 35.3K views
𝗙𝗹𝗼 retweeted
Veer Masrani @veermasrani
Claude Paid Plan Limits Are Suddenly Draining in Minutes

My Claude Pro usage limit has been getting exhausted within minutes, sometimes after just a few prompts. This started during the 2x off-peak promotion, but the limit bar now fills almost instantly. Others on Pro and Max are reporting the same behavior, while Anthropic's status page still shows all systems operational.
[image]
45 replies · 17 retweets · 261 likes · 22.4K views
𝗙𝗹𝗼 retweeted
NZ ☄️ @CodeByNZ
OpenAI's latest repo has Claude as the third top contributor 😭😂
[image]
152 replies · 349 retweets · 8.4K likes · 562.7K views
𝗙𝗹𝗼 retweeted
Google Research @GoogleResearch
Introducing TurboQuant: our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
[GIF]
1K replies · 5.8K retweets · 39K likes · 19.2M views
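The tweet doesn't describe how TurboQuant actually works, so the following is only a generic sketch of the idea behind KV-cache quantization (store keys/values as low-bit integer codes plus a per-vector scale instead of fp16), not Google's algorithm. The `quantize_int4` helper is hypothetical.

```python
# Generic KV-cache quantization sketch, NOT TurboQuant itself.
# Each cached key/value vector is stored as 4-bit integer codes plus
# one shared scale, instead of one fp16 number per element.

def quantize_int4(vec):
    """Symmetric per-vector 4-bit quantization: returns (codes, scale)."""
    scale = max(abs(x) for x in vec) / 7 or 1.0  # map values into -7..7
    codes = [round(x / scale) for x in vec]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct the approximate vector at attention time."""
    return [c * scale for c in codes]

vec = [0.12, -0.5, 0.33, 0.9, -0.07, 0.41, -0.88, 0.05]
codes, scale = quantize_int4(vec)
approx = dequantize(codes, scale)
err = max(abs(a - b) for a, b in zip(vec, approx))

# fp16 is 16 bits/value; int4 codes plus an amortized scale are roughly
# 4 bits/value, i.e. about 4x smaller. The claimed 6x with zero accuracy
# loss implies more aggressive techniques than this naive rounding.
print(codes, f"scale={scale:.3f}", f"max err={err:.3f}")
```

Naive rounding like this bounds the per-element error by half the scale, but it is lossy; the interesting part of any "zero accuracy loss" claim is precisely what replaces this rounding step.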
𝗙𝗹𝗼 retweeted
Claude @claudeai
New in Claude Code: auto mode. Instead of approving every file write and bash command, or skipping permissions entirely, auto mode lets Claude make permission decisions on your behalf. Safeguards check each action before it runs.
2.1K replies · 2.9K retweets · 39.3K likes · 7.6M views
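The announcement describes a two-layer design: the model makes the permission call, but safeguards check each action first. A toy illustration of that layering follows; everything in it, including `BLOCKLIST` and `auto_mode`, is hypothetical and is not Claude Code's actual rules or API.

```python
# Toy sketch of "model decides, safeguards veto", NOT Claude Code's
# implementation. Hard safeguards run before any model-approved action.

import re

# Hypothetical hard-blocked patterns, checked regardless of the
# model's own permission decision.
BLOCKLIST = [
    r"\brm\s+-rf\s+/",       # recursive delete from the filesystem root
    r"\bcurl\b.*\|\s*sh\b",  # piping a remote script straight into a shell
]

def safeguard_check(command: str) -> bool:
    """Return True if the command passes the hard safeguards."""
    return not any(re.search(p, command) for p in BLOCKLIST)

def auto_mode(command: str, model_approves: bool) -> str:
    """The model makes the call, but safeguards can still veto it."""
    if not safeguard_check(command):
        return "blocked by safeguard"
    return "run" if model_approves else "ask user"

print(auto_mode("ls -la", model_approves=True))                # run
print(auto_mode("rm -rf / --no-preserve-root", True))          # blocked by safeguard
print(auto_mode("git push --force", model_approves=False))     # ask user
```

The point of the layering is that "auto" never means "unchecked": a veto list (or any stricter policy) sits in front of every model decision.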
𝗙𝗹𝗼 retweeted
Bryan Johnson @bryan_johnson
[image]
203 replies · 233 retweets · 10.5K likes · 403.2K views
𝗙𝗹𝗼 retweeted
Ralph Müller @cartoucheralph
@LouisWitter You're not the cooler side of the pillow
3 replies · 7 retweets · 311 likes · 2.2K views
𝗙𝗹𝗼 retweeted
Nausicaa @pheacienne
They pick a cause so they can hate everyone who doesn't share it. There is in them a hatred without an object, a hatred that looks for pretexts, a self-hatred they project onto the world by politicizing their anxieties. Something Nietzsche described well:
[image]
12 replies · 133 retweets · 761 likes · 0 views
𝗙𝗹𝗼 retweeted
Bryan Johnson @bryan_johnson
@keikumata sleeping at 4 am is grinding
61 replies · 66 retweets · 6.7K likes · 114K views