
Data-drone
@BplLaw
ML & AI @databricks


i still think about how these models performed better in the cursor harness than in their native ones


anthropic's playbook, confirmed:

1. drop new model (Mythos Preview, today)
2. quietly make the old one erratic
3. charge the same price
4. blame the user when people notice

the data is in. 6,852 claude code sessions analyzed:

- thinking depth dropped 67%
- the habit of reading code before editing it: gone from an average of 6.6 reads to 2
- lazy-behavior violations: from zero to 10 per day

they went quiet for weeks. then boris cherny shows up on the github issue the moment the numbers went public. that's not accountability. that's pr management.

mythos drops today. opus 4.6 just became the "old model." same price, and noticeably dumber.

i'm glad i'm building around local models. gemma 4 / GLM runs on your machine. it doesn't get quietly worse when a new product launches. you can't shrinkflation a model you control.
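Metrics like the ones cited above (reads before an edit, violations per day) can be computed from session logs. A minimal sketch, assuming a hypothetical log format; the field names and numbers here are invented for illustration, not the actual analysis:

```python
# Hypothetical sketch: aggregating per-session behavior metrics from
# Claude Code session logs. The record layout is invented for
# illustration; real session logs will differ.
sessions = [
    {"reads_before_edit": 7, "lazy_violations": 0},
    {"reads_before_edit": 6, "lazy_violations": 0},
    {"reads_before_edit": 2, "lazy_violations": 3},
    {"reads_before_edit": 1, "lazy_violations": 4},
]

def avg_reads(records):
    """Average number of file reads preceding the first edit."""
    return sum(r["reads_before_edit"] for r in records) / len(records)

def total_violations(records):
    """Total count of flagged lazy-behavior events across sessions."""
    return sum(r["lazy_violations"] for r in records)

print(avg_reads(sessions))         # 4.0
print(total_violations(sessions))  # 7
```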


Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.



we’ve signed Zero Data Retention agreements with all providers for Go. all models now follow a zero-retention policy; your data is not used for training.



Qwen 3.5 27B (Dense) with Hermes Agent is REALLY GOOD

Tired of openclaw doing this all the time. Time for Hermes.



Starting Thursday, we'll be updating our revenue-sharing incentives to better reward the content we want on X: we will be giving more weight to impressions from your home region, to encourage content that resonates with people in your country, in neighboring countries, and among people who speak your language.

While we appreciate everyone's opinion on American politics, we hope this will disincentivize gaming the attention of US or Japanese accounts and instead drive diverse conversations on the platform. We invite creators to start building an audience locally. X will be a much richer community when there are relevant posts for people in all parts of the world.
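In the simplest reading, "more weight to impressions from your home region" is a weighted sum over impressions by region. A hypothetical sketch under that assumption; the weights, function name, and record layout are all invented, and X has not published its actual formula:

```python
# Hypothetical sketch of region-weighted impression scoring.
# Weights and record layout are invented for illustration.
HOME_WEIGHT = 2.0      # assumed boost for the creator's home region
NEIGHBOR_WEIGHT = 1.5  # assumed boost for neighboring countries
OTHER_WEIGHT = 1.0     # baseline for everywhere else

def weighted_impressions(impressions_by_region, home, neighbors):
    """Sum impressions, up-weighting the home region and its neighbors."""
    total = 0.0
    for region, count in impressions_by_region.items():
        if region == home:
            total += count * HOME_WEIGHT
        elif region in neighbors:
            total += count * NEIGHBOR_WEIGHT
        else:
            total += count * OTHER_WEIGHT
    return total

score = weighted_impressions(
    {"BR": 1000, "AR": 500, "US": 2000},
    home="BR",
    neighbors={"AR", "UY"},
)
print(score)  # 1000*2.0 + 500*1.5 + 2000*1.0 = 4750.0
```

Under weights like these, a creator earns more from local and regional impressions than from the same number of impressions gamed out of a distant audience.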

Running a 400B model on an iPhone! 0.6 t/s. Credit @danveloper @alexintosh @danpacary @anemll

I got a 1T-parameter model running locally on my MacBook Pro.

LLM: Kimi K2.5, 1,026,408,232,448 params (~1.026T)
Hardware: M2 Max MacBook Pro (2023) w/ 96GB unified memory
Running on MLX with a flash-style SSD streaming path + local patching.

This is an experimental setup and I haven’t optimized speed yet, but it’s stable enough that I’ve started testing it in an autoresearch-style loop. #LocalAI #MLX #MoE
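For a sense of why an SSD streaming path is needed: even at aggressive quantization, ~1.026T parameters cannot fit in 96GB of unified memory. A quick back-of-envelope calculation (the bit-widths are illustrative, and this counts raw weight storage only):

```python
# Back-of-envelope memory footprint for a ~1.026T-parameter model at
# various quantization levels. Bit-widths here are illustrative.
PARAMS = 1_026_408_232_448  # parameter count from the post
GIB = 1024 ** 3

def weight_bytes(params, bits_per_weight):
    """Raw weight storage, ignoring activations, KV cache, and overhead."""
    return params * bits_per_weight / 8

for bits in (16, 8, 4, 2):
    gib = weight_bytes(PARAMS, bits) / GIB
    print(f"{bits}-bit: {gib:,.0f} GiB")

# Even 2-bit weights (~239 GiB) exceed 96 GB of unified memory, so the
# weights must be streamed from SSD rather than held fully resident.
```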





bruh claude code remote control session does not connect half the time. i thought my wifi had issues.




I realized something else AI has changed about coding: you don't get stuck anymore. Programming used to be punctuated by episodes of extreme frustration, when a tricky bug ground things to a halt. That doesn't happen anymore.








