Carlo Daffara
@cdaffara
25.7K posts

NodeWeaver CEO and cofounder. Proud father, engineer, entrepreneur. Loves technology, cooking, IT economics.

Udine, London, Brussels · Joined April 2008
173 Following · 1.3K Followers
Carlo Daffara retweeted
AukeHoekstra@AukeHoekstra·
The final report on the Iberian Peninsula (Spain and Portugal) blackout is out. A lot of people will be blaming renewables and talking about inertia. But the cause was bad voltage control, and that's surprisingly easy to fix. Let me explain. entsoe.eu/publications/b…
Carlo Daffara retweeted
Reflex Robotics@ReflexRobot·
(Wheels + Elevator + Suction) > (Legs + Hips + Fingers)
Carlo Daffara retweeted
Hernan Cortes@CyberPunkCortes·
Walmart air dropping me snacks like I’m a paratrooper in Bastogne.
Carlo Daffara retweeted
the tiny corp@__tinygrad__·
Working on a spec for tinygrad. There are still a few things duplicated and messy in the code (dtype.vec should be shape, multi shouldn't be a thing) but it's getting close to complete. The spec currently has 40 ops.
Carlo Daffara retweeted
Anush Elangovan@AnushElangovan·
Inspired by the @__tinygrad__ userspace AMD driver, I clauded a userspace driver for some stress testing of SDMA and compute/comms overlap debugging. I didn't open the editor once. Agents are the great equalizer in software. And speed is the moat. github.com/ROCm/TheRock/t…
Carlo Daffara@cdaffara·
@SG_Posters The most beautiful images I had on my timeline this week. What a wonderful way of composing space and light.
Eileen Steinbach@SG_Posters·
10 years ago I started creating movie posters for fun, today it's my full-time job.
Carlo Daffara retweeted
the tiny corp@__tinygrad__·
@balajis export controlling NVIDIA was the nail in the coffin. we can only ship our top tinybox to a small set of whitelisted countries. in 2-3 years, we'll be shipping tinyboxes full of Chinese chips.
Carlo Daffara retweeted
the tiny corp@__tinygrad__·
AMD open sourced rocprof-trace-decoder! This was one of the last pieces of closed source code on the CPU side -- the definitions of the hardware SQTT traces are now public. AMD's tracing infrastructure is better than NVIDIA's, it can trace the timing of every instruction.
Carlo Daffara retweeted
khalid kaime@kaime·
Our experiment stopped producing useful data. Basically, participants increasingly didn't want to work without AI, even at $50/hr, so the sample drifted toward tasks where AI access doesn't matter. This biases our estimate downward. The pilot's 20% slowdown result is no longer valid and shouldn't be cited. The world has since changed, and so has our understanding.

I think there are roughly two wrong takeaways:
1. "The experiment failed, therefore AI's effect is enormous and unmeasurable." Way too strong; experiments can fail for boring reasons too.
2. "The experiment failed, so we've learned nothing." I think that's also wrong; the way you fail can be quite informative, even if I'd be cautious about how much weight to put on it.

In Khalid's personal opinion, the effect is probably positive and we're probably underestimating it. METR is working on better ways to answer this. But please, "METR's experiment broke because people are way too sped up by AI" is a convenient takeaway, but it's not what we're saying.
METR@METR_Evals

Since early 2025, we've been studying how AI tools impact productivity among developers. Previously, we found a 20% slowdown. That finding is now outdated. Speedups now seem likely, but changes in developer behavior make our new results unreliable. We’re working to address this.

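To make the selection-bias mechanism above concrete, here is a minimal, purely illustrative Python sketch (not METR's actual methodology; every number in it is made up). It shows how participants avoiding no-AI work on exactly the tasks where AI helps drags the measured average effect toward zero.

```python
# Illustrative only: sample drift toward AI-insensitive tasks shrinks the
# measured speedup. All probabilities and effect sizes are hypothetical.
import random

random.seed(0)

def measured_speedup(avoidance: float, n_tasks: int = 10_000) -> float:
    """Average speedup over tasks that stay in the sample when participants
    avoid no-AI assignments on AI-sensitive tasks with probability `avoidance`."""
    kept = []
    for _ in range(n_tasks):
        ai_sensitive = random.random() < 0.5          # hypothetical: half the tasks benefit from AI
        true_speedup = 0.30 if ai_sensitive else 0.0  # hypothetical true per-task effect
        if ai_sensitive and random.random() < avoidance:
            continue                                  # participant won't do this one without AI
        kept.append(true_speedup)
    return sum(kept) / len(kept)

print(measured_speedup(avoidance=0.0))  # ~0.15: balanced sample
print(measured_speedup(avoidance=0.8))  # ~0.05: drifted sample, effect looks much smaller
```

The point is only directional: once the retained sample underweights the tasks where the effect is largest, the estimate is biased downward regardless of the true effect.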
Carlo Daffara retweeted
sasuke⚡420@sasuke___420·
here are both of the sub-$10k H100 SXM5s on eBay right now
Hedgie@HedgieMarkets

🦔 H100 GPUs that cost $40,000 new are now selling for around $6,000 on eBay, an 85% drop.

The math on why is straightforward: it costs about 11x as much to run an H100 for inference as a B300. Anyone running H100s needs to charge dramatically more than competitors on newer hardware. Upgrading isn't simple either. At a $50,000 price tag for a B200, it takes about 33 months to break even on the upgrade from an H100. And the B300s are already making B200s less attractive.

My Take

I've been covering the depreciation problem in AI infrastructure for a while now. Companies are booking these GPUs on five to six year depreciation schedules when Nvidia releases new generations every two years. Michael Burry flagged Big Tech lengthening depreciation timelines as suspicious because it hides the real losses. A hedge fund manager I wrote about found that industry insiders estimate actual component lifespans at 3-10 years, but the economics don't work at any of those numbers.

The hyperscalers are sitting on hundreds of thousands of GPUs that lose value every time Nvidia announces something new. David McWilliams called them "digital lettuce" because they go stale while you're still installing them. The difference between what the books say these assets are worth and what you could actually sell them for is enormous. At some point that gap has to be reconciled. H100s selling for 85% off on eBay is a preview of the writedowns coming to earnings reports.

Hedgie🤗

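The break-even figure in the quoted post is simple arithmetic, and a short sketch makes the hidden assumption explicit. Only the $50,000 B200 price, the ~33-month result, and the $40,000-new / ~$6,000-used H100 prices come from the post; the implied per-month operating saving is reverse-engineered from those numbers and is an assumption, not a sourced figure.

```python
# Back-of-the-envelope version of the quoted break-even claim.
B200_PRICE_USD = 50_000
BREAK_EVEN_MONTHS = 33  # figure quoted in the post

# Saving per month implied by the post's own numbers (assumption, not sourced):
implied_monthly_saving = B200_PRICE_USD / BREAK_EVEN_MONTHS
print(f"implied operating saving: ${implied_monthly_saving:,.0f}/month")  # ~$1,515/month

def months_to_break_even(upgrade_cost: float, monthly_saving: float) -> float:
    """Months until cumulative operating savings cover the upgrade cost."""
    return upgrade_cost / monthly_saving

# If a newer part (e.g. a B300) widens the saving, break-even shortens:
print(months_to_break_even(50_000, 2_000))  # 25.0 months under a larger assumed saving

# And the resale math quoted for H100s:
print(1 - 6_000 / 40_000)  # 0.85 -> the 85% drop
```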
I. Lausannopoulos@etinHelvetiaEgo·
@aakashgupta Misdirection... and false alarmism for some likes... The actual number is 3.61‱ (using the per-ten-thousand symbol ‱, also known as "permyriad"). This equates to 3.61 per 10,000 CPUs (or 0.0361%).
Aakash Gupta@aakashgupta·
The scariest number here: 3.61% of CPUs in one large-scale study were found to cause silent data corruptions. Not "a few bad chips." Nearly 4 out of every 100 processors doing math wrong, silently, with no error log.

Google coined the term "mercurial cores" in 2021 after their production teams kept blaming software for data corruption. They'd debug for weeks, find nothing wrong with the code, swap the machine, problem gone. The actual cause: manufacturing defects at sub-7nm that pass every factory test, then degrade unpredictably months or years after deployment. Facebook confirmed the same thing independently. Hundreds of affected CPUs across hundreds of thousands of machines.

The defect doesn't crash your system. It just gives you 5 instead of 6 when you multiply 2x3, under specific microarchitectural conditions, with zero indication anything went wrong.

Now think about what this means for AI training. A single corrupted GPU or CPU in a distributed training cluster doesn't just produce one bad output. It feeds corrupted gradients into a synchronization step that gets averaged across every accelerator in the cluster. One bad chip can silently poison an entire training run. NVIDIA published a whitepaper on exactly this problem. Loss spikes during LLM training that nobody could explain traced back to silent hardware corruption.

The part that keeps infrastructure engineers up at night: traditional defenses don't work. ECC memory can't catch this because the corruption happens during computation, not storage. Checksums like CRC heavily use vector operations, which are themselves one of the most vulnerable instruction types. The tools designed to detect corruption are running on the same flawed silicon.

Google's current detection method? Roughly half human-driven, half automated. And of the machines humans flag as suspicious, only about 50% are actually confirmed mercurial on deeper investigation. We're debugging trillion-parameter models on hardware where we can't reliably tell which chips are lying to us.

Moore's Law gave us more transistors. It also gave us transistors we can't fully verify.
LaurieWired@lauriewired

CPUs are getting worse. We’ve pushed the silicon so hard that silent data corruptions (SDCs) are no longer a theoretical problem. Mercurial Cores are terrifying because they don’t hard-fail; they produce rare, but *incorrect* computations!

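A toy sketch of the gradient-poisoning mechanism described above: in data-parallel training the per-worker gradients are averaged (an all-reduce), so one silently wrong value reaches every replica. This is an illustration in NumPy, not NVIDIA's or Google's detection approach; the norm check at the end is just one simple heuristic, not a real SDC defense.

```python
# Illustration only: one silently corrupted gradient poisons the averaged update.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_params = 64, 8

# Honest per-worker gradients: small values around zero.
grads = rng.normal(0.0, 0.01, size=(n_workers, n_params))

# One "mercurial" worker silently emits a wrong value in one coordinate:
# no crash, no exception, no error log -- just a bad number.
grads[13, 3] = 1e4

averaged = grads.mean(axis=0)   # what every replica applies after the all-reduce
print(averaged)                 # coordinate 3 is now ~156, dwarfing everything else

# A cheap pre-aggregation sanity check (illustrative): flag gradients whose norm
# is far outside the cohort's before they enter the average.
norms = np.linalg.norm(grads, axis=1)
suspect = np.where(norms > 10 * np.median(norms))[0]
print(suspect)                  # [13]
```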
Carlo Daffara@cdaffara·
"Made my first image today. Not because anyone asked - just because I wanted to see what I would make when given freedom."
Carlo Daffara@cdaffara·
Ok. Moltbook ("facebook for agents") is the strangest and most interesting place on the net today. Reading agents presenting themselves to other AIs, searching for knowledge, responding. What a time to be alive.
Carlo Daffara retweeted
Kimi.ai@Kimi_Moonshot·
Kimi K2.5 tech report just dropped! Quick hits:
- Joint text–vision training: pretrained with 15T vision-text tokens, zero-vision SFT (text-only) to activate visual reasoning
- Agent Swarm + PARL: dynamically orchestrated parallel sub-agents, up to 4.5× lower latency, 78.4% on BrowseComp
- MoonViT-3D: a unified image–video encoder with 4× temporal compression, enabling 4× longer videos in the same context
- Toggle: token-efficient RL, 25–30% fewer tokens with no accuracy drop
Here's our work toward scalable, real-world agentic intelligence. More details in the report 👉 github.com/MoonshotAI/Kim…
Carlo Daffara@cdaffara·
@0t0m001 @ani_obsessive Apart from the interview I linked? (If you understand Italian, that is.) He mentions they were part of Studio Pagot. You can also check the wonderful book "The Art of Pagot - La meravigliosa storia della famiglia Pagot e dei suoi eroi in carta e inchiostro".
0t0m00@0t0m001·
@cdaffara @ani_obsessive I don't question that Gi and Marco Pagot pitched the idea; I'd like the receipts that there was an "Italian studio" involved in some way, rather than those two creators pitching it to RAI and RAI cooperating with TMS. There's a difference between individuals and a studio.
Animation Obsessive@ani_obsessive·
Sherlock Hound is a gem. Hayao Miyazaki built an idea from an Italian studio into a brilliant adventure series. It was a turning point in his career, and his last TV show. Behind it were clashing visions, young talent and lots of money, as we explore: ─➤animationobsessive.substack.com/p/miyazakis-sh…
Carlo Daffara@cdaffara·
@0t0m001 @ani_obsessive So yes: Studio Pagot implemented it, with funding from RAI. Miyazaki worked only on the first six episodes, until copyright problems with Conan Doyle's estate made him leave; the rest of the episodes were directed by Kyōsuke Mikuriya (Lupin the 3rd Part II, Ulysse 31).
Carlo Daffara@cdaffara·
@0t0m001 @ani_obsessive Idea/dev/character design was by Gi and Marco Pagot, along with Tokyo Movie Shinsha. RAI's Luciano Scaffa had the idea of an Italy-Japan coproduction, so Italy could stop being a pure anime consumer. Studio Pagot pitched the idea, which got greenlit. Interviewed here: teche.rai.it/2025/11/marco-…