🇺🇦🇮🇱dmitriy samsonov
@d0rc

18.7K posts

programming since 1986

Milano, Lombardia · Joined January 2011
2.1K Following · 2.6K Followers
🇺🇦🇮🇱dmitriy samsonov retweeted
Grant Kot @kotsoft
chemical reaction between sodium and water
37 replies · 182 reposts · 2K likes · 136.7K views
Gita Gopinath @GitaGopinath
A painting of the end of meritocracy: A meeting of the two largest economies and not one woman at the table.
[image attached]
14.4K replies · 10.3K reposts · 44.9K likes · 11.3M views
🇺🇦🇮🇱dmitriy samsonov retweeted
Simone Conradi @S_Conradi
7 roots orbit in ℂ. Perturb the degree-7 polynomial's coefficients with Gaussian noise: isolated roots stay sharp, clustered ones smear into a cloud. These are the polynomial's ε-pseudozeros: Wilkinson conditioning made visible. #Pseudospectra #MathArt
2 replies · 16 reposts · 87 likes · 2.6K views
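A minimal numpy sketch of what's being visualized (my own illustration, not Conradi's code; the root locations, noise scale, and trial count below are made up): perturb the coefficients of a degree-7 polynomial with Gaussian noise many times and collect the resulting roots into a pseudozero cloud.

```python
import numpy as np

rng = np.random.default_rng(0)

# Degree-7 polynomial: two isolated roots at +/-3, five clustered near 1
# (a Wilkinson-style ill-conditioned cluster).
true_roots = np.array([3.0, -3.0, 0.98, 0.99, 1.0, 1.01, 1.02])
coeffs = np.poly(true_roots)        # monic coefficients, highest degree first

eps = 1e-6                          # relative Gaussian noise on each coefficient
cloud = np.concatenate([
    np.roots(coeffs * (1.0 + eps * rng.standard_normal(coeffs.shape)))
    for _ in range(2000)
])                                  # the epsilon-pseudozero cloud in C

# Spread of perturbed roots near each true root: the isolated roots stay
# sharp while the cluster near z = 1 smears out far more.
for r in (3.0, -3.0, 1.0):
    d = np.abs(cloud - r)
    print(f"root {r:+.1f}: spread ~ {d[d < 0.5].std():.1e}")
```

Plotting cloud.real against cloud.imag reproduces the sharp dots versus smeared blobs in the animation.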
🇺🇦🇮🇱dmitriy samsonov
@MickRhythm russia is not a state, it's a gang holding hostages; but, yes, any state is indeed illegitimate, just as any politician or religious leader is a criminal.
0 replies · 0 reposts · 0 likes · 18 views
Bruce Nielson @bnielson01
There's a mathematical proof that says no algorithm, no matter how clever, how sophisticated, or how well-designed, can outperform random guessing when averaged across all possible problems. Not A*, not neural networks, not even humans in the loop. Every advantage on one class of problems is paid for, dollar for dollar, somewhere else. It sounds like it should be false. It isn't. Please review the article: the theorem is sound, so you can't just dismiss it. Welcome to the "No Free Lunch Theorem". This raises an interesting question: how can humans be universal learners if this theorem says universal learners are impossible? Make your best arguments here. I hint at one possible resolution to this problem. mindfiretechnology.com/blog/archive/t…
75 replies · 39 reposts · 325 likes · 29.2K views
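A toy, exhaustive version of the claim (my own sketch, not from the linked article): on a 4-point domain with binary labels, average off-training-set accuracy over all 16 possible labelings is exactly 0.5 for every learner, whether sensible, perverse, or indifferent.

```python
from itertools import product

train, test = [0, 1], [2, 3]      # 4-point domain, 2 training points

def copy_first(labels):           # predict whatever training point 0 had
    return labels[0]

def contrarian(labels):           # deliberately predict the opposite
    return 1 - labels[0]

def always_zero(labels):          # ignore the data entirely
    return 0

for learner in (copy_first, contrarian, always_zero):
    correct = total = 0
    for labels in product([0, 1], repeat=4):   # all 16 possible "problems"
        pred = learner(labels)                 # sees only training labels
        for i in test:
            correct += (pred == labels[i])
            total += 1
    print(f"{learner.__name__}: {correct / total}")   # each prints 0.5
```

The uniform average over all labelings is exactly the hypothesis the theorem needs: off-training-set labels are independent of anything the learner saw.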
🇺🇦🇮🇱dmitriy samsonov retweeted
Max Welling @wellingmax
We put the paper online that provides further details (beyond my ICLR keynote) on the role of spontaneous symmetry breaking and Goldstone modes in deep learning. Enjoy! (w/ Nabil Iqbal, Thomas Andy Keller, Takeru Miyato and Yue Song.) arxiv.org/abs/2605.14685
2 replies · 67 reposts · 333 likes · 25.8K views
Paul Graham @paulg
At one point my son and his friend kept looking for shortcuts to getting rich. Over and over I told them the way to do it is just to make something people want. If this is what I tell my own kids about getting rich, why won't politicians believe this is how a lot of people do it?
380 replies · 295 reposts · 5.5K likes · 367.7K views
Michael McFaul @McFaul
I wish Trump could praise leaders of our democratic allies just half as much as he fawns over autocratic leaders like Xi and Putin.
131 replies · 439 reposts · 2.4K likes · 30.3K views
🇺🇦🇮🇱dmitriy samsonov retweeted
Mitko Vasilev @iotcoi
Datadog Toto 2.0: 4M→2.5B open time-series models. Forecasting gets its foundation-model era. Not another chatbot in a suit. Just a model reading production noise: "Your latency is about to ruin your weekend." Useful AI. May your CRPS be low! May your incidents be forecasted!
[image attached]
0 replies · 6 reposts · 26 likes · 1.3K views
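"May your CRPS be low" refers to the Continuous Ranked Probability Score, a standard metric for probabilistic forecasts. A minimal sketch of the empirical ensemble form (my own illustration, not Datadog's evaluation code; the forecast numbers are made up):

```python
import numpy as np

def crps_ensemble(samples: np.ndarray, y: float) -> float:
    """Empirical CRPS via the identity CRPS(F, y) = E|X - y| - 0.5 E|X - X'|,
    with X, X' drawn independently from the forecast distribution F."""
    term1 = np.abs(samples - y).mean()
    term2 = 0.5 * np.abs(samples[:, None] - samples[None, :]).mean()
    return term1 - term2

rng = np.random.default_rng(1)
forecast = rng.normal(loc=100.0, scale=5.0, size=256)  # hypothetical latency forecast (ms)
print(crps_ensemble(forecast, y=103.0))                # lower is better
```

CRPS rewards forecasts that are both sharp and well-calibrated, which is why it is the usual headline metric for probabilistic time-series models.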
🇺🇦🇮🇱dmitriy samsonov retweeted
NormaCore @norma_core_dev
With NormaCore Station, your robot's full operational life is the dataset.
✅ Continuous capture at max hardware speed.
❌ No downsampling.
❌ No "start/stop" recording.
It's all automatically compressed, encrypted, and stored. When you need a dataset, just pull it. No scripts. No pipelines. Just data. 🎯
1 reply · 1 repost · 11 likes · 844 views
RTSG @RTSG_Main
✅FACT: Stalin was a HERO!
[image attached]
102 replies · 122 reposts · 959 likes · 29.1K views
🇺🇦🇮🇱dmitriy samsonov retweeted
Thomas G. Dietterich @tdietterich
Attention @arxiv authors: Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated. 1/
100 replies · 779 reposts · 4.7K likes · 793.4K views
🇺🇦🇮🇱dmitriy samsonov retweeted
Ameet Talwalkar @atalwalkar
Today we’re releasing Toto 2.0: a family of open-weights time series foundation models spanning 4M to 2.5B parameters. The question we set out to answer was simple (yet previously open): Do time series foundation models get reliably better as they scale? Our answer: yes! 🧵
[image attached]
11 replies · 59 reposts · 590 likes · 51.8K views
🇺🇦🇮🇱dmitriy samsonov retweeted
Michael MacKay @mhmck
Peace is not possible without destroying the Moscow Empire and liberating the captive nations.
85 replies · 246 reposts · 1.7K likes · 16.2K views
🇺🇦🇮🇱dmitriy samsonov retweeted
Underfox @Underfox3
ChipMATE achieves 75.0% and 80.1% pass@1 on VerilogEval V2 with 4B and 9B base models, outperforming all existing self-trained models and even DeepSeek V4 with 1600B parameters.
[image attached]
1 reply · 2 reposts · 3 likes · 319 views
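For reference, pass@1 figures like these are usually computed with the unbiased pass@k estimator from the HumanEval paper, pass@k = 1 - C(n-c, k)/C(n, k), where n samples are drawn per problem and c of them pass the testbench; for k = 1 this reduces to c/n. A small sketch (the counts below are made up, and whether ChipMATE uses exactly this estimator is an assumption):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples passes, given that
    c of n independent samples pass; unbiased HumanEval-style estimator."""
    if n - c < k:
        return 1.0                      # not enough failures to fill k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=15, k=1))       # 0.75, i.e. 75.0% pass@1
```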
Simplifying AI @simplifyinAI
🚨 BREAKING: NVIDIA proved back-propagation isn't the only way to build an AI. Billion-parameter models were trained without a single gradient. No calculus, no exploding memory, no massive GPU clusters. The culprit? A long-dismissed technique called Evolution Strategies. NVIDIA and Oxford just made it scalable with EGGROLL, which replaces bloated mutation matrices with two tiny ones, enabling hundreds of thousands of parallel mutations at inference-level speed. They're pretraining models from scratch using only simple integers. No backprop. No decimals. We assumed the future of AI required endless precision hardware. Evolution had other plans.
[image attached]
28 replies · 161 reposts · 1.1K likes · 184.6K views
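A generic sketch of the recipe being described (this is not NVIDIA's EGGROLL code, and the shapes, objective, and hyperparameters are invented): vanilla evolution strategies scores randomly perturbed copies of the weights and nudges the weights toward the better-scoring directions, no gradients anywhere. The one EGGROLL-flavored detail shown is building each mutation of an m×n weight matrix from two thin rank-r factors, so A @ B.T stands in for a full m×n noise matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 32, 4                 # weight shape and mutation rank (made-up sizes)
W = rng.normal(size=(m, n)) * 0.1
sigma, lr, pop = 0.05, 0.02, 128    # noise scale, step size, population size

def fitness(weights: np.ndarray) -> float:
    # Stand-in objective: prefer weights close to an arbitrary target of 1.0.
    return -np.square(weights - 1.0).mean()

for step in range(200):
    # Low-rank mutations: each population member perturbs W by A_i @ B_i.T.
    A = rng.normal(size=(pop, m, r))
    B = rng.normal(size=(pop, n, r))
    eps = np.einsum('pmr,pnr->pmn', A, B) / np.sqrt(r)
    scores = np.array([fitness(W + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize returns
    # Standard ES update: weighted average of the mutations, scaled by score.
    W += lr / (pop * sigma) * np.einsum('p,pmn->mn', scores, eps)

print(fitness(W))   # climbs toward 0 without computing a single gradient
```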
🇺🇦🇮🇱dmitriy samsonov retweeted
Patrick Smith @NotGovernor
Capitalism is the only economic system. Everything else is a violence system.
42 replies · 119 reposts · 776 likes · 10.1K views