danidbio

1.4K posts

danidbio
@danidbio

Mostly dev/ml/ops, olive farming. Husband of wife, walker of dog. Doing things @adataopera, making niche software and working with cool businesses - more soon.

Italy · Joined June 2014
802 Following · 95 Followers
danidbio reposted
Sam Lambert
Sam Lambert@samlambert·
it’s easier to do hard things when you have a sense of humor
13
30
372
11.3K
danidbio reposted
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
All the Doomers and hawks are lining up behind this distillation "attack" farce because they want to see open source banned. It's really as simple as that. They want to take away your right to choose, and take away businesses' right to fine-tune and make your products cheaper and better.

The end state here, if we let these short-sighted people win, is a horrible place for America: they will look to ban Chinese models on national-security grounds and conveniently leave only proprietary American companies standing in the USA. The only real "attack" happening is closed-source companies attacking open source the same way Microsoft once attacked Linux to create regulatory capture. If you can't win in the market, win in Washington is their strategy.

Do not be fooled by this regulatory-capture and saber-rattling nonsense. It's a bait and switch. The goal is to rob you of choice. That's it. These short-sighted policies will make America weaker, not stronger.

These are the very folks whose shoot-us-in-the-foot policies lost NVIDIA 100% of its market share in China, driving it to basically 0%, while kickstarting the moribund Chinese chip ecosystem. It was dead in the water, and now it has awakened from its deep slumber. Old state-sponsored dinosaurs are reborn as emerging chip powerhouses. The demand for Chinese chips is accelerating, and it will only get stronger.

When Jensen is proven right a few years from now (he's the best long-term thinker in business today), and you have hundreds of cheap Chinese models running optimized on Chinese chips while those models are hard to run on NVIDIA hardware, you can thank these folks. If you're banned in the USA from using these models and these chips, do you think the rest of the world will be? Nope. They'll happily adopt the cheaper, faster, good-enough models that we kickstarted with our short-sightedness.

1 billion people in the West will be banned and using closed/gated/sluggish/censored/surveilled models that destroy your privacy, while 6 billion other people use the now-dominant Chinese ecosystem and your NVIDIA retirement shares lose money. When you can't use open source anymore because it gets banned for Americans, you can thank these short-sighted, foolish folks. When your API bill is a billion dollars and burns your budget in three months instead of 12, you can thank these folks. When all your intimate, personal data flows through a few tight gateways and choke points mandated by law, you can thank these folks.
Chris McGuire@ChrisRMcGuire

Sorry but that just isn’t true—distillation attacks are illicit activity, not an industry standard. They are against the terms of service of all frontier AI labs. There is a reason OpenAI, Anthropic, and Google all put out reports warning about it: none of them do it.

17
37
151
47.3K
danidbio reposted
nature
nature@Nature·
Geneticist J. Craig Venter, best known for his role in sequencing the human genome, has died aged 79. He spoke to Nature in 2023 about AI, sequencing the ocean – and why he had no plans to stop working. go.nature.com/4tHEf9M
16
250
643
75.1K
danidbio reposted
The Hacker News
The Hacker News@TheHackersNews·
🚨 Attackers are targeting enterprise admins with fake tools and running control through #Ethereum smart contracts. Malware spreads via SEO-poisoned #GitHub repos, then pulls live C2 from blockchain. No domains to block. Access lands on high-privilege systems. 🔗 Learn how this campaign turns search results into enterprise breaches → thehackernews.com/2026/04/etherr…
4
28
89
9.8K
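The linked article isn't quoted here, but the "pulls live C2 from blockchain" mechanism the tweet summarizes generally amounts to reading a string out of public contract state, e.g. via an `eth_call` RPC. A minimal sketch of just the decoding step, using a simulated return value (the payload, storage layout, and addresses are illustrative assumptions, not details from the campaign):

```python
def decode_eth_string(ret_hex: str) -> str:
    # Decode an ABI-encoded string as returned by eth_call:
    # a 32-byte offset, a 32-byte length, then the UTF-8 bytes.
    raw = bytes.fromhex(ret_hex.removeprefix("0x"))
    offset = int.from_bytes(raw[:32], "big")
    length = int.from_bytes(raw[offset:offset + 32], "big")
    return raw[offset + 32:offset + 32 + length].decode("utf-8")


# Simulated eth_call result carrying a C2 address. Because the value
# lives in public chain state, defenders have no attacker-owned domain
# to sinkhole or block -- only generic RPC endpoints that can be rotated.
payload = b"203.0.113.7:8443"
simulated = "0x" + (
    (32).to_bytes(32, "big")
    + len(payload).to_bytes(32, "big")
    + payload.ljust(32, b"\x00")
).hex()

print(decode_eth_string(simulated))  # 203.0.113.7:8443
```

This is why "no domains to block" is the headline property: the indicator of compromise is a contract read, not a DNS lookup.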
danidbio reposted
untitled folder
untitled folder@untitledfold_er·
untitled folder tweet media
18
936
6.6K
122.3K
danidbio reposted
Theo - t3.gg
Theo - t3.gg@theo·
Tanner begged NPM to take down a squatted "tanstack" package that was being held ransom against him. 48 days later, it was compromised and shipped malware. There is no excuse. NPM needs to make significant changes.
Tanner Linsley@tannerlinsley

.@SH20RAJ, we could really use the `tanstack` npm package name. We've proactively reached out via email many times in the past with no response but are now getting complaints from unsuspecting users and agents mistaking it for the official TanStack CLI. Please respond 😊

60
113
2.7K
215.6K
danidbio
danidbio@danidbio·
@kenwheeler worth pointing out that he is wearing a white collar and spewing mostly nonsense; it would probably be easiest for an LLM to just replace him
0
0
2
117
patagucci perf papi
patagucci perf papi@kenwheeler·
i’m beginning to believe this is a fetish for this fellow
108
51
1.3K
77.3K
danidbio reposted
Brooks Otterlake
Brooks Otterlake@i_zzzzzz·
This is just like being alive in the 1600s when they got good at making complicated clocks and deduced that every complicated thing in the universe probably functioned exactly like a clock
Dwarkesh Patel@dwarkesh_sp

There's a quadrillion-dollar question at the heart of AI: why are humans so much more sample-efficient than LLMs? There are three possible answers:

1. Architecture and hyperparameters (aka transformer vs whatever 'algo' cortical columns are implementing)
2. Learning rule (backprop vs whatever the brain is doing)
3. Reward function

@AdamMarblestone believes the answer is the reward function. ML likes to use pretty simple loss functions, like cross-entropy. These are easy to work with. But they might be too simple for sample-efficient learning. Adam thinks that, in humans, the large number of highly specialised cells in the 'lizard brain' might actually be encoding information for sophisticated loss functions, used for 'training' the more sophisticated areas like the cortex and amygdala.

Like: the human genome is barely 3 gigabytes (compare that to the TBs of parameters that encode frontier LLM weights). So how can it include all the information necessary to build highly intelligent learners? Well, if the key to sample-efficient learning resides in the loss function, even very complicated loss functions can still be expressed in a couple hundred lines of Python code.

107
1K
13.1K
804.5K
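The thread's closing point, that a richer "reward function" can still be a short Python program, can be sketched by composing extra shaping terms onto a plain cross-entropy base. The specific bonus/penalty terms and weights below are invented for illustration, not taken from Marblestone's actual proposal:

```python
import math

def cross_entropy(probs, target):
    # Standard classification loss: negative log-likelihood of the target class.
    return -math.log(probs[target])

def shaped_loss(probs, target, novelty, effort):
    # Hypothetical composite loss in the spirit of the thread: the simple
    # cross-entropy base plus hand-crafted reward-function terms.
    # `novelty` and `effort` are made-up scalar signals for illustration.
    base = cross_entropy(probs, target)
    curiosity_bonus = 0.1 * novelty   # reward visiting novel states
    effort_penalty = 0.01 * effort    # penalise wasted computation
    return base - curiosity_bonus + effort_penalty

# With the shaping signals zeroed out, this reduces to plain cross-entropy.
print(round(shaped_loss([0.25, 0.75], 1, novelty=0.0, effort=0.0), 4))  # 0.2877
```

The design point is that each added term is a few lines, so even a loss with dozens of such terms stays within "a couple hundred lines of Python".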
danidbio reposted
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
Training an AI with synthetic data is absurd (except for a few specific niches). It's AI modelling how another model models the real world, rather than AI directly modelling the real world.
14
2
36
1.9K
danidbio
danidbio@danidbio·
@rfleury reading this was like a breath of fresh air, thank you
0
0
0
266
Ryan Fleury
Ryan Fleury@rfleury·
A fundamental division between schools of thought in programming is (a) the elimination, through simplification, of cruft, boilerplate, and extra abstraction layers, and (b) the automation of maintaining cruft, boilerplate, and extra abstraction layers.

One of the reasons I drifted away from C++ and newer languages with adjacent philosophies towards a subset of C is that I found myself in the first camp. Some problems were simply not as hard as I was making them. Memory management, threading, UI, and so on could be simplified such that not only the high-level C code became simple, but the actual machine code also became simple. This is starkly different from modern C++ and Rust programming culture, where the philosophy is simply that dealing with the complicated lower-level details is a matter of *automation*: the compiler needs to generate something extra, it needs to check extra things, and so on.

"Agentic programming" falls into the latter camp, and this is also why I don't employ it in my workflow (other than search-engine usage and so on). I don't need it to generate 10s of 1000s of lines of code. The requirement of 10s of 1000s of lines of code, for implementing something derived from the information content inside a tiny prompt, is an architectural red flag. Perhaps a substantial portion of that code simply shouldn't exist.

I find that my programs become much better when I do that simplification pass first. After that, there's drastically less boilerplate, less maintenance, and less busywork to begin with.
42
75
948
34.9K
danidbio reposted
International Cyber Digest
International Cyber Digest@IntCyberDigest·
‼️🚨 BREAKING: Wiz got access to millions of GitHub repositories across users and organizations using one git push. CVE-2026-3854: git push -o options injected into an internal header split by semicolons, parsed last-write-wins. GitHub patched production in 6 hours.
27
244
1.7K
224.5K
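The tweet only summarizes CVE-2026-3854, so the exact GitHub internals aren't known here, but the described bug class (attacker-controlled `git push -o` values concatenated into a semicolon-delimited internal header that is parsed last-write-wins) can be sketched generically. Everything below, including the field names, is a hypothetical illustration of that class:

```python
def parse_header(header: str) -> dict:
    # Vulnerable pattern: split on ';', last write wins for duplicate keys.
    fields = {}
    for part in header.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key.strip()] = value.strip()
    return fields


# A trusted service builds the header, then appends a user-controlled
# push-option value verbatim, without escaping embedded semicolons.
trusted = "repo=octo/app;user=alice"
push_option = "x;user=admin"          # attacker-supplied `git push -o` value
header = f"{trusted};opt={push_option}"

# The injected duplicate key overrides the trusted field.
print(parse_header(header)["user"])   # admin
```

The fix for this class is the usual one: escape or reject the delimiter in untrusted values, or make duplicate keys an error instead of last-write-wins.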
danidbio
danidbio@danidbio·
@GergelyOrosz people pay them to avoid the reliability issues they think they might have if hosting some alternative themselves
0
0
0
261
Gergely Orosz
Gergely Orosz@GergelyOrosz·
It's hard to have too much sympathy any time the 900-pound gorilla in the room is having reliability issues. Sure, it's hard for them: but as the clear market leader, either figure it out, or give up your customers to those who can deal with the problem of load without issues.
17
15
403
31.1K
danidbio reposted
Ryan Fleury
Ryan Fleury@rfleury·
@raysan5 The tech is amazing and can speak for itself; all of the grifters and undue hype turning it into something it isn't will damage the adoption & development of the tech for decades
9
20
461
19.4K
Vivo
Vivo@vivoplt·
What’s next after AI?
1.3K
53
855
166.9K