lux
@lux

35.3K posts

Trisolarian Panspecies Anachronist / Semantic Ghost Hunter. DMs are welcome but will only be answered during stable eras when rehydrated.

~ circle of confusion · Joined July 2006
3.7K Following · 3.7K Followers

Pinned Tweet
lux @lux ·
reality is underrated
[image]
4 replies · 5 reposts · 39 likes · 10.4K views
tuna🍣 @tunahorse21 ·
anybody want to talk about the minutiae of agent progressive disclosure btw
6 replies · 0 reposts · 14 likes · 650 views
Vjekoslav Krajačić @vkrajacic ·
X is flooded with people advocating for AI and vibe coding, but I see almost no cool demos or anything I can download to improve my daily work. I don't care how many tokens you burn, how many LOC you generate, which IDE you use. Can I use your application, please?
180 replies · 21 reposts · 694 likes · 32.3K views
ali @endingwithali ·
i just used claude code for the first time ama
21 replies · 0 reposts · 53 likes · 2.9K views
lux @lux ·
more proof of semantic wave hypothesis?

Quoting Yasir Ai @AiwithYasir:

🚨 Just in: MIT proved you can delete 90% of a neural network without losing accuracy. Researchers found that inside every massive model there is a "winning ticket": a tiny subnetwork that does all the heavy lifting. They proved that if you find it and reset it to its original state, it performs exactly like the giant version.

But there was a catch that killed adoption instantly: you had to train the massive model first to find the ticket. Nobody wanted to train twice just to deploy once. It was a cool academic flex, but useless for production. The original 2018 paper was mind-blowing.

Today, after 8 years, we finally have the silicon-level breakthrough we were waiting for: structured sparsity. Modern GPUs (NVIDIA Ampere and later) don't just "simulate" pruning anymore; they have native support for block sparsity (2:4 patterns) built directly into the hardware. It's not theoretical, it's silicon-level acceleration.

The math is terrifyingly good: a 2:4-sparse network means 50% less memory bandwidth and up to 2× compute throughput. Real speed, zero accuracy loss.

Three things just made this production-ready in 2026:
- pruning-aware training (you train sparse from day one)
- native support in PyTorch 2.0 and the Apple Neural Engine
- the realization that AI models are 90% redundant by design

Evolution over-parameterizes everything. We're finally learning how to prune. The era of bloated, inefficient models is officially over. The tooling finally caught up to the theory, and the winners are going to be the ones who stop paying for 90% of weights they don't even need. The future of AI is smaller, faster, and smarter.

0 replies · 2 reposts · 5 likes · 237 views
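The 2:4 structured sparsity the quoted tweet describes reduces to a simple rule: in every contiguous group of 4 weights, keep the 2 with the largest magnitude and zero the other 2. Below is a minimal NumPy sketch of just the pruning-mask logic; it is an illustration under that assumption, not NVIDIA's hardware path or any real library's pruning API.

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Zero the 2 smallest-magnitude weights in every group of 4.

    This produces the 2:4 pattern (exactly 2 nonzeros per group of 4,
    i.e. 50% sparsity) that Ampere-class GPUs can accelerate natively.
    Only the mask construction is shown; no sparse kernels are used.
    """
    flat = weights.reshape(-1, 4)                     # groups of 4
    keep = np.argsort(np.abs(flat), axis=1)[:, 2:]    # top-2 magnitudes per group
    mask = np.zeros_like(flat, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return (flat * mask).reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, -0.8, 0.3, 0.2, -0.7, 0.01])
sparse = prune_2_4(w)
# → [0.9, 0.0, 0.0, -0.8, 0.3, 0.0, -0.7, 0.0]
```

Note the gap in the quoted claim: the 2:4 pattern is 50% sparse, so the "90% sparse" figure from the lottery-ticket work needs unstructured pruning, which this hardware pattern does not cover.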
Tyler @rezoundous ·
Does anyone actually run 20 agents in parallel?
278 replies · 2 reposts · 234 likes · 92K views
lux @lux ·
@Teknium Oh no, don't enable the madness! Although I have been wanting to crank that out for a while now....
1 reply · 0 reposts · 1 like · 14 views
Teknium 🪽 @Teknium ·
What features, capabilities or ideas do you have for Hermes Agent that would unlock your ideas for submissions to the Hermes Agent Creativity Hackathon?
104 replies · 10 reposts · 229 likes · 15.9K views
lux @lux ·
@dynemetis fair enough! It's definitely not stopping me from working on my brains-in-a-box ... 'AUghTOknow' subsystem
0 replies · 0 reposts · 1 like · 9 views
Josh @dynemetis ·
@lux You perhaps misunderstand the nature of competition. Thousands of AI memory startups are instantly validated by this. Many will be worse than what Anthropic offers, and the market will soon discover that. Some will be better; some will specialize.
1 reply · 0 reposts · 1 like · 15 views
lux @lux ·
@hunvreus I can show you how to do it if you want; I made some tooling to make it easy
0 replies · 0 reposts · 0 likes · 21 views
Ronan Berder @hunvreus ·
Talking to smarter folks than me, I'm convinced many of the AI folks in my timeline are full of shit. Nobody is "running 20 agents overnight" and building stuff for actual users.

Maybe some are building internal tools or disposable software. Maybe. But building software people like using? That doesn't get hacked on day one or blow up after the 3rd user? Nope. I don't even understand what that's supposed to look like.

Do you work out a 57-page document that perfectly describes what you want to build and then summon 14 agents and have them run wild for 6 hours? And what comes out on the other end isn't a broken pile of shit? Nope. Not buying it.

PS: it may also be that I have an IQ of 82 and can't figure it out.

659 replies · 258 reposts · 4.8K likes · 712.9K views
lux @lux ·
@hunvreus I'm only running 9 agents, but here's what they are up to right now
[image]
0 replies · 0 reposts · 0 likes · 15 views
lux @lux ·
@Tech_girlll who said i was a valuable developer? I'm just making an extension for git, you must be thinking of someone else, idk
0 replies · 0 reposts · 0 likes · 13 views
Mari @Tech_girlll ·
If AI can write your code, fix your bugs, and explain your logic, what exactly makes you a valuable developer today?
187 replies · 7 reposts · 132 likes · 31.8K views
lux @lux ·
Portable, non-substrate/harness-dependent soul (and soul data). Easy Hermes squad formations. Knowledge graphs with lots of dots. Ok, for real? One thing I'd love is an agent that looks at all my images and EXIF metadata and makes a knowledge graph from that. Then I can just say "train a LoRA" on some person/place/thing from the knowledge graph. It would gather all the images, train a LoRA, then produce memories I never actually experienced irl.
0 replies · 0 reposts · 2 likes · 39 views
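The EXIF-to-knowledge-graph idea above can be sketched minimally: index each (field, value) pair to the images that carry it, then query a node to gather a training set. The records, field names, and helpers below are hypothetical; real EXIF extraction (e.g. via Pillow's `Image.getexif()`) and LoRA training are out of scope.

```python
from collections import defaultdict

# Hypothetical pre-extracted EXIF records; extraction itself is not shown.
records = [
    {"path": "img1.jpg", "gps": "Lisbon", "camera": "X100V"},
    {"path": "img2.jpg", "gps": "Lisbon", "camera": "iPhone"},
    {"path": "img3.jpg", "gps": "Tokyo",  "camera": "X100V"},
]

def build_graph(records):
    """Map each (field, value) node to the set of images that mention it."""
    graph = defaultdict(set)
    for rec in records:
        for field, value in rec.items():
            if field != "path":
                graph[(field, value)].add(rec["path"])
    return graph

def images_for(graph, field, value):
    """Gather the image set for one node, e.g. to assemble a LoRA dataset."""
    return sorted(graph[(field, value)])

graph = build_graph(records)
images_for(graph, "gps", "Lisbon")   # → ['img1.jpg', 'img2.jpg']
```

A real agent would add edges between nodes (people co-occurring with places, etc.); the inverted index here is the simplest graph that supports the "gather all images of X" query.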
lux @lux ·
@codegraph I haven't tried it, but I think (not sure) it's more like the graph-like memory that Claude Desktop has.
0 replies · 0 reposts · 1 like · 32 views
lux @lux ·
@mattshumer_ it's more geared for long-running complex tasks ... you have to be confident and clear about what you want going in, but once it gets started ... extremely capable. It will pepper you with details and options and administrivia if you don't know what you want going in.
0 replies · 0 reposts · 2 likes · 364 views
Matt Shumer @mattshumer_ ·
i'm a few days late to realizing this but: wow, opus 4.7 is god awful. like, so, so bad. it's making mistakes on things i'd expect gpt-4o to handle cleanly. there's got to be some explanation, right?
254 replies · 34 reposts · 1.4K likes · 208.9K views
Layton Gott @Layton_Gott ·
I don't care if using AI to code is not "real coding". I care if the product works. Nobody opens your app and asks how many lines you wrote manually. They click around for 10 seconds and decide if it sucks or not.
169 replies · 28 reposts · 483 likes · 14.7K views
lux @lux ·
@oprydai it's got sticky glue around it
0 replies · 0 reposts · 5 likes · 75 views
Mustafa @oprydai ·
HOW DOES A PHOTON KNOW IT'S BEING OBSERVED?
1.7K replies · 379 reposts · 7.9K likes · 1.3M views
lux @lux ·
I'm not in security, but still relevant. I made a parser that runs 24/7 parsing code into deep composable knowledge graphs. AST/LSP/data flow/control flow/dependencies ... it's all there. The goal is to parse all of open source: github.com/repolex-forx. You can run a security scanner over this data for remediation.
0 replies · 0 reposts · 0 likes · 35 views
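The AST layer of a code knowledge graph like the one described can be sketched with Python's stdlib `ast` module: walk each function body and record (caller, callee) edges. This toy call-graph extractor is an illustration only, not the actual repolex tooling, and it ignores the LSP/data-flow/dependency layers the tweet mentions.

```python
import ast

SOURCE = """
def fetch(url):
    return url

def main():
    data = fetch("https://example.com")
    print(data)
"""

def call_graph(source: str) -> set:
    """Extract (caller, callee) edges from function bodies.

    Only direct calls to plain names are captured (no attribute calls,
    no nesting awareness) -- a tiny slice of a real AST knowledge graph.
    """
    tree = ast.parse(source)
    edges = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for sub in ast.walk(node):
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    edges.add((node.name, sub.func.id))
    return edges

call_graph(SOURCE)   # → {('main', 'fetch'), ('main', 'print')}
```

Running a security scanner over such edges then becomes a graph query (e.g. "which functions can reach `eval`?") rather than a per-file text scan.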
INFOSEC F0X 🔥 @infosec_fox ·
People who work in cybersecurity, can we 👀 some of your hobbies?
194 replies · 14 reposts · 251 likes · 44K views