Distractosphere
@Distractosphere
1.8K posts

what a time to be alive

Null Island · Joined August 2015
427 Following · 153 Followers

Pinned Tweet
Distractosphere @Distractosphere
Alignment
2 replies · 3 reposts · 9 likes · 3.9K views
Noah Zweben @noahzweben
Some big Remote Control stability improvements have landed. Please update your CLI to 2.1.85+ and ensure you're on the latest mobile app! Happy Friday - go touch grass this weekend!
34 replies · 20 reposts · 398 likes · 18.7K views
Tanay @TanayAyitmaz
Today Google announced TurboQuant KV-Cache compression and management. Everyone shared it, but how many people across the whole AI ecosystem can actually test it or clearly understand what it is? I can comfortably apply TurboQuant and RR-QJL to my own LM; I've already queued them in my docs and flagged them as test/development. TanAI already has KV-cache management / context management / hot-window / adaptive RAG (these are working), plus Graph RAG architectures on the add-on list. The plan is ready: first I'll run a full metric test on my current single GPU, testing TurboQuant as 16-bit -> 8-bit quantize -> 4-bit quantize -> aggressive 3-bit TurboQuantize. After the KV-cache, the goal for the retrieval stack is to shrink the DB/vector memory, cut ANN lookup cost, and run the same big-memory substrates on the same hardware. Of course, in the KV-cache the Key and Value get compressed while the Query stays the same. Then I can see what changes against the first metrics. Now, in the TR AI ecosystem, how many companies or developers can talk about these things and say "I tried it on this model, I tried it in my own architecture"? I'm not answering that one... That's why the AI ecosystem in TR won't get beyond using chatbots for a very, very long time! While the model's base training runs, let me go throw a Hadoken with Ryu in Street Fighter... This output is the KV-cache management Base-LM output; prefill/decode tok/s will take a while and will drop.
Tanay tweet media
5 replies · 3 reposts · 46 likes · 6.6K views
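TurboQuant and RR-QJL themselves aren't something I can speak to, but the core idea in the plan above — compress Key and Value while Query stays full precision, sweeping 8 -> 4 -> 3 bits — can be illustrated generically. A minimal sketch of symmetric per-head quantization (numpy; all shapes and names are hypothetical, not any library's API):

```python
import numpy as np

def quantize_kv(x, bits=8):
    """Symmetric per-head quantization of a K or V cache tensor.
    x: float array shaped (heads, seq_len, head_dim)."""
    qmax = 2 ** (bits - 1) - 1                    # 127, 7, 3 for 8/4/3 bits
    scale = np.abs(x).max(axis=(1, 2), keepdims=True) / qmax
    scale = np.where(scale == 0.0, 1.0, scale)    # guard all-zero heads
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_kv(q, scale):
    return q.astype(np.float32) * scale

# Toy cache: 4 heads, 128 cached positions, 64-dim heads; Q stays float.
rng = np.random.default_rng(0)
k = rng.standard_normal((4, 128, 64)).astype(np.float32)

for bits in (8, 4, 3):                            # the 16 -> 8 -> 4 -> 3 bit sweep
    q, s = quantize_kv(k, bits)
    err = float(np.abs(dequantize_kv(q, s) - k).mean())
    print(f"{bits}-bit mean abs error: {err:.4f}")
```

The reconstruction error grows as the bit width drops, which is exactly the accuracy-vs-memory trade-off a metric sweep like the one described would have to measure.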
Lossfunk @lossfunk
🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵
153 replies · 287 reposts · 2.2K likes · 1.2M views
Daniel Miessler 🛡️
I don't know much about this, so can someone explain to me the implications of products like Cerberus on NVIDIA? It's WAY faster inference. And WAY cheaper. So: 1. Is NVIDIA's advantage just in training now? 2. Is it impossible for Cerberus to do training as well?
4 replies · 1 repost · 17 likes · 4.8K views
Daniel Sempere Pico @dansemperepico
You guys all run Claude Code with claude --dangerously-skip-permissions, right? Because otherwise how in the world can you sit there accepting every single permission when building something?
475 replies · 22 reposts · 2.2K likes · 286.1K views
Terrible Maps @TerribleMaps
I have a solution
Terrible Maps tweet media
6.5K replies · 3.7K reposts · 51.5K likes · 10.6M views
tobi lutke @tobi
Autoresearch works even better for optimizing any piece of software. Make an auto folder, add program.md and a bench script, make a branch, and let it rip.
70 replies · 66 reposts · 1.7K likes · 121.4K views
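I can't speak to Autoresearch's actual conventions, but the workflow above — a folder holding a goal document plus a benchmark the loop optimizes against — can be sketched. File names and contents here are guesses for illustration:

```python
import tempfile
from pathlib import Path

def scaffold_auto(repo: Path) -> Path:
    """Lay out an auto/ folder with a goal doc and a bench script.
    Names and contents are hypothetical, not Autoresearch's real layout."""
    auto = repo / "auto"
    auto.mkdir(parents=True, exist_ok=True)
    (auto / "program.md").write_text(
        "# Goal\nReduce p95 latency of parse() without changing its public API.\n"
    )
    (auto / "bench.py").write_text(
        "import time\n"
        "start = time.perf_counter()\n"
        "# ... call the code under optimization here ...\n"
        "print(f'bench_seconds={time.perf_counter() - start:.6f}')\n"
    )
    return auto

# Demo in a throwaway directory; in practice this would live on a fresh
# branch of the real repo ("make a branch and let it rip").
auto_dir = scaffold_auto(Path(tempfile.mkdtemp()))
print(sorted(p.name for p in auto_dir.iterdir()))  # -> ['bench.py', 'program.md']
```

The point of the layout is that the optimizer only needs two contracts: a human-readable objective and a machine-readable score.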
dax @thdxr
what if the models haven't actually improved for months what if we're all just getting dumber
252 replies · 211 reposts · 4.2K likes · 132.3K views
Distractosphere @Distractosphere
What they're doing is undoubtedly valuable, but I don't understand: if it's sterile enough to require gloves and a gown, why aren't a cap and a mask being worn as well? Hair or beard strands could fall; saliva droplets could also spread from the mouth if he sneezes, coughs, or even talks. If contamination isn't a concern, then why the gloves and gown?
Distractosphere tweet media
0 replies · 0 reposts · 2 likes · 737 views
The Rundown AI @TheRundownAI
"Can it run DOOM?" was a joke for 30 years. A petri dish full of human skin cells just said yes. Cortical Labs trained 200,000 human neurons (!) to play the 1993 FPS game in a week:
43 replies · 184 reposts · 1.3K likes · 90K views
Chris Tate @ctatedev
New experimental flag: agent-browser --native
Single Rust binary. Direct CDP.
→ No abstraction ceiling
→ Self-contained daemon
→ No Node.js process at runtime
→ Less memory, smaller footprint
→ Zero runtime dependencies, just the binary and a browser
Chris Tate tweet media
72 replies · 120 reposts · 1.7K likes · 159.4K views
void @sakevoid
hold on a second. everyone is running the GPU race, nvidia stock, TPU clusters, the whole "I've got 8xH100, what have you got" talk, and now someone has put real human neurons on a silicon chip and wired them to an LLM. 200,000 living human neurons, grown in a lab. first they had them play Pong, then DOOM, and now those neurons are making every single token choice for the LLM, with real electrical impulses. do you get it? AI is no longer exactly "artificial": biological neurons decide which word gets picked, and you can see which channel was stimulated. while everyone argues about "when is AGI coming", someone has already merged biological intelligence with artificial intelligence. now think about it for a second:
- GPUs are limited by energy consumption; biological neurons run on absurdly little energy
- silicon transistors have hit physical limits; neurons adapt on their own
- and most critically: neurons learn, change, evolve, while the weights on a GPU just sit there
people are still debating "which model is better"; the real question now is "will computation be biological or digital". look up cortical labs. what we're seeing here may be the start of a new chapter in our history
void tweet media
Quoting chiefofautism @chiefofautism
someone connected LIVING BRAIN CELLS to an LLM. Cortical Labs grew 200,000 human neurons in a lab and kept them alive on a silicon chip. they taught the neurons to play Pong, then DOOM. now someone wired them into an LLM... real brain cells firing electrical impulses to choose every token the AI generates. you can see which channels were stimulated, the feedback from the neurons in choosing that letter or word
34 replies · 63 reposts · 1.1K likes · 178.3K views
Julien Chaumond @julien_c
Only 128GB max RAM on the M5 Max 😢
61 replies · 8 reposts · 367 likes · 54.8K views
Distractosphere @Distractosphere
@zephyr_z9 They tried to compete with companies like OpenAI, DeepSeek, Anthropic, and ZAI ... by releasing their own models. But their models are not very good, and in the end they just end up as something like Amazon with a slight difference.
0 replies · 0 reposts · 0 likes · 1.4K views
Distractosphere @Distractosphere
@kimmonismus How to win back the customers you lose, 101: bombard them with features and new releases. They will come back. And then lobotomize the new models once there's enough saturation…
0 replies · 0 reposts · 0 likes · 408 views
Chubby♨️ @kimmonismus
Seriously, I don't get it.
- Today, GPT-5.3 Instant is being released.
- The blog post states at the very bottom that 5.3 Thinking and Pro will also be released very soon.
- An hour later, the official OpenAI X account tweeted that GPT-5.4 will be released very soon.
??? So in a few days we get: GPT-5.3 Thinking + Pro + GPT-5.4 (???) Instant/Thinking/Pro?
235 replies · 71 reposts · 2.8K likes · 304K views
ℏεsam @Hesamation
"Claude is down. time to go back to hand coding."
42 replies · 301 reposts · 4.3K likes · 388.6K views
Mic King @iPullRank
I wish all the model providers would have a model named "latest" or something like that so I don't have to go back and update model names in a ton of projects. Looking at you @OfficialLoganK
8 replies · 1 repost · 255 likes · 47.3K views
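Pending providers shipping such an alias themselves, the wish above can be approximated client-side with one level of indirection. A minimal sketch; the alias table and the concrete model ids (borrowed from this thread) are hypothetical:

```python
# Hypothetical alias table -- edit this one place when a provider ships
# a new model, instead of every call site in every project.
MODEL_ALIASES = {
    "latest": "gpt-5.3-instant",
    "latest-thinking": "gpt-5.3-thinking",
}

def resolve_model(name: str) -> str:
    """Map an alias to a concrete model id; unknown names pass through."""
    return MODEL_ALIASES.get(name, name)

print(resolve_model("latest"))        # the alias indirection in action
print(resolve_model("my-fine-tune"))  # non-aliases are returned unchanged
```

Projects then pin "latest" in their config, and a model bump becomes a one-line change.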