Distractosphere

1.8K posts


@Distractosphere

what a time to be alive

Null Island · Joined August 2015
427 Following · 155 Followers
Pinned Tweet
Distractosphere
Distractosphere@Distractosphere·
Alignment
2
3
9
3.9K
Tanay
Tanay@TanayAyitmaz·
Today Google announced TurboQuant KV-Cache compression and management. Everyone shared it, but how many people across the whole AI ecosystem can actually test it or clearly understand what it is? I could comfortably apply TurboQuant and RR-QJL to my own LM model; in fact I have already queued them in my docs and flagged them as test/development. TanAI already has KV-Cache management / context management / Hot-Window / Adaptive RAG (these are running), plus Graph RAG architectures, also on the add-on list. The plan is ready: first I will run a full metric test on the current single GPU, testing TurboQuant as 16-bit -> 8-bit quantize -> 4-bit quantize -> aggressive 3-bit TurboQuantize. After the KV-Cache step, the goal on the retrieval side will be shrinking the DB/vector memory, lowering ANN lookup cost, and running the same Big-Memory Substrates on the same hardware. Naturally, in the KV-Cache the Keys and Values get compressed while the Query stays unchanged. Then I can see what changes against the initial metrics. Now, how many companies or developers in the TR AI ecosystem can discuss these things and say "I tried it on this model" or "I tried it in my own architecture"? I'm not going to answer that... Which is why the AI ecosystem in TR won't move past using chatbots for a very, very long time! While the model's base training runs, I'll go throw a Hadoken with Ryu in Street Fighter... This output is the KV-Cache-management Base-LM output; the Prefill/Decode tok/s run is long and will drop.
Tanay tweet media
5
3
46
6.5K
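The quantization sweep described above (quantize Keys and Values at 8, 4, then 3 bits while leaving Queries in full precision) can be sketched generically. TurboQuant and RR-QJL are not public here, so this is a plain symmetric round-to-nearest quantizer on synthetic K/V tensors, purely to illustrate how the per-bit-width error comparison in such a test plan might be measured:

```python
# Hedged sketch: uniform per-channel quantization of KV-cache tensors.
# This is NOT TurboQuant; it is a generic symmetric quantizer used only
# to illustrate the 8 -> 4 -> 3 bit sweep from the post. Shapes are
# illustrative assumptions (heads, seq_len, head_dim).
import numpy as np

def quantize_dequantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric round-to-nearest quantization over the last axis."""
    qmax = 2 ** (bits - 1) - 1                    # e.g. 127 for 8-bit
    scale = np.abs(x).max(axis=-1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)      # guard all-zero rows
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                              # dequantized approximation

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 128, 64)).astype(np.float32)  # cached Keys
V = rng.standard_normal((8, 128, 64)).astype(np.float32)  # cached Values
# Note: Queries are deliberately left untouched, as in the post.

for bits in (8, 4, 3):                            # the planned sweep
    err_k = np.abs(K - quantize_dequantize(K, bits)).mean()
    err_v = np.abs(V - quantize_dequantize(V, bits)).mean()
    print(f"{bits}-bit  mean |err|  K={err_k:.4f}  V={err_v:.4f}")
```

As expected, reconstruction error grows as the bit width shrinks; in a real test the same comparison would be run on downstream quality metrics and prefill/decode tok/s rather than raw tensor error.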
Lossfunk
Lossfunk@lossfunk·
🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵
153
287
2.2K
1.2M
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️
I don't know much about this, so can someone explain to me the implications of products like Cerberus on NVIDIA? It's WAY faster inference. And WAY cheaper. So: 1. Is NVIDIA's advantage just in training now? 2. Is it impossible for Cerberus to do training as well?
4
1
17
4.7K
Daniel Sempere Pico
Daniel Sempere Pico@dansemperepico·
You guys all run Claude Code with claude --dangerously-skip-permissions right? Because otherwise how in the world can you sit there accepting every single permission when building something?
475
22
2.2K
286K
Terrible Maps
Terrible Maps@TerribleMaps·
I have a solution
Terrible Maps tweet media
6.5K
3.7K
51.5K
10.6M
tobi lutke
tobi lutke@tobi·
Autoresearch works even better for optimizing any piece of software. make an auto folder, add program.md and bench script, make a branch and let it rip.
70
67
1.7K
121.3K
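The setup in the post above (an `auto` folder with a `program.md` and a bench script on a fresh branch) can be sketched in a few shell commands. The contents of `program.md` and `bench.sh`, and the branch name `autoresearch-run`, are placeholder assumptions, not anything tobi specified:

```shell
# Minimal sketch of the "auto folder" setup described in the post.
# File contents and the branch name are illustrative assumptions.
mkdir -p auto

cat > auto/program.md <<'EOF'
# Goal
Optimize the hot path without changing the public API.
Success = bench.sh runs faster with tests still green.
EOF

cat > auto/bench.sh <<'EOF'
#!/bin/sh
# Placeholder benchmark: replace with the real workload to optimize.
time python -c "sum(range(10**6))"
EOF
chmod +x auto/bench.sh

# Work on a throwaway branch so the agent can "let it rip" safely.
git checkout -b autoresearch-run 2>/dev/null || true  # no-op outside a repo
```

With this in place, an agent pointed at the repo has a written goal, a repeatable benchmark to optimize against, and an isolated branch to experiment on.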
dax
dax@thdxr·
what if the models haven't actually improved for months what if we're all just getting dumber
252
211
4.2K
132.3K
Distractosphere
Distractosphere@Distractosphere·
What they’re doing is undoubtedly valuable, but I don’t understand: if it’s sterile enough to require gloves and a gown, why aren’t a cap and a mask being worn as well? Hair or beard could shed, and saliva droplets could spread from the mouth if he sneezes, coughs, or even talks. If contamination isn’t a concern, then why the gloves and gown?
Distractosphere tweet media
0
0
2
737
The Rundown AI
The Rundown AI@TheRundownAI·
"Can it run DOOM?" was a joke for 30 years. A petri dish full of human skin cells just said yes. Cortical Labs trained 200,000 human neurons (!) to play the 1993 FPS game in a week:
43
184
1.3K
90K
Chris Tate
Chris Tate@ctatedev·
New experimental flag: agent-browser --native
Single Rust binary. Direct CDP.
→ No abstraction ceiling
→ Self-contained daemon
→ No Node.js process at runtime
→ Less memory, smaller footprint
→ Zero runtime dependencies, just the binary and a browser
Chris Tate tweet media
72
119
1.7K
159.3K
void
void@sakevoid·
hold on a second. everyone is running the GPU race, nvidia stocks, TPU clusters, the whole "I have 8xH100, what do you have" talk, and now someone has put real human neurons on a silicon chip and wired them to an LLM. 200,000 live human neurons, grown in a lab; first they had them play Pong, then DOOM, and now these neurons make every one of the LLM's token choices, with real electrical impulses. do you get it? AI is no longer exactly "artificial": biological neurons decide which word gets picked, and you can see which channel was stimulated. while everyone argues about "when will AGI arrive", someone has already merged biological intelligence with artificial intelligence. now think about it for a second: - GPUs are limited by energy consumption, while biological neurons run on incredibly little energy - silicon transistors have hit their physical limits, while neurons adapt on their own - and most critically: neurons learn, change, evolve, while the weights on a GPU just sit there. people are still debating "which model is better"; the real question now is "will computation be biological or digital". look up cortical labs; what we are seeing here may be the start of a new chapter in our history.
void tweet media
chiefofautism@chiefofautism

someone connected LIVING BRAIN CELLS to an LLM. Cortical Labs grew 200,000 human neurons in a lab and kept them alive on a silicon chip. They taught the neurons to play Pong, then DOOM; now someone wired them into an LLM... real brain cells firing electrical impulses to choose every token the AI generates. You can see which channels were stimulated, the feedback from the neurons in choosing that letter or word.

34
63
1.1K
178.2K
Julien Chaumond
Julien Chaumond@julien_c·
Only 128GB max RAM on the M5 Max 😢
61
8
369
54.8K
Distractosphere
Distractosphere@Distractosphere·
@zephyr_z9 They tried to compete with companies like OpenAI, DeepSeek, Anthropic, and ZAI ... by releasing their own models. But their models are not very good, and in the end they just end up as something like Amazon, with a slight difference.
0
0
0
1.4K
Distractosphere
Distractosphere@Distractosphere·
@kimmonismus How to win back the customers you lose, 101: bombard them with features and new releases. They will come back. Then lobotomize those new models once saturation is reached…
0
0
0
408
Chubby♨️
Chubby♨️@kimmonismus·
Seriously, I don’t get it. - Today, GPT-5.3 Instant is being released. - The blog post states at the very bottom that 5.3 Thinking and Pro will also be released very soon. - An hour later, the official OpenAI X account tweeted that GPT-5.4 will be released very soon. ??? So in a few days we get: GPT-5.3 Thinking + Pro + GPT-5.4 (???) Instant/Thinking/Pro?
236
71
2.8K
304K
ℏεsam
ℏεsam@Hesamation·
“Claude is down. time to go back to hand coding.”
42
301
4.3K
388K
Mic King
Mic King@iPullRank·
I wish all the model providers would have a model named "latest" or something like that so I don't have to go back and update model names in a ton of projects. Looking at you @OfficialLoganK
8
1
255
47.3K
Sam Altman
Sam Altman@sama·
Here is re-post of an internal post: We have been working with the DoW to make some additions in our agreement to make our principles very clear. 1. We are going to amend our deal to add this language, in addition to everything else: "• Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. • For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." It’s critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear, including around commercially acquired information. Just like everything we do with iterative deployment, we will continue to learn and refine as we go. I think this is an important change; our team and the DoW team did a great job working on it. 2. The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract. 3. For extreme clarity: we want to work through democratic processes. It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked, if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it). But 4. There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety. 
We will work through these, slowly, with the DoW, with technical safeguards and other methods. 5. One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future. In my conversations over the weekend, I reiterated that Anthropic should not be designated as a SCR, and that we hope the DoW offers them the same terms we’ve agreed to. We will host an All Hands tomorrow morning to answer more questions.
3.9K
639
6.1K
3.6M