
drorlb
2.2K posts

drorlb
@drorlb
Mathematician and programmer. Perhaps I should post something.



🚨 SUPER GEMMA 4 26B UNCENSORED IS INSANE LLM WIZARD COOKING AGAIN @songjunkr Dropped SuperGemma4-26B-Uncensored GGUF v2 and it’s trending on @huggingface🤗 This thing SMOKES the regular Gemma-4 26B: 🤯0/100 refusals (actually uncensored) 🚀Fixed all the tool-call + tokenizer jank ⚡️90% faster prompt processing 🏆Sharper, smarter, way more capable responses - Perfect local beast for llama.cpp ✅ Runs in ~18-22 GB VRAM (16.8 GB Q4_K_M file) - Run on 16 GB GPUs! The 31B version is in the works, should be out SOON 🤯 Pull this version on Hugging Face below 👇🏻




Good read. FV isn’t perfect but you should do it anyway




i didn’t realize how bad it was until i saw this comment section on instagram


Database table size impacts performance in more ways than one:

a) B-tree depth. Using 8k pages and a 16-byte uuid:
1 level = ~370 rows
2 levels = ~138k rows
3 levels = ~50m rows
4 levels = ~20b rows
The lookup cost on a table with 100k rows is not the same as on one with 1b rows. This applies both to the table itself (MySQL's clustered index) and to its secondary indexes. Sometimes a single query traverses many of them.

b) Small table → fits in RAM → fast reads. The larger the table, the more likely reads hit disk and churn the cache.

c) # of indexes. Each adds maintenance overhead on insertion, and for Postgres vacuum overhead as well.

Keep an eye on this! It's useful to take regular stock of your tables + indexes. Clean bloat. Remove unused indexes. Partition if needed.
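The depth figures in (a) can be sanity-checked with a back-of-the-envelope fanout calculation. A minimal sketch, assuming an 8 KB page, a 16-byte UUID key, and a rough ~6 bytes of per-entry overhead (child pointer + header share) — real engines vary in their exact per-page bookkeeping:

```python
# Rough B-tree capacity estimate (illustrative; real storage engines differ).
PAGE_SIZE = 8 * 1024          # 8 KB page
ENTRY_SIZE = 16 + 6           # 16-byte uuid key + assumed ~6 bytes overhead
FANOUT = PAGE_SIZE // ENTRY_SIZE  # entries per page, ~372

def max_rows(levels: int) -> int:
    """Approximate rows addressable by a B-tree of the given depth."""
    return FANOUT ** levels

for levels in range(1, 5):
    print(f"{levels} level(s) = ~{max_rows(levels):,} rows")
```

With these assumptions the output tracks the post's numbers: ~370, ~138k, ~51m, ~19b rows for depths 1 through 4, so each extra order of magnitude of rows eventually costs another page read per lookup.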


UPDATE: We were able to replicate the Mythos findings using existing models (GPT5.4). Writeup coming early next week; no BS prompts, it's a real reproduction.




@david_lisovtsev This is one of the most depressing things I've read.


Exactly. AI people don't just *predict* that AI will make humans unemployable. They make it their explicit corporate goal! OpenAI's corporate mission is to create "AGI", which they define as something that can do any human's job!