

drorlb
@drorlb
Mathematician and programmer. Perhaps I should post something.



SUPER GEMMA 4 26B UNCENSORED IS INSANE LLM WIZARD COOKING AGAIN

@songjunkr dropped SuperGemma4-26B-Uncensored GGUF v2 and it's trending on @huggingface. This thing SMOKES the regular Gemma-4 26B:
- 0/100 refusals (actually uncensored)
- Fixed all the tool-call + tokenizer jank
- ~90% faster prompt processing
- Sharper, smarter, way more capable responses

Perfect local beast for llama.cpp:
- Runs in ~18-22 GB VRAM (16.8 GB Q4_K_M file)
- Run on 16 GB GPUs!

The 31B version is in the works, should be out SOON. Pull this version on Hugging Face below.




Good read. FV isn't perfect, but you should do it anyway







i didn't realize how bad it was until i saw this comment section on instagram


Database table size impacts performance in more ways than one:

a) B-tree depth. With 8 KB pages and a 16-byte UUID key:
   1 level  = ~370 rows
   2 levels = ~138k rows
   3 levels = ~50M rows
   4 levels = ~20B rows
The lookup cost on a table with 100k rows is not the same as on one with 1B rows. This applies both to the table itself (MySQL's clustered index) and to its indexes, and a single query sometimes traverses many of them.

b) Small table → fits in RAM → fast reads. The larger the table, the more likely reads hit disk and churn the cache.

c) Number of indexes. Each one adds maintenance overhead on insertion, and for Postgres, vacuum overhead as well.

Keep an eye on this! It's useful to take regular stock of your tables and indexes: clean bloat, remove unused indexes, partition if needed.
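The per-level row counts in (a) follow directly from the page fanout. A minimal sketch, where the ~22-byte entry size (16-byte UUID key plus pointer/overhead) is an assumption chosen to match the numbers above:

```python
PAGE_SIZE = 8192   # 8 KB pages
ENTRY_SIZE = 22    # assumed: 16-byte UUID key + child pointer/overhead

fanout = PAGE_SIZE // ENTRY_SIZE  # ~372 entries fit in one page

# Max rows addressable by a B-tree of each depth
for levels in range(1, 5):
    print(f"{levels} level(s) = ~{fanout ** levels:,} rows")
```

This prints roughly 372 / 138k / 51M / 19B, matching the ~370 / ~138k / ~50M / ~20B figures in the post; real engines fill pages below 100% and add per-page headers, so treat these as order-of-magnitude estimates.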



UPDATE: We were able to replicate the Mythos findings using existing models (GPT5.4). Writeup coming early next week: no BS prompts, it's a real reproduction.




@david_lisovtsev This is one of the most amazing things I've read


Exactly. AI people don't just *predict* that AI will make humans unemployable. They make it their explicit corporate goal! OpenAI's corporate mission is to create "AGI", which they define as something that can do any human's job!