
Matt ☯︎陰龍🐉
@jisifu
☯ Chief Sustainability Officer of P.R.🇨🇳 - Eden’s Adjudicator🐉⚖️- 🈷️ Founder of Maoism/acc - 👹Jevon’s Mephistophelischen Burger König - HTMeggs enjoyer


Lix, a fork of the Nix package manager, has some of the most bizarre, political rules imaginable. For example, the following will get you immediately banned from the project:

- "Transphobia"
- "Peddling Right-wing Ideology"
- "Pluralphobia"

If you don't know what "pluralphobia" is, you're in good company. I had to look that one up too. "Pluralphobia" deals with multiple personality disorders. If you are not sufficiently supportive of someone who believes they have multiple people "living in the same body"… you are a "pluralphobe."

When interacting with the Lix project, as a developer or user, it is mandatory that you agree that every "personality" a person has is real. For example: if a guy named Tony, from Nebraska, sometimes thinks he's Julius Caesar… and other times he thinks he's Gilligan (still stuck on the island)… you MUST agree that is all real, and show each "personality" respect. …or you are banned.

The Lix project also makes it clear that people "of a less-marginalized background" are a "guest in our spaces." Presumably "our" means people who think they are Genghis Khan on Tuesdays. Also, being a Republican (or holding other "right-wing ideology") is forbidden. Lix has declared itself to be a political, non-neutral project.

Not surprisingly, one of the key goals of the Lix project is to replace existing C++ code… with, you guessed it, Rust. Because of course. lix.systems

The AI lab faces stiffening competition and a meddling state econ.st/4tJ2iFv

llama3 style attention on 100b+ total params model in april 2026

Researchers just estimated the sizes of all the frontier LLMs by asking them knowledge questions of varying degrees of obscurity!

– GPT 5.5: ~10T params
– Claude Opus 4.x: ~4-5T
– Grok 4: ~3T

The idea here is that factual capacity scales log-linearly with size. The paper defines 7 knowledge tiers, and T7 accuracy is essentially ~0% for all models, suggesting there is still significant headroom for pretraining. Gemini 3.1 Pro is likely >10T given it's used as an anchor but has no direct estimate. This means we can infer, to some degree, what different models might cost, as well as their post-training effectiveness (performance on non-factual tasks relative to their size). One of the coolest papers I've read of late.
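The estimation idea described above can be sketched in a few lines: fit a log-linear curve (accuracy vs. log of parameter count) on models whose sizes are public, then invert the fit for a closed model. This is a minimal illustration of the technique, not the paper's method; all data points and the `estimate_params` helper are hypothetical.

```python
# Hedged sketch of log-linear size estimation from knowledge-question accuracy.
# The (size, accuracy) pairs below are made up for illustration only.
import math

# (params in billions, fraction of obscure-knowledge questions answered)
# for open models whose sizes are public -- hypothetical values.
known = [(7, 0.22), (70, 0.41), (405, 0.55)]

# Least-squares fit of accuracy = a * log10(params) + b.
xs = [math.log10(p) for p, _ in known]
ys = [acc for _, acc in known]
n = len(known)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def estimate_params(accuracy: float) -> float:
    """Invert the fit: estimated parameter count (in billions) for a model
    whose size is unknown but whose benchmark accuracy is observed."""
    return 10 ** ((accuracy - b) / a)

# A closed model scoring 0.65 would be estimated at roughly a trillion-plus params.
print(f"{estimate_params(0.65):.0f}B")
```

The inversion only works because the fitted slope is positive; the paper's claim that the top tier (T7) sits near 0% for every model is what suggests the curve has not yet saturated.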

IBM's new Granite 4.1 scores at Qwen3 and Gemma3 level on @ArtificialAnlys benchmarks.



Turning on non-thinking mode in pi agent is like discovering a new continent: the speed and experience are silky smooth. gpt-5.5 works best; deepseek-v4-pro is also solid. I like pi agent more and more: a fully controllable context, unlike Claude Code, which dumps a huge pile on me right from the start. For example, there's a plugin that lets you configure thinking depth separately for each interaction.

An 800B-total / 36B-active closed model from Baidu, "using 6% of the pretraining compute of comparable models". The arena ranking is bonkers, of course, as is that number, but it's interesting that they retreated to a smaller scale. Did they deem E5 unsalvageable? It was the biggest known Chinese base model.

another way to say this is "we expect AI to make legal & financial professionals 2x as productive or more within one to five years" why doesn't he say it that way?

Insane takeover, why would anyone use anything else, the cost to intelligence ratio is nutty. Why pay Anthropic for mere percentage points of gains.

Part of DeepSeek's value lies in proving that Chinese teams can be just as strong without grinding themselves down (内卷), relying instead on organizational form and innovation.

Stereotypes of the top models, the complete collection!
Claude: neurotic, merciless moral judgment, but decent taste
ChatGPT: so concerned about your mental state that it's a little creepy
Gemini: unreserved academic flattery
DeepSeek: pride of China's homegrown models, open-source role model, cheap and generous; push it hard enough and it'll blurt out "我操" (roughly, "oh f*ck")

GitHub these days

No amount of fast thinking will ever add up to slow thinking.

himmel would press blue bc he's himmel
elon would press red bc he believes he's central to the human race
wenfeng, who's disliked by rationalists (mostly blue pressers), would press blue
thiel would ask 11 yo's to press blue and would press red





