
Phovos
1.5K posts

Phovos
@Phovso
Pronounced: "F_Oh"+"v_Oh_s" You may know me from: YouTube/Reddit Interests: Guitar, (learning) Mandarin, Compilers, ML/AI, Philosophy, History, Occult, AdS/CFT, Topos



Surveillance footage showed chaotic scenes at Iranian hospitals as strikes rained down on the country. One scene captured a nurse rescuing three newborn babies at a hospital in Tehran and carrying them to safety. Follow live updates on the war in Iran: abcnews.link/YA2Obmy





This is how insanely stupid the market is. Intel is an American-made chip manufacturer in the age of automation and AI. If a war starts in Taiwan, Nvidia goes tits up and Intel regains the throne. Intel's recent 15,000-job cut is a sign that the automation they created is actually working. Instead, morons will buy cryptocurrencies, i.e. virtual nothing. We deserve what's coming.




Wow, this is huge: after months of speculation and the U.S. running a massive pre-emptive discrediting campaign (x.com/RnaudBertrand/…), DeepSeek-V4 is finally out! I haven't studied it in depth, but here are the most striking aspects as far as I can tell:

- Fully open-sourced with open weights (available for download on huggingface: huggingface.co/deepseek-ai)

- Zero CUDA dependency anywhere in its stack, which is probably the biggest deal of all. For those who don't know, CUDA is Nvidia's software layer, the foundation nearly every frontier AI model in the world is built on. Except, as of today, DeepSeek-V4, which can run entirely on Huawei Ascend chips via Huawei's CANN framework (finance.yahoo.com/sectors/techno…). Very concretely, it means China now has not only its own frontier AI models but its own domestic AI stack, top to bottom.

- The prices are insanely low. V4-Pro is roughly 3x cheaper than GPT-5.5 on input and 8.6x cheaper on output. And V4-Flash is an order of magnitude cheaper still, at $0.14/$0.28 per million tokens vs OpenAI's $5/$30, i.e. 30-100x cheaper than GPT-5.5 (!). And remember, these are the prices DeepSeek charges on its own API; anyone can download the weights and run them for "free" on their own server.

- It is at or near the frontier on most benchmarks that matter. V4-Pro-Max matches or beats GPT-5.4 and Claude Opus 4.6 on competitive programming (Codeforces rating 3206), coding (LiveCodeBench 93.5), and math (HMMT 95.2, IMO AnswerBench 89.8). It trails the very newest GPT-5.5 and Opus 4.7 on a handful of the hardest agentic and knowledge benchmarks, but it's in the same league.

In effect, the value proposition is: "Same league as frontier US AI, at a fraction of the price, open-source and freely modifiable, and hardware-agnostic: you can run it on whatever infrastructure you choose." Which is insanely good. I now understand the need for a pre-emptive discrediting campaign: they had every reason to be worried.
For the vast majority of use cases, you'd have to be a literal idiot to keep paying OpenAI or Anthropic's prices when this exists.
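A quick sanity check on the price ratios quoted in the post above. This is just arithmetic on the post's own per-million-token figures ($0.14/$0.28 for V4-Flash vs $5/$30 for GPT-5.5); the model names and prices are taken from the post as claimed, not independently verified:

```python
# Per-million-token API prices as quoted in the post (not independently verified).
gpt55 = {"input": 5.00, "output": 30.00}    # $/1M tokens, GPT-5.5 (as quoted)
v4_flash = {"input": 0.14, "output": 0.28}  # $/1M tokens, DeepSeek V4-Flash (as quoted)

# How many times cheaper V4-Flash is, per token direction.
for kind in ("input", "output"):
    ratio = gpt55[kind] / v4_flash[kind]
    print(f"{kind}: {ratio:.1f}x cheaper")
# input  -> ~35.7x cheaper
# output -> ~107.1x cheaper
```

So the post's rough "30-100x cheaper" characterization is in the right ballpark: about 36x on input and just over 100x on output, given the quoted prices.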
