Nigmat


@thegenioo definitely 5.5. I'm not a billionaire, no chance to use Opus 4.7

@abeni_t @thsottiaux Codex with 5.5 is strong as f. I write agentic skills for my research area (bioinformatics); with that, the main agent can keep watching the whole process and record every mistake and what the sub-agents actually do. When I wake up I can see the new article and new ideas

@iruletheworldmo It's so cheap that I have no reason to try others

this is a great read, the image is pretty cool too.
deepseek is a serious ‘threat’, or competitor if you will. make sure to bookmark.
it’s not quite sota but it’s so cheap it may not matter.
Matthew Berman@MatthewBerman

@bridgemindai They put most of the effort into the KV cache; capability might improve in the next few months with stronger post-training

DeepSeek V4 Pro debuted at #14 on LMArena Code.
Remember when DeepSeek was supposed to be the future of open source AI?
The model that was going to compete with frontier labs?
Ranked below a $0.33 input model from Alibaba.
DeepSeek fell off hard.


@victor207755822 Definitely, AGI belongs to everyone, unlike Anthropic. That ego-driven company acts like it knows everything

DeepSeek-V3: Dec 26, 2024
DeepSeek-V4: Apr 24, 2026
484 days later, we humbly share our labor of love.
As always, we stay true to long-termism and open source for all.
AGI belongs to everyone. ❤️🌍
#DeepSeekV4 #AGIforEveryone #OpenSource
DeepSeek@deepseek_ai
🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length. 🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models. 🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice. Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today! 📄 Tech Report: huggingface.co/deepseek-ai/De… 🤗 Open Weights: huggingface.co/collections/de… 1/n

@Brooooook_lyn Yes. I tried it on the web maybe 2 weeks ago. It's freaking fast

@iruletheworldmo @sama finally, I hope u can get back to ur normal nose tdy

@chadptg @OmniScopeBio @chatgpt21 it's their internal Codex, which will be totally different from the one we use, and its models will be different too. Internally it seems they are still using 5.4 xhigh, probably because Spud is not really good at coding but good at emotional intelligence and creativity, similar to 4o

Fully porting Spud into Codex

OpenAI Developers@OpenAIDevs
@embirico just some very important planning 😉

@chadptg @chatgpt21 BTW, was Heavy Thinking (Web) already a thing before? I just noticed it, but I’d never used it. Is it new, or did I somehow miss it earlier?

@chadptg @chatgpt21 Chill... just kidding. What I’m saying is, they’re still not smart enough to port Spud into Codex using Spud from the cloud. Like, you shouldn’t need to import Spud into Codex if you can just rely on Spud's own ability