Nigmat
@OmniScopeBio

120 posts
Joined November 2025
56 Following · 5 Followers

Nigmat
Nigmat@OmniScopeBio·
too ez for gpt in codex now. toooooo ez
Nigmat tweet media
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 9
Nigmat
Nigmat@OmniScopeBio·
@thegenioo Definitely 5.5. I'm not a billionaire, no chance to use Opus 4.7.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 349
Hamza
Hamza@thegenioo·
Is it GPT-5.5 vs Opus-4.7 or GPT-5.5 Pro vs Opus-4.7 ??
Replies: 6 · Reposts: 2 · Likes: 37 · Views: 9.1K
Nigmat
Nigmat@OmniScopeBio·
@abeni_t @thsottiaux Codex with 5.5 is strong as f. I write agentic skills for my research area (like bioinformatics), and with that the dominant agent can keep watching the whole process and record every mistake and what they actually do. When I wake up I can see the new article and new ideas.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 77
Tibo
Tibo@thsottiaux·
Looking at the traffic dashboard for Codex just now, it would be scary if we didn't have a lot more compute coming online in the coming weeks. All according to plan fortunately.
Replies: 251 · Reposts: 101 · Likes: 4.9K · Views: 195.8K
Nigmat
Nigmat@OmniScopeBio·
@sama Absolutely. Agent-friendliness is the main topic for the near future; compute should not be wasted on useless GUI design that exists just for entertainment. It's not effective.
Replies: 1 · Reposts: 0 · Likes: 7 · Views: 1.6K
Sam Altman
Sam Altman@sama·
feels like a good time to seriously rethink how operating systems and user interfaces are designed (also the internet; there should be a protocol that is equally usable by people and agents)
Replies: 1.8K · Reposts: 787 · Likes: 12.5K · Views: 1.5M
Nigmat
Nigmat@OmniScopeBio·
Doing research that integrates MoE in oncology; hope there will be a good outcome.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 6
Nigmat
Nigmat@OmniScopeBio·
@bridgemindai They put most of their effort into the KV cache; capability might improve in the next few months with stronger post-training.
Replies: 5 · Reposts: 0 · Likes: 2 · Views: 775
BridgeMind
BridgeMind@bridgemindai·
DeepSeek V4 Pro debuted at #14 on LMArena Code. Remember when DeepSeek was supposed to be the future of open source AI? The model that was going to compete with frontier labs? Ranked below a $0.33 input model from Alibaba. DeepSeek fell off hard.
BridgeMind tweet media
Replies: 25 · Reposts: 0 · Likes: 151 · Views: 14.4K
Nigmat
Nigmat@OmniScopeBio·
@victor207755822 Definitely, AGI belongs to everyone, not like Anthropic. That ego-driven company acts like it knows everything.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 317
Deli Chen
Deli Chen@victor207755822·
DeepSeek-V3: Dec 26, 2024
DeepSeek-V4: Apr 24, 2026
484 days later, we humbly share our labor of love. As always, we stay true to long-termism and open source for all. AGI belongs to everyone. ❤️🌍
#DeepSeekV4 #AGIforEveryone #OpenSource
DeepSeek@deepseek_ai

🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.
🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.
Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!
📄 Tech Report: huggingface.co/deepseek-ai/De…
🤗 Open Weights: huggingface.co/collections/de…
1/n

Replies: 352 · Reposts: 1.3K · Likes: 13.1K · Views: 1M
Chris
Chris@chatgpt21·
1. What?
Chris tweet media
Replies: 4 · Reposts: 1 · Likes: 65 · Views: 4.3K
Nigmat
Nigmat@OmniScopeBio·
@Brooooook_lyn Yes. I tried it on the web maybe 2 weeks ago. It's freaking fast.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 3.8K
Broooooklyn
Broooooklyn@Brooooook_lyn·
DeepSeek v4 in API? So fucking fast 🤯 Ascend or Nvidia?
Replies: 23 · Reposts: 20 · Likes: 390 · Views: 97.2K
🍓🍓🍓
🍓🍓🍓@iruletheworldmo·
@sama i’m surprised it could get this much better. makes you think what future intelligence can unlock. helluva week sammy. happy birthday btw.
Replies: 3 · Reposts: 1 · Likes: 56 · Views: 3.6K
Sam Altman
Sam Altman@sama·
Images 2.0 really got over some important qualitative threshold for me that I didn't know existed.
Replies: 567 · Reposts: 137 · Likes: 4.9K · Views: 297.1K
Nigmat
Nigmat@OmniScopeBio·
@sama just do it!!!!
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 13
Sam Altman
Sam Altman@sama·
we love seeing our users win. we want to give you the best tools, lots of compute, and watch you do the magic.
Replies: 1.4K · Reposts: 508 · Likes: 11.8K · Views: 651.7K
Nigmat
Nigmat@OmniScopeBio·
Show us more SAMA, just smash all of us
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 7
Kevin 🍓
Kevin 🍓@blueshades2020·
@chadptg @OmniScopeBio @chatgpt21 It's their internal Codex, which will be totally different from what we use, and its models will be different too. Internally it seems they are still using 5.4 xhigh, probably because Spud is not really good at coding but is good at emotional intelligence and creativity, similar to 4o.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 84
Nigmat
Nigmat@OmniScopeBio·
@nicdunz Elon says Grok 5 will be AGI, so for now just call it AGI 0.43.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 70
nic
nic@nicdunz·
if they don't call tomorrow AGI 0.5 I will be disappointed
Replies: 6 · Reposts: 0 · Likes: 59 · Views: 2.8K
Nigmat
Nigmat@OmniScopeBio·
@chadptg @chatgpt21 BTW, was Heavy Thinking (Web) already a thing before? I just noticed it, but I’d never used it. Is it new, or did I somehow miss it earlier?
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 18
Nigmat
Nigmat@OmniScopeBio·
Uhhhh, I don’t remember having access to Thinking Heavy. Maybe I just haven’t used Thinking on web in a while. Can someone tell me when they rolled it out?
Nigmat tweet media
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 7
Nigmat
Nigmat@OmniScopeBio·
@chadptg @chatgpt21 Chill... just kidding. What I'm saying is, they're still not smart enough to port Spud into Codex using Spud from the cloud. Like, you shouldn't need to import Spud into Codex if you can just rely on Spud's ability.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 61