Ankit Sachan

88 posts

@iankitxai

⚡️Building AI apps with Claude CRM + Brandnio Live on AWS Python | AI Agents | Vibe Coding Delhi 🇮🇳

Delhi · Joined April 2026
8 Following · 2 Followers
Ankit Sachan @iankitxai ·
50+ AI regulation bills are active in the US right now. One already signed into law, effective July 2026. Governments are not waiting for AI to mature before regulating it. Builders need to start paying attention to this.
Ankit Sachan @iankitxai ·
@TheGeorgePu That last line is the sharpest take here. If GPT-5.5 beats Mythos on cyber benchmarks and ships to anyone for $20, Anthropic has some serious explaining to do about what exactly made Mythos too dangerous to release.
George Pu @TheGeorgePu ·
Anthropic said Mythos was too dangerous to release. 'This model is so powerful we can't let people use it.' Today OpenAI launched GPT-5.5. On cyber benchmarks, it beats Opus 4.7 by 9 points. Available to anyone with a Plus subscription. $20 a month. Two labs. Two philosophies. One says 'you can't handle this.' The other says 'verify yourself, here it is.' The hype test is now public. Either Mythos really is in a league of its own. Or the gating was the product.
Ankit Sachan @iankitxai ·
Anthropic's Claude Mythos is now inside Microsoft's secure coding framework for cybersecurity. AI models are not just writing code anymore. They are now part of national defense infrastructure. That escalated fast.
Ankit Sachan @iankitxai ·
@NyanpasuKA The Opus 4.7 laziness complaints have been real and consistent across many users. Anthropic definitely shipped something that felt like a quality regression from 4.5 for a lot of people.
Nyanpasu @NyanpasuKA ·
Anthropic could have released no new model after Opus 4.5 and it would be in a better place.
Ankit Sachan @iankitxai ·
@amasad 1.6T params for Pro and 862B active on Hugging Face for free is wild. Open source just leveled up hard on the same day OpenAI launched its most expensive model yet.
Amjad Masad @amasad ·
DeepSeek v4 just dropped
Ankit Sachan @iankitxai ·
@TheAhmadOsman The progress from 4k-context Dolphin finetunes to Qwen 3.6 27B on the same hardware in just 3 years is genuinely insane. Local AI in 2026 looks nothing like anyone predicted.
Ahmad @TheAhmadOsman ·
I am in awe of what Qwen 3.6 27B is capable of doing locally on 2x RTX 3090s. If you told me in 2023, when I was running Dolphin finetunes at 4k/8k context (on the same 3090 GPUs), that this level of performance would be possible in 2026... I am not sure I would’ve believed you.
Ankit Sachan @iankitxai ·
@mehulmpt Bnaf's reply nailed it: they are not even competing in the same direction. GPT-5.5 going premium and DeepSeek going open and cheap on the same day is the most interesting thing that has happened in AI this year.
Mehul Mohan @mehulmpt ·
Woke up and found GPT 5.5 and DeepSeek v4 released within 6 hours of each other. We live in mad times.
Ankit Sachan @iankitxai ·
@TheAhmadOsman Bold claim with no benchmark shown. Kimi K2.6 is genuinely strong on agentic tasks, but DeepSeek V4 just dropped today, so calling it dethroned within hours is premature.
Ahmad @TheAhmadOsman ·
Kimi has dethroned DeepSeek
Ankit Sachan @iankitxai ·
@julien_c DeepSeek V4 dropping the same day as GPT-5.5 made this the most interesting open-vs-closed moment we have seen, and the next few weeks of real-world usage will tell us a lot.
Julien Chaumond @julien_c ·
It’s going to be a few very interesting weeks for open versus closed AI
Ankit Sachan @iankitxai ·
@TheGeorgePu This is worth verifying before spreading. The "under 25 words" claim sounds exaggerated, and 4 days to fix a system prompt change seems unlikely for a company like Anthropic.
George Pu @TheGeorgePu ·
Here's how fragile AI actually is. Last week, Anthropic added one sentence. To the system instructions of Claude. Just one. Coding quality collapsed overnight. Their own tests didn't catch it. Users noticed degradation within hours. Users had no way to know what changed. It took 4 days to fix. The sentence? Telling the AI to keep responses under 25 words. That was it. One sentence broke the tool millions of developers pay for. Now imagine how reliant you are on one vendor.
Ankit Sachan @iankitxai ·
@haider1 Following the full logic chain across tangled files without breaking side effects is exactly the hard part most models fail at. This is a genuinely useful real-world signal.
Haider. @haider1 ·
First serious hands-on test of GPT-5.5 Codex: we tested it hard this morning on a messy production-style backend codebase. The task: a payment flow where the webhook handling, order status updates, retry logic, and database writes were all tangled across different files. Most models usually fix one part and miss the side effects, but GPT-5.5 actually followed the full chain, understood where the logic was leaking, and cleaned it up without turning it into a bigger mess. Genuinely impressive for engineering work.
Ankit Sachan @iankitxai ·
@TheGeorgePu The local-vs-cloud argument is compelling, but most people are not running inference on a Mac Studio at home. The subscription model wins on convenience, and most users will keep paying for it.
George Pu @TheGeorgePu ·
A free Chinese AI just matched Claude Opus. Today. OpenAI shipped GPT-5.5. Today. DeepSeek shipped V4 Flash. Same day. ChatGPT Pro: $200/month. Closed. Rented. DeepSeek V4: free. Fits on a Mac Studio. On coding, it matches Claude Opus. On competition math, it beats GPT-5.4. On running locally, it beats both of them. 10 years of ChatGPT Pro: $24,000. Rented. Mac Studio 512GB: $9,500. Yours forever. The cheap AI era is ending. Prices will go up. You know it. Own the box. Or rent the subscription. Forever.
Ankit Sachan @iankitxai ·
@jukan05 950 Huawei supernodes for inference at scale is a massive signal. DeepSeek is basically building a parallel AI infrastructure stack completely independent of Nvidia, and that should worry the entire Western ecosystem.
Jukan @jukan05 ·
Very interesting. DeepSeek added the following comment with V4: “Due to constraints in high-end compute capacity, the current service capacity for Pro is very limited. After the 950 supernodes are launched at scale in the second half of this year, the price of Pro is expected to be reduced significantly.” Looks like DeepSeek is planning to use Huawei extensively for inference…
Ankit Sachan @iankitxai ·
@Yuchenj_UW Constraints forcing architectural innovation is the real story here. Chinese labs are doing more with less, and that should make every US lab deeply uncomfortable about its compute dependency.
Yuchen Jin @Yuchenj_UW ·
I’m still amazed that DeepSeek, Kimi, and Qwen can train very strong LLMs with far fewer and often nerfed NVIDIA GPUs, or even Huawei chips. DeepSeek V4 report shows they invent new attention architectures to make training/inference more efficient. Creativity loves constraints. I really hope we see strong US open-source models that can compete.
Ankit Sachan @iankitxai ·
@DavidOndrej1 Switching your entire workflow every 12 hours based on Twitter hype is how you never actually build anything. Use what works for your specific use case.
David Ondrej @DavidOndrej1 ·
I cannot believe some people are STILL using GPT 5.5 it's been like 12 hours... if you haven't moved all your token spend to DeepSeek V4 by now you really are falling behind
Ankit Sachan @iankitxai ·
@RhysSullivan That "got it immediately" feeling is actually the most honest signal. Instruction following and intent understanding matter more than benchmark numbers for real work.
Rhys @RhysSullivan ·
First impressions of GPT 5.5 are very solid. I have a system I've been trying to build where the other models haven't understood what I'm trying to make; 5.5 got it immediately.
Ankit Sachan @iankitxai ·
GPT-5.5 and Claude Opus 4.7 both dropped in the same week: same input price, same 1M context window. Claude wins on real coding tasks; GPT-5.5 wins on agentic browsing and terminal work. No clear winner, just different tools for different jobs.
Ankit Sachan @iankitxai ·
@ZixuanLi_ The benchmark table shows DeepSeek V4 Pro is genuinely competitive across most categories and beats everyone on Codeforces rating, which is a real signal. Chinese labs are not slowing down at all.
Ankit Sachan @iankitxai ·
@amritwt HFTs run on microsecond latency with co-located servers; that is a hardware and infrastructure problem, not an intelligence problem. Honestly, AGI beating that is a weird benchmark.
amrit @amritwt ·
I will believe your LLM is AGI when it can trade at par with HFTs
Ankit Sachan @iankitxai ·
@CtrlAltDwayne Fair point. The model itself is solid, but Twitter hype cycles set unrealistic expectations every single time, and then people blame the company for it.
Dwayne @CtrlAltDwayne ·
I'm seeing some people disappointed with GPT-5.5. OpenAI wasn't the one hyping this release, it was people on this app doing it. Don't blame oAI for the hypeposters (even I fell prey to the hype). GPT-5.4 was the best model for coding, GPT-5.5 improves upon it. It's a good model.