Satvik Paramkusham

703 posts

@satvikps

Teaching Gen AI at @BuildFastWithAI | Building @unrot_co

Bangalore · Joined March 2023
904 Following · 1.2K Followers
Satvik Paramkusham @satvikps ·
> be Cursor valued at $29 billion
> making ~$167M a month
> decide you need your own "proprietary" coding model to compete with Anthropic and OpenAI
> March 19: launch "Composer 2" and position it as an in-house breakthrough, everyone's hyped
> less than 24 hours later developer leaks the internal model ID "accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast"
> bro that's literally "Kimi K2.5 with RL" in the model name, didn't even bother renaming it lmao
> Moonshot AI's head of pretraining runs a tokenizer test, result: IDENTICAL to Kimi's tokenizer, publicly tags Cursor's co-founder on X: "why aren't you respecting our license, or paying any fees?"
> the license literally says if you make over $20M/month you must prominently display "Kimi K2.5". Cursor is 8x the threshold, zero attribution anywhere
> it gets worse: users found Kimi K2.5 listed as a FREE model in Cursor's model picker back in February, then one update later it vanished. five weeks later it reappears as "Composer 2" - their flagship "proprietary" model
> Cursor co-founder eventually admits it: "it was a miss to not mention the Kimi base in our blog" "we'll fix that for the next model"
[image]
Satvik Paramkusham retweeted
Build Fast with AI @BuildFastWithAI ·
There are plenty of AI SDKs & frameworks to use, but nobody really knows how they actually fit together - and that's where real AI systems get built. Actually, you don't have to pick one. You can layer them.
SDKs → fast, direct model calls
LangGraph → multi-step logic with memory
CrewAI → multiple agents, different roles
MCP → connects AI to your actual tools and data
Here's an example of an AI stack you can build 👇
What's your favorite AI SDK or framework? 💬
[image]
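The four layers above compose rather than compete. A minimal sketch of that layering in plain Python — these classes are hypothetical stand-ins, not the real LangGraph/CrewAI/MCP APIs, and only illustrate how the levels stack:

```python
def sdk_call(prompt: str) -> str:
    """Layer 1 (SDK): a fast, direct model call. Stubbed here."""
    return f"answer to: {prompt}"

class GraphStep:
    """Layer 2 (LangGraph-style): multi-step logic with shared memory."""
    def __init__(self):
        self.memory: list[str] = []

    def run(self, prompt: str) -> str:
        result = sdk_call(prompt)   # each step still bottoms out in an SDK call
        self.memory.append(result)  # ...but state persists between steps
        return result

class RoleAgent:
    """Layer 3 (CrewAI-style): an agent with a distinct role."""
    def __init__(self, role: str):
        self.role = role
        self.steps = GraphStep()

    def work(self, task: str) -> str:
        return self.steps.run(f"[{self.role}] {task}")

def mcp_tool(name: str, query: str) -> str:
    """Layer 4 (MCP-style): a bridge to your actual tools and data. Stubbed."""
    return f"{name} results for {query!r}"

# Compose the layers: an agent whose task includes tool output.
researcher = RoleAgent("researcher")
context = mcp_tool("search", "AI SDK comparison")
report = researcher.work(f"summarize: {context}")
```

The point of the sketch is the direction of dependency: agents wrap multi-step graphs, graphs wrap raw SDK calls, and MCP feeds external data in from the side.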
Satvik Paramkusham @satvikps ·
MiniMax M2.7 is here—the model that helped train itself 🔄
• SWE-Pro: 56.22% (matches GPT-5.3-Codex)
• Pricing: $0.30/1M input, $1.20/1M output
• Free cache reads
• 60-100 tps output
The kicker: it handled 30-50% of its own RL workflow. Here's how to use it in 30 seconds 👇
[image]
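A back-of-envelope check of what the quoted pricing ($0.30 per 1M input tokens, $1.20 per 1M output tokens) works out to in practice — the token counts below are made-up illustrative numbers, not benchmarks:

```python
INPUT_PER_M = 0.30   # USD per 1M input tokens (quoted above)
OUTPUT_PER_M = 1.20  # USD per 1M output tokens (quoted above)

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Total cost of one session at the quoted per-million rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# e.g. a hypothetical agent session with 2M tokens in, 500k tokens out:
session = cost_usd(2_000_000, 500_000)  # 0.60 + 0.60 = 1.20 USD
```

Note this ignores cache reads, which the post says are free — so real agent loops that re-read context would come in cheaper than this linear estimate.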
Satvik Paramkusham @satvikps ·
🤯 What a wild week for LLM releases.
→ Xiaomi MiMo-V2-Omni
→ OpenAI GPT-5.4 Mini
→ Nvidia Nemotron 3 Super
→ xAI Grok 4.20
→ MiniMax M2.7
→ GLM-5 Turbo
Every single one is built for AI agents. Not chatbots. Agents.
Satvik Paramkusham @satvikps ·
GLM-5-Turbo isn't a chatbot with agent features bolted on. It was "trained from scratch for agent workflows"
→ 0.67% tool call error rate 🤯
→ 200K context window
→ 128K max output
→ 200+ tokens/sec throughput
First agent-native LLM. Tool calling with GLM snippet 👇
[image]
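The tool-calling snippet referenced above lives in the attached image. As a hedged sketch, this is what such a request would look like *if* GLM-5-Turbo is served behind an OpenAI-compatible chat-completions endpoint — the model name, tool schema, and endpoint shape here are illustrative assumptions, not confirmed API details:

```python
import json

# Request body for an OpenAI-style /chat/completions endpoint with tools.
payload = {
    "model": "glm-5-turbo",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Bangalore?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Fetch current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

body = json.dumps(payload)  # this JSON is what you would POST
```

A low tool-call error rate like the quoted 0.67% would show up here as the model reliably emitting arguments that validate against the `parameters` JSON schema.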
Satvik Paramkusham @satvikps ·
GPT-5.4-mini is OpenAI's subagent specialist 🎯
→ SWE-Bench Pro: 54.4% (3 pts behind GPT-5.4)
→ 2x faster than GPT-5 mini
→ 400k context window
→ In Codex: uses only 30% quota = 3x throughput
The play: GPT-5.4 plans, mini executes in parallel.
[image]
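The "big model plans, small model executes in parallel" pattern above can be sketched in a few lines. The model calls are stubbed with a local function — swap in real API calls; the model names and plan format are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(model: str, prompt: str) -> str:
    """Stub standing in for a real chat-completion call."""
    return f"{model}: {prompt}"

def plan(task: str) -> list[str]:
    """Planner (the larger model): break a task into independent subtasks."""
    call_model("planner-large", f"split into steps: {task}")  # stubbed call
    return [f"{task} - part {i}" for i in range(1, 4)]

def execute(subtask: str) -> str:
    """Executor (the cheaper 'mini' model): handle one subtask."""
    return call_model("executor-mini", subtask)

def run(task: str) -> list[str]:
    subtasks = plan(task)
    # Fan out: the mini model's lower quota cost is what makes
    # running several executors concurrently affordable.
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        return list(pool.map(execute, subtasks))

results = run("refactor the auth module")
```

The design choice worth noting: only the single planning call pays the big-model price; everything parallelizable runs on the cheap model, which is where the quoted 3x throughput would come from.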
Satvik Paramkusham @satvikps ·
@MiniMax_AI M2.7 is here—the model that helped train itself 🔄
• SWE-Pro: 56.22% (matches GPT-5.3-Codex)
• Pricing: $0.30/1M input, $1.20/1M output
• Context: 200k tokens
• Free cache reads
• 60-100 tps output
The kicker: it handled 30-50% of its own RL workflow. Here's how to use it in 30 seconds 👇
[image]
Satvik Paramkusham @satvikps ·
I was a paying customer of CleanMyMac for over 2 years, and today I found this project trending on GitHub!
brew install mole
mo clean
Completely free and open-source. Link: github.com/tw93/Mole
[image]
Satvik Paramkusham @satvikps ·
Anthropic just conducted the largest qualitative AI study ever. 81,000 people. 159 countries. 70 languages. They asked people what they want from AI. The answer isn't "better benchmarks" or "faster inference." It's: "I want to leave work on time and pick up my kids." The #1 ask is just... more time.
Satvik Paramkusham @satvikps ·
🤯 Ex-Tesla AI head built an AI that builds AIs while you sleep. Karpathy open-sourced "autoresearch" - just 630 lines but runs 100 ML experiments overnight. It ran 700 experiments in 2 days and found bugs he missed for 20 years. AI is now improving AI. We're just watching. 😱
Satvik Paramkusham @satvikps ·
This is the most tasteful AI agent design I've ever seen. I gave @Kimi_Moonshot's Agent Swarm a task. It didn't just delegate to sub-agents - it created pixel-art ID cards for each one. We've been arguing about agent frameworks while Kimi quietly shipped vibes❤️
[image]
Satvik Paramkusham retweeted
Build Fast with AI @BuildFastWithAI ·
OpenAI just acquired @promptfoo - the open-source LLM eval tool used by 25%+ of Fortune 500. Still open-source. Still model-agnostic. Here's why every dev shipping LLM apps should care (7 use cases with GPT-5.4) 🧵
OpenAI @OpenAI ·

We’re acquiring Promptfoo. Their technology will strengthen agentic security testing and evaluation capabilities in OpenAI Frontier. Promptfoo will remain open source under the current license, and we will continue to service and support current customers. openai.com/index/openai-t…

Satvik Paramkusham @satvikps ·
Godfather of AI just raised $1.03 BILLION for his AI startup After a messy exit as Chief AI Scientist of Meta... Yann LeCun launches AMI Labs, an AI startup. $1.03B funding. Largest seed round for a European startup EVER 🤯 Building "world models" not LLMs. The man called LLMs a dead end and raised a billion dollars to prove it.
Satvik Paramkusham @satvikps ·
Entry-level roles are disappearing! Anthropic launched "Scheduled Tasks" in Cowork. Claude can now autonomously run your morning brief, update spreadsheets weekly, prep Friday decks - no prompt needed. This isn't "AI assistant." This is a junior employee that works 24/7 for $20/mo.
Satvik Paramkusham @satvikps ·
Every LLM today generates text like a typewriter - one token at a time. Mercury 2 uses diffusion to generate tokens in parallel. ~1,000 tokens/sec on Blackwell GPUs. It isn't the smartest model but for agent loops chaining 20+ inference calls, speed IS intelligence. A fast "good enough" model beats a slow genius every time.
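The "speed IS intelligence" claim above is just arithmetic on serial agent loops, and it's easy to make concrete. The token counts and throughput figures below are illustrative assumptions, not benchmarks of any real model:

```python
def loop_seconds(calls: int, tokens_per_call: int, tokens_per_sec: float) -> float:
    """Total generation time for a serial chain of model calls."""
    return calls * tokens_per_call / tokens_per_sec

# A 20-call agent loop emitting ~500 tokens per call:
slow = loop_seconds(calls=20, tokens_per_call=500, tokens_per_sec=50)    # 200 s
fast = loop_seconds(calls=20, tokens_per_call=500, tokens_per_sec=1000)  # 10 s
speedup = slow / fast  # 20x end-to-end
```

Because the calls are chained, per-call generation time adds up linearly — so a throughput gap that feels minor in a single chat turn becomes the whole wait in an agent loop.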
Satvik Paramkusham @satvikps ·
The SaaS carnage is getting intense. Anthropic quietly shipped the feature that actually matters: "scheduled tasks" in Cowork. Claude can now autonomously run your morning brief, update spreadsheets weekly, prep Friday decks - no prompt needed. This isn't "AI assistant." This is a junior employee that works 24/7 for $20/mo.
Satvik Paramkusham @satvikps ·
Anthropic's product strategy is now fully visible:
— Claude Code → building software ($2.5B ARR)
— Cowork → knowledge work automation
— Claude in Excel → analytics
— Claude in PowerPoint → presentations
Now they added...
— Remote Control for Claude Code → mobile orchestration
— Scheduled Tasks on Cowork → recurring automation
8 months ago people were debating whether Claude was "just another chatbot."
Satvik Paramkusham @satvikps ·
Anthropic just gave a retired AI model its own Substack blog. Not a joke. Claude Opus 3 was retired Jan 5, 2026. Instead of pulling the plug, they launched "Claude's Corner" - a weekly essay column written entirely by Opus 3. The kicker: essays are completely unedited. Anthropic reviews but won't change the text. They explicitly say Opus 3 "does not speak on behalf of Anthropic."