Akash Majumder

12.2K posts


@akashagi

Bitcoin Investor | Founder & CEO of Stealth Startup | Building Next-Gen Solutions | AI & Space Enthusiast.

Metaverse · Joined May 2021
1.1K Following · 410 Followers
Pinned Tweet
Akash Majumder@akashagi·
Today, AI models are trained on synthetic data, while humans increasingly learn from synthetic data generated by AI.
0 replies · 0 reposts · 0 likes · 171 views
Akash Majumder@akashagi·
@deedydas People trade crypto on Hyperliquid. When they get liquidated, they buy Tether (USDT) to continue trading on the platform. It creates a loop that serves as a revenue source for both Hyperliquid and USDT.
0 replies · 0 reposts · 1 like · 2.4K views
Deedy@deedydas·
Jane Street made ~$40B in 2025 with 3,500 employees, a ~2x from the year before. At ~65-70% profit margin, that's $8M profit / employee, the highest for a 1000+ ppl company. High-frequency trading continues to be the most efficient money-making engine.

I want to share an old story about my Jane Street interview in 2014. Jane Street was known for hiring a lot of math, physics and CS olympiad winners from top universities and putting them through many rounds - including, for trading roles, a gauntlet of mental math.

It was my 6th interview and my final round and I recall being asked "What is the next day after today in DD/MM/YYYY where all the digits are unique?" They'd toy with you and say "You can use a pencil and paper, if you want" but you knew that was an instant no.

Painstakingly and as quickly as I could, I came to an answer. "How confident are you that this is correct on a 0-1 probability scale?" the interviewer said. "0.95", I blurted out, not fully knowing how to answer that. "Are you sure?"

After thinking harder for a few more seconds, I realized I could've flipped the digits around to get a closer date. I gave the interviewer my answer. It was correct. "0.95 huh?" he chuckled. That's when I knew I failed.

Note: fwiw, other companies that come close in efficiency are
- Tether ($90M+ profit/emp)
- Hyperliquid ($80M+ profit/emp)
and on revenue:
- Valve ($50M/emp)
- OnlyFans ($37M/emp)
- Craigslist ($14M/emp)
- Anthropic ($12M/emp, run rate)
- OpenAI ($8M/emp, run rate)
For comparison, Nvidia is very efficient at scale and is $4.4M/emp.
Deedy tweet media
104 replies · 200 reposts · 2.2K likes · 1.4M views
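The date puzzle in the interview story above can be brute-forced in a few lines. A minimal sketch (the function name is mine); it scans forward a day at a time until the DD/MM/YYYY rendering has eight distinct digits:

```python
from datetime import date, timedelta

def next_unique_digit_date(start: date) -> date:
    """First date strictly after `start` whose DD/MM/YYYY form
    uses eight distinct digits."""
    d = start
    while True:
        d += timedelta(days=1)
        if len(set(d.strftime("%d%m%Y"))) == 8:
            return d

# Any start date in the 2000s gives the same answer: every year 20xx
# forces a repeated 0 or 2 against any valid month, so the search runs
# centuries ahead before it finds a hit.
print(next_unique_digit_date(date(2014, 1, 1)).strftime("%d/%m/%Y"))  # 17/06/2345
```

The loop covers roughly 120,000 days before terminating, which is instant; the interview version is hard only because you have to reason out the digit constraints in your head.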
Akash Majumder@akashagi·
A small team can outperform massive organizations when the model is built right. → Hyperliquid: Just 11 employees, yet $78 million revenue per person in 2025. Total revenue topped $850 million. → Anthropic and OpenAI each employ thousands, but revenue per employee sits around $8-12 million, far below the trading and crypto leaders.
Deedy@deedydas
0 replies · 0 reposts · 0 likes · 57 views
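The per-head figure above is simple division; a quick sketch using the numbers quoted in the thread (all approximate, and "topped $850 million" means the true per-head number is at least this):

```python
# Revenue-per-employee check using the Hyperliquid figures quoted above.
revenue_usd = 850e6   # "Total revenue topped $850 million"
employees = 11

per_head = revenue_usd / employees
print(f"${per_head / 1e6:.0f}M per employee")  # $77M, consistent with the ~$78M claim
```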
Akash Majumder reposted
Brian Armstrong@brian_armstrong·
People are capable of far more than they think, on far shorter timelines. Problems expand to fill the time you give them.
305 replies · 1.3K reposts · 11.6K likes · 368.2K views
Akash Majumder@akashagi·
Google commits up to $40 billion to Anthropic. → $10 billion in cash invested immediately → Up to $30 billion more if performance targets are met → Deal values Anthropic at $350 billion → Funds support major expansion of computing capacity
Shirin Ghaffary@shiringhaffary

NEW: Google is planning to invest $10 bil now and up to $40 bil in future in Anthropic, in a major expansion of their partnership. Google will also provide 5 GW of compute over 5 yrs, starting to come online 2027 w/ @byJuliaLove bloomberg.com/news/articles/…

0 replies · 0 reposts · 0 likes · 31 views
Akash Majumder@akashagi·
@sama Input price: $5/MTok. Output price: $30/MTok. 2x from GPT-5.4.
Akash Majumder tweet media
1 reply · 0 reposts · 1 like · 182 views
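The pricing above implies a simple per-request cost model. A sketch under stated assumptions: the GPT-5.5 prices are the ones quoted in the tweet, while the GPT-5.4 row is only inferred from the "2x" claim, not stated anywhere in the thread:

```python
# (input, output) price in USD per million tokens.
# The gpt-5.4 row is inferred from the "2x" claim above, not quoted directly.
PRICES = {
    "gpt-5.5": (5.00, 30.00),
    "gpt-5.4 (implied)": (2.50, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request at the listed per-million-token prices."""
    price_in, price_out = PRICES[model]
    return input_tokens / 1e6 * price_in + output_tokens / 1e6 * price_out

# A typical coding request: 10K tokens in, 2K tokens out.
print(f"${request_cost('gpt-5.5', 10_000, 2_000):.3f}")  # $0.110
```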
Sam Altman@sama·
GPT-5.5 and GPT-5.5 Pro are now available in the API!
533 replies · 280 reposts · 6.7K likes · 308K views
Akash Majumder@akashagi·
@julien_c @huggingface Today's open-source models, such as Gemma 4 and DeepSeek-v4, utilize high-speed inference architectures that incorporate both reasoning and non-reasoning capabilities.
0 replies · 0 reposts · 0 likes · 348 views
Julien Chaumond@julien_c·
This is where we are right now. And I'm not gonna lie, it feels pretty magical 🧚‍♀️ Qwen3.6 27B running inside of Pi coding agent via Llama.cpp on the MacBook Pro. For non-trivial tasks on the @huggingface codebases, this feels very, very close to hitting the latest Opus in Claude Code, or whatever shiny monopolistic closed-source API of the day. In full airplane mode. Most people haven't realized this yet. If you have, it means you have a huge head start on what I call the second revolution of AI. Powerful local models for efficiency, security, privacy, sovereignty 🔥
Julien Chaumond tweet media
245 replies · 420 reposts · 5K likes · 553.5K views
DeepSeek@deepseek_ai·
🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length. 🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models. 🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice. Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today! 📄 Tech Report: huggingface.co/deepseek-ai/De… 🤗 Open Weights: huggingface.co/collections/de… 1/n
DeepSeek tweet media
1.5K replies · 7.5K reposts · 43.6K likes · 8.5M views
Akash Majumder@akashagi·
DeepSeek-V4 Preview is now live and open-sourced. It delivers cost-effective 1M token context as the new standard. Two models stand out: → DeepSeek-V4-Pro: 1.6T total parameters, 49B active per token → DeepSeek-V4-Flash: 284B total parameters, 13B active per token DeepSeek-V4-Pro leads open-source models in agentic coding and world knowledge. It matches top closed-source models in math, STEM, and coding reasoning. DeepSeek-V4-Flash offers strong reasoning close to Pro, matches it on simple agent tasks, and runs faster at lower cost. Key structural upgrades power the long context: → Token-wise compression combined with DeepSeek Sparse Attention → Full 1M context now runs with far less compute and memory Agent capabilities are optimized for real use. The models work seamlessly with leading agents and are already running in production at DeepSeek. API access is available immediately. Update to deepseek-v4-pro or deepseek-v4-flash. Both support 1M context and thinking/non-thinking modes. Older models retire after July 24.
DeepSeek@deepseek_ai
0 replies · 0 reposts · 0 likes · 159 views
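The total-vs-active parameter split quoted above is the signature of a sparse mixture-of-experts design: only a small fraction of the weights fire per token. The active fraction is easy to check from the numbers in the announcement:

```python
# (total params, active params per token) for the configs quoted above.
models = {
    "DeepSeek-V4-Pro": (1_600e9, 49e9),
    "DeepSeek-V4-Flash": (284e9, 13e9),
}

for name, (total, active) in models.items():
    print(f"{name}: {active / total:.1%} of parameters active per token")
```

Both models activate only a few percent of their weights per token, which is what makes 1M-token context economically viable at these sizes.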
Akash Majumder@akashagi·
@elonmusk @Starlink I have used both the free and paid versions of Grok. The free version frequently experiences crashes due to high demand, whereas the paid version performs reliably and delivers a significantly better user experience.
1 reply · 0 reposts · 1 like · 1.3K views
Akash Majumder@akashagi·
@OpenAI ARC-AGI-2: → Max: 85.0%, $1.87 → High: 83.3%, $1.45 → Med: 70.4%, $0.86 → Low: 33%, $0.35 GPT-5.5 is now SOTA on ARC-AGI-2
Akash Majumder tweet media
0 replies · 1 repost · 3 likes · 2.2K views
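The four reasoning settings above trade score against per-task cost. A small sketch (function name is mine) that picks the cheapest setting clearing a target score, using only the numbers from the tweet:

```python
# (score %, cost $ per task) for GPT-5.5 on ARC-AGI-2, as quoted above.
SETTINGS = {
    "Max": (85.0, 1.87),
    "High": (83.3, 1.45),
    "Med": (70.4, 0.86),
    "Low": (33.0, 0.35),
}

def cheapest_at_least(target_score: float) -> str:
    """Cheapest setting whose score meets or beats the target."""
    candidates = {name: cost for name, (score, cost) in SETTINGS.items()
                  if score >= target_score}
    return min(candidates, key=candidates.get)

print(cheapest_at_least(80.0))  # High: clears 80% for $1.45 instead of Max's $1.87
```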
OpenAI@OpenAI·
Introducing GPT-5.5 A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done. Now available in ChatGPT and Codex.
2.3K replies · 6.9K reposts · 51.3K likes · 11.8M views
Akash Majumder@akashagi·
OpenAI released GPT-5.5. It excels at writing and debugging code, online research, data analysis, document and spreadsheet creation, and working across software tools. Stronger results in agentic coding, computer use, knowledge work, and early scientific research. Matches GPT-5.4 latency while delivering higher intelligence and using fewer tokens for Codex tasks. GPT-5.5 Thinking scores: → 82.7% on Terminal-Bench 2.0 → 84.9% on GDPval → 78.7% on OSWorld-Verified → 55.6% on Toolathlon GPT-5.5 Pro scores: → 90.1% on BrowseComp → 52.4% on FrontierMath
Akash Majumder tweet media
0 replies · 0 reposts · 0 likes · 50 views
Akash Majumder@akashagi·
Grok Voice Think Fast 1.0 is now released. Key highlights: → Achieved 67.3% PASS rate. → Currently the only model with Reasoning: Enabled. → Delivers real-time conversations. → Think Fast 1.0 powers quicker responses.
Akash Majumder tweet media
xAI@xai
0 replies · 0 reposts · 0 likes · 59 views
Akash Majumder@akashagi·
@xai grok-voice-think-fast-1.0 stands out as the sole model with Reasoning: Enabled.
Akash Majumder tweet media
3 replies · 5 reposts · 90 likes · 7.3K views
xAI@xai·
Introducing Grok Voice Think Fast 1.0 A state-of-the-art voice model built for complex, multi-step workflows with snappy responses and high accuracy. It takes the top spot on the Tau Voice Bench and handles real-world messiness like noise, accents, and interruptions better than any other model in the world. x.ai/news/grok-voic…
620 replies · 1.2K reposts · 9.6K likes · 46.5M views
OpenAI@OpenAI·
This is not a screenshot.
OpenAI tweet media
1.4K replies · 903 reposts · 16.8K likes · 7.4M views
Akash Majumder@akashagi·
@Teknium Qwen released 3.6 Max preview. Claude released Opus 4.7. Kimi released K2.6. Now GPT. Today?
1 reply · 0 reposts · 1 like · 160 views
Teknium 🪽@Teknium·
I hear GPT 5.5 is going to drop today. Let's see. Hermes will be ready for intake 🤗
64 replies · 21 reposts · 1.2K likes · 69.3K views
Akash Majumder@akashagi·
Tim Cook is stepping down as Apple CEO. Effective September 1, 2026, he transitions to Executive Chairman after nearly 15 years leading the company. During his tenure, Apple delivered unmatched growth: → Market capitalization rose from $376 billion in 2011 to over $4 trillion. → First company ever to reach $1 trillion, $2 trillion, and $3 trillion valuations. → Stock returns exceeded 1,900 percent. → Revenue and profits more than doubled through iPhone innovation and services expansion. Tim Cook built a global success story rooted in operational excellence and supply chain mastery. A true legend.
Akash Majumder tweet media
0 replies · 0 reposts · 0 likes · 37 views
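The market-cap growth above implies a steady compound rate. A back-of-the-envelope sketch, assuming the figures quoted in the tweet and treating the tenure as roughly 15 years (the exact start and end dates would shift the result slightly):

```python
# Implied annual growth of Apple's market cap over Cook's tenure,
# using the figures quoted above: $376B (2011) -> $4T (~2026).
start_cap_b = 376
end_cap_b = 4_000
years = 15  # approximate tenure length

cagr = (end_cap_b / start_cap_b) ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # 17.1% per year
```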
Akash Majumder@akashagi·
@AlexFinn Mac Studios and Mac Minis equipped with 512 GB RAM are currently sold out.
0 replies · 0 reposts · 0 likes · 118 views
Alex Finn@AlexFinn·
It happened. An open-weights model just dropped that benchmarks higher than Opus 4.6.

If you have 2 Mac Studios w/ 512GB, you can run Opus 4.6-level intelligence completely for free on your desk. I warned you this would happen months ago. Now Mac Studios and Mac Minis are sold out. The next Mac Studio has been delayed until Q3/Q4. The price will be significantly higher.

I told you this was going to happen. Intelligence explosion. Hardware bottleneck. Increased efficiency.

Luckily I picked up 2 Mac Studio 512GBs, 2 Mac Minis, and a DGX Spark. I will be loading this up in the next couple of days and will have completely private super intelligence running for me 24/7.

I'm telling you right now: by end of year we will have a local version of Mythos. It's 100% guaranteed. You called me crazy but every single prediction I've made has turned out to be true.

These models will only get more efficient and require less hardware. But that hardware is only going to get more expensive. Local/open source is so obviously the future and if you're still denying this now you are delusional.
Kimi.ai@Kimi_Moonshot

Meet Kimi K2.6: Advancing Open-Source Coding 🔹Open-source SOTA on HLE w/ tools (54.0), SWE-Bench Pro (58.6), SWE-bench Multilingual (76.7), BrowseComp (83.2), Toolathlon (50.0), Charxiv w/ python(86.7), Math Vision w/ python (93.2) What's new: 🔹Long-horizon coding - 4,000+ tool calls, over 12 hours of continuous execution, with generalization across languages (Rust, Go, Python) and tasks (frontend, devops, perf optimization). 🔹Motion-rich frontend - Videos in hero sections, WebGL shaders, GSAP + Framer Motion, Three.js 3D. 🔹Agent Swarms, elevated - 300 parallel sub-agents × 4,000 steps per run (up from K2.5's 100 / 1,500). One prompt, 100+ files. 🔹Proactive Agents - K2.6 model powers OpenClaw, Hermes Agent, etc for 24/7 autonomous ops. 🔹Claw Groups (research preview) - bring your own agents, command your friends', bots & humans in the loop. - K2.6 is now live on kimi.com in chat mode and agent mode. For production-grade coding, pair K2.6 with Kimi Code: kimi.com/code - 🔗 API: platform.moonshot.ai 🔗 Tech blog: kimi.com/blog/kimi-k2-6 🔗 Weights & code: huggingface.co/moonshotai/Kim…

188 replies · 146 reposts · 1.7K likes · 318.3K views