Kiran Mohan

853 posts

Kiran Mohan

@KMohan40821

Dad, Love writing, deep thinking, consciousness, philosophy, psychology and science. Chasing Exponential Growth. Definite Optimist.

Warsaw, Poland · Joined September 2024
298 Following · 112 Followers
Pinned Tweet
Kiran Mohan
Kiran Mohan@KMohan40821·
In the last few years, and especially the last few months, two thoughts have hit me simultaneously every morning: I need to keep up with AI. There's no point keeping up with AI. I spent a while thinking about why both feel true at once and what it means for what we should actually be learning right now. Wrote it up. Honest about where I'm uncertain. Full piece linked below. Curious what the split looks like in your own work.
Kiran Mohan@KMohan40821

x.com/i/article/2035…

0
0
0
40
Tech Dev Notes
Tech Dev Notes@techdevnotes·
What's our plan if Grok Build model sucks?
70
2
120
38.8K
Jesse
Jesse@jesse_vermeulen·
honest question: what do people do during the 5-10 min while Claude is running?
2K
58
2.4K
457.8K
Mickal
Mickal@mickal·
@MiniMax_AI Amazing! I asked my @cifer_security agent to create a song for Cifer after installing the MMX-CLI and it made this.
5
4
27
5.6K
MiniMax (official)
MiniMax (official)@MiniMax_AI·
Introducing MMX-CLI — our first piece of infrastructure built not for humans, but for Agents. Your Agent can read, think, and write. But ask it to sing, paint, or show you a world it's never seen — and it falls silent. Not because it doesn't understand, but because it has no mouth, no hands, no camera. Today, that changes. MMX-CLI gives every Agent seven new senses — image, video, voice, music, vision, search, conversation — powered by MiniMax's full-modal stack, today's SOTA across mainstream omni-modal models. One command: mmx. Agent-native I/O. Zero MCP glue. Runs on your existing Token Plan.
Two lines to give your Agent a voice:
npx skills add MiniMax-AI/cli -y -g
npm install -g mmx-cli
Then tell it: "you have mmx commands available." It'll learn the rest.
Github → github.com/MiniMax-AI/cli
Token Plan: platform.minimax.io/subscribe/toke…
MiniMax (official) tweet media
115
368
3.2K
373.3K
Kiran Mohan
Kiran Mohan@KMohan40821·
I can vouch from personal experience at my organization. Karpathy's main point: there's a big split in what people think about AI right now. Some people think it's still kinda dumb and glitchy; others think it's amazingly powerful. Both are kinda right; it just depends on which AI they're using and how they're using it.

Group 1: Casual users (the "it's not that great" crowd). They tried the free version of ChatGPT last year (or even older stuff). They saw it make silly mistakes, "hallucinate" (make up facts), or mess up simple questions. Example: viral videos of the free Advanced Voice Mode failing easy stuff like "should I drive or walk?" Their view: AI is overhyped and still broken for normal everyday tasks.

Group 2: Heavy pro users (the "this is mind-blowing" crowd). They pay for the latest top-tier tools (like OpenAI Codex or Claude Code) and use them professionally for coding, math, or research. What they see: the AI can now work on its own for hours, fixing huge code problems that used to take a human days or weeks. Their view: AI progress feels insane and even a little scary ("AI psychosis").

Why the difference? AI got way better in specific technical areas (like coding) because companies make the most money there (big business value), and because it's easy to test if the AI is right (e.g., does the code run? Yes/no). Everyday stuff (writing emails, giving advice, searching) didn't improve as much; it's harder to measure "good" automatically.

Bottom line: free/old AI still fumbles simple things, so people laugh at it on social media. Paid cutting-edge AI crushes hard professional tasks, so experts are blown away. The two groups are basically talking about completely different tools, so they argue past each other. In short: AI is uneven. Super weak in some spots, unbelievably strong in others, and most people only see one side.
0
0
0
5
Andrej Karpathy
Andrej Karpathy@karpathy·
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state of the art agentic models of this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along.

So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work.

It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of two properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
staysaasy@staysaasy

The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

1.1K
2.4K
20K
4.1M
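Karpathy's point about verifiable rewards can be made concrete with a toy sketch. This is my own minimal illustration, not anything from the thread: a binary reward that runs a candidate program against its unit tests and returns 1.0 on pass, 0.0 on fail. That kind of automatically checkable signal is exactly what makes coding easy to hillclimb with reinforcement learning, while "good writing" offers no equivalent check.

```python
def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Binary, automatically verifiable reward: 1.0 if the candidate
    passes its tests, 0.0 otherwise. No human judgment is needed,
    which is what makes domains like coding amenable to RL training."""
    env = {}
    try:
        exec(candidate_code, env)  # define the candidate solution
        exec(test_code, env)       # run the asserts; they raise on failure
        return 1.0
    except Exception:
        return 0.0
```

Contrast with grading an essay: there is no `exec` for prose, so the reward signal is far noisier and more expensive to produce.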
Alex Finn
Alex Finn@AlexFinn·
A month ago I went on Moonshots and described my vision of the future to everyone on the podcast. @alexwg immediately called me after and said @021T_vc wanted to invest so we could turn the vision into a reality. And today, here we are.
17
0
136
15.7K
Alex Finn
Alex Finn@AlexFinn·
Biggest announcement of my life: I have raised pre-seed funding from 021T, @alexwg, and @devontriplett21 to build an AI agent that will change the world.

The biggest issue with AI is that it is creating incredible value, but only for a small group of people. Most people hate AI and don't use it. I have built Henry Intelligent Machines (HIM) to solve this.

HIM is a personal swarm of AI agents autonomously creating economic value for you 24/7. Right now, as we speak, HIM is collecting data across thousands of websites autonomously, 24/7/365, hunting for challenges to solve at all times.

When you use Henry, he will deeply research you and get to know you. Then, based on the thousands of opportunities in its database, it will find the value-generating opportunities that most closely match your interests, skills, assets, and expertise. Henry and its swarm will then build those micro-businesses out for you. You will have complete control over the swarm: reviewing and approving all work, and editing where you find appropriate. You give Henry a budget; then it hunts and autonomously creates value.

Say you have expertise in vibe-coding tools and Henry discovers there are no vibe-coding guides on Gumroad. It will take your expertise, build drafts for a guide, run it by you, post with your approval, then use your budget to get customers. Say you're into AI and speak Portuguese: Henry will go through the Portuguese AI education market, see there are no educational products in that language, then create a full AI educational business in Portuguese.

Most people hate AI. This is because they get zero value from it, see their friends getting laid off, and become scared. HIM is the antidote to this. HIM allows ANYONE to get value from AI, and to access the trillions of dollars of value that are up for grabs in the new AI world.

To ensure Henry creates value and not slop, this will be an extremely slow rollout. We will be letting people into HIM one by one, working with them hands-on to ensure Henry only builds real value for them, then expanding from there. If you'd like to be one of the early users of Henry, feel free to sign up at the link below. Forward.
402
93
2K
183.8K
Kiran Mohan
Kiran Mohan@KMohan40821·
@alexwg Excellent initiative. Congratulations to you and @AlexFinn . I'm looking forward to broadening my scope with HIM.
1
0
0
329
Tyler
Tyler@rezoundous·
Dear Anthropic, please fix the Claude Code usage limit bug asap. My $100 plan feels like a $20 plan for almost a week now.
326
132
2.9K
386K
CapCut
CapCut@capcutapp·
Today we are expanding Dreamina Seedance 2.0 to more users worldwide within CapCut - including Europe, Canada, Australia, New Zealand, South Korea, SEA, MENA, LATAM and Africa. Plus, we’ve provided everyone with one free trial of Dreamina Seedance 2.0 across CapCut’s app, desktop and web. Enjoy creating now! Here’s a quick guide to help you explore different CapCut features that support Dreamina Seedance 2.0: bytedance.larkoffice.com/wiki/Fdz8wMypw… RT+Comment in 9hr to get extra 1000 credit in your DM!
1.4K
1.1K
2.1K
289.7K
Kiran Mohan
Kiran Mohan@KMohan40821·
@beffjezos How do you optimize token usage with Opus 4.6 on Claude Code? I'm very curious.
0
0
0
223
Kiran Mohan
Kiran Mohan@KMohan40821·
@jennyzhangzt I can't even begin to fathom how crazy this is, even though in the singularity this is expected.
0
0
0
76
Jenny Zhang
Jenny Zhang@jennyzhangzt·
Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents: self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving. We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution.

Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhao (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).
Jenny Zhang tweet media
155
645
3.6K
494.4K
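The "improving how it improves" idea in the Hyperagents post has a much simpler classical ancestor that may help build intuition: self-adaptive mutation in evolution strategies, where the search mutates both the solution and its own mutation step size. The sketch below is a loose analogy of my own, not DGM-H itself; the function name and parameters are illustrative.

```python
import random

def self_adaptive_search(score, x0=0.0, sigma0=1.0, steps=500, seed=1):
    """Toy analogue of meta-level self-improvement: a (1+1)-style
    evolution strategy that mutates both the solution x (task level)
    and its own mutation step size sigma (meta level), so the search
    procedure itself evolves along with the solutions it produces."""
    rng = random.Random(seed)
    x, sigma, best = x0, sigma0, score(x0)
    for _ in range(steps):
        # meta-level move: perturb how we improve (the step size)
        new_sigma = sigma * (2 ** rng.uniform(-1, 1))
        # task-level move: perturb the solution using the new step size
        new_x = x + rng.gauss(0, new_sigma)
        new_score = score(new_x)
        if new_score >= best:  # keep solution AND procedure on success
            x, sigma, best = new_x, new_sigma, new_score
    return x, sigma, best
```

The key difference DGM-H claims over this toy is scope: here only a scalar step size is meta-editable, whereas in their framework the entire improvement procedure (memory, tracking, generation logic) is subject to evolution.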
Kiran Mohan retweeted
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
You've got 8 billion potential customers on Earth, BUT... in 2026, only ~5.3 billion have internet access. That means ~2.7 billion people still can't access the exponential tools we talk about daily: AI, telemedicine, online education, digital banking.

The gap: the missing ~2.7 billion represent the largest untapped market in human history. Starlink alone now has 10,000+ satellites in orbit (it just crossed that milestone yesterday). When connectivity becomes ubiquitous in the next 3-4 years, we're not just adding users; we're adding builders, creators, entrepreneurs.

The implication: the next Einstein, the next Elon, the next medical breakthrough might be sitting in a village without Wi-Fi right now. Abundance doesn't just mean "more for current participants"; it means unlocking latent genius at global scale.
490
528
2.6K
521.4K
Kiran Mohan
Kiran Mohan@KMohan40821·
@elonmusk @Math_files @ToKTeacher Would love to hear your take on this. My impression is that this is not a hard-to-vary explanation of the effects arising from the quantum mechanical nature of reality.
1
0
2
83
Elon Musk
Elon Musk@elonmusk·
@Math_files This perfectly matches simulation theory. Objects in a video game are only rendered when observed and there is a minimum voxel size.
367
120
1.6K
89.4K
Kiran Mohan
Kiran Mohan@KMohan40821·
@rohanpaul_ai I think there is a chance this is just a phase. Once we have more advanced Openclaws, or a personal computer, or a Jarvis, we'll let them manage everything for us, and we'll get to do more fun/creative stuff with deep focus.
0
0
0
39
Rohan Paul
Rohan Paul@rohanpaul_ai·
New Harvard Business Review research reveals that excessive interaction with AI is causing a specific type of mental exhaustion (or "AI brain fry"), which is particularly hitting high performers who use the tech to push past their normal limits.

A survey of 1,500 workers reveals that AI is intensifying workloads rather than reducing them, leading to a new form of mental fog. While AI is generally supposed to lighten the load, it often forces users into constant task-switching and intense oversight that actually clutters the mind. This mental static happens because you aren't just doing your job anymore; you are managing multiple digital agents and double-checking their work, which creates a massive cognitive burden.

The study found that 14% of full-time workers already feel this fog, with the highest impact seen in technical fields like software development, IT, and finance. High oversight is the biggest culprit, as supervising multiple AI outputs leads to a 12% increase in mental fatigue and a 33% jump in decision fatigue.

This isn't just a personal health issue; it directly impacts companies, because exhausted employees are 10% more likely to quit. For firms worth many billions, this decision paralysis can lead to millions of dollars in lost value due to poor choices or total inaction. Essentially, we are working harder to manage our tools than we are to solve the actual problems they were meant to fix.

hbr.org/2026/03/when-using-ai-leads-to-brain-fry
Rohan Paul tweet media
146
367
1.5K
568.8K
Paul Rossi, anti-maxxer
Paul Rossi, anti-maxxer@pauldrossi·
@a16z @pmarca What about the trap of overreacting to stimuli that seem significant but aren't? Then you just make faster mistakes.
2
0
18
2K
a16z
a16z@a16z·
"Speed wins." "You have to be willing to commit to being fast. You can't have long bureaucratic processes. You can't have a risk-averse posture."

@pmarca explains the OODA loop, and why the fastest operator controls the narrative in business, media, and politics:

"There's a framework called the OODA loop, originally developed for fighter pilots and later for broader military strategy. It stands for observe, orient, decide, act. It's basically the decision-making cycle. If speed is the thing that matters, then the person who gets through that cycle the fastest is the one who's going to win."

"If you can have a sustainably faster OODA loop processing cycle than the next guy, think about what happens… You operate and make a decision within an hour. The other guy is still inside his own OODA loop when you make your decision. He's only halfway through his process and now has to start over. You've changed the parameters of what's going on."

"This is also a big explanation for what's happened in traditional media. The New York Times has its own OODA loop, and it's like 24 hours to go through its process."
144
528
4.2K
344.1K