prasad kompalli

711 posts

@pkompalli

tech and start-ups (SAP, Indus Bionics, myntra, MFine, Oncourse)

Bangalore · Joined February 2008
585 Following · 881 Followers
prasad kompalli @pkompalli
Loved this write-up about my all-time favorite movie: substack.com/@pranatik/note…
0 replies · 0 reposts · 1 like · 42 views
prasad kompalli @pkompalli
@fabianstelzer wow ... we are not very far from -> every lecture, style, form-factor, lecturer, topic is just a few mins of vibe code / config of tools. Would love to test and be an early adopter.
0 replies · 0 reposts · 0 likes · 42 views
fabian @fabianstelzer
You are insufficiently astonished by what Claude can do with the right scaffolding. Here I asked it to generate an influencer video explaining LDL cholesterol and statins on a white board.
216 replies · 201 reposts · 3.2K likes · 460.7K views
prasad kompalli @pkompalli
The notion of *unearned wisdom* 👌 good to find a name for something we see often but were never able to name …
Carlos E. Perez @IntuitMachine

Ever wonder why ChatGPT can write a sonnet about quantum physics but sometimes fails at counting letters? There's a fascinating psychological explanation – and it starts with a warning Carl Jung wrote decades ago.

You know how you can't just read about riding a bike and suddenly be good at it? You have to fall a few times. Build muscle memory. Go from wobbly to confident. Jung called knowledge without that struggle "unearned wisdom." And he warned: beware of it.

Here's what's wild: this 20th-century insight might explain one of AI's biggest unsolved problems. (Stick with me – this gets interesting.)

Think about how you actually learned math. First addition, then multiplication, then algebra. Each builds on the last. You can't skip steps. Your brain literally organized itself differently because of that sequence. It created what researchers call "Unified Factored Representations" – modular, adaptable knowledge.

Now imagine learning addition, calculus, and differential equations all at the same exact moment. Sounds ridiculous, right? That's basically how we train AI. Modern language models are "epistemological vacuum cleaners" (love that phrase). They ingest millions of documents simultaneously – nursery rhymes alongside PhD dissertations, basic arithmetic alongside advanced proofs. Everything flattened into one massive parallel gulp.

Here's where it gets problematic: When you learn naturally, your knowledge is modular. You can update your understanding of, say, cooking techniques without forgetting how to ride a bike. But when AI learns everything at once? It creates what researchers call "Fractured Entangled Representations."

Imagine dropping all the buildings of a city from the sky at once instead of building them street by street. Sure, you have a city. But try to renovate one building? The whole thing might collapse. Everything's tangled with everything else. That's the AI's knowledge structure.

This explains SO much:
- Why AI hallucinates facts it should "know"
- Why updating one capability can break another (catastrophic forgetting)
- Why these systems can't do genuine compositional reasoning
- Why they're bad at truly novel problems
The knowledge was never earned. It was dumped.

The hidden cost is what researchers call "destroyed evolvability." You can't easily improve these systems without retraining from scratch. The representations aren't modular enough. Touch one thing, ripple effects everywhere. It's intellectual spaghetti code.

Think about a master carpenter. She didn't read every woodworking book at once. She:
- Learned basic cuts
- Struggled with joints
- Felt wood grain
- Made mistakes
- Developed intuition
Her knowledge compresses elegantly. Yours does too. AI's knowledge? Bloated and brittle.

Here's the part that keeps me up at night: We're building systems with vast knowledge but no foundation. They're like buildings with impressive facades but no load-bearing walls. Jung intuited this – knowledge without the struggle of earning it lacks depth.

But wait – this isn't just philosophical musing. Some researchers are exploring solutions:
- Curriculum learning (carefully sequenced training)
- Adversarial methods (creating productive struggle)
- Meta-learning (optimizing for future adaptability)
Teaching AI like we'd teach a child, not filling a database.

Next time you interact with AI, notice:
- When does it feel shallow vs. insightful?
- When does it confidently claim something false?
- When can't it adapt a concept to a new context?
You're probably seeing the fingerprints of unearned knowledge.

The deeper question: Can we create artificial wisdom, or just artificial knowledge? Wisdom requires synthesis, struggle, developmental stages. You can't speedrun it. Makes you wonder if we're optimizing for the wrong thing entirely. The price of unearned knowledge isn't what the AI fails to learn. It's what it becomes incapable of learning in the future.

Jung saw this in human psychology. We're rediscovering it in machine learning. Some lessons, it turns out, are timeless.

0 replies · 0 reposts · 0 likes · 79 views
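The thread above names "curriculum learning (carefully sequenced training)" as one proposed fix. A minimal sketch of that idea, not from the thread itself: order training examples easy-to-hard before feeding them to a learner, rather than in one flat shuffled gulp. The word-count difficulty proxy here is a toy assumption; real curricula use model- or task-based difficulty scores.

```python
# Toy sketch of curriculum learning: sequence examples by difficulty.
# ASSUMPTION: word count is a stand-in for difficulty; a real system
# would use a task-specific or model-based score.

def difficulty(example: str) -> int:
    """Crude difficulty proxy: more words = harder."""
    return len(example.split())

def curriculum(dataset: list[str]) -> list[str]:
    """Return the dataset ordered easy-to-hard."""
    return sorted(dataset, key=difficulty)

data = [
    "prove Fermat's little theorem for any prime p",
    "2 + 2",
    "solve x + 1 = 3",
]
print(curriculum(data))
# → ['2 + 2', 'solve x + 1 = 3', "prove Fermat's little theorem for any prime p"]
```

The contrast with the thread's "parallel gulp" is just the ordering: same data, but each batch builds on what the learner has already absorbed.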
prasad kompalli @pkompalli
@kmr_dilip incidentally came across this interesting piece talking about US's *struggle with stories* x.com/IntuitMachine/…
Carlos E. Perez @IntuitMachine

I keep seeing the same weird pattern everywhere: Western leaders constantly "surprised" by moves China telegraphed years in advance. DeepSeek AI. Electric vehicle dominance. 5G infrastructure. Solar panel manufacturing. Every time: shock, confusion, scrambling to respond. Why? 🧵

Here's the thing that finally clicked for me: The West and China are literally running different cognitive operating systems. We tell stories. They make maps. And that difference explains almost everything about why we're constantly playing catch-up.

Think about how you normally explain something important... You probably tell a story, right? "Here's what happened, here's why, here's what it means." Beginning, middle, end. Heroes and villains. Cause and effect. That's how Western minds work. We're storytelling machines.

Our entire political system runs on stories:
- "Make America Great Again"
- "Build Back Better"
- "The American Dream"
Even our economic theories are stories. The "invisible hand" isn't a mechanism you can diagram - it's a narrative device.

But here's where it gets interesting... Stories are TERRIBLE navigation tools. Imagine trying to drive across the country using only stories about other people's road trips. No GPS, no maps, just "Well, Bob said he turned left after the giant cow statue..."

Now imagine you're competing against someone using actual maps. While you're trying to remember if Bob mentioned construction on I-80, they're looking at real-time traffic data, weather patterns, and calculating optimal routes. Who wins that race?

Have you ever noticed how China announces their plans publicly, years in advance, yet somehow still achieves "strategic surprise"? They literally told everyone they were going all-in on electric vehicles in 2009. On AI dominance by 2030. On solar manufacturing in the 2000s. We heard the words but couldn't process the information.

Why? Because we were waiting for a STORY. A narrative that made sense. "China is doing X because Y, which means Z."

But they weren't telling stories. They were showing maps.

Here's a mental experiment: Imagine two kids playing chess. One narrates every move: "My brave knight ventures forth to challenge your bishop!" The other just sees positions: "Knight to K5 threatens three pieces and opens the center." Who's going to win? This isn't about intelligence. It's about frameworks.

When Western analysts look at China, they ask: "What's their story? What do they believe? What's their ideology?" When Chinese strategists look at the West, they ask: "What's their position? Where are they moving? What are their dependencies?"

The map shows things stories hide. A map reveals that electric vehicles require battery technology at a certain evolution point. That AI depends on computational commoditization. That solar dominance requires polysilicon control. These aren't narrative connections - they're structural dependencies.

But that's not the interesting part... Maps let you PRE-POSITION. While we're telling stories about "the future of transportation," China was investing in lithium processing facilities. While we debated the ethics of AI, they were stockpiling GPUs and training data.

The Five-Year Plans everyone mocks? They're not stories about China's future. They're maps. Each province provides positional data: where they are, where they're heading, what resources they need. It's GPS coordinates, not narrative arcs.

Think about how you normally engage with news about China... You probably look for the story angle, right? "China cracks down on tech" or "China opens up to foreign investment." But these aren't plot points in a story. They're positional adjustments on a map.

Here's where it clicks: Stories lock you into timeframes. A political story needs resolution within an election cycle. A corporate story needs quarterly closure. Maps work across ALL timescales simultaneously. You can see the 40-year trajectory while navigating today's terrain. Then something weird happened...

I realized this explains our entire strategic confusion. We keep trying to have narrative contests with someone playing a positional game. We're having a poetry slam while they're playing chess.

Trade wars? We see them as stories about "winning" and "losing." China sees positions on a supply chain map. Tech competition? We narrate it as an "AI race." China maps computational resources, data advantages, and talent flows as distinct coordinates.

Here's what this means for you: Next time you read analysis about China (or any strategic competitor), ask yourself: Is this telling me a story, or showing me positions? Is this explaining what happened, or revealing what's possible?

Try this experiment: Take any news about China and strip away the narrative. Just look at:
- What positions are they taking?
- What capabilities are they building?
- What dependencies are they securing?
The strategy becomes visible once you stop looking for the story.

Even better - watch how Western responses reveal story-thinking: Tariffs? That's a narrative tool ("protecting workers"). Sanctions? Story device ("punishing bad behavior"). Technology bans? Plot twist ("denying them victory"). None of these change positional reality.

Now you might think: "But stories matter! They inspire people, create meaning, build cohesion!" Absolutely true. Stories create meaning. Maps create advantage. Stories explain why. Maps show how. Stories comfort. Maps navigate. We need both. But we only have one.

The real kicker? One Western strategist figured this out. Simon Wardley @swardley created a mapping methodology for strategy. His inspiration? Sun Tzu's Art of War. Ancient Chinese strategic text. When Westerners independently develop strategic tools, they converge on Chinese principles.

Makes you wonder what else we're missing because we're looking for stories instead of patterns... Are we misreading Russia? India? The entire global tech landscape?

How many "surprises" are actually positions clearly marked on maps we don't know how to read?

The choice ahead isn't abandoning stories - they're core to who we are. It's about adding cartographic capability. Learning to see positions as well as narratives. Understanding structure alongside meaning. Because in a world of increasing complexity, navigation beats narration every time.

Changes how I think about every "China shocked the world" headline. They didn't shock anyone. They showed their map, made their moves, reached their position. We were just too busy crafting narratives to notice the terrain had already shifted.

0 replies · 0 reposts · 0 likes · 138 views
Dilip Kumar @kmr_dilip
If there's one thing America has mastered and India hasn't, it's the art of selling a story. The American Dream was the best marketing campaign of the 20th century. They sold the American Dream so hard that the brightest talent across the world left their homes to chase it. They postured about American Dynamism until every investor believed that innovation could only happen in Silicon Valley. These stories gave them soft power, talent, and capital.

I think India today is at a similar moment. Yes, we still wrestle with corruption, bureaucracy, and gaps in infrastructure. But so did the US in its rise. That didn't stop them from relentlessly selling their own narrative. What India needs now is the conviction to sell the Indian Dream. To build for 1.4B people and to create confidence among our entrepreneurs and consumers that Indian products and companies and brands are worth backing and using.

There are a ton of challenges we have in health, climate, education & energy. But these are the very challenges that can create opportunities. We should call our problems what they are - trillion-dollar opportunities. America didn't just build companies but built stories the world wanted to believe in. If America won on story and China on scale, India must win by solving the hardest problems for the most people living in India. The media, government & investors should constantly be talking about entrepreneurs here and the Indian Dream.
100 replies · 119 reposts · 960 likes · 50.6K views
prasad kompalli reposted
Harsha G. @harshaog
Why are we pretending that the pinnacle of AI interaction is typing questions and getting text back? We are pattern-matching, visual-processing, spatial-reasoning creatures who happened to develop language as a useful hack. We think in relationships and patterns, yet currently we're forced to translate everything into text requests and then back into understanding. Read more on my latest blog: Re-imagining AI Interfaces
2 replies · 10 reposts · 22 likes · 3.2K views
prasad kompalli @pkompalli
This test match win and the overall series feel very special.. as it's pulled off by a supposed *team in transition with not much experience* #INDvsENG. Right up there along with #BGT2021.
0 replies · 0 reposts · 3 likes · 105 views
prasad kompalli @pkompalli
Great to see the next gen win the match. Finally a team that's grounded, hungry to learn, focused on performing, and with no one who feels they are above the game itself... #INDvsENG2025
1 reply · 0 reposts · 1 like · 141 views
prasad kompalli @pkompalli
Tech head tells me "last week Devin gave max PRs and the feature change you asked for was completely done by Devin". Amazed at how fast and far AI for coding has come and what lies ahead for software dev.. @getOncourseAI #AI #AIforcoding #genAI
0 replies · 0 reposts · 2 likes · 80 views
prasad kompalli @pkompalli
Kids want to add an *Equity* concept into the Monopoly game !!! Sign of the times and influences (-: .. just curious - would VCs enjoy such a game if it existed? @banglani @TheSwamy
0 replies · 0 reposts · 2 likes · 134 views
prasad kompalli @pkompalli
@AravSrinivas instead of solving, try benchmarking on setting a JEE Advanced-level paper without any errors in question formation. I found that more interesting and challenging for the reasoning capabilities of LLMs.
0 replies · 0 reposts · 2 likes · 56 views
Aravind Srinivas @AravSrinivas
Has anyone benchmarked how much these AI tools like Perplexity, Grok or ChatGPT with reasoning mode (and also web search) would score in a JEE Advanced? If someone’s interested in carrying out a rigorous evaluation, would be happy to support with API credits.
203 replies · 69 reposts · 2.3K likes · 224.8K views
prasad kompalli @pkompalli
I find all LLMs faltering on consistency, even in basic concepts.. of course with straightforward prompting only. But the expectation is that one need not be an expert at prompting to get useful questions from LLMs.
0 replies · 0 reposts · 0 likes · 77 views
prasad kompalli @pkompalli
A small observation: more than solving HL math/physics/coding problems, I find getting LLMs to 'formulate' a good set of solvable problems in a given topic (algebra, geometry ...) is a challenge. LLMs should be benchmarked on this. #GenAI #LLMbenchmarks
2 replies · 0 reposts · 3 likes · 159 views
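The tweet above proposes benchmarking LLMs on *formulating* well-posed problems rather than solving them. A minimal sketch of such a harness, not an existing benchmark: `generate_problems` is a hypothetical stand-in for a model API call (here returning canned output so the harness runs offline), and `is_well_posed` applies only crude structural checks; a real benchmark would also verify each problem is actually solvable.

```python
# Sketch of a "problem formulation" benchmark harness.
# ASSUMPTIONS: generate_problems is a placeholder for an LLM call;
# is_well_posed checks structure only, not mathematical solvability.

def generate_problems(topic: str, n: int) -> list[dict]:
    """Hypothetical LLM call; replace with a real model API."""
    return [
        {"question": "Solve x + 3 = 7 for x.", "answer": "4"},
        {"question": "", "answer": "12"},               # malformed: empty question
        {"question": "Factor x^2 - 9.", "answer": ""},  # malformed: missing answer
    ][:n]

def is_well_posed(problem: dict) -> bool:
    """Structural checks: non-empty question ending in . or ?, and an answer."""
    q = problem.get("question", "").strip()
    a = problem.get("answer", "").strip()
    return bool(q) and bool(a) and q.endswith((".", "?"))

def formulation_score(topic: str, n: int = 3) -> float:
    """Fraction of generated problems that pass the well-posedness checks."""
    problems = generate_problems(topic, n)
    return sum(is_well_posed(p) for p in problems) / len(problems)

print(formulation_score("algebra"))  # → 0.3333333333333333 (1 of 3 passes)
```

Scoring formulation this way rewards a model for producing fewer, cleaner problems over many malformed ones, which matches the tweet's complaint about error-free question formation.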
prasad kompalli @pkompalli
It's been super exciting ... building cool new learning experiences!
0 replies · 1 repost · 2 likes · 814 views
prasad kompalli reposted
Harsha G. @harshaog
Are you a fresher who wants to help sculpt the future of Learning with AI? Let's chat. We're hiring a Software Engineering Intern & a Product Intern.
1 reply · 6 reposts · 9 likes · 1.9K views
Ritesh Banglani @banglani
Any sufficiently advanced technology is indistinguishable from magic (minutes before landing in Delhi 🤯)
7 replies · 1 repost · 59 likes · 4.9K views