Rishi Kulkarni

3.9K posts

Rishi Kulkarni

@rishikulkarni

Co-founder https://t.co/Zagg7qHRNk, Co-founder @revv_so (acquired LegalZoom). Founder @1clickio (acquired Freshworks)

Bangalore · Joined May 2009
3.2K Following · 1.1K Followers
Rishi Kulkarni@rishikulkarni·
@MohapatraHemant This is still an early-phase point of view! Everything is being thrown at the models to build great software autonomously. Every long-tail custom workflow is being built. That artifact is software. The outcome is that we use tokens to build something that no longer needs tokens.
0
0
1
61
Hemant Mohapatra@MohapatraHemant·
There is such misunderstanding of seat-based vs token-based billing in AI. Software was static: you literally sat in a seat, clicked on GUIs, and extracted value through actions. Companies paid fixed salaries to people who extracted value out of and through software. The more actions you took (i.e. the more seats you had), the more value you could extract. AI, if you are truly using it as you're supposed to, feels like an endless tube of toothpaste. One day you'll want to squeeze a lot of it, the next not so much. A third day you may spin up a heavy-token-usage, very-long-horizon task with extended thinking on. You can spin up tasks while you sleep, on a flight, at the gym, or with family. Or rather, when you're not in the seat! There is no other way to price AI _except_ usage, and anyone who hasn't moved to this model is indicating to the world they just don't get it yet...
21
2
98
26.9K
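The toothpaste analogy above can be put into toy numbers. This is a back-of-envelope sketch, not anyone's actual pricing: SEAT_PRICE and TOKEN_PRICE are made-up illustrative rates.

```python
# Toy comparison of seat-based vs usage-based AI billing.
# Both constants are illustrative assumptions, not real vendor rates.
SEAT_PRICE = 30.0        # $/seat/month
TOKEN_PRICE = 3.0e-6     # $/token

def seat_bill(seats: int) -> float:
    # Seat pricing charges for access, whether or not anyone acts.
    return seats * SEAT_PRICE

def usage_bill(daily_tokens: list) -> float:
    # Usage pricing follows the "endless tube of toothpaste":
    # a heavy squeeze one day, nothing the next.
    return sum(daily_tokens) * TOKEN_PRICE

# A 5-person team that idles most days but runs one overnight
# long-horizon job can still spend about a month's worth of seats:
month = [0, 0, 50_000_000, 0, 200_000] + [0] * 25
```

Under these assumed rates, `usage_bill(month)` comes out near `seat_bill(5)`, but only because of one bursty day; on a quiet month the usage bill collapses toward zero, which is the post's point.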
Tejeshwi Sharma 🇮🇳@tejeshwi_sharma·
Founders often over-index on the function they know best (product, ops, GTM) and treat it as the make-or-break. That's a bias, not a strategy. Great companies aren't great at one thing; they're great at many things working in sync. Functions are tools, not the outcome. The fix: hire leaders who are world-class where you're not.
3
2
36
2.5K
Rishi Kulkarni@rishikulkarni·
💯 "This is a moment to build, to expand scope, to take on the work that wasn't possible before. The people and organizations who treat AI as leverage will look back on this period as the one where they got a great deal more ambitious."
PitCrew Agents@GoPitCrew

x.com/i/article/2049…

1
0
1
97
Rishi Kulkarni@rishikulkarni·
UPI and this 👇 are the fuel
The Indian Matrix@indianmatrix

In 2005, India couldn't meet 12.3% of its own peak demand. By 2007, the shortfall had widened to nearly 16.6%, and close to 18,000 megawatts were unavailable. The early 2000s were years of genuine electricity poverty. Factories ran on diesel backup generators as a matter of routine. Homes in smaller cities and villages received power for a few hours a day.

Distribution, which is the final link between the grid and the household, was historically the most neglected and most corrupt part of the chain. Electricity theft was widespread, billing was unreliable, and state electricity boards were financially broken. Reforms here were uneven and politically difficult, but schemes like UDAY, launched in 2015, restructured the debt of state distribution companies and pushed them toward financial viability. The Saubhagya scheme, from 2017, connected the last unelectrified households, around 25 million of them, to the grid by 2019.

India's solar capacity in 2010 was negligible. Today, it is measured in hundreds of gigawatts. The price of solar panels fell globally by over 90% across this period, and India made a strategic bet to capture that cost decline at scale. Rooftop solar programmes brought electricity generation to homes, factories, and commercial buildings. And the International Solar Alliance, co-founded by India in 2015, helped build global momentum.

The timing proved critical. India's peak electricity demand now falls in the afternoon, driven by air conditioning in an increasingly hot country. Solar generates hardest in exactly those hours. On April 25, around 12:30 pm, solar plants and rooftop systems together supplied roughly one-third of all electricity being generated at that moment. Across the full day, solar's share was around 22%. India today draws 52% of its electricity from non-fossil sources. More than half of every unit generated comes from sun, water, wind, or nuclear.

The deficit percentage, which once sat stubbornly above 10%, has now collapsed. Since 2024, it has been effectively zero. Reliable electricity means a small business owner does not budget for a diesel generator as a fixed cost. It means an electric vehicle is practical for someone who cannot afford to be stranded. It means a student in a rural home can study at night without planning around power cuts. It means a hospital runs its equipment on the assumption that the supply will hold.

Electricity reliability is, in the end, a quiet form of equity. When the grid is unreliable, those with money buy backup. Those without simply go without. India's closing of its power deficit means that the gap no longer falls along economic lines. The country that once rationed darkness now delivers light on demand, at the moment of highest need, to everyone connected to the grid. That took two decades and thousands of infrastructure decisions. It is not the kind of achievement that fits in a headline. But on an April afternoon, when 256 gigawatts flowed, and nothing broke, it showed.

0
0
1
56
Rishi Kulkarni@rishikulkarni·
@paraschopra Deploying an agent is not an easy task. OpenClaw had the right scaffold, which finally pushed vibe coders to think deployment-first. The connections are a genuinely deployment-first problem, and vibing alone doesn't make them easy. OpenClaw was the IKEA of agents.
0
0
1
462
Paras Chopra@paraschopra·
I don’t get the OpenClaw hype. Connecting Claude with Telegram / WhatsApp is trivially easy; you can literally ask it to help you do this and it’ll guide you. Same story with recurring jobs. I just did this: Claude now sends me a local Bangalore news summary at 12pm IST daily on Telegram. Took me 15 mins to build. If the argument is that Claw lets non-technical users do this, imagine the security implications when users let an LLM take over their system while having no idea what’s happening under the hood. Making custom scripts and workflows with Claude at least lets you know what you’re configuring on your system.
243
36
1.1K
172.4K
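The recurring job Paras describes can be sketched roughly. The Telegram Bot API's `sendMessage` endpoint is real; everything else here (the bot token, chat id, message format, and the summarization step, an LLM call not shown) is a placeholder assumption, since the post doesn't say how his setup is built.

```python
# Sketch of a daily "news digest to Telegram" job. Token, chat id, and
# the summary text are placeholders you would supply yourself.
import json
import urllib.request

TELEGRAM_SEND = "https://api.telegram.org/bot{token}/sendMessage"

def build_payload(chat_id: str, summary: str) -> dict:
    # sendMessage accepts a JSON body with chat_id and text fields.
    return {"chat_id": chat_id, "text": "Bangalore news, 12pm IST:\n" + summary}

def send(token: str, chat_id: str, summary: str) -> None:
    req = urllib.request.Request(
        TELEGRAM_SEND.format(token=token),
        data=json.dumps(build_payload(chat_id, summary)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # ignores the response; check it in real use

# Schedule with cron; 12:00 IST is 06:30 UTC:
#   30 6 * * * /usr/bin/python3 /path/to/digest.py
```

This is the "custom script you can read" alternative he argues for: every line of what touches your system is visible, unlike an agent framework operating opaquely on your machine.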
Rishi Kulkarni@rishikulkarni·
@sajithpai But that's just one dimension. Work per token (if such a metric existed) has also fallen off. Too many tokens are needed to get anything done.
0
0
1
312
Lucas Meijer@lucasmeijer·
Who has a really nice setup like this?
- Cloud-based agents
- Kanban board-esque overview
- Full control over the agent loop
74
4
183
41.3K
Rishi Kulkarni@rishikulkarni·
@ravi_lsvp Context winners come from multiple angles. Systems of record believe they have the right to win; frontier models could build harness infra to simplify context capture. And domain or vertical experts bring unique workflow trajectories that address trust + context (e.g. financial services).
0
1
3
40
Ravi Mhatre@ravi_lsvp·
Stanford’s AI Index dropped this week: the most data-dense picture we have of where this technology is and where it’s scaling. A few things jumped out at me.

The headline most people will miss: AI capability is no longer the bottleneck. The top four frontier models are now separated by fewer than 25 points on Arena. SWE-bench Verified has the leaders clustered within a few percentage points of each other. On real computer tasks (OSWorld), the best model went from 12% to 66% accuracy in under a year, within 6 points of human performance. Same for Terminal-Bench: 20% to 77%. The models can do the work. That debate is over.

It suggests the massive transformation we’ve seen in the coding market the past few months is coming for other industries very soon. Employment for software developers ages 22-25 fell nearly 20% from 2024. What happened to software engineering is coming for legal, finance, and professional services.

So why is AI agent deployment still stuck in single digits across nearly all business functions? Why are most businesses still only experimenting, when 88% say they use AI somewhere? The report’s data suggests two big blockers.

The first is context. The report clearly shows AI gains are strongest in tasks that are structured, with clear feedback loops and quality monitoring. That is partly why coding fell first: tight iteration loops between agent and codebase, outputs you can test. The test is how structured and accessible the underlying data is: AI adoption is highest in software engineering and knowledge management, and lowest in strategy and finance. Scaling AI into legal, consulting, and professional services means making the context loop tighter. That means building context graphs that connect agents to permissioned, always-current organizational data (which companies like Glean are doing), and much more architecture besides. Models still struggle to synthesize across long documents even as context windows grew 30x. An infra layer is missing.

The second is trust infrastructure. Foundation model transparency actually declined this year. Hallucination rates across 26 top models range from 22% to 94%. Safety benchmarks remain spotty. Documented AI incidents rose to 362, up from 233 the year before. A CFO is not going to trust an AI agent to help close the books if it could cook them.

$582 billion flowed into AI last year, up 130%. Hundreds of billions of that went to compute, but only a fraction went to the infrastructure that determines whether any of it gets deployed where it matters. Software is now cheap. But context is still expensive, and what scales will be what is trusted. If we solve those unhobblings, we’ll see more tipping points in more parts of the economy.

Great summary from @LuizaJarovsky 👇
Luiza Jarovsky, PhD@LuizaJarovsky

🚨 BREAKING: Stanford's 423-page AI Index Report 2026 is out! [Bookmark it below]. These are its key takeaways:
1. AI capability is not plateauing. It is accelerating and reaching more people than ever.
2. The U.S.-China AI model performance gap has effectively closed.
3. The U.S. hosts the most AI data centers, with the majority of its chips fabricated by one Taiwanese foundry.
4. AI models can win a gold medal at the International Mathematical Olympiad but cannot reliably tell time, an example of what researchers call the jagged frontier of AI.
5. Robots still fail at most household tasks, even as they excel in controlled environments.
6. Responsible AI is not keeping pace with AI capability, with safety benchmarks lagging and incidents rising sharply.
7. The U.S. leads in AI investment, but its ability to attract global talent is declining.
8. AI adoption is spreading at historic speed, and consumers are deriving substantial value from tools they often access for free.
9. Productivity gains from AI are appearing in many of the same fields where entry-level employment is starting to decline.
10. AI's environmental footprint is expanding alongside its capabilities.
11. AI models for science can outperform human scientists, though bigger models do not always perform better.
12. AI is transforming clinical care, but rigorous evidence remains limited.
13. Formal education is lagging behind AI, but people are learning AI skills at every stage of life.
14. AI sovereignty is becoming a defining feature of national policy, but capabilities remain uneven, even as open-source development helps to redistribute who participates.
15. AI experts and the public have very different perspectives on the technology's future, and global trust in institutions to manage AI is fragmented.
👉 Download the full document below.
👉 To learn more about AI's legal and ethical challenges, join my newsletter's 93,500+ subscribers (link below).

2
0
6
1.2K
Rishi Kulkarni@rishikulkarni·
It's cumulative, and it's interesting to see non-frontier models lead the harness engineering. Expect more domain/vertical context driving the next phase.
Akshay 🚀@akshay_pachaar

from weights → context → harness engineering (evolution of the agent landscape, 2022-26)

the biggest shift in AI agents had nothing to do with making models smarter. it was about making the environment around them smarter. here's how agent engineering evolved in just 4 years, across three distinct phases:

𝗽𝗵𝗮𝘀𝗲 𝟭: 𝘄𝗲𝗶𝗴𝗵𝘁𝘀 (𝟮𝟬𝟮𝟮)
everything was about the model itself. bigger models, more data, better training. scaling laws told us that progress = more parameters. RLHF and fine-tuning shaped behavior. if you wanted a better agent, you trained a better model. this worked great for single-turn tasks. ask a question, get an answer. but it hit a wall fast. updating one fact meant retraining. auditing behavior was nearly impossible. and personalization across millions of users from one frozen set of weights? not happening.

𝗽𝗵𝗮𝘀𝗲 𝟮: 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 (𝟮𝟬𝟮𝟯-𝟮𝟬𝟮𝟰)
the realization: you don't always need to change the model. you can change what the model sees. prompt engineering, few-shot examples, chain-of-thought, RAG. suddenly the same frozen model could behave completely differently based on what you put in front of it. developers stopped fine-tuning and started iterating on prompts and retrieval pipelines instead. it was cheaper, faster, and surprisingly effective. but context windows are finite. long prompts get noisy. models attend unevenly (the "lost in the middle" problem is real). and every new session starts fresh with zero memory of what happened before. context made agents flexible. it didn't make them reliable.

𝗽𝗵𝗮𝘀𝗲 𝟯: 𝗵𝗮𝗿𝗻𝗲𝘀𝘀 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 (𝟮𝟬𝟮𝟱-𝟮𝟬𝟮𝟲)
this is where we are now, and the shift is fundamental. the question changed from "what should we tell the model?" to "what environment should the model operate in?" the model is no longer the sole location of intelligence. it sits inside a harness that includes persistent memory, reusable skills, standardized protocols (like MCP and A2A), execution sandboxes, approval gates, and observability layers. the model stays the same. what changes is the task it's being asked to solve.

a concrete example: a coding agent asked to implement a feature, run tests, and open a PR. without a harness, the model must keep repo structure, project conventions, workflow state, and tool interactions all inside a fragile prompt. with a harness, persistent memory supplies context, skill files encode conventions, protocolized interfaces enforce correct schemas, and the runtime sequences steps and handles failures. same model. completely different reliability.

𝘁𝗵𝗲 𝗽𝗮𝘁𝘁𝗲𝗿𝗻 𝗮𝗰𝗿𝗼𝘀𝘀 𝗮𝗹𝗹 𝘁𝗵𝗿𝗲𝗲 𝗽𝗵𝗮𝘀𝗲𝘀 𝗶𝘀 𝘀𝗶𝗺𝗽𝗹𝗲:
- weights encoded knowledge in parameters (fast but rigid)
- context staged knowledge in prompts (flexible but ephemeral)
- harnesses externalized knowledge into persistent infrastructure (reliable and governable)

each phase didn't replace the previous one. it layered on top. weights still matter. context engineering still matters. but the center of gravity has moved outward. the most consequential improvements in agent reliability today rarely come from changing the base model. they come from better memory retrieval, sharper skill loading, tighter execution governance, and smarter context budget management. building better agents increasingly means building better environments for models to operate in.

there's a great paper on this: Externalization in LLM Agents: A Unified Review of Memory, Skills, Protocols and Harness Engineering
paper: arxiv.org/abs/2604.08224

i also published a deep dive (article) on agent harness engineering, covering the orchestration loop, tools, memory, context management, and everything else that transforms a stateless LLM into a capable agent.

0
0
3
76
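The harness idea in the thread above can be compressed into a toy loop. This is a minimal sketch under stated assumptions: the model is a stateless callable, memory is a plain list, and skills are plain text; real harnesses (MCP servers, sandboxes, observability layers) are far richer, and all names here are illustrative.

```python
# Toy harness: the model stays stateless; reliability comes from the
# loop around it (persistent memory, skill files, an approval gate).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    model: Callable[[str], str]                            # stateless LLM call
    memory: list = field(default_factory=list)             # persists across tasks
    skills: dict = field(default_factory=dict)             # conventions as text
    approve: Callable[[str], bool] = lambda action: True   # approval gate

    def run(self, task: str) -> str:
        # The harness, not the prompt author, stages context each turn.
        context = "\n".join(list(self.skills.values()) + self.memory)
        action = self.model(context + "\nTASK: " + task)
        if not self.approve(action):
            return "blocked by approval gate"
        self.memory.append(task + " -> " + action)         # externalized state
        return action
```

Swapping `approve` for a human-in-the-loop check, or `memory` for a real store, changes the agent's reliability and governance without touching the model, which is the thread's "same model, different harness" point.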
Rishi Kulkarni@rishikulkarni·
This is one reason there is potential for multiple harnesses in an org. Each harness provides the ability to encode the environment, guardrails, interventions, and tools for a given set of outcomes, evolving into a batteries-included harness managed by "harness success managers".
Aaron Levie@levie

The more enterprises I talk to about AI agent transformation, the more it’s clear that there is going to be a new type of role in most enterprises going forward. The job is to be the agent deployer and manager on teams. Here’s the rough JD:

This person will need to figure out what the highest-leverage set of workflows on a team is (either existing or new ones) where agents can actually drive significantly more value for the team and company. In general, it’s going to be in areas where, if you threw compute (in the form of agents) at a task, you could either execute it 100X faster or do it 100X more times than before. Examples would be processing orders of magnitude more leads to hand them off to reps with extra customer signal, automating a contract review and intake process, streamlining a client onboarding process to remove as many steps as possible, setting up knowledge bases that the whole company taps into, and so on.

This person’s job is to figure out what the future-state workflow needs to look like to drive this new form of automation, and how to connect up the various existing or new systems in such a way that this can be fulfilled. The gnarly part of the work is mapping structured and unstructured data flows, figuring out the ideal workflow, getting the agent the context it needs to do the work properly, figuring out where the human interfaces with the agent and at what steps, managing evals and reviews after any major model or data change, and running the agents on an ongoing basis while tracking KPIs, and so on.

The person must be good at mapping the process and understanding where the value could be unlocked, be relatively technical, and have full autonomy to connect up business systems and drive automation. This means they’re comfortable with skills, MCP, CLIs, and so on, and the company believes it’s safe for them to do so. But they must also be great operationally and at business.

It may be an existing person repositioned, or a totally net-new hire. There will likely need to be one or more of these people on every team, so it’s not a centralized role per se. It may roll up into IT or an AI team, or live in the function and just have checkpoints with a central function. This would also be a fantastic job for next-gen hires who are leaning into AI and are technical. And for anyone concerned about engineers in the future, this will be an obvious area for these skills as well.

0
1
2
80