Ajay Krishna Amudan

72 posts

@ajkrish95

Founder, CTO @Maximor_AI, Prev @Microsoft @Stanford @IITMadras, Math Olympian and Competitive Programmer in another lifetime ( IMO 2011, 12 & ACM ICPC WF 2015 )

New York, NY · Joined April 2014
1.7K Following · 246 Followers
Ajay Krishna Amudan@ajkrish95·
“Plenty of vendors inside regulated verticals are still getting squeezed because they never became AI-native. BlackLine ($BL) and Trintech are feeling it in close and reconciliation as Numeric, Maximor, and Stacks build AI-native from day one. nCino ($NCNO) in banking faces the same challenge. The regulatory moat buys you time. It doesn't buy you the decade.”

^ Respectfully, while it is true that BlackLine and FloQast customers are flocking to @maximor_ai, what we're really building is the Autonomous Finance and Ops Engine. We're helping our CFOs and the entire Office of the CFO move from being mostly back office to mostly front office. Our contract sizes blow BlackLine's and FloQast's out of the water precisely because of that: our average ACVs TODAY (and we're just about 8 months post-commercialization) are 3x-4x FloQast's (they started in 2012). This is only possible because when CFOs turn to Maximor, they're doing it for a Palantir-style transformation of the finance function, but with short implementation timelines (most of our agents now get implemented in under 2 weeks) and 99%+ accuracy, with an entire verification layer underpinning our architecture.
Brad Lyons@blyons151

In August I wrote a thesis I never published. The funds I was warning were key Crossover Research clients, so I stayed quiet.

Since then, 𝗦𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗺𝘂𝗹𝘁𝗶𝗽𝗹𝗲𝘀 𝗮𝗿𝗲 𝗱𝗼𝘄𝗻 𝟱𝟬%+. Salesforce $CRM, ServiceNow $NOW, Adobe $ADBE, Workday $WDAY all off 40% from highs. Thomson Reuters $TRI dropped 16% in a single session on the Anthropic legal agent launch. The SaaSpocalypse arrived. So here's the follow-up. Not commentary on what happened, but where I think this goes next.

Most vertical SaaS companies aren't underperforming because their software is bad. 𝗧𝗵𝗲𝘆'𝗿𝗲 𝘂𝗻𝗱𝗲𝗿𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗶𝗻𝗴 𝗯𝗲𝗰𝗮𝘂𝘀𝗲 𝘁𝗵𝗲𝘆 𝗻𝗲𝘃𝗲𝗿 𝗯𝘂𝗶𝗹𝘁 𝘁𝗵𝗲 𝘀𝗲𝗰𝗼𝗻𝗱 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀. And the first business is under attack.

For twenty years, one of the biggest SaaS moats was engineering complexity: deep technical talent, long roadmaps, compounding codebases that were genuinely hard to replicate. 𝗔𝗜 𝘂𝗽𝗲𝗻𝗱𝗲𝗱 𝘁𝗵𝗮𝘁 𝗮𝗹𝗺𝗼𝘀𝘁 𝗼𝘃𝗲𝗿𝗻𝗶𝗴𝗵𝘁. Product development is democratizing to operators with no code background but strong product vision. Look at Anthropic: they've built the engine and are shipping lookalike products at a cadence that would have taken a legacy SaaS vendor three years of roadmap, with a fraction of the headcount. That pace can kill legacy businesses overnight.

𝗜𝗳 𝘁𝗵𝗲 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗺𝗼𝗮𝘁 𝗶𝘀 𝗴𝗼𝗻𝗲, 𝗳𝗼𝘂𝗿 𝗺𝗼𝗮𝘁𝘀 𝗿𝗲𝗺𝗮𝗶𝗻: 𝗱𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗼𝗻, 𝗽𝗿𝗼𝗽𝗿𝗶𝗲𝘁𝗮𝗿𝘆 𝗱𝗮𝘁𝗮, 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄 𝗯𝗿𝗲𝗮𝗱𝘁𝗵, 𝗮𝗻𝗱 𝗿𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗶𝗻𝘀𝘂𝗹𝗮𝘁𝗶𝗼𝗻. The first three are moats the company builds. The fourth is a moat the company captures, and it's the one most resistant to AI disruption.

𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗰𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆 𝗰𝗿𝗲𝗮𝘁𝗲𝘀 𝘀𝘄𝗶𝘁𝗰𝗵𝗶𝗻𝗴 𝗰𝗼𝘀𝘁𝘀 𝘁𝗵𝗮𝘁 𝗵𝗮𝘃𝗲 𝗻𝗼𝘁𝗵𝗶𝗻𝗴 𝘁𝗼 𝗱𝗼 𝘄𝗶𝘁𝗵 𝗽𝗿𝗼𝗱𝘂𝗰𝘁 𝗾𝘂𝗮𝗹𝗶𝘁𝘆. Once a vendor is embedded in a compliance workflow, ripping them out means re-attesting, re-auditing, and re-certifying every downstream process. The buyer isn't paying for software, they're paying for the accumulated paper trail.

Tyler Technologies ($TYL) is the clearest version of the pattern. State and local government software across courts, public safety, assessment, and ERP. Every module is married to statutory process, FIPS, CJIS, audit trails, and procurement cycles that take years. TYL is down 42% TTM and 2026 guidance came in soft, but the moat didn't break. Revenue still compounded, and government procurement runs on five-year cycles, not five-week news cycles.

Veeva is the sharper version. Revenue up 16% in FY26, Q4 beat, the stock still down 25%. The market is selling execution, not weakness. Guidewire in P&C insurance, where regulatory filings and rate approvals anchor the stack, sits in the same setup: still compounding ARR, still winning cloud conversions, multiple reset anyway. Same pattern across all three: multiples compressed, fundamentals intact. The moat is the regulatory surface area itself, and it compounds because the rules get more complex, not less.

𝗜 𝘄𝗮𝘀 𝗹𝗼𝗻𝗴 𝗣𝗮𝗹𝗮𝗻𝘁𝗶𝗿 𝗮𝘁 $𝟭𝟯 (read that here: x.com/blyons151/stat…). 𝗡𝗼𝘁 𝗯𝗲𝗰𝗮𝘂𝘀𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝗼𝗿 𝘁𝗵𝗲 𝘁𝗼𝗼𝗹𝗶𝗻𝗴. 𝗕𝗲𝗰𝗮𝘂𝘀𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗼𝗻𝘁𝗼𝗹𝗼𝗴𝘆. Palantir is the proprietary-data version of the regulatory thesis. Once Palantir sits between the customer and their own data, ripping it out means rebuilding the data model from scratch. Snowflake and Databricks never had that entrenchment layer. AIP bootcamps then turned the data moat into a distribution moat: 660 bootcamps in a single quarter, 94% y/y US customer deal growth, bookings at 1.9x sales. Own the data, ship functional AI on top of it, let the GTM compound. Every vertical incumbent has a version of this available. The question is whether they'll build it before a challenger does.

But regulatory insulation is necessary, not sufficient. Plenty of vendors inside regulated verticals are still getting squeezed because they never became AI-native. BlackLine ($BL) and Trintech are feeling it in close and reconciliation as Numeric, Maximor, and Stacks build AI-native from day one. nCino ($NCNO) in banking faces the same challenge. The regulatory moat buys you time. It doesn't buy you the decade.

𝗧𝗵𝗲 𝘄𝗶𝗻𝗻𝗶𝗻𝗴 𝗳𝗼𝗿𝗺𝘂𝗹𝗮 𝗶𝘀 𝗱𝗮𝘁𝗮 𝗼𝗿 𝗿𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝘀𝘂𝗿𝗳𝗮𝗰𝗲 𝗮𝗿𝗲𝗮 𝗽𝗹𝘂𝘀 𝗳𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝗔𝗜, 𝗻𝗼𝘁 𝗼𝗻𝗲 𝗼𝗿 𝘁𝗵𝗲 𝗼𝘁𝗵𝗲𝗿. Look at why Claude is winning. Anthropic isn't competing on model benchmarks, they're competing on functional workflow. Building for the user, not the leaderboard. That's the playbook vertical incumbents need to run. Take the moat you already have, whether it's regulatory or data-entrenchment, layer genuine workflow AI on top, and the challenger can't catch you. The vendors that do both win the decade. The ones that rely on inertia alone get caught. The ones that ship AI without an anchor get commoditized. You need both.

𝗧𝗵𝗲 𝗯𝘂𝘆𝗲𝗿 𝗶𝘀 𝘁𝗲𝗹𝗹𝗶𝗻𝗴 𝘆𝗼𝘂 𝘁𝗵𝗶𝘀 𝗽𝗹𝗮𝗶𝗻𝗹𝘆. A study we ran with Battery Ventures on AI adoption in the Office of the CFO (battery.com/blog/first-cod…) surveyed 129 finance leaders at companies from $50M to $5B+ in revenue. 77% said they want to uplevel existing systems with AI from new vendors that layer onto existing systems. Only 15% want to replace their current system of record with an AI-native platform. The incumbent wins if they ship AI. The AI-native challenger wins only if the incumbent doesn't.

The signal shows up in our VoC data too. In regulated verticals, mission criticality scores cluster above 9, and NPS doesn't track satisfaction, it tracks switching friction. Customers will tell you the product is mediocre and still score it 9 on "would not switch" because the compliance team vetoes any alternative. 𝗧𝗵𝗮𝘁'𝘀 𝘁𝗵𝗲 𝘀𝗶𝗴𝗻𝗮𝘁𝘂𝗿𝗲 𝗼𝗳 𝗮 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲-𝗶𝗻𝘀𝘂𝗹𝗮𝘁𝗲𝗱 𝘃𝗲𝗻𝗱𝗼𝗿, 𝗮𝘀 𝗹𝗼𝗻𝗴 𝗮𝘀 𝘁𝗵𝗮𝘁 𝘃𝗲𝗻𝗱𝗼𝗿 𝗶𝘀 𝗮𝗰𝘁𝗶𝘃𝗲𝗹𝘆 𝘀𝗵𝗶𝗽𝗽𝗶𝗻𝗴 𝗮𝗴𝗮𝗶𝗻𝘀𝘁 𝘁𝗵𝗲 𝗔𝗜 𝗰𝘂𝗿𝘃𝗲.

Which brings us back to the second business for everyone outside the regulated or data-entrenched moat. Seat ARR got them to $100M. But with the shift to agentic workforce structures, partial human capital replacement, and pricing pressure compressing margins, the traditional SaaS model has to transform fast.

The next $500M comes from monetizing the installed base: marketplace rake on demand they generate for their own customers, capital products underwritten by their own transaction data, supplier monetization, brand partnerships, group buying. The assets are already sitting there. Captive SMB audience. Proprietary transaction and behavioral data. A distribution pipe (the UI itself) that delivers new products at near-zero CAC.

𝗪𝗵𝗮𝘁'𝘀 𝗺𝗶𝘀𝘀𝗶𝗻𝗴 𝗶𝘀 𝗼𝗿𝗴𝗮𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝘄𝗶𝗹𝗹. Monetizing the installed base requires a different org than the one that got you to scale. Different GTM, P&L optics, and talent. Founders and boards under-invest because year one looks worse before it looks better, and public markets punish any SaaS multiple that starts to look like fintech or marketplace. So the second business never ships. The round prices in the optionality. The multiple compresses. The exit underwhelms.

𝗧𝗵𝗿𝗲𝗲 𝗱𝗶𝗹𝗶𝗴𝗲𝗻𝗰𝗲 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗻𝗼𝘁 𝗲𝗻𝗼𝘂𝗴𝗵 𝗶𝗻𝘃𝗲𝘀𝘁𝗼𝗿𝘀 𝗮𝗿𝗲 𝗮𝘀𝗸𝗶𝗻𝗴:

𝟭. 𝗪𝗵𝗮𝘁 𝗽𝗲𝗿𝗰𝗲𝗻𝘁 𝗼𝗳 𝗿𝗲𝘃𝗲𝗻𝘂𝗲 𝗰𝗼𝗺𝗲𝘀 𝗳𝗿𝗼𝗺 𝘀𝗼𝘂𝗿𝗰𝗲𝘀 𝗼𝘁𝗵𝗲𝗿 𝘁𝗵𝗮𝗻 𝘀𝘂𝗯𝘀𝗰𝗿𝗶𝗽𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗽𝗮𝘆𝗺𝗲𝗻𝘁 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴? Under 5%, they haven't started. 10 to 20%, thesis is live. Over 20%, it's working.

𝟮. 𝗛𝗼𝘄 𝗵𝗮𝗿𝗱 𝘄𝗼𝘂𝗹𝗱 𝗶𝘁 𝗯𝗲 𝘁𝗼 𝗿𝗲𝗰𝗿𝗲𝗮𝘁𝗲 𝘁𝗵𝗶𝘀 𝗰𝗼𝗺𝗽𝗮𝗻𝘆 𝗳𝗿𝗼𝗺 𝘀𝗰𝗿𝗮𝘁𝗰𝗵 𝘄𝗶𝘁𝗵 𝗔𝗜 𝘁𝗼𝗱𝗮𝘆? If a well-funded team with Claude and six engineers could rebuild the functional product in nine months, the software isn't the moat. The moat has to live somewhere else: proprietary data, a network, integrations, or regulatory surface area the challenger can't clear. If you can't point to at least one, you're underwriting a melting ice cube.

𝟯. 𝗪𝗵𝗮𝘁 𝗽𝗲𝗿𝗰𝗲𝗻𝘁 𝗼𝗳 𝘁𝗵𝗲 𝗯𝘂𝘆𝗲𝗿'𝘀 𝘀𝘁𝗶𝗰𝗸𝗶𝗻𝗲𝘀𝘀 𝗶𝘀 𝗿𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆, 𝗮𝗻𝗱 𝘄𝗵𝗶𝗰𝗵 𝘄𝗮𝘆 𝗶𝘀 𝘁𝗵𝗲 𝗿𝘂𝗹𝗲 𝘀𝗲𝘁 𝗺𝗼𝘃𝗶𝗻𝗴? A regulatory moat evaporates if the regulation simplifies. Underwrite the direction of travel, not just the current state.

𝗔𝗻𝗱 𝘁𝗵𝗲 𝗰𝗹𝗼𝗰𝗸 𝗶𝘀 𝘁𝗶𝗴𝗵𝘁𝗲𝗿 𝘁𝗵𝗮𝗻 𝗺𝗼𝘀𝘁 𝗿𝗲𝗮𝗹𝗶𝘇𝗲. Retention in enterprise SaaS has largely been defined by the pain of systems replacement, not genuine moat. If the stickiness isn't backed by proprietary data, a harvesting flywheel, or regulatory surface area, those vendors are about to get disrupted. Pure seat-based pricing is dying unless vendors embrace agent-seat models, and LLM providers have been subsidizing the market on token cost, with recent pricing shifts signaling cash reserves aren't infinite.

𝗛𝗲𝗿𝗲'𝘀 𝘁𝗵𝗲 𝘂𝗻𝗱𝗲𝗿𝗮𝗽𝗽𝗿𝗲𝗰𝗶𝗮𝘁𝗲𝗱 𝗽𝗼𝗶𝗻𝘁: 𝗔𝗜-𝗻𝗮𝘁𝗶𝘃𝗲 𝗰𝗼𝗺𝗽𝗲𝘁𝗶𝘁𝗼𝗿𝘀 𝗵𝗮𝘃𝗲 𝘄𝗼𝗿𝘀𝗲 𝗴𝗿𝗼𝘀𝘀 𝗺𝗮𝗿𝗴𝗶𝗻𝘀 𝘁𝗵𝗮𝗻 𝗦𝗮𝗮𝗦 𝗶𝗻𝗰𝘂𝗺𝗯𝗲𝗻𝘁𝘀, 𝗻𝗼𝘁 𝗯𝗲𝘁𝘁𝗲𝗿. Inference costs haven't collapsed, and burning VC cash to subsidize unit economics is a bridge, not a business model. The incumbents should be winning on P&L. They're losing on product velocity and AI-readiness. That's a solvable problem if the board has the will to ship. Vendors without a second business, without a data moat, and without regulatory insulation will still lose, despite having better margins than their AI-native challengers. Customers switch on features and speed, not on unit economics.

𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗮𝗻𝗱 𝗿𝗲𝗴𝘂𝗹𝗮𝘁𝗲𝗱 𝘃𝗲𝗿𝘁𝗶𝗰𝗮𝗹𝘀 𝗮𝗿𝗲 𝘁𝗵𝗲 𝗹𝗮𝘀𝘁 𝘀𝗮𝗳𝗲 𝗵𝗮𝗿𝗯𝗼𝗿, 𝗮𝗻𝗱 𝗼𝗻𝗹𝘆 𝗯𝗲𝗰𝗮𝘂𝘀𝗲 𝗼𝗳 𝗱𝗮𝘁𝗮 𝗯𝗿𝗲𝗮𝗱𝘁𝗵 𝗮𝗻𝗱 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲. Everywhere else, the premium is about to get competed away.

Any fund underwriting vertical SaaS exposure right now should be asking the second-business question before the next check clears. DM me, email me brad@crossoverresearch.com, or let's chat about your portfolio/underwriting process (book.crossoverresearch.com). Crossoverresearch.com

Ajay Krishna Amudan@ajkrish95·
I never made any claims on the amount of value yet to be captured at the data layer that is untouched by Snowflake or Databricks. There is likely some, but I love Snowflake and Databricks and think they're incredible companies/products as well. My point of view is simply this: there will be CONSOLIDATION of vendors (aka "more horizontal/more capable" products), which will mean incrementally more shared context, greater ability to leverage synergies across agent touchpoints, and a decreasing need for "best in breed" products which are just features masquerading as products. NONE OF THIS means that the winning architecture will be "Anthropic + Snowflake/Databricks + Anthropic FDEs". I don't think this has anything at all to do with long-horizon vs short-horizon agents. Long-horizon agents are autonomous agents that are given more "time" and "tools" to explore and self-correct. They are not a substitute for better UX, better data models (with the right levers of flexibility), better decision "tracing", or exception handling. In fact, long-horizon agents tend to degrade very fast the longer the end-to-end value chain of the intended workflow you're automating becomes (@ashugarg has explained how the math compounds; let's just say compound interest is a bit**). I'd even argue that if you don't choose the workflows correctly, you might altogether go the wrong direction. I respect many of your views, and even in this debate I do agree founders need to keep experimenting broadly as new agentic capabilities emerge, but I think you may be off base on this one.
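The compounding Ajay alludes to can be made concrete with a minimal sketch (mine, not from the thread): if each step of a long-horizon workflow succeeds independently with probability p, the whole n-step chain succeeds with p^n, which decays fast as the value chain lengthens.

```python
# Illustrative sketch (not from the thread): end-to-end reliability of an
# n-step agent workflow when each step independently succeeds with
# probability p -- the "compound interest" effect mentioned above.

def end_to_end_success(p: float, n_steps: int) -> float:
    """Probability that all n_steps succeed, assuming independent steps."""
    return p ** n_steps

for p in (0.99, 0.95, 0.90):
    print(f"p={p}: 10 steps -> {end_to_end_success(p, 10):.3f}, "
          f"50 steps -> {end_to_end_success(p, 50):.3f}")
```

Even 99% per-step reliability leaves roughly a 60% end-to-end success rate over 50 steps, which is why workflow selection matters so much.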
Gokul Rajaram@gokulr·
@ajkrish95 Let's see how many new enduring data startups are created in the age of Snowflake and Databricks. I'm willing to bet it's very, very VERY few.
Gokul Rajaram@gokulr·
I'm not claiming this yet, but I think it's going to be more likely as time passes. Why? Because horizontal agents will subsume context and decision traces across hundreds of companies because of their enterprise products. I think you're assuming that horizontal agents won't capture any context. This is the opposite of what will happen. They will capture the most context of all, more than any vertical company. What do we think their enterprise teams and FDEs are doing? If a horizontal agent has decision/context data from 50 fintech companies, how different is the 51st fintech company from the other 50? Everyone believes their company is a special snowflake, but:
- most companies operate and make decisions quite similarly to others in their vertical.
- the company might be operating wrongly, and the horizontal agent will be able to suggest best practices.
Jaya Gupta@JayaGup10

I think what you’re claiming is: a long-horizon agent given enough time (but minimal context) can match what a vertical agent does with rich context. In other words, capability can substitute for memory. Grind long enough and you’ll figure it out. You can’t substitute time for knowledge/memory. A long-horizon agent working for a century still doesn’t know that we handle Customer X differently because of what happened in 2024.

Ron Williams@McclaneDet·
The mistake the structured data and ontology believers make is that they can't let go of the human experience of data access, so the primitives being used blind them to what is actually possible. The thought experiment is: how did the CFO of a pre-computer-age company run a global empire like US Steel during the time of Carnegie? You could deploy a network of LLMs with a camera, a microphone, a speaker, and a printer, drop them in to replace all the people, and the backend of the business would hum along. No databases, no data lake, etc., and that's just today's LLMs.
Ajay Krishna Amudan retweeted
Jaya Gupta@JayaGup10·
I think what you’re claiming is: a long-horizon agent given enough time (but minimal context) can match what a vertical agent does with rich context. In other words, capability can substitute for memory. Grind long enough and you’ll figure it out. You can’t substitute time for knowledge/memory. A long-horizon agent working for a century still doesn’t know that we handle Customer X differently because of what happened in 2024.
Gokul Rajaram@gokulr

VERTICAL AI CHALLENGE Vertical AI Founders: You've spent 2+ years building your agents, training your model on your customers' data, embedding into workflows, creating a powerful GTM motion, all the best practices. You've beaten back challengers and are the #1 or #2 player in your vertical. I'm sorry, you cannot relax. In fact, you need to massively up your game. Turns out you are facing an existential challenge: long-horizon agents (eg: Claude Code). Agents that are not trained on a specific domain, but can reliably work for hours or days on end in pursuit of a goal, self-correct, and actually do stuff. I'm sure many Vertical AI founders will say: "Oh, we are not worried. We are the system of record for decision traces. We train on enterprise-specific context. That's why these horizontal agents can never catch up with this." You might well be right. But, but, but ... you cannot afford to bury your head in the sand. These long-horizon agents will get better very, very quickly. You need to understand precisely how good they are at the exact jobs you've built your agents on. You cannot wait for someone else to do this. For example, if you're a legal AI company with an agent that automates contract review, you must compare how good your specialized agent is versus a general-purpose long-horizon agent that's simply given the contract and asked to perform the same review. My challenge to you: Assign a strong engineer on your team to focus 100% on using long-horizon agents (with minimal context, other than just the contract in the example above) to compete with your custom-trained agents. Benchmark how the long-horizon agents perform vs your agent. Rinse and repeat it every few months. Like with most other things worth measuring, what matters is the rate of improvement (the "slope" vs the Y-intercept). 
If the long-horizon agent is 30% as good as your vertical agent on Day 1, but 50% as good on Day 60, and 70% as good on Day 120, you need to reassess your product strategy. AGI is coming for everyone. Long-horizon agents are the closest we have to AGI, and as a Vertical AI company, you need to figure out how you compete and survive. Game on.
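The "slope vs. the Y-intercept" exercise above can be sketched numerically. This is a hypothetical illustration, not a method from the post, using its example figures (30% → 50% → 70%, with Day 1 simplified to day 0): fit a line to the long-horizon agent's relative score and extrapolate when it reaches parity.

```python
# Hypothetical sketch of the benchmark-tracking loop proposed above:
# fit a line to the long-horizon agent's score relative to the vertical
# agent, then extrapolate to parity (relative score = 1.0).

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

days = [0, 60, 120]            # benchmark re-runs every ~two months
relative = [0.30, 0.50, 0.70]  # long-horizon score / vertical-agent score

intercept, slope = fit_line(days, relative)
days_to_parity = (1.0 - intercept) / slope
print(f"slope: {slope:.4f}/day, projected parity around day {days_to_parity:.0f}")
```

On these example numbers the linear extrapolation hits parity around day 210, i.e. inside a year, which is the "reassess your product strategy" signal the post describes.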

Katie Xu@katiexsocials·
After building the UGC program from 0 to 750M views, I'm stepping down from Head of Marketing at Cluely. My time here was unforgettable; I became obsessed with sourcing creators, writing viral scripts, and meticulously automating the entire system to scale. Final results include:
- 300+ creators hired and coached
- 20,000 organic videos posted
- 750+ million views
- hundreds of IG and TT accounts posting daily, some hitting 40k, 50k, and 80k followers

All accomplished under a $1 CPM. Shoutout to the team, y'all are ambitious and talented and so fun to be around. Learned so much from you guys and rooting for everyone. Excited for the next chapter!
Ajay Krishna Amudan@ajkrish95·
There's definitely a need for a variation of an "obfuscation function" F to be applied to case data, which can then be reused for memory more freely. Let's say the set of all cases of a legal firm is X1, X2, X3, …, Xn. What we know is that you can't just tap into this data for future cases; that breaks client confidentiality rules. But we also know that "general knowledge" gained by the firm from its cases is reusable. If there were a function F(X), or a series of functions, e.g. G(F(X1) U F(X2)), that could be applied to each case and then to the accumulation of cases to create a "rich final representation", any new case, say Xn+1, could start borrowing from this single shared corpus "freely". I'm not an expert in all types of law (beyond narrow aspects of tax law), and how to construct the set of obfuscating, reductive functions is highly dependent on the ontology and semantics of the domain. We're broadly working on similar architectures at @maximor_ai for Finance (replace laws with accounting principles).
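A toy rendering of the F/G idea above. Everything here is a hypothetical stand-in of mine, not Maximor's architecture: F de-identifies a confidential case down to its generalizable lesson, and G accumulates the de-identified lessons into one shared corpus that a new case can draw on.

```python
# Toy sketch (all function bodies, field names, and data are hypothetical):
# F strips client identity and keeps only reusable knowledge; G merges the
# de-identified lessons into a single shared corpus.

def F(case: dict) -> dict:
    """Obfuscating, reductive map: drop client identifiers."""
    return {"issue": case["issue"], "resolution": case["resolution"]}

def G(lessons: list) -> dict:
    """Accumulate de-identified lessons into a corpus keyed by issue."""
    corpus = {}
    for lesson in lessons:
        corpus.setdefault(lesson["issue"], []).append(lesson["resolution"])
    return corpus

cases = [  # X1 ... Xn, each tied to a specific client
    {"client": "Client A", "issue": "revenue recognition",
     "resolution": "defer until the performance obligation is met"},
    {"client": "Client B", "issue": "revenue recognition",
     "resolution": "split the multi-element arrangement"},
]

# G(F(X1) U F(X2)): the shared corpus a new case Xn+1 can borrow from.
corpus = G([F(x) for x in cases])
```

The key property is that no client identity survives F, while the accumulated resolutions remain queryable by issue.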
Jaya Gupta@JayaGup10·
I largely agree with the participation framing. Participation is necessary, but the constraint is whether what's learned can be operationalized safely, without violating confidentiality. The most valuable precedent lives inside client-confidential documents, notes, and negotiations, and the risk is that retrieval and synthesis can leak which clients had which issues, or let patterns bleed across matters. If you rebuilt a legal AI startup from first principles TODAY around unlocking that corpus, you'd reimagine both the backend primitives and the interface around governed precedent: stable pseudonyms that preserve referential integrity, conflict-aware policy that governs what can influence an answer (not just what can be displayed), per-matter views that auto-adjust based on staffing/independence, and audit trails over "what influenced what", with escalation/approval as first-class objects. As many people have already picked up, there's a chance to reimagine what a world-class product in legal AI looks like today with this new framework vs when ChatGPT came out.
Aatish Nayak@nayakkayak

x.com/i/article/2008…

Ajay Krishna Amudan retweeted
Animesh Koratana@akoratana·
1/ Context graphs don't really exist out in the wild today because they require joins across coordinate systems that don't share keys. Traditional databases solved joins decades ago. You have a customer_id, an order_id, a foreign key relationship. The join is discrete, the keys are stable, the operation is well-defined. Organizational reasoning requires a different kind of join. You need to connect: what happened (events) to when it happened (timeline) to what it means (semantics) to who owned it (attribution) to what it caused (outcome). These are five different coordinate systems. None of them share a primary key. And the keys themselves are fluid. "Jaya Gupta" in an email, "J. Gupta" in a contract, "@JayaGup10" in Slack. Same entity, no shared identifier. The join condition isn't equality. It's probabilistic resolution across representations in latent space. Every existing data system optimizes for joins within a single coordinate space. Context graphs require joins across all of them simultaneously.
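The probabilistic join described above can be illustrated with a deliberately crude sketch. A production system would resolve entities in embedding space; stdlib string similarity here is just a stand-in to show that the "join condition" becomes a score and a threshold rather than key equality.

```python
# Illustration only: resolving "Jaya Gupta" / "J. Gupta" / "@JayaGup10"
# to one entity via a similarity score instead of an exact key match.
from difflib import SequenceMatcher

def normalize(mention: str) -> str:
    """Crude canonical form: lowercase letters only (drops '@', '.', digits)."""
    return "".join(c for c in mention.lower() if c.isalpha())

def match_score(a: str, b: str) -> float:
    """Similarity in [0, 1]; the join condition is a threshold, not equality."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

mentions = ["Jaya Gupta", "J. Gupta", "@JayaGup10", "Gokul Rajaram"]
for m in mentions:
    print(f"'Jaya Gupta' vs {m!r}: {match_score('Jaya Gupta', m):.2f}")
```

The three representations of the same person score high against each other while an unrelated name scores low, which is exactly the fuzzy-resolution behavior a context graph needs at join time.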
Ajay Krishna Amudan@ajkrish95·
blackline.com/products/verit… - we've looked into BlackLine's AI capabilities because we have actual BlackLine customers who've migrated to us. In fact, between BlackLine and FloQast, we've been able to migrate customers in under 1 month, and as fast as 1 week in some cases. Your original tweet asked "what features do folks move out of Blackline for" - customers don't move out for features, they move out for outcomes. BlackLine is most known for reconciliation - have you actually looked at what part of the reconciliation they automate? Barely anything is automated. There are a bunch of rules that users are expected to put in place, which BlackLine will then validate and flag. Maximor GENERATES these rules. I don't think you realize the capability difference between a product that expects the user to input and maintain rules and an agent that looks into past data and creates those rules. A different ball game altogether.
Tarun@tarunmallappa·
See this screenshot from Blackline. It knows what the user is thinking and has therefore narrowed the actions down to just two. This is the best-case scenario for a CFO, in all honesty. PS: my best guess is that this capability was acquired, so that's another angle for startups to consider before taking on incumbents.
Jaya Gupta@JayaGup10·
🧠 Prediction 2: Decision traces become the new data moat “My first prediction is about getting agents into the execution path. This one is about what happens once they’re there. When an agent executes a workflow, it pulls context from multiple systems, applies rules, resolves conflicts, routes exceptions, and acts. Most AI systems discard all of that the moment the task is complete. But if you persist the decision trace - what inputs were gathered, what policies applied, what exceptions were granted, and why - you end up with something enterprises almost never have: a structured, replayable history of how context turned into action. We call this the context graph: a living record of decision traces stitched across entities and time, so precedent becomes searchable. It explains not just what happened, but why it was allowed to happen. And it compounds. The more workflows you mediate, the more traces you capture. The more traces you capture, the better you get at automating the next edge case. Data is no longer the new oil; it’s decisions - the map of how the organization actually works. Startups have a structural advantage here. Because they sit in the execution path, they see the full context at decision time. Incumbents are either siloed or in the read path rather than the write path (data warehouses receive information via ETL after decisions are made - by then, the decision context is gone). SaaS incumbents can add AI to their data, but they can’t capture what they never see.”
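The decision trace described above is concrete enough to sketch. This is a minimal, hypothetical schema of mine (not Jaya's or any vendor's format) for persisting what was gathered, what policies applied, what exceptions were granted, and why, instead of discarding it when the task completes.

```python
# Minimal sketch of persisting a decision trace; schema and example
# values are hypothetical stand-ins, not any vendor's actual format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    workflow: str
    inputs: dict              # context gathered from source systems
    policies_applied: list    # rules consulted at decision time
    exceptions_granted: list  # exceptions allowed, and by whom
    action: str               # what the agent actually did
    rationale: str            # why it was allowed to happen
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

traces = []  # a context graph would index these across entities and time

traces.append(DecisionTrace(
    workflow="invoice_approval",
    inputs={"invoice_id": "INV-1042", "amount": 18_500},
    policies_applied=["approval_limit_25k"],
    exceptions_granted=[],
    action="auto_approved",
    rationale="amount under the delegated approval limit",
))

# Precedent becomes searchable: replay how context turned into action.
precedent = [t for t in traces if t.workflow == "invoice_approval"]
```

Because the trace is captured in the write path, at decision time, it records context that a downstream warehouse fed by ETL never sees.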
ashu garg@ashugarg

x.com/i/article/2006…

Ajay Krishna Amudan@ajkrish95·
@GanatraSoham A tad disrespectful to say that, no? Would be great if you could also explain how that's what you're building.
Ajay Krishna Amudan@ajkrish95·
Now more than ever, the lines between new grads and staff engineers have become extremely blurred. The number 1 skill a great engineer possesses today is the ability to willingly experiment as new tools, techniques, and information become available (ask yourself how soon you gave Cursor a shot when you were already used to Copilot) and to form points of view that juxtapose empirical data with first-principles thinking. In other words, being a 10x engineer in today's world (which is likely a 100x engineer compared to the 2022 world) requires open-mindedness, first-principles thinking, and the habit of observing and trusting empirical data, continually fine-tuned. The open question is still: what is the optimization function here? Is it output (for example, how @WorkWeave measures it), is it how well existing engineers understand the depths of what the code they "wrote" does, or is it the technical debt accumulated that will likely cause rework at some point soon? Or is it none of these? Would love to know what others are observing works best for them.
Boris Cherny@bcherny

I feel this way most weeks tbh. Sometimes I start approaching a problem manually, and have to remind myself “claude can probably do this”. Recently we were debugging a memory leak in Claude Code, and I started approaching it the old fashioned way: connecting a profiler, using the app, pausing the profiler, manually looking through heap allocations. My coworker was looking at the same issue, and just asked Claude to make a heap dump, then read the dump to look for retained objects that probably shouldn’t be there; Claude 1-shotted it and put up a PR. The same thing happens most weeks. In a way, newer coworkers and even new grads that don’t make all sorts of assumptions about what the model can and can’t do — legacy memories formed when using old models — are able to use the model most effectively. It takes significant mental work to re-adjust to what the model can do every month or two, as models continue to become better and better at coding and engineering. The last month was my first month as an engineer that I didn’t open an IDE at all. Opus 4.5 wrote around 200 PRs, every single line. Software engineering is radically changing, and the hardest part even for early adopters and practitioners like us is to continue to re-adjust our expectations. And this is *still* just the beginning.
