
JP
@Parvashah_
i learn, i write, i build.
@sama how did anthropic fumble the biggest open sourced ai project of all time 😭😭😭😭😭😭



sorry, is it just me who's not getting the hype around this? the rlm paper is a great formalization of what many production teams have built over the past year. devin, hippocratic, manus, claude code, codex cli, they all independently converge on this exact pattern:

> prompts are mutable env variables
> recursive self-delegation
> persistent state across tool calls
> chunking long contexts
> farming out subtasks to sub-agents

at my previous company @Parvashah_ and i built a similar agentic architecture for ads management on the meta console. the agent could dynamically generate functions and register them as callable tools at runtime. it had built-in tooling for prompt switching: as the execution context moved through campaigns, then adsets, then ad creation, the system would swap parameter schemas and validation rules, and the harness would reconfigure itself based on where the agent was in the workflow.

i'm appreciative of @lateinteraction's work. he did great work with dspy too: practitioners were doing prompt optimization ad hoc, and he gave it a formal framework so thousands of teams could adopt it. rlms will do the same. now that the pattern has a name, ablations, and a training recipe, way more teams will build on it. that's genuinely valuable. and labs like anthropic are betting on the idea that models reasoning through code and recursive self-delegation is the path to general capability.
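a minimal sketch of the harness pattern described above: stage-dependent prompts and parameter schemas, plus runtime tool registration. all names here (`Harness`, `STAGE_CONFIG`, the stage keys) are hypothetical illustrations, not the actual production system.

```python
# Hypothetical sketch: a harness that swaps prompts and validation rules
# as the agent moves through workflow stages, and lets tools be registered
# at runtime. Names and schemas are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable

# per-stage configuration: system prompt plus a parameter schema
STAGE_CONFIG = {
    "campaign": {
        "prompt": "You are managing ad campaigns.",
        "schema": {"objective": str, "daily_budget": float},
    },
    "adset": {
        "prompt": "You are configuring an ad set.",
        "schema": {"audience": str, "bid_cap": float},
    },
    "ad": {
        "prompt": "You are creating an ad.",
        "schema": {"creative_id": str, "headline": str},
    },
}

@dataclass
class Harness:
    stage: str = "campaign"
    tools: dict = field(default_factory=dict)

    def register_tool(self, name: str, fn: Callable) -> None:
        # tools can be generated and registered while the agent runs
        self.tools[name] = fn

    def set_stage(self, stage: str) -> None:
        # reconfigure prompt + validation rules as the workflow advances
        if stage not in STAGE_CONFIG:
            raise ValueError(f"unknown stage: {stage}")
        self.stage = stage

    @property
    def system_prompt(self) -> str:
        return STAGE_CONFIG[self.stage]["prompt"]

    def validate(self, params: dict) -> dict:
        # enforce the parameter schema for the current stage
        schema = STAGE_CONFIG[self.stage]["schema"]
        for key, typ in schema.items():
            if key not in params or not isinstance(params[key], typ):
                raise ValueError(f"{key} must be a {typ.__name__}")
        return params

    def call(self, tool: str, params: dict):
        # validate against the current stage's schema, then dispatch
        return self.tools[tool](**self.validate(params))
```

usage: register a tool, call it at one stage, then advance the stage and the prompt and schema swap out automatically.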


Almost every YC founder I’ve talked to switched from Cursor to Claude Code. Am I the only one still on Cursor?

Much like the switch in 2025 from language models to reasoning models, we think 2026 will be all about the switch to Recursive Language Models (RLMs). It turns out that models can be far more powerful if you allow them to treat *their own prompts* as an object in an external environment, which they understand and manipulate by writing code that invokes LLMs! Our full paper on RLMs is now available—with much more expansive experiments compared to our initial blogpost from October 2025! arxiv.org/pdf/2512.24601
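The core RLM idea above can be sketched in a few lines: the long prompt lives as an object in the environment, and the model manipulates it with code that invokes sub-LLM calls. This is an illustrative toy, not the paper's recipe; `llm` is a stub standing in for a real model call, and the chunking strategy and size are assumptions.

```python
# Toy sketch of a recursive language model loop. The `llm` function is a
# stub; a real implementation would call an actual model API. Chunk size
# and the split/recombine strategy are illustrative assumptions.

def llm(prompt: str) -> str:
    # stub for a single bounded-context model call
    return f"summary({len(prompt)} chars)"

def rlm(prompt: str, chunk_size: int = 1000) -> str:
    # the prompt is data in the environment, not forced into one context window
    if len(prompt) <= chunk_size:
        return llm(prompt)
    # recursive self-delegation: farm chunks out to sub-calls, then combine
    chunks = [prompt[i:i + chunk_size] for i in range(0, len(prompt), chunk_size)]
    partials = [rlm(c, chunk_size) for c in chunks]
    return llm("\n".join(partials))
```

The point of the pattern is that each individual `llm` call only ever sees a bounded slice (or a combination of sub-results), so the effective input length the system can handle is no longer capped by any single context window.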
