
Hen Sapir
@hensapir
cofounder @charcoal_hq | previously eng @fronthq

Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is exactly the open model ecosystem we love to support. Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.





context window won’t be “solved” as long as attention is quadratic. presumably Suhail is thinking about the compaction problem as it occurs in long-running agents like claude code, but that’s downstream of an architectural problem with standard agent implementations (claude code among them) that use a linear “chat-like” history.

we all work through coding tasks linearly, but any seasoned software engineer’s mental model of their progress looks more like a call stack: pushing tasks on and popping them off when complete.

when the claude code harness organizes the context more like a call stack (think flame graph) than a linear chat log, compaction won’t even be necessary in many cases, and will be less lossy in the cases where it is.

for the familiar, think: loom
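a minimal sketch of the idea above (not claude code's actual implementation — the names `TaskFrame` and `StackContext` are hypothetical): context organized as a call stack of task frames, where popping a finished subtask collapses its transcript to a summary, so the live context is just the active path plus summaries of completed work:

```python
# Illustrative sketch: agent context as a call stack of task "frames"
# instead of a linear chat log. Hypothetical names, not a real harness.
from dataclasses import dataclass, field

@dataclass
class TaskFrame:
    goal: str
    messages: list = field(default_factory=list)  # turns while this frame is active
    summary: str = ""                             # filled in when the frame is popped

class StackContext:
    def __init__(self):
        self.stack: list[TaskFrame] = []  # active subtasks (the "flame graph" path)
        self.done: list[TaskFrame] = []   # completed subtasks, kept only as summaries

    def push(self, goal: str) -> None:
        self.stack.append(TaskFrame(goal))

    def record(self, msg: str) -> None:
        self.stack[-1].messages.append(msg)

    def pop(self, summary: str) -> None:
        # a completed subtask collapses to its summary, like a stack frame
        # returning a value -- its full transcript leaves the live context
        frame = self.stack.pop()
        frame.summary = summary
        self.done.append(frame)

    def render(self) -> str:
        # live context = summaries of finished subtasks + full detail
        # only for frames still on the stack
        lines = [f"[done] {f.goal}: {f.summary}" for f in self.done]
        for f in self.stack:
            lines.append(f"[active] {f.goal}")
            lines.extend(f"  {m}" for m in f.messages)
        return "\n".join(lines)
```

the point of the sketch: compaction becomes a per-frame `pop` with a local summary rather than a lossy squash of the whole linear history — detail is discarded exactly when a subtask completes, not when the window fills up.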






Introducing SWE-grep and SWE-grep-mini: Cognition’s model family for fast agentic search at >2,800 TPS. Surface the right files to your coding agent 20x faster. Now rolling out gradually to Windsurf users via the Fast Context subagent – or try it in our new playground!