

Sasha Cui
@LocasaleLab Insane stray bullet regarding dating undergrads


Terence Tao put it plainly: there is no evidence that LLMs exhibit genuine creativity.

Yes, they have solved some Erdős problems. But these are low-hanging fruit: questions that attracted little attention and that yield once the right existing techniques are applied. That is not creativity; that is search plus recombination.

Yes, LLM outputs can look impressive. But look at who is impressed: typically non-experts. Experts know very well that LLM performance degrades sharply as you approach the frontier of human knowledge. And this is not a temporary gap; it reflects a structural limitation.

We do not fully understand human creativity, but we do know one key property: conceptual leaps, the ability to generate new representations, not just recombine existing ones. LLMs do not do this. They interpolate in representation space. They operate within existing conceptual frameworks; they do not create new ones. This is why we haven't "yet seen them take the next step".




Everyone who is currently bombarded by "solve math, solve everything" and "autoformalization -> AGI" psyops should read this and try to understand where this sentiment comes from.



After the apparently amazing announcement by @mathematics_inc on the formalization of a major recent Fields Medal-winning theorem, I had no idea how pissed the math-formalization community is. Very worrying discussions by some of the leaders/founders of Lean's mathlib. cc @ChrSzegedy





The moment an 11-year-old player defeated chess master Dina Belenkaya: "In the future, you will be glad you played me."





Much like the switch in 2025 from language models to reasoning models, we think 2026 will be all about the switch to Recursive Language Models (RLMs). It turns out that models can be far more powerful if you allow them to treat *their own prompts* as an object in an external environment, which they understand and manipulate by writing code that invokes LLMs! Our full paper on RLMs is now available, with far more extensive experiments than our initial blog post from October 2025. arxiv.org/pdf/2512.24601
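The core idea described above can be sketched in a few lines: the prompt is treated as data in the environment, and code recursively splits it and invokes an LLM on the pieces. This is a minimal illustration, not the paper's actual method; `call_llm` is a hypothetical stand-in for a real model API, stubbed here so the control flow is runnable.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call. Stubbed to 'summarize' by truncation so the
    recursive control flow below can run without a real model endpoint."""
    return prompt[:40]

def rlm_answer(prompt: str, context_limit: int = 1000) -> str:
    """Recursively answer over a prompt that may exceed the context limit,
    by manipulating the prompt as an object rather than feeding it whole."""
    # Base case: the prompt fits in the model's context; query it directly.
    if len(prompt) <= context_limit:
        return call_llm(prompt)
    # Recursive case: split the prompt, answer each half with a recursive
    # LLM call, then combine the partial answers with one more call.
    mid = len(prompt) // 2
    left = rlm_answer(prompt[:mid], context_limit)
    right = rlm_answer(prompt[mid:], context_limit)
    return call_llm(left + "\n" + right)
```

The point of the sketch is the inversion of control: instead of the prompt containing the model's entire working state, the model's code decides which slices of the prompt to read and when to recurse.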






The most important asset companies will have in the AI era is context: a durable record of how organizations actually get work done.

Most enterprise AI still treats context as static input - documents, emails, and stored records. But real businesses don't run only on static data. They run on something far more dynamic and specific to them - decisions, exceptions, approvals, judgment calls, and hard-won operating knowledge that emerges in live workflows, handoffs, escalations, and day-to-day collaboration. This context is unique to each team within each organization.

Today, that context rarely survives. A discount is approved, a contract is escalated, or an exception is granted. The system of record captures the outcome, but the reasoning behind why and how it happened disappears. No replay. No audit. No precedent. Yet that is exactly what AI needs to have situational awareness, serve your teams, and produce ROI.

That's the gap we are building ContextFabric to fill. ContextFabric turns your teams' lived work into an execution backbone for AI:

- Learns intent from real workflows and work patterns
- Delivers the right context at the moment decisions are made
- Powers every agent with shared, governed enterprise context

This is how AI moves beyond pilots and becomes a true digital teammate: one that is embedded in daily operations, operating within real constraints, and delivering durable enterprise value. Context is the enduring competitive advantage in the AI era.

Read more about ContextFabric in the link below (in the comments section). If you'd like to learn more, we'd love to hear from you; feel free to reach out to us (contact@workfabric.com). We just launched a new site. See what we're building: workfabric.com @RohanMurty @gnychis @guruprasad_r94 @NabeelQuryshi