If AI is going to participate in our thinking, we need to build systems that keep humans in charge of the important choices.
Last year I built Priori, an interface for human-AI collaboration, toward that goal.
expanding what’s deterministically verifiable for RLVR is a methodology bottleneck as much as a tooling one. most verifiers are script-based or llm-as-judge. the opportunity is building reward signals durable enough to open up the domains worth training on.
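as a minimal sketch of what "script-based" means here: a deterministic verifier checks a model's claimed answer cheaply and exactly, even when producing the answer is hard. the task (integer factorization) and all names (`verify_factorization`) are illustrative, not from any specific RLVR pipeline.

```python
def verify_factorization(n: int, claimed_factors: list[int]) -> float:
    """Binary reward: 1.0 iff claimed_factors is a valid
    non-trivial factorization of n, else 0.0."""
    if len(claimed_factors) < 2:
        return 0.0
    if any(f <= 1 for f in claimed_factors):
        return 0.0
    product = 1
    for f in claimed_factors:
        product *= f
    return 1.0 if product == n else 0.0

# Verification is deterministic and cheap relative to search.
print(verify_factorization(91, [7, 13]))  # → 1.0
print(verify_factorization(91, [3, 31]))  # → 0.0
```

the "durability" question is whether signals like this can be built for messier domains where no such closed-form check exists.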
I'm starting a new startup! it's called Long Horizon Research. our first product is Sundial, an AI workspace for humans + agents to self-improve by creating skills together.
We are hosting a hackathon tmrw with @AGIHouse @xai, come through
A PI once told me "the best computer vision researchers are the ones who stop to look at the pictures". Graphs are great, but if you want to debug an issue, nothing beats putting everyone in a room and reading through some of your most problematic transcripts. 2x a week at @NotionHQ, we all sit in a room and spend 45 minutes going step-by-step thru our most token-inefficient traces. One of our most productive meetings.
Von Neumann's insight in Game Theory was that when the "environment" contains other optimizing agents, the problem becomes irreducibly triadic. You can't just model your action against the world, because the world includes agents who are modeling you modeling them. The interaction requires a mediating structure — the game itself — that isn't reducible to either player's perspective. This is Thirdness in Peirce's precise sense: a relation that cannot be decomposed into dyadic components.
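A toy illustration of that irreducibility, using matching pennies as an assumed example game: each player's best response depends on the other's choice, so iterating best responses never settles in pure strategies, and the only coherent object of analysis is the game matrix itself.

```python
# Payoffs to the row player in matching pennies (zero-sum).
PAYOFF = {
    ("H", "H"): 1, ("H", "T"): -1,
    ("T", "H"): -1, ("T", "T"): 1,
}

def best_response_row(col_move: str) -> str:
    # Row player maximizes PAYOFF given the column move.
    return max("HT", key=lambda r: PAYOFF[(r, col_move)])

def best_response_col(row_move: str) -> str:
    # Column player receives the negated payoff.
    return max("HT", key=lambda c: -PAYOFF[(row_move, c)])

# Iterating best responses cycles forever: no pure-strategy fixed point.
row, col = "H", "H"
seen = []
for _ in range(4):
    seen.append((row, col))
    row, col = best_response_row(col), best_response_col(row)
print(seen)  # → [('H', 'H'), ('H', 'T'), ('T', 'T'), ('T', 'H')]
```

The cycle through all four profiles is the dyadic view failing: neither player can optimize against a fixed "world", because the world answers back.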
you'll reach a point in life where you have an unmistakable belief that it's all a game. and the faster you reach that point, the better. because once you reach it, you'll start playing for the love of the game instead of for futile motives.
@aurielws Such an underrated take! Attention to detail, meaning actually digging through and following the model trajectories, is critical to good data