

Crunchyroll might be dying as a company...
> Removes free tier for anime fans
> A month later they raise prices
> Constant crashes and subtitle issues
> Class-action lawsuit filed against them
> Now mass layoffs of employees
@StoneFelt1


@paulnovosad We wrote about the implicit risk of violence in prediction market semantics. This is far from the only market that carries such potentially nefarious incentives. x.com/betbreaknews/s…


Idea for a project on top of @Polymarket: a private orderbook for top traders to avoid copytraders. Who's building it?

The lifecycle of a pure math theorem:
- 1997: my PhD advisor asks me to work on one of his conjectures
- 2000: I solve the simplest case and dream of generalizing my approach
- 2003: after years of struggle, I come to the conclusion that my approach *cannot* generalize
- 2006: after reading a paper by Daan Krammer, I have a light-bulb moment and realize that my approach works in full generality *up to equivalence of categories*... this enables me to solve my advisor's conjecture... I then use it as an ingredient in the proof of a much older and more famous conjecture (the "K(π,1) conjecture for finite complex reflection groups")
- 2007: I submit my article for publication
- 2009: referee #1 gives up
- 2010: 2 more referees have now given up, complaining that the paper is too hard to read
- 2012: referee #4 is finally able to produce a report, and the revision work starts
- 2014: the paper is accepted for publication
- 2015: the paper is published
- 2007-2025: because the older conjecture overshadows the lesser-known conjecture by my advisor, and because my paper is too difficult, virtually no one asks any question about the "light-bulb" categorical idea at the core of the proof
- Jan 22, 2026: I receive an inbound email from a mathematician in another hemisphere, inquiring about the categorical aspects
- Jan 26, 2026: I have my first-ever video call discussing the specifics of this core component of my proof

programming always sucked. it was a requisite pain for ~everyone who wanted to manipulate computers into doing useful things, and I'm glad it's over. it's amazing how quickly I've moved on and don't miss it even slightly. I'm resentful that computers didn't always work this way

🔥 Codex 0.9.0 is out, and with it a bunch of new changes. Some of these updates are exactly what you've been waiting for 🫵. Starting with Codex's new Plan Mode! There have been some improvements since my last tweet. This is going to be a long one, so buckle up. 👇



One thing that current AI models still struggle with is how objects can be arranged in space, i.e., spatial world models. TikZ, a native LaTeX package for creating diagrams from scratch, is a good sandbox to test this: it requires the model to write code that lays out visual objects spatially.

I asked Claude Code to recreate a set of PPT slides in beamer, using TikZ for the diagrams. The writing was perfect, but here was the first diagram (left): text was misaligned, arrows were in the wrong place, and it inserted a random x in the middle. I iterated over and over and had no luck.

I gave the same task to GPT 5.2 Thinking, asking it to change the diagram if it was too hard to reproduce, but to make sure everything was aligned and non-overlapping. The middle picture was the output: even worse. Iterating was no help (giving it images, trying different prompts); it did not have a model of how these objects should be oriented in space.

I tried Gemini 3 Pro on a different slide. Here was the output (right). Pretty bad. TikZ seems like a nice benchmark for studying how these models evolve over time.
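For anyone who wants to try the same sandbox: a minimal TikZ task of the kind described, where the model has to place two labeled boxes and a connecting arrow without overlap. This is my own hypothetical illustration, not the author's slide:

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{positioning}
\begin{document}
\begin{tikzpicture}[
    node distance=2cm,
    box/.style={draw, rounded corners, minimum width=2cm, minimum height=1cm}]
  % Two boxes laid out left-to-right; the model must keep them non-overlapping
  \node[box] (enc) {Encoder};
  \node[box, right=of enc] (dec) {Decoder};
  % Arrow with a label that must sit above the edge, not on top of a box
  \draw[->] (enc) -- node[midway, above] {$z$} (dec);
\end{tikzpicture}
\end{document}
```

Even a layout this small exercises the spatial reasoning the tweet describes: relative placement, non-overlap, and label alignment.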

The definitive solution to prediction markets (imo) was born in the 2000s as a toy research model that had some papers written about it, but it was only recently revived by prediction markets finding product-market fit plus crypto. Dynamic pari-mutuel distribution. web.stanford.edu/~yyye/ec2009-1…
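For readers unfamiliar with the pari-mutuel idea underlying that line of work: all stakes go into a shared pool, and winners split the pool pro rata. A minimal Python sketch of the static settlement rule (the dynamic variants in the literature extend this with a cost function so implied odds update as money arrives; the function names here are my own):

```python
def parimutuel_payouts(stakes, winner):
    """Settle a pari-mutuel pool.

    stakes: dict mapping outcome -> list of (bettor, amount) wagers.
    winner: the outcome that occurred.
    Winners split the entire pool in proportion to their stakes.
    """
    total = sum(amt for bets in stakes.values() for _, amt in bets)
    winning_pool = sum(amt for _, amt in stakes[winner])
    return {bettor: amt / winning_pool * total for bettor, amt in stakes[winner]}

def implied_prices(stakes):
    """Pool shares read as probabilities: price_i = pool_i / total."""
    total = sum(amt for bets in stakes.values() for _, amt in bets)
    return {o: sum(amt for _, amt in bets) / total for o, bets in stakes.items()}
```

For example, with $80 staked on Up and $120 on Down, Up trades at an implied 0.4, and if Up occurs, the full $200 pool is divided among the Up bettors.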

On Saturday, Synth's team identified two entities, xazb and a4385, profiting $250k+ on Polymarket's 15-min price markets by manipulating the BTC price on @binance and other exchanges... Yes, you read that correctly: a single entity is moving the BTC price to win on Polymarket.

As a recap, the 15-min markets resolve based on whether BTC finishes above or below the starting price at the end of a 15-min window. Settlement uses the Chainlink BTC oracle price, while traders often reference Binance prices during the market. The structure creates a very tight settlement window where small price moves matter a lot.

Here's an example from Saturday, January 17, on the BTC 15-min market ending at 15:00 UTC, with a target price of $95,414 required for the market to resolve Up. At 14:59 UTC, one minute before settlement, the Chainlink BTC oracle price was $95,383. BTC had been effectively static for the prior five minutes. It was a Saturday afternoon, liquidity was thin, and short-term realized volatility was extremely low. Based on our internal models, the probability of the market resolving Up at that point was well below 1 percent.

Despite this, the Polymarket contract was trading at approximately an 80 percent probability of finishing Up. Historically, Polymarket prices track our models within a few percentage points; in this case the divergence was extreme. Two accounts, identified as xazb and a4385, had accumulated roughly $35k in Up contracts between them during the course of the market. They were heavily positioned on the side that should almost certainly lose.

Between 14:59:17 and 14:59:24 UTC, the Chainlink BTC oracle price jumped from $95,383 to $95,438, roughly a 5 basis point move occurring in seconds during a period of near-zero volatility. The price then remained pinned at that level until market close at 15:00 UTC. The market resolved Up.

This is not an oracle issue. This is a coordinated move on the BTC price on Binance and other large exchanges.

For anyone tracking cross-exchange trade data, a forensic review of BTC activity at 14:59:17 UTC on Saturday, January 17, 2026 would be highly informative.

This example is not isolated. We observed the same tactic repeated throughout Saturday afternoon into Sunday morning across BTC and XRP 15-minute markets. Positions are accumulated early, probabilities are distorted throughout the window, and the underlying asset price is pushed in the final seconds to force favorable settlement. As a result, many market makers and retail participants have lost significant funds and exited participation. Liquidity has thinned and order books are much weaker than before.

Our team has actively traded on Polymarket over the last two months, with $20m in volume across these markets. We spotted this behaviour early last week, paused market making, and built a monitoring system using @synthdata to alert when probabilities diverged significantly from our models. In the coming days we will release this via our dashboard and API to enable market participants to access volatility-aware pricing benchmarks and anomaly detection alerts on +
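To make the settlement arithmetic above concrete, a small Python sketch. This is my own illustration: the tie-handling convention is an assumption, and the actual resolution rule is defined by the market, not by this code.

```python
def bps_move(p0, p1):
    """Price change from p0 to p1 in basis points (1 bp = 0.01%)."""
    return (p1 - p0) / p0 * 10_000

def resolves_up(settlement_price, target_price):
    """Assumed resolution rule: Up iff the oracle settlement price
    finishes at or above the target (tie handling is an assumption)."""
    return settlement_price >= target_price

# The oracle jump described above: $95,383 -> $95,438 in seconds,
# clearing the $95,414 target needed for an Up resolution.
jump = bps_move(95_383, 95_438)           # about 5.8 bps
outcome_up = resolves_up(95_438, 95_414)  # True after the jump
```

The point of the sketch is how little it takes: a move of a few basis points, held for under a minute, flips the binary outcome.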




The information extraction machinery behind financial exchanges finally has a prediction market translation. Harvard researchers map out market scoring rules: the automated dealers that let you trade beliefs even when no one's on the other side. How to make truth-telling the dominant strategy, and why the house's worst-case loss is always capped. Heavy reading, but it pays off:
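The best-known rule in this family is Hanson's logarithmic market scoring rule (LMSR); I can't confirm it is the exact family the linked work covers, but a minimal Python sketch of the standard LMSR shows both properties mentioned, always-on quotes and a capped worst-case loss:

```python
import math

def cost(q, b):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)).
    q: outstanding shares per outcome; b: liquidity parameter."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def prices(q, b):
    """Instantaneous prices (probabilities): a softmax of q / b."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def trade_cost(q, delta, b):
    """Cost of buying delta[i] shares of each outcome: C(q + delta) - C(q).
    The dealer always quotes this, so there is always a counterparty."""
    q_new = [qi + di for qi, di in zip(q, delta)]
    return cost(q_new, b) - cost(q, b)
```

Because each winning share pays out 1 while its price never exceeds 1, the house's worst-case loss is bounded by b * log(n) for n outcomes; with b = 100 and two outcomes, the dealer can never lose more than 100 * ln 2, about $69.31, no matter what traders do.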



