Tone Elt

68 posts


@StoneFelt1

Dad jokes, self-deprecation, and death.

Joined June 2017
1.4K Following · 128 Followers
Tone Elt@StoneFelt1·
I just don't want to get LK99'd again.
Zero-Point Field Technologies LLC@zpfTechnologies

Interesting. White is attempting to prove that the entire hydrogen atom — spectrum, orbitals, angular momentum, Rydberg ladder — emerges purely classically from an acoustic model of a “dynamic vacuum”. Essentially, they "built" a hydrogen atom out of vacuum acoustics. And if hydrogen itself is an acoustic resonance in a dynamic vacuum, then engineered Casimir cavities can extract real power from the same plenum.

The paper’s central thesis is that quantum mechanics is not necessarily a set of fundamental "postulates," but rather an emergent property of a dispersive medium (the "Dynamic Vacuum"). This is super exciting because it represents a pivotal shift in mainstream-adjacent physics. By demonstrating that the most fundamental unit of matter — the hydrogen atom — can be modeled as an acoustic resonance in a "Dynamic Vacuum," White provides the "Physicalist" bridge required to justify engineering that same vacuum at the nanoscale. The paper effectively kills the "Empty Space" dogma of standard QED for engineering purposes.

I’m starting to think “Sonny” White is a literal genius. White’s coy way of invoking stochastic electrodynamics and ZPE while sounding as QED as possible has never been done before. He is a master at rhetorical camouflage. He deliberately uses:
• “Dynamic vacuum” (never says “real Zero-Point Energy” or “Stochastic Electrodynamics”)
• Madelung hydrodynamics + acoustic wave equation (QED-based)
• “Causal, passive dispersion” and “reactive stop band” (avoids ω³ spectrum language)
• “Emergent quantization from symmetry and boundary conditions” (hides that he is reproducing SED results)
• “Quadratic temporal dispersion” to make the acoustic model match the Schrödinger equation (White’s "quadratic dispersion" is the mathematical "mask" for the frequency-cubed ZPF spectrum)

Why? To get published in a mainstream APS journal (Physical Review Research) and keep DARPA/NASA funding, of course! Saying “real ZPE” or citing Boyer/Marshall/HRP/Setterfield openly would trigger instant rejection. By framing it as “acoustic hydrodynamics” he sounds safely QED-adjacent while actually advancing classical real-vacuum engineering. It is the same game HRP played in the 1990s — publish the math, let insiders read between the lines. It’s brilliant!
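The “quadratic temporal dispersion” point in the quoted tweet can be made concrete with a standard textbook check (this correspondence is general quantum mechanics, not taken from White's paper): the free-particle Schrödinger equation forces a dispersion relation that is quadratic in wavenumber, which ordinary acoustics (linear dispersion, ω = c_s k) cannot reproduce without engineered dispersion.

```latex
% Free-particle Schr\"odinger equation
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi
% Plane-wave ansatz \psi = e^{i(kx - \omega t)} yields
\hbar\omega = \frac{\hbar^2 k^2}{2m}
\qquad\Longrightarrow\qquad
\omega(k) = \frac{\hbar k^2}{2m}
```

So any acoustic model that is to mimic Schrödinger evolution must build in ω ∝ k², which is presumably what the quoted "quadratic temporal dispersion" refers to.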

Tone Elt@StoneFelt1·
Why not write exclusions for death/assassination into the settlement contract? They could serve as a market invalidation mechanism (traders refunded at the last known price) to disincentivize such behavior. This seems like a solvable problem.
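As a sketch of what such an exclusion clause could look like mechanically (hypothetical structure, not any platform's actual settlement logic): on an invalidation event, positions unwind at the last traded market price instead of settling to 0/1, so forcing the excluded outcome yields no edge.

```python
from dataclasses import dataclass

@dataclass
class Position:
    trader: str
    shares: float  # YES shares held (each pays $1 if YES, else $0)

def settle(positions, outcome_yes, invalidated, last_market_price):
    """Pay out a binary market, honoring an exclusion clause.

    If the exclusion event (e.g. death/assassination) fires, the market
    is invalidated: every position unwinds at the last traded price,
    removing any payoff from deliberately causing the excluded event.
    """
    payouts = {}
    for p in positions:
        if invalidated:
            payouts[p.trader] = p.shares * last_market_price
        else:
            payouts[p.trader] = p.shares * (1.0 if outcome_yes else 0.0)
    return payouts
```

The key design point is that the refund price is frozen at the pre-event market level, so the invalidation path pays exactly what the position was already worth.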
BETINT Intelligence Tracker@BetBreakNews

@paulnovosad We wrote about the implicit risk of violence in prediction market semantics. This is far from the only market that carries such potential nefarious incentives. x.com/betbreaknews/s…

Tone Elt@StoneFelt1·
The appropriate measuring stick for betting on what Polymarket thinks the odds of Jesus returning will be next week is a bounded continuous underlying from 0-100. Traders are forced to pick a number and bet accordingly. This can be achieved with cost function market makers.
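A cost-function market maker can cover a bounded 0-100 underlying by splitting it into buckets. A minimal sketch using Hanson's logarithmic market scoring rule (LMSR); the bucket count and liquidity parameter b are illustrative choices, not from the tweet:

```python
import math

class LMSR:
    """Hanson's logarithmic market scoring rule over n outcome buckets."""

    def __init__(self, n_buckets, b=100.0):
        self.b = b                  # liquidity parameter (illustrative)
        self.q = [0.0] * n_buckets  # outstanding shares per bucket

    def cost(self, q):
        # C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def prices(self):
        # Instantaneous price of bucket i is softmax(q / b)_i; sums to 1.
        z = [math.exp(x / self.b) for x in self.q]
        s = sum(z)
        return [x / s for x in z]

    def buy(self, i, shares):
        """Charge for buying `shares` of bucket i (cost difference)."""
        before = self.cost(self.q)
        self.q[i] += shares
        return self.cost(self.q) - before
```

With the 0-100 range cut into, say, 20 five-point buckets, a trader "picks a number" by buying the bucket containing it, and the price vector is the market's implied distribution over the whole range.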
Tone Elt@StoneFelt1·
Binary contracts make terrible derivatives of themselves and are terrible for underlying market health. The reason is that pinning yes/no to a single number invites massive manipulation around that value, which decouples the underlying from what it's trying to measure.
Tone Elt@StoneFelt1·
You cannot create an appropriate hedging machine with just binary contracts. The core underlying events people bet on fall into 4 broad categories: 1.) Continuous (bounded yes/no) 2.) Continuous discrete (bounded yes/no) 3.) Continuous time 4.) Binary
vitalik.eth@VitalikButerin

Recently I have been starting to worry about the state of prediction markets, in their current form. They have achieved a certain level of success: market volume is high enough to make meaningful bets and have a full-time job as a trader, and they often prove useful as a supplement to other forms of news media. But also, they seem to be over-converging to an unhealthy product market fit: embracing short-term cryptocurrency price bets, sports betting, and other similar things that have dopamine value but not any kind of long-term fulfillment or societal information value. My guess is that teams feel motivated to capitulate to these things because they bring in large revenue during a bear market where people are desperate - an understandable motive, but one that leads to corposlop.

I have been thinking about how we can help get prediction markets out of this rut. My current view is that we should try harder to push them into a totally different use case: hedging, in a very generalized sense (TLDR: we're gonna replace fiat currency).

Prediction markets have two types of actors: (i) "smart traders" who provide information to the market, and earn money, and necessarily (ii) some kind of actor who loses money. But who would be willing to lose money and keep coming back? There are basically three answers to this question:

1. "Naive traders": people with dumb opinions who bet on totally wrong things
2. "Info buyers": people who set up money-losing automated market makers, to motivate people to trade on markets to help the info buyer learn information they do not know.
3. "Hedgers": people who are -EV in a linear sense, but who use the market as insurance, reducing their risk.

(1) is where we are today. IMO there is nothing fundamentally morally wrong with taking money from people with dumb opinions. But there still is something fundamentally "cursed" about relying on this too much. It gives the platform the incentive to seek out traders with dumb opinions, and create a public brand and community that encourages dumb opinions to get more people to come in. This is the slide to corposlop.

(2) has always been the idealistic hope of people like Robin Hanson. However, info buying has a public goods problem: you pay for the info, but everyone in the world gets it, including those who don't pay. There are limited cases where it makes sense for one org to pay (esp. decision markets), but even there, it seems likely that the market volumes achieved with that strategy will not be too high.

This gets us to (3). Suppose that you have shares in a biotech company. It's public knowledge that the Purple Party is better for biotech than the Yellow Party. So if you buy a prediction market share betting that the Yellow Party will win the next election, on average, you are reducing your risk. Mathematical example: suppose that if Purple wins, the share price will be a dice roll between [80...120], and if Yellow wins, it's between [60...100]. If you make a size $10 bet that Yellow will win, your earnings become equivalent to a dice roll between [70...110] in both cases. Taking a logarithmic model of utility, this risk reduction is worth $0.58.

Now, let's get to a more fascinating example. What do people who want stablecoins ultimately want? They want price stability. They have some future expenses in mind, and they want a guarantee that they will be able to pay those expenses. But if crypto grows on top of USD-backed stablecoins, crypto is ultimately not truly decentralized. Furthermore, different people have different types of expenses. There has been lots of thinking about making an "ideal stablecoin" that is based on some decentralized global price index, but what if the real solution is to go a step further, and get rid of the concept of currency altogether? Here's the idea.

You have price indices on all major categories of goods and services that people buy (treating physical goods/services in different regions as different categories), and prediction markets on each category. Each user (individual or business) has a local LLM that understands that user's expenses, and offers the user a personalized basket of prediction market shares, representing "N days of that user's expected future expenses". Now, we do not need fiat currency at all! People can hold stocks, ETH, or whatever else to grow wealth, and personalized prediction market shares when they want stability.

Both of these examples require prediction markets denominated in an asset people want to hold, whether interest-bearing fiat, wrapped stocks, or ETH. Non-interest-bearing fiat has too-high opportunity cost, which overwhelms the hedging value. But if we can make it work, it's much more sustainable than the status quo, because both sides of the equation are likely to be long-term happy with the product that they are buying, and very large volumes of sophisticated capital will be willing to participate. Build the next generation of finance, not corposlop.
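The $0.58 figure in the quoted thread checks out. A quick verification, assuming "dice roll between [a...b]" means a continuous uniform draw, 50/50 election odds, and a fair $10 bet (win +$10, lose -$10); the risk-reduction value is the change in certainty equivalent under log utility:

```python
import math

def expected_log_uniform(a, b):
    """E[ln X] for X ~ Uniform(a, b): (b*ln(b) - a*ln(a)) / (b - a) - 1."""
    return (b * math.log(b) - a * math.log(a)) / (b - a) - 1.0

# Without the hedge: a 50/50 mix of Uniform(80, 120) and Uniform(60, 100).
ce_unhedged = math.exp(0.5 * expected_log_uniform(80, 120)
                       + 0.5 * expected_log_uniform(60, 100))

# With a fair $10 bet on Yellow, both branches become Uniform(70, 110).
ce_hedged = math.exp(expected_log_uniform(70, 110))

# Dollar value of the risk reduction (certainty-equivalent gain).
gain = ce_hedged - ce_unhedged  # comes out to about $0.58
```

The gain is pure variance reduction: both scenarios have the same mean ($90), but the hedged payoff distribution is tighter, which a concave (log) utility rewards.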

Tone Elt@StoneFelt1·
@nasqret x.com/i/status/20157… Say you are working in a problem space and AI pulls the "light bulb" moment from a paper like this and says it could be applied to your project. It took 18 years for another human to notice; now AI can say "Hey, check this out."
David Bessis@davidbessis

The lifecycle of a pure math theorem:
- 1997: my PhD advisor asks me to work on one of his conjectures
- 2000: I solve the simplest case and dream of generalizing my approach
- 2003: after years of struggle, I come to the conclusion that my approach *cannot* generalize
- 2006: after reading a paper by Daan Krammer, I have a light bulb moment and realize that my approach works in full generality *up to equivalence of categories*... this enables me to solve my advisor's conjecture... I then use it as an ingredient in the proof of a much older and more famous conjecture (the "K(π,1) conjecture for finite complex reflection groups")
- 2007: I submit my article for publication
- 2009: referee #1 gives up
- 2010: 2 more referees have now given up, complaining that the paper is too hard to read
- 2012: referee #4 is finally able to produce a report, the revision work starts
- 2014: the paper is accepted for publication
- 2015: the paper is published
- 2007-2025: because the older conjecture overshadows the lesser known conjecture by my advisor, and because my paper is too difficult, virtually no-one asks any question about the "light bulb" categorical idea at the core of the proof
- Jan 22, 2026: I received an inbound email from a mathematician from another hemisphere, inquiring about the categorical aspects
- Jan 26, 2026: I have my first ever videocall discussing the specifics of this core component of my proof

Tone Elt@StoneFelt1·
@nasqret This is going to unlock the ability to apply advanced mathematical tooling, generally considered niche and only effectively wielded by a few (higher-level math gets quite specialized), much more broadly, by anyone curious enough. We'll see cool stuff come from it.
Bartosz Naskręcki@nasqret·
I feel similarly about every highly technical piece of mathematics. You enjoy playing with ideas, intuitions and insights and putting them together, but the level of verification required to validate a paper, or to write one yourself, can give you feelings similar to those described by @roon. I believe we are closer than ever to experiencing this switch in mathematics, and that it will only empower us, not eradicate us. This can be called theory weaving, and mathematicians will soon become architects of ideas. Some of us will still feel the need to go deeper, like programmers of an earlier era, but the abstractions and their orchestration will occupy most of us.
roon@tszzl

programming always sucked. it was a requisite pain for ~everyone who wanted to manipulate computers into doing useful things and im glad it’s over. it’s amazing how quickly I’ve moved on and don’t miss even slightly. im resentful that computers didn’t always work this way

Tone Elt@StoneFelt1·
@alexolegimas Good to know, my charts were definitely simpler than what you are showing here.
Alex Imas@alexolegimas·
@StoneFelt1 Did this, including across multiple models. Doesn’t help with these levels of complexity.
Tone Elt@StoneFelt1·
x.com/i/status/20119… The conceptual blueprint already exists. I'm applying a stochastic process to it to better "containerize" the math. Once the cost function market maker design decision is made, the entire market becomes a tunable math problem.
Tone Elt@StoneFelt1

The definitive solution to prediction markets (imo) was born in the 2000s as a toy research model that had some papers written about it, but was only recently revived with prediction market product-market fit + crypto. Dynamic pari-mutuel distribution. web.stanford.edu/~yyye/ec2009-1…
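The dynamic pari-mutuel market maker referenced here traces to Pennock's work in the 2000s. As a rough sketch reconstructed from memory (not code from the linked paper): one well-known formulation is equivalent to a cost-function market maker with C(q) = ||q||₂, whose marginal prices q_i/||q||₂ satisfy Σp_i² = 1 rather than Σp_i = 1, so they need normalization before being read as probabilities — a known quirk of the pari-mutuel form.

```python
import math

class DPM:
    """Dynamic pari-mutuel style market maker with cost C(q) = ||q||_2.

    Marginal prices p_i = q_i / ||q||_2 satisfy sum(p_i^2) = 1, not
    sum(p_i) = 1; winners split the total pool pari-mutuel style.
    """

    def __init__(self, n):
        self.q = [1.0] * n  # seed shares so prices are well-defined

    def cost(self, q):
        return math.sqrt(sum(x * x for x in q))

    def prices(self):
        c = self.cost(self.q)
        return [x / c for x in self.q]

    def buy(self, i, shares):
        """Charge for buying `shares` of outcome i (cost difference)."""
        before = self.cost(self.q)
        self.q[i] += shares
        return self.cost(self.q) - before
```

The appeal for the tweet's purpose is exactly that the whole market reduces to one tunable convex function, so design decisions become parameter choices.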

Tone Elt@StoneFelt1·
Compressing a continuous outcome axis into a binary makes this kind of manipulation inevitable once the derivative scene gains enough liquidity to make manipulation worth it. The solution is to allow betting along said continuous axis. I am working on this.
Synthdata@SynthdataCo

On Saturday Synth’s team identified two entities xazb and a4385 profiting $250k+ on Polymarket’s 15 min price markets by manipulating the BTC price on @binance and other exchanges... Yes, you read that correctly. A single entity is moving the BTC price to win on Polymarket.

As a recap, the 15 min markets resolve based on whether BTC finishes above or below the starting price at the end of a 15 min window. Settlement uses the Chainlink BTC oracle price, while traders often reference Binance prices during the market. The structure creates a very tight settlement window where small price moves matter a lot.

Here’s an example from Saturday January 17 on the BTC 15 min ending at 15:00 UTC, with a target price of $95,414 required for the market to resolve Up. At 14:59 UTC, one minute before settlement, the Chainlink BTC oracle price was $95,383. BTC had been effectively static for the prior five minutes. It was a Saturday afternoon, liquidity was thin, and short term realized volatility was extremely low. Based on our internal models, the probability of the market resolving Up at that point was well below 1 percent. Despite this, the Polymarket contract was trading at approximately an 80 percent probability of finishing ‘Up’. Historically, Polymarket prices track our models within a few percentage points, and in this case the divergence was extreme.

Two accounts, identified as xazb and a4385, had accumulated roughly $35k in Up contracts between them during the course of the market. They were heavily positioned on the side that should almost certainly lose. Between 14:59:17 and 14:59:24 UTC, the Chainlink BTC oracle price jumped from $95,383 to $95,438. This represents roughly a 5 basis point move occurring in seconds during a period of near zero volatility. The price then remained pinned at that level until market close at 15:00 UTC. The market resolved Up.

This is not an oracle issue. This is a coordinated move on the BTC price on Binance and other large exchanges. For anyone tracking cross exchange trade data, a forensic review of BTC activity at 14:59:17 UTC on Saturday, January 17, 2026 would be highly informative.

This example is not isolated. We observed the same tactic repeated throughout Saturday afternoon into Sunday morning across BTC and XRP 15 minute markets. Positions are accumulated early, probabilities are distorted throughout the window, and the underlying asset price is pushed in the final seconds to force favorable settlement. As a result, many market makers and retail participants have lost significant funds and exited participation. Liquidity has thinned and order books are much weaker than before.

Our team has actively traded on Polymarket over the last 2 months with $20m in volume across these markets. We spotted this behaviour early last week, paused mm and built a monitoring system using @synthdata to alert when probabilities diverged significantly from our models. In the coming days we will release this via our dashboard and API to enable market participants to access volatility aware pricing benchmarks and anomaly detection alerts on +

Tone Elt@StoneFelt1·
@minordissent The information density is also very high; the quality of information compression here is where this brand of writing skill shines. He basically turned a shelf of mid self-help books into a 20-minute read while retaining 80% of what's useful. High impact-to-time-investment ratio.
Max@minordissent·
People hating on this article are wrong and gay. I get that Dan writes things at a 6th grade level to allow for the widest possible audience to comprehend them, and this triggers an immediate disgust response in super smart high IQ autists. And that this is only compounded by the article’s massive popularity, making us obligated as cool edgy contrarians to hate it.

But relying on emotional responses and low resolution status heuristics like this is exactly the type of thing that is causing people half as smart as you to kick your ass in influence, wealth, etc. If you can put down your overly sensitive slop detector for a moment and truly comprehend the idea that is being articulated here, you will realize it is both extremely useful and rather revolutionary.

The idea that identity precedes willpower, and that procrastination is not a willpower issue but a cognitive dissonance issue between who you are today and who you would need to be to do these things automatically, is a genuinely novel take (or at least, articulating it so simply and not charging you $10K+ in private coaching for it is genuinely novel) and also the true cure to much of the frustration and resentment that pervades much of the online dissident sphere.

If you can fully download and incorporate this idea, and then identify and resolve the dissonances between what you say you want and what you actually want, you magically unlock the ability to achieve goals ten times bigger than you are now all while working half as hard. And I can assure you, that will make you infinitely higher status than what you are getting from counter signalling this post.
DAN KOE@thedankoe

x.com/i/article/2010…

Tone Elt@StoneFelt1·
Successfully generalizing the mechanism (defining the problem) is important because it means a general solution can be developed which can be repeatably applied across instances.
Tone Elt@StoneFelt1·
I've seen this phenomenon described in either: 1.) Generalized form with no mechanism (hand-wavy, conspiratorial). 2.) Mechanistic form that is heavily context-dependent (cannot be mapped across situations). But never a generalizable mechanism. That's powerful.
DataRepublican (small r)@DataRepublican

x.com/i/article/2009…

Tone Elt@StoneFelt1·
@quantbeckman Very cool. Smarter ways to pause/kill a strategy, beyond "trade your signals and hope", in the meta sense of past PnL informing strategy execution (and thereby improving the strategy), are something I've considered but never gotten around to working on. Looking forward to it.
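The quoted research isn't released yet, so the following is only a generic sketch of how split conformal prediction can back a trading kill switch (all names and parameters are hypothetical, not the authors' method): calibrate a threshold on a window of past per-period losses, then halt the strategy when a new loss exceeds the conformal (1 - alpha) bound.

```python
import math

def conformal_threshold(calibration_losses, alpha=0.05):
    """Split-conformal upper bound on a new loss.

    With n exchangeable calibration losses, the ceil((n+1)*(1-alpha))-th
    smallest value bounds a fresh loss with probability >= 1 - alpha.
    """
    s = sorted(calibration_losses)
    n = len(s)
    k = math.ceil((n + 1) * (1 - alpha))
    if k > n:
        return float("inf")  # too few calibration points for this alpha
    return s[k - 1]

def should_kill(new_loss, calibration_losses, alpha=0.05):
    """Flag the strategy when today's loss is conformally anomalous."""
    return new_loss > conformal_threshold(calibration_losses, alpha)
```

The appeal over an ad-hoc drawdown limit is the distribution-free guarantee: under exchangeability, the kill switch false-alarms at most an alpha fraction of the time, regardless of the loss distribution.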
Quant Beckman@quantbeckman·
@StoneFelt1 This is not a paper, but part of our research. It will be released next week.
Quant Beckman@quantbeckman·
New research on the way. Conformal prediction for kill switches in #trading