
GeorgL0ngGamma




@KeatonInglis What a job you have! Legal must have changed your messaging on you 13 times this year, and somehow you almost kept up. What's the current official marketing message legal tells you to use? Which version was your favorite?

"When will then be now? Soon." Here's a look at consensus drift on the Strait of Hormuz reopening (public GS data). We all know it will be open soon, but which possibilities become the future? 1/3 Chart 1 shows the expected disruption as estimated on each publication day: 74 days so far, and the estimates keep drifting higher.







A common problem in insurance pricing in a highly competitive market: you have a bunch of independent estimates of the EV of a risk that's complex and difficult to price. Each estimate = true (unknown) mean + random model error, which can be big or small, positive or negative. You could be the most accurate modeler and still win very few bids, because there will always be some idiot who undercuts you, unknowingly at a loss. The result is that the whole market loses money. Economists call this "the winner's curse". My question is: how big a problem is this in the world of RFQ same-game parlays on prediction markets? Seems like it would be similarly vulnerable.
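The mechanism is easy to see in a Monte Carlo sketch: even when every bidder's estimate is unbiased, the *lowest* bid is systematically below the true cost, so the winner books a loss on average. All the numbers below (true cost, noise, bidder count) are illustrative assumptions, not market data:

```python
import random

random.seed(42)

TRUE_COST = 100.0   # true expected loss of the risk (unknown to bidders)
NOISE_SD = 15.0     # each bidder's model error; hypothetical value
N_BIDDERS = 10
N_TRIALS = 100_000

winner_margins = []
for _ in range(N_TRIALS):
    # each bidder quotes their own unbiased but noisy estimate of the cost
    bids = [random.gauss(TRUE_COST, NOISE_SD) for _ in range(N_BIDDERS)]
    winning_bid = min(bids)  # lowest premium wins the business
    winner_margins.append(winning_bid - TRUE_COST)

avg_margin = sum(winner_margins) / N_TRIALS
print(f"average winner margin vs true cost: {avg_margin:.2f}")
```

With these parameters the winner's average margin is strongly negative even though no individual model is biased, which is exactly the dynamic the post describes for competitive RFQ quoting.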



We read CZ’s ‘memoir’ to protect anyone else from having to ft.trib.al/EKGyIHp

CME, the largest US derivatives exchange, and Silicon Data are teaming up to create a futures market for computing power, a key factor needed to help power the AI boom bloomberg.com/news/articles/…




Stephen Wolfram, founder of Wolfram Research, explains how LLMs are quietly dismantling our deepest assumptions about consciousness.

He argues that large language models have done something philosophy and neuroscience couldn't: "In terms of consciousness, I have to say, the idea that there's sort of something magic that goes beyond physics that leads to sort of conscious behavior, I kind of think that LLMs kind of put the final nail in that coffin."

His reasoning is that LLMs keep doing things people assumed they couldn't: "There were all these things where it's like, oh, maybe it can't do this, but actually it does. And it's just an artificial neural net."

Wolfram then challenges a core assumption about conscious experience: the feeling that we are a single, continuous self moving through time. "I think our notion of consciousness is a lot related to the fact that we believe in the single thread of experience that we have. It's not obvious that we should have a persistent thread of experience."

He points out that physics doesn't actually support this intuition: "In our models of physics, we're made of different atoms of space at every successive moment of time. So the fact that we have this belief that we are somehow persistent, we have this thread of experience that extends through time, is not obvious."

Then Wolfram offers a striking origin story for consciousness itself. @stephen_wolfram suggests it traces back to a simple evolutionary pressure: the moment animals first needed to move. "I kind of realized that probably when animals first existed in the history of life on Earth, that's when we started needing brains. If you're a thing that doesn't have to move around, the different parts of you can be doing different kinds of things. If you're an animal, then one thing you have to do is decide, are you going to go left or are you going to go right?"

That single binary choice, he argues, may be the seed of everything we now call awareness: "I kind of think it's a little disappointing to feel that this whole wanted thing that ends up being what we think of as consciousness might have originated in just that very simple need to decide if you are an animal that can move. You have to take all that sensory input and you have to make a definitive decision about do you go this way or that way."

The takeaway is unsettling but clarifying. If LLMs can produce complex behavior from simple rules, then consciousness may not be a mystical add-on to physics. It may just be what happens when a layered enough system has to make a decision.

Professor Marcos López de Prado at Cornell - the man Shannon and Thorp's framework eventually led to. He personally managed $13 billion at Guggenheim Partners with an audited risk-adjusted return of 2.3 - an institutional Sharpe-equivalent that less than 1% of fund managers ever hit. Then he became the first head of machine learning at AQR Capital, a $226B fund. Then he went to Cornell to teach this. Shannon used information theory to beat Buffett. Thorp used it to beat Vegas. López de Prado uses it to manage billions for institutions today. The article above applies that exact same lineage to Polymarket - KL-divergence, max-entropy, entropy collapse: three tools you can use today. 1 hour from one of three people on Earth qualified to teach this ↓
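Two of the tools the thread names (KL-divergence and entropy, whose decline over time is the "entropy collapse") can be sketched directly on prediction-market probabilities. The distributions below are illustrative placeholders, not figures from the article:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """KL(p || q) in bits: extra surprise from using q when p is the truth."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# hypothetical prices for a 3-outcome contract, normalized to probabilities
market = [0.60, 0.30, 0.10]   # market-implied distribution (illustrative)
model  = [0.45, 0.35, 0.20]   # your own model's view (illustrative)

print(f"market entropy: {entropy(market):.3f} bits")   # falls as the market "collapses"
print(f"KL(model || market): {kl_divergence(model, market):.3f} bits")
```

As a market converges on one outcome its entropy drops toward zero, and a positive KL-divergence between your model and the market is one way to flag a quantifiable disagreement worth betting on.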





