

Jared Augustine

@jaredaugustine
Family first. @PeoplesLeagueX @AskBillyBets Mega Labs. Previous @thuzio @juliusworks @knightsofdegen. Creator eco, sports, AI, crypto...NYG expert.



Today marks 1 year since I made the decision to remove alcohol from my life. I don’t miss it. I will never go back. I finally feel “awake.” Here’s the story… 🧵

I don’t think a lot of people understand how bad it has gotten in elementary school. Parents are fighting to keep their kids away from screens and junk videos, only to have their public schools give away the game.

This might just be one of the most insane self-own disclosures I have ever witnessed. Notice how many blinks and tics you can find before he doesn’t answer the question: “Are you a profitable bettor?” @DubClub_win @SGP_Vick @PortmanTracker2 @BBBtracker

In 1945, Friedrich Hayek outlined the Knowledge Problem that any society faces: the central economic problem is not resource allocation - it is how to use knowledge that is dispersed among millions of individuals. He argues that information is fragmented, local, dynamic, and often hidden. No government or central planner can ever fully possess it, which makes central planners inefficient resource allocators. He proposes markets as the solution: knowledge is decentralized, and prices are how society aggregates it. This idea is the intellectual foundation of modern prediction markets.

Decades later, in 1988, the University of Iowa launched the Iowa Electronic Markets (IEM), which allowed small-stakes trades on US elections and macro events. The results: even thin, low-capital markets outperformed polls. This was the first credible empirical proof that market prices are effective aggregators of public beliefs.

A variety of corporate and policy experiments followed in the 2000s. Google, HP, and Microsoft all tried their own internal prediction markets to forecast product launches and sales targets. DARPA built its own to forecast geopolitical events. The results were consistent: broad participation with monetary incentives led to accurate forecasts.

Then, in 2015, Philip Tetlock published Superforecasting. The book, the culmination of decades of research into human judgment, shows that groups of curious and humble “forecasters” dramatically outperformed intelligence analysts and domain experts at forecasting. By showing that smart amateurs can beat experts, Tetlock called into question whether we should trust authority figures for predictions about the future.

Today, Kalshi is sitting on one of the largest repositories of high-quality market data in the world.
For the first time, public beliefs across a variety of domains - from economics to politics and culture - are aggregated at scale through market prices and updated in real time as new information arrives. Our data contains answers to open questions about prediction markets: why they outperform traditional belief-aggregation methods, how to detect shifts in collective sentiment, and which players drive market accuracy.

Until now, this proprietary data has been closed to the public. We are launching @KalshiResearch to change that. We invite academics, researchers, economists, philosophers, and interested parties to work with us to study and uncover the fundamentals underpinning belief formation and prediction markets.

As Hayek proposed 80 years ago, prediction markets have the potential to improve society's collective decision making and resource allocation. The goal for Kalshi Research is to fulfill his vision.
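The aggregation idea in the thread above can be sketched as a toy simulation. This is my own illustration, not Kalshi's matching engine or any real market mechanism: traders hold noisy private estimates of an event's probability, each trade nudges the price toward that trader's belief, and the final price lands closer to the truth than the average individual did.

```python
import random

random.seed(42)

# Toy sketch of Hayekian price aggregation (illustrative only).
TRUE_PROB = 0.70   # hypothetical ground truth, unknown to any trader
N_TRADERS = 500

# Each trader sees a noisy, private signal of the true probability.
beliefs = [min(1.0, max(0.0, random.gauss(TRUE_PROB, 0.15)))
           for _ in range(N_TRADERS)]

price = 0.50       # market opens at "no information"
for b in beliefs:
    price += 0.05 * (b - price)   # each trade nudges price toward belief

avg_individual_error = sum(abs(b - TRUE_PROB) for b in beliefs) / N_TRADERS
market_error = abs(price - TRUE_PROB)

print(f"final market price:       {price:.3f}")
print(f"average individual error: {avg_individual_error:.4f}")
print(f"market price error:       {market_error:.4f}")
```

The point of the sketch: no single trader knows the truth, yet the price, by averaging over many dispersed signals, ends up with a smaller error than the typical individual.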

🚨BREAKING: MIT hooked people up to brain scanners while they used ChatGPT. What they found should concern every single person reading this. ChatGPT users showed 55% weaker brain connectivity than people who didn't use it. Not after years. After just four months.

Here's how they tested it. 54 people were split into three groups: one used ChatGPT to write essays, one used Google, and one used nothing but their own brain. They wore EEG monitors that tracked their brain activity in real time across four sessions over four months. The brain-only group built the strongest, most widespread neural networks. Google users were in the middle. ChatGPT users had the weakest brains in the room. Every time.

Then the memory test hit. Participants were asked to recall what they'd just written minutes earlier. 83% of ChatGPT users couldn't quote a single line from their own essay. They wrote it. They couldn't remember it. The words passed through them like they were never there.

It gets worse. In the final session, ChatGPT users were told to write without AI. Their brains were measurably weaker than people who never used AI at all. 78% still couldn't recall their own writing. The damage didn't go away when the tool was removed.

Meanwhile, brain-only users who tried ChatGPT for the first time? Their brains lit up. They wrote better prompts. They retained more. Their brains were already strong enough to use AI as a tool instead of a crutch.

The researchers also found that every ChatGPT essay on the same topic looked almost identical. More facts, more dates, more names. But less original thinking. Everyone using ChatGPT produced the same generic output while believing it was their own.

MIT gave this a name: cognitive debt. Like financial debt, you borrow convenience now and pay with your thinking ability later. Except there's no way to pay it back. The question isn't whether ChatGPT is useful. It's whether the price is your ability to think without it.

We are hosting our first Prediction Market Conference in March 2026. Researchers, economists, policymakers, and traders will discuss big questions around prediction markets and knowledge aggregation. Spots will be limited. Reply here with a topic if interested in joining.






BREAKING: The U.S. Supreme Court just ruled in a 6–3 decision that California's school policy keeping secrets from parents about their children's gender transitions is unconstitutional.

February was Kalshi’s biggest single month of trading since it launched in July of 2021 — $10.4 billion of volume. Sports made up over 82%.

Top Markets:
NCAA MBB - $2.3B
NBA - $1.7B
Tennis - $1.4B
Parlays - $1.2B
Bitcoin - $631M
Soccer - $518M
NFL - $394M
Golf - $258M
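Quick back-of-envelope arithmetic on the figures above (my own check, assuming Parlays count as sports volume and Bitcoin is the only non-sports market listed): the sports markets broken out here sum to about $7.8B, roughly 75% of the $10.4B total, so the "over 82%" sports share implies additional smaller sports markets not itemized in the list.

```python
# Sanity-check the reported figures (all values in billions of USD).
total = 10.4
sports_markets = {
    "NCAA MBB": 2.3, "NBA": 1.7, "Tennis": 1.4, "Parlays": 1.2,
    "Soccer": 0.518, "NFL": 0.394, "Golf": 0.258,
}  # Bitcoin ($0.631B) excluded as non-sports

listed_sports = sum(sports_markets.values())
listed_share = listed_sports / total

print(f"listed sports volume: ${listed_sports:.2f}B "
      f"({listed_share:.1%} of total)")
```

So the itemized sports markets alone do not reach 82%; the stated share must include a long tail of smaller markets.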


More evidence that the global decline in test scores that began after 2012 is linked to the proliferation of smartphones and computers in class: The slide was bigger in countries where students began spending more time on devices (for leisure) generationtechblog.com/p/phones-at-sc…




This week, Anthropic delivered a master class in arrogance and betrayal, as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO, @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech.
This decision is final.