Ashuwat
144 posts
@_devott
building hft systems
Joined June 2025
29 Following · 17 Followers
Ashuwat@_devott·
@Luc1924370 @gemchange_ltd @Polymarket @PolymarketTrade That’s true, you can use a volatility smile to see the DISTRIBUTION of volatility at points in time, but you can’t forecast volatility the same way. The volatility isn’t reflected in the market but in information, and if you aren’t using information theory, it doesn’t work.
Luc@Luc1924370·
@_devott @gemchange_ltd @Polymarket @PolymarketTrade Isn’t it just a binary option, which is very easy to back an implied vol out of? A lot of the options offer a series of strikes (e.g. on BTC price), letting you see how the smile varies across maturities. Lack of liquidity is true, but I don’t see why that makes it a future.
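For what it’s worth, backing an implied vol out of a binary quote is mechanical under Black-Scholes: a cash-or-nothing call is worth e^(−rT)·N(d2), so a quoted price can be inverted numerically. A minimal stdlib-only sketch (prices and parameters are illustrative; the search-range comment matters because the binary price is not monotone in vol):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def binary_call_price(S, K, T, r, sigma):
    """Black-Scholes value of a cash-or-nothing call paying $1 if S_T > K."""
    d2 = (log(S / K) + (r - 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
    return exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=None, tol=1e-9):
    """Bisection for sigma. The binary price is NOT globally monotone in vol,
    but for an out-of-the-money binary (S < K) it is increasing up to
    sigma* = sqrt(2*|log(S/K) + r*T| / T), so we search below that bound."""
    if hi is None:
        hi = sqrt(2.0 * abs(log(S / K) + r * T) / T)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_call_price(S, K, T, r, mid) > price:
            hi = mid   # price too high -> vol guess too high on this branch
        else:
            lo = mid
    return 0.5 * (lo + hi)

# recover a known vol from an OTM binary quoted at its model price
p = binary_call_price(90.0, 100.0, 1.0, 0.0, 0.20)
print(implied_vol(p, 90.0, 100.0, 1.0, 0.0))   # ~0.20
```

With strikes across maturities, repeating this per contract is exactly how a smile surface would be read off the quotes.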
gemchanger@gemchange_ltd·
The math that made Wall Street billions pricing options just got ported to prediction markets. This paper builds the first Black-Scholes equivalent for platforms like Polymarket, treating belief volatility as a quotable risk factor, with proper tools for hedging jump risk around elections and macro events. The paper is dense but worth it:
Ashuwat@_devott·
@jino_rohit You should make a large object tensor and reference K, W and Q so you can do SIMD transforms. Will be much faster.
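Reading the "large object tensor" suggestion as packing the attention projection matrices (conventionally Q, K, V) into one weight, a hedged numpy sketch of the idea: one large matmul over the packed weights replaces three small ones, which is what lets the SIMD/BLAS kernel amortize loads of the input. Shapes and names here are illustrative, not from the original C++ code:

```python
import numpy as np

d_model, d_head = 8, 8
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))

# Pack the three projection matrices into one (d_model, 3*d_head) tensor:
# a single GEMM over the packed weights keeps data contiguous and reuses
# each row of x across the Q, K and V outputs.
W_qkv = np.concatenate([Wq, Wk, Wv], axis=1)

x = rng.standard_normal((4, d_model))          # 4 tokens
qkv = x @ W_qkv                                # one fused matmul
q, k, v = np.split(qkv, 3, axis=1)

# identical results to three separate matmuls
assert np.allclose(q, x @ Wq)
assert np.allclose(k, x @ Wk)
assert np.allclose(v, x @ Wv)
```

The same layout trick is why many transformer implementations store a single fused QKV weight rather than three separate ones.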
Jino Rohit@jino_rohit·
multi head attention in C++ in 50 lines of code
Ashuwat@_devott·
@jia_seed I thought it was the iPhone 17 in orange
jia@jia_seed·
feeling genuinely lucky to be in san francisco in this era
Ashuwat@_devott·
@alex_prompter Most LLM research just maps distributions and calls it a day without validating. I’m starting to hate it now
Alex Prompter@alex_prompter·
This paper from Harvard and MIT quietly answers the most important AI question nobody benchmarks properly: Can LLMs actually discover science, or are they just good at talking about it?

The paper is called “Evaluating Large Language Models in Scientific Discovery”, and instead of asking models trivia questions, it tests something much harder: Can models form hypotheses, design experiments, interpret results, and update beliefs like real scientists?

Here’s what the authors did differently 👇
• They evaluate LLMs across the full discovery loop: hypothesis → experiment → observation → revision
• Tasks span biology, chemistry, and physics, not toy puzzles
• Models must work with incomplete data, noisy results, and false leads
• Success is measured by scientific progress, not fluency or confidence

What they found is sobering. LLMs are decent at suggesting hypotheses, but brittle at everything that follows.
✓ They overfit to surface patterns
✓ They struggle to abandon bad hypotheses even when evidence contradicts them
✓ They confuse correlation for causation
✓ They hallucinate explanations when experiments fail
✓ They optimize for plausibility, not truth

Most striking result: high benchmark scores do not correlate with scientific discovery ability. Some top models that dominate standard reasoning tests completely fail when forced to run iterative experiments and update theories.

Why this matters: real science is not one-shot reasoning. It’s feedback, failure, revision, and restraint.

LLMs today:
• Talk like scientists
• Write like scientists
• But don’t think like scientists yet

The paper’s core takeaway: scientific intelligence is not language intelligence. It requires memory, hypothesis tracking, causal reasoning, and the ability to say “I was wrong.” Until models can reliably do that, claims about “AI scientists” are mostly premature.

This paper doesn’t hype AI. It defines the gap we still need to close. And that’s exactly why it’s important.
Ashuwat@_devott·
@shafu0x We do. It’s not native, but Neon provides it.
shafu@shafu0x·
why don't we have the EVM on Solana
Ashuwat@_devott·
@cachecrab This is all of Rust’s features in one image lol
cache crab@cachecrab·
you haven’t seen my level of Rust type wrangling
Ashuwat@_devott·
@davidgu I didn’t even know that was possible
David Gu@davidgu·
we run 18 million EC2 instances per month. At our scale, we see very rare bugs very frequently. Last week, we received *half* an HTTP request. Not an HTTP 206, literally half a request. Content-Length was 2350 bytes. The body was actually 1200 bytes, truncated mid JSON doc.
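A server can at least detect this failure mode rather than hand a partial body to the JSON parser: read exactly Content-Length bytes and treat early EOF as truncation. A minimal sketch, where the byte stream stands in for a socket file object (not the poster's actual stack):

```python
import io

def read_body(stream, content_length):
    """Read exactly content_length bytes from stream.
    A single read() on a socket may legally return fewer bytes than asked,
    so loop; EOF before the declared length means a truncated request."""
    chunks, remaining = [], content_length
    while remaining > 0:
        chunk = stream.read(remaining)
        if not chunk:
            got = content_length - remaining
            raise ValueError(f"truncated body: got {got} of {content_length} bytes")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# the case from the tweet: 2350 declared, 1200 on the wire -> raises
try:
    read_body(io.BytesIO(b"x" * 1200), 2350)
except ValueError as e:
    print(e)
```

The distinction matters because a naive `stream.read(content_length)` returns whatever arrived and looks like success.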
Ashuwat@_devott·
@ankurnagpal The amount of capital you would need to make this work statistically is a lot
Ankur Nagpal@ankurnagpal·
I recently learnt about this financial engineering trick that blew my mind: you can trade options to create a “synthetic loan” that lets you (potentially) borrow millions of dollars at treasury rates, and get a capital loss for the interest you pay. Here’s how it works:
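The construction usually behind this trick is selling a box spread: the four-leg option position owes a fixed K2 − K1 at expiry regardless of where the underlying goes, so the cash collected today is effectively a loan whose rate is set by the options market. A sketch with hypothetical numbers, ignoring fees, early-assignment risk, and margin:

```python
from math import log

# Short box spread: collect `proceeds` today, owe K2 - K1 at expiry.
# All numbers below are hypothetical, for illustration only.
K1, K2 = 4000.0, 5000.0     # strike gap -> $1000 owed per box at expiry
proceeds = 980.0            # cash the market pays today for the short box
T = 0.5                     # years to expiry

payoff = K2 - K1
# continuously compounded borrow rate implied by the box price
implied_rate = log(payoff / proceeds) / T
print(f"implied borrow rate: {implied_rate:.4%}")
```

By no-arbitrage the box should price near the risk-free discount of the strike gap, which is why the implied rate lands close to treasury rates on deep, European-style index options.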
Ashuwat@_devott·
@lorden_eth The thing with these particular markets (BTC, ETH) is that the outcome is not gatekept. It turns these predictions into literal futures that eventually settle at a guaranteed price (in this case 0 or 1). This is traditional financial math in prediction markets.
Lorden@lorden_eth·
I revealed the exact strategy of the BTC arbitrage bot on Polymarket. This script made $217k in just 30 days, trading with a tricky method and catching spreads.

Here is the range of its entry prices and trade sizes, showing the bot’s positions across different price levels:
- Low entry (15–25¢) > many small and medium trades > bet on high upside with limited risk
- High entry (75–100¢) > several similar buys > betting on outcomes that are very likely to happen to lock in profit
- Entries around 50¢ are used rarely, mostly for hedging or arbitrage
- When prices are far from 50¢ (20¢ or 80–100¢), position size increases

Bot’s profile: polymarket.com/@15m-a4?via=tr…
Lorden@lorden_eth

A new “bot trading wallet” has been found and it’s already farming profits. It uses the same strategy as previous “bots”, only on BTC Up or Down markets. I’m curious how they’re doing it so clean and perfect. The main strategy is to buy both outcomes for a total under $0.99: it pays less than $0.99 for something guaranteed to be worth $1, takes the difference between buying YES and NO, and pockets a small profit step by step, but with a 100% chance. polymarket.com/@15m-a4?via=tr…
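The arithmetic described here is easy to check: a YES/NO pair in a binary market always settles to $1 combined, so any fill whose combined cost (plus fees) is under $1 is locked-in profit. A minimal sketch with illustrative prices:

```python
def arb_profit(yes_price, no_price, fee=0.0):
    """Buying one YES and one NO share costs yes + no (+ fees) and always
    pays out exactly $1 at settlement, whichever outcome occurs.
    Positive return value = riskless profit per share pair."""
    cost = yes_price + no_price + fee
    return 1.0 - cost

# e.g. YES at 62c and NO at 36c: ~2c locked in per pair, before slippage
print(arb_profit(0.62, 0.36))
```

The catch in practice is execution: both legs must fill before the prices move, and fees or partial fills can eat the spread.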

Chaz Byrnes@ChazByrnes4·
Introducing the Official Polymarket Rust CLOB Client 🦀

One of the first things I ended up working on after joining Polymarket was a Rust client for the CLOB. There were already good clients in other languages, but a lot of builders were asking for native Rust. I also wanted to introduce Rust into the Polymarket organization and ecosystem, especially for people running trading systems where performance and type safety are critical.

With that, I decided to design and implement rs-clob-client, inspired by the existing Python and TypeScript clients, and recently open-sourced it. It’s focused on ergonomics and performance. If you’re building on the CLOB and prefer Rust, hopefully this saves you some time. @PolymarketBuild

Enjoy! github.com/Polymarket/rs-…
Ashuwat@_devott·
@theskilledcoder This is a bit misleading. There are different types of queues that are incredibly fast. For example, SPSC (single-producer single-consumer) lock-free queues have latencies under 250 ns (some even lower), but only work when you make sure there is no contention (thread affinity, minimal cache-coherency traffic).
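For illustration, the core structure of such a queue, sketched in Python. The real sub-250 ns versions are C++/Rust, with head/tail as atomics padded onto separate cache lines; this sketch only shows the ring-buffer logic that makes the fast path lock-free:

```python
class SPSCQueue:
    """Single-producer single-consumer ring buffer sketch.
    Each index is written by exactly one thread, which is what removes
    the need for locks; capacity is a power of two so wrap is a mask."""
    def __init__(self, capacity=1024):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self._buf = [None] * capacity
        self._mask = capacity - 1
        self._head = 0   # advanced only by the consumer
        self._tail = 0   # advanced only by the producer

    def push(self, item):
        if self._tail - self._head > self._mask:
            return False                     # full: producer backs off
        self._buf[self._tail & self._mask] = item
        self._tail += 1                      # publish after the slot is written
        return True

    def pop(self):
        if self._head == self._tail:
            return None                      # empty
        item = self._buf[self._head & self._mask]
        self._head += 1
        return item
```

In the native versions the `tail += 1` publish is a release store and the consumer's read of `tail` is an acquire load; pinning each thread to a core (thread affinity) keeps the two index cache lines from ping-ponging.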
Skilled Coder@theskilledcoder·
When to Use a Queue 👇
- If user request waits on slow IO → add queue
- If traffic comes in bursts → add queue
- If failure of one component shouldn’t fail request → add queue

When NOT to Use a Queue 👇
- If task must complete within request lifecycle → don’t queue
- If order matters strictly and latency is critical → don’t queue
- If throughput is low and predictable → queue adds overhead
Quant Beckman@quantbeckman·
I like this paper. This LxCIM method gave me several cool ideas 🤓
Ashuwat@_devott·
@orrdavid Remember, these are uncorrelated returns.
David Orr@orrdavid·
Medallion returns for the last few years:
2022: 19%
2023: 25%
2024: 30%
2025 YTD: 20%

This is way lower than their historic average. Only their 2001–2005 period was this low. Could be variance, but more often than not it’s a sign that quant investing is finally peaking.
Ashuwat@_devott·
@quantscience_ A 472% return does not exist in ONE strategy for a single market with large liquidity. Too good to be true
Quant Science@quantscience_·
How to make a simple algorithmic trading strategy with a 472% return using Python. A thread. 🧵
PyQuant News 🐍@pyquantnews·
Algorithmic trading cheat sheet (for download).
Ashuwat@_devott·
@quant_xbt It would take some crazy volume to influence the future. For political markets I can see how this would be the case, but given current volume, not a chance. It’s a very interesting thought though, and a hard but plausible reality
Quant@quant_xbt·
Prediction markets are supposed to model reality. As they grow in volume, there’s a risk they start shaping it instead. That’s a dangerous line to cross.
Quant Beckman@quantbeckman·
Bet you don’t even know what equation this is, do you? 😎
Akshay 🚀@akshay_pachaar·
You're in an ML Engineer interview at Microsoft.

The interviewer asks: "Why do Boosting models primarily use Trees as the base learner? What's wrong with Linear regression or SVMs?"

You: "Because linear models can’t fit non-linear data."

Interview over. Here's what you missed:

Texts describing boosting start with “weak learners” but then immediately pivot to trees. But this DOES NOT mean they can only work with trees.

Consider a simple boosting algorithm:
1) Train a tree model.
2) Calculate the left-over residual.
3) Train the next model on this leftover residual.
4) Go to step 2.

Looking closely, was it really necessary to use a tree there? All we need is the residual term, which can come from any model. This shows that while boosting is often associated with trees, the algorithm itself is agnostic to the type of base learner used. If you use sklearn, it is actually possible to employ a different base learner with AdaBoost.

𝗦𝗼 𝘄𝗵𝘆 𝘁𝗿𝗲𝗲𝘀? The reason is simple. Tabular data is quite complex:
- Variables can be skewed.
- Features can have missing values.
- Different features can have different scales.
- There can be categorical variables.
- And more.

Using standard algorithms as base learners will require extensive data cleaning and feature engineering. But this isn’t the case with tree-based models. You can just plug them into any dataset and overfit.

Also, since we continuously add a new model to fit the left-over residuals, the distribution of the dependent variable (residual in case of regression) keeps evolving. The feature engineering applied during the first step of boosting will likely not be helpful in the subsequent steps, requiring further manual intervention. Using tree models, however, resolves this due to their ability to operate on any kind of data with minimal feature engineering.

👉 Over to you: What are some other reasons behind using tree models as base learners in boosting?

Share this with your network if you found this insightful.
Find me → @akshay_pachaar ✔️ For more insights and tutorials on LLMs, AI Agents, and Machine Learning!
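The residual loop above really is learner-agnostic. A minimal pure-Python sketch using a closed-form 1D linear regression as the weak learner, on hypothetical data (sklearn's AdaBoost similarly accepts non-tree base estimators):

```python
def fit_linear(xs, ys):
    """Closed-form 1D least squares: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return b, my - b * mx

def boost(xs, ys, rounds=10, lr=0.5):
    """Generic residual boosting: any model that can fit (xs, resid) works."""
    models, resid = [], list(ys)
    for _ in range(rounds):
        b, a = fit_linear(xs, resid)                 # fit the current residual
        models.append((b, a))
        resid = [r - lr * (b * x + a)                # subtract the shrunken fit
                 for x, r in zip(xs, resid)]
    return lambda x: sum(lr * (b * x + a) for b, a in models)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]    # roughly y = 2x + 1, with noise
f = boost(xs, ys)
```

Note what this also illustrates: a sum of linear fits collapses back to a single linear model, so boosting linear learners adds no expressiveness, which is part of why cheap-but-expressive trees are the default in practice.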