Quant Fiction

2.1K posts


@quantfiction

Be wary of those who believe in a neat little world @pandilladeflujo fella

Joined February 2018
538 Following · 13.1K Followers
Pinned Tweet
Quant Fiction
Quant Fiction@quantfiction·
The Sharpe Ratio is an industry standard for comparing investments/strategies, but also a favorite target of criticism: "Stdev isn't risk!" "Returns aren't normally distributed!" "Skew!" "Kurtosis!" Does any of it matter? Let's take a look 🧵:
Quant Fiction tweet media
English
40
106
852
0
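Since the thread is about comparing Sharpe Ratios, here is a minimal sketch of the standard annualized calculation (the function name, the 252-trading-day convention, and the sample-stdev choice are my assumptions, not from the thread):

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from a series of periodic returns.

    `risk_free` is an annual rate; it is spread evenly across periods
    before computing excess returns.
    """
    excess = np.asarray(returns, dtype=float) - risk_free / periods_per_year
    # Mean excess return over its sample standard deviation, scaled to annual.
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)
```

Using `ddof=1` (sample standard deviation) is one common convention; some shops use the population estimator, which changes the number slightly for short series.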
Quant Fiction
Quant Fiction@quantfiction·
I am once again wondering what the point of these markdown files actually is
Quant Fiction tweet media
English
0
0
3
345
Quant Fiction
Quant Fiction@quantfiction·
Arb is still open for Thursday's Chiefs vs Bills game though!
English
0
0
0
322
Quant Fiction
Quant Fiction@quantfiction·
@macrocephalopod Everyone should try this with the markets in the screenshot.. last I checked there won’t be NFL games for about half a year
English
0
0
0
235
Quant Fiction
Quant Fiction@quantfiction·
@cubantobacco @geminicli Gemini CLI is the only harness that “works”.. it’s 10x worse in opencode (This is it. I broke the build)
Quant Fiction tweet media (4 images)
English
0
0
0
177
Jacob Matson
Jacob Matson@matsonj·
claude negging gemini
Jacob Matson tweet media
English
3
0
10
1.9K
Quant Fiction retweeted
Alex Noonan
Alex Noonan@AlexNoonan6·
As a bills fan I'm uninterested in this illegitimate so called super bowl
English
1
1
4
655
Quant Fiction
Quant Fiction@quantfiction·
@matsonj I think actual “dashboards” still have a purpose for real-time data (think grafana, etc).. like a literal car dashboard “how fast am I going right now” not “what was my average oil pressure for the past 3 months”
English
1
0
1
346
banteg
banteg@banteg·
@badlogicgames i eagerly tried to use gemini via all available login and api options, it works for 5 minutes and then either goes crazy or stops replying. literally unusable.
English
2
0
4
186
Jeffrey Emanuel
Jeffrey Emanuel@doodlestein·
I wanted to have a good, lightweight, and fast semantic embedding model for local search for both my cass tool (for searching across coding agent sessions) and my xf tool (for searching your downloaded X archives). Basically, it has to run on CPU only and should be fairly quick (sub-1-second response) and actually "understand" semantic concepts well. I also needed a "reranker" model for fusing together the semantic search results with the standard lexical search results to get a good hybrid search, with the same requirements for CPU-only speed.

There are so many options to choose from for both that it's a bit overwhelming if you want to pick the current all-around best ones. So I had Claude do a bunch of web research and then conduct a "bake off". You can see what it came up with here (the whole /docs directory is filled with relevant stuff): github.com/Dicklesworthst…

So what did I end up choosing in the end? The two main choices were the potion-128M model, which has sub-millisecond response time and "ok" results, and a bona fide mini transformer model, all-MiniLM-L6-v2, that has really decent embeddings but takes 128ms to respond, or 223x slower!

Finally, I realized I didn't need to choose, I would have my cake and eat it, too. I asked Claude: "what about a 2-tier system where we use potion as a first pass but at the same time in the background (separate thread or memory-resident "hot" process for quick start) we do miniLM-L6 and then when it finishes we "upgrade" the search results in an intuitive way, showing the results continuously moving to rearrange according to the revised semantic scores; this shouldn't change the rankings TOO much."

Claude liked the idea (see screenshots) and the rest is history. This will be my standard search that I use across all my Rust tooling (I'll probably port it to Golang, too, so I can embed it in bv natively).
Jeffrey Emanuel tweet media (3 images)
English
17
7
168
16.9K
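The 2-tier search described above can be sketched in a few lines (the tweet's real implementation is in Rust; `fast_score` and `slow_score` here are hypothetical stand-ins for potion-128M and all-MiniLM-L6-v2 similarity scoring, and `on_update` stands in for whatever redraws the result list):

```python
import threading

def two_tier_search(query, docs, fast_score, slow_score, on_update):
    """Rank instantly with a cheap model, then upgrade in the background.

    `on_update` is called once with the fast ranking, and again with the
    refined ranking once the slower model finishes.
    """
    # First pass: cheap model, results appear immediately.
    on_update(sorted(docs, key=lambda d: fast_score(query, d), reverse=True))

    # Background pass: slower, better model re-ranks and "upgrades" the view.
    def refine():
        on_update(sorted(docs, key=lambda d: slow_score(query, d), reverse=True))

    t = threading.Thread(target=refine, daemon=True)
    t.start()
    return t  # caller can join() or let it finish on its own
```

The design point is that the caller never blocks on the slow model; the UI just re-sorts when the second `on_update` arrives.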
Quant Fiction
Quant Fiction@quantfiction·
@doodlestein For sure, just curious on your thoughts bc I'm trying to implement similar things in a couple projects but then see all the "RAG is dead" content and start to reconsider. I still don't know if I'm sold on agentic/file search for plain-English prompt -> correct result
English
1
0
1
80
Jeffrey Emanuel
Jeffrey Emanuel@doodlestein·
@quantfiction Well what I already had, using an index, is far better and faster for lexical search. But what I’m talking about here is semantic search, where you can give it the concept of what you want and it will find things that are similar based on the meaning.
English
1
0
2
157
Zac
Zac@PerceptualPeak·
WOW!!! If you have semantic memory tied to your UserPromptSubmit hooks, you MUST ALSO include it in your PreToolUse hook. I promise you - it will be an absolute GAME CHANGER. It will put your efficiency levels over 9,000 (*vegeta voice*).

How many times have you sat there, watching Claude Code go through an extended workflow, just to notice it start to go down a path you just KNOW will be error filled - and subsequently take forever to FINALLY figure it out? The problem with relying strictly on the UserPromptSubmit hook for semantic memory injection is the workflow drift from your original prompt. The memories it injects at the initiation of your prompt will be less and less relevant the longer the workflow runs.

Claude has a beautiful thing called thinking blocks. These blocks are ripe for the picking - filled with meaning & intent - which is perfect for cosine-similarity recall. Claude thinks to itself, "hmm, okay I'm going to do this because of this", then starts to engage the tool of its choice, and BOOM: the PreToolUse hook fires, takes the last 1,500 characters from the most recent thinking block in the active transcript, embeds it, pulls relevant memories from your vector database, and injects them into Claude right before it starts using its tool (hooks are synchronous). This all happens in less than 500 milliseconds.

The result? A self-correcting Claude workflow. Based on my testing thus far, this is one of the most consequential additions to my context management system I've implemented yet.

Photos: ASCII chart showing the workflow of the hook, and then two real use cases where the mid-stream memory embedding was actually useful. If you already have semantic memory set up, just paste this tweet and photos into Claude Code and tell it to implement it for you. Then enjoy the massive increase in workflow efficiency :)
Zac tweet media (3 images)
English
29
51
668
59.7K
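The recall step of the hook described above might look roughly like this (`embed` and `memory_search` are hypothetical stand-ins for the embedding model and vector-database query; the 1,500-character window and the hook names come from the tweet, everything else is an assumption):

```python
def pre_tool_use_hook(transcript, embed, memory_search, k=3, window=1500):
    """Before each tool call, recall memories relevant to the latest thinking.

    `transcript` is assumed to be a list of dicts with "type" and "text"
    keys; thinking blocks have type "thinking". Returns the top-k memories,
    or an empty list if there is nothing to recall from yet.
    """
    thinking = [b for b in transcript if b.get("type") == "thinking"]
    if not thinking:
        return []
    # Take the tail of the newest thinking block: it carries current intent.
    snippet = thinking[-1]["text"][-window:]
    # Embed it and pull the nearest memories from the vector store.
    return memory_search(embed(snippet), top_k=k)
```

Because hooks run synchronously, whatever this returns can be injected into the context before the tool call proceeds.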
Quant Fiction
Quant Fiction@quantfiction·
@JaredKubin @ihavedumbtakes I think the whole Mac mini thing is because most of the services it uses don’t have accessible APIs (by design) so it has to literally impersonate you as a user (likely in breach of ToS)
English
1
0
2
202
main street dow is over 50,000
main street dow is over 50,000@ihavedumbtakes·
New toy. Guess what I’m up to this week. @JaredKubin Will set up new id / accounts etc and not have access to my own personal stuff.
main street dow is over 50,000 tweet media
English
1
0
1
1.7K
Quant Fiction
Quant Fiction@quantfiction·
For Discord specifically, have any Clawd users gotten hit with a ban for impersonation using user tokens/selfbots? Been holding off on integrating this for a while because their ToS makes it seem like it's grounds for an un-appealable ban:
yanni@YanniTrades

x.com/i/article/2015…

English
3
0
5
575