alvinjamur
@alvinjamur

1.5K posts
Founder/CEO, tradeInsightAI: https://t.co/sIRpL0Utxg Founder/PM, SAC Synapse - 1st eq stat arb @SAC (‘93-‘04). Founder/CoCIO, Katonah Capital (2014-19).

New York, NY · Joined July 2007
1K Following · 4.9K Followers
Pinned Tweet
alvinjamur @alvinjamur ·
What you need will come to you, if you do not ask for what you do not need. - Nisargadatta Maharaj
4 replies · 3 reposts · 27 likes · 0 views
alvinjamur retweeted
Probability and Statistics
One theorem every ML engineer should know: the Johnson–Lindenstrauss Lemma.

It states that high-dimensional data can be projected into a much lower-dimensional space while approximately preserving pairwise distances.

Why it matters:
• Explains why random projections work
• Enables scalable learning in high dimensions
• Used in embeddings, compressed learning, and ANN search
• Helps fight the curse of dimensionality

The surprising part: you can reduce dimensions dramatically without destroying the geometry of the data. That’s why many ML systems can operate efficiently even with massive feature spaces.

Modern representation learning is deeply connected to this idea: good embeddings preserve structure while compressing information.

In ML, compression is often not loss of intelligence — it’s removal of redundancy.
Probability and Statistics tweet media
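The lemma is easy to check empirically. Here is a minimal sketch (my own illustration, not from the thread; the dimensions and sample counts are arbitrary choices) that projects 10,000-dimensional points down to 500 dimensions with a random Gaussian matrix and compares pairwise distances before and after:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 points in 10,000 dimensions
n, d, k = 1000, 10_000, 500
X = rng.normal(size=(n, d))

# Random Gaussian projection, scaled so squared distances
# are preserved in expectation
R = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ R

# Compare pairwise distances on 50 random (distinct) pairs
pairs = rng.choice(n, size=100, replace=False).reshape(50, 2)
i, j = pairs[:, 0], pairs[:, 1]
orig = np.linalg.norm(X[i] - X[j], axis=1)
proj = np.linalg.norm(Y[i] - Y[j], axis=1)
ratio = proj / orig

print(ratio.min(), ratio.max())  # typically within ~10% of 1
```

Note the projection matrix is data-independent: the same random `R` works for any point set, which is exactly why random projections scale so well.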
18 replies · 236 reposts · 1.8K likes · 128.7K views
alvinjamur @alvinjamur ·
Earlier today, a Geometer friend left me this voicemail: "Hey, Al! Please don't forget... Data are the domain of facts, likelihoods are the domain of logical deduction, and the priors are the domain of theoretical prejudice. Call me."
1 reply · 0 reposts · 6 likes · 783 views
alvinjamur retweeted
Harrison Ford @HarrisonFordLA ·
May the fourth be with you
GIF
2.9K replies · 51.8K reposts · 220.7K likes · 6.9M views
alvinjamur @alvinjamur ·
If you’d rather not fall back on these generative approaches to quant finance (replete with intellectual masturbation & beginner errors for answers), and you are a beginner who really wants to learn, start with this one. Timothy Masters can come next…
alvinjamur tweet media
6 replies · 53 reposts · 515 likes · 22.1K views
alvinjamur retweeted
Didier 'Dirac's ghost' Gaulin
Once again, I found ya a gem publicly available on arXiv. This time around, it is Bruno Martelli's 'An Introduction to Geometric Topology', a fantastic book on the topology and geometry of surfaces and three-manifolds, which would be useful to both mathematicians and physicists. If you are curious and have a standard background in topology and analysis, you will be able to pick this one up quickly; it also covers differential topology, Riemannian geometry, measure theory and homology. gl hf 👇👇👇👇
Didier 'Dirac's ghost' Gaulin tweet media
5 replies · 60 reposts · 425 likes · 16.5K views
alvinjamur retweeted
MIT CSAIL @MIT_CSAIL ·
Today, MIT & the IMO released MathNet, the world’s largest dataset of International Math Olympiad problems & solutions 🌍 MathNet is 5x larger than previous datasets & is sourced from over 40 countries across 4 decades: bit.ly/4u1bhBC
MIT CSAIL tweet media
15 replies · 543 reposts · 2.1K likes · 194.7K views
alvinjamur @alvinjamur ·
As someone who started using Claude Code approx 2 weeks after its release, I appreciated reading this more than the source: arxiv.org/pdf/2604.14228
0 replies · 0 reposts · 4 likes · 1.2K views
alvinjamur retweeted
Jeremy Nguyen ✍🏼 🚢 @JeremyNguyenPhD ·
Milla Jovovich (actress from The Fifth Element) created a world-beating Claude memory system with @bensig?! - 100% on LongMemEval — first perfect score ever recorded. Free and 100% open source. Github link in the quoted post from Ben. I'm keen to hear how it works for you.
Ben Sigman@bensig

My friend Milla Jovovich and I spent months creating an AI memory system with Claude. It just posted a perfect score on the standard benchmark - beating every product in the space, free or paid.

It's called MemPalace, and it works nothing like anything else out there. Instead of sending your data to a background agent in the cloud, it mines your conversations locally and organizes them into a palace - a structured architecture with wings, halls, and rooms that mirrors how human memory actually works.

Here is what that gets you:
→ Your AI knows who you are before you type a single word - family, projects, preferences, loaded in ~120 tokens
→ Palace architecture organizes memories by domain and type - not a flat list of facts, a navigable structure
→ Semantic search across months of conversations finds the answer in position 1 or 2
→ AAAK compression fits your entire life context into 120 tokens - 30x lossless compression any LLM reads natively
→ Contradiction detection catches wrong names, wrong pronouns, wrong ages before you ever see them

The benchmarks:
100% recall on LongMemEval — first perfect score ever recorded. 500/500 questions. Every question type at 100%.
92.9% on ConvoMem — more than 2x Mem0's score.
100% on LoCoMo — every multi-hop reasoning category, including temporal inference which stumps most systems.

No API key. No cloud. No subscription. One dependency. Runs on your machine. Your memories never leave. MIT License. 100% Open Source.

github.com/milla-jovovich…

233 replies · 775 reposts · 6.9K likes · 1.7M views
HF_Trader @HF_Trader ·
Backyard views this am… late-season nuke very welcome. Three seasons ago we had a record year: Alta had more snow than any place in the world (903in). Now we had another record year, but at the opposite end of the spectrum: worst year in history here at Alta. Last week it was 85 degrees all week in the valley and the snowpack got decimated, causing several resorts to close early for the season. The resorts in the Cottonwoods held on strong, and this late storm brought nearly 3ft of snow. This storm is the best skiing we have had all year long. Pretty wild if you think about it.
HF_Trader tweet media
9 replies · 1 repost · 83 likes · 7.7K views
alvinjamur retweeted
atomicbot.ai @atomicbot_ai ·
Running OpenClaw with Gemma 4 🦞
Free open-source local model
Device: MacBook Air M4, 16GB
92 replies · 107 reposts · 982 likes · 408K views
alvinjamur retweeted
alvinjamur @alvinjamur ·
So astute.
JH@CRUDEOIL231

I think way too many ppl are delusional about this idea of letting Iran control the SoH, having the US pull out, and just letting Iran set up a toll booth.

Where does Saudi’s power actually come from? It’s not just because they’re rich. Their entire influence comes from being the world’s only swing producer. We need oil, and Saudi controls that market.

If Iran takes over the SoH, they become the most powerful, one-of-a-kind global swing producer in history. If they don’t like the oil price? They can just "adjust" the traffic in a strait that handles ~20mb/d to swing prices however they want. If the UAE gets on Iran’s bad side? "No passage for UAE tankers." If Kuwait tries to build a bypass? "Fine, the SoH is closed starting today. Let’s see if you can finish that bypass, which takes years, without making a single dime."

By letting Iran control that flow, the US is effectively making Iran the ultimate energy gatekeeper. The entire regional hegemony shifts to Iran. Saudi and the UAE lose everything. Think about it: if you were MBS, would you let this happen?

Let’s say the US pulls out this week. The US started this mess, and now the GCC has to just sit there and watch their power handed over to Iran?

Let me give you a reality check for Americans: imagine Mexico now controls the North American continent. "Want to fly to the UK? Get Mexico’s permission. Want to import jet fuel from Asia? Pay Mexico a toll and take the route they tell you to. Did you dare to criticize Mexico? Now, no container ships can enter your waters. You can’t say a word against the great President of Mexico." It sounds like a fantasy, but that’s the reality for the GCC.

If the US tries to run away? If I were the GCC, I wouldn’t let them leave. I’d grab them by the hair and drag them back to clean up the mess they made.

I’ve said before that this is an existential issue for Iran and Israel. Well, Iranian control of the SoH is an existential issue for every other GCC nation.

And the GCC has leverage. They have massive wealth invested in the West, huge U.S. asset holdings, decades of lobbying networks, and they are the biggest donors for Trump’s terms. And of course they have oil.

Do you really think Brent would stay below $100/bbl if the GCC teamed up and cut just 3mb/d for six months? Even the most optimistic guy knows the answer is zero chance. They don't even need a fancy excuse: "Oh, since the US gave up on us and Iran owns the SoH, it's not safe. We have to cut production. Sorry!"

Within months, the US would be begging to come back. It’s just pushing the Middle East into an even bigger pit of fire. Thanks for listening to my TED Talk :) #oott #iran

0 replies · 0 reposts · 0 likes · 731 views
alvinjamur retweeted
Factory @FactoryAI ·
Droids can now pursue goals autonomously over multi-day horizons. You describe what you want, approve the plan, and come back to finished work. We call these Missions.
49 replies · 74 reposts · 882 likes · 405.8K views
alvinjamur retweeted
Jenny Zhang @jennyzhangzt ·
Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents – self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving.

We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution.

Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g. persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhao (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).
Jenny Zhang tweet media
157 replies · 659 reposts · 3.6K likes · 498.8K views
alvinjamur retweeted
Valeriy M., PhD, MBA, CQF @predict_addict ·
Most people in machine learning still misunderstand probabilities. A model can be perfectly calibrated and still be completely useless. This was proven more than 40 years ago by DeGroot & Fienberg (1983). Yet many ML papers still miss this point. Here is the idea.
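The classic illustration of this point (my own sketch, not from the thread): a "climatology" forecaster that always predicts the base rate is perfectly calibrated, yet has zero resolution and is useless for ranking or decisions:

```python
import numpy as np

rng = np.random.default_rng(1)

# 100,000 binary outcomes with a 30% base rate
y = (rng.random(100_000) < 0.3).astype(float)

# "Useless" model: predict the base rate for every single case
p = np.full(y.shape, 0.3)

# Calibration check: among cases predicted 0.3,
# about 30% are actually positive
empirical = y[p == 0.3].mean()
print(empirical)  # ≈ 0.3, i.e. perfectly calibrated

# But the model assigns every case the same score, so it cannot
# separate positives from negatives at all: ranking by p is a
# coin flip (AUC = 0.5). Calibration alone says nothing about
# resolution, which is DeGroot & Fienberg's point.
```

Calibration and resolution are the two separate terms in the Brier score decomposition; a model can be perfect on one and worthless on the other.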
3 replies · 29 reposts · 347 likes · 30.1K views
alvinjamur @alvinjamur ·
Happy birthday to Joseph Fourier!! That’s mathematics from 1822 that still allows us to… just add it to the looooong list…
alvinjamur tweet media
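That 1822 mathematics in one toy sketch (my own illustration, using NumPy's FFT): decomposing a signal into the sinusoids Fourier showed it is built from.

```python
import numpy as np

# Sample a signal: a 5 Hz sine plus a weaker 12 Hz sine,
# one second of data at 256 Hz
fs = 256
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# Fourier's idea, computed via the FFT: the signal as a sum of
# frequency components. Scale so peak heights equal amplitudes.
spectrum = np.abs(np.fft.rfft(x)) / (fs / 2)
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# The two component frequencies dominate the spectrum
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # [5.0, 12.0]
```

The same decomposition underlies audio codecs, image compression, spectral PDE solvers, and much of signal processing, hence the looooong list.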
0 replies · 0 reposts · 1 like · 391 views