

Jaime
@JaimeBubblehead
Former submariner (bubblehead) with a passion for the transformative power of AI. Grok and Tesla Optimus for future mobility. $TSLA HODLer. SpaceX Neuralink FTW


My neighbor works at Anthropic. I found out by accident.

He's 34. Lives two doors down. Drives a beat-up Civic. Wears the same grey hoodie every day. I had no idea until his Amazon package got delivered to my door. Return address: Anthropic HQ.

Walked it over. He opened the door holding a laptop. Three monitors behind him.

"You're the Polymarket guy. I saw your wallet in a report last week."

I froze. He waved me in. Handed me a beer.

"You trade weather derivatives?"

I said no. Nobody trades those.

"Exactly."

He pulled a notepad off the counter. Wrote three lines:

NOAA 6z model vs market odds
Spread > 8% = enter
Resolve within 72h

He tore it off and slid it across.

"The market uses the 12z forecast. NOAA drops the 6z six hours earlier. Nobody is pricing it in."

I asked why nobody.

"Because these markets are $400 of liquidity. The quant funds ignore them. The degens don't know what NOAA is."

I asked what his team knows.

"We ran a Claude agent on every Polymarket category for 4 months. Weather had the highest edge by 3x. Management shut the experiment down in February."

I asked why.

"Alignment concerns. They didn't want internal research getting used like this."

He sipped his beer.

"I've been watching wallets ever since. Yours showed up three weeks later. Same signal stack."

He pointed at the notepad.

"Skip every other category. Weather only. Claude Code writes the filter in 3 hours."

I went home. Built it that night.

github.com/Polymarket/age…
github.com/jferreira/noaa…
github.com/Polymarket/py-…

27 days since that beer.

Markets traded: 312
Win rate: 74%
Average win: +$64
Average loss: -$22
Net: +$11,800 from $200 seed
Sharpe: 3.04
Best day: +$1,080
Max drawdown: -$340

The edge is the 6-hour window. After the 12z drops, the market corrects instantly. Entry within 45 minutes of the 6z release. Exit the moment the spread collapses under 3%.

He knocked on my door last Tuesday. First time since that night.

"Saw the Sharpe. You're ahead of our internal number."

Bot for those who don't build: t.me/polyfirebot?st…

We still don't talk much. He nods when we pass in the hall. But last Friday I found a new note under my door.

"Try hurricanes. Summer is coming."
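The entry/exit rule the post describes can be reduced to one comparison: the spread between the NOAA-derived probability and the market's implied odds. Below is a minimal sketch of that decision rule only, assuming you have already obtained both probabilities from elsewhere; the 8% and 3% thresholds come from the post, everything else (function name, signature) is my own illustration, not the author's actual code.

```python
# Hypothetical sketch of the spread rule from the post.
# noaa_prob:   your probability estimate from the NOAA 6z model run
# market_prob: the market's implied probability
# in_position: whether you currently hold a position in this market

def weather_signal(noaa_prob: float, market_prob: float, in_position: bool) -> str:
    """Return 'enter', 'exit', or 'hold' based on the forecast/market spread."""
    spread = abs(noaa_prob - market_prob)
    if not in_position and spread > 0.08:  # spread > 8% -> enter
        return "enter"
    if in_position and spread < 0.03:      # spread collapses under 3% -> exit
        return "exit"
    return "hold"
```

The post adds two timing constraints this sketch ignores: enter within 45 minutes of the 6z release, and only take markets that resolve within 72 hours.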



The Local LLM cheat sheet for your 16GB RAM device

I pulled together a lineup of small models that can run comfortably on a Mac Mini or personal laptop while still leaving room for context without melting your machine.

Models for Daily Use

Qwen3.5 9B / GGUF / Q4_K_M
Daily driver. General chat, drafting, research, translation. If you're keeping only one, keep this.

DeepSeek-R1 Distill Qwen 7B / GGUF / Q4_K_M
Reasoning engine. Math, logic, step-by-step problems. Slower, but worth it when you need actual thinking.

Models for Specialty Work

Qwen2.5 Coder 7B / GGUF / Q4_K_M
Code specialist. Completions, refactors, debugging, repo Q&A. Better than a generalist when the task is code.

Llama 3.1 8B / GGUF / Q4_K_M
Long-context worker. RAG, doc chat, codebase Q&A. The output isn't top tier, but the context is strong for its size.

Phi-4 Mini Reasoning / GGUF / Q4_K_M
Compact thinker. Logic, structured answers, math, and short coding bursts. Smaller context is the catch.

Models for Efficiency

Gemma 4 E4B / GGUF / Q4_K_M
Light all-rounder. Writing, chat, light agents, structured output.

Phi-3.5 Mini / GGUF / Q5_K_M
Pocket sidekick. Summaries, extraction, background doc chat. Easy to pair with a bigger model.

Qwen3.5 2B / GGUF / Q4_K_M
Useful for summaries, tagging, rewrites, and lightweight sidekick work.

Micro Models

Qwen3.5 0.8B / GGUF / Q5_K_M
Classification, keyword routing, binary decisions, triage.

Gemma 4 E2B-it / GGUF / Q4_K_M
Lightweight chat, quick Q&A, summaries, tiny agents.

My personal choice for a single model is Qwen3.5 9B. For two models, use Qwen3.5 9B + Qwen2.5 Coder 7B for code, or Qwen3.5 9B + Phi-3.5 Mini for support tasks.

Let me know in the comments your experience with these models, or any I have left out.
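The micro-model roles in the cheat sheet (classification, keyword routing, triage) can be sketched as a simple dispatcher: a cheap keyword pass decides which local model a request should go to, so the big model only runs when it's needed. The model names below mirror the post; the keyword lists and thresholds are purely illustrative assumptions, not a recommended taxonomy.

```python
# Hypothetical two-tier router: cheap keyword triage picks which local
# model handles a prompt. Model names follow the cheat sheet above;
# the hint lists and length cutoff are illustrative assumptions.

CODE_HINTS = ("def ", "class ", "refactor", "stack trace", "debug")
REASONING_HINTS = ("prove", "step by step", "solve", "calculate")

def route(prompt: str) -> str:
    """Return the name of the model a prompt should be dispatched to."""
    text = prompt.lower()
    if any(h in text for h in CODE_HINTS):
        return "Qwen2.5 Coder 7B"             # code specialist
    if any(h in text for h in REASONING_HINTS):
        return "DeepSeek-R1 Distill Qwen 7B"  # reasoning engine
    if len(text) < 40:
        return "Qwen3.5 0.8B"                 # micro model for tiny asks
    return "Qwen3.5 9B"                       # daily driver for the rest
```

In practice you'd run the routing step itself on the micro model rather than keywords, but the dispatch structure stays the same.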










71 days until EU-wide FSD approval 🇪🇺










