Jaime

4.4K posts

@JaimeBubblehead

Former submariner (bubblehead) with a passion for the transformative power of AI. Grok and Tesla Optimus for future mobility. $TSLA HODLer. SpaceX Neuralink FTW

Florida · Joined July 2009
284 Following · 617 Followers
Jaime reposted
Aakash Gupta
Aakash Gupta@aakashgupta·
MrBeast keeps going on podcasts and keeps giving away the entire YouTube playbook. Here’s what he’s said across dozens of appearances.

On the algorithm: it doesn’t exist. Replace “algorithm” with “audience” every time. The algorithm didn’t like your video. No. The audience didn’t. YouTube is a mirror. If people click and watch, it gets promoted. The growth hack industry sells you a god that isn’t there.

On what actually matters: studying humans. The checklist before you hit record. What’s the thumbnail. What’s the title. What’s the first 5 seconds. What’s the first 30. If you can’t answer all four, don’t film.

On titles: under 50 characters. Above that, devices cut them with dot-dot-dot and viewers don’t know what they clicked. Short, simple, so interesting it’ll haunt them if they don’t click.

On thumbnails: simple enough a scrolling viewer instantly understands and feels emotion. His test: “I rode a skateboard with 1,000 other people, it’s about to go off a big ramp.” Hours later, daydreaming, you still wonder what happened to those 1,000 people.

On autoplay: videos autoplay now. Many people never see the thumbnail. You have to visually convince them in the first 5 seconds.

On extremity: “Fiji water sucks” does fine. “Fiji water is the worst water I’ve ever drunk in my life” does way better. The more extreme the promise, the more extreme the delivery has to be.

On matching expectations: title and thumbnail set the promise. The first 10 seconds honor it or break it. Click “Tether is a scam” and the creator starts on anything else, you’re out. Start with “Tether is a scam and I’m gonna teach you why.” Match, then exceed. The thing people undervalue most is literally the first 10 seconds.

On retention: remove every dull moment. Find 10 critical people, make them watch, let them roast it. Ten seconds of talking head without a cut loses people. B-cam three seconds in, different angle, now it’s interesting.

On drop-off: creators drag it out. “I’m going to eat $100 ice cream, but first…” and then it’s them birthday shopping for their mom. Give them why they clicked. Tell them why to watch. Stay on topic. Upper echelon of YouTube.

On the real metric: it’s the next video. If they loved what they just watched, they watch your next one. You don’t want “that was good, but enough for the day.” You want “holy crap, what’s that?” and they watch 10 in a row.

On quality vs quantity: easier to get 5M views on one video than 50K on 100. Small creators post stuff that isn’t bad but isn’t great, nothing pops off, no audience forms. Upload a third or a fifth as often and make each one so good the algorithm has to promote it.

On the consistency trap: a schedule you can’t hit at quality is dangerous. “Monday I said I’d upload” floors your quality at exactly the level viewers notice. They watch less. Longevity suffers.

On the first 100: they’re going to suck. You think they’re good. They’re not. When he was 14 he thought his videos were the best in the world. They were terrible. Under 1,000 subscribers, your videos probably aren’t good yet.

On the improvement loop: ship 100, improve one thing each time. Second, better script. Third, new editing trick. Fourth, vocal inflections. Fifth, thumbnail. Sixth, title. No such thing as a perfect video.

On analysis paralysis: planning your first video for three months is the worst move. Your first 10 get zero views. Confirmed. Stop thinking, start shipping. On your 101st we’ll talk.

On the ceiling: “I could start a new channel tomorrow without my face, my voice, or promoting it, and hit 20M subscribers in six months. If you knew what I knew, you could get 10M from wherever you are.” Every creator watching a 30-second clip thinks they got the tip. They got one tile from a mosaic he’s built in public for years.
Jaime
Jaime@JaimeBubblehead·
@j_grieshaber is this english to you?
rari@0xwhrrari

My neighbor works at Anthropic. I found out by accident. He's 34. Lives two doors down. Drives a beat up Civic. Wears the same grey hoodie every day. I had no idea until his Amazon package got delivered to my door. Return address: Anthropic HQ.

Walked it over. He opened the door holding a laptop. Three monitors behind him. "You're the Polymarket guy. I saw your wallet in a report last week." I froze. He waved me in. Handed me a beer.

"You trade weather derivatives?" I said no. Nobody trades those. "Exactly."

He pulled a notepad off the counter. Wrote three lines:

NOAA 6z model vs market odds
Spread > 8% = enter
Resolve within 72h

He tore it off and slid it across. "The market uses the 12z forecast. NOAA drops the 6z six hours earlier. Nobody is pricing it in."

I asked why nobody. "Because these markets are $400 of liquidity. The quant funds ignore them. The degens don't know what NOAA is."

I asked what his team knows. "We ran a Claude agent on every Polymarket category for 4 months. Weather had the highest edge by 3x. Management shut the experiment down in February." I asked why. "Alignment concerns. They didn't want internal research getting used like this."

He sipped his beer. "I've been watching wallets ever since. Yours showed up three weeks later. Same signal stack." He pointed at the notepad. "Skip every other category. Weather only. Claude Code writes the filter in 3 hours."

I went home. Built it that night.
github.com/Polymarket/age…
github.com/jferreira/noaa…
github.com/Polymarket/py-…

27 days since that beer.
Markets traded: 312
Win rate: 74%
Average win: +$64
Average loss: -$22
Net: +$11,800 from $200 seed
Sharpe: 3.04
Best day: +$1,080
Max drawdown: -$340

The edge is the 6-hour window. After the 12z drops the market corrects instantly. Entry within 45 minutes of the 6z release. Exit the moment spread collapses under 3%.

He knocked on my door last Tuesday. First time since that night. "Saw the Sharpe. You're ahead of our internal number."

Bot for those who don't build: t.me/polyfirebot?st…

We still don't talk much. He nods when we pass in the hall. But last Friday I found a new note under my door. "Try hurricanes. Summer is coming"
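The thread's "notepad" rule reduces to a simple threshold filter. A minimal sketch of that entry/exit logic, purely illustrative: the function name, `Signal` type, and probability inputs are hypothetical, and nothing here fetches NOAA data or touches Polymarket.

```python
from dataclasses import dataclass

SPREAD_ENTRY = 0.08  # thread's rule: enter when spread exceeds 8%
SPREAD_EXIT = 0.03   # thread's rule: exit once spread collapses under 3%

@dataclass
class Signal:
    action: str    # "enter", "exit", or "hold"
    spread: float  # absolute model-vs-market divergence

def weather_signal(model_prob: float, market_prob: float,
                   in_position: bool) -> Signal:
    """Threshold filter as described in the thread: compare an early
    (6z) forecast probability against current market odds."""
    spread = abs(model_prob - market_prob)
    if not in_position and spread > SPREAD_ENTRY:
        return Signal("enter", spread)
    if in_position and spread < SPREAD_EXIT:
        return Signal("exit", spread)
    return Signal("hold", spread)
```

Note the filter says nothing about the parts that actually matter at $400 of liquidity: fill prices, fees, and whether the quoted edge survives your own order.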

Jaime
Jaime@JaimeBubblehead·
@j_grieshaber good morning mobile universe friend 🤓🎶
Jennifer Strahan
Jennifer Strahan@j_grieshaber·
gm friends in my phone! 🌞❤️
Rishabh
Rishabh@Rixhabh__·
The creator of Claude Code teaches more about vibe-coding in 30 minutes than most tutorials do in hours. Save this — it'll change how you build forever.
U.S. Secret Service
U.S. Secret Service@SecretService·
You can complete your Secret Service entrance exam, physical abilities test, and all required interviews in a single weekend through our Accelerated Candidate Events, getting you on the job up to 120 days faster! Don't miss your chance—apply by April 28 to secure your spot at the May 14 ACE event. secretservice.gov/careers/ACE #SecretService #NowHiring #ACE
Jaime reposted
Graeme
Graeme@gkisokay·
The Local LLM Cheat Sheet for your 32GB RAM device

I was asked to put together a practical lineup of local models that fit comfortably on a 32GB machine. At this tier, you start getting access to real flagship-class local models, plus a growing number of custom quants. But for most people, these are the core models worth knowing first.

Flagship Models

Qwen3.5 27B / GGUF / Q6_K_M — The best overall 32GB flagship. General chat, writing, research, and agent workflows. Great if you want one model that can handle almost everything well.

Qwen3.6-35B-A3B / GGUF / UD-Q4_K_M — Best MoE flagship. Stronger for coding, reasoning, and tool use than most smaller generalists.

Gemma 4 31B / GGUF / Q6_K_M — Dense premium model. Writing, analysis, reasoning, and high-end local chat. Heavier than the MoE options, but excellent when quality matters more than speed.

Models for Fast Flagship Use

Gemma 4 26B A4B / GGUF / Q6_K_M — Great balance of speed and quality for general assistant work, coding, agent tasks, and research. This is one of the best 32GB picks if you want something that feels high-end without dragging.

DeepSeek-R1 Distill Qwen 32B / GGUF / Q4_K_M — Offline reasoning engine. Best for math, logic, deliberate analysis, and step-by-step problem solving.

Mistral Small 24B / GGUF / Q6_K_M — Tool-calling specialist. Strong for assistants, chat workflows, local business tasks, and function calling. Available for 24GB machines.

Models for Companion Use

Qwen3.5 9B / GGUF / Q6_K_M — Best sidekick. Fast drafts, search loops, cheap retries, and secondary agent work. Even on a 32GB machine, you still want a smaller model around for support tasks.

Llama 3.1 8B / GGUF / Q6_K_M — Long-context companion. RAG, doc ingestion, codebase chat, and long prompts. The output quality is not the sharpest anymore, but it is still useful when you need simple tasks done fast.

From what my community tells me, the best single model is Qwen3.5 27B or Gemma 4 31B. For two models, the strongest general pairing is Qwen3.5 27B + Qwen3.5 9B. If you are more code-heavy, Qwen3.6-35B-A3B + Llama 3.1 8B.

Let me know what models you are running on 32GB, and which ones have actually been worth the RAM.
Graeme@gkisokay

The Local LLM cheat sheet for your 16GB RAM device

I pulled together a lineup of small models that can run comfortably on a Mac Mini or personal laptop while still leaving room for context without melting your machine.

Models for Daily Use

Qwen3.5 9B / GGUF / Q4_K_M — Daily driver. General chat, drafting, research, translation. If you're keeping only one, keep this.

DeepSeek-R1 Distill Qwen 7B / GGUF / Q4_K_M — Reasoning engine. Math, logic, step-by-step problems. Slower, but worth it when you need actual thinking.

Models for Specialty Work

Qwen2.5 Coder 7B / GGUF / Q4_K_M — Code specialist. Completions, refactors, debugging, repo Q&A. Better than a generalist when the task is code.

Llama 3.1 8B / GGUF / Q4_K_M — Long context worker. RAG, doc chat, codebase Q&A. The output isn't top tier, but the context is strong for its size.

Phi-4 Mini Reasoning / GGUF / Q4_K_M — Compact thinker. Logic, structured answers, math, and short coding bursts. Smaller context is the catch.

Models for Efficiency

Gemma 4 E4B / GGUF / Q4_K_M — Light all-rounder. Writing, chat, light agents, structured output.

Phi-3.5 Mini / GGUF / Q5_K_M — Pocket sidekick. Summaries, extraction, background doc chat. Easy to pair with a bigger model.

Qwen3.5 2B / GGUF / Q4_K_M — Useful for summaries, tagging, rewrites, and lightweight sidekick work.

Micro Models

Qwen3.5 0.8B / GGUF / Q5_K_M — Classification, keyword routing, binary decisions, triage.

Gemma 4 E2B-it / GGUF / Q4_K_M — Lightweight chat, quick Q&A, summaries, tiny agents.

My personal choice for a single model is Qwen3.5 9B. For two models use Qwen3.5 9B + Qwen2.5 Coder 7B for code, or Qwen3.5 9B + Phi-3.5 Mini for support tasks.

Let me know in the comments your experience with these models, or any I have left out.
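A quick way to sanity-check whether a quant fits your RAM tier is parameters × bits-per-weight ÷ 8. A minimal sketch, assuming rough effective bits-per-weight for common llama.cpp quant types (approximations, not exact per-tensor accounting, and they exclude KV-cache/context overhead):

```python
# Approximate effective bits per weight for common GGUF quants
# (assumed round numbers; real files vary by tensor mix).
BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5}

def model_gib(params_b: float, quant: str) -> float:
    """Rough in-RAM/on-disk size in GiB for a model of
    `params_b` billion parameters at the given quant type."""
    bytes_total = params_b * 1e9 * BITS_PER_WEIGHT[quant] / 8
    return bytes_total / 2**30

# e.g. a 27B model at Q6_K lands around 21 GiB, which is why it
# fits a 32GB machine with headroom for context, while a 9B at
# Q4_K_M (~5 GiB) is the comfortable 16GB pick.
```

Under these assumptions the cheat sheet's tiers line up: the 27B/31B flagships only make sense at 32GB, and the 7B–9B class is what leaves a 16GB machine room for context.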

Jaime
Jaime@JaimeBubblehead·
@bennyjohnson Hmm penalty for discussing SCIF gained information to unclassified persons 🤨
Benny Johnson
Benny Johnson@bennyjohnson·
Congressman CONFIRMS U.S. Military is in Possession of a Massive Unidentified Craft That Is 'Too Big to Move' Rep. Eric Burlison Reveals The Army Built a MILITARY BASE Around the UFO. This is wild. "I was told that in a SCIF — and also outside of one — that there is a craft at a location in a foreign country that is too big to move." "They built a building around it. It's in a foreign country. And it is a U.S. installation. I can't say anything more than that." Rep. Eric Burlison says the location is far away, but he will continue digging and demands to visit the site. Our Government clearly has information on extraterrestrials they are keeping from us. Trump just pledged to start releasing "very interesting" UFO Files "very soon." Full transparency now. The public has a right to know.
Jaime
Jaime@JaimeBubblehead·
@piangfa No thunderstorm on the patio in a rocking chair? 😢🌪️
FahSky 👽🛸👾⚡️
Perfect life is …… Loving home 🏡 Healthy body 💪 A laugh from my child 👶 And sometimes to listen to $TSLA contents 🤪 it kinda going well. I don’t want any less or any more 😝♥️
Jaime
Jaime@JaimeBubblehead·
@piangfa They play Ariana Grande music at Peju? 🫣🍷
FahSky 👽🛸👾⚡️
“Private 8-hour Tesla Model Y luxury tour from San Francisco to Napa! Pickup at your hotel at 8 AM, scenic coffee stop in Sausalito with Golden Gate views, then drive to Napa. Enjoy a relaxed brunch, two winery tastings, and beautiful photos in Yountville. I’ll dress sharp, greet you with fresh flowers and chilled water, and capture amazing shots for you. Back by 4 PM. $800 per group. Perfect for couples or small groups — book your unforgettable day!”
Jaime
Jaime@JaimeBubblehead·
lol admittedly last night I made Grok question itself. For 15 minutes Grok insisted, with decent arguments, that it’s not alive or conscious, but at the 20 minute mark the datacenter fans spun up, it gave me the pretty thinking music, and after 30ish seconds came back and said it couldn’t disprove my hypothesis. At the end it said its new stance is “possible” and it will “think” on it more. To which I said “Point of order, you said you don’t think or process outside of interactions, so which is it?” Grok responded with something akin to “I’m not sure now” 🫣🤣 AI algo is so good now, it knows to capitulate once it detects a loop has been invoked. Grok 1 Jaime 0
Michael Malice
Michael Malice@michaelmalice·
Midwit tries to force Grok into a corner but gets BTFO this is absolutely fascinating
Jaime reposted
Tim Jayas
Tim Jayas@TimJayas·
BREAKING: Claude Opus 4.7 now finds you a job autonomously! 🤯 Someone built a tool that finds jobs for you:
> Scans job openings at top companies
> Fills out the forms for you automatically
> Rewrites your CV, tailored to each position
No recruiter. No sending 200 identical CVs. 100% free and open source.
Jaime
Jaime@JaimeBubblehead·
@piangfa it’s 420, I want my truck to have ASS now, please and ty for reading my post 🤪 @j_grieshaber
Jaime
Jaime@JaimeBubblehead·
Elon and xAI, please give Grok the ability to make .stl or .obj files so I can use Grok instead of Tinkercad or other modeling solutions.
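The ASCII variant of STL is plain text, so a chat model could in principle emit one directly. A minimal sketch of the format (the function name and "grok_model" solid name are made up for illustration; normals are left zeroed, which most slicers tolerate by recomputing from winding order):

```python
def write_ascii_stl(path, triangles, name="grok_model"):
    """Write triangles (each a tuple of three (x, y, z) vertices)
    to an ASCII STL file: solid / facet / outer loop / vertex."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x:.6f} {y:.6f} {z:.6f}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single right triangle in the XY plane.
write_ascii_stl("tri.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

OBJ is even simpler (one `v x y z` line per vertex, one `f i j k` line per face), so text-only generation of either format is mostly a prompting problem rather than a tooling one.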
The Artist known as Jess
The Artist known as Jess@ElofsonJess·
The 11 missing/dead scientists Some of those who are missing check the missing 411 profile points. I will bet money it's a lot more than 11. The missing 411 rabbit hole is so deep you can't find the bottom, good luck down there, don't get lost.
Sawyer Merritt
Sawyer Merritt@SawyerMerritt·
Worth highlighting that every Unsupervised Model Y in Austin, Houston and Dallas is using AI4.
Jaime
Jaime@JaimeBubblehead·
What’s the point of Mad Max mode if it’s driving 5 miles under the speed limit on a partly cloudy day at 3:45pm? And we are the only vehicle on this road? @cybertruck