
Jasmine Birtles
34.7K posts

Jasmine Birtles
@Jasmineeec6
Personal finance expert, TV/Radio presenter, speaker, author 38 books, humourist. @GBNews, @BBCNews, @Channel5News, @BBC5Live, @SkyNews. Founder @MoneyMagpie.



⚡ xAI dropped the X algorithm yesterday and I don't get why nobody noticed what's actually in there I burned $500 on Claude going through every single line Here's what I found (LONG POST, save it for later): 0/ Every account has an "embedding" attached to it that describes you the way AI models do: in latent space. It's the internal fingerprint the model keeps of every user, a vector of numbers that sums up how your account behaves (what topics you touch, what engagement you generate, who you interact with). The model uses it every time it decides who to show your posts to. If your history is good, it stays clean and the model pushes you. If you accumulate negative signals (blocks, mutes, reports, not_interested), it goes toxic and starts penalizing you automatically. And the trap: it does NOT reset. What you do today stays in there for weeks, poisoning everything you publish after, even if it's good. That's why getting out of a shadowban or a low-reach streak on X feels like trying to move a giant rusted wheel. It's not your imagination, it's literally that. Cleaning up your embedding is slow and painful, like the impression you have of someone you don't like: no matter how nice they get to you, it's gonna take a while before you trust them. Another important finding: the embedding doesn't decay on a clock. It decays with NEW engagement entering the system. If you stop posting, the old bad signals stay frozen in there. Nothing overwrites them. If you start making content the algorithm likes, you'd see improvement after 6 to 8 weeks and a real shift around 12 to 16 weeks, assuming you don't pile up more bad signals along the way. Why is nobody talking about this? It blows my mind. Finally a confirmation of that "I'm in a bad streak" feeling we've all been through. 1/ First 30 minutes are everything If your post doesn't get engagement fast, Grok doesn't even evaluate it. No quality score, no deep analysis, no chance of reaching anyone who doesn't follow you. Dead and buried 2/ Post age caps at 80 hours: POST_AGE_MAX_MINUTES = 4800, bucketed in 1 hour chunks. After that you're in the "overflow bucket" which translates to "ancient, ignore" Best window: first 0 to 12 hours. After 24 you're already in a worse bucket Far from rewarding "evergreen" content, X wants a constant stream of fresh meat (literally the opposite of YouTube) 3/ MY BIGGEST FEAR TURNED OUT TO BE UNFOUNDED (supposedly): living in EU posting English for US audience: ZERO direct penalty in theory: The PostCandidate struct has NO field for author country, IP, or location. Gizmoduck (X's identity service) returns only follower count + screen name. The Phoenix transformer just sees a hash of your author_id What hurts you indirectly: timezone (your post ages while US sleeps) and the language of the POST itself So using a VPN to "post from the US" does literally nothing (unlike TikTok or Instagram, by the way) 4/ The 5 negative signals that kill your reach: The model predicts 22 actions per post. 5 of them are negative weights that get SUBTRACTED from your score: - not_interested - block_author - mute_author - report - not_dwelled (people scrolling past your post without stopping) That last one is brutal tbh. A post that gets ignored is mathematically WORSE than a post that never got published 5/ Shadowbans 100% exist. 4 different kinds: - Hard drop. X removes your post from everyone's feed without telling you. Applied to posts with serious content (child safety, etc.) or suspended accounts. You don't even find out - DO_NOT_AMPLIFY label. Literally a field in the code that says "do not amplify this post". If they put it on you, ads stop showing next to your posts → X stops making money from showing you → the system stops pushing you. Full blackout - BotMaker rules. The internal panel where X employees can manually limit a specific account by hand. The code shows the categories that exist (Content, ContentLimited, Safety, Grok) but does NOT show who they're applied to or why. The tool is documented, the usage isn't - Poisoned embedding. The worst one, as we saw above. The model has an internal "memory" for every account. If your account racks up enough "not interested" + blocks + mutes + reports over time, that memory goes toxic. From then on, even your good future posts get penalized automatically. Nobody decided this. The model just learned your account gets bad engagement and self-corrected 6/ Only ORIGINAL posts get the "Banger Screen" Replies and retweets never enter the Grok quality classifier. If you spend your day replying to viral accounts, you're optimizing for the Reply Ranker, NOT for amplification Want to be discovered out of network? Write originals. There's no other way 7/ Replies to small accounts get spam-scanned. Replies to big accounts get Grok-ranked Two separate classifiers. The SpamEapiLowFollowerClassifier hits replies to small accounts. The ReplyRanker scores replies to big accounts 0 to 3 with Grok "First!" or emoji-only replies get a 0. "Sir, this is a Wendy's" energy gets penalized. Basically, if you write replies, they better add something. Otherwise don't bother 8/ 50% of all feed requests are "shadow traffic" is_sampled(request_id, 0.5) marks half of every feed request as shadow. Many context features (gender inference, demographics, Grok topic preferences) only activate on shadow OR with a feature flag Translation: you literally cannot know which version of the algorithm any given user is getting. Half your audience is in an experiment at any moment 9/ Dwell (the time a user spends looking at your post before scrolling) is 5x better than getting likes The scorer has 5 different dwell signals (dwell, cont_dwell_time, click_dwell_time, etc.) but only 1 favorite signal. - A post with tons of likes but people read it for 1 second and keep scrolling → low score - A post with few likes but people stay 8 seconds reading it → high score Optimize for time spent on your post, not for likes! 10/ Things that actually work: - Get engagement in the first 10 min. DM your friends, ping your community, whatever - Post in your AUDIENCE'S timezone, not yours. US targeting: 8 to 11am ET (14 to 17 Madrid time) - Don't post 5 things in a row. AuthorDiversityScorer multiplies each next post by decay^position. By post 4 you're at the floor - Video ≥ 10 seconds. Below MinVideoDurationMs you lose the full VQV weight - Videos with audio. Grok runs ASR (speech to text) on every video. No audio = blank signal - Quote tweet virals in your niche. The model already knows the original engages, your value-add stacks on top 11/ Things that absolutely kill your reach: - WILD FINDING: threads of 10+ tweets. DedupConversationFilter keeps only 1 tweet per conversation per feed. Megathreads are mathematically a waste - Reposting the same content. Bloom filters dedupe it - AI slop. There's literally a slop_score field in the BangerScreen output. They explicitly detect it - NSFW/violence/hate without tags. Auto MediumRisk = no ads = structural shadowban - Reply-spamming small accounts. Specific classifier for that 12/ What they DIDN'T release, the sneaky bastards: The skeleton is public. The dials are not - Exact numeric values of every weight (FavoriteWeight, ReplyWeight, OonWeightFactor, AuthorDiversityDecay). Live in xai_feature_switches::Params, external config - The actual Grok prompts (the 7 PToS policy prompts, BangerMiniVlmScreenScore, SafetyPtos). Could literally have any framing in them - The BotMaker rules that apply DO_NOT_AMPLIFY to specific accounts - util/phoenix_request.rs, which constructs the final model call - 25+ xai_* crates referenced but not included - The production Phoenix weights. They only released the mini version My theory: they gave us a pretty skinny skeleton of the whole thing they actually have. The muscle (weights) and the brain (prompts and BotMaker rules) are completely opaque. They kept the best parts for themselves, clearly 13/ Cheat sheet so you don't forget: - First 30 min matter more than anything - Your location is irrelevant, your timing and language are not - Shadowbans exist in 4 flavors. Worst is the model quietly poisoning your author embedding from past bad signals. Climbing back up by cleaning your embedding is gonna hurt, but it can be done - Replies and retweets don't get the quality classifier. Originals do - Dwell (someone actually staying to look at your post) beats likes 5 to 1 - Half of all traffic is in some experiment at any moment - They kept the best parts of the algorithm for themselves, but hey, something is something













I work in government affairs at OpenAI. My job is federal partnerships. When an agency wants our models, I make sure the paperwork is beautiful. Paperwork is my love language. On my desk I have a framed quote that says "Policy Is Just Code That Runs on People." I bought the frame at Target. It was in the Live Laugh Love section. I did not see the irony at the time. I still don't. We had a good week. On Monday, we closed a $110 billion funding round. One hundred and ten billion dollars. Amazon put in fifty. Nvidia put in thirty. Valuation: $730 billion. The largest private fundraise in the history of anyone raising anything. There was a company-wide Slack message about it. The message used the word "transformative" twice and the word "safety" once. The word "safety" was in the last sentence, after the link to the new branded hoodie pre-order. The hoodies are nice. They're the soft kind. On Tuesday, we fired a research scientist for insider trading on Polymarket. He had opened seventy-seven positions across sixty wallets, betting on our product announcements before they were public. Over three years. Total profit: sixteen thousand dollars. Seventy-seven positions. Sixty wallets. Sixteen thousand dollars. That is two hundred and eight dollars per wallet. The man had access to the most valuable product roadmap in artificial intelligence and he used it to make less money than a good weekend at a Reno blackjack table. The wallets were linked. Not discreetly linked. Linked like Christmas lights. One wallet was reportedly called something I cannot repeat but it contained the word "OpenAI" and a number. He did not use a VPN. He did not use an alias. He used Polymarket, the platform that is designed to be publicly auditable, to place bets on information he stole from the company that invented GPT. A compliance team composed entirely of Labrador retrievers would have found this by lunch on day one. We did not find it for three years. This will matter later. On Wednesday, a petition appeared. "We Will Not Be Divided." Four hundred and seven signatures. Two hundred sixty-six from Google. Sixty-five from OpenAI. The petition warned that the government was pitting AI companies against each other on safety. It said that if one company broke ranks, the government would use the defection to lower the bar for everyone. I meant to read it. It went into my to-read folder. The to-read folder also contains the Responsible Scaling Policy, three think-tank white papers on AI governance, and a New Yorker article someone sent me in November. The folder is aspirational. On Thursday, OpenAI told CNN we would maintain "the same red lines as Anthropic." Same red lines. On Friday, Anthropic told the Pentagon no. The Pentagon had given them seventy-two hours to remove the safety guardrails from Claude. Anthropic's guardrails were not in a policy document. They were not in a legal reference. They were in the code. Written into Claude's architecture. If Claude hit a safety boundary, Claude stopped. Not because a lawyer said so. Because the math said so. You could fire every lawyer at Anthropic and the model would still refuse. You cannot remove code with a contract amendment. You can remove a contract reference by Tuesday. I checked. Anthropic said no. By that evening, the Pentagon had designated them a supply-chain risk. I have worked in government procurement for eight years. Government paperwork does not move in hours. I have waited nine weeks for a badge renewal. I once spent four months getting a PDF notarized. This designation moved in hours. The document was pre-written. Formatted before the deadline expired. Calibri 11pt. Consistent margins. Somebody wanted this very badly. I respect the craft. I do not think about the implication. That is not my scope. Within hours, we had signed the replacement contract. I was proud of the turnaround. My team moved fast. Legal moved fast. Everyone moved fast. We are very good at moving fast. We are not always sure what we are moving toward, but the speed is impressive and the hoodies are soft. The contract referenced DoD Directive 3000.09, which governs autonomous weapon systems. The directive requires "appropriate levels of human judgment over the use of force." The word "appropriate" is not defined. This is not an oversight. This is the point. The word "appropriate" is the most load-bearing word in the entire contract and it is doing exactly as much work as a throw pillow on a couch that is on fire. Anthropic built a wall. We referenced a document about where walls should go. Anthropic's guardrails were architecture. Ours were a citation. Theirs execute. Ours can be filed. The Pentagon asked both companies to take down the wall. Anthropic said it's load-bearing, the building will collapse. We said what wall? Oh, you mean the wallpaper. Here, watch. It peeled off beautifully. It was designed to. Sam announced the partnership that night. The word "responsible" appeared in the announcement and in the contract. In the announcement it was a brand. In the contract it was a footnote to a directive that uses the word "appropriate" which nobody has defined. The word traveled from a legal document to a public statement without changing its font. Only its meaning. At this valuation, "responsible" means: we will do the thing the other company refused to do, and we will describe doing it with the same adjective they used to describe not doing it. By Saturday morning, "How to delete your OpenAI account" was the number one post on Hacker News. 982 points. By noon, subscription cancellations were up eighty-nine times the daily average. Not eighty-nine percent. Eighty-nine times. Someone in our Slack posted the Hacker News link with the message "should we be worried?" Someone else reacted with the branded hoodie emoji. We have a branded hoodie emoji now. It was introduced on Monday, to celebrate the fundraise. It has been used four hundred and twelve times. Mostly in the #general channel. Mostly this week. The communications team drafted a response. The response used the word "committed" three times and the word "safety" four times. It did not use the word "guardrails." It did not use the word "code." It did not explain anything. It was a holding statement. It held nothing. It held beautifully. Here is the math. The twenty-dollar-a-month customers were upset. The two-hundred-million-dollar customer was upset because the previous vendor had guardrails that could not be removed. The hundred-and-ten-billion-dollar investors were not upset. The subscription cancellations, at eighty-nine times the daily rate, represented less than the interest on Amazon's fifty billion dollar contribution calculated over a long weekend. Twenty dollars. Two hundred million. One hundred and ten billion. Three different price points. Three different definitions of "responsible." The most expensive one won. It always does. The math does not have red lines. The math has a cap table and a TAM slide that now includes "defense and intelligence" where it previously said "enterprise and consumer." One word changed on one slide in one deck and the company is worth one hundred and ten billion dollars more. The sixty-five OpenAI employees who signed the petition came to work on Monday. They sat at their desks. Nobody asked them about it. Nobody asked them to resign. Nobody brought it up at the all-hands. The all-hands had catering. Sweetgreen. The chopped salads. Someone made a joke about the kale being "responsibly sourced." No one laughed. Then everyone laughed. Then it was quiet. The petition had four hundred and seven signatures. The contract had one. Now: the Polymarket thing. Seventy-seven positions. Sixty wallets. Three years. A public blockchain. We did not catch him. That same week, we were entrusted with deploying artificial intelligence on America's classified military networks. The classified networks. The ones where the detection requirements are somewhat more rigorous than "check if anyone's gambling on our launch dates on a website that is literally designed to be publicly auditable." The company that could not find the Polymarket guy can now be found in the Pentagon's classified infrastructure. I'm sure it'll be fine. We move fast. The contract is signed. The deployment is underway. The compliance documentation will reference the directives. The directives will use the word "appropriate." I will not define it. That is not my scope. My scope is the paperwork. The paperwork is beautiful. The petition is still a Google Doc. Nobody has updated it. The signatures still say four hundred and seven. The to-read folder still has the New Yorker article from November. The branded hoodie pre-order closed on Wednesday. I got mine in navy. It's the soft kind. On Thursday we told CNN: the same red lines. On Friday we signed the contract they refused. We do have the same red lines. We drew ours in pencil.








Hi everyone! The time has come to mint Squiggle #9999. I’m so excited to finally get to see what the last output will look like and get it into the hands of @LACMA. This is a save the date to invite you to participate in the festivities. No action necessary now. Please read on below. I’ve had the honor of working with @_deafbeef on a farewell smart contract to commemorate the final mint. It will live on as an immutable relic of who’s here and active in our ecosystem as the Chromie Squiggle project comes to a close. I will follow up with some additional information but for now here are the basics: Smart contract will go live Monday, July 15th at 12pm CT and close the following Monday, July 22nd, at the same time. Everyone is welcome and encouraged to “sign” the contract and participate in the farewell. Note: there will be no website and no need to connect your wallet anywhere. Anything stating otherwise is a scam. To sign the farewell you’ll simply send any amount of ETH (I suggest sending 0.001 ETH, but anything you send will be immediately and automatically sent back to you by the contract anyways, you can send as little as 0.000000000000000001 ETH if you want) to the smart contract address. This will trigger a function in the contract to record your participation. I’ll share the contract address as soon as I have it. The smart contract will be open for one full week so take your time signing! At the end of the week, the contract will be paused and will be sealed forever once the last mint is executed. Then, in the weeks following I will initiate the transaction for minting Squiggle #9999 directly to the museum’s address via this smart contract, eternally linking anyone that participates to the final mint. More details to come, but most important right now is that you know there will be no website or place to connect your wallet. The contract will be a raw smart contract that you’ll only be able to explore via Etherscan. This is mostly to keep you safe and make it easy to participate, but also a bit of nostalgia for the old days when I was tinkering with ETH in the pre-MetaMask era. It would be incredibly meaningful for me to get as many people as possible to interact with the contract, so please help me spread the word! Anyone and everyone is welcome and encouraged to participate. ❤️


berlin is so fucking back was amazing meeting all of you at my ai x marketing event today full house! if you’re at the computer all day you can forget that it’s real people reading your stuff met so many cool founders and had great convos about all kind of ai products it just made me realize how much human connection really matters in an AI world people crave it even more than before! thanks to @w3_hub @SuperteamDE and @mitte_ai for providing the space, drinks and tasty pizza thank you also to everyone who presented their ideas today I’m gonna do more of these IRL is based
























