

views represent my employer
@AdamAbramson
past: @virginia_tech @newsday @fallontonight @snl @latelateshow. present: NIL stuff - @rowzagency. pretty mid on here






I'm going to post this very long explanation ONE TIME for the dorks that missed this X phenomenon long ago.

On X, the phrase “trouble in ___ (city)” has become a small trolling meme used during recruiting drama in college football.

How the meme works: When a player de-commits from a school or announces he's reopening his recruitment, fans rush to speculate about what it means for that program. Someone will intentionally post something like: “Trouble in Athens 👀” But the player might actually be leaving a totally different school. The poster purposely names the wrong city TO BAIT REACTIONS.

Why people do it:

1. Engagement bait: Fans from the named program rush into the replies saying things like “What are you talking about?” “He wasn't even committed here.” “Wrong city, stupid.” Every correction, quote-tweet, and argument boosts the post's reach in the algorithm.

2. Fanbase trolling: It needles a big fanbase by implying their program has problems. Even when people know it's wrong, they react emotionally.

3. Inside-joke culture: Recruiting Twitter has developed a culture of sarcasm and “bit posting.” Once people recognize the joke format, it becomes a running gag.

Example: A player de-commits from Florida State Seminoles football in Tallahassee. Someone tweets “Trouble in Austin 👀” — referencing Texas Longhorns football in Austin, which has nothing to do with the story. Fans of Texas jump in to defend their program → the tweet goes viral → the troll succeeds.

The real purpose: It's basically algorithm farming mixed with fanbase trolling. The goal isn't accuracy — it's to trigger replies and quote-tweets, which pushes the post into more timelines.

So thanks, dorks. 🙏 Works every time.👍


2024 football or 2026 basketball, which one is the bigger letdown?



I've decided to leave OpenAI.

I'm incredibly proud of all the work I've been part of here, from helping create the reasoning paradigm with @MillionInt, scaling up test-time compute with @polynoamial, working on RL algorithms with my fellow strawberries, shipping o1-preview (which started life as one of my derisking runs), to post-training o1 and o3 with @ericmitchellai, @yanndubs, and many others. I'm most proud of having led the post-training team here for the last year -- the team has done incredible work and shipped some really smart models, including GPT-5, 5.1, 5.2, and 5.3-Codex.

OpenAI genuinely has some of the most talented researchers I have ever met, and I have learned more than I could have imagined since joining as a new grad. I want to thank @markchen90, @FidjiSimo, @sama, and @merettm for all their support over my time here, and too many collaborators to name for the insights, ideas, and just plain fun we have had working together.

After leading post-training for a year, though, I'm longing to start fresh and return to IC research work. I've been thinking about going back to technical research for quite some time, and I genuinely believe my colleagues and team here are set up to succeed going forward without me.

I'm personally very excited for my next chapter -- I'm proud to be joining @AnthropicAI to get back into the weeds in RL research, and I'm looking forward to supporting my friends there at this important time. Many of the people I most trust and respect have joined Anthropic over the last couple of years, and I'm excited to work with them again. I have also been very impressed with Anthropic's talent, research taste, and values, and I'm excited to be part of what the company does next!



“VIrGiniA TecH wOn’T gET anY PenN sTAte trANsFeRs”















