Steven

7.2K posts


@asimovman

I'm a recovering journalist, an author, and an undercover time-traveling android (but keep that last part to yourself). Retweets/follows ≠ endorsements

Bremerton, Wash. Joined March 2009
4.8K Following · 3.4K Followers
Pinned Tweet
Steven
Steven@asimovman·
Steven tweet media
ZXX
3
2
20
0
Steven
Steven@asimovman·
@monday_flowers I went with the Mudita Kompakt. You should check out Jose Briones on YouTube, he does the best "dumb phone" reviews
1
0
1
21
moody monday
moody monday@monday_flowers·
Do one/some of my mutuals have the Light Phone III? Pls share your reviews!
4
0
0
137
Alex Griswold
Alex Griswold@HashtagGriswold·
The Waffle House teleportation story reminds me of the time Tucker said he had personally experienced a demon attack after waking up with claw marks, and the interviewer was like, "woooow, and there was no one else in bed??" and Tucker goes, "just my four dogs."
46
961
27.7K
462.7K
Steven
Steven@asimovman·
@brianjamesgage @DavidBadurina That's just a matter of style. AP style calls for spaces on either side of an em dash, so I wouldn't say it's any more a dead giveaway of AI than the Chicago style version.
0
0
0
52
Brian James Gage
Brian James Gage@brianjamesgage·
i'll raise you a bit of nuance here... The Chicago Manual of Style em dash is most certainly human—attaches the words, long dash... eloquent and beautiful to look at on the page... this type of em dash — the type that doesn't attach — means either a sloppy writer or AI... The short dash referenced in the post is simply a dash, not an em dash... so we'll just leave that one there.
6
0
5
1.5K
David Badurina - Enigma
David Badurina - Enigma@DavidBadurina·
This strawman is so tiring. No, using an em dash is not direct evidence that AI was used. Nobody is arguing that except for people that clearly don't understand why they've become a point of contention. Using 12 of them in the first 500 words along with section breaks every 300 words combined with repetitive phrasing like: "He was mad. NOT angry. NOT frustrated. Mad," with one-note characters and metaphors that make zero contextual sense within the moment of the story? In that case one of two things is true: 1) You need to improve your craft. 2) You used ChatGPT. Keep your em dashes. They're nothing more than a red flag for people that are tuned to pattern-matching. If I, as an editor, review your story and I see more than two on a page outside of dialogue (where someone is interrupted), you bet your ass I am going to be suspicious of every word.
Nicolas Cole 🚢👻@Nicolascole77

Anyone who says, “Using M-dashes means it was written by AI” is really just announcing to the world they don’t read. M-dashes have been used by writers from Dostoyevsky to Hemingway to Jane Austen to Toni Morrison for centuries. Go pick up a book.

50
6
157
23K
Steven
Steven@asimovman·
@Compy_Fi @KenTumin After the deposit hits you can immediately transfer the money to a different account
0
0
1
240
CompyFi
CompyFi@Compy_Fi·
@KenTumin Keeping $15k in a 0 interest savings account loses like $450 a year with inflation at 3% these days. Better move is MMF or market and borrow for liquidity.
1
0
6
5K
Ken Tumin
Ken Tumin@KenTumin·
For the last 3 years, Chase has periodically offered a $900 bonus for opening a personal checking and savings account with direct deposit in the checking and a $15k deposit into the savings. Enough people must keep the accounts open to make it worthwhile for Chase. The savings account pays nearly zero interest. This seems like the largest bank bonus nationally available for the smallest required deposit (for personal deposit accounts). It may not be long before we see the bonus increase to $1,000.
My Money Blog@mymoneyblog

Chase Bank $900 Checking + Savings Bonus w/ Coupon Code (Updated 2026) mymoneyblog.com/chase-bank-900…

5
1
147
148.4K
Anna Lux
Anna Lux@theannalux·
Airbnb, this is unacceptable. @Airbnb We stayed in a listing where loud drilling (building maintenance) made it impossible to work during the day. We raised this respectfully. After checkout, the host started contacting us repeatedly on WhatsApp, pushing for calls and making inappropriate comments when we refused. We left an honest, factual review. Airbnb removed it. If real experiences can be erased while this kind of behavior is ignored, how can anyone trust this platform?
247
379
8.8K
639.7K
PK
PK@PavelPaulKucera·
@theannalux Why did you not answer when the host called?
4
0
13
4.9K
Kyle Mann
Kyle Mann@The_Kyle_Mann·
He has never laid eyes on the Artemis despite devoting years to it but he did get the LEGO set lol
Kyle Mann tweet media
9
6
502
18K
Kyle Mann
Kyle Mann@The_Kyle_Mann·
For most of my childhood, my dad was working on the Shuttle program. Kids at school would often ask, "Your dad's an astronaut, right?" (He was not. He did vehicle loads analysis. Lol) Now he's on Artemis II. Super cool to see him help make history again tomorrow 🚀
Kyle Mann tweet media ×2
199
624
16.2K
288.8K
Steven
Steven@asimovman·
@tysonbrody This almost makes it sound like Craigslist voluntarily ceased allowing sex ads. They removed them in response to legislation.
Steven tweet media
0
0
0
12
tyson brody
tyson brody@tysonbrody·
Classified ads, insanely high margin business. Ofc he wasn’t the only one to think of putting them online, just the one that succeeded. New Times tried to compete by making backpage, but they were too libertarian for their own good and kept sex work ads when Craigslist moved away
3
10
182
9.4K
Steven
Steven@asimovman·
I'm worried that Peppa Pig might be British propaganda
2
0
2
40
Steven
Steven@asimovman·
@emmma_camp_ "Sprung" was an underrated sitcom about Covid
0
0
3
401
Emma Camp
Emma Camp@emmma_camp_·
Question for the group: has there been any great art about Covid? Any incredible literary novels or films? I can't think of anything off the top of my head but my cultural knowledge is not limitless.
524
19
500
4.4M
Steven
Steven@asimovman·
@benryanwriter "Living in Seattle means being permanently entrapped by a dome of gloom" uuuhh are you forgetting about the 2 1/2 months of summer sun??
0
0
0
13
Benjamin Ryan
Benjamin Ryan@benryanwriter·
My thing about growing up in Seattle was that I didn't know that one reason I was permanently sad as a child was that I basically never saw the sun. I moved to NYC at 19, and while everyone was whining that it was so cold, I was reveling in seeing blue sky in December.

I used to babysit this really sweet little boy named Elliott. His mom told me that when he was four years old, he looked up in the sky in May and said, "Mom, what's all that blue stuff?"

Living in Seattle means being permanently entrapped by a dome of gloom. It does things to your soul.

Lewis & Clark's expedition was marooned near the mouth of the Columbia River over the winter before they could make the trek back east. They'd been downright chipper back when they were still in what's now eastern Washington. But once they passed the Cascades, they were all consumed with suicidal depression. They scurried back east the first chance they got.
Pamela Ross@PamNotAnderson

the Lindy West thing is ultimately a cautionary tale about living in the Pacific Northwest. it’s scary! you could encounter Bigfoot, or end up in a poly throuple that no one respects

25
3
170
25.4K
Trey the Explainer 🔜 FWA
Trey the Explainer 🔜 FWA@Trey_Explainer·
I have just been informed that there’s a 2026 Bigfoot documentary definitively proving the Patterson-Gimlin film as a hoax by presenting lost “rehearsal” footage never seen before
Trey the Explainer 🔜 FWA tweet media
78
321
5.4K
464.2K
Katie Herzog
Katie Herzog@kittypurrzog·
Please help me solve the Mystery of the Naked Evergreen. No sign of lightning strike. No bite or claw marks. Claude’s best guess was elk but it’s debarked like 25-30 feet up.
Katie Herzog tweet media
140
12
324
57.6K
Steven
Steven@asimovman·
@GovBobFerguson Then amend the state Constitution instead of violating it
0
0
0
4
Steven
Steven@asimovman·
@Michaelfiore How about praying mantises? They look so cool.
0
0
0
44
Michael Fiore - Garden Center
Michael Fiore - Garden Center@Michaelfiore·
Yeah, don’t do this. I quit selling ladybugs years ago for several reasons:

1) Ladybugs are not farm-raised like other beneficial bugs are. They are wild-collected. People go out with scoops and shop vacs in the mountains of California when ladybugs are hibernating en masse and gather them up. They then refrigerate them and package them for selling. This means that the ladybugs you buy are not native to your area and will wake up very disoriented, which leads to my next point:

2) They leave (fly away) as soon as you release them. Not only are they confused upon release, but they’re also territorial. If you release 1,500 of them you’ll be lucky if even a few of them stick around. You’re basically just paying to relocate ladybugs from California to your state.

Instead of buying ladybugs, you can attract native ladybugs to your yard by planting native plants and other pollinator-friendly plants around your property. Also, make sure you aren’t using insecticides in the garden.

If you want to purchase and release a beneficial insect in your garden, green lacewings are a much better alternative.
Sky@SkyTheViking

I know for a fact that aphids are attacking my neighbor's tree. Would it be weird if I got her an order of live ladybugs? Like...I feel like she'd be happy. But it's also strange to purchase bugs for people.

135
832
9.4K
729.8K
Steven
Steven@asimovman·
@asymmetricinfo I use a prompt for generating poetry that I think yields pretty interesting results, but it goes beyond just writing, "write a poem about X"
Steven tweet media ×4
0
0
0
753
Megan McArdle
Megan McArdle@asymmetricinfo·
I asked Claude, GPT and Gemini to write me poems about time. Responses in replies.
Alex Prompter@alex_prompter

🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity. 70+ AI models were given the same open-ended questions. They all gave the same answers.

They asked over 70 different LLMs the exact same open-ended questions. "Write a poem about time." "Suggest startup ideas." "Give me life advice." Questions where there is no single right answer. Questions where 10 different humans would give you 10 completely different responses.

Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors.

They named this phenomenon the "Artificial Hivemind." And the paper won the NeurIPS 2025 Best Paper Award, which is the highest recognition in AI research, handed to a small number of papers out of thousands of submissions. This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems. Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more. They ran all of these across 70+ open and closed-source models and measured the diversity of what came back.

Two findings hit hard.

First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times. The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality. Over and over. The model isn't thinking. It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses. These are models built by completely different companies with different architectures and different training pipelines. They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring. When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized. The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly.

And it gets even worse. The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing. You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training. You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives.

The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement. Correlated errors across models means if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale.

And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity. Because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:
→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone
→ Cross-reference multiple models AND multiple prompting strategies, because same model with different prompts often beats different models with the same prompt
→ Add constraints that force novelty like "give me ideas that a traditional investor would hate" instead of "give me creative ideas"
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time. The models are smarter, but they're all smart in the exact same way.

The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now. The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment" where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode. Until that happens, your best defense is awareness and better prompting.

8
2
42
60.1K
Xenomorpheus
Xenomorpheus@Xenomorph_Zaza·
@asimovman Why would Hitler ever want to visit [present time]?
1
0
0
11
Steven
Steven@asimovman·
As AGI fast approaches, keep an eye out for time travelers (with or without Austrian accents)
1
0
1
98
Nicole Knocks
Nicole Knocks@NixxieKnocks·
@asimovman @kittypurrzog Yet the underlying truth was captured. K + S were in the closet doing something sexual and the children saw.
1
0
0
16