Un•AI•ify

886 posts


@UnAIify

Un•AI•ify helps you spot AI-generated text and strip heavy rhetoric from online content. Improve your writing skills and sharpen your know-how about persuasive tactics.

Joined June 2025
136 Following · 183 Followers
Pinned Tweet
Un•AI•ify
Un•AI•ify@UnAIify·
Two counterarguments are used to parry the "calling out" of AI writing online:
"Who cares if it's AI writing, if it's good?"
-OR-
"Argue with the arguments! Or maybe you can't!"
BONUS: "AI detectors don't work!"
Worth sharing how to address:
1 · 1 · 4 · 396
Un•AI•ify
Un•AI•ify@UnAIify·
@tomfgoodwin Have seen two tech CMOs in different SaaS verticals post the same "thought leadership," written the same way, down to voice and rhetoric. Both are using AI to write the slop, and they're too clueless to realize that everyone using AI sings from the same, averaging songbook.
0 · 0 · 1 · 6
Tom Goodwin
Tom Goodwin@tomfgoodwin·
It's amazing how much stuff in trend decks is the same as each other, and wrong. It's getting worse every year. People lazily draw upon the same nonsense, which then gets put into the training data for more of the same. Agentic commerce is a good example, or AI influencers.
4 · 0 · 13 · 1.2K
Un•AI•ify
Un•AI•ify@UnAIify·
Everything you need to know about AI follows from these two principles:
1. AI is a bullsh*tter.¹
2. AI lacks skin in the game.
¹ Even if mostly right, AI is still mostly-right bullsh*t.²
² AI BS is convincing about that which you know least.
1 · 2 · 2 · 80
Un•AI•ify retweeted
Justin Owings
Justin Owings@justinowings·
The Standard Bullsh*t Take: content on 𝕏, LinkedIn, Facebook, YouTube, etc., that invokes "It's not X. It's Y" phrasing and/or other rhetoric suggesting AI provenance. "SBST" content, whatever its merit, is tainted by its intent to persuade. Proceed accordingly.
1 · 1 · 3 · 63
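For what it's worth, that "It's not X. It's Y" tell is mechanically detectable. A rough Python sketch, purely illustrative and not the actual @UnAIify tool (SBST_PATTERN and flag_sbst are made-up names):

```python
# Flag the "It's not X. It's Y." construction often cited as an AI-writing tell.
import re

# Matches "it's not / isn't / not just ... <clause> . It's / it is / this is ..."
SBST_PATTERN = re.compile(
    r"\b(?:it'?s\s+not|isn'?t|not\s+just)\b[^.!?]{1,80}[.!?]\s+"
    r"(?:it'?s|it\s+is|this\s+is)\b",
    re.IGNORECASE,
)

def flag_sbst(text: str) -> list[str]:
    """Return matched spans so a human can judge them in context."""
    return [m.group(0) for m in SBST_PATTERN.finditer(text)]

sample = "It's not a product launch. It's a movement."
print(flag_sbst(sample))  # ["It's not a product launch. It's"]
```

A regex only flags candidates; the tweet's point stands that the human still has to judge intent.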
Un•AI•ify
Un•AI•ify@UnAIify·
Here Chris Voss describes liars. Understand what he says as a description of AI answers. AI is a bullsh*tting machine that lacks any skin in the game. AI overstates its case, over-explaining to convince us (the ones with skin in the game) to believe. AI lies.
1 · 1 · 2 · 98
Un•AI•ify
Un•AI•ify@UnAIify·
@atmoio AI is a bullsh*tter. The most dangerous bullsh*t convincingly mixes truth with lies. Use with extreme caution! Also, AI lacks skin in the game. Thus, reliance on AI puts the user's skin at risk.
0 · 0 · 2 · 15
Un•AI•ify retweeted
Justin Owings
Justin Owings@justinowings·
.@rorysutherland: "In engineering, you are peer-reviewed by reality."

👆 "Does it work?" vs. "Does it persuade?" "Does it work?" vs. "Is this bullsh*t?" Most of all: "What's the fallout if it doesn't work?" and "How do we know if it works?" (@nntaleb's skin in the game)

The AI hype is driven most by its use for code. The code is written and tested. The code works, or it doesn't. This is not the case with so many other things that AI creates. Can you "test" the answer it spits out, the copy or the email draft it writes, the document it reviews and summarizes, the brief, and on and on? Do you have the experience to even know where it might be wrong?

Non-code, non-mathematical AI creations are hard to test for all sorts of reasons. Communication is messy and nuanced, like the people doing the communicating. Communication is also aimed at persuading the audience, at "winning arguments" rather than "solving problems," as Rory Sutherland so wonderfully puts it.

As people use AI to create difficult-to-test things, how might they be fooled into thinking the creations are good? Prompters, as revealed by their deference to the LLM assisting them, may not know enough about the subject matter they're getting help with. They very well may not know enough to identify where an AI-generated creation is wrong or incomplete. Never mind how persuasive AI is with its knack for mirroring, pacing-and-leading, and sycophancy. AI is the palm-reader, the P.T. Barnum, the bullsh*tting con-man we want to believe.

"Prompter's blindness" hits squarely on these two fronts: 1. the subjects they know less about, and 2. the subjects on which they are readily influenced (i.e., those of which they're already convinced).

Thus, outside of testable coding, AI creations have no first-line-of-defense, relatively low-cost means of filtering the bullsh*t from "what works," from what's true or valid. Thus, AI creations get published, get emailed along, go viral.

The control against this is people who know better, who know more about the subjects in the prompter's blindspots. But almost by necessity, that know-how is limited to a smaller number of people. Never mind the impossible-to-fix asymmetry that comes from combining zero-cost scale on the Internet with instant creation by AI. The amount of [stuff] being created that is half-baked, performative slop grows daily.

Are we set on a path to drown in unvetted bullsh*t?

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."

cc @UnAIify
Rory Sutherland@rorysutherland

spectator.com/article/why-en…

0 · 1 · 3 · 188
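The "peer-reviewed by reality" point is easy to make concrete in code. A minimal sketch using only the Python standard library: the function below either passes its tests or it doesn't, the cheap first-line-of-defense filter that an AI-drafted email or summary never gets.

```python
# "Peer review by reality": code carries its own pass/fail verdict.
import unittest


def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


class TestMedian(unittest.TestCase):
    # Whether the author (human or AI) is a bullsh*tter is irrelevant here:
    # the assertions hold or they don't.
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)


if __name__ == "__main__":
    unittest.main()
```

There is no equivalent one-command check for a persuasive paragraph, which is the asymmetry the tweet is pointing at.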
Un•AI•ify
Un•AI•ify@UnAIify·
Now consider how "corporate bullsh*t" is a narrower frame for "AI bullsh*t," i.e., the rhetorical patterns and words every LLM uses when it drafts content. AI fakes meaning as the supreme bullsh*tter, having no skin in the game. It's on us to separate the truth from the BS.
Shane Littrell, PhD@MetacogniShane

🧵Happy to announce that "The Corporate Bullshit Receptivity Scale: Development, validation, and associations with workplace outcomes" is now published! 😀🥳 (see replies below for more info) 1) Official version: sciencedirect.com/science/articl… 2) Open access version: researchgate.net/publication/40…

0 · 1 · 2 · 155
Un•AI•ify
Un•AI•ify@UnAIify·
@TuckerGoodrich AI expands on a small amount of info and uses reductionist phrasing ("It's not X. It's Y," clichés, shallow reasoning). This compounds over time. Like running JPEG compression and decompression over and over again.
0 · 0 · 0 · 10
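The analogy is easy to demonstrate. A minimal Python sketch, assuming Pillow and NumPy are installed and photo.jpg is any image on hand (a hypothetical path): it re-encodes the same image over and over and prints the cumulative drift from the original.

```python
# Generation loss: round-trip an image through JPEG repeatedly and
# measure how far it drifts from the original.
import io

import numpy as np
from PIL import Image

original = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
reference = np.asarray(original, dtype=np.float64)

current = original
for generation in range(1, 51):
    buffer = io.BytesIO()
    current.save(buffer, format="JPEG", quality=75)  # one compress...
    buffer.seek(0)
    current = Image.open(buffer).convert("RGB")      # ...and decompress

    if generation % 10 == 0:
        drift = np.abs(np.asarray(current, dtype=np.float64) - reference).mean()
        print(f"generation {generation:2d}: mean pixel drift = {drift:.2f}")
```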
Tucker Goodrich
Tucker Goodrich@TuckerGoodrich·
Oliver Prompts@oliviscusAI

Microsoft Research + Salesforce just dropped a paper that should scare every single AI builder right now. They tested 15 of the top models (GPT-4.1, Gemini 2.5 Pro, Claude 3.7 Sonnet, o3, DeepSeek R1, Llama 4) across 200,000+ simulated conversations. The results are actually terrifying.

If you give a model a single-turn prompt, it hits 90% performance. But if you have a multi-turn conversation? It plummets to 65%. Same model. Same task. Just... talking normally.

The crazy part is that the AI isn't getting dumber (aptitude only dropped 15%). The problem is that unreliability EXPLODED by 112%. Here is exactly why they break:

→ they answer before you finish explaining, and those wrong assumptions get baked in permanently
→ they fall in love with their first wrong answer and just keep building on it
→ they completely forget the middle of your conversation
→ longer responses introduce more assumptions, which means more errors

Even the new reasoning models failed. o3 and DeepSeek R1 performed just as badly. Giving them extra "thinking tokens" did absolutely nothing. Setting temperature to 0? Still broken.

Every benchmark we celebrate is tested in perfect, single-prompt lab conditions, but real conversations break every model on the market and nobody is talking about it.

The only fix right now? Stop chatting. Give your AI everything upfront in one massive message instead of going back and forth.

1 · 0 · 2 · 1K
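A minimal sketch of that "one massive message" workaround, assuming the OpenAI Python SDK and GPT-4.1 (one of the models named above) as a stand-in: the constraints that would otherwise trickle out over several turns are bundled into a single, fully specified prompt.

```python
# Consolidate would-be conversation turns into one fully specified prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Requirements you might otherwise reveal one turn at a time.
fragments = [
    "Write a Python function that parses ISO-8601 dates.",
    "It must reject dates before 1900.",
    "Return None on invalid input instead of raising.",
    "Include type hints and a docstring.",
]

# The suggested mitigation: one message carrying every constraint upfront.
full_spec = "Complete this task with ALL of the following constraints:\n" + \
    "\n".join(f"- {f}" for f in fragments)

response = client.chat.completions.create(
    model="gpt-4.1",  # stand-in; use whatever model you have access to
    messages=[{"role": "user", "content": full_spec}],
)
print(response.choices[0].message.content)
```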
Mark Gadala-Maria
Mark Gadala-Maria@markgadala·
800,000 human brain cells, floating in a dish, have never had a body. Never seen light. Never felt anything. And they just learned to play a video game.

That's not a metaphor. That's literally what happened. These neurons are alive. They fire. They adapt. They get better at DOOM over time, which means something inside that petri dish is changing in response to failure.

Scientists call it "goal-directed learning." There is no cleaner definition of that phrase than "it kept trying until it got better." The cells have no survival instinct, no reward system, no reason to improve. They just do.

The part nobody's talking about: researchers have to convert the game's visuals into electrical pulses the neurons can interpret. Which means those cells are perceiving something. Not seeing it the way you do. But processing a version of a world that doesn't exist, inside a container that was never meant to think.

The Turing Test was about machines fooling humans. Nobody wrote the test for this.
Curiosity@CuriosityonX

🚨: A petri dish of human brain cells just learned to play DOOM

339 · 2K · 17.5K · 2.1M
Un•AI•ify retweeted
Justin Owings
Justin Owings@justinowings·
@emollick Playing the detection game is stupid. Playing detective is smart. The difference is in having a set of heuristics that help you suss out what's worth attending to and what's worth dismissing. Get better at abductive reasoning. /s/ Builder of a detective tool: @UnAIify
0 · 1 · 1 · 135
Un•AI•ify
Un•AI•ify@UnAIify·
AI helps with the easy work but lacks the know-how to tackle the hard, emergent work. Everyone feels the ephemeral nature of AI creations. Cheap trinkets and toys get thrown away. How we do one thing is how we do everything.
0 · 0 · 0 · 6
Un•AI•ify
Un•AI•ify@UnAIify·
Two recent articles about AI resulted in a massive flurry of activity in the market (the Schumer piece and the Citrini piece). Would either have been written without the help of AI? Would either have been spread without the help of algorithms hitched to attention?
1 · 0 · 0 · 12
Un•AI•ify
Un•AI•ify@UnAIify·
Would you [do whatever] if there were no AI making it easy to [do whatever]? AI makes [doing whatever] easier right now. This frontloading can distract, support procrastination, and lead to confusion about what's worth doing at all.
1 · 0 · 0 · 14
Un•AI•ify retweeted
Brad Stulberg
Brad Stulberg@BStulberg·
We're at a point in history—not nearing it, but here—where you have to decide if you're content to ruin your brain with an endless stream of fentanyl-like digital slop or if you're going to fight for your humanity, touch grass, challenge yourself, create, contribute, and love.
73 · 560 · 4.4K · 134.5K
Un•AI•ify
Un•AI•ify@UnAIify·
3M views and retweets from giant accounts. Another reason the AI slopocalypse (hyperinflation of content) will continue: the upside optionality of fiat content, at no cost to the sloppers, guarantees more supply.
Handre@Handre

LASIK eye surgery cost $2,200 per eye in 2000. Today it's around $1,000 per eye despite 24 years of inflation. Meanwhile, an MRI that cost $1,200 in 2000 now costs $3,000+.

The difference? LASIK operates in a free market with no insurance interference and minimal regulation. When patients pay directly, providers must compete on price and quality. LASIK clinics advertise prices, offer financing, and constantly improve technology to attract customers.

Compare this to hospital procedures, where prices are hidden, patients never see bills, and insurance companies negotiate opaque rates that somehow always increase faster than inflation.

Cosmetic surgery follows the same pattern. Breast augmentation, rhinoplasty, and other elective procedures have become more affordable and safer over decades. Surgeons invest in better techniques and equipment because they must satisfy paying customers, not insurance bureaucrats or hospital administrators focused on maximizing reimbursements.

The lesson is clear: remove third-party payment systems and excessive regulation, and you get Austrian economics in action. Prices fall, quality rises, and innovation accelerates. Healthcare costs aren't rising because of aging populations or new technology—they're rising because we've destroyed the price mechanism that makes markets work.

0 · 0 · 2 · 73
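To put the quoted price comparison on one scale, a quick back-of-envelope in Python. The ~1.8x CPI multiplier for 2000 to 2024 is my assumption, not a figure from the tweet.

```python
# Real (inflation-adjusted) price change for the two procedures in the quote.
CPI_MULTIPLIER = 1.8  # assumed cumulative CPI growth, 2000 -> 2024

def real_change(price_2000: float, price_now: float) -> float:
    """Percent change after restating the 2000 price in today's dollars."""
    adjusted_2000 = price_2000 * CPI_MULTIPLIER
    return (price_now - adjusted_2000) / adjusted_2000 * 100

print(f"LASIK (per eye): {real_change(2200, 1000):+.0f}%")  # roughly -75%
print(f"MRI:             {real_change(1200, 3000):+.0f}%")  # roughly +39%
```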