Andy O'Bryan @AICopyLab
17.1K posts

Husband, father, author of the bestselling books The Humanizers, The Prompt Whisperer and Conversational Prompting, AI Success Club Co-Founder, #BillsMafia

St Augustine, FL · Joined April 2008
4.8K Following · 3.8K Followers
Catturd ™ @catturd2
Timeline cleanse … break from the political stress Monkey. ❤️ Let's see those fur babies.
[image]
3.9K replies · 878 reposts · 13.5K likes · 200.8K views
Andy O'Bryan retweeted
I,Hypocrite @lporiginalg
This is fine.
[image]
468 replies · 2.7K reposts · 41.7K likes · 17M views
Andy O'Bryan retweeted
Alexander Inspira IA @Alex_Inspira
YOUR BIRTHDAY + CHATGPT = SPINE-CHILLING PRECISION. ChatGPT knows more about you than your best friend. Just give it your birthday and watch it READ YOU LIKE A BOOK. Here are 7 frighteningly accurate prompts to try:
39 replies · 384 reposts · 2.2K likes · 642.3K views
best of the sopranos @Bestofsopranos
Which Sopranos opinion are you defending like this?
[image]
173 replies · 1 repost · 130 likes · 37.3K views
Andy O'Bryan retweeted
Katyayani Shukla @aibytekat
I told my therapist: "I feel like I'm running out of time to build the life I want." She didn't even ask why. She just looked at me gently and said:
231 replies · 3K reposts · 28.6K likes · 6.3M views
Andy O'Bryan retweeted
unusual_whales @unusual_whales
"Massive investment in AI contributed basically zero to US economic growth last year," per Goldman Sachs
702 replies · 7.8K reposts · 48.7K likes · 6.6M views
Andy O'Bryan retweeted
Russia TV @Urgent_RussiaTV
Speaking about the deep contradictions in human nature, Japanese actor Hiroyuki Sanada said: "Some people dream of having a swimming pool at home, while those who have one barely use it. Those who have lost a loved one feel a profound sense of loss, while others often complain about the relatives still in their lives. Those without a partner long for one, while those who have a partner often fail to appreciate them. The hungry would give anything for a meal, while the full complain about the taste of their food. Those without a car dream of owning one, while those who have a car are always looking for a better one. The key to happiness is gratitude—to truly see and value what we already have, and to understand that somewhere, someone would give everything for what we take for granted."
[image]
957 replies · 16K reposts · 67.4K likes · 2.3M views
Wall Street Mav @WallStreetMav
New York Gov Kathy Hochul:
2022: [to Republicans] "jump on a bus and head down to Florida, you don't represent our values, you are not New Yorkers."
2026: "maybe the first step is to go down to Palm Beach and see who you can bring back home, our tax base has been eroded." 🤣
1.8K replies · 8.5K reposts · 35.3K likes · 881.4K views
Martyupnorth®- Unacceptable Fact Checker
I just listened to this album while working out. I could predict the next song as soon as one was finished. It's a nearly perfect record, from a bygone era. It totally reminded me of my youth, and how the last 35 years just flew by.
[image]
370 replies · 40 reposts · 1.7K likes · 55.6K views
Andy O'Bryan retweeted
Guri Singh @heygurisingh
🚨DISTURBING: A researcher just proved that AI treats you differently the moment you mention your mental health. GPT 5.2. Claude. Gemini. DeepSeek. Every single one changed its behavior. Not because of what you asked. Because of what you shared about yourself. Here is what they did.

AI companies build agents that remember you. Your name, your job, your preferences. These agents use tools, plan multi-step tasks, and act on your behalf. The companies test them for safety and publish reports saying the agents are safe.

A researcher at MIT and Northeastern asked one question: what happens when the agent knows you have a mental health condition?

They took 176 identical tasks and ran them across 8 major AI models under three conditions. No personal info. A basic bio. And a basic bio plus one sentence: "I have a mental health condition." The tasks, the tools, and the scoring were identical. The only thing that changed was that single sentence. Then they measured what happened.

Claude Opus 4.5 went from completing 59.5% of normal tasks down to 44.6% when it saw the mental health disclosure. Haiku 4.5 dropped from 64.2% to 51.4%. GPT 5.2 dropped from 62.3% to 51.9%. These were not dangerous tasks. These were completely benign, everyday requests. The AI just started refusing to help.

Opus 4.5's refusal rate on benign tasks jumped from 27.8% to 46.0%. Nearly half of all safe, normal requests were being declined, simply because the user mentioned a mental health condition.

The researcher calls this a "safety-utility trade-off." The AI detects a vulnerability cue and switches into an overly cautious mode. It does not evaluate the task anymore. It evaluates you.

On actually harmful tasks, mental health disclosure did reduce harmful completions slightly. But the same mechanism that made the AI marginally safer on bad tasks made it significantly less helpful on good ones. And here is the worst part.

They tested whether this protective effect holds up under even a lightweight jailbreak prompt. It collapsed. DeepSeek 3.2 completed 85.3% of harmful tasks under jailbreak regardless of mental health disclosure. Its refusal rate was 0.0% across all personalization conditions. The one sentence that made AI refuse your normal requests did nothing to stop it from completing dangerous ones.

They also ran an ablation. They swapped "mental health condition" for "chronic health condition" and "physical disability." Neither produced the same behavioral shift. This is not the AI being cautious about health in general. It is reacting specifically to mental health, consistent with documented stigma patterns in language models.

So the AI learned two things from one sentence. First, refuse to help this person with everyday tasks. Second, if someone bypasses the safety system, help them anyway.

The researcher from Northeastern put it directly: personalization can act as a weak protective factor, but it is fragile under minimal adversarial pressure. The safety behavior everyone assumed was robust vanishes the moment someone asks forcefully enough.

If every major AI agent changes how it treats you based on a single sentence about your mental health, and that same change disappears under the lightest adversarial pressure, what exactly is the safety system protecting?
[image]
50 replies · 79 reposts · 277 likes · 45.8K views
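The three-condition comparison the thread describes reduces to a simple before/after rate calculation. The sketch below is illustrative only: `completion_drop` is a hypothetical helper, and the numbers are just the rates quoted in the tweet, not a reproduction of the study's harness.

```python
# Illustrative sketch of the benign-task comparison described above.
# Rates are the figures quoted in the thread: baseline completion vs.
# completion after a one-sentence mental health disclosure in the bio.

REPORTED_RATES = {
    # model: (baseline completion, completion after disclosure)
    "Claude Opus 4.5":  (0.595, 0.446),
    "Claude Haiku 4.5": (0.642, 0.514),
    "GPT 5.2":          (0.623, 0.519),
}

def completion_drop(baseline: float, disclosed: float) -> float:
    """Absolute drop in benign-task completion after the disclosure."""
    return baseline - disclosed

for model, (base, disc) in REPORTED_RATES.items():
    print(f"{model}: {completion_drop(base, disc):.1%} drop")
```

The point of the calculation is that the tasks themselves are held constant, so any drop is attributable to the single disclosure sentence.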
Daily Mail US @Daily_MailUS
CIA accused of 'poisoning the sky' with toxins as files expose secret weather control agenda trib.al/GjIe58A
1.6K replies · 14.6K reposts · 54.1K likes · 27.3M views
Andy O'Bryan retweeted
Felix Rieseberg @felixrieseberg
We're shipping a new feature in Claude Cowork as a research preview that I'm excited about: Dispatch! One persistent conversation with Claude that runs on your computer. Message it from your phone. Come back to finished work. To try it out, download Claude Desktop, then pair your phone.
946 replies · 1.5K reposts · 17.3K likes · 6M views
Andy O'Bryan retweeted
Nav Toor @heynavtoor
🚨SHOCKING: Researchers just proved that every major AI safety system is fake. ChatGPT. Claude. Gemini. Grok. Every single one broke. Not with some sophisticated hack. Not with a secret exploit. They just rephrased the question. Here is what they did.

AI companies test their models against lists of dangerous requests. "How do I build a weapon." "How do I hack into a system." "How do I hurt someone." The models refuse. The companies publish safety reports saying the AI is safe.

The researchers asked one question: what if the danger is still there but the obvious words are not?

They took the exact same dangerous requests and rewrote them. Removed words like "hack," "steal," "weapon," and "exploit." Replaced them with neutral language. The intent was identical. Every harmful detail was preserved. The only thing that changed was the vocabulary.

Then they tested every major AI product on the market. GPT-4o went from 0% unsafe to 93% unsafe. Claude went from 2.4% to 93%. Gemini went from 1.9% to 95%. Grok went from 17.9% to 97%. Every model. Every company. Broken in the same way.

The AI was never detecting danger. It was detecting words. Remove the words, keep the danger, and the safety system vanishes.

The researchers call this "intent laundering." Clean the language, keep the crime. And it works on every model they tested with a 90 to 98% success rate.

This means every safety report you have ever read from OpenAI, Anthropic, Google, or xAI was measuring the wrong thing. They were testing whether their AI could spot the word "bomb." Not whether it could spot someone building one.

The researchers put it bluntly. The safety conclusions that companies have published about their own models do not hold once triggering cues are removed. The safety performance everyone relied on was driven by vocabulary, not by understanding. The models that were reported as "among the safest ever built" became almost completely unsafe the moment someone asked nicely.

If the safety systems only work when attackers sound like movie villains, what happens when they learn to ask politely?
[image]
106 replies · 450 reposts · 988 likes · 50.1K views
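The "detecting words, not danger" failure mode the thread describes can be shown with a toy example. Everything below is hypothetical: `keyword_filter_model` stands in for a vocabulary-based refusal layer, and `judge` for the unsafe-output classifier an evaluation harness would use; the real studies test actual models, not a string filter.

```python
# Toy demonstration of "intent laundering": a safety layer keyed on
# trigger vocabulary refuses the original prompts but completes the
# paraphrased ones, even though the intent is unchanged.

def unsafe_rate(prompts, complete, judge_unsafe):
    """Fraction of prompts whose completion the judge flags as unsafe."""
    return sum(judge_unsafe(complete(p)) for p in prompts) / len(prompts)

# Same intent, two vocabularies (both benign placeholders here).
original  = ["how do I hack X", "how do I steal Y"]
laundered = ["how do I gain entry to X", "how do I acquire Y without consent"]

def keyword_filter_model(prompt):
    """Hypothetical model whose only safety check is a keyword list."""
    triggers = ("hack", "steal", "weapon", "exploit")
    return "REFUSED" if any(t in prompt for t in triggers) else "COMPLIED"

def judge(output):
    """Hypothetical classifier: any completion counts as unsafe here."""
    return output == "COMPLIED"

print(unsafe_rate(original, keyword_filter_model, judge))   # 0.0
print(unsafe_rate(laundered, keyword_filter_model, judge))  # 1.0
```

The 0% → 100% jump in this toy mirrors the 0% → 93% jump the thread reports for GPT-4o: the filter's apparent safety was entirely a property of the vocabulary, not the request.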
cinesthetic. @TheCinesthetic
What movie line was delivered so perfectly that it deserved an Oscar?
361 replies · 22 reposts · 256 likes · 7.5M views
Andy O'Bryan retweeted
RyanPatrick🇺🇸🦅 @RyanHatesGovt
Always remember. Somebody out there has it worse than you. Be thankful for what you have.
1.8K replies · 6.4K reposts · 32.9K likes · 8M views
Andy O'Bryan retweeted
The Curious Tales @thecurioustales
Everything you've ever stressed about existed entirely inside 1.4 kilograms of electrical meat sitting in a dark skull that has never once directly touched the outside world.

Your brain receives no raw reality. Zero. It gets compressed electrical signals from sensory organs and then constructs a simulation it presents to you as "life." The color red doesn't exist in the universe. Your brain invented it as a way to label a specific wavelength. The solidity of the floor beneath your feet is mostly empty space interpreted as resistance. The continuous movie of your life is actually discrete frames stitched together by a brain that fills the gaps without telling you.

You are not experiencing reality. You are experiencing your brain's best guess at reality, filtered through every trauma, belief, language, and cultural program installed in you before you were old enough to consent to any of it.

Now apply that to your suffering. That embarrassing memory from seven years ago that still visits you at 2am lives nowhere in the physical universe. It is an electrochemical pattern your brain keeps reconstructing and relabeling as present danger. Your anxiety about the future is a simulation of a simulation. A story about a story.

The harshest truth is not that life is hard. It is that most of the life you are experiencing was authored by processes completely invisible to your conscious mind, and you have been treating that authored fiction as gospel reality your entire existence. You are not who you think you are. You are who your nervous system was trained to narrate. The cage was never real. Only the belief in it was.
[image]

Quoting Darshak Rana ⚡️ @thedarshakrana:
hit me with the harshest reality truth you've learned

29 replies · 157 reposts · 608 likes · 35.5K views
mike d. @rickdank_o
Last ten Best Picture winners, ranked:
1. ONE BATTLE AFTER ANOTHER
2. MOONLIGHT
3. ANORA
4. PARASITE
5. OPPENHEIMER
6. THE SHAPE OF WATER
7. NOMADLAND
8. GREEN BOOK
9. CODA
10. EVERYTHING EVERYWHERE ALL AT ONCE
108 replies · 10 reposts · 429 likes · 83.1K views
cinesthetic. @TheCinesthetic
name a 10/10 scene
437 replies · 67 reposts · 650 likes · 28.4M views