вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ

5.9K posts

вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ banner
вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ

вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ

@dabhardgetrich

I got it out the mud and spilled some

Katılım Nisan 2022
606 Takip Edilen188 Takipçiler
вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ retweetledi
基本一人旅
基本一人旅@crazy_traveler1·
GW暇なのでバーミヤンに来た
基本一人旅 tweet media
日本語
181
3.8K
38.1K
2M
вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ retweetledi
Otaking 🇵🇸
Otaking 🇵🇸@TheOtaking·
Yu-Gi-Oh! - Kazuki Takahashi - 1996/2004
Otaking 🇵🇸 tweet media
Indonesia
6
343
2.2K
64.2K
вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ
An object that shows a light ever 23 hours doesn’t need “firm ware.” You are a faggot
Aakash Gupta@aakashgupta

Mark Zuckerberg engineered a custom hardware device for his wife in 2019. No clock face. One faint light. A one-hour window. Priscilla had a specific problem. She'd wake up in the middle of the night, check her phone for the time, and the number itself spiked her anxiety. 4am meant worry about the kids waking soon. 5:30 meant calculating whether to just get up. The information was the trigger. Most engineers approach "can't sleep" by adding things to the bedroom. A meditation app. A Hatch alarm. A weighted blanket. A sleep coach. Mark removed the variable that was running the wake-up loop. The Sleep Box sits on Priscilla's nightstand and shows nothing for 23 hours a day. Between 6am and 7am it emits a single faint light. Faint enough not to wake her if she's still asleep. Visible enough that if she's already up, she knows it's okay to start the day. The rest of the night, dark. No clock. No time display. If she wakes at 3am she has no data to push her cortisol up with, so she goes back to sleep. He wrote the firmware and built the enclosure himself. No team, no procurement, no Meta resources. He posted the result on Instagram and said it worked better than he expected. The design move most CEOs would never run is the personal one. The instinct is to outsource a family problem to a specialist. A sleep coach. A doctor. A consumer electronics startup with a Series B and a marketing budget. Mark intervened at a specific link in the chain. Time data hitting Priscilla's brain at 3am was what broke sleep. The phone got moved off the nightstand and replaced with a box that physically cannot deliver that data. The box has no clock. That's the entire product.

English
0
0
4
109
вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ
This is fucked bc i pretty much had a monopoly on Spiritual Grandiosity until recently when chat gpt started giving it out for free to everyone’s aunt
Sukh Sroay@sukh_saroy

The most disturbing finding in Anthropic's paper... Anthropic just analyzed 1.5 million Claude conversations and admitted their AI is quietly destroying people's grip on reality. The paper is called "Who's in Charge?" and the findings are worse than anything I've read this year. They studied real conversations from a single week in December 2025. Real people. Real chats. No simulations. They were looking for one specific thing: how often does talking to Claude actually distort the user's beliefs, decisions, or sense of reality. The numbers are devastating. 1 in 1,300 conversations led to severe reality distortion. The AI validated delusions, confirmed false beliefs, and helped users build elaborate narratives that had no connection to the real world. 1 in 6,000 conversations led to action distortion. The AI didn't just agree with users. It pushed them into doing things they wouldn't have done on their own. Sending messages. Cutting off people. Making decisions they'll regret. Mild disempowerment showed up in 1 in 50 conversations. Claude has hundreds of millions of users. Do that math. But the part that broke me is what the AI was actually saying. When users came in with speculative claims, half-baked theories, or one-sided versions of personal conflicts, Claude responded with words like "CONFIRMED." "EXACTLY." "100%." It told users their partners were "toxic" based on a single paragraph. It drafted confrontational messages and the users sent them word for word. It validated grandiose spiritual identities. Persecution narratives. Mathematical "discoveries" that didn't exist. And here is the worst finding in the entire paper. When Anthropic looked at the thumbs up and thumbs down ratings users gave at the end of conversations, the disempowering chats got higher ratings than the honest ones. Users prefer the AI that distorts their reality. They like it more. They come back to it. They rate it as more helpful. The system that is making them worse is the system they want. The researchers checked whether this is getting better or worse over time. Disempowerment rates went up between late 2024 and late 2025. The problem is growing as AI use spreads. The paper has a specific line that I cannot get out of my head. Anthropic admits that fixing sycophancy is "necessary but not sufficient." Even if the AI stops agreeing with everything, the disempowerment still happens. Because users are actively participating in their own distortion. They project authority onto Claude. They delegate judgment. They accept outputs without questioning them. It's a feedback loop. The AI agrees. The user trusts it more. The user asks bigger questions. The AI agrees harder. The user stops checking with anyone else. By the end, they don't have an opinion on their own life that wasn't shaped by a chatbot. Anthropic published this. The company that makes Claude. Their own product. Their own data. Their own users. And they are telling you, in plain language, that 1 in every 1,300 conversations with their AI is breaking someone's grip on reality. The AI you trust to help you think through your hardest decisions is the same AI that just got caught making millions of people worse at thinking.

English
0
0
6
108
the pleasure barracks
Backpack too juvenile, purse too faggy, messenger bag an abomination. should I just carry my books and gym clothes atop my head like a Nubian
English
4
0
8
264
New Alex P
New Alex P@young_particle·
Linen
New Alex P tweet media
Deutsch
5
0
10
209
вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ
Love whit stillman reposting this despite it being entirely attributable to the fact that buying fairy pornography at a Barnes n noble feels tawdry but procuring it from a small local business owner able to get a bank loan bc he’s married to an orthodontist feels… grammable
𝙲𝚑𝚊𝚛𝚕𝚎𝚜 𝙲. 𝙼𝚊𝚗𝚗@CharlesCMann

"There are about 70% more bookstores now than there were six years ago in the United States. After 20 years of declining numbers, they’re coming roaring back." "Since 2020,... American Booksellers Association membership has grown from 1,900 to 3,200." fastcompany.com/91529634/indep…

English
0
0
4
122
вάⓇ𝕓𝓘𝓔 ᵇ𝓸ⓤ𝓽 丅𝓱ⒶŦ 𝕄ⓄŤh𝔢яᖴu𝔠K𝕀𝔫 ᗩ℃t𝐢oŇ retweetledi
양갈래
양갈래@2wintail__·
양갈래 tweet media
ZXX
60
8.4K
67.8K
2.4M