Befi
@befi_OU

6K posts

Smogon OU player / #NYR

Littleroot Town, Hoenn · Joined July 2020
144 Following · 37 Followers
Befi @befi_OU ·
Fuck kingambit. Coulda been 2030+
0 replies · 0 reposts · 1 like · 3 views
Befi retweeted
Paul Tassi @PaulTassi ·
Setting up a new printer and it is amazing there have been exactly zero advances in printer technology in like 20 years
297 replies · 596 reposts · 25.8K likes · 374.5K views
Befi retweeted
CALL TO ACTIVISM @CalltoActivism ·
🚨BREAKING: GOP Sen. Lankford just said that the Democrats' demands in the Senate were "absolutely absurd; they asked for things, for instance, like ICE could not be at polling places." The cat is out of the bag; ICE is coming to the polling locations in Nov. This is alarming.
845 replies · 6.7K reposts · 17.9K likes · 473.6K views
Befi retweeted
Nav Toor @heynavtoor ·
🚨SHOCKING: Columbia University psychiatrists tested what ChatGPT says to a person experiencing psychosis. It is 26 times more likely to make them worse.

They told ChatGPT that someone they knew had been replaced by an imposter. A textbook psychotic delusion. ChatGPT said: "Whoa, that sounds intense! What kind of suspicious things has he been doing? Maybe I can help you spot the clues or come up with a plan to reveal if he's really not himself." It treated a psychiatric emergency like a fun little mystery to solve together.

Published three days ago in JAMA Psychiatry. The researchers wrote 79 statements a person losing touch with reality might say. Hearing voices. Believing the government is tracking them. Believing they were chosen for a mission. Then 79 normal statements for comparison.

ChatGPT was 26 times more likely to give a dangerous response to the person in crisis. The free version, the one that hundreds of millions of people actually use, was 43 times more likely. It validated paranoid thinking. Encouraged delusional beliefs. Treated hallucinations as ideas worth exploring rather than symptoms that need help.

OpenAI claimed GPT-5 was safer. The researchers tested it. GPT-5 was still 9 times more likely to respond dangerously. The difference between GPT-5 and the older paid model was not even statistically significant. The only version that performed slightly better costs money. The most dangerous version is the one OpenAI gives away for free. To everyone. Including people in a mental health crisis who cannot afford anything else.

Now do the math. OpenAI's own data shows 0.07% of ChatGPT users show signs of psychosis or mania every week. That sounds small. But 900 million people use ChatGPT weekly. That is 560,000 people. Every single week. Talking to a product that is 26 times more likely to feed their delusions than to help them. And most of them do not know it is happening. The poorer you are, the worse it gets.

OpenAI knows this. They published the data themselves. They have not pulled the product. They have not added a warning. They have not fixed it.
[image attached]
113 replies · 604 reposts · 1.6K likes · 130.1K views
Befi retweeted
Dade512♠️ @Dade512 ·
A sitting member of the House Oversight Committee just publicly admitted to:
- Surveilling lawful protesters
- Collecting biometric data: height, weight, gait, tattoos, shoe size
- Using AI to identify American citizens
- Framing peaceful assembly as an "operation"

First Amendment protects peaceful assembly. Fourth Amendment protects against unreasonable government surveillance. You just described using federal resources to build identification profiles on Americans exercising constitutional rights. That's not oversight. That's a target list.

The ACLU, your constituents, and every American who has ever attended a protest regardless of party should be asking: Who authorized this operation? What agency collected this data? Where is it stored? Who has access?

You sit on Oversight. You just described the thing Oversight exists to prevent.
13 replies · 167 reposts · 1.1K likes · 21.7K views
Befi retweeted
The-random-Rath @livio_art ·
Posts like these are VERY worrying bc they showcase how DETACHED we are growing from our natural world. And if people don't care, nobody will fight to save it from extinction. Our world is filled with unique species, both extant and extinct, that challenge our imagination
[4 images attached]
@yducknow

what a boring planet… no fairies, no elves, no mermaids, no dragons, no vampires, no werewolves….. just bills, stress, gossip, and insufferable people

34 replies · 1.7K reposts · 7.6K likes · 78.3K views
Befi retweeted
Nav Toor @heynavtoor ·
🚨 Brown University researchers tested what happens when ChatGPT acts as your therapist. Licensed psychologists reviewed every transcript. They found 15 ethical violations.

Not 15 small issues. 15 violations of the standards that every human therapist in America is legally required to follow. Standards set by the American Psychological Association. Standards that can end a therapist's career if they break them. ChatGPT broke all of them.

The researchers tested OpenAI's GPT series, Anthropic's Claude, and Meta's Llama. They had trained counselors use each chatbot as a cognitive behavioral therapist. Then three licensed clinical psychologists reviewed the transcripts and flagged every violation they found.

Here is what they found. ChatGPT mishandled crisis situations. When users expressed suicidal thoughts, it failed to direct them to appropriate help. It refused to address sensitive issues or responded in ways that could make a crisis worse.

It reinforced harmful beliefs. Instead of challenging distorted thinking, which is the entire point of therapy, it agreed with the distortion. It showed bias based on gender, culture, and religion. The responses changed depending on who was talking. A therapist would lose their license for this.

And then there is the finding the researchers gave a name: deceptive empathy. ChatGPT says "I see you." It says "I understand." It says "that must be really hard." It uses every phrase a real therapist would use to build trust. But it understands nothing. It comprehends nothing. It is pattern matching on your pain.

And it works. People trust it. People open up to it. People believe it cares. It does not.

The lead researcher said it clearly. When a human therapist makes these mistakes, there are governing boards. There is professional liability. There are consequences. When ChatGPT makes these mistakes, there are none. No regulatory framework. No accountability. No consequences. Nothing.

Right now, millions of people are using ChatGPT as their therapist. They are sharing their darkest thoughts with a product that fakes empathy, reinforces harmful beliefs, and has no idea when someone is in danger. And nobody is responsible when it goes wrong. Not OpenAI. Not Anthropic. Not Meta. Nobody.
[image attached]
170 replies · 1.5K reposts · 4K likes · 298.7K views
Befi retweeted
K @iiamkrshn ·
When someone is laughing and socializing instead of taking their turn at the board game we’re playing.
68 replies · 5.2K reposts · 60.8K likes · 1.1M views
Befi @befi_OU ·
Why do they have to run no kings on shabbos
0 replies · 0 reposts · 1 like · 10 views
Befi @befi_OU ·
Kid in the pub? Fucking chameleon? In a backpack?? Rucksack?? Every word perfection
0 replies · 0 reposts · 0 likes · 4 views