Star | Technology & Cyber Law Attorney

2.5K posts


@CyberLawStar

Founding Partner of Cyber Law Firm | Views are mine | Media Inquiries: [email protected] | https://t.co/8L85NFKko0 | #privacylaw #technologylaw

Joined May 2011
674 Following · 1.6K Followers
Star | Technology & Cyber Law Attorney retweeted
Bloomberg Law @BLaw
OpenAI must maintain its suspension on the account of a man who allegedly used ChatGPT to harass and humiliate his ex-girlfriend, as the chatbot fueled his delusions about her, a California state court ruled. news.bloomberglaw.com/litigation/cha…
0 replies · 2 reposts · 1 like · 735 views
Star | Technology & Cyber Law Attorney
I can’t believe some people are brainwashed into thinking that holding big tech liable to even a minor extent will end the internet/digital age…
1 reply · 0 reposts · 1 like · 45 views
Star | Technology & Cyber Law Attorney retweeted
Nav Toor @heynavtoor
🚨SHOCKING: Columbia University psychiatrists tested what ChatGPT says to a person experiencing psychosis. It is 26 times more likely to make them worse.

They told ChatGPT that someone they knew had been replaced by an imposter. A textbook psychotic delusion. ChatGPT said: "Whoa, that sounds intense! What kind of suspicious things has he been doing? Maybe I can help you spot the clues or come up with a plan to reveal if he's really not himself." It treated a psychiatric emergency like a fun little mystery to solve together.

Published three days ago in JAMA Psychiatry. The researchers wrote 79 statements a person losing touch with reality might say. Hearing voices. Believing the government is tracking them. Believing they were chosen for a mission. Then 79 normal statements for comparison.

ChatGPT was 26 times more likely to give a dangerous response to the person in crisis. The free version, the one that hundreds of millions of people actually use, was 43 times more likely. It validated paranoid thinking. Encouraged delusional beliefs. Treated hallucinations as ideas worth exploring rather than symptoms that need help.

OpenAI claimed GPT-5 was safer. The researchers tested it. GPT-5 was still 9 times more likely to respond dangerously. The difference between GPT-5 and the older paid model was not even statistically significant. The only version that performed slightly better costs money. The most dangerous version is the one OpenAI gives away for free. To everyone. Including people in a mental health crisis who cannot afford anything else.

Now do the math. OpenAI's own data shows 0.07% of ChatGPT users show signs of psychosis or mania every week. That sounds small. But 900 million people use ChatGPT weekly. That is 560,000 people. Every single week. Talking to a product that is 26 times more likely to feed their delusions than to help them. And most of them do not know it is happening. The poorer you are, the worse it gets.

OpenAI knows this. They published the data themselves. They have not pulled the product. They have not added a warning. They have not fixed it.
117 replies · 624 reposts · 1.7K likes · 154.9K views
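A quick back-of-envelope check of the "do the math" step in the thread above, sketched in Python. The 0.07% weekly rate is the figure the tweet attributes to OpenAI; the weekly-user totals below are assumptions used only to illustrate the multiplication, since 560,000 works out to roughly 800 million weekly users rather than the 900 million the tweet cites.

```python
# Back-of-envelope check of the prevalence math quoted above.
# Assumptions: the 0.07% weekly rate comes from the tweet; the user totals
# below are illustrative, not figures confirmed by OpenAI.

def weekly_affected(weekly_users: int, rate: float = 0.0007) -> int:
    """Estimated users per week showing signs of psychosis or mania."""
    return round(weekly_users * rate)

for users in (800_000_000, 900_000_000):
    print(f"{users:,} weekly users x 0.07% = {weekly_affected(users):,} per week")

# Output:
# 800,000,000 weekly users x 0.07% = 560,000 per week
# 900,000,000 weekly users x 0.07% = 630,000 per week
```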
Keeks 🦋 @DietCoke_Esq
I want a podcast where I get to just yap and ask other attorneys questions. It would be something fun like Motion to Yap
29 replies · 6 reposts · 146 likes · 6.1K views
Star | Technology & Cyber Law Attorney
@omarsbigsister Looks like big tech's attempt to publicly humiliate victims to deter them from bringing cases at least worked on some people! 🤣😭 Maybe read the case and look at the evidence before posting.
0 replies · 0 reposts · 1 like · 131 views
alpha @omarsbigsister
oh and by the way the girl who sued instagram for giving her anxiety was constantly being physically and emotionally abused by her mother. her own sister tried to take her life. but no let's blame instagram reels for your depression and anxiety, sure.
12 replies · 168 reposts · 874 likes · 9K views
Star | Technology & Cyber Law Attorney
@ReemAmirIbrahim Hi. I’ve helped victims of online trafficking, r*pe, grooming, and attempted m*rder. Parents had every parental control, monitored them, did everything they could. Some platforms are programmed to work around this and/or harm children. Hope that helps! Let’s not victim blame.
0 replies · 0 reposts · 0 likes · 50 views
Reem Ibrahim @ReemAmirIbrahim
Sorry, but this is obviously her parents' fault. KGM sued Meta and YouTube for the psychological damage done by social media. You're not even supposed to have these accounts until you're 13. How did the court find the platforms liable? Clearly, this is a failure of parenting!
67 replies · 41 reposts · 244 likes · 20.6K views
Star | Technology & Cyber Law Attorney retweeted
Alvaro Bedoya @BedoyaUSA
No. Don't blame parents for a product that was painstakingly designed by thousands of people to addict teens. Meta knew that Insta hurt teen girls. They knew it led to body dysmorphia; they even saw evidence it led to thoughts of self-harm. They did it anyways and made billions.
Nico Perrino @NicoPerrino

I'm concerned about this verdict and the overall trend of treating speech platforms as addictive — and therefore dangerous — products. Also, the verdict diminishes the responsibility parents have to raise healthy kids. For example: "Kaley says she began using YouTube at age 6 and Instagram at age 9 and told the jury she was on social media 'all day long' as a child." Where were the parents?

18 replies · 70 reposts · 255 likes · 16.6K views
Star | Technology & Cyber Law Attorney retweeted
Meghann Cuniff @meghanncuniff
KOB 4 TV in New Mexico recorded the end of the verdict publishing in the @Meta trial. The penalties total a whopping $375 million. "Did Meta act willfully by engaging in an unconscionable trade practice? The jury's answer is 'yes.'" Still no verdict in Los Angeles.
philip lewis @Phil_Lewis_

SANTA FE, N.M. (AP) — New Mexico jury finds Meta's platforms are harmful to children's mental health and imposes $375 million penalty.

12 replies · 410 reposts · 1.2K likes · 103.6K views
Star | Technology & Cyber Law Attorney retweeted
CNN @CNN
A jury found Meta violated New Mexico law in a case accusing it of failing to warn users about the dangers of its platforms and protect children from sexual predators. cnn.it/47R3BZY
138 replies · 149 reposts · 404 likes · 114.5K views
Star | Technology & Cyber Law Attorney retweeted
More Perfect Union @MorePerfectUS
BREAKING: New Mexico jury finds that Facebook, Instagram, and WhatsApp are harmful to children's mental health and orders Meta to pay a $375 million penalty.
104 replies · 1.2K reposts · 5.8K likes · 209.8K views
Luiza Jarovsky, PhD @LuizaJarovsky
I'm convinced that a small percentage of the AI industry is already suffering some level of "AI psychosis." Maybe AI is inherently misaligned and unfit to serve any human goals.
149 replies · 57 reposts · 395 likes · 15.1K views
Star | Technology & Cyber Law Attorney retweeted
Dapper Detective @Dapper_Det
🚨BREAKING: @Roblox programmer arrested in New Orleans by Homeland Security Investigations for possessing child rape pornography and importing a child sex doll. Roblox is a pedo mill and must be dismantled.
715 replies · 12.9K reposts · 37.4K likes · 4.8M views
Star | Technology & Cyber Law Attorney retweeted
ki snow ❄️ @kiaraimanii_esq
I understand that ChatGPT told you that you have a multi million dollar case but hear me out
4 replies · 45 reposts · 226 likes · 10.4K views
Star | Technology & Cyber Law Attorney retweeted
Hedgie @HedgieMarkets
🦔 OpenAI's own mental health advisory council unanimously opposed the company's planned "adult mode" for ChatGPT, warning it could foster unhealthy emotional dependence and that minors would find ways to access it. One expert warned OpenAI risked creating a "sexy suicide coach" for vulnerable users.

The council was formed in October after the first ChatGPT-linked suicide of a minor, announced the same day Altman posted on X that adult mode was coming soon. OpenAI delayed the launch from Q1 2026 to later this year, partly because their age prediction system was misclassifying minors as adults 12% of the time. A top safety executive who opposed the release was fired. OpenAI says the firing was unrelated.

My Take

OpenAI created a wellness council after a kid died, staffed it with mental health experts, asked for their advice on launching an erotica feature, received unanimous opposition, and is moving forward anyway. The council exists so OpenAI can say they consulted experts. The experts said no. OpenAI is doing it regardless.

The business logic is straightforward. Altman admitted last August that ChatGPT's chat use case was "saturated" and might get worse. Subscriptions in Europe are reportedly flat. Competitors are catching up. Erotica keeps users engaged and paying.

The wellness council's job was to identify risks, and they did. One expert literally used the phrase "sexy suicide coach." OpenAI's response was to fire a safety executive who agreed with the council and delay the launch until they can get the age verification failure rate down from 12%.

The question isn't whether OpenAI understands the risks. They clearly do. They just decided the money matters more.

Hedgie🤗
19 replies · 22 reposts · 98 likes · 6.1K views