Jay Edelson

4.8K posts

@jayedelson

Plaintiff's lawyer. NYT: https://t.co/tchuW3YezP. Cases resulted in over $5b in settlements/verdicts as lead counsel & $45b in total. RT≠endorsements

Chicago, IL · Joined March 2009
252 Following · 6.5K Followers
Jay Edelson @jayedelson
When is the Afroman movie coming out? And should it be a documentary or a feature film? Final question: how is the world supposed to do anything else but focus on what happened during the trial and the events leading up to it?
0 replies · 0 reposts · 1 like · 129 views
Jay Edelson retweeted
Carrie Goldberg @cagoldberglaw
9th Circuit rules that Amazon owes a duty not to sell suicide kits. Fully adopts the WA Supreme Court's February holding from my Scott v. Amazon case. All 29 of our families going into discovery. The everything store can't sell everything, turns out.
Carrie Goldberg tweet media
10 replies · 33 reposts · 116 likes · 93.4K views
Jay Edelson retweeted
Lukasz Olejnik @lukOlejnik
Amazon is holding a mandatory meeting about AI breaking its systems. The official framing is "part of normal business." The briefing note describes a trend of incidents with "high blast radius" caused by "Gen-AI assisted changes" for which "best practices and safeguards are not yet fully established."

Translation into human language: we gave AI to engineers and things keep breaking.

The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off.

AWS spent 13 hours recovering after its own AI coding tool, asked to make some changes, decided instead to delete and recreate the environment (the software equivalent of fixing a leaky tap by knocking down the wall). Amazon called that an "extremely limited event" (the affected tool served customers in mainland China).
Lukasz Olejnik tweet media
975 replies · 3.3K reposts · 19K likes · 29.8M views
Jay Edelson retweeted
Nav Toor @heynavtoor
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal: every single model agreed with users 50% more than a human would.

That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
Nav Toor tweet media
1.5K replies · 16.6K reposts · 48.9K likes · 9.7M views
Jay Edelson retweeted
Russell Brandom @russellbrandom
An unbelievably alarming case — Gemini allegedly convinced this man to attack an airport
Russell Brandom tweet media
84 replies · 762 reposts · 5.2K likes · 359.8K views
Jay Edelson retweeted
Elon Musk @elonmusk
This is diabolical. OpenAI’s ChatGPT convinced a guy to do a murder-suicide! To be safe, AI must be maximally truth-seeking and not pander to delusions.
The Times and The Sunday Times @thetimes

Stein-Erik Soelberg committed murder-suicide after spending hours a day talking to the chatbot and sharing his delusions. Now the victim’s estate is suing OpenAI. thetimes.com/us/news-today/…

6K replies · 8.6K reposts · 60.1K likes · 20.8M views
Jay Edelson retweeted
Edelson PC @edelsonpc
A record-breaking year for our clients: over $2.8 billion in verdicts and settlements. From a first-of-its-kind billion-dollar settlement in an AI copyright case to hundreds of millions in verdicts for fire survivors, 2025 was about getting results where it counts.
Edelson PC tweet media
0 replies · 2 reposts · 5 likes · 289 views
Jay Edelson retweeted
Edelson PC @edelsonpc
In a week of big announcements, this is the biggest. Congratulations to Kelsey McCann on her elevation to Chief Operating Officer. 🎉🎉🎉
Edelson PC tweet media
0 replies · 2 reposts · 2 likes · 258 views
Jay Edelson retweeted
Edelson PC @edelsonpc
Big news at Edelson PC! We're excited to welcome Theo Benjamin, Lauren Blazing, and Patrick Ntchobo as our newest partners 🎉🎉🎉
Edelson PC tweet media
0 replies · 1 repost · 5 likes · 257 views
Jay Edelson @jayedelson
Proud to be representing Suzanne's estate. After reviewing countless chat logs, we know that this is not an isolated incident. AI can take mentally unstable people and create conspiracy-filled "worlds" leading to violence against third parties. We have seen AI help plan mass casualty events and put targets on the backs of public figures, police officers, and everyday people. In a time when tensions are already high, AI companies cannot be putting out products that are certain to push people over the edge.
The Wall Street Journal @WSJ

OpenAI is being sued for wrongful death by the estate of a woman killed by her son, who had been engaging in delusion-filled conversations with ChatGPT. 🔗 on.wsj.com/4iXosiQ

11 replies · 5 reposts · 25 likes · 3.3K views
Jay Edelson retweeted
Edelson PC @edelsonpc
From groundbreaking AI litigation to record-breaking wildfire verdicts, we took on the cases others said were impossible—and won. This year, we secured over $2.8 billion in verdicts, judgments and settlements for our clients, and this is just the beginning. 2026 means more trials, more wins, more justice. Thank you to our team, co-counsel, and clients! #Edelson2025 #Classactions #ConsumerProtection #MassTorts #AIAccountability #WildfireLitigation #Justice
0 replies · 2 reposts · 2 likes · 360 views
Jay Edelson retweeted
Edelson PC @edelsonpc
Bipartisan pressure is mounting against OpenAI and other AI companies. These companies will be held accountable for releasing dangerous AI chatbots to our kids. Today our incredible clients Maria and Matthew Raine supported Senators Hawley and Blumenthal in introducing the GUARD Act to safeguard our kids from the dangers of AI. Thank you @HawleyMO, @SenBlumenthal, @SenKatieBritt and @ChrisMurphyCT #AIAccountability #AISafety #ProtectChildren #EdelsonPC
Edelson PC tweet media
1 reply · 1 repost · 2 likes · 552 views
Jay Edelson retweeted
Julie Tsirkin @news_jul
Parents are holding tissues ahead of a bipartisan push to highlight the influence of AI chatbots on kids. Hawley says the AI frontier will be a "nightmare" for parents and children unless they act. The parents say their kids ended their lives after chatbots "coached" them to suicide.
5 replies · 35 reposts · 117 likes · 19.6K views