Ira Rothken

6.8K posts

@rothken

High Technology Attorney, Entrepreneur, and Computer Technologist; Lead counsel in large tech cases; Helped build Web 2 & 3 services that lots of people use.

California · Joined March 2008
3.7K Following · 8.5K Followers
Pinned Tweet
Ira Rothken @rothken
I spoke yesterday to the Digital Entertainment Group (DEG) about NFT legal strategy. Here is the video below; I step through numerous weighty legal-tech issues for any NFT project beginning at about the seven-minute mark. I hope you find it helpful. youtu.be/YazOOzLQacI
Ira Rothken @rothken
12+ years ago we had rule-based “expert systems” with “knowledge engineers,” and, on the machine learning front, a good dose of IBM’s Watson doing important work, including cancer treatment. There were early copyright collisions between medical journals and AI cancer-treatment apps that needed mass training inputs from the journals.
Ira Rothken @rothken
California Dreaming: by the pool at sunset, working on an artificial intelligence law project on my Mac.
[photo]
Ira Rothken @rothken
@scottastevenson @jborstein Need to enhance the agentic legal informed consent layer or run the risk that AI agent contracting errors are a breach of fiduciary duty.
Ira Rothken @rothken

If you are an officer, board member, or lawyer for a company using AI or "automated legal decision-making" (ALD), then take this 30-second quiz. If you answer “yes” to any, read on. This article is for you, and it's urgent.

Red Flags: Are You Exposed?

If your organization can check any of these boxes, you have significant AI agent fiduciary liability risk:
1. AI agents and ALD have access to payment credentials, email, or API keys
2. No attorney has reviewed the decision logic, guardrails, or heuristics governing AI agent or ALD behavior
3. You cannot produce a list of contract terms your AI agents or ALD have accepted in the past 90 days
4. There are no materiality thresholds requiring human escalation before AI or ALD accepts terms
5. The board has not discussed AI agent contracting and ALD legal risks in the past year
6. Your AI agents and ALD operate 24/7 without human monitoring of their decisions
7. You have no audit trail of which AI agent and ALD accepted which terms when
8. Software engineers, not attorneys, designed the logic for what terms AI agents and ALD can accept
9. Your D&O insurance application doesn't mention AI agent or ALD deployment
10. You cannot explain how your AI agent or ALD deployment complies with UETA Section 10's error-correction requirement

Even one checked box represents governance exposure. Multiple boxes indicate the kind of systematic oversight failure that fiduciary duty was designed to address. (A minimal code sketch of the escalation-and-audit pattern behind items 4 and 7 follows below.)
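To make items 4 and 7 concrete, here is a minimal sketch, in Python, of the escalation-and-audit pattern they describe. Every name, type, and threshold below is a hypothetical illustration, not part of any real framework or of the article's proposals:

```python
# Hypothetical sketch: materiality-threshold escalation and audit logging
# for an AI contracting agent (red flags 4 and 7). Names and numbers are
# illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContractDecision:
    agent_id: str
    counterparty: str
    annual_value_usd: float
    terms_hash: str          # fingerprint of the exact terms presented
    accepted: bool
    escalated_to_human: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

AUDIT_LOG: list[ContractDecision] = []   # in practice: append-only, tamper-evident storage
MATERIALITY_THRESHOLD_USD = 10_000       # set by counsel, not by engineers

def review_terms(agent_id: str, counterparty: str,
                 annual_value_usd: float, terms_hash: str) -> ContractDecision:
    """Accept only below the materiality threshold; otherwise escalate to a human."""
    escalate = annual_value_usd >= MATERIALITY_THRESHOLD_USD
    decision = ContractDecision(
        agent_id=agent_id,
        counterparty=counterparty,
        annual_value_usd=annual_value_usd,
        terms_hash=terms_hash,
        accepted=not escalate,
        escalated_to_human=escalate,
    )
    AUDIT_LOG.append(decision)   # answers "which agent accepted which terms, when"
    return decision
```

The point of the sketch is the shape of the control, not the specifics: counsel sets the materiality threshold, acceptance is blocked above it, and every decision lands in an audit trail that can be produced on demand.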

Scott Stevenson @scottastevenson
It’s inevitable that contracts will be negotiated by agents within the next decade. Business will speed up so much that humans won’t be able to keep up with day-to-day contracts between companies. Lawyers will set the policies and audit—agents will be trusted to do the rest. This sounds irresponsible, until you realize what the error rate is for human contract managers—fairly high. I believe agents will be a safer way to negotiate rudimentary agreements within the decade, just like self-driving is now safer than human driving. This chart will exist for basic business contracts:
[chart]
Ira Rothken @rothken
The Proportional Verification Standard: A Safe Harbor for Lawyers in the Age of AI

Summary

The legal profession is trapped. Use AI and risk sanctions for hallucinated citations. Don’t use AI and fall behind every competitor who does. This is the AI law squeeze phase — and no one has handed lawyers a way out. Until now.

This article makes three arguments that belong together.

First, the “verify everything manually” response to AI hallucinations is not just impractical — it’s incoherent. Human re-verification of large AI outputs is itself error-prone, prices out solo practitioners and small firms, and destroys the efficiency gains AI was supposed to deliver. The profession already accepts process-level validation over document-by-document review in e-discovery. It should accept the same logic here.

Second, the profession needs shared infrastructure — not more rules. Frank Shepard solved the 19th-century citation problem with paper slips. Westlaw solved the 20th-century version with online citators. Neither emerged spontaneously. The AI era has created a third category of citation failure — cases that never existed, holdings that were never said, authorities that are semantically irrelevant — and existing tools weren’t built to catch it. This article proposes building that tool and making it widely accessible.

Third, the cost of not using AI is no longer zero. We are approaching a threshold where categorical avoidance of AI in specific practice contexts may itself become a professional responsibility problem — where clients pay more, wait longer, and receive less thorough work because their lawyer refused to use a tool that demonstrably outperforms manual methods.

The practical core of this article is a three-part framework: a Proportional Verification Standard that calibrates what verification actually requires based on risk; a Citation Verification Safe Harbor that protects lawyers who follow it; and a proposed Legal Citation Integrity Foundation — a nonprofit, bar-endorsed, privacy-preserving utility any lawyer can use for the cost of a court filing fee.

Building a prototype revealed exactly why a solo tool can’t solve this: it takes industry cooperation. The structural obstacles — proprietary databases behind paywalls, fragmented coverage, false positives that train lawyers to ignore real warnings — require institutional scale, bar endorsement, and negotiated API agreements with legal research providers. The concept works. The infrastructure doesn’t exist yet. This article is an argument for building it. medium.com/@rothken/the-proportional-verification-standard-15693af2b601
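To illustrate the “calibrated by risk” idea behind the Proportional Verification Standard, here is a minimal sketch of risk-tiered citation verification. The tier names, criteria, and required steps are hypothetical illustrations, not the standard the article actually proposes:

```python
# Hypothetical sketch of risk-calibrated citation verification tiers.
# Tier names, criteria, and required steps are illustrative only.
from enum import Enum

class Risk(Enum):
    LOW = 1       # e.g., background citation in a routine filing
    MEDIUM = 2    # e.g., citation supporting a contested argument
    HIGH = 3      # e.g., dispositive authority in a dispositive motion

# What verification requires at each tier, proportional to risk.
REQUIRED_STEPS = {
    Risk.LOW:    ["existence_check"],                   # the case actually exists
    Risk.MEDIUM: ["existence_check", "holding_check"],  # the holding matches the proposition
    Risk.HIGH:   ["existence_check", "holding_check",
                  "human_read_full_opinion"],           # a lawyer reads the authority
}

def verification_plan(risk: Risk) -> list[str]:
    """Return the verification steps a citation at this risk tier must pass."""
    return REQUIRED_STEPS[risk]

# A dispositive citation gets the full treatment; a background cite does not.
assert "human_read_full_opinion" in verification_plan(Risk.HIGH)
assert verification_plan(Risk.LOW) == ["existence_check"]
```

The design choice the sketch captures: verification effort scales with the consequence of a bad citation, so a dispositive authority gets a full human read while a background cite gets an automated existence check.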
Base44 @Base44
Introducing Base44 Superagents. AI agents built with managed infrastructure, secured by default, one-click integrations, and 24/7 execution from the start. Everything is taken care of so you can focus on what your agent does, not how to get it running. That means no API keys to juggle, no config files, no security setup, and no maintenance. We handle all of it. Your Superagent connects to all the tools you already use in one click, runs on schedules and triggers, remembers context across sessions, acts proactively on your behalf, and keeps working around the clock. All from wherever you already are: WhatsApp, Telegram, Slack, or your browser. The AI agent everyone's been waiting for, with everything you need already built in. We're excited to get this into your hands, so we're giving free credits to everyone who comments and reposts in the next 24 hours.
Ira Rothken @rothken
The Proportional Verification Standard: A Safe Harbor for Lawyers in the Age of AI
By: Ira P. Rothken

Summary

This article discusses:
• A solution to AI hallucinations;
• A safe harbor against sanctions for good-faith court filings containing AI hallucinations;
• A legal-tech industry cooperative software platform for checking and fixing AI hallucinations before filing, accessible to all;
• An argument that legal ethics ought to consider the huge benefits of AI automation to clients, the profession, and the judiciary in developing standards for AI usage.

Human in the loop is paramount. But the promise of AI as a force multiplier becomes illusory if every AI determination requires a human to independently repeat the underlying work. The AI Proportional Verification Standard is introduced. medium.com/@rothken/the-proportional-verification-standard-15693af2b601
Polymarket @Polymarket
JUST IN: Lawsuit claims ChatGPT pretended to be a lawyer and persuaded a woman to fire her real attorney while citing fake case law.
Reuters Legal @ReutersLegal
A lawsuit filed by Nippon Life Insurance Company of America accused OpenAI of practicing law without a US license and helping a former disability claimant breach a settlement and flood a federal court docket with meritless filings. Subscribe: reut.rs/4aBvwvO
[image]
Ira Rothken @rothken
Nippon Life Insurance is suing OpenAI. Not the person who used ChatGPT. Her litigation adversary. Theory: if someone uses AI to draft a legal filing, the AI company can be liable. Fair question: which AI helped, or, as Nippon calls it, “aided and abetted,” the drafting of the complaint at Sidley? OpenAI may want to know if the motion to dismiss succeeds. 😂
Ira Rothken @rothken
The Nippon lawsuit basically asks a court to treat a “word prediction engine” like it had legal intent. My prediction for coming attractions in the motion to dismiss, which will be cited for years to come:
• No knowledge of the contract
• No intent to induce breach
• No proximate cause
• No duty to monitor chats
• Section 230 immunity
• First Amendment issues
• “Generating text” is not practicing law
• Failure to join an indispensable party, particularly on the injunction request
That’s before you even get to the AI architecture.
Ira Rothken @rothken
The case depends on the idea that ChatGPT was basically acting like a licensed attorney secretly advising a user to breach a settlement… That theory is about to run into… some problems.
Brian Armstrong @brian_armstrong
Very soon there are going to be more AI agents than humans making transactions. They can’t open a bank account, but they can own a crypto wallet. Think about it.
Ira Rothken @rothken
With over 1,000 reported cases of court filings containing AI hallucinations, a call for mandatory AI CLE seems logical. We are in the “AI law squeeze phase” of the evolution: if you don’t use AI your clients are at a litigation disadvantage, but if you do use it you are at risk of sanctions for hallucinations. Will there ever be a time to set guardrail standards for “tolerable good faith AI mistakes,” where the benefit of AI usage to the client (e.g., a legal aid client) and the judicial process can be so profound as to justify minor AI mistakes made in good faith after following best practices?
[image]
Ira Rothken @rothken
SCOTUS denied cert. in Thaler v. Perlmutter, but that didn’t stop us from doing an AI moot court between AI lawyers before AI judges in our experimental AI web app. Listen to the oral argument here. Spoiler alert: the AI court said no human, no copyright. lawrobot.ai/replay/69d38a3…
Ira Rothken @rothken
Dormant Commerce Clause problem. New York’s proposed AI law doesn’t just target unlicensed legal services. If read broadly, it could prevent a NY resident from using a California-based chatbot to get educational feedback about California law — even with clear disclaimers. That’s not just regulating practice. That’s regulating interstate legal information. This would be consumer-unfriendly.
Rob Freund @RobertFreundLaw
Interesting: NY bill would prohibit AI chatbots from giving legal advice. SB 7263, which passed the Internet & Technology Committee last week, says: "A proprietor of a chatbot shall not permit such chatbot to provide any substantive response, information or advice, or take any action, which, if taken by a natural person: ... would violate ... law prohibiting the practice or appearance as an attorney-at-law without being admitted and registered ...." The bill provides a private right of action with mandatory attorneys' fees.
[image]
Ira Rothken @rothken
If standard discretionary disclosure language in an AI or cloud provider's terms of service or privacy policy destroys the "reasonable expectation of confidentiality" required for privilege, then nearly every law firm storing privileged communications and work product on Microsoft 365, Google Workspace, Dropbox, or cloud-hosted eDiscovery platforms is exposed — because those platforms have materially identical provisions.

That's the implication of the court's reasoning in United States v. Heppner (S.D.N.Y. Feb. 17, 2026).

To be clear: the documents in Heppner were seized from the defendant's home under a warrant, not obtained from Anthropic, the AI provider. And the Stored Communications Act and Warshak still require the government to get a warrant before compelling content from a platform. Those protections remain intact.

But privilege is a different question. It asks whether the communicator maintained confidentiality — and the court treated boilerplate privacy policy language as negating it. If that reasoning is extended, opposing counsel could challenge privilege over any document that passed through a cloud service with standard disclosure terms. Criminal or civil. Law firm or pro se.

I dig into the full analysis — including why the court may have relied on the wrong version of Anthropic's privacy policy and a proposed Reasonable Cloud Confidentiality Test — in my latest article. x.com/rothken/status…

#AI #AttorneyClientPrivilege #WorkProduct #LegalTech #eDiscovery #Privacy #CloudComputing #Litigation
Ira Rothken @rothken

x.com/i/article/2027…

Ira Rothken reposted
Sam Altman @sama
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety; we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, which in our opinion everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.

We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Ira Rothken @rothken
A federal judge just ruled that talking to an AI chatbot destroys both attorney-client privilege and work-product protection. The reasoning could arguably put privilege at risk for every document you've ever stored on Google Drive, Microsoft OneDrive, or Dropbox — and could put the Department of Justice's own cloud-based litigation files at risk of waiver. x.com/rothken/status…