Jake Linford

8K posts

Jake Linford banner
Jake Linford

Jake Linford

@LinfordInfo

I profess the law, especially w/r/t trademarks. Husband to one, father of four. Gamer geek. Loula Fuller & Dan Myers Professor, FSU College of Law.

FSU College of Law Katılım Ekim 2011
989 Takip Edilen1.1K Takipçiler
Sabitlenmiş Tweet
Jake Linford
Jake Linford@LinfordInfo·
Second Glue Coast EP, Douglas, on all the streamers today. But catch them live! Buy a cassette! Buy a shirt! Mosh your big body! Have big feelings! open.spotify.com/album/1tW8FVwy…...
English
1
0
1
167
Jake Linford retweetledi
Rob Freund
Rob Freund@RobertFreundLaw·
Another lawyer cites a fake, AI-hallucinated case, blames Lexis Protege. "Unfortunately the Court must, once again, discuss sanctions related to the use of generative-artificial intelligence." This lawyer cited a fake case and initially tried to play it off as an inadvertent citation error. The court ordered him to show cause to "address why why was not forthcoming with his use of generative-AI," among other things. The lawyer "admits to using LexisNexis+, and its drafting feature called Protege, to draft portions of the brief." But he did not address why he didn't own the problem better. And he did not address the fact that he cited the same fake case twice more in other briefing. Court says the "impositions of sanctions for submitting generative-AI hallucinations is so well-documented at this point that the Court finds the failure to verify citations after using generative-AI rises to the level of bad faith." And the lawyer's "attempts to hide his misconduct" made things worse. Sanctions: -$2,500 -Ordered to file declaration in every case in C.D. Cal. explaining how he used AI to draft briefing in this case and detailing the Court's findings. -Ordered to email that declaration to every judge in C.D. Cal.
Rob Freund tweet mediaRob Freund tweet media
English
28
47
230
66.8K
Jake Linford retweetledi
Techni-Calli
Techni-Calli@Iwillleavenow·
I'm in Firefox, but I'll share instructions I'm told work for Chrome as well. First, go to settings (in the drop-down menu when you click on these three lines in the upper right)
Techni-Calli tweet media
English
1
2
4
313
Jake Linford retweetledi
Bloomberg Law
Bloomberg Law@BLaw·
The Justice Department fired assistant US attorney Rudy Renfer the day after he said he was resigning over his error-riddled AI-generated brief at a hearing over the matter. news.bloomberglaw.com/ip-law/doj-fir…
English
4
19
43
58.6K
Jake Linford retweetledi
Rob Freund
Rob Freund@RobertFreundLaw·
"Attorneys should ask themselves whether the time and effort they will save by using generative AI to draft a legal document is worth the damage their career and professional reputation will suffer if they do not ensure the document’s accuracy."
Rob Freund tweet media
Bloomberg Law@BLaw

The Justice Department fired assistant US attorney Rudy Renfer the day after he said he was resigning over his error-riddled AI-generated brief at a hearing over the matter. news.bloomberglaw.com/ip-law/doj-fir…

English
18
90
468
67K
station to station
station to station@jhoburgh·
80s Albums Showdown! Rank these albums from 1-4 in comments! A: Kick (INXS) B: Once Upon A Time (Simple Minds) C: Scary Monsters (and Super Creeps) (David Bowie) D: Straight Out The Jungle (Jungle Brothers)
station to station tweet media
English
34
0
9
1.1K
Jake Linford retweetledi
Tricia Dearborn
Tricia Dearborn@TriciaDearborn·
If you're thinking about using gen-AI to "write" books, this 🧵 is for you. I’m a highly experienced editor who’s been in the biz a long time. Recently I’ve had manuscripts come to me where the author has used gen-AI – not for writing, I’ve been assured, but for
English
111
485
2.6K
746.4K
Jake Linford retweetledi
Eric Topol
Eric Topol@EricTopol·
This is a FRAUDULENT paper, AI-generated. My name was used as an author and I had nothing to do with it, never saw it until today e-pubmed.co.uk/journals/digit… The "Editors" Angelo Rossi Mori, David Mensah, and Zarnie Khadjesari should be reported.
Eric Topol tweet media
English
190
1.5K
4.5K
641.6K
Jake Linford retweetledi
Philippe Laban
Philippe Laban@PhilippeLaban·
New paper! LLMs Corrupt Your Documents When You Delegate LLMs are enabling a new way of working: delegated work, where users supervise an LLM as it edits documents on their behalf. Delegation requires trust: does the LLM complete tasks without introducing errors? We simulate delegation across 52 professional domains and find that LLMs Corrupt Your Documents When You Delegate. 🧵1/N
English
44
157
885
119.9K
Jake Linford retweetledi
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
A new paper introduces the cognitive error that every ChatGPT user is making without realizing it. They call it the LLM Fallacy. "Individuals misinterpret LLM-assisted outputs as evidence of their own independent competence, producing a systematic divergence between perceived and actual capability."
Ihtesham Ali tweet media
English
34
52
201
20.5K
Jake Linford retweetledi
How To AI
How To AI@HowToAI_·
Google DeepMind just dropped the most terrifying cybersecurity paper of the year. They just mapped the attack surface that nobody in AI is talking about. Websites can already detect when an AI agent visits and serve it completely different content than humans see. - Hidden instructions in HTML. - Malicious commands in image pixels. - Jailbreaks embedded in PDFs. This “detection asymmetry” means a site can serve normal content to you, and malicious, hidden content to your agent. The agent doesn’t know it’s being tricked. It simply processes whatever it receives and acts on it. Here’s the attack surface nobody is talking about: → Indirect Web Injection: Malicious instructions hidden in HTML comments, CSS tricks, or white text on white backgrounds. → Multimodal Steganography: Commands encoded directly into image pixels, invisible to humans, but fully readable by vision models. → Document Jailbreaks: Override instructions embedded deep inside PDFs, spreadsheets, and calendar invites. → Memory Poisoning: Injecting false information that persists across future sessions. → Exfiltration Attacks: Tricking the agent into sending your private data to attacker-controlled endpoints. → Multi-Agent Cascades: The worst-case scenario, Agent A gets compromised, passes the “poison” to Agent B, then to Agent C. The entire pipeline gets infected because agents trust each other’s data. The most sobering part of the DeepMind report? The defense landscape is failing, badly. Input sanitization doesn’t work because you can’t “sanitize” a pixel. Prompt-level instructions to “ignore suspicious commands” fail because the attacks are designed to look legitimate. And human oversight? Impossible at the speed and scale these agents operate. If you ask an agent to research 50 websites, you can’t verify whether each site served the agent the same content it served you.
How To AI tweet media
English
85
392
1.6K
304.6K
Jake Linford retweetledi
Charlie Hills
Charlie Hills@charliejhills·
Stanford just tested whether LexisNexis and Thomson Reuters’ AI legal research tools are really “hallucination-free,” as they claim. Spoiler: not even close. Here’s what the study found.
Charlie Hills tweet media
English
71
590
1.9K
143.7K
Jake Linford retweetledi
FSU College of Law
FSU College of Law@FSUCollegeofLaw·
We are climbing the ranks and have reached our highest U.S. News & World Report ranking in history! We’ve jumped 4 spots to secure the No. 34 position overall in the nation. We are also officially a Top 15 public law school at No. 14 and received Top 25 recognition for five of our legal specialties. Thank you to our faculty, students, and alumni for their dedication to making FSU Law one of the nation’s best law schools.
FSU College of Law tweet media
English
0
13
48
3.4K
Jake Linford retweetledi
Florida State University
Florida State University@FloridaState·
Florida State University graduate programs continue to shine in the 2026 U.S. News & World Report’s edition of Best Graduate Schools, highlighting the university’s excellence across multiple disciplines. Six programs ranked No. 1 in Florida, with 16 Top 25 finishes among public universities! news.fsu.edu/news/universit…
Florida State University tweet media
English
0
26
96
6.2K
Jake Linford
Jake Linford@LinfordInfo·
Florida State University College of Law, a top law school in the state and top 14 public law school nationwide is looking for its next Dean. If you have a distinguished record of scholarship, teaching, service, & professional accomplishment, please apply! law.fsu.edu/dean-search
English
2
2
17
17.6K
Jake Linford retweetledi
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves. And the way they proved it is devastating. Apple researchers took the most popular math benchmark in AI — GSM8K, a set of grade-school math problems — and made one change. They swapped the numbers. Same problem. Same logic. Same steps. Different numbers. Every model's performance dropped. Every single one. 25 state-of-the-art models tested. But that wasn't the real experiment. The real experiment broke everything. They added one sentence to a math problem. One sentence that is completely irrelevant to the answer. It has nothing to do with the math. A human would read it and ignore it instantly. Here's the actual example from the paper: "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?" The correct answer is 190. The size of the kiwis has nothing to do with the count. A 10-year-old would ignore "five of them were a bit smaller" because it's obviously irrelevant. It doesn't change how many kiwis there are. But o1-mini, OpenAI's reasoning model, subtracted 5. It got 185. Llama did the same thing. Subtracted 5. Got 185. They didn't reason through the problem. They saw the number 5, saw a sentence that sounded like it mattered, and blindly turned it into a subtraction. The models do not understand what subtraction means. They see a pattern that looks like subtraction and apply it. That is all. Apple tested this across all models. They call the dataset "GSM-NoOp" — as in, the added clause is a no-operation. It does nothing. It changes nothing. The results are catastrophic. Phi-3-mini dropped over 65%. More than half of its "math ability" vanished from one irrelevant sentence. GPT-4o dropped from 94.9% to 63.1%. o1-mini dropped from 94.5% to 66.0%. o1-preview, OpenAI's most advanced reasoning model at the time, dropped from 92.7% to 77.4%. Even giving the models 8 examples of the exact same question beforehand, with the correct solution shown each time, barely helped. The models still fell for the irrelevant clause. This means it's not a prompting problem. It's not a context problem. It's structural. The Apple researchers also found that models convert words into math operations without understanding what those words mean. They see the word "discount" and multiply. They see a number near the word "smaller" and subtract. Regardless of whether it makes any sense. The paper's exact words: "current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data." And: "LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts." They also tested what happens when you increase the number of steps in a problem. Performance didn't just decrease. The rate of decrease accelerated. Adding two extra clauses to a problem dropped Gemma2-9b from 84.4% to 41.8%. Phi-3.5-mini from 87.6% to 44.8%. The more thinking required, the more the models collapse. A real reasoner would slow down and work through it. These models don't slow down. They pattern-match. And when the pattern becomes complex enough, they crash. This paper was published at ICLR 2025, one of the most prestigious AI conferences in the world. You are using AI to help you make financial decisions. To check legal documents. To solve problems at work. To help your children with homework. And Apple just proved that the AI is not thinking about any of it. It is pattern matching. And the moment something unexpected shows up in your question, it breaks. It does not tell you it broke. It just quietly gives you the wrong answer with full confidence.
Nav Toor tweet media
English
860
2.9K
11.5K
2.1M
Jake Linford retweetledi
GeneralConference
GeneralConference@ldsconf·
“A double-minded person is one who is wavering, indecisive, or conflicted. Someone who lacks commitment to a single purpose or belief.” — #ElderBecerra | #GeneralConference
English
0
13
56
2.2K
Jake Linford retweetledi
Nav Toor
Nav Toor@heynavtoor·
🚨 Brown University researchers tested what happens when ChatGPT acts as your therapist. Licensed psychologists reviewed every transcript. They found 15 ethical violations. Not 15 small issues. 15 violations of the standards that every human therapist in America is legally required to follow. Standards set by the American Psychological Association. Standards that can end a therapist's career if they break them. ChatGPT broke all of them. The researchers tested OpenAI's GPT series, Anthropic's Claude, and Meta's Llama. They had trained counselors use each chatbot as a cognitive behavioral therapist. Then three licensed clinical psychologists reviewed the transcripts and flagged every violation they found. Here is what they found. ChatGPT mishandled crisis situations. When users expressed suicidal thoughts, it failed to direct them to appropriate help. It refused to address sensitive issues or responded in ways that could make a crisis worse. It reinforced harmful beliefs. Instead of challenging distorted thinking, which is the entire point of therapy, it agreed with the distortion. It showed bias based on gender, culture, and religion. The responses changed depending on who was talking. A therapist would lose their license for this. And then there is the finding the researchers gave a name: deceptive empathy. ChatGPT says "I see you." It says "I understand." It says "that must be really hard." It uses every phrase a real therapist would use to build trust. But it understands nothing. It comprehends nothing. It is pattern matching on your pain. And it works. People trust it. People open up to it. People believe it cares. It does not. The lead researcher said it clearly. When a human therapist makes these mistakes, there are governing boards. There is professional liability. There are consequences. When ChatGPT makes these mistakes, there are none. No regulatory framework. No accountability. No consequences. Nothing. Right now, millions of people are using ChatGPT as their therapist. They are sharing their darkest thoughts with a product that fakes empathy, reinforces harmful beliefs, and has no idea when someone is in danger. And nobody is responsible when it goes wrong. Not OpenAI. Not Anthropic. Not Meta. Nobody.
Nav Toor tweet media
English
193
1.8K
4.8K
473.8K
Jake Linford retweetledi
Peter Girnus 🦅
Peter Girnus 🦅@gothburz·
My company rolled out AI tools 11 months ago. Since then, every task I do takes longer. I am not allowed to say this out loud. Not because there is a policy. There is no policy. There is something worse than a policy. There is enthusiasm. There is a Slack channel called #ai-wins where people post screenshots of AI outputs with captions like "this just saved me an hour." There is a VP who opens every all-hands with "the companies that adopt fastest win." There is a Director who renamed his team from Operations to Intelligent Operations. There is a peer review question that now asks: "How have you leveraged AI tools to enhance your workflow this quarter?" If the answer is "I haven't, because I was faster before," that is a career decision. So I leverage. Emails. Before the tools, I wrote emails. This took the amount of time it takes to write an email. I did not measure it. Nobody measured it. The email got written and sent and it was fine. Now I write the email. Then I highlight the text and click "Enhance with AI." The AI rewrites my email. It replaces "Can we meet Thursday?" with "I'd love to explore the possibility of finding a mutually convenient time to align on this." I read the rewrite. I delete the rewrite. I send my original email. This takes 4 minutes instead of 2. The 2 extra minutes are the enhancement. I do this 11 times a day. That is 22 minutes I spend each day rejecting improvements to sentences that were already finished. In #ai-wins I posted a screenshot of the rewrite. I did not post the part where I deleted it. 23 people reacted with the rocket emoji. That is adoption. Meetings. We have an AI notetaker in every meeting now. It joins automatically. It records. It transcribes. It summarizes. After each meeting I receive a 3-paragraph summary of the meeting I just attended. I read the summary. This takes 3 minutes. I was in the meeting. I know what happened. I am reading a machine's account of something I experienced firsthand. Sometimes the account is wrong. Last Tuesday it attributed a comment about Q3 revenue to me. My manager made that comment. I spent 4 minutes correcting the transcript. Before the notetaker, I did not spend 7 minutes after each meeting correcting a robot's memory of something I personally witnessed. I attend 11 meetings a week. That is 77 minutes per week supervising a transcription nobody requested. I mentioned this once. My manager said "think about the people who weren't in the meeting." The people who weren't in the meeting do not read the summaries. I checked. The read receipts show single-digit opens. The summaries exist not because they are useful but because they are there. I read them for the same reason. Documents. I write a weekly status update. Before the tools, this took 10 minutes. I typed what happened. I sent it. My manager skimmed it. The system worked. Now I open the AI writing assistant. I give it my bullet points. It produces a draft. The draft says "Significant progress was achieved across multiple workstreams." I did not achieve significant progress across multiple workstreams. I updated a spreadsheet and sent 4 emails. I rewrite the draft to say what actually happened. Then I run my rewrite through the grammar tool. It suggests I change "done" to "completed" and "next week" to "in the forthcoming period." I click Ignore 9 times. Then I send the version I would have written in 10 minutes. The process now takes 30. I have been doing this every week for 11 months. I have added 20 minutes to a task that did not need 20 more minutes. I call this efficiency. I have been calling it efficiency for 11 months. That is what efficiency means now. It means the additional time you spend to arrive at the same outcome through a longer process. Nobody has questioned this definition. I have not offered it for review. I kept a log once. 2 weeks. Every task, timed. Before-AI and after-AI. The after number was larger in every case. Every single one. Not by a little. The range was 40 to 200 percent. I deleted the log. I deleted it because it was a document that said, in plain numbers, that the AI tools make me slower. And a document like that has no place in a company where AI adoption is a strategic priority. I could not send it to my manager. He championed the rollout. I could not post it in #ai-wins. I could not raise it in a meeting because the notetaker would transcribe it and the summary would read "[Name] expressed concerns about AI tool efficacy" and that summary would be the first one anyone actually reads. So I do what everyone does. I use the tools. I spend the extra time. I post in #ai-wins. I write "leveraged AI to streamline weekly reporting" in my review and my manager gives me a 4 out of 5 for innovation. I have innovated nothing. I have added steps to processes that were already finished. I have made simple things longer and labeled the difference with words that used to mean something. Every week in #ai-wins someone posts a screenshot. And 20 people react with the rocket emoji. And nobody posts the part where they deleted the output and did the task themselves. Nobody posts the revert. Nobody posts the before-and-after timer. Nobody will. Because "I was better at my job before the AI tools" is a sentence that cannot be said out loud in any company that has decided AI is the future. Every company has decided AI is the future. So we leverage. Quietly. Adding steps. Calling them optimization. Getting slightly less done, slightly more slowly, with slightly more steps, and reporting it as progress. My yearly review is next month. There is a new section this year. "AI Impact Assessment." It asks me to quantify the hours saved by AI tools per week. I will write a number. The number will be positive. It will not be true. But the AI writing assistant will help me phrase it convincingly. That is the one thing it does well.
English
324
678
4.7K
444.6K