Daniel W. Linna Jr.

@DanLinna
prof & Dir Law & Technology, @NorthwesternLaw & @NorthwesternEng; @CodeXStanford Affiliated Faculty; frmr #BigLaw partner; AI4Law; Law4AI; ppl+process+data+tech

What next, NYC banning legal advice over the radio? Turns out... 🧵 thread newsletter.pessimistsarchive.org/p/nyc-once-ban…

I understand that this is where the trend points, but I don't believe it. Not because models won't be capable of it, but because you don't know the feature set and tradeoffs you want in a development project that complex.

As a lawyer, let me tell you the main reason why AI will likely NOT kill all lawyers, and why law is actually one of the most 'AI-safe' professions today: the legal profession is very good at protecting itself. It is built around competence, authority, credibility, and gatekeeping, and has been for centuries.

If AI systems become very good at generating structurally complex and legally accurate outputs, legal procedure rules will be amended to ensure that:
- a human lawyer is always involved in a legal case;
- every legally relevant document is reviewed and signed by a human lawyer.

Bar associations worldwide will also likely create new rules around legal representation, including procedural and behavioral rules, so that a human lawyer is always necessary.

I don't see this changing in the next 15-20 years. After that, lawyers will probably find a new way to gatekeep. I view law as one of the 'AI-safest' professions to pursue today (much safer than computer programming, by the way).

Notre Dame would beat both these teams.