Colin Lachance

9K posts

@colinlachance

Long-time legal info/tech/ai agitator. Troy McClure of legal innovation activities. You might remember me from such companies/events/panels/tweets/posts/etc…as

Canada · Joined October 2019
1.4K Following · 2K Followers
Ross Guberman & BriefCatch@legalwritingpro·
@colinlachance Great point. I use Opus all the time personally, but for products we’re building, Sonnet 4.6 is a better cost-benefit tradeoff for many tasks.
Colin Lachance@colinlachance·
Project Hail Mary. 👍👍 Been a long time since I’ve found myself wanting to see a movie a second time the day after seeing it.
Colin Lachance@colinlachance·
@zackbshapiro You could give away 100% of everything you do with AI as of today and it wouldn’t matter: 1) no one can apply someone’s secret AI sauce; they have to adapt it to their situation, and 2) you will be constantly tweaking your own approach as things change and you find new ways of working
Zack Shapiro@zackbshapiro·
New article sometime this week. It might be the last of these I write in public for a while. This one is the closest I'll come to explaining, in detail, how I actually use AI. It's as much of the secret sauce as I'm willing to give away for free.
Colin Lachance@colinlachance·
Reminder: this (i.e. asking the AI to teach you) is how @LawQi_learn works for lawyers.
[image attached]
Mark Cuban@mcuban

I’m going to tell you how much worse it was at the start of the PC revolution for white collar workers trying to adapt, vs today with AI.

Today, presumably every white collar worker has access to a smartphone and/or a PC/laptop. Back then, a PC cost $4,995; an off-brand was $3,995. $5k in 1984 is about $16k today. It was really expensive. The only reason I could learn how to code and support software is because my job let me take home a PC to learn. By reading the software manual. Literally. RTFM. Or pay to go to training. Classes that started at hundreds of dollars then. It was expensive. It absolutely limited who could get ahead.

Today, ANYONE can go to their browser, to the AI LLM website of their choice, and type in the words “I’m a novice with zero computer background, teach me how to create an agent that reads my email and …” That concept applies to LEARNING ANYTHING.

Think about what this means. Any employee of any company can say “I need to learn how to xyz for my job, which is to do the following: tell me what more information you need to help me be more efficient, productive and promotable.” Or “what new skills can you teach me that will help me reduce my chances of getting laid off.” Or “what suggestions do you have for me to communicate to my boss, who I barely know, to help my chances of staying employed.”

These aren’t great prompts. But they are a start that anyone can take. Think about how incredible that is. Back in the day it was so much harder for white collar workers. It was harder for new grads because unless they took comp sci, they probably had never used a PC.

Big companies are going to cut jobs. No question about it. Small companies are going to need more and more AI-literate thinkers who can help them compete or get an edge.

What I tell every entrepreneur, and it’s more crucial today: “When you run with the elephants there are the quick and the dead. Adopt tech quickly, you can outmaneuver big companies.”

Colin Lachance@colinlachance·
@Allinallnotbad Zero might be slightly unfair. I’d go 0.1. On an especially optimistic day I might say 0.2. My circles are admittedly overweight at the extreme through a lot of lawyer/builders, but I do encounter normies understanding and leveraging the leaps in that and Opus 4.6 pro extended.
Samuel Roland@Allinallnotbad·
One of the things I've most realized starting to spend a lot of time on legal twitter is that approximately 0 lawyers are currently using top-end models; the lack of understanding of how the top end (5.4 Pro Extended) has largely solved hallucination problems is severe.
Colin Lachance@colinlachance·
@ItsMattsLaw Same. I used AI to build this during flight delays and layovers since last night. LexBluff.com serves no useful purpose, and several detrimental purposes. But it was fun to vibe into existence.
[image attached]
Matt Margolis@ItsMattsLaw·
our new AI tool uses AI to create new AI to AI your AI into AI with AI
Colin Lachance@colinlachance·
The things you can accomplish during flight delays and layovers
[image attached]
Colin Lachance@colinlachance·
The value of a legal AI wrapper should not be measured against the frontier model, but against the value-for-effort the customer would otherwise need to expend managing the frontier model into a flow suitable for the customer’s intended purpose. Today that customer effort could mean 2000-word prompts, managing agents, handling back-end databases, version control, workflow triggers and more. Certainly much more effort than throwing a doc at ChatGPT/Claude/Gemini and saying “fix”.

For a very small (but growing) portion, the legal AI wrapper already falls short. For that group, the self-developed wrapper is now the preferred starting point, with legal AI wrappers being the exception for when circumstances (e.g. multi-party interactivity, security, doc volume) make them the better choice.

As models, harnesses and tools improve, the self-developed wrappers will take on a bigger share. Similarly, the scope of work done directly within a frontier model requiring little to no wrapper (i.e., drop docs and say “fix”) is also growing.

But even as the traditional space occupied by the legal AI wrappers diminishes, the opportunity to claim new space grows, because they (or at least a few of them) can leverage the frontier models as well into flows and actions that appear to some customers as more valuable than the self-wrapper or raw-model options.
Zack Shapiro@zackbshapiro·
Please shill me your best steelman arguments for why legal AI wrappers add value:
Colin Lachance@colinlachance·
@JosephPatrice My recollection is that in the early days of o1/DeepSeek/etc… reasoner models we were told the thinking trail was more like a post-hoc explanation than an incremental reason-act-reflect-reason-act loop. We (ok, I) just assumed it got better at narrating an actual step-by-step process. I guess it didn’t.
Joe Patrice@JosephPatrice·
This seems like it could be significant for AI legal research claims
Colin Lachance@colinlachance·
For all the NDA-focused legal tech and legal AI of the past several years, how is it possible that people are still finding “improvements” to their MNDAs? 🤔 Unless… asking a machine to “improve” a document always results in edits no matter how great the document?
[GIF]
Computer@AskPerplexity

Perplexity Computer can markup any document with Final Pass. It runs 5 reviews in parallel and returns a fully marked up version with actionable edits. In one query, it identified several improvements to our MNDA that we actually implemented. Available with Enterprise Computer.

Colin Lachance@colinlachance·
I train lawyers and other legal folks on AI. Grok rarely merits mention (if ever). Even among folks I learn from, unless it’s on Twitter, their resources/posts rarely mention Grok. Good as a quick factcheck/explainer of a single post. And to go where other LLMs dare not. That’s it.
Sam Harden@samuelharden·
Claude can't create images (good IMO), but it can now use Excalidraw to create images using shapes and drawing tools. So essentially you have an AI using a computer to draw things using math, but it can't "see" it. Anyway, here's Claude's drawing of a dragon.
[image attached]
Shubham Saboo@Saboo_Shubham_·
What adding new skills to OpenClaw Agents feels like.
Colin Lachance@colinlachance·
Leaving the Spellbook stuff aside (I’m a big fan of the team), you’re bang on about why working with bare metal is best. Those that do are playing an entirely different game than everyone else. It’s also going to be a very small percentage of the profession.

My bigger concern is for those that are squandering the opportunity to scratch below the surface of LLM behaviours. There’s a base level of understanding required of a savvy user that most lawyers aren’t even pursuing. The window is closing, and soon the legal AI tools will, to them, remain forever things they buy but don’t understand.

I’m not a programmer but I’m old. That means I learned to navigate DOS before Windows hit the market. Understanding DOS meant I could make sense of file systems in Windows, which meant I could make sense of URLs and how the internet worked. And so on. We’re in the DOS era of AI.

My other AI analogy is food. Most of us have one year left where we can comfortably walk into the AI kitchen and make sense of the ingredients, the recipes and the equipment. Next year, we’ll be overrun by menus, and the prospect of looking beyond them to the kitchen will be daunting and very difficult.

Back to your point. I’d love it if more lawyers understood the value of working with the bare metal. They won’t. Just like most people won’t be chefs.
Zack Shapiro@zackbshapiro·
Bunch of people have sent me this Dropbox meme to dunk on my criticism of legal AI wrappers. But they are making a category error.

Generative AI isn’t just infrastructure, it’s judgment amplification. The whole point is that it has to be your judgment doing the driving, not a wrapper UI with an “opinionated” workflow abstracting away the hard cognitive work (sorry @scottastevenson).

With a wrapper, your primary cognitive contribution is reviewing outputs. You’re the approval layer at the end of a process the product designed: the check-box guy. With bare metal, your judgment is distributed throughout: in how you prompt, how you’ve built your workflows, how you iterate. The former atrophies your judgment. The latter supercharges it.

If you let a dashboard decide how you work with AI instead of the other way around, you’re outsourcing the part of the work that makes you hard to replace. So eventually, you will get replaced. Hope this helps.
henrique cunha@henrycunh

@zackbshapiro it just don't fit your needs
