Daniel Brugger
@_DBrugger
721 posts

Law/AI, Head Legal Tech/AI (https://t.co/jeRjCYUeJS), Senior Legal AI Advisor (https://t.co/05252HfQW9), Founder and Podcaster

Männedorf, Switzerland · Joined May 2021
166 Following · 270 Followers
Daniel Brugger @_DBrugger
Identity Before Technology: What We Can Learn from Atomic Habits for the AI Transformation

In "Atomic Habits," James Clear presents a model that describes three layers of behavior change. Outcomes sit on the outside (what you get). Processes come next (what you do). And at the core lies identity (what you believe). Most people start their behavior change from the outside in: they focus on what they want to achieve and then try to find the right process to get there (outcome→process→identity). Clear argues that the reverse direction works better. You start with identity. Instead of focusing on what you want to achieve, you focus on who you want to become. The habits follow naturally (identity→process→outcome). I believe this approach can also be applied to the AI transformation in the legal market.

The Outcome Trap
Most AI initiatives in the legal sector begin with the same pitch: faster work, lower costs, more consistency, and so on. The language is purely outcome-oriented. That is not wrong, but it is superficial and can cause the transformation to fail: an expensive legal tech tool, a few enthusiasts, but no organization-wide transformation.

The Process Plateau
More advanced organizations move to the second layer, the process level. They redesign their workflows, clean up their document management, and build prompt libraries or "workflows" in legal tech tools. They start thinking about which processes AI can actually support. That is better, but it can stall too. You can redesign a process as many times as you want; if the people executing it do not believe in what it stands for, they will not use it.

The Real Starting Point
Organizations that will lead the AI transformation should, in my view, ask themselves a different question. The right question is not "How can AI make us faster or better?" or "How should we restructure our workflows?" but rather "What kind of firm or organization are we?" That is the identity level.

Identity determines which processes feel natural and which feel forced. It decides whether a new AI tool is adopted or abandoned after the first disappointing result. It is the difference between a law firm where partners say "We are an AI-forward firm" and one where they say "We tried that AI thing."

Identity First, Then Everything Else
For law firms, this means the AI conversation must start at the partner retreat, not in the IT department. It must begin with beliefs about what legal work means in 2026 and beyond and where the firm's distinctive value lies. Once that is clear, you can move on to designing processes. The outcomes will follow (identity→process→outcome). But anyone who skips the identity level will keep buying tools that never see widespread adoption and building workflows that feel unnatural. Better to start with identity.

___

Originally published as a German-language blog post on iusbubble.com: iusbubble.com/c/public/ident…
Daniel Brugger retweeted
Marc Andreessen 🇺🇸
My information consumption is now 1/4 X, 1/4 podcast interviews of the smartest practitioners, 1/4 talking to the leading AI models, and 1/4 reading old books. The opportunity cost of anything else is far too high, and rising daily.
Daniel Brugger @_DBrugger
🚀 Join IusBubble.com - Where Law Meets Digital Transformation

IusBubble is a digital community for lawyers and everyone interested in the future of law and its digital transformation. What started as an idea is now a growing community of over 780 members exchanging ideas in English, German, and French.

IusBubble hosts regular webinars focused on practical, real-world use cases. Just yesterday, members joined a step-by-step tutorial on how to connect their AI tools to an MCP server. All sessions are recorded and available for members.

👉 Join over 780 members and start bubbling ideas into reality. Registration form: docs.google.com/forms/d/e/1FAI…
Daniel Brugger @_DBrugger
Learn to Think Before You Start to Prompt

In a concise blog post (retogubelmann.net/2026/01/16/tho…), Reto Gubelmann highlights a crucial aspect that frequently gets overlooked:

"At a university, efficiency is not the goal. Formation of habits, virtues, and acquisition of deep skills is. You acquire those by doing the nitty-gritty work of a BA student yourself. Not because you will do these tasks without GenAI when on the job (there, it is about efficiency), but because it is the only way to acquire the grit, depth, precision, and flexibility of thought that will allow you to succeed in a labor market shot through with AI."

A thought-provoking blog post on the use of AI in legal education.
Daniel Brugger @_DBrugger
Set Up an AI Assistant for Swiss Case Law in Your AI Tool (Step-by-Step Tutorial)

With the new MCP integration from Lexi Search, any compatible AI tool can be connected directly to LexiSearch (lexisearch.ch/mcp-integration). Your AI tool then knows current Swiss court decisions and can cite them.

Lionel Voser shows how this works in IusBubble.com webinar no. 28 next Tuesday, 27 January 2026, 12:00-12:30. The webinar offers a step-by-step tutorial on connecting your assistant to Lexi Search via the Model Context Protocol (MCP), demonstrated with Claude and Mistral (further details in the Bubble). By the end of the webinar you will have a fully configured assistant that searches current Swiss case law in the chat and cites decisions cleanly.

💡 Why I find LexiSearch's solution interesting: Through the Model Context Protocol (MCP), legal data (here, Swiss case law) becomes pluggable infrastructure. Instead of depending on expensive specialized solutions or additional legal tech platforms, users can freely choose their AI tool and connect it directly to a data source. The result is not lock-in but an open, modular system.

---

IusBubble.com is a digital community for lawyers and for everyone else interested in law and its digitalization. Membership is free. Registration is simple but controlled, to keep bots and disruptive behavior out of the community. Sign up at: docs.google.com/forms/d/e/1FAI…
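For AI clients that load MCP servers from a JSON config file (Claude Desktop's claude_desktop_config.json is the common example), connecting a remote server like this typically comes down to a single config entry. The sketch below generates such an entry in Python; the endpoint URL, the server name, and the use of the `mcp-remote` bridge package are illustrative assumptions, not the official LexiSearch setup, which is documented at lexisearch.ch/mcp-integration.

```python
import json

# Placeholder endpoint -- the real URL and any auth steps are on
# lexisearch.ch/mcp-integration (assumption for illustration only).
LEXISEARCH_MCP_URL = "https://lexisearch.ch/mcp"

def mcp_server_entry(name: str, url: str) -> dict:
    """Build an `mcpServers` config entry that bridges a remote MCP
    endpoint into a stdio-based client via the `mcp-remote` adapter."""
    return {
        "mcpServers": {
            name: {
                "command": "npx",
                "args": ["-y", "mcp-remote", url],
            }
        }
    }

config = mcp_server_entry("lexisearch", LEXISEARCH_MCP_URL)
print(json.dumps(config, indent=2))
```

Once an entry like this is merged into the client's config and the client restarted, the server's tools appear in the chat, which is what makes MCP feel like pluggable infrastructure rather than another standalone platform.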
Daniel Brugger retweeted
Joël Niklaus @joelniklaus
Super excited to be heading to Davos next week for my first @wef! I'll be participating in a roundtable discussion on legal AI in Davos on January 20th. Thanks to @RobertMahari for the invitation and organization! I'm looking forward to the conversation with @_DBrugger, Paul Chow, Dr. Rehana Harasgama-Zehnder, Ilona Logvinova, Philippe Gillieron, @alex_pentland, and Roland Vogl. The discussion will focus on how legal AI is evolving beyond document automation toward genuine reasoning support. We'll explore how law firms and in-house teams can use AI strategically, how data-driven analysis is changing business models and client expectations, and what new questions around trust, liability, and professional responsibility are emerging. Leading Beyond Boundaries brings together researchers, business leaders, and policymakers in Davos to discuss AI, justice, healthcare, and other areas shaping our future. The initiative combines invite-only roundtables with public sessions for cross-disciplinary conversation.
Daniel Brugger @_DBrugger
🚀 Lawcad.com: Built, tested, and now proven under live conditions.

You can test a lot internally before launch. But the real test only comes under real conditions. That is exactly what I was recently able to run through with Marco Candinas:

1️⃣ Marco built his online course "Practical Application of Artificial Intelligence in Everyday Legal Work" on Lawcad himself, using the easy-to-understand drag-and-drop editor.
2️⃣ The course was successfully sold to several people.
3️⃣ Marco requested his course share (95% of the course price) through the platform, and I paid it out to him.

🔁 Circle closed. What looked good internally has proven itself in live operation: creation, sale, billing, payout. Everything works. Thank you, Marco, for the trust and for going along as the first real end-to-end test. 💪
Daniel Brugger @_DBrugger
Jagged intelligence (x.com/karpathy/statu…) means that AI models are not uniformly intelligent. They perform certain tasks extremely well, while failing catastrophically at others. Unfortunately, it is by no means always obvious what large language models can do reliably and where dangerous failures may occur. With growing experience, one can develop a certain intuition about the situations in which a model is robust and where caution is required. However, this intuition does not emerge automatically; it is built through observation, testing, trial and error, and deliberate reflection on the behavior of the tools. This is why it is important to actively experiment with these tools in order to develop a feel for their strengths and weaknesses.
Andrej Karpathy @karpathy

Jagged Intelligence

The word I came up with to describe the (strange, unintuitive) fact that state of the art LLMs can both perform extremely impressive tasks (e.g. solve complex math problems) while simultaneously struggle with some very dumb problems. E.g. example from two days ago - which number is bigger, 9.11 or 9.9? Wrong. x.com/karpathy/statu…

or failing to play tic-tac-toe, making non-sensical decisions: x.com/polynoamial/st…

or another common example, failing to count, e.g. the number of times the letter "r" occurs in the word "barrier", ChatGPT-4o claims it's 2: x.com/karpathy/statu…

The same is true in other modalities. State of the art LLMs can reasonably identify thousands of species of dogs or flowers, but e.g. can't tell if two circles overlap: x.com/fly51fly/statu…

Jagged Intelligence. Some things work extremely well (by human standards) while some things fail catastrophically (again by human standards), and it's not always obvious which is which, though you can develop a bit of intuition over time. Different from humans, where a lot of knowledge and problem solving capabilities are all highly correlated and improve linearly all together, from birth to adulthood.

Personally I think these are not fundamental issues. They demand more work across the stack, including not just scaling. The big one I think is the present lack of "cognitive self-knowledge", which requires more sophisticated approaches in model post-training instead of the naive "imitate human labelers and make it big" solutions that have mostly gotten us this far. For an example of what I'm talking about, see Llama 3.1 paper section on mitigating hallucinations: x.com/karpathy/statu…

For now, this is something to be aware of, especially in production settings. Use LLMs for the tasks they are good at but be on a lookout for jagged edges, and keep a human in the loop.

Daniel Brugger @_DBrugger
Plato, Writing, and Our AI Debate: An Old Pattern Repeats Itself

[AI] is "inhuman" because it pretends to offer knowledge that is, in truth, no knowledge at all. True knowledge cannot exist in an external medium. [AI] is a manufactured product, static and incapable of replacing real thinking. Those who use [AI] weaken their memory, for they rely on an external support instead of training their own judgment. [AI] is a technology that produces an outward appearance of wisdom while letting inner thinking wither.

Sounds like today's AI debate, doesn't it? Yet these concerns are far older. They come from Plato, not directed at artificial intelligence, of course, but at the technology of writing. I merely replaced "text" or "writing" with "AI."

Discussions about ChatGPT, AI assistants, and digital tools feel new, but the underlying questions are surprisingly old. Janique Brüning notes in her recent article (KI als Herausforderung für das juristische Studium, ZDRW 4, 2024, pp. 291 ff., 299 inlibra.com/10.5771/2196-7…) that Plato, in the dialogue Phaedrus, raised almost exactly the same objections that many people today express about AI or other digital tools: writing, he argued, was a dangerous technology. It was "inhuman" because it simulated knowledge that was not real knowledge. True knowledge, Plato claimed, could exist only in the mind, not in any external medium. Writing, as a manufactured object, was static, non-dialogical, and incapable of replacing genuine thought. Those who relied on written texts would weaken their memory, depending on an external support instead of cultivating their own judgment. Writing lacked interactivity: you can question a human being, but not a text. A text merely repeats the same words, whether they fit or not. It is a technology that produces outward pseudo-wisdom while allowing inner thought to atrophy.

Reading the objections that Brüning recounts, one hears today's AI debate: "AI makes us dumber," "AI produces superficial knowledge," "AI destroys our ability to think for ourselves." The pattern is timeless. Every new technology is first perceived as a threat to thinking before it becomes a natural tool for thinking. Perhaps we stand at exactly this threshold with AI today.

The question is not whether AI will change our thinking. It will, just as every media innovation since writing has done. The question is how we integrate AI in a way that improves our learning and our work.

IusBubble Blogpost: iusbubble.com/c/public/plato…
Daniel Brugger @_DBrugger
Legal Questions on German Law: Accuracy Rates of the Newest Gemini Models and GPT-5

Markus Conrads and Sascha Schweitzer have once again tested LLMs with legal questions from German law, this time using the newest Gemini models and GPT-5. Their previous study was published a few months ago in the NJW (Markus Conrads / Sascha Schweitzer, Juristische Problemlösung mit KI – Leistung und Grenzen großer Sprachmodelle, NJW 2025, 2888 ff.). Now, Prof. Markus Conrads has published an update on LinkedIn (linkedin.com/posts/markus-c…), presenting fresh results based on 200 complex multiple-choice cases from General Contract Law (BGB AT), Law of Obligations, Business Law, and Employment Law.

With Thinking Mode activated, the models achieve the following overall results, according to Conrads:
• Gemini 3 Pro: 79.5%
• ChatGPT-5 High Effort: 78%
• Gemini 2.5 Pro: 75%

I find this impressive. In today's AI discourse, discussions often focus on hallucinations of large language models and the resulting verification effort. However, this analysis clearly shows how capable modern models have become when dealing with well-structured, exam-style legal questions: with accuracy rates between 75% and almost 80%, they already operate in the range of solid exam performance. That's a remarkable figure, especially considering that (based on my understanding) no legal databases or external sources were connected (keyword: RAG).

It would also be interesting to compare how human lawyers would perform on the same tests. My guess: also not at 100% …
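For orientation, each reported percentage corresponds to a raw count of correctly answered cases out of 200. The counts below are back-calculated from the published rates (my inference; the post reports only the percentages), as a quick Python sketch:

```python
TOTAL_CASES = 200  # multiple-choice cases in the update described above

def accuracy_pct(correct: int, total: int = TOTAL_CASES) -> float:
    """Accuracy as a percentage of correctly answered cases."""
    return 100.0 * correct / total

# Correct-answer counts inferred from the reported rates:
correct_counts = {
    "Gemini 3 Pro": 159,           # 79.5%
    "ChatGPT-5 High Effort": 156,  # 78%
    "Gemini 2.5 Pro": 150,         # 75%
}

for model, n in correct_counts.items():
    print(f"{model}: {accuracy_pct(n):.1f}%")
```

Seen this way, the gap between the best and worst model in the test is only nine additional correct answers out of 200 cases.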
Daniel Brugger @_DBrugger
The End of the Traditional Lawyer? Three Roles for the AI Age

In the podcast episode "The Threefold Division of the Legal Profession in the Post-AI Era" (legal-tech-verzeichnis.de/legal-tech-vid…; Die Dreiteilung des Anwaltsberufs in der Post-KI-Ära), Patrick Prior (@Legal_Tech_News) and Ioannis Martinis (@ioannismartinis) discuss a highly topical question: How does the legal profession change when AI is no longer an optional tool but becomes an integral part of everyday legal work?

In the conversation, Ioannis outlines his thesis that the legal profession is undergoing a fundamental restructuring, resulting in three clearly distinguishable roles:

1⃣ Legal Engineer: They build and maintain AI-supported legal-tech solutions, implement automations, conduct data analyses, and design the infrastructure of digital legal services.
2⃣ Strategic Legal Advisor: Responsible for complex mandates, strategic design, legal value creation, and client leadership, especially in sophisticated or sensitive matters.
3⃣ Legal Validator: The broad majority of lawyers. They review, validate, and take responsibility for the outputs of AI-driven systems.

According to Ioannis, this threefold division eliminates the traditional role of the "generalist lawyer," at least in the form we know it today. Instead, it gives rise to a specialized professional mosaic that fundamentally reshapes requirements for education, technical literacy, and client profiles.

💡 My takeaway: Many of today's discussions about AI in law firms, public administration, and the courts revolve around the implementation and selection of AI tools. But this podcast shows, once again in my view, that the real issue concerns structures, competencies, and professional roles. AI is not just another tool; it changes the organizational model of legal work. This means we need to adapt our training and education formats.

Successful AI integration does not come from "tool trainings," but from building the ability to critically assess AI outputs, use them responsibly, and embed them meaningfully into legal workflows. These are exactly the questions that Ioannis and I will be discussing next semester in our joint course at the University of St. Gallen. We focus on the evolving work practices of legal professionals, the reshaping of professional roles, and the associated requirements for skills and competencies. Our aim is to help students develop the ability to critically evaluate technological developments, assess opportunities and risks, and integrate innovative tools strategically and responsibly into their future legal practice. This is not only about technical understanding, but above all about reflecting on the ethical, legal, and organizational implications for the profession.

How do you see the roles of the future? I would be very interested to hear your perspectives.