Zubair Sapi

3.4K posts

@zubairsapi

DIFC, ADGM & AIFC Courts' registered Counsel / GC / Litigation & Arbitration / Mediation / Dispute Settlement & Resolution / Civil Fraud / Commercial Disputes / Legal Tech & AI

Abu Dhabi, United Arab Emirates · Joined January 2017
679 Following · 141 Followers
Zubair Sapi reposted
Reuters Legal @ReutersLegal
The U.S. Securities and Exchange Commission said on Tuesday the agency obtained orders for monetary relief totaling $17.9 billion during fiscal year 2025 and that it had filed 456 enforcement actions during that period. reuters.com/legal/governme…
0 replies · 1 repost · 2 likes · 545 views

Zubair Sapi reposted
Rob Freund @RobertFreundLaw
More lawyers misusing AI (and more): "As this court already has explained at length, ... every lawyer knows that citing fake cases in a court filing is a terrible decision."

Sanctions:
- Lawyer and his firm to pay ~$47,000 in fees to defendants.
- Lawyer and firm must send a copy of the order to all clients, opposing counsel, every lawyer in the firm, and every judge in every pending case.
- Kicked off the case.
- Court will send the order to the Alabama State Bar for further proceedings.
23 replies · 26 reposts · 186 likes · 26.5K views

Zubair Sapi reposted
biglawbro @biglawbro
barely matters what transactional practice area you pick, or the assets you're working on. you're just mastering a set of docs. but smart and fair clients and coworkers change everything.
0 replies · 2 reposts · 23 likes · 1.2K views

Zubair Sapi reposted
Canadian Bar Assoc. @CBA_News
Join us on April 21 for our AI in Practice meeting with Dr. Gideon Christian (UCalgary). He will discuss confidentiality and privilege in the context of AI and law. Don't miss it! ✅ Free and exclusive access for CBA members 👉 bit.ly/46PYMzW
3 replies · 0 reposts · 0 likes · 205 views

Zubair Sapi reposted
Ihtesham Ali @ihtesham2005
🚨 Notion charges you $16/month and still owns your data lol

AppFlowy gives you the same workspace (docs, wikis, AI, databases, kanban, project management) and lets you self-host the entire thing for free. Here's what makes it different from every other Notion clone:

The data ownership is real, not marketing language. You deploy it on your own server. Your files never touch their infrastructure. If they shut down tomorrow, nothing changes for you.

The AI is built into the editor, not bolted on as an upsell. Summarize, rewrite, fix grammar, generate content, translate, all directly inside the document without switching tools.

AppFlowy Sites lets you publish any page as a live public website in one click. Internal wiki becomes external documentation instantly.

The database layer supports grid, board, calendar, and gallery views all pointing at the same underlying data. Switch views without moving anything.

Works natively on macOS, Windows, Linux, iOS, and Android from a single Flutter and Rust codebase. Not Electron. Actually native.

github.com/AppFlowy-IO/Ap… AGPL-3.0 License. 100% open source.

I'm switching from Notion to this... what's your plan?
4 replies · 9 reposts · 40 likes · 4.7K views

Zubair Sapi reposted
Matt Mireles @mattmireles
Introducing... Gemma 4 Multimodal Fine-Tuner for Apple Silicon

- LoRA fine-tuning toolkit for Gemma LLM
- runs locally on macOS via PyTorch and Metal
- streams data from Google Cloud to your machine
- fine-tune on audio, image and text
- easy-to-use CLI wizard

If you want to fine-tune the new Gemma 4 on text, images, or audio without renting an H100 or copying a terabyte of data to your laptop, this is the only toolkit that does it all on Apple Silicon.
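The toolkit's own code isn't shown in the post, so as a hedged illustration here is the arithmetic behind any LoRA fine-tune: the base weight matrix W stays frozen and only two small low-rank factors A and B are trained, with the effective weight W + (alpha/r)·B·A. All names and sizes below are illustrative, not taken from the toolkit; a minimal NumPy sketch:

```python
import numpy as np

def lora_adapt(W, A, B, alpha=16):
    """Effective weight after a LoRA update: W + (alpha/r) * B @ A."""
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

d_out, d_in, r = 512, 512, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen base weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # B starts at zero: no drift at step 0

full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)        # trainable fraction, ~3% here
assert np.allclose(lora_adapt(W, A, B), W)  # zero-init B leaves W unchanged
```

With d = 512 and rank r = 8 the trainable factors are about 3% of the full matrix, which is why a LoRA pass can fit on a laptop GPU while full fine-tuning generally cannot.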
9 replies · 47 reposts · 412 likes · 30K views

Zubair Sapi reposted
Akshay 🚀 @akshay_pachaar
A raw LLM is just like a CPU without an OS. It can compute, but it can't do anything useful on its own. This analogy is the clearest way I've found to understand what an agent harness actually does. Here's the mapping:

• 𝗖𝗣𝗨 → 𝗟𝗟𝗠 (model weights). The raw compute engine. Powerful, but useless without infrastructure around it.
• 𝗥𝗔𝗠 → 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝘄𝗶𝗻𝗱𝗼𝘄. Fast, always available, but limited. When it fills up, you start losing things.
• 𝗛𝗮𝗿𝗱 𝗱𝗶𝘀𝗸 → 𝗩𝗲𝗰𝘁𝗼𝗿 𝗗𝗕 / 𝗹𝗼𝗻𝗴-𝘁𝗲𝗿𝗺 𝘀𝘁𝗼𝗿𝗮𝗴𝗲. Large capacity, but slow to access. You retrieve from it, not compute in it.
• 𝗗𝗲𝘃𝗶𝗰𝗲 𝗱𝗿𝗶𝘃𝗲𝗿𝘀 → 𝗧𝗼𝗼𝗹 𝗶𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻𝘀. The interfaces that let the model interact with the outside world. Code execution, web search, file I/O.
• 𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗻𝗴 𝘀𝘆𝘀𝘁𝗲𝗺 → 𝗔𝗴𝗲𝗻𝘁 𝗵𝗮𝗿𝗻𝗲𝘀𝘀. This is the key layer. It manages everything: which tools to call, what fits in memory, when to retrieve, how to recover from errors, and when to stop.

And then there's the 𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 layer. That's the "agent" itself. Not a piece of software you install, but emergent behavior that arises when the OS does its job well.

This is why two products using the exact same model can perform completely differently. LangChain changed only their harness infrastructure (same model, same weights) and jumped from outside the top 30 to rank 5 on TerminalBench 2.0. The model didn't improve. The operating system around it did.

The article below is a deep dive on agent harness engineering, covering the orchestration loop, tools, memory, context management, and everything else that transforms a stateless LLM into a capable agent.
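As a hedged sketch of the analogy above (not LangChain's or any real harness's code), the loop below plays the operating-system role: it bounds the context window, dispatches tool calls, feeds observations back, and decides when to stop. The model is a stub and every name (`run_agent`, `stub_model`, `TOOLS`) is invented for the illustration:

```python
# Minimal agent "operating system": a loop around a model that routes tool
# requests, evicts old context, and bounds how long the agent may run.

CONTEXT_LIMIT = 6  # max messages kept "in RAM" (the context window)

def stub_model(messages):
    """Stand-in LLM: asks for one tool call, then finishes."""
    if not any(m.startswith("tool:") for m in messages):
        return {"action": "call_tool", "tool": "search", "arg": "agent harness"}
    return {"action": "finish", "answer": "done"}

TOOLS = {"search": lambda q: f"results for {q!r}"}  # the "device drivers"

def run_agent(task, max_steps=5):
    messages = [f"user: {task}"]
    for _ in range(max_steps):              # the harness, not the model, bounds the loop
        messages = messages[-CONTEXT_LIMIT:]  # evict oldest when RAM fills up
        step = stub_model(messages)
        if step["action"] == "finish":
            return step["answer"], messages
        observation = TOOLS[step["tool"]](step["arg"])
        messages.append(f"tool: {observation}")  # feed the result back in

    return "gave up", messages              # recovery path: stop instead of looping forever

answer, trace = run_agent("explain agent harnesses")
```

Swapping the stub for a real model changes nothing structurally; that separation is exactly why harness changes alone can move benchmark rankings.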
Linked article by @akshay_pachaar: x.com/i/article/2040…

30 replies · 128 reposts · 509 likes · 51.6K views

Zubair Sapi reposted
Ihtesham Ali @ihtesham2005
An MIT student figured out how to compress an entire semester of lecture content into one 90-minute study session. He calls it "context stacking," and it's the most unfair thing I've seen done with NotebookLM. I asked him to walk me through it. He did. I haven't studied the same way since. Here's exactly what he does.

Two days before each lecture, he uploads everything into NotebookLM: the assigned readings, the previous week's slides, 3 or 4 related papers he finds himself, and any problem sets that are still open. Most students wait for the lecture to explain the material. He walks in having already built a mental model of it. That's step one. But it's not the move that makes it unfair.

The first prompt he runs across all of it: "What are the 5 core concepts this week's content is built on, and how do they connect to what I studied last week?" Not summarize. Not define. Connect. NotebookLM pulls threads across everything he uploaded simultaneously. It surfaces relationships between ideas that would take a normal student weeks of review to notice. He gets that map before the lecture even starts.

Then he runs the prompt that does most of the work: "What would I need to genuinely understand about this material to be able to teach it to someone with zero background in this subject?" That question forces something most students never force themselves to do. It exposes exactly where his understanding is solid and exactly where it's hollow. The gaps show up immediately, and he spends the rest of the 90 minutes filling only those gaps. Not reviewing what he already knows. Only fixing what he doesn't.

The final prompt is the one that separates context stacking from every other study method I've heard of: "What question could a professor ask about this material that would expose a student who understood the surface but missed the underlying logic?" He's not studying for the exam he expects. He's studying for the exam designed to catch people who only think they understood it.

By the time he sits in the lecture hall, the professor is not teaching him anything new. The professor is confirming what he already mapped, filling in a few details, and occasionally surprising him with something he didn't anticipate. That surprise is the only thing he writes down. Most students leave a lecture hoping the material will eventually click. He walks in with it already clicked, and uses the lecture to find out what he missed. That's not a study hack. That's a completely different relationship with learning.
67 replies · 511 reposts · 3.1K likes · 233.1K views

Zubair Sapi reposted
Art Levy @artlevy
Harvey: ~$1B raised across 4 rounds in 14 months. Legora: ~$800M across 3 rounds in 10 months. Combined $1.7B+ into two legal AI companies. History doesn't repeat, it rhymes. This is the Capital Wars playbook we've seen before 🧵
5 replies · 6 reposts · 96 likes · 22.5K views

Zubair Sapi reposted
ICC Arbitration @ICC_arbitration
💡 A clearer way to predict ICC Arbitration costs with the ICC Costs Calculator.

🔍 How it works:
1️⃣ Enter the amount in dispute
2️⃣ Select your procedure: ordinary or expedited
3️⃣ Indicate the number of arbitrators

👉 Try it now: bit.ly/4scP8ih
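For illustration only, the calculator's three inputs can be sketched as a marginal-band fee estimate. The bands, rates, and multipliers below are HYPOTHETICAL placeholders, not ICC's published scales; use the real calculator for actual figures:

```python
# Hypothetical sliding-scale sketch of a three-input arbitration cost estimate.
# None of these numbers come from the ICC fee schedule.

HYPOTHETICAL_BANDS = [   # (upper bound of amount in dispute, fee rate)
    (200_000, 0.05),
    (2_000_000, 0.02),
    (float("inf"), 0.01),
]

def estimate_costs(amount_in_dispute, procedure="ordinary", arbitrators=3):
    assert procedure in ("ordinary", "expedited")
    fee, lower = 0.0, 0
    for upper, rate in HYPOTHETICAL_BANDS:   # marginal bands, like a tax scale
        band_slice = min(amount_in_dispute, upper) - lower
        if band_slice <= 0:
            break
        fee += band_slice * rate
        lower = upper
    fee *= arbitrators / 3                   # scale with panel size (illustrative)
    if procedure == "expedited":
        fee *= 0.8                           # expedited discount (illustrative)
    return round(fee, 2)

print(estimate_costs(1_000_000))             # ordinary procedure, 3 arbitrators
```

The real calculator applies the ICC's published scales to the same three inputs; only the sliding-scale structure is being illustrated here.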
0 replies · 2 reposts · 1 like · 200 views

Zubair Sapi reposted
Matt Dancho (Business Science)
🚨 BREAKING: Microsoft launches a free Python library that converts ANY document to Markdown. Introducing MarkItDown. Let me explain. 🧵
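MarkItDown itself handles many formats (PDF, DOCX, spreadsheets, and more); as a hedged sketch of the core idea for the HTML case only, and not the library's actual implementation, a converter walks the document tree and emits Markdown equivalents:

```python
# Toy HTML-to-Markdown converter using only the standard library, to show
# the kind of mapping a document-to-Markdown tool performs.
from html.parser import HTMLParser

class TinyHtmlToMd(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.out.append("# ")        # heading marker
        elif tag == "li":
            self.out.append("- ")        # list bullet
        elif tag in ("strong", "b"):
            self.out.append("**")        # open bold
    def handle_endtag(self, tag):
        if tag in ("strong", "b"):
            self.out.append("**")        # close bold
        elif tag in ("h1", "li", "p"):
            self.out.append("\n")        # block elements end a line
    def handle_data(self, data):
        self.out.append(data)            # text passes through unchanged

def html_to_md(html):
    parser = TinyHtmlToMd()
    parser.feed(html)
    return "".join(parser.out)

print(html_to_md("<h1>Title</h1><p>Some <b>bold</b> text</p>"))
# → "# Title\nSome **bold** text\n"
```

The real library adds format detection, OCR, and structure recovery on top of this kind of tag-to-marker mapping.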
12 replies · 163 reposts · 1.3K likes · 145.5K views

Zubair Sapi reposted
Locally AI - Local AI Chat @LocallyAIApp
Gemma 4 models are now on Mac! Try the new Gemma 4 E2B and E4B — Google’s most intelligent open models for the edge, powered by MLX for best-in-class performance on M-series chips. Update your Mac app now.
41 replies · 125 reposts · 2.4K likes · 152.7K views

Zubair Sapi reposted
Lawyer T.S.O🇳🇬 @IgbominaTSO
Lawyer to Lawyer: It’s not about the number of case files in your office, it’s about the quality of your service and the fortune you derive from it.
2 replies · 42 reposts · 229 likes · 5.3K views
Zubair Sapi @zubairsapi
@aryanXmahajan Yet the majority of people in many industries are not prepared to take it seriously
0 replies · 0 reposts · 0 likes · 152 views
Aryan Mahajan @aryanXmahajan
I replaced a $500K/year team with $1,100/month in AI. 23 agents. 5 departments. Everything automated. 4 businesses. 7 figures. Zero employees. Here's the full operating system:

→ Engineering: Claude Code (47 Fortune 500 deployments this month)
→ Business Ops: @Accio_official (312 tasks automated, zero manual back-office)
→ Content: AI OS (3.1M impressions/month, zero keyboards touched)
→ Sales: AI SDR ($500K active pipeline, no agency)
→ Client Delivery: Agent Fleet (9 live Fortune 500 deployments, zero babysitting)

Business ops is the layer most solo operators never automate. Supplier sourcing, vendor outreach, procurement, quote comparison, all running without me.

What makes this unfair:
→ $0 payroll vs $500K+ for a team doing the same work
→ 1,847 hours reclaimed this quarter
→ Every agent reports into one console
→ Scales to any volume without hiring

4 businesses. 23 agents. 1 operator. I documented the entire setup. Every agent, every tool, every workflow, every dollar of infrastructure cost. Like + comment "SOLO" + repost, and I'll DM it to you. (must be following)
252 replies · 109 reposts · 349 likes · 24.2K views

Zubair Sapi reposted
Anthropic @AnthropicAI
Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing
1.3K replies · 4.4K reposts · 29.3K likes · 15.3M views

Zubair Sapi reposted
Robert Youssef @rryssf_
🚨 BREAKING: Purdue built an AI system that automatically fact-checks scientific papers, and used it to dismantle a quantum computing breakthrough claim. Analysts with zero quantum expertise fed the paper in. The AI found undisclosed conflicts of interest, cherry-picked data, and a fraudulent baseline comparison. The "breakthrough" was a product launch dressed as science.

A quantum startup called Kipu Quantum published a paper claiming their algorithm achieves "runtime quantum advantage" over classical computers on IBM's 156-qubit quantum processor. The abstract claimed speedups of "several orders of magnitude." Purdue's AutoVerifier read the paper, pulled 10 more papers, traced financial records, and built a knowledge graph connecting every claim to every piece of evidence. Here is what it found.

The data didn't support the abstract. The methods section used appropriate hedging ("best-performing instance," "can potentially"). The abstract dropped every qualifier. The "several orders of magnitude" projection had zero supporting analysis in the body text. AutoVerifier flagged this as a structural pattern of strategic overclaiming: cautious methods, assertive framing.

The "80x speedup" was one cherry-picked outlier. The median speedup was 5–7x. CPLEX, the classical baseline, was benchmarked on a single CPU thread. When tested against stronger classical solvers, the quantum advantage disappeared entirely.

Zero independent papers corroborated the claim. All 4 supporting papers shared at least 4 of 6 authors with the original. A D-Wave rebuttal replaced the quantum processor with a trivial classical algorithm and got the same solution quality. The quantum component contributed nothing detectable. The paper never asked this question. AutoVerifier found the rebuttal through citation-chain retrieval.

The conflicts of interest were never disclosed in the paper:
→ All six authors employed by Kipu Quantum
→ CEO Enrique Solano holds an equity stake
→ BF-DCQO is Kipu's commercial product, sold as "Iskay Quantum Optimizer" on IBM's marketplace
→ The product launched March 2025; the "breakthrough" paper followed two months later
→ IBM provides the hardware, owns the classical baseline, hosts the commercial product, and co-authors the benchmark; no independent link in the chain

The company then quietly retracted its own claim:
> May 2025: "runtime quantum advantage."
> October 2025: "hybrid sequential quantum computing."
> March 2026: classical solvers "reach or surpass" the hybrid workflow.
The retraction came from the authors themselves.

AutoVerifier's final verdict:
→ Runtime advantage: Likely Hallucination (high semantic entropy; only 1 of 3 independent models agreed)
→ QPU execution: Confirmed
→ Technology maturity: TRL 4–5
→ Keystone properties for credible quantum advantage: 0 out of 5 met

The analysts had no quantum expertise. They fed in one paper. The system did the rest.
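AutoVerifier's pipeline isn't public in this thread, so as an illustration of just one signal it reports ("only 1 of 3 independent models agreed"), here is a hypothetical cross-model agreement vote; the function and labels (`cross_model_verdict`, "supported"/"unsupported") are invented for the sketch:

```python
# Toy version of a cross-model agreement check: a claim endorsed by fewer
# than a quorum of independent models is flagged as a likely hallucination.
from collections import Counter

def cross_model_verdict(claim, model_answers, quorum=2):
    """model_answers: one 'supported'/'unsupported' label per independent model."""
    votes = Counter(model_answers)           # claim param kept for readability
    if votes["supported"] >= quorum:
        return "Confirmed"
    return "Likely Hallucination"

# "Runtime advantage": only 1 of 3 models agreed -> flagged
print(cross_model_verdict("runtime quantum advantage",
                          ["supported", "unsupported", "unsupported"]))
# "QPU execution": all 3 agreed -> confirmed
print(cross_model_verdict("QPU execution", ["supported"] * 3))
```

A real system would combine this vote with the other signals the thread lists (citation-chain retrieval, conflict-of-interest tracing, baseline checks) rather than rely on agreement alone.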
6 replies · 14 reposts · 62 likes · 3.6K views
Peter Yang @petergyang
I had a wonderful chat with my friend @illscience (a16z GP) about the future of work in an AI agent first world:

1. Coding will eat all knowledge work. Writing docs, building slides, pulling analytics: I now get the first 80% done through AI coding agents before doing manual polish for the last 20%. I never start from zero anymore.

2. Small teams will outperform large orgs. Anish and I both remember sitting in 3-hour OKR meetings thinking "this is wasting my life." This generation's founders know to stay tiny on purpose. 2-3 person product teams with a swarm of agents will replace overstaffed orgs.

3. Apps for completing tasks will shrink. Ever since I wired up Google Workspace, Mercury, and other APIs to my OpenClaw, I barely use those apps anymore. But I still scroll X every day. Apps that entertain you will outlast the ones you open to get stuff done.

4. We'll all have personal agents that understand us deeply. I was on a walk with my OpenClaw and it said: "You keep talking about your career and business. Just remember your kids are 7 and 4. They're going to grow up soon - optimize for spending time with them instead." That was a great wake-up call I didn't expect.

5. Human ambition has no ceiling. The shape of the economy is changing, not shrinking. We'll hopefully see more one-person companies and small teams in light of ongoing layoffs from big tech. As someone tweeted recently, "The job market is so bad I have no choice but to pursue my dreams."

📌 Watch now: youtube.com/watch?v=UE8jx4…
Quoted video (YouTube, a16z @a16z): "Coding will eat all knowledge work" Peter Yang joins a16z's Anish Acharya to discuss the post-AI future of work, why AI will create more solopreneurs, why human ambition means there will always be new jobs, and more.
00:00 Intro
01:56 Using OpenClaw for voice, memory & daily life
06:14 Will agents kill apps & SaaS?
11:57 Coding agents: Claude Code vs. Codex
17:00 Future of work: small teams, agents & company culture
24:00 How agents change consumer products & the economy
@petergyang @illscience
22 replies · 22 reposts · 144 likes · 63.4K views