Muad_Dib (@b162543) · 312 posts
AI-driven quant research, built in public. Publishing what survives walk-forward — and what doesn't. Plus frontier tech notes.



Perplexity is building one of the most secure, scalable agent runtime sandboxes on the market right now. A blog post on how we:
1. Handle proxy API keys for agents securely
2. Run safety detection for all content accessed by agents
3. Encrypt data passed via connectors to agents
4. Decouple storage and compute reliably
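On the first point, one common pattern is a server-side key broker: agents only ever hold short-lived proxy tokens, and an egress proxy swaps the token for the real provider key at request time. A minimal sketch of that idea (my illustration, not Perplexity's actual design; `KeyBroker` and its methods are hypothetical names):

```python
import secrets

class KeyBroker:
    """Server-side mapping from per-agent proxy tokens to real provider keys.

    The real keys never leave this process; agents only see proxy tokens.
    (Hypothetical sketch, not a specific vendor's implementation.)
    """

    def __init__(self):
        self._real_keys = {}      # provider name -> real API key
        self._proxy_tokens = {}   # proxy token -> (agent_id, provider name)

    def register_provider(self, provider, real_key):
        # Store the real key server-side only.
        self._real_keys[provider] = real_key

    def issue_token(self, agent_id, provider):
        # Hand the agent an opaque, revocable token instead of the key.
        token = "proxy-" + secrets.token_urlsafe(16)
        self._proxy_tokens[token] = (agent_id, provider)
        return token

    def resolve(self, token):
        # Called by the egress proxy: exchange the proxy token
        # for the real key just before the outbound API call.
        _agent_id, provider = self._proxy_tokens[token]
        return self._real_keys[provider]
```

Because the token-to-key exchange happens only at the proxy, revoking a compromised agent is just deleting its tokens; the provider key itself never needs rotating.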

Introducing: the Notion Developer Platform. New building blocks that help you (and your coding agents) sync any data source, build any tool, and orchestrate any agent. Follow along 👇 twitter.com/i/broadcasts/1…

Starting June 15, paid Claude plans can claim a dedicated monthly credit for programmatic usage. The credit covers usage of:
- Claude Agent SDK
- claude -p
- Claude Code GitHub Actions
- Third-party apps built on the Agent SDK

Today we're launching the OpenBB App Marketplace: financial data apps from vetted providers, with charts, tables, documents, and other interactive widgets, all live inside the OpenBB Workspace and ready to explore with AI.

Find a provider, trial their data in your actual workflow, and connect an API key when you're ready. Skip sales calls, IT tickets, and integration sprints. The data you need is now a few clicks away.

Early partners include BlueGamma, @findatasets, Outsampler, @synorb, @koinju, Open Portfolio (Alberto Gallini), VecViz (Rodger Coyne), @axioradev, Exponential Technology, and @velo_xyz. More coming soon.

🌐 Go to the OpenBB Workspace (pro.openbb.co) → Apps page → Marketplace tab


You can extend every step of Claude Code's agentic loop. I've been thinking a lot about what that means for the last one. What are you doing to help Claude verify its own work? Genuinely want to hear what workflows people have.
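One answer people give here is hooks: Claude Code can run a command after tool calls, so a test suite fires automatically whenever a file is edited and the results flow back into the loop. A minimal settings fragment sketching that (schema as I understand the hooks config; verify field names against the current Claude Code docs, and swap `pytest -q` for your own verifier):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "pytest -q" }
        ]
      }
    ]
  }
}
```

The design point: verification lives outside the model, so Claude sees hard pass/fail signals rather than grading its own work.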

How do you keep Claude working until the job is done? Claude Code helps with this in a few ways, including one we shipped recently: /goal.

Fast mode for Claude Opus 4.7 is now available in research preview on the API and in Claude Code.



We published new research on how we serve post-trained Qwen3 235B models on NVIDIA GB200 NVL72 Blackwell racks. GB200 is a major step up over Hopper for high-throughput inference on large MoE models, not just a training platform.
