Meris Dabhi

429 posts

Meris Dabhi banner
Meris Dabhi

Meris Dabhi

@Merisdabhi

Building production AI agents | Guardrails, reliability & real-world plumbing | Sharing what actually ships

Присоединился Nisan 2024
41 Подписки32 Подписчики
Meris Dabhi
Meris Dabhi@Merisdabhi·
I never noticed this until today. Was googling something random and Google showed me 2 new dedicated tabs — Forums and Short Videos. Short video = "show me someone actually doing this" Forums = "show me someone who's actually been through this" Google spent 25 years indexing pages. Now they're indexing people. Which accidentally creates the best distribution opportunity I've seen in years. Two new surfaces. Both free. Both uncrowded. I'd start both today if you haven't. No brainer. #google
Meris Dabhi tweet media
English
0
0
0
5
Meris Dabhi
Meris Dabhi@Merisdabhi·
Everyone’s jumping on multi-agent frameworks right now. But most setups still treat agents like independent chatbots that happen to call tools. That’s why they feel smart in testing and fragile in real workflows. The biggest gains don’t come from fancier orchestration. They come from boring, explicit rules for how agents talk to each other and when they stop. Define the handoff. Set the guardrails. Log the chain. Everything else is noise. #cluade #ai
Meris Dabhi tweet media
English
0
0
0
7
Meris Dabhi
Meris Dabhi@Merisdabhi·
MiniMax Music 2.6 is out. AI music just got more intentional. Before: You'd describe a scene, cross your fingers, hope for something usable. Half the time you'd get ambient background noise or "technically music." Now: Write "open with tension, build toward awakening, explode into triumph" and the model actually follows. Beat by beat. Three things worth knowing: → Original BGM in minutes: No sample libraries, no "probably fine to use" tracks. Describe your scene, get something fully yours → Structure that follows your prompt: Intro, bridge, build, explosion — the model understands pacing now → Intentional imperfection: Breathiness in lo-fi, presence in jazz, human touches that sound generated. Actually sound human. Also shipping: under 20s first output, tighter bass response for House/Trap/Drum & Bass, style transfer (reimagine your melody in a different genre).
MiniMax (official)@MiniMax_AI

MiniMax Music 2.6 is live. A few things worth knowing: 🎬 Original BGM in minutes No more hunting for "probably fine to use" tracks. Describe your scene, get something fully yours. 🎭 Structure that actually follows your prompt. You can now write "open with tension, build toward awakening, explode into triumph", and the model follows, beat by beat. For the first time, AI music generation feels less like rolling the dice and more like directing. 🎤 Intentional imperfection In lo-fi, indie folk, jazz — the breathiness that makes a track feel human, not generated. Also shipping with 2.6: → First audio in under 20s: write a prompt, take a breath, it's ready → Improved low-mid frequency response: tighter bass for House, Trap, Drum & Bass → Style transfer & remixing: reimagine your own melody in a completely different genre 14-day free global beta starts today (500 songs/day). 👉Try now: minimax.io/audio/music

English
0
0
0
22
Meris Dabhi
Meris Dabhi@Merisdabhi·
Sometime in the next 2-3 years agents will be using the internet more than humans We designed the whole thing for human eyes, human emotions, human attention spans Agents do not have any of that The internet as we know it was built for the wrong user The opportunity is rebuilding everything for the new user Agent-native search. Agent-native commerce. Agent-native discovery Every category is open again I can't stop thinking about it.
English
0
0
0
9
Meris Dabhi
Meris Dabhi@Merisdabhi·
Anthropic just dropped “Managed Agents.” And honestly, it’s a big deal. You can now spin up an AI agent in minutes instead of months: No infrastructure headaches No complex API wiring Just define tasks, tools, guardrails and you’re done It’s basically: Describe your agent in chat Run it Watch it execute Plus: Built-in environments (sandboxed and secure) Easy integrations (Notion, ClickUp via MCP) Visual step-by-step debugging dashboard But here’s the catch. These agents aren’t truly autonomous. No scheduling. No periodic execution. No always-on workflows. So you still need external automation or APIs to make them useful in real systems. My take: This is the easiest entry point into agents we’ve seen so far. But if you’re building serious automation, you will still need custom setups. We’re getting closer, just not fully there yet.
English
1
0
1
28
Meris Dabhi
Meris Dabhi@Merisdabhi·
Meta just released Muse Spark, and it's a different bet from their previous models. Previous Meta models: instant answers based on training data. Muse Spark: reasoning-first. It thinks step-by-step, tries different approaches if the first doesn't work, spins up subagents to reason in parallel. It's multimodal (text + images), supports tool use, can orchestrate multiple agents. Benchmark-wise: competitive with Claude, Gemini, GPT-5 on specific tasks (reasoning, health, multimodal). Not beating them across the board, but it's the first serious output from Meta's new Superintelligence Labs under Alexandr Wang. The rebuild: 9 months to restructure their entire AI stack. New architecture, new training regime, new data curation. Muse Spark is validation that the new approach works. The bigger models come next. Available now free at meta.ai (with eventual API preview for partners).
Meris Dabhi tweet media
AI at Meta@AIatMeta

Introducing Muse Spark, the first in the Muse family of models developed by Meta Superintelligence Labs. Muse Spark is a natively multimodal reasoning model with support for tool-use, visual chain of thought, and multi-agent orchestration. Muse Spark is available today at meta.ai and the Meta AI app. We’re also making it available in private preview via API to select partners, and we hope to open-source future versions of the model. Learn more: go.meta.me/43ea00

English
0
0
0
39
Perplexity
Perplexity@perplexity_ai·
Today we're announcing the Billion Dollar Build. An 8-week competition where teams will use Perplexity Computer to build a company with a path to $1B. Finalists have the opportunity to secure up to $1M in investment from the Perplexity Fund and up to $1M in Computer credits.
Perplexity tweet media
English
344
559
6.7K
3.2M
Meris Dabhi
Meris Dabhi@Merisdabhi·
Days instead of months. That's the real shift here. Before: Agents work great in isolation. Getting them to production meant wrestling infrastructure, scaling, monitoring, memory management. A team would spend weeks or months on DevOps just to deploy one agent. -> Now: Define your agent, hit deploy, it scales and updates automatically. The harness is tuned for agent workloads specifically—not general compute. That detail matters. Agents have different patterns than typical applications. They spawn subtasks, maintain state, consume in bursts. When infrastructure understands your agent architecture, you move fast
Claude@claudeai

Introducing Claude Managed Agents: everything you need to build and deploy agents at scale. It pairs an agent harness tuned for performance with production infrastructure, so you can go from prototype to launch in days. Now in public beta on the Claude Platform.

English
0
0
1
3.3K
Meris Dabhi
Meris Dabhi@Merisdabhi·
Anthropic just announced Project Glasswing: giving select tech giants early access to Claude Mythos Preview—a model so capable at finding software vulnerabilities it can't be released publicly. Mythos has already discovered thousands of zero-day bugs across every major OS and browser. Some vulnerabilities survived 27 years of review and millions of automated tests. The play: let defenders patch critical infrastructure before adversaries get similar capabilities. Partners include AWS, Apple, Microsoft, Google, Linux Foundation, and others. $100M in usage credits. $4M to open-source security orgs. The real story: AI-driven cybersecurity just crossed a threshold. The old ways of securing systems aren't enough anymore.
Anthropic@AnthropicAI

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

English
0
0
0
74
Meris Dabhi
Meris Dabhi@Merisdabhi·
@sentient_agency Yeah, Coolify is free, but I think Vercel is much easier to use. If you go with Coolify, you need a good VPS; otherwise, your SaaS or web app may load or work slowly. That’s why most people prefer Vercel.
English
0
0
0
270
Sentient
Sentient@sentient_agency·
RIP Vercel bills. Coolify is a free, open source PaaS that kinda gives you most of what Heroku, Netlify, and Vercel do, but it runs entirely on your own server. - One command install with curl - 280+ one-click services you can deploy - Postgres, Redis, MySQL, MariaDB built-in - Auto SSL, custom domains, reverse proxy stuff - Works on basically any VPS, bare metal, or even a Raspberry Pi - No vendor lock-in, your configs stays on your server 51K stars. Apache 2.0. 100% open source. github.com/coollabsio/coo…
Sentient tweet media
English
6
30
220
12.6K
Sundar Pichai
Sundar Pichai@sundarpichai·
T A B S I N C H R O M E
Google@Google

Too many @GoogleChrome tabs open? Try vertical tabs, rolling out now. Just right-click any Chrome window and select “Show Tabs Vertically” to move your tabs to the side of the browser window, making it easier to read page titles and manage tab groups.

English
438
395
9.6K
1.1M
News from Google
News from Google@NewsFromGoogle·
Today, we’re rolling out two new productivity features in Chrome. With vertical tabs, you’ll now have the option to move your tabs to the side of your browser window by selecting “Show Tabs Vertically.” We’re also introducing immersive reading mode, a new full-page interface for deep focus.
English
113
269
2.9K
500.9K
Meris Dabhi
Meris Dabhi@Merisdabhi·
2026 agent framework truth The "best" one isn't the one with the highest SWE-Bench score. It's the one whose failure modes you can actually audit and contain. Hero single-agent wrappers die fast. Graph-based or protocol-driven runtimes force you to design coordination upfront. We've seen too many "autonomous" setups collapse because the framework hid the complexity instead of exposing it. Choose orchestration you can see. Kill-switch it when needed. Everything else is theater.
Meris Dabhi tweet media
English
0
0
0
11
Meris Dabhi
Meris Dabhi@Merisdabhi·
Claude can now search YouTube for you natively Algrow plugged directly into Claude. Real, constantly updated YouTube data. No more generic slop from models trained on frozen datasets → Search current trends → Analyze live video data → Get actual watch patterns, not hallucinated ones Your research workflow just changed. Models have finally escaped their training data cutoff. #cluade
English
0
0
0
50
Meris Dabhi
Meris Dabhi@Merisdabhi·
Sam Altman: the time to debate superintelligence is now AI is doing real cognitive work in coding, knowledge work, science. The capabilities are arriving before the conversation about governance is ready "We want people to start thinking about how that should go" No fixed answers yet. But the window to shape this is narrow. The debate can't wait for the systems to arrive. #ai
English
0
0
0
357
Meris Dabhi
Meris Dabhi@Merisdabhi·
The era of pure neural brute force for agents is ending Neuro-symbolic AI: neural nets for pattern matching, symbolic logic for verifiable reasoning chains → 100x lower training energy → 95% success on structured tasks vs 34% → 5% execution energy overhead We've watched intelligent agents fail not because of raw capability, but because their reasoning wasn't structured. Production systems need auditability, not just scale Control planes in 2026 will route dynamically
Meris Dabhi tweet media
English
0
0
0
23
Sakshi Sugandhi
Sakshi Sugandhi@SakshiSugandhi·
Vibe coding is great until you realise your "SaaS" still requires actual users, which can't be vibe coded
English
39
18
254
12.9K
Meris Dabhi
Meris Dabhi@Merisdabhi·
Google DeepMind just published something critical: a framework for understanding how AI agents can be attacked through malicious content The research identifies four trap types: → Content Injection Traps (exploit perception gaps) → Cognitive Traps (target reasoning flaws) → Behavioral Traps (manipulate long-term memory) → Human-in-the-Loop Traps (misuse human oversight) If you're deploying agents at scale, this isn't academic—it's operational security. Your agents are running on untrusted surfaces.
Meris Dabhi tweet media
English
0
0
0
26
Meris Dabhi
Meris Dabhi@Merisdabhi·
X API now ships with native MCP support for AI agents Real-time data from the most active platform on Earth becomes directly accessible to your agent code → Pay-per-use (no monthly tiers) → xMCP Server + Xurl for seamless agent integration → Official Python & TypeScript SDKs → API Playground for safe testing Your agents can read context and execute actions on X without friction. Plus: up to 20% back in xAI API credits on what you spend Worth moving if you've been on the sidelines
Chris Park@chrisparkX

We’ve made major upgrades to X API: • Pay-Per-Use now GA worldwide • XMCP Server + xurl for agents • Official Python & TypeScript XDKs • API Playground - free realistic simulations New releases coming will be a game changer. Start building → docs.x.com 🚢

English
0
0
0
23