Amit Piplani
@apiplani
2K posts
Engineering leader | Ultra runner
San Francisco, CA · Joined May 2009
720 Following · 346 Followers
Amit Piplani @apiplani ·
Today we are celebrating people, not AI #oscars
0 replies · 0 reposts · 0 likes · 309 views
Amit Piplani reposted
The Alliance for Secure AI @secureainow ·
Today we are launching jobloss.ai. A real-time tracker of AI-driven layoffs across the U.S. These jobs are disappearing. The numbers are growing. And we're counting every single one.
108 replies · 692 reposts · 2.7K likes · 728.6K views
Amit Piplani reposted
Janakiram MSV @janakiramm ·
SKILL.md is eating MCP servers, and that's a good thing.

Your MCP servers are burning 50,000 tokens just to teach an agent what a 200-token markdown file already knows. Brad Feld runs an entire company on 12 skill files. No app. No workflow engine. Just markdown in a git repo. Sentry's David Cramer says it bluntly: many MCP servers shouldn't exist.

The problem? Teams keep building MCP servers for knowledge problems, but MCP was designed for execution problems. The difference is costing you 50x in wasted context and worse agent reasoning.

I wrote the decision framework for getting this right. What's your skill-to-MCP ratio looking like? thenewstack.io/skills-vs-mcp-…
18 replies · 37 reposts · 278 likes · 28K views
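The skill files described in the post above are easy to make concrete. Below is a minimal sketch of one, in the Agent Skills style the post refers to: YAML frontmatter (a name and a description the agent reads up front) followed by plain markdown instructions loaded on demand. The specific skill, its steps, and its wording are invented here for illustration, not taken from the post:

```markdown
---
name: release-notes
description: Draft release notes from merged PRs. Use when the user asks
  to summarize a release or write a changelog entry.
---

# Writing release notes

1. List the PRs merged since the last tag.
2. Group the changes into Added / Changed / Fixed.
3. Keep each bullet to one sentence and link the PR number.
```

Because the agent only ingests the short frontmatter until the skill is actually needed, a file like this costs a few dozen tokens at rest, which is the context saving the post contrasts against an always-loaded MCP server.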
Amit Piplani reposted
Shanaka Anslem Perera ⚡ @shanaka86 ·
Look at this image carefully. You are looking at a Chinese commercial satellite photograph of Prince Sultan Air Base in Saudi Arabia. Every red box is an artificial intelligence model identifying a US military aircraft by type. Every label is in Mandarin. And the base you are looking at is the one Iran fired ballistic missiles at on Saturday night.

A company called MizarVision, founded five years ago in Hangzhou, published this. Not the Pentagon. Not the CIA. Not a classified intelligence briefing delivered to the Situation Room. A Chinese startup with access to sub-meter resolution Earth observation satellites and an AI object detection model that can distinguish a KC-135 Stratotanker from a KC-46 Pegasus from orbit.

Aviation Week confirmed what the image shows. Fifteen KC-135 aerial refueling tankers. Six KC-46 Pegasus tankers. Six E-3 Sentry airborne early warning aircraft, which is significant because only thirty-one E-3s remain in the entire US Air Force inventory worldwide, meaning roughly a fifth of America's operational AWACS fleet is parked on a single ramp in the Saudi desert. Two E-11A Battlefield Airborne Communications Nodes. C-130 Hercules transports. C-5 Galaxy heavy lifters. The backbone of Operation Epic Fury, catalogued from space and published on Weibo.

This is the base that Iran targeted. AFP journalists in Riyadh reported explosions in the eastern part of the capital with thick smoke rising. The Saudi Foreign Ministry condemned Iranian attacks targeting Riyadh and the Eastern Province. Saudi air defenses intercepted the projectiles. But the image you are looking at was published days before the strike. Which means Iran had exactly the same intelligence picture that MizarVision gave the entire world for free.

This is what the democratization of intelligence looks like. In 1991, only the United States could see individual aircraft on a ramp from space. In 2003, a handful of nations had that capability. In 2026, a Chinese startup publishes annotated satellite imagery of American force dispositions on social media, and Aviation Week runs the analysis before the first missile is fired.

Defence Security Asia captured what this means: sub-meter resolution imagery distinguishing individual aircraft types fundamentally alters the secrecy calculus of pre-strike deployments. You cannot mass two hundred aircraft across half a dozen bases and keep it secret when commercial satellites photograph every ramp twice a day and AI models label every airframe before an analyst finishes their coffee.

The age of hidden buildups is over. Every deployment is now observable, catalogued, and published in near real time by companies with no security clearance and no allegiance to anyone. The next war will not be planned in secret. It will be watched from orbit by everyone, in every language, simultaneously. open.substack.com/pub/shanakaans…
[image attached]
828 replies · 6.7K reposts · 24.1K likes · 4.7M views
Amit Piplani reposted
Guri Singh @heygurisingh ·
🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America. Amazon. Anthropic. Google. Meta. Microsoft. OpenAI.

All six use your conversations to train their models. By default. Without meaningfully asking. Here's what the paper actually found.

The researchers at Stanford HAI examined 28 privacy documents across these six companies: not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States. The results are worse than you think.

Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. For companies like Google, Meta, Microsoft, and Amazon (companies that also run search engines, social media platforms, e-commerce sites, and cloud services), your AI conversations don't stay inside the chatbot. They get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: you ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

It gets worse when you look at children's data. Four of the six companies appear to include children's chat data in their model training. Google announced it would train on teenager data with opt-in consent. Anthropic says it doesn't collect children's data but doesn't verify ages. Microsoft says it collects data from users under 18 but claims not to use it for training. Children cannot legally consent to this. Most parents don't know it's happening.

The opt-out mechanisms are a maze. Some companies offer opt-outs. Some don't. The ones that do bury the option deep inside settings pages that most users will never find. The privacy policies themselves are written in dense legal language that researchers (people whose job is reading these documents) found difficult to interpret.

And here's the structural problem nobody is addressing. There is no comprehensive federal privacy law in the United States governing how AI companies handle chat data. The patchwork of state laws leaves massive gaps. The researchers specifically call for three things: mandatory federal regulation, affirmative opt-in (not opt-out) for model training, and automatic filtering of personal information from chat inputs before they ever reach a training pipeline. None of those exist today.

The uncomfortable truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you are contributing to a training dataset. Your medical questions. Your relationship problems. Your financial details. Your uploaded documents. You are not the customer. You are the curriculum. And the companies doing this have made it as hard as possible for you to stop.
[image attached]
329 replies · 3.9K reposts · 8.6K likes · 1.7M views
Amit Piplani reposted
Ejaaz @cryptopunk7213 ·
so we now have: - OpenClaw - perplexity OpenClaw (perplexity computer) - anthropic openclaw (cowork) - miniature openclaw (picoclaw) - secure openclaw (ironclaw) - chinese openclaw (kimi k2.5) - enterprise openclaw (openai frontier) the future is 100% agentic. get the fuck on board.
Quoting Perplexity @perplexity_ai:

Introducing Perplexity Computer. Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end-to-end.

166 replies · 321 reposts · 3.7K likes · 327.1K views
Amit Piplani reposted
Priyanka Vergadia @pvergadia ·
MCP vs. API: it's about who the consumer is 🧠 vs 💻

We've spent decades mastering APIs. They are the plumbing of modern software: strict contracts between your code and a service. You know exactly what inputs to send (POST /payments) and exactly what outputs to expect. But when we started building AI agents, we hit a wall. Trying to hardcode hundreds of API integrations into an LLM is a nightmare. That's how the Model Context Protocol (MCP) was born. Let's break it down:

💻 The API (Application Programming Interface)
• Consumer: A developer.
• Integration: You write code (SDK/HTTP).
• Nature: Deterministic. You control the request shape.
• Question it answers: "How do I call this service?"

🤖 The MCP (Model Context Protocol)
• Consumer: An AI model / agent runtime.
• Integration: Standardized discovery (no custom glue code).
• Nature: Context-aware. The model "discovers" tools and schemas dynamically.
• Question it answers: "How does an agent find and use tools safely?"

Think of it this way: APIs are for hardcoding a specific path. MCP is for giving an agent a map so it can find its path. Check out the sketch below to see the flow side-by-side. 👇 #MCP #API #SoftwareArchitecture #AI #LLM #DevOps
[image attached]
19 replies · 90 reposts · 484 likes · 20.6K views
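The API-vs-MCP contrast in the post above can be sketched in a few lines of code. This is a toy, in-process illustration, not a real MCP SDK: the `ToolServer` class, its method names, and the `create_payment` tool are all invented for the example. Real MCP runs the same discover-then-call pattern over JSON-RPC, but the shape of the interaction is what matters here.

```python
# API style: the developer hardcodes the endpoint and request shape
# at build time. The caller must already know this function exists.
def call_payments_api(amount: int) -> dict:
    return {"endpoint": "POST /payments", "amount": amount, "status": "ok"}


# MCP style: the server advertises its tools with descriptions; the
# agent discovers them at runtime instead of shipping custom glue code.
class ToolServer:
    def __init__(self):
        self._tools = {}

    def register(self, name: str, description: str, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self) -> list[dict]:
        # Discovery step: the agent asks "what can you do?"
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name: str, **kwargs):
        return self._tools[name]["fn"](**kwargs)


server = ToolServer()
server.register("create_payment", "Create a payment for a given amount",
                lambda amount: {"amount": amount, "status": "ok"})

# The agent never saw "create_payment" at build time; it finds the tool
# through discovery, then invokes it by name.
discovered = server.list_tools()
result = server.call_tool(discovered[0]["name"], amount=42)
print(result["status"])  # ok
```

The hardcoded function and the registry do the same work; the difference is who binds to the contract, and when: the developer at compile time, or the agent at run time.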
Amit Piplani reposted
Google Cloud Tech @GoogleCloudTech ·
We’ve launched the Universal Commerce Protocol (UCP), a new open standard for agentic commerce that works across the shopping journey! UCP is compatible with A2A, AP2, and MCP, and was co-developed with partners like Etsy, Shopify, Wayfair, and Target → goo.gle/4pyt2p2
[image attached]
78 replies · 290 reposts · 1.5K likes · 138.4K views
Amit Piplani @apiplani ·
@AnthropicAI Models are fighting among themselves, and next up, agents built on those models will fight against each other!
0 replies · 0 reposts · 0 likes · 5 views
Anthropic @AnthropicAI ·
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
7.3K replies · 6.3K reposts · 55K likes · 33.6M views
Amit Piplani reposted
Aaron Levie @levie ·
Today, software is primarily built for people to use (directly or indirectly). But it's very clear that there will be trillions of agents in the future, executing every type of task for us imaginable. Agents will be deployed for coding, processing loans, reviewing insurance claims, executing financial transactions, acting as personal assistants, and every other known task in the economy. As a result, we're going to see a shift in who we have to increasingly build tools for.

So many new opportunities are rapidly emerging right now for building for agents. Agents are going to need seamless identities across platforms. They're going to need file systems and databases to store their work, sessions, and important data they're sharing. They're going to need tools for collaborating with people. They're going to need safe ways of spending or managing money. They're going to need computers to execute code and other tasks in. And so on.

In many cases, the tools and systems that human users are already working with will be the natural tools for these agents to leverage. There are many areas where the highways have already been built, and agents will ride right on top of those. In other cases, new capabilities will need to emerge due to the scale and change in use-case that agents represent. In either case, these tools need to be API-first, as agents will leverage them the way a developer or machine would have previously. CLIs and APIs are their native tongue.

The complex part is that building for agents introduces new challenges vs. building for people. They require far more oversight than people do, and they don't get the same right to privacy as people. They can't be held responsible for the work that they're doing; rather, the person that launches them into their task must be (for now). They don't quite know when they've run astray and can't execute the task at hand.

These are just a small set of the new complexities that need to be anticipated when building for agents. We're entering a completely new era of software development and infrastructure that will be built out. Wild times ahead.
69 replies · 62 reposts · 560 likes · 90.7K views
Amit Piplani reposted
Hasan Toor @hasantoxr ·
🚨 Alibaba just quietly dropped a vector database that destroys Pinecone, Chroma, and Weaviate. It's called Zvec, and it runs directly inside your application: no server, no config, no infrastructure costs. No Docker. No cloud bills. No DevOps nightmare.

Built on Proxima, Alibaba's battle-tested vector search engine powering their own production systems at scale. The numbers don't lie:
→ Searches billions of vectors in milliseconds
→ pip install zvec and you're searching in under 60 seconds
→ Dense + sparse vectors + hybrid search in a single call

And it runs everywhere:
→ Notebooks
→ Servers
→ Edge devices
→ CLI tools

100% open source. Apache 2.0 license. This is the vector DB the RAG community has been waiting for: production-grade performance without the production-grade headache. Link in the first comment 👇
[image attached]
142 replies · 524 reposts · 4.2K likes · 353.1K views
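Setting the specific product aside, the core idea of an embedded vector database (the index lives inside your process, like SQLite for vectors) can be sketched in pure Python. This is not the zvec API, which I have not verified; it is a generic brute-force cosine-similarity search that shows what "no server, no config" means in practice. Real engines replace the linear scan with approximate-nearest-neighbor indexes such as HNSW or IVF to reach billion-vector scale.

```python
import math


def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


class EmbeddedVectorIndex:
    """An in-process vector index: no server, no network, just memory."""

    def __init__(self):
        self._items = []  # (id, vector) pairs

    def add(self, item_id, vector):
        self._items.append((item_id, vector))

    def search(self, query, top_k=3):
        # Brute-force scan over every stored vector, highest score first.
        scored = [(cosine(query, v), item_id) for item_id, v in self._items]
        scored.sort(reverse=True)
        return [item_id for _, item_id in scored[:top_k]]


index = EmbeddedVectorIndex()
index.add("doc-a", [1.0, 0.0, 0.0])
index.add("doc-b", [0.0, 1.0, 0.0])
index.add("doc-c", [0.9, 0.1, 0.0])

print(index.search([1.0, 0.0, 0.0], top_k=2))  # ['doc-a', 'doc-c']
```

The whole "database" is a list and a loop: that is the embedded trade-off, maximal simplicity in exchange for doing the scaling work (indexing, persistence) inside your own process.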
Amit Piplani reposted
Claude @claudeai ·
Claude Code on desktop can now preview your running apps, review your code, and handle CI failures and PRs in the background. Here’s what's new:
871 replies · 2.5K reposts · 27.2K likes · 9.3M views
Amit Piplani reposted
Hiten Shah @hnshah ·
You don’t hire great people to fix a broken environment. You fix the environment first.
10 replies · 4 reposts · 75 likes · 4.3K views
Amit Piplani reposted
Vala Afshar @ValaAfshar ·
Complaining is not a strategy.
[image attached]
55 replies · 855 reposts · 5K likes · 602.8K views
Amit Piplani reposted
Abhishek @HeyAbhishek ·
ChatGPT can now create Flowcharts and Diagrams. No more wasting hundreds of hours creating visuals for presentations or research papers. Here’s how to do it for free in a few minutes:
[image attached]
120 replies · 677 reposts · 4.8K likes · 1.5M views