Vaibhav Sharma (@arbitrarybytes)
8.6K posts
Certified Azure Architect | Technology enthusiast | Space hobbyist
Bengaluru, India · Joined November 2009
535 Following · 306 Followers
Vaibhav Sharma retweeted
Google Gemma (@googlegemma)
A completely local agent that lives right inside your browser. Powered by Gemma 4 E2B and WebGPU, it uses native tool calling to:
🔍 Search browsing history
📄 Read and summarize pages
🔗 Manage tabs
100% local. No servers needed!
147 replies · 615 retweets · 6.3K likes · 618.5K views
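A note on the tool-calling mechanism mentioned above: the agent itself is browser-side JavaScript on WebGPU, but the declaration pattern is portable. Below is a minimal sketch of what a tab-management tool could look like in an OpenAI-style function-calling schema; the tool name, parameters, and dispatcher are hypothetical illustrations, not Gemma's actual interface.

```python
# Hypothetical tool declaration in an OpenAI-style function-calling schema.
# The real agent runs as browser-side JS; this only illustrates the pattern.
manage_tabs_tool = {
    "type": "function",
    "function": {
        "name": "manage_tabs",  # hypothetical tool name
        "description": "Open, close, or focus a browser tab.",
        "parameters": {
            "type": "object",
            "properties": {
                "action": {"type": "string", "enum": ["open", "close", "focus"]},
                "url": {"type": "string", "description": "Target URL for 'open'."},
            },
            "required": ["action"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    """Toy dispatcher: the model emits a structured call, the host executes it."""
    if tool_call["name"] == "manage_tabs":
        return f"tab action performed: {tool_call['arguments']['action']}"
    return "unknown tool"

# The model would emit JSON like this; the host routes it to real browser APIs.
print(dispatch({"name": "manage_tabs",
                "arguments": {"action": "open", "url": "https://example.com"}}))
```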
Vaibhav Sharma retweeted
Google Cloud Tech (@GoogleCloudTech)
Our official Agent Skills repository on @github is here! Skills are a simple, open format for giving agents new capabilities and expertise. Think of a skill as compact, agent-first documentation for a specific tech or task. Learn more → goo.gle/4eCsZqu #GoogleCloudNext
[image]
49 replies · 748 retweets · 5.4K likes · 448K views
Vaibhav Sharma retweeted
Qwen (@Alibaba_Qwen)
LM Performance: With only 27B parameters, Qwen3.6-27B outperforms Qwen3.5-397B-A17B (397B total / 17B active, ~15x larger!) on every major coding benchmark, including SWE-bench Verified (77.2 vs. 76.2), SWE-bench Pro (53.5 vs. 50.9), Terminal-Bench 2.0 (59.3 vs. 52.5), and SkillsBench (48.2 vs. 30.0). It also surpasses all peer-scale dense models by a wide margin.
[image]
11 replies · 35 retweets · 609 likes · 142.9K views
Vaibhav Sharma retweeted
ClaudeDevs (@ClaudeDevs)
Caching is critical for customers to lower both costs and TTFT (time to first token). We're launching a new dashboard in the Claude Developer Console to increase visibility and help customers optimize their usage. Check it out here: platform.claude.com/usage/cache
[image]
87 replies · 178 retweets · 2.7K likes · 349.3K views
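As context for what the dashboard measures: prompt caching works by marking a large, stable prefix of the prompt so the server can reuse it across requests. Here is a minimal sketch using the Anthropic Python SDK's cache_control marker; the model name and document are placeholders, and the dashboard itself is separate from this API surface.

```python
# Minimal sketch of prompt caching with the Anthropic Python SDK: mark a large,
# stable system prefix as cacheable so repeat requests skip reprocessing it.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reference_doc = open("reference.md").read()  # placeholder: any big, stable prefix

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=1024,
    system=[{
        "type": "text",
        "text": reference_doc,
        "cache_control": {"type": "ephemeral"},  # cache everything up to here
    }],
    messages=[{"role": "user", "content": "Summarize section 3."}],
)

# These usage fields are what a cache dashboard aggregates over time.
print(response.usage.cache_creation_input_tokens,
      response.usage.cache_read_input_tokens)
```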
Vaibhav Sharma retweeted
Stitch by Google (@stitchbygoogle)
Today, we’re open-sourcing the draft specification for DESIGN.md, so it can be used across any tool or platform. We’re also adding new capabilities. DESIGN.md lets you easily export and import your design rules from project to project. Instead of guessing intent, agents know exactly what a color is for and can even validate their choices against WCAG accessibility rules. Watch David East break down this shared visual language in action👇. New capabilities and links in 🧵
198 replies · 2K retweets · 18.1K likes · 6.7M views
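The WCAG validation mentioned above is mechanical enough to sketch. This is not Stitch's implementation, just the standard WCAG 2.x contrast-ratio math an agent could run against a color pair declared in DESIGN.md:

```python
# WCAG 2.x contrast-ratio check (standard formula; not Stitch's actual code).

def srgb_to_linear(c: float) -> float:
    """Convert one sRGB channel in [0, 1] to linear light, per WCAG 2.x."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a #RRGGBB color."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio; always >= 1."""
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# WCAG AA requires >= 4.5:1 for normal body text.
print(round(contrast_ratio("#999999", "#ffffff"), 2))  # 2.85 -> fails AA
```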
Vaibhav Sharma retweeted
Google DeepMind (@GoogleDeepMind)
Meet Gemma 4: our new family of open models you can run on your own hardware. They're built for advanced reasoning and agentic workflows, and we're releasing them under an Apache 2.0 license. Here's what's new 🧵
[GIF]
371 replies · 1.2K retweets · 8.8K likes · 3.9M views
Vaibhav Sharma retweeted
Google (@Google)
We just released Gemma 4 — our most intelligent open models to date. Built from the same world-class research as Gemini 3, Gemma 4 brings breakthrough intelligence directly to your own hardware for advanced reasoning and agentic workflows. Released under a commercially permissive Apache 2.0 license so anyone can build powerful AI tools. 🧵↓
737 replies · 3.1K retweets · 20.6K likes · 7.7M views
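On the "run on your own hardware" point: the usual path for open-weights releases is Hugging Face transformers. A minimal sketch follows; the model id google/gemma-4 is a placeholder guess, so check the actual repository names on the Hub before running.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# "google/gemma-4" is a placeholder model id, not a confirmed repo name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-4",  # placeholder: substitute the real Hub id
    device_map="auto",       # use a GPU if one is available
)

out = generator(
    [{"role": "user", "content": "List three uses of a local LLM."}],
    max_new_tokens=128,
)
# For chat-style input the pipeline returns the full message list;
# the last entry is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```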
Vaibhav Sharma retweeted
Andrej Karpathy (@karpathy)
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks, and it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by the LLM via various CLIs to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K replies · 7K retweets · 57.9K likes · 20.8M views
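A toy sketch of the ingest-to-compile step described above, assuming an OpenAI-style chat-completions client. This is an illustration of the idea, not Karpathy's actual tooling; paths, model, and prompt are all placeholders.

```python
# Toy "wiki compiler": turn each new document in raw/ into a markdown article.
# Illustration only; model name, paths, and prompt are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
RAW, WIKI = Path("raw"), Path("wiki")
WIKI.mkdir(exist_ok=True)

for doc in sorted(RAW.glob("*.md")):
    target = WIKI / doc.name
    if target.exists():
        continue  # incremental: compile only documents not yet in the wiki
    resp = client.chat.completions.create(
        model="gpt-4.1",  # any capable model
        messages=[
            {"role": "system", "content": (
                "Summarize this source document as a wiki article in markdown. "
                "Add [[backlinks]] to the key concepts it mentions.")},
            {"role": "user", "content": doc.read_text()},
        ],
    )
    target.write_text(resp.choices[0].message.content)
```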
Vaibhav Sharma retweeted
Cheng Lou (@_chenglou)
My dear front-end developers (and anyone who's interested in the future of interfaces): I have crawled through the depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): a fast, accurate, and comprehensive userland text-measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurement and reflow.
1.3K replies · 8.3K retweets · 65.5K likes · 23.8M views
Vaibhav Sharma retweeted
Google Research (@GoogleResearch)
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI
[GIF]
1K replies · 5.8K retweets · 39.1K likes · 19.3M views
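The linked blog has TurboQuant's actual method, which is not reproduced here. As a baseline for what KV-cache quantization means at all, here is naive per-channel int8 quantization of a cached key/value tensor in NumPy:

```python
# Naive per-channel int8 quantization of a KV-cache tensor (illustration of the
# general idea only; NOT TurboQuant's algorithm).
import numpy as np

kv = np.random.randn(32, 128, 64).astype(np.float32)  # (heads, seq_len, head_dim)

# One scale per (head, head_dim) channel, computed across the sequence axis.
scale = np.abs(kv).max(axis=1, keepdims=True) / 127.0
q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)  # 4x smaller than fp32

# Dequantize when the cache is read back at attention time.
dequant = q.astype(np.float32) * scale
print("max abs reconstruction error:", float(np.abs(kv - dequant).max()))
```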
Vaibhav Sharma retweeted
Andrej Karpathy (@karpathy)
Software horror: litellm PyPI supply chain attack.

A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown so averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Quoting Daniel Hnyk (@hnykda):

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and to self-replicate. Link below.

1.4K replies · 5.4K retweets · 28.1K likes · 66.5M views
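The litellm_init.pth detail is worth unpacking: any line in a site-packages .pth file that begins with `import` is executed by Python's site module at every interpreter startup, which is what makes it a convenient persistence hook for a poisoned package. A small defensive sketch (mine, not from the thread) that flags such files:

```python
# Flag .pth files in site-packages whose lines execute code at interpreter
# startup (the mechanism the poisoned litellm release reportedly abused).
import site
from pathlib import Path

for dir_ in site.getsitepackages() + [site.getusersitepackages()]:
    root = Path(dir_)
    if not root.is_dir():
        continue
    for pth in root.glob("*.pth"):
        for line in pth.read_text(errors="replace").splitlines():
            # site.py executes .pth lines that begin with "import".
            if line.startswith(("import ", "import\t")):
                print(f"{pth}: runs code on startup -> {line[:80]}")
```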
Vaibhav Sharma retweeted
Dori for Zerodha Kite (@arbitrary_bytes)
Trading on @zerodha? It’s time to upgrade your cockpit 🚀 Meet Dori for Kite - the ultimate Chrome extension that transforms your Kite holdings into a pro trader's dashboard. Visualize P&L drivers, spot concentration risk, and get actionable signals📈 #Nifty50 #ZerodhaKite
[image]
0 replies · 1 retweet · 1 like · 34 views
Bank of Baroda (@bankofbaroda)
@arbitrarybytes Dear Sir/Madam, we will surely resolve the issue. Kindly follow us, so that we can Direct Message you in order to protect your privacy and maintain confidentiality.
1 reply · 0 retweets · 0 likes · 51 views
Vaibhav Sharma (@arbitrarybytes)
. @bankofbaroda @BankofBarodaCEO Your bank representatives are intentionally delaying the closure of my car loan account. Repeated tactics have been employed to delay the closure. Please assist on priority. Being a PSU shouldn’t be an excuse to harass customers. Please help.
1 reply · 0 retweets · 1 like · 125 views
Vaibhav Sharma (@arbitrarybytes)
The different types of #MachineLearning
Supervised ML: training data includes both feature values (x1, x2, ...) and known label values (y).
Unsupervised ML: training data consists only of feature values (x1, x2, ...) without any known labels.
Source > learn.microsoft.com/en-us/training…
[image]
0 replies · 0 retweets · 0 likes · 92 views
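A quick illustration of the distinction with scikit-learn toy data (my example, not from the linked Microsoft Learn module):

```python
# Supervised vs. unsupervised on the same feature matrix (scikit-learn toy data).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=200, centers=3, random_state=0)  # features + labels

# Supervised: training uses both feature values (x1, x2) and known labels (y).
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised prediction:", clf.predict(X[:1]))

# Unsupervised: training sees only the features; cluster structure is inferred.
km = KMeans(n_clusters=3, n_init="auto", random_state=0).fit(X)
print("unsupervised cluster assignment:", km.labels_[:1])
```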
Vaibhav Sharma (@arbitrarybytes)
@HomeLoansByHDFC @HDFC_Bank how come your home loan onboarding works without any issue, but the moment I try to prepay part of my principal, all sorts of payment issues occur? Reference: 20115710. No one follows up despite repeated emails and call-center calls. #pathetic #HDFC
2 replies · 0 retweets · 0 likes · 61 views