RAJDEEP SHARMA
@buggydebugger
2.6K posts

“it is what it is” - philosophy for life | Interested in positive-sum games | People person around select kind of people. | [email protected]

Bengaluru, India · Joined August 2010
847 Following · 69 Followers

RAJDEEP SHARMA retweeted
Atul Kumar @atulkumarzz
Stop burning tokens on Claude Code. Use this instead 👇

A free GitHub repo (80K⭐) that turns your CLI into a high-performance AI coding system.

Link → github.com/affaan-m/every…

Why it’s different:
→ Token optimization: smart model selection + lean prompts = lower cost
→ Memory persistence: auto-save/load context across sessions (no more losing the thread)
→ Continuous learning: turns past work into reusable skills
→ Verification loops: built-in evals so code actually works
→ Subagent orchestration: tames large codebases with iterative retrieval

Most people think Claude struggles with complex repos. It doesn’t. They’re just using the wrong setup. This fixes it.

Bookmark this for your AI stack. ♻️ #AI #Claude #AIAgents #LLM #GenAI #DevTools
[image]
29 replies · 75 retweets · 483 likes · 42.9K views
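The "memory persistence" bullet above is easy to picture in code. A minimal, hypothetical sketch (the file name and message format are invented here, not the repo's actual implementation):

```python
# Hypothetical sketch of session memory persistence: save the conversation
# context to disk and reload it when a new CLI session starts.
import json
import pathlib

STATE = pathlib.Path("session_context.json")  # invented file name

def save_context(messages: list) -> None:
    """Persist the running conversation so the next session can resume it."""
    STATE.write_text(json.dumps(messages))

def load_context() -> list:
    """Restore the previous session's context, or start fresh."""
    return json.loads(STATE.read_text()) if STATE.exists() else []

save_context([{"role": "user", "content": "refactor the parser"}])
resumed = load_context()  # a new process would pick up the same thread
```

The point of the design is that the context lives outside the process, so "losing the thread" between sessions stops being possible.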
RAJDEEP SHARMA retweeted
Peter Holderrieth @peholderrieth
We are also releasing self-contained lecture notes that explain flow matching and diffusion models from scratch. This goes from "zero" to the state-of-the-art in modern Generative AI. 📖 Read the notes here: arxiv.org/abs/2506.02070 Joint work with @EErives40101.
Peter Holderrieth @peholderrieth

🚀 MIT Flow Matching and Diffusion Lecture 2026 Released (diffusion.csail.mit.edu)!

We just released our new MIT 2026 course on flow matching and diffusion models! We teach the full stack of modern AI image, video, and protein generators - theory and practice.

We include:
📺 Videos: step-by-step derivations
📝 Notes: mathematically self-contained lecture notes
💻 Coding: hands-on exercises for every component

We fully reworked last year’s iteration and added new topics: latent spaces, diffusion transformers, and building language models with discrete diffusion models.

Everything is available here: diffusion.csail.mit.edu

A huge thanks to Tommi Jaakkola for his support in making this class possible and to Ashay Athalye (MIT SOUL) for the incredible production! Was fun to do this with @RShprints! #MachineLearning #GenerativeAI #MIT #DiffusionModels #AI

38 replies · 650 retweets · 5.6K likes · 463K views
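Since the thread is about flow matching, here is a minimal sketch of the conditional flow matching objective such courses build on, assuming the straight-line (rectified-flow) interpolation path; the zero-velocity "model" is a toy stand-in for a real network:

```python
# Minimal sketch of the conditional flow matching loss.
# Path: x_t = (1-t)*x0 + t*x1, with target velocity x1 - x0.
import numpy as np

rng = np.random.default_rng(0)

def cfm_loss(model, x1):
    """Regress the model's predicted velocity onto the path's velocity."""
    x0 = rng.standard_normal(x1.shape)      # noise sample
    t = rng.uniform(size=(x1.shape[0], 1))  # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1              # point on the straight-line path
    target_v = x1 - x0                      # velocity of that path
    pred_v = model(xt, t)
    return np.mean((pred_v - target_v) ** 2)

# Toy "model" that always predicts zero velocity.
zero_model = lambda xt, t: np.zeros_like(xt)

data = rng.standard_normal((256, 2)) + 3.0  # toy 2-D data centered at (3, 3)
loss = cfm_loss(zero_model, data)
```

Training a real generator means minimizing this loss over a neural velocity network, then integrating the learned velocity field from noise to data at sampling time.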
RAJDEEP SHARMA retweeted
TFTC @TFTC21
Jensen Huang: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. This is no different than a chip designer who says 'I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools.'"
487 replies · 610 retweets · 7.9K likes · 2.9M views
RAJDEEP SHARMA retweeted
Suryansh Tiwari @Suryanshti777
🤯 holy shit... someone just dropped the open-source version of Stripe/Ramp-level AI coding agents.

this isn’t a demo. this is the actual internal architecture.

→ agents that run in isolated cloud sandboxes
→ full repo + issue context injected
→ spawn subagents to work in parallel
→ fix code, run tests, commit changes
→ open PRs automatically

trigger it from Slack, Linear, or GitHub… and it just works. no prompt babysitting. no fragile workflows. just autonomous execution.

this is what “AI engineer” actually means in 2026.

repo in comments 👇
[image]
13 replies · 22 retweets · 156 likes · 10.3K views
RAJDEEP SHARMA retweeted
Simplifying AI @simplifyinAI
You can now run ElevenLabs-level voice cloning completely offline 🤯

LuxTTS is a local TTS model that clones voices from 3 seconds of audio at insane speeds. It runs at 150x real-time without you ever having to pay a subscription.

- Works perfectly on both CPU and GPU
- Takes up just 1GB of VRAM
- Outputs crisp 48kHz audio instead of standard 24kHz

100% Open Source.
[image]
18 replies · 104 retweets · 825 likes · 41.8K views
RAJDEEP SHARMA retweeted
Vaibhav Sisinty @VaibhavSisinty
Someone just open-sourced a global intelligence system. The kind governments spend millions building. Yes, seriously.

Meet Crucix. An always-on intelligence system that watches the world… and texts you when something actually matters.

Every 15 minutes, it scans 26 live data streams and fuses them into a single Jarvis-style command center.

Here’s what it’s tracking in real-time:
→ NASA satellite fire detection
→ Fed economic signals
→ Markets: crypto, oil, commodities
→ Sanctions + watchlists
→ Maritime vessel tracking
→ Global news via GDELT + RSS
→ Global flight movement
→ Radiation levels
→ Conflict zone activity
→ Sentiment from 17 Telegram intel channels

Now the wild part: it’s not just passive, it talks back. Ping it on Telegram or Discord:
→ /brief → get a full intelligence rundown
→ /sweep → trigger a fresh global scan

It responds like an analyst on demand. The kind of system usually locked behind six-figure government contracts… just got open-sourced. MIT licensed.
[image]
3 replies · 6 retweets · 54 likes · 2.5K views
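The /brief and /sweep commands above amount to simple command dispatch. A hypothetical sketch of how such a bot might route them (handler names and reply strings are invented here, not Crucix's actual code):

```python
# Hypothetical command router for a chat-ops bot; not Crucix's real code.
def brief() -> str:
    return "Intelligence rundown: 26 streams nominal."

def sweep() -> str:
    return "Global scan triggered."

COMMANDS = {"/brief": brief, "/sweep": sweep}

def handle(message: str) -> str:
    """Route an incoming chat message to its command handler."""
    parts = message.strip().split()
    if not parts:
        return "Unknown command."
    handler = COMMANDS.get(parts[0])
    return handler() if handler else "Unknown command."
```

A real deployment would register `handle` as the message callback of a Telegram or Discord bot library; the dispatch-table shape stays the same.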
RAJDEEP SHARMA retweeted
Thomas Wolf @Thom_Wolf
This is really cool. It got me thinking more deeply about personalized RL: what’s the real point of personalizing a model in a world where base models can become obsolete so quickly?

The reality in AI is that new models ship every few weeks, each better than the last. And the pace is only accelerating, as we see on the Hugging Face Hub. We are not far from better base models dropping daily.

There’s a research gap in RL here that almost no one is working on. Most LLM personalization research assumes a fixed base model, but very few ask what happens to that personalization when you swap the base model. Think about going from Llama 3 to Llama 4. All the tuned preferences, reward signals, and LoRAs are suddenly tied to yesterday’s model. As a user or a team, you don’t want to reteach every new model your preferences. But you also don’t want to be stuck on an older one just because it knows you.

We could call this "RL model transferability": how can an RL trace, a reward signal, or a preference representation trained on model N be distilled, stored, and automatically reapplied to model N+1 without too much user involvement?

We solved this for SFT, where a training dataset can be stored and reused to train a future model. We also tackled a version of it in RLHF phases, but it remains unclear how to do this more generally with RL deployed in the real world.

There are some related threads (RLTR for transferable reasoning traces, P-RLHF and PREMIUM for model-agnostic user representations, HCP for portable preference protocols), but the full loop seems under-studied to me. Some of these questions are about off-policy learning, but others are about capabilities versus personalization: which of the old customizations and fixes does the new model already handle out of the box, and which ones are genuinely user- or team-specific and will never be solved by default? For now you would store those in a skill, but RL allows going beyond the written-guidance level.

I have surely missed some work, so please post any good work you’ve seen on this topic in the comments.
Ronak Malde @rronak_

This paper is so good I almost didn’t want to share it. Ignore the OpenClaw clickbait: OPD + RL on real agentic tasks with significant results is very exciting, and it moves us away from needing verifiable rewards. Authors: @YinjieW2024, Xuyang Chen, Xialong Jin, @MengdiWang10, @LingYang_PU
33 replies · 64 retweets · 739 likes · 117.8K views
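The "distilled, stored, and reapplied" loop Wolf describes presupposes some model-agnostic record of the preference itself. A hypothetical sketch of such a portable record (the schema is invented for illustration, not from any of the cited papers):

```python
# Hypothetical model-agnostic preference record: text in, text out,
# so it can outlive the base model it was collected on.
from dataclasses import dataclass, asdict

@dataclass
class PreferenceRecord:
    prompt: str
    chosen: str        # response the user preferred
    rejected: str      # response the user rejected
    source_model: str  # model N the preference was collected on

prefs = [PreferenceRecord("summarize tersely", "3 bullet points",
                          "two pages of prose", "model-N")]

# Because records reference text rather than model weights, they can be
# replayed (e.g. as DPO training pairs) against model N+1.
reusable = [asdict(p) for p in prefs]
```

This is the SFT-style answer (store data, retrain); the open question in the thread is how to do the same for reward signals and on-policy RL traces.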
RAJDEEP SHARMA retweeted
Tips Excel @gudanglifehack
🚨 Anthropic dropped a FREE 33-page playbook revealing Claude's very own cheat code: the 'Skills' folder.

Spend 30 minutes building it, and you’ll never have to explain your process again. Top-tier users don't just type commands; they build systems.

Grab your free copy of Anthropic's official guide to building Claude skills right here: resources.anthropic.com/hubfs/The-Comp…
[image]
15 replies · 386 retweets · 3.1K likes · 466.9K views
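For context on the 'Skills' folder: an Anthropic skill is a folder containing a SKILL.md file whose frontmatter names and describes it. A minimal sketch (the release-notes skill below is a made-up example, not taken from the playbook):

```markdown
---
name: release-notes
description: Drafts release notes from a list of merged PRs in our house style.
---

# Release Notes

When asked for release notes:
1. Group changes into Features, Fixes, and Internal.
2. Keep each bullet to one line, in past tense.
```

Roughly, the frontmatter `description` tells Claude when the skill applies; the body holds the instructions it follows once loaded.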
RAJDEEP SHARMA retweeted
Utkarsh Sharma @techxutkarsh
Instead of watching a 2-hour movie, watch this Claude FULL COURSE (Build & Automate Anything):
58 replies · 1.8K retweets · 8.1K likes · 1.1M views
RAJDEEP SHARMA retweeted
v. Jatin @JatinTweets_
Better than original tbh ❤️
404 replies · 2.3K retweets · 21K likes · 1.5M views
RAJDEEP SHARMA retweeted
The Sigma Mindset @thesigmamindset
If you want to beat procrastination, use this 3-second rule ‼️‼️
69 replies · 2.3K retweets · 12K likes · 355K views
RAJDEEP SHARMA retweeted
Suhail @Suhail
Probably 90% fake and most links are gone.
[image]
160 replies · 196 retweets · 4.3K likes · 2.1M views
RAJDEEP SHARMA @buggydebugger
Suicide is the clearest symbol of society’s failure
0 replies · 0 retweets · 1 like · 15 views
RAJDEEP SHARMA retweeted
Ethan Evans @EthanEvansVP
I screwed over one of my top engineers when I was a Senior Manager at Amazon. He felt betrayed, found another job, and resigned. This is a dark spot on my career, so learn from my mistake. Here’s the story:
176 replies · 705 retweets · 10.9K likes · 2.3M views
RAJDEEP SHARMA retweeted
Millie Marconi @MillieMarconnni
R.I.P. Gartner. Now you don’t need expensive analyst subscriptions anymore.

You can generate full industry reports using any LLM (ChatGPT, Claude, DeepSeek, Gemini, Qwen3) and public data.

Here’s the prompt that turns any LLM into a full-stack market research analyst for free:
[image]
64 replies · 391 retweets · 3K likes · 571.8K views
RAJDEEP SHARMA retweeted
Elon Musk @elonmusk
[media-only post, no text]
4.6K replies · 10.7K retweets · 100.1K likes · 26.2M views
RAJDEEP SHARMA retweeted
Chauhan @Platypuss_10
Here is a video showing our Air Defence Systems intercepting multiple Pakistani missiles in Jammu! Watch the full video!! Motto of AAD - Annihilate the Airborne Enemy!💪🇮🇳
312 replies · 2.2K retweets · 13.4K likes · 772.7K views