rajat

160 posts


@RajatNotes

aspiring ML Systems/Infra Engineer | EE undergrad

DTU Delhi · Joined February 2025
48 Following · 6 Followers
rajat
rajat@RajatNotes·
@bhaktSenapati What do you mean by "quantity"? Why do you say one is part of another?
1 reply · 0 reposts · 0 likes · 11 views
The Saintly King
The Saintly King@bhaktSenapati·
@RajatNotes One in quality, different in quantity. This is what it means. The other understanding is wrong or incomplete.
1 reply · 0 reposts · 0 likes · 44 views
rajat
rajat@RajatNotes·
@barneyxbt If I see this gif one more time
0 replies · 0 reposts · 0 likes · 23 views
rajat reposted
rain
rain@redactedrain·
I lied. I love the game. I love the hustle. I love being doubted. I love the uncertainty. I love the feeling of knowing it's just a matter of time.
39 replies · 2K reposts · 11.4K likes · 256.8K views
Logan Kilpatrick
Logan Kilpatrick@OfficialLoganK·
Tomorrow we will unveil the all new vibe coding experience in @GoogleAIStudio, the team has spent 4 months rebuilding it all from scratch and smoothing out rough edges to help everyone bring their ideas to life. This is a big step forward, but just the start : )
481 replies · 336 reposts · 6K likes · 389.4K views
Pratyaksh Patel
Pratyaksh Patel@baldwin_IVth·
I'm looking to contribute to some niche ML/stats/DL open-source projects, anything works. Do any of you have leads? Appreciated. Thanks.
1 reply · 1 repost · 37 likes · 2.7K views
rajat reposted
Peter Mmuo
Peter Mmuo@PeterMmuo·
@ns123abc How @sama feels being the supreme tech villain
[image]
3 replies · 5 reposts · 66 likes · 10.2K views
Shubhang Sinha
Shubhang Sinha@OptimalHustler·
@baldwin_IVth DeepChem is very research oriented. It also comes up in GSoC every year. One thing: they mostly operate through Discord, not the GitHub issues tab. Also, they have a models-wishlist issue; check that out. Might be a great place for you.
1 reply · 0 reposts · 3 likes · 172 views
rajat reposted
Aritra 🤗
Aritra 🤗@ariG23498·
When you run a @PyTorch model on a GPU, the actual work is executed through kernels. These are low-level, hardware-specific functions designed for GPUs (or other accelerators). If you profile a model, you'll see a sequence of kernel launches. Between these launches, the GPU can sit idle, waiting for the next operation. A key optimization goal is therefore to minimize gaps between kernel executions and keep the GPU fully utilized.

One common approach is `torch.compile`, which fuses multiple operations into fewer kernels, reducing overhead and improving utilization. Another approach is to write custom kernels tailored to specific workloads (e.g., optimized attention or fused ops). However, this comes with significant challenges:
> requires deep expertise in kernel writing
> installation hell
> integration with the model is non-trivial

To address this, @huggingface introduces the `kernels` library. With it you can:
> build custom kernels (with the help of a template)
> upload them to the Hub (like models or datasets)
> integrate them into models with ease

Let's take a look at how the transformers team uses the kernels library to integrate it into existing models. (more in the thread)
19 replies · 88 reposts · 1.2K likes · 82.4K views
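The fusion idea in the thread above can be sketched in plain Python, no GPU required: treat each full pass over the data as one "kernel launch", and note how fusing three elementwise ops into a single pass eliminates the intermediate buffers between launches. This is the kind of rewrite `torch.compile` performs on real GPU kernels; the function names here are purely illustrative and not from any library.

```python
def unfused(xs):
    # Three separate "kernels", each a full pass that writes an
    # intermediate buffer -- analogous to three GPU kernel launches
    # with idle gaps (and memory traffic) between them.
    a = [x * 2 for x in xs]          # kernel 1: multiply
    b = [x + 1 for x in a]           # kernel 2: add
    return [max(x, 0.0) for x in b]  # kernel 3: ReLU

def fused(xs):
    # One "kernel" computing the same result in a single pass --
    # the shape of the rewrite torch.compile applies automatically.
    return [max(x * 2 + 1, 0.0) for x in xs]

print(fused([-3.0, 0.5]))  # [0.0, 2.0], same as unfused([-3.0, 0.5])
```

Both versions compute identical results; the fused one simply does less bookkeeping per element, which is where the utilization win comes from on real hardware.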
rajat
rajat@RajatNotes·
ngl if @ChatGPTapp releases a News section in their app, summarising top news of the day, week, and month, it's GoJover for an entire industry, @sama @karpathy @greg
0 replies · 0 reposts · 1 like · 18 views
rajat
rajat@RajatNotes·
@CuriosityonX Wdym by "BREAKING"? Picking a random ahh article from years ago that talks bs to create irrelevant hype. Bait
0 replies · 0 reposts · 0 likes · 38 views
Curiosity
Curiosity@CuriosityonX·
BREAKING🚨: Your consciousness can connect with the whole Universe, groundbreaking research reveals
[2 images]
322 replies · 540 reposts · 5.1K likes · 410.2K views
Nishtha Singh
Nishtha Singh@pikachiuiu·
Non-AI ML students should consider themselves lucky that they do not have to learn these types of formulas
[image]
40 replies · 8 reposts · 232 likes · 31.4K views
hayden
hayden@haydendevs·
what is the EE equivalent of a react todo list
46 replies · 1 repost · 179 likes · 20.2K views
rajat reposted
Xia
Xia@xiaonweb·
Today I uninstalled Antigravity. Not because I suddenly decided I hate AI tools, or because I woke up wanting to write everything by hand like it is 2006 again (I hate this to an extent now), but because after weeks of trying to actually rely on it for real work I realized I was spending more time fighting the tool than writing code. At some point it becomes genuinely absurd when the thing that is supposed to accelerate your workflow keeps interrupting it every five minutes with some new limitation, broken tool call, or a quiet little reminder that the feature you thought you were using is actually sitting behind an ULTRA plan.

Half the time the tool calls did not even work. You would ask it to run something, or fetch something, or analyze a file, and it would confidently say it was doing it, only for absolutely nothing to happen. Or it would hallucinate that a tool succeeded when it clearly did not, or it would suddenly switch models mid-conversation and the reasoning would drop so hard that it felt like you had handed your keyboard to someone who skimmed the documentation once three months ago.

And the weirdest part is that none of this feels accidental anymore; it feels designed. Google made people switch to their VS Code fork by offering free Opus and other cooler models. Let's be honest, no one used Antigravity for their Gemini slop. Suddenly the responses slow down, the context window shrinks, the tool calls stop working as reliably, and you quietly get pushed back to the default Gemini tier that feels like the AI equivalent of running a modern game on integrated graphics, where everything technically runs but nothing feels smooth enough to actually enjoy.

And this is where Google has been especially frustrating lately, because the entire ecosystem is starting to feel like a carefully engineered funnel where the free tier exists mostly to demonstrate how good the paid version might be, which is a very different thing from actually giving people a usable tool. You open the model list and you see the interesting ones sitting there like museum exhibits behind glass. You can look at them. You can occasionally poke them. But the moment you try to rely on them for real development work, the system starts nudging you toward the same solution every single time, which is the little upgrade button that promises things will magically work better once you start paying. And maybe they will. But the experience leading up to that moment feels so intentionally constrained that it starts leaving a bad taste in your mouth, because instead of feeling like you are using a powerful piece of software you start feeling like you are trapped inside a product demo that never quite ends.

Which is why today I just removed Antigravity completely and went back to writing things by hand. And yes, it absolutely takes longer, and yes, I am typing more boilerplate than I probably should in 2026, but at least the code I write actually runs, the tools I call actually exist, and there is something strangely refreshing about a development workflow where the only thing between you and your program is your own ability to write it, instead of a rotating stack of rate limits, model downgrades, and half-working integrations that constantly remind you that the good version of the tool is apparently waiting for you somewhere behind a subscription tier.
[image]
106 replies · 53 reposts · 638 likes · 82.7K views
rajat reposted
siddharth
siddharth@buildwithsid·
@ramxcodes literally hate this gif/video trend going on recently, it's all just slop
0 replies · 1 repost · 1 like · 67 views
rajat reposted
☁️
☁️@gardenofcolours·
light sparkles on water ✨
[GIF]
11 replies · 382 reposts · 1.4K likes · 30.7K views
Nalin
Nalin@nalinrajput23·
real?
[image]
188 replies · 336 reposts · 7.1K likes · 306.4K views
Science girl
Science girl@sciencegirl·
Brain cells making connections
45 replies · 312 reposts · 1.4K likes · 48.8K views
rajat
rajat@RajatNotes·
@DevVora4 @PrinSciAdvOff Yeah, there needs to be a proper plan with targets and expectations. Just teaching kids "AI" and making "AI Universities" is just desperation for relevancy.
0 replies · 0 reposts · 2 likes · 16 views
Dev vora
Dev vora@DevVora4·
@RajatNotes @PrinSciAdvOff Yeah, all I am saying is if we want to lead, we'll have to think beyond the current facade. Generating data and training mindlessly won't take us anywhere.
1 reply · 0 reposts · 0 likes · 15 views
Office of Principal Scientific Adviser to the GoI
As part of the on-going AI Policy White Paper Series, the Office of the Principal Scientific Adviser to the Government of India releases a white paper on "Advancing Indigenous Foundation Models".

The versatility of Foundation Models makes them a critical layer of today's AI ecosystem and a key area for innovation in India. Therefore, developing indigenous foundation models is a strategic priority. India's objective is to harness foundation models for inclusive growth and public good, while ensuring they are governed in a manner consistent with the country's values, legal framework, and security interests.

This white paper provides an understanding of India's approach to advancing indigenous foundation models through public-private collaboration, and to governing these systems in a way that supports trust, accountability, and responsible adoption. The White Paper also details India's approach, which is centred on building indigenous capability across the foundation-model stack. Rather than relying on a single model, India is developing an ecosystem that combines (i) shared compute access, (ii) India-centric data and model repositories, and (iii) multiple model-building efforts across text, speech, multimodal, and sectoral systems.

Read the White Paper here: psa.gov.in/CMS/web/sites/…
26 replies · 243 reposts · 1.2K likes · 380.7K views