rajat
160 posts

rajat
@RajatNotes
aspiring ML Systems/Infra Engg. | EE undergrad
DTU Delhi · Joined February 2025
48 Following · 6 Followers

@bhaktSenapati @RajatNotes Analogies like "whole sunshine" vs "single ray" fall short when it comes to Brahman, because Brahman is not matter that can be compartmentalized into a whole and parts.

Atma is a part of Brahman and not absolutely equal to it. Absolute equivalence is a wrong understanding of the Vedic scriptures.
Sam Altman @sama
absolute equivalence of brahman and atman

@bhaktSenapati What do you mean by "quantity"?
Why do you say one is part of another?

@RajatNotes One in quality, different in quantity. This is what it means. The other understanding is wrong or incomplete.

realizing that crypto was merely the introduction to tokenization and was never meant to have any real use case
Tokens on Solana @tokens
JUST IN: Hyperliquid now trades more oil, gold, and silver than crypto.
rajat retweeted

@OfficialLoganK @GoogleAIStudio Please improve the Gemini app experience

Tomorrow we will unveil the all new vibe coding experience in @GoogleAIStudio, the team has spent 4 months rebuilding it all from scratch and smoothing out rough edges to help everyone bring their ideas to life.
This is a big step forward, but just the start : )

@OptimalHustler @baldwin_IVth What does deepchem work on? Do you have any extra intel?
rajat retweeted

@baldwin_IVth DeepChem is very research oriented. It also takes part in GSoC every year. One thing: they mostly operate through Discord rather than the GitHub issues tab. Besides, they have a models-wishlist issue; check that out. Might be a great place for you.
rajat retweeted

When you run a @PyTorch model on a GPU, the actual work is executed through kernels. These are low-level, hardware-specific functions designed for GPUs (or other accelerators).
If you profile a model, you'll see a sequence of kernel launches. Between these launches, the GPU can sit idle, waiting for the next operation. A key optimization goal is therefore to minimize gaps between kernel execution and keep the GPU fully utilized.
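If you want to see that yourself, here's a minimal profiling sketch (a toy two-layer model, assumed purely for illustration):

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Toy two-layer model, illustrative only.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
).cuda()
x = torch.randn(64, 1024, device="cuda")

# Record CPU-side launches and the CUDA kernels they trigger.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    model(x)

# Each row aggregates one kind of op/kernel; time between kernels is idle GPU.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```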
One common approach is `torch.compile`, which fuses multiple operations into fewer kernels, reducing overhead and improving utilization.
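A minimal sketch of that, under the same toy-model assumption; the first call triggers compilation, later calls reuse the fused kernels:

```python
import torch

# Same toy model; torch.compile traces it and fuses ops into fewer kernels.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
).cuda()
compiled_model = torch.compile(model)

x = torch.randn(64, 1024, device="cuda")
out = compiled_model(x)  # first call compiles; later calls reuse the fused kernels
```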
Another approach is to write custom kernels tailored to specific workloads (e.g., optimized attention or fused ops). However, this comes with significant challenges:
> requires deep expertise in kernel writing
> installation hell
> integration with the model is non-trivial
To address this, @huggingface introduces the `kernels` library.
With this one can:
> build custom kernels (with the help of a template)
> upload them to the Hub (like models or datasets)
> integrate them into models with ease
Let's take a look at how the transformers team uses the `kernels` library to integrate custom kernels into existing models. (more in the thread)
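As a rough sketch of what calling a Hub kernel looks like (the repo id `kernels-community/activation` and the `gelu_fast` call follow the library's README example; treat them as illustrative):

```python
import torch
from kernels import get_kernel  # pip install kernels

# Fetch a compiled kernel from the Hub; it is downloaded and cached
# like a model or dataset, with no local build step.
activation = get_kernel("kernels-community/activation")

x = torch.randn(16, 1024, dtype=torch.float16, device="cuda")
y = torch.empty_like(x)
activation.gelu_fast(y, x)  # writes the GELU of x into y (signature per the repo)
```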

@CuriosityonX Wdym by "Breaking"?
Pick a random ahh article from years ago that talks any bs
and create irrelevant hype.
Bait
rajat retweeted

Today I uninstalled Antigravity.
Not because I suddenly decided I hate AI tools or because I woke up wanting to write everything by hand like it is 2006 again (I hate this to an extent now), but because after weeks of trying to actually rely on it for real work I realized I was spending more time fighting the tool than writing code, and at some point it becomes genuinely absurd when the thing that is supposed to accelerate your workflow keeps interrupting it every five minutes with some new limitation, broken tool call, or a quiet little reminder that the feature you thought you were using is actually sitting behind an ULTRA plan.
Half the time the tool calls did not even work.
You would ask it to run something, or fetch something, or analyze a file, and it would confidently say it was doing it, only for absolutely nothing to happen, or it would hallucinate that a tool succeeded when it clearly did not, or it would suddenly switch models mid conversation and the reasoning would drop so hard that it felt like you had handed your keyboard to someone who skimmed the documentation once three months ago.
And the weirdest part is that none of this feels accidental anymore; it feels designed. Google got people to switch to its VS Code fork by offering free Opus and other cooler models; let's be honest, no one used Antigravity for the Gemini slop.
Suddenly the responses slow down, the context window shrinks, the tool calls stop working as reliably, and you quietly get pushed back to the default Gemini tier that feels like the AI equivalent of running a modern game on integrated graphics where everything technically runs but nothing feels smooth enough to actually enjoy.
And this is where Google has been especially frustrating lately because the entire ecosystem is starting to feel like a carefully engineered funnel where the free tier exists mostly to demonstrate how good the paid version might be, which is a very different thing from actually giving people a usable tool.
You open the model list and you see the interesting ones sitting there like museum exhibits behind glass.
You can look at them.
You can occasionally poke them.
But the moment you try to rely on them for real development work the system starts nudging you toward the same solution every single time, which is the little upgrade button that promises things will magically work better once you start paying.
And maybe they will.
But the experience leading up to that moment feels so intentionally constrained that it starts leaving a bad taste in your mouth, because instead of feeling like you are using a powerful piece of software you start feeling like you are trapped inside a product demo that never quite ends.
Which is why today I just removed Antigravity completely and went back to writing things by hand, and yes it absolutely takes longer and yes I am typing more boilerplate than I probably should in 2026, but at least the code I write actually runs, the tools I call actually exist, and there is something strangely refreshing about a development workflow where the only thing between you and your program is your own ability to write it instead of a rotating stack of rate limits, model downgrades, and half-working integrations that constantly remind you that the good version of the tool is apparently waiting for you somewhere behind a subscription tier.
rajat retweeted

@ramxcodes literally hate this gif/video trend that's been going on recently
it's all just slop
rajat retweeted

@DevVora4 @PrinSciAdvOff Yeah, there needs to be a proper plan with targets and expectations. Just teaching kids "AI" and making "AI Universities" is just desperation for relevance.

@RajatNotes @PrinSciAdvOff yeah. all I am saying is if we want to lead, we will have to think beyond the current facade. generating data and training mindlessly won't take us anywhere

As part of the ongoing AI Policy White Paper Series, the Office of the Principal Scientific Adviser to the Government of India releases a white paper on "Advancing Indigenous Foundation Models".
The versatility of Foundation Models makes them a critical layer of today’s AI ecosystem and a key area for innovation in India. Therefore, developing indigenous foundation models is a strategic priority. India’s objective is to harness foundation models for inclusive growth and public good, while ensuring they are governed in a manner consistent with the country’s values, legal framework, and security interests.
This white paper explains India's approach to advancing indigenous foundation models through public–private collaboration, and to governing these systems in a way that supports trust, accountability, and responsible adoption.
The White Paper also details India's approach, which is centred on building indigenous capability across the foundation-model stack. Rather than relying on a single model, India is developing an ecosystem that combines
(i) shared compute access,
(ii) India-centric data and model repositories, and
(iii) multiple model-building efforts across text, speech, multimodal, and sectoral systems.
Read the White Paper here: psa.gov.in/CMS/web/sites/…