Nitin Agarwal
@nityn
3.3K posts

Building - @fvbankus https://t.co/qPeDNZS3qy

San Juan, Puerto Rico · Joined March 2007
3.2K Following · 1.5K Followers
Amitabh Kant @amitabhk87 ·
Energy security cannot depend on geography. India must turn the West Asia crisis into an opportunity to push for true energy sovereignty. We can accelerate this process by:
1. Raising the renewable target to 1,500 GW by 2030.
2. Strengthening grids in Gujarat, Rajasthan, Karnataka, Tamil Nadu, and adding more Renewable Energy Management Centres.
3. Mandating battery storage in all tenders, and cutting GST on storage assets to 5%.
4. Scaling clean cooking via Ujjwala-linked induction cooker aggregation.
5. Fully electrifying new buses, 2/3-wheelers by 2030, cars/trucks by 2035; fixing the advanced chemistry-cell battery storage PLI.
6. Executing 100 GW of nuclear by 2047 and diversifying critical minerals away from China.
My article in today's Business Standard.
Amitabh Kant tweet media
42 replies · 119 reposts · 497 likes · 16.9K views
Nitin Agarwal retweeted
Google Gemma @googlegemma ·
Gemma 4 can run on phones without an internet connection! 🤯 It can perform local agentic tasks, such as logging and analyzing trends. When connected, it can also make API calls. Want to try it yourself? Get the Google AI Edge App on iOS or Android. (🔊 Sound on for the demo!)
284 replies · 947 reposts · 8.1K likes · 635.6K views
Nitin Agarwal @nityn ·
The game is not JUST raising money to be a payment infrastructure company; it is about building one and scaling it. #keepbuilding
1 reply · 0 reposts · 1 like · 10 views
Nitin Agarwal retweeted
The Curious Tales @thecurioustales ·
🚨BREAKING: 8 weeks of gratitude practice physically rebuilds the neural pathways between your memory and reward centers.

Your brain physically rewires itself every time you feel grateful. Eight weeks of intentional gratitude practice creates measurable structural changes in the neural pathways connecting your hippocampus to your ventral tegmental area. The memory center starts talking to the reward center in a fundamentally different way. New synaptic connections form. Existing ones strengthen. The physical architecture of how you process positive experiences rebuilds itself.

Most people approach gratitude like a mood they can choose to feel. A psychological vitamin they remember to take when life gets difficult. The neuroscience reveals something far more profound. Gratitude is a biological intervention that sculpts brain tissue.

Researchers tracked participants practicing gratitude exercises for two months using brain scans. They watched new neural highways construct themselves in real time. The anterior cingulate cortex developed stronger connections to the medial prefrontal cortex. The brain learned to route positive emotional experiences through higher-order thinking centers instead of storing them as fleeting feelings.

Every positive experience you've ever had exists as a neural trace in your memory network. Most sit dormant, accessible only when something external triggers the specific sensory combination that originally encoded them. You smell coffee, suddenly remember a conversation from years ago. Random. Unreliable. Outside your control.

Gratitude practice systematically rewires that retrieval system. After two months, participants could voluntarily access positive memories with increasing ease. Their brains had built stronger pathways between memory storage areas and emotional processing centers. They experienced deeper emotional resonance during memory retrieval. The quality of remembering itself had improved.

The participants also started noticing positive details in their present environment they had previously filtered out. Their attention systems recalibrated. The same neural pathways pulling positive memories forward were scanning current experiences more thoroughly for elements worth encoding as positive memories. Their brains became biased toward collecting evidence that life contains meaningful moments.

Most cognitive interventions try to change how you interpret negative experiences. Gratitude practice changes how thoroughly you notice positive ones. It teaches your visual and emotional processing systems to detect opportunities and pleasures that were always present but neurologically invisible.

The timeline reveals something crucial about neural plasticity. Weeks one through three showed minimal structural changes. Participants felt slightly more positive, but brain scans looked identical to baseline. Weeks four through six showed the first measurable increases in gray matter density. Weeks seven and eight revealed entirely new neural network formation. Two months. Your nervous system can physically restructure itself with consistent practice.

The method was almost embarrassingly simple. Participants wrote down three specific things they felt grateful for every evening, explaining why each mattered. No meditation apps. No guided visualizations. Just pen, paper, and the requirement to identify gratitude targets with enough detail that their brains had to actively search for positive elements.

Specificity drives the neural development. General statements like "I'm grateful for my family" generate different brain activity than precise observations like "I'm grateful my daughter laughed at my terrible joke during dinner because it showed me she still finds me funny despite growing more independent." The brain needs detailed targets to practice connecting memory specifics to emotional rewards.

After eight weeks, participants developed a fundamentally different relationship with their attention and memory systems. Each became someone whose brain automatically scans for and emotionally amplifies the aspects of experience that make existence feel worthwhile. The neural pathways persist after practice ends. Gratitude carves lasting roads through consciousness.
The Curious Tales tweet media
Darshak Rana ⚡️@thedarshakrana

Gratitude rewires the brain. Gratitude rewires the brain. Gratitude rewires the brain. Gratitude rewires the brain. Gratitude rewires the brain. Gratitude rewires the brain. Gratitude rewires the brain. Gratitude rewires the brain. Gratitude rewires the brain.

103 replies · 2.4K reposts · 9.9K likes · 929.8K views
Nitin Agarwal retweeted
klöss @kloss_xyz ·
let me explain what Karpathy just shared

he's spending way less time using AI to write code and more time using it to build personal knowledge bases

the full breakdown:

→ he dumps raw sources (articles, papers, repos, datasets, images) into a folder. then has an LLM organize them into a wiki… a collection of markdown files with summaries, links between related ideas, and concept articles that connect everything together

→ he uses Obsidian as his frontend. he views raw data, the organized wiki, and visualizations all in one place. the LLM writes and maintains the entire wiki. he rarely touches it directly

→ once the wiki gets big enough (~100 articles, ~400K words on one recent research topic)… he just asks the LLM questions against it. no RAG (complex retrieval system) needed. the LLM maintains its own index files and reads what it needs

→ outputs aren't just text. he has the LLM render markdown files, slide decks, charts, and images… then files the outputs back into the wiki so every question he asks makes the knowledge base smarter

→ he runs "health checks" where the LLM finds inconsistent data, fills gaps using web search, and suggests new connections and articles. the wiki cleans and improves itself over time

→ he even vibe coded a search engine over his wiki that he uses directly in a browser or hands off to an LLM as a tool for bigger questions

→ his next step: training a custom model on his own research so it knows the material in its weights… not just in the context window

most people use AI to get answers. Karpathy is using AI to build his own 'Jarvis' via compounding knowledge systems that get smarter the more he uses them

the difference between asking ChatGPT or Claude a question and having a personal research engine that grows with every session is the gap most people haven't crossed yet

and this is where it gets really powerful: not replacing your thinking but organizing everything you've ever learned into something you can query or create with forever

if you've been using CLAUDE.md and context files in Claude Code… this is that same idea at a much bigger scale

if you're doing any kind of AI work or deep learning on a new topic right now… this workflow is worth studying closely. you'll want to adopt it yourself

this is one of AI's brightest minds after all. we're all better off listening to him.
klöss tweet media
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web ui), but more often I hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

93 replies · 432 reposts · 3.7K likes · 474.7K views
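The "small and naive search engine over the wiki" Karpathy mentions is easy to picture. Below is a minimal sketch of what such a tool could look like: stdlib Python, TF-IDF-style keyword scoring over a folder of .md files. The file name, scoring scheme, and CLI shape are assumptions for illustration, not his actual code.

```python
# wiki_search.py: a hypothetical, naive keyword search over a markdown wiki.
# Not Karpathy's tool; a plain-Python illustration of the idea (no RAG).
import math
import re
import sys
from collections import Counter
from pathlib import Path

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(wiki_dir: str) -> dict[Path, Counter]:
    # One bag-of-words Counter per markdown file in the wiki.
    return {
        path: Counter(tokenize(path.read_text(encoding="utf-8", errors="ignore")))
        for path in Path(wiki_dir).rglob("*.md")
    }

def search(index: dict[Path, Counter], query: str, k: int = 5) -> list[tuple[float, Path]]:
    n_docs = len(index)
    scored = []
    for path, counts in index.items():
        score = 0.0
        for term in tokenize(query):
            if counts[term] == 0:
                continue
            # TF-IDF-ish: terms frequent in this doc but rare across the
            # wiki score highest.
            doc_freq = sum(1 for c in index.values() if c[term] > 0)
            score += counts[term] * math.log(n_docs / doc_freq)
        if score > 0:
            scored.append((score, path))
    return sorted(scored, reverse=True)[:k]

if __name__ == "__main__":
    # Usage: python wiki_search.py ./wiki "battery storage"
    index = build_index(sys.argv[1])
    for score, path in search(index, sys.argv[2]):
        print(f"{score:8.2f}  {path}")
```

An LLM agent could invoke something like this via the CLI, as the post describes, whenever a question needs more of the wiki than its index files cover.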
Nitin Agarwal retweeted
Andrej Karpathy @karpathy ·
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.
Andrej Karpathy@karpathy

[Quoted tweet: Andrej Karpathy's "LLM Knowledge Bases" post, reproduced in full above.]

961 replies · 2.6K reposts · 25.2K likes · 6.2M views
Nitin Agarwal retweeted
JUMPERZ @jumperz ·
karpathy is showing one of the simplest AI architectures that actually works.. dump research into a folder, let the model organise it into a wiki, ask questions, then file the answers back in.

the real insight is the loop... every query makes the wiki better. it compounds.. now thats a second brain building itself.

i think this is so good for agents if applied right. instead of pulling from shared memory every session, they build a living knowledge base that stays. your coordinator is not just coordinating tasks anymore.. it is maintaining institutional knowledge so every execution adds something back to the base.

the bigger implication is crazy tho. agents that own their own knowledge layer do not need infinite context windows, they need good file organisation and the ability to read their own indexes. way cheaper, way more scalable, and way more inspectable than stuffing everything into one giant prompt.
JUMPERZ tweet media
Andrej Karpathy@karpathy

[Quoted tweet: Andrej Karpathy's "LLM Knowledge Bases" post, reproduced in full above.]

136 replies · 723 reposts · 8.2K likes · 912.9K views
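The point above about agents reading their own indexes instead of stuffing one giant prompt can be made concrete. Here is a minimal sketch, assuming a wiki/ folder of markdown files and a stub summarizer standing in for the LLM-written summaries; none of this code is from the original posts.

```python
# A hypothetical sketch of "read your own indexes": keep a machine-maintained
# index.md so an agent can skim one file of summaries and open only the few
# articles it needs, instead of loading the whole wiki into context.
from pathlib import Path

def summarize(text: str, max_chars: int = 160) -> str:
    # Stand-in for an LLM-written summary: first non-heading line, truncated.
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            return line[:max_chars]
    return ""

def rebuild_index(wiki_dir: str) -> None:
    wiki = Path(wiki_dir)
    entries = []
    for path in sorted(wiki.rglob("*.md")):
        if path.name == "index.md":
            continue  # don't index the index itself
        rel = path.relative_to(wiki)
        text = path.read_text(encoding="utf-8", errors="ignore")
        # One line per article: a link plus a one-line summary.
        entries.append(f"- [{rel}]({rel}): {summarize(text)}")
    (wiki / "index.md").write_text(
        "# Wiki index\n\n" + "\n".join(entries) + "\n", encoding="utf-8"
    )

if __name__ == "__main__":
    rebuild_index("./wiki")  # an agent reads index.md first, then opens files
```

The design point is the one the tweet makes: a cheap, inspectable index file scales better than an ever-larger context window.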
Nitin Agarwal retweeted
FV Bank @fvbankus ·
For decades, banks have stored money. What if they could help manage it continuously? In his latest editorial, FV Bank's Chief Revenue Officer @nityn explores Agentic Banking, where intelligent systems don't just report financial data, but act on it within defined guardrails. A shift from reactive finance to continuous optimization. #ICYMI Explore the full perspective in @financialit_net's 2026 Spring Edition (pages 36-37): issuu.com/financialit/do…
FV Bank tweet media
0 replies · 2 reposts · 5 likes · 115 views
Nitin Agarwal retweeted
Prajit Nanu @prajitn ·
My annual reminder that SWIFT is not a payments company.

SWIFT is a messaging network between financial institutions. It never touches the money. Let's debunk a couple of myths.

SWIFT is not responsible for payment delays. When a cross-border payment is delayed, it's usually because of:
• intermediary banks
• compliance and AML checks
• liquidity availability between banks
• cut-off times in local markets

SWIFT simply passes the message between the banks involved. Blaming SWIFT for a slow payment is like blaming email when a contract takes a week to get signed. The message moved instantly. The people involved took time.

SWIFT is not a PSP. SWIFT does not:
• hold funds
• move funds
• settle funds

Banks move the money between accounts. SWIFT only sends the instruction message between those banks.

An easy way to think about it: calling SWIFT a payments company is like calling WhatsApp a telecom operator or calling email a courier service. It carries the message. It doesn't move the asset.

Understanding this distinction matters because most of the complexity in cross-border payments sits in banking, settlement, and compliance, not in the messaging layer.
6 replies · 6 reposts · 39 likes · 5.1K views
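The messaging-versus-settlement split above is the kind of distinction a toy model makes vivid. Here is a small sketch (not real SWIFT, and every class and field is an illustrative assumption): the network only carries an instruction, and money only moves inside the banks' own ledgers.

```python
# Toy model of the distinction in the post above: a messaging network carries
# the instruction; only bank ledgers move the money. Purely illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentInstruction:
    # Roughly what a SWIFT-style message contains: who, whom, how much.
    debtor: str
    creditor: str
    amount: float
    currency: str

class MessagingNetwork:
    """Passes instructions between banks. Holds no funds, settles nothing."""
    def deliver(self, msg: PaymentInstruction) -> PaymentInstruction:
        return msg  # delivery is effectively instant; delays happen elsewhere

class Bank:
    def __init__(self, accounts: dict[str, float]):
        self.accounts = accounts

    def settle(self, msg: PaymentInstruction) -> None:
        # The money only moves here, inside the ledger, after compliance,
        # liquidity, and cut-off checks; the network never touches it.
        self.accounts[msg.debtor] -= msg.amount
        self.accounts[msg.creditor] += msg.amount

if __name__ == "__main__":
    network = MessagingNetwork()
    bank = Bank({"alice": 100.0, "bob": 0.0})
    bank.settle(network.deliver(PaymentInstruction("alice", "bob", 40.0, "USD")))
    print(bank.accounts)  # {'alice': 60.0, 'bob': 40.0}
```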
Nitin Agarwal retweeted
Kyle Samani @KyleSamani ·
There was a story from ~2011 that I vividly recall.

It was shortly after Larry Page stepped in as sole CEO of Google. This was wartime-CEO Page, who was very worried about losing mobile to Apple.

Page required that all Google execs stop using desktops/laptops entirely. Execs were only allowed to use mobile devices, for what I would assume was 3-6 months. He knew that by forcing the execs to use Google's own mobile services, the services would get a lot better.

I think we're in a pretty similar moment right now with AI.

I have come to the realization that for almost any task I want to do on my computer, the first step is "use AI." Sometimes it's actually dumb and counterproductive, sure, but I think it's a helpful forcing function.

Most CEOs should be forcing a similar exercise throughout their companies.
20 replies · 20 reposts · 309 likes · 33.3K views
Nitin Agarwal retweeted
Dr. AK 🇮🇳 @docakx ·
India's first fully indigenous 1.5T helium-free MRI scanner was unveiled on December 25, 2025.

VoxelGrids Innovations, a Bengaluru-based medtech startup founded by Arjun Arunachalam (ex-GE Global Research and IIT Bombay), has developed India's first fully indigenous 1.5T helium-free MRI scanner after ~12 years of R&D. Backed by Zoho Corporation and grants from BIRAC/Tata Trusts, the system uses a proprietary "dry magnet" conduction-cooled design, eliminating scarce/expensive liquid helium.

Key advantages: ~30-40% lower manufacturing and operational costs (priced around $400,000 vs. imported equivalents), reduced power consumption, a lighter/compact footprint (suited for Indian infrastructure and unstable grids), and AI-enhanced imaging. It is not a copy of foreign systems but features bottom-up innovations in hardware, software, pulse sequences, and integration.

On December 25, 2025, the scanner was officially unveiled and deployed at Chandrapur Cancer Care Foundation near Nagpur, Maharashtra, where it became clinically operational, scanning real patients for cancer diagnostics. As of March 2026, it remains active in real-world use.

The Bengaluru facility has capacity for 20-25 units/year. A full commercial launch was targeted by end of FY26 (March 2026), with growing order interest from hospitals linked to Tata Trusts and others. Plans include a mobile/containerized version for rural/tier-2-3 access.

Amid global helium shortages (exacerbated by Middle East issues), this positions VoxelGrids as a cost-disruptive, Atmanirbhar Bharat milestone in high-end medtech, aiming to democratize advanced imaging.
Dr. AK 🇮🇳 tweet media
Hyderabadi Chicha 2.0@HyderabadiChic3

Had an MRI for the first time. Asked the doctor why MRIs are so costly. He replied that the machine cost plus setup requires 10cr+, and there is a 16-19% duty to pay on the machines as well. I came out with two questions in mind. First, why does the Government impose such duties on equipment used for essential and emergency medical treatment? Secondly, I wondered why, even in 2026, we still need to import such machines instead of manufacturing them in India.

34 replies · 790 reposts · 3.2K likes · 187.7K views
Nitin Agarwal retweeted
Moneycontrol @moneycontrolcom ·
🚨~"India’s first rare earth magnet unit is approved today. The entire IP, Design, and R&D is done in India." ~"61% of domestic demand of electronics to be met from locally made lithium ion batteries." ~"India will be able to meet 100 per cent demand in laminates segment. We will become global suppliers in this." Union Minister Ashwini Vaishnaw on rare earth magnet industry in India. @AshwiniVaishnaw @GoI_MeitY
Moneycontrol tweet media
25 replies · 511 reposts · 1.7K likes · 52K views
Nitin Agarwal retweeted
Brent Fulfer @Brent_Fulfer ·
90% of founders I meet have a poor network. They may have a great product, but that doesn't help sell investors globally. They fly around to conferences pitching, hear "nice project," and get ghosted.

What to do? Bring value from day 1. The fastest way to do that is via intros, meaning: connect them to relevant people.

For example: I met a prospective LP in Bangkok who was doing business in Singapore. I made an intro to another family office there, and now they do business. In turn, he went from a stranger to someone who now trusts me and has invested in our fund.

The moment you stop being transactional and instead bring VALUE FIRST, doors open. Think "how do I bring value first", not "how do I get money first".
10 replies · 4 reposts · 54 likes · 3.3K views
Nitin Agarwal retweeted
Dustin @r0ck3t23 ·
Elon Musk just explained how Starlink moves the GDP of entire nations. The formula is so simple it should embarrass every development agency on the planet.

Musk: "GDP is a function of average productivity per person."

Productivity per person goes up. GDP goes up. That is the whole equation. Everything else is decoration. And connectivity is the single largest lever on Earth for pushing that number.

Musk: "If you don't have access to the internet, or it's too expensive or low bandwidth, you cannot access the MIT lessons and you can't sell the goods and services that you produce."

No internet means no global knowledge. No global markets. No ability to sell to anyone beyond your village or learn from anyone outside of it. The penalty is total. And it has nothing to do with the person serving it.

There is a child alive right now who is as intelligent as anyone who has ever walked the halls of MIT. She does not know it. Nobody around her knows it. Because the coordinates of her birth have no connectivity. No library. No signal. No link to the world that would show her what she is. She will grow old inside a ceiling that geography built for her. Not because of talent. Not because of effort. Because of a satellite that had not been launched yet.

Musk: "Internet connectivity is certainly a candidate for one of the things that would do more to lift people out of poverty than anything else."

Traditional infrastructure takes decades. Fiber has to be laid. Towers have to be built. Permits have to be approved. Capital has to be attracted to regions that cannot attract it. Starlink bypasses all of it from orbit. No cables. No permits. No waiting for a government to prioritize your village. A dish goes up. Isolation ends. Someone who could not access a textbook yesterday downloads MIT's entire curriculum today. Someone who could only sell to neighbors starts selling to the planet tomorrow. That is not an upgrade. That is a different life.

Musk: "Starlink will actually move the GDP of countries. Like it's gonna be that kind of thing."

He said it like a feature update. But read it again. Move the GDP of countries. Not a company's revenue. Not an industry's output. The gross domestic product of nations. Shifted by one constellation.

The telecom industry spent decades deciding which regions were profitable enough to connect. The rest were written off. Starlink does not make that calculation. It covers the planet. Every farmer. Every welder. Every kid with a clear view of the sky.

The minds that will cure diseases, solve energy, and build things we cannot yet name are already alive. They are already thinking. They have no signal. Starlink is the first technology in human history that can reach them at the speed of deployment instead of the speed of bureaucracy.

And when those minds come online, they will not just change their own lives. They will change the trajectory of the species. That is what Musk actually built. Not a telecom company. The largest unlock of human potential ever launched from a single network.
416 replies · 1.5K reposts · 5.9K likes · 901.4K views
Nitin Agarwal retweeted
Anil Agarwal @AnilAgarwal_Ved ·
Whenever the government has entrusted Indian entrepreneurs, they have done miracles and built world-class sectors like telecom, aviation, ports, steelmaking, cement, and power EPC.

I have done some research and looked at 24 public sector companies which are in the natural resources sector – hydrocarbons, minerals, metals and fertilizers. These companies have very good human resources, and with entrepreneurship, they can try and fulfil India's demand, which is the need of the hour.

India has done very well in agriculture, which is above the ground. We have boosted production and achieved self-sufficiency and surplus. Now the time has come for below the ground, namely hydrocarbons, minerals and fertilizers. India has the best geology.

We are the only ones who have experience in the public sector. In Hindustan Zinc and BALCO, with massive investment and advanced technology, using the same resources and managerial people, we increased employment by 5 times, and production 10-15 times. More than a thousand downstream industries have been created. The same thing can be tried with our public sector. Of course, there will be no retrenchment.

It is very challenging to raise funding in this sector, but our entrepreneurs can do it on the back of amazing reserves and resources. The purpose is to reduce imports and fulfil our Prime Minister's vision of an Atmanirbhar Bharat.
Anil Agarwal tweet media
43 replies · 60 reposts · 374 likes · 18.1K views
Nitin Agarwal retweeted
Dustin @r0ck3t23 ·
Jensen Huang just reverse-engineered why Elon Musk operates at a speed no one on the planet can match. Three traits.

The first is deletion.

Huang: "He has the ability to question everything to the point where everything's down to its minimal amount."

Most engineers solve problems by adding. Musk solves them by subtracting. Every part. Every process. Every assumption that survived because no one had the nerve to kill it. He picks it up. Asks if it's load-bearing. If the answer is anything less than absolutely, it is gone. Not simplified. Not optimized. Removed. What survives is the skeleton. The bare physics of the problem. Nothing between intent and execution. Huang said it plainly. As minimalist as you could possibly imagine.

And he does it at system scale. Not at a product level. Not at a department level. Across entire companies. Entire industries. Entire supply chains. He strips a rocket the same way he strips a meeting. Down to the load-bearing walls and nothing else.

The second is presence.

Huang: "He is present at the point of action. If there's a problem, he'll just go there and show me the problem."

Not a Slack message. Not a report filtered through four layers of people who weren't there when it broke. He walks to the failure. Stands over it. Puts his hands on it. Most executives have never seen the actual problem their company is trying to solve. They have seen slides about it. Read summaries of it. Formed opinions about it in rooms that are nowhere near it. Musk stands over the broken hardware and does not leave until it works.

That collapses the distance that buries most organizations. The gap between something breaking and the person with authority to fix it actually understanding what broke. In most companies, that gap is weeks. For Musk, it is hours.

The third is the one that bends everyone around him.

Huang: "When you act personally with so much urgency, it causes everybody else to act with urgency."

Every supplier has a hundred customers. Every vendor has a dozen priorities. Every manufacturer has a backlog stretching months into the future. Musk makes himself the top of every single one of those lists. Not by demanding it. By demonstrating it. When the CEO shows up at your facility at midnight. When he is moving faster than your own internal team. When his timeline makes yours look like a suggestion. You do not put him in the queue. You rearrange the queue around him.

Huang watched this up close.

Huang: "He does that by demonstrating."

Not by asking. Not by negotiating. Not by leveraging a contract clause. By moving so fast that everyone else's normal pace feels like standing still.

Three traits. Strip everything down. Show up at the failure. Move so fast the world rearranges around you. That is not a management philosophy. That is why one man runs six companies while entire boards cannot keep one moving.
208 replies · 1.7K reposts · 9.4K likes · 767.1K views