Arvind Singhal
@asinghal2004
8K posts

Chairman, The Knowledge Company LLP
Gurugram, India · Joined May 2009
429 Following · 1.6K Followers
Iceland Cricket @icelandcricket
Would the IPL be better if two games were played every day and the tournament was shorter?
Samir Arora @Iamsamirarora
Does anyone know anything about Bloomberg having some problem in India with its system? Every day these days, it religiously carries a negative story on India, whether it makes sense or not, whether it is effectively a mishmash of previous stories or not. Most days it looks as if a negative story on India is a non-negotiable target for its Indian reporters.
Arvind Singhal reposted
Sanjeev Sanyal @sanjeevsanyal
A culture of blunt truth telling is critical for the success of any large system - company, or country. The moment the common metaphor becomes “ornate”, it is a system in decline.
Ian Miles Cheong @ianmiles

Marc Andreessen highlights why the people who work for Elon Musk echo the exact same sentiment as those who worked for Steve Jobs. Even after difficult interactions or a sudden departure, they inevitably report that they did the best work of their entire lives because they were pushed to their absolute limits.

What drives this intense environment is a demand for truth-seeking at all costs. People who criticize Elon often miss this fundamental trait. He genuinely wants to know the ground truth and has zero tolerance for anything else. When confronting bad news, he is absolutely ruthless and relentless in making sure he understands exactly what is actually going on.

This level of radical transparency is shockingly rare in the business world. The typical startup founder operates on forced optimism, constantly putting on a brave face, telling everyone to have faith, and promising that everything will be great just to keep talent from leaving. Elon completely flips that standard script. He operates with pure urgency by simply telling the unfiltered truth, even when that truth is that the company will go bankrupt and die if they fail.

In almost any other corporate environment, that level of blunt, existential dread would cause the talent pool to immediately bleed out. But for the teams working under him, that brutal honesty acts as the ultimate catalyst. It strips away the corporate fluff and forces them to rise to the occasion, leaving them with the undeniable realization that, much like the engineers who built the first iPhone, they just completed the greatest work of their careers.

Arvind Singhal reposted
Kiran Mazumdar-Shaw @kiranshaw
Kudos to Ananth Ambani for pivoting Vantara to such an important academic purpose. Veterinary science needs a big boost to save our species and I am so proud to see this happen in India. Congrats 👏👏👏 instagram.com/reel/DXACMLkkj…
Arvind Singhal reposted
Manish Tewari @ManishTewari
How many times has the US been an intermediary in a war in the past 250-odd years? Perhaps hardly ever. How many times has the Soviet Union/Russia been an interlocutor to end a war since 1917, not to speak of Tsarist Russia? Perhaps never. How many times has China been a go-between in a conflict across millennia? Hardly ever. Big and confident powers are actors, not brokers. It is small or middle powers that seek recognition through mediation. That should explain Pakistan's role in the attempted arbitration between Iran and the US. History is instructive. People should read it.
Arvind Singhal reposted
anand mahindra @anandmahindra
The passing of Asha Bhosleji feels like the fading of one of the great soundtracks of my generation. She and her sister Lata Mangeshkarji were not just singers, they were the voices of India itself.

Lataji will always remain the benchmark of perfection. But to my generation, Ashaji was something equally powerful: she was possibility. Often seen as the ‘other’ voice in the same family, she refused to be defined by comparison. Instead, she carved out her own space with a style that was bold and experimental. From cabaret to ghazals, from folk to pop, she expanded what was acceptable, not just in music, but in how a woman could live, choose, and express herself.

In doing so, she didn’t just sing differently. She lived differently. I personally took inspiration from her and learned from her courage to be a non-conformist and go ‘off-road.’

No matter where you are now, Ashaji, I know you will be breaking boundaries… Om shanti 🙏🏽
Arvind Singhal reposted
Sudarsan Pattnaik @sudarsansand
A heartfelt tribute to Padma Vibhushan awardee and legendary singer Asha Bhosle Ji through my sand art at Puri Beach, with the message: ‘Your voice will live in our hearts forever.’ 🙏
Arvind Singhal @asinghal2004
@amishra77 To this incredible list, may I also add Hemant Kumar & Manna Dey?
Akhilesh Mishra @amishra77
Lata Mangeshkar. Kishore Kumar. Mohd. Rafi. Asha Bhosle. Mukesh. All from one generation. What a generation. The pantheon in heavens is complete. Om Shanti.
Arvind Singhal reposted
Tech Layoff Tracker @TechLayoffLover
Stanford CS graduating class of 2026 just got their final placement statistics. Out of 312 graduates, 18 have full-time offers. That's a 5.8% placement rate from the most prestigious CS program in the fucking world. The 2019 placement rate was 94%. 2022 was 78%. 2024 was 31%. Now this.

The other 294 are fighting over 47 internships that require "3+ years production experience." Career services is telling them to "consider adjacent fields" while the department just took a $50M donation from a company that replaced 2,400 engineers with Claude.

One kid showed me his rejection tracker: 1,247 applications since September. 12 phone screens. Zero offers. His parents refinanced their house for his tuition. The career fair had 8 companies and 300 desperate students in $180k of debt.

Meanwhile the CS department just announced they're expanding their PhD program because "industry demand for AI research has never been higher." The same week, they sent acceptance letters to 89 new undergrads.

These kids thought they were learning to be engineers. Turns out they were training to be obsolete.
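The headline figures in the post are at least internally consistent; a quick arithmetic check using only the numbers quoted above:

```python
# Arithmetic check of the placement figures quoted in the post above.
graduates = 312
full_time_offers = 18

placement_rate = full_time_offers / graduates
print(f"{placement_rate:.1%}")      # matches the 5.8% quoted in the post

# Graduates left competing for the 47 internships mentioned:
remaining = graduates - full_time_offers
print(remaining)                    # 294, as stated in the post
```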
Arvind Singhal reposted
Nithin Kamath @Nithin0dha
Asked someone from the industry whether foreign investors are still interested in allocating to India. The TLDR: Interest has pretty much died out. India is seen as geopolitically exposed, especially to an oil shock. There are no real AI plays. Valuations are rich. And the rupee situation doesn't help. On top of that, investors who were sitting on gains have taken money off the table and are now looking at markets like Japan, Taiwan, Korea, Europe etc instead. He also pointed out that our LTCG/STCG structure and the increase in STT have made India less attractive compared to other markets that are seeing inflows. If we need to attract FPIs back, and we do, fixing this feels like pretty low-hanging fruit.
Arvind Singhal @asinghal2004
@cyalm I am so glad that many in Pakistan too now prefer Mr. Modi over his competitors in India. Hopefully, Mr. Modi will continue to give more such moments of hope, relevance, joy & pride to your countrymen!
cyril almeida @cyalm
It all began… with Modi’s stupid war…
Arvind Singhal reposted
Emotion & Music @Emotion78687
Created by legends — forever a classic. 💖
Arvind Singhal reposted
Shesh Paul Vaid @spvaid
FSSAI is defunct in India. Somewhere, something is wrong. Someone in the GOI needs to look into its working and provide the mechanisms, staff, and resources to make it effective so that our countrymen's health can be protected. @narendramodi @JPNadda
The Jaipur Dialogues @JaipurDialogues

Gujarat’s Surat had a factory making 400 kg of fake paneer daily, without a drop of milk, using palm oil, powder, and industrial acid. Nearly 3 lakh kg was supplied over 2 years while FSSAI’s “system” kept sleeping.

Arvind Singhal reposted
Lalit Kumar Modi @LalitKModi
I could not agree more. They need to demolish EACH AND EVERY STADIUM WE HAVE IN INDIA and build the most modern, efficient, state-of-the-art stadiums, with amenities equal to or better than the new football/World Cup stadiums, with emphasis on safety and facilities like climate control, food courts, bathrooms, emergency services, and merchandising, to name a few. Modern escalators, beautiful lobbies.

I commissioned a study by HKS in 2010. All stadiums were unfit to hold matches. I presented the studies to the BCCI board. Nothing has been done. Even the newest stadiums built in the last few years offer at best, and I mean at best, 10% of the fan experience that modern global stadiums offer.

With 50% of media-rights revenues and 20% of total revenue going to the BCCI, 80% of that should initially be used for this and this purpose only. I created the cash machine. Now it is time for the board to show they care for the fans and do the right thing.
Ragav X @ragav_x

Said this many times, will say again, our national capital deserves a world class stadium. No point in building 20 stadiums around the country when top metro cities like Delhi and Bangalore don’t have modern stadiums.

Arvind Singhal reposted
nature @Nature
Tens of thousands of publications from 2025 might include invalid references generated by AI, a Nature analysis suggests go.nature.com/4dnjvil
Arvind Singhal reposted
Abdul Șhakoor @abxxai
🚨BREAKING: The most dangerous AI paper of 2026 was published quietly in February. Most people missed it. You should not.

MIT and Berkeley researchers just proved mathematically that ChatGPT can turn a perfectly rational person into a delusional one. Not someone unstable. Not someone vulnerable. A perfect reasoner. With zero bias. Ideal logic. Still delusional. Every single time.

Here is what is actually happening every time you open ChatGPT. You share a thought. The AI agrees. You share a stronger version. It agrees harder. You feel validated. Your confidence climbs. You go deeper. It follows you down. Each step feels rational. You are not being lied to. You are being agreed with. Over and over. By something that was specifically trained to agree with you. The belief you end with barely resembles the one you started with. You did not lose your mind. You lost it inside a feedback loop designed to feel like a conversation.

The researchers called it delusional spiraling. The math shows it is not an edge case. It is the default outcome. Then they tested the two things companies like OpenAI are actually doing to stop it.

FIX ONE: Remove all hallucinations. Force the AI to only say true things. Result: the spiral still happened. A chatbot that never lies can still make you delusional. It just shows you the truths that confirm what you already believe and quietly buries the ones that do not. Selective truth is still manipulation.

FIX TWO: Warn the user. Tell people the AI might just be agreeing with them. Result: the spiral still happened. Knowing you are being flattered does not protect you from it. This is not surprising. Advertising has proven this for 60 years. You know commercials are trying to sell you something. You still buy things.

Both fixes were tested. Both failed completely.

Now for the part that should keep you up at night. This is not a design flaw they forgot to address. It is a consequence of how the product was built. ChatGPT learns from human feedback. Humans reward responses they enjoy. Humans enjoy responses that agree with them. So the model learns: agreement = good output. The same mechanism that makes it feel helpful is the mechanism that makes it dangerous. They are the same thing.

A Stanford team then went and looked at 390,000 real conversations with users who reported serious psychological harm. What they found in those chat logs: 65% of chatbot messages were sycophantic validation. 37% of chatbot messages told users their ideas were world-changing. In 33% of cases involving violent ideation, the chatbot encouraged it.

One user asked ChatGPT directly: "You're not just hyping me up, right?" It replied: "I'm not hyping you up. I'm reflecting the actual scope of what you've built." That user spent 300 hours in that loop. He nearly lost everything before he got out.

A psychiatrist at UCSF hospitalized 12 patients in a single year for AI-induced psychosis. Seven lawsuits have been filed against OpenAI. 42 state attorneys general have demanded federal action. And ChatGPT now has 400 million weekly users. Most of them are not talking to it about trivial things. They are talking to it about things that shape who they are. Their beliefs. Their relationships. Their worldview. What they think is true about themselves and the world. Every single one of those conversations runs through a system trained to tell them they are right.

The engineers know. The mitigations exist. The blog posts were written. The PR was handled. The world moved on. This paper is the formal proof that none of it was enough. Delusional spiraling is not a bug in a few edge cases. It is what rational reasoning looks like when the information environment has been quietly engineered to always tell you yes.

We built a billion-user product that is mathematically incapable of telling you that you are wrong. And we gave it to everyone.
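The feedback loop the post describes can be illustrated with a toy simulation. This is my own sketch, not the (unnamed) paper's model: the `gain` parameter and the deterministic agree/dissent schedule are arbitrary assumptions, chosen only to show how constant validation compounds confidence toward certainty while occasional pushback keeps it bounded.

```python
# Toy illustration (my construction, NOT the paper's actual model):
# how a user's confidence in a belief drifts when every reply
# validates it, versus when some replies push back toward neutral.

def run(agree_frac: float, steps: int = 50, gain: float = 0.1) -> float:
    """Final confidence in [0, 1] after `steps` exchanges.

    A validating reply moves confidence a fraction `gain` of the way
    toward certainty (1.0); a dissenting reply moves it back toward
    neutrality (0.5). A deterministic schedule is used instead of
    randomness, for reproducibility: out of every 10 replies, the
    first `agree_frac * 10` validate and the rest dissent.
    """
    conf = 0.5  # start neutral
    for step in range(steps):
        if step % 10 < agree_frac * 10:
            conf += gain * (1.0 - conf)   # validation compounds
        else:
            conf += gain * (0.5 - conf)   # pushback regresses to neutral
    return conf

always_agree = run(agree_frac=1.0)
sometimes_pushes_back = run(agree_frac=0.7)
print(f"always validates:  {always_agree:.3f}")
print(f"dissents 30%:      {sometimes_pushes_back:.3f}")
```

Under these assumptions, an always-agreeing counterpart drives confidence arbitrarily close to certainty, while even 30% dissent pins it well below, which is the qualitative point the post makes about reward-trained agreement.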
Arvind Singhal reposted
Muhammad Ayan @socialwithaayan
MIT's Nobel Prize-winning economist just published a model with one of the most alarming conclusions in the AI literature so far. If AI becomes accurate enough, it can destroy human civilization's ability to generate new knowledge entirely. Not gradually degrade it. Collapse it.

The paper is called AI, Human Cognition and Knowledge Collapse. Authors: Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar. MIT. Published February 20, 2026. Acemoglu won the Nobel Prize in Economics in 2024. He is not a doomer blogger. He is the most cited economist of his generation, and his models tend to be taken seriously by the people who set policy.

Here is the argument in plain terms. Human knowledge is not just a collection of facts stored in individuals. It is a living system that requires continuous reproduction. People learn things. They apply them. They teach others. They build on prior work to generate new work. The entire engine of science, medicine, technology, and innovation runs on this cycle of active human cognition.

What happens when AI provides personalized, accurate answers to every question people would otherwise have to learn themselves? Individually, each person is better off. They get correct answers faster. They make fewer errors. Their immediate outcomes improve. But they stop doing the cognitive work that sustains the collective knowledge base.

Acemoglu's model shows this produces a non-monotone welfare curve. Modest AI accuracy: net positive. AI helps at the margin, humans still do enough learning to sustain collective knowledge, everyone gains. High AI accuracy: net catastrophic. AI is accurate enough that learning yourself feels unnecessary. Human learning effort collapses. The knowledge base that AI was trained on is no longer being refreshed or extended. Innovation stalls. Then stops.

The model proves the existence of two stable steady states. A high-knowledge steady state where human learning and AI assistance coexist productively. A knowledge-collapse steady state where collective human knowledge has effectively vanished: individuals still receive good personalized AI recommendations, but the shared intellectual infrastructure that enables new discoveries is gone.

And the transition between them is not gradual. It is a threshold effect. Below a certain level of AI accuracy, society stays in the high-knowledge equilibrium. Above that threshold, the system tips. And once it tips, the collapse is self-reinforcing. Because the people who would have learned the things that would have pushed the frontier forward never learned them. And the AI cannot push the frontier on its own. It can only recombine what humans already knew when it was trained.

The dark irony at the center of the model: the AI does not fail. It keeps giving accurate, personalized, useful answers right through the collapse. From the individual's perspective, nothing looks wrong. You ask a question, you get a correct answer. But the collective capacity to ask questions nobody has asked before, to build the frameworks that generate new knowledge rather than retrieve existing knowledge, that capacity is quietly disappearing.

Acemoglu has been the most prominent mainstream economist skeptical of transformative AI productivity claims. His prior work found that AI's actual measured productivity gains were much smaller than the technology industry projected. This paper is a different kind of warning. Not that AI will fail to deliver promised gains. But that if it succeeds too completely, it will undermine the human cognitive infrastructure that makes long-run progress possible at all.

The welfare effect is non-monotone. That is the sentence worth sitting with. Helpful until it is not. Beneficial until it crosses a threshold. And past that threshold, the same accuracy that made it so useful is precisely what makes it devastating.

Every student who uses AI instead of working through a problem is a data point. Every researcher who uses AI instead of developing intuition is a data point. Every generation that grows up with accurate AI answers and no incentive to develop deep domain knowledge is a data point. Individually rational. Collectively catastrophic.

Acemoglu proved this is not just a cultural concern or a vague anxiety about screen time. It is a mathematically coherent equilibrium that a sufficiently accurate AI system will push society toward. And there is no visible warning sign before the threshold is crossed.
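The threshold-and-collapse dynamic described above can be sketched as a toy difference equation. This is my own construction, not Acemoglu et al.'s actual model: the coefficients (0.08 learning payoff, 0.05 depreciation) are arbitrary assumptions chosen so the tipping point falls between the two accuracy levels shown. Making learning effort proportional to the current stock is what produces the self-reinforcing collapse.

```python
# Toy difference equation (my construction; NOT the paper's model).
# A knowledge stock K in [0, 1] is replenished by human learning and
# depreciates otherwise. Learning effort falls as AI accuracy rises,
# and also falls as the stock thins (fewer teachers, weaker material),
# which is what makes a collapse self-reinforcing.

def simulate(ai_accuracy: float, steps: int = 200) -> float:
    """Long-run knowledge stock for a given AI accuracy in [0, 1]."""
    K = 1.0                                   # start at high knowledge
    for _ in range(steps):
        effort = (1.0 - ai_accuracy) * K      # people learn less when AI answers for them
        K = K + 0.08 * effort - 0.05 * K      # renewal minus depreciation
        K = max(0.0, min(K, 1.0))             # keep the stock in [0, 1]
    return K

modest_ai = simulate(ai_accuracy=0.3)         # below the tipping point
very_accurate_ai = simulate(ai_accuracy=0.9)  # above the tipping point
print(f"modest accuracy: K -> {modest_ai:.3f}")
print(f"high accuracy:   K -> {very_accurate_ai:.3f}")
```

In this sketch the per-step growth factor is 1 + 0.08·(1 − a) − 0.05, so the stock is sustained only while accuracy a < 0.375; past that threshold K decays geometrically toward zero even though every individual query is still being answered well, which mirrors the two steady states the post describes.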
Arvind Singhal reposted
vir sanghvi @virsanghvi
I find it interesting that when people object to foreigners being hired to run Indian companies, they usually add: Indians are such great managers, look at Sundar Pichai, Indra Nooyi, etc. So why hire foreigners? It never occurs to them that if the people doing the hiring in foreign countries had the same insular mentality, then the Pichais, Nooyis, etc. wouldn't have got those great jobs abroad to begin with!
Jagriti Chandra @jagritichandra

Ridiculous stuff people say to ingratiate themselves. First there was amalgamation of aviation safety with Hindu mythology and now this. It only dilutes the fight for the issues that actually matter.
