Nagesh C

2.2K posts

@nchintada

Work: DevOps. Product Development. System Architectures. Leadership. Management. Fun: Travel, Photography, Robotics, Arts, Sports, Music

iPhone: 38.859623,-77.375366 · Joined February 2009
607 Following · 310 Followers
Nagesh C retweeted
Vala Afshar@ValaAfshar·
Try to remember this as you get older - advice from one of the greatest thinkers of all time
12 replies · 254 retweets · 1.2K likes · 117.1K views
Nagesh C retweeted
Dr Milan Milanović@milan_milanovic·
LLMs Are Not Reading Your Code

We keep calling LLMs "AI coding assistants." But writing code and understanding code are not the same thing. Researchers from Virginia Tech and Carnegie Mellon University just ran 750,000 debugging experiments across 10 models to determine how well LLMs actually understand code. The results show that you should not blindly trust your AI coding assistant when debugging. Here is what they found:

1. A renamed variable breaks the debugger. Researchers created a bug, confirmed that the LLM found it, then made changes that don't touch the bug at all, such as renaming a variable or adding a comment. In 78% of cases, the model could no longer find the same bug. The bug was still there. The variable names and comments changed, and that was enough.

2. Dead code is a trap. Adding code that never runs reduced bug-detection accuracy to 20.38%. Models treated dead code as live and flagged it as the source of the bug, even though the bug was on another line. So LLMs cannot reliably distinguish "this runs" from "this never runs."

3. Models read top-to-bottom, not logically. 56% of correctly found bugs were in the first quarter of the file. Only 6% were in the last quarter. The further down the code, the less attention the model pays to it. If the bug lives in the bottom half of your file, the model is already less likely to find it.

4. Function reordering alone cut accuracy by 83%. Changing the order of functions in a Java file caused an 83% drop in debugging accuracy. The code itself was unchanged. Where the code physically sits in the file matters more to the model than what the code does. That is a sign of pattern recognition, not real code understanding.

5. Newer models hardly move the needle. Claude improved ~1% between 3.7 and 4.5 Sonnet on this task. Gemini improved by ~1.8%. Every model release comes with a new benchmark leaderboard and new headlines, but the ability to reason about code under realistic conditions is improving slowly.

6. These were best-case conditions. The study used single-file programs of ~250 lines, each with a clear description of what the code should do. The authors say this was intentional: they wanted best-case conditions. Real production code is multi-file, cross-module, and poorly documented, so expect worse performance there.

Here are three things worth changing based on the research (a minimal sketch of the first appears after this post):

🔹 Pass execution context, not just code. When asking an LLM to debug, include test output, stack traces, and failure messages alongside the source. Without runtime details, the model is guessing based on the code.

🔹 Don't trust it on deep-file bugs. If the suspect code is in the bottom third of a long file, the model will have trouble finding it. Consider splitting the context or feeding the relevant function directly.

🔹 Clean up dead code before using AI debugging tools. Commented-out blocks and unreachable branches will mislead the model. It cannot filter them out.

We rate AI coding tools on HumanEval. That tests whether a model can write a function from a description, but it says nothing about finding a bug in code the model didn't write. Those are different problems. We're using the wrong benchmark.
87 replies · 231 retweets · 1.1K likes · 103.6K views
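
The first recommendation above translates directly into tooling. Below is a minimal sketch of what "pass execution context, not just code" can look like in practice: run the failing test, capture its output and stack trace, and bundle them with the source before asking a model to debug. The helper names, prompt wording, and file paths are illustrative assumptions, not from the paper or any particular product; any LLM client can consume the resulting prompt.

```python
import subprocess

def collect_runtime_evidence(test_cmd: list[str]) -> str:
    """Run the failing test and capture exit code, stdout, and the
    stack trace on stderr, so the model sees behavior, not just text."""
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    return (
        f"$ {' '.join(test_cmd)}\n"
        f"exit code: {result.returncode}\n"
        f"--- stdout ---\n{result.stdout}\n"
        f"--- stderr / stack trace ---\n{result.stderr}"
    )

def build_debug_prompt(spec: str, source: str, evidence: str) -> str:
    """Bundle spec, source, and runtime evidence into one prompt,
    instead of sending the source file alone."""
    return (
        "Debug the program below.\n\n"
        f"Intended behavior:\n{spec}\n\n"
        f"Source code:\n{source}\n\n"
        f"Observed failure:\n{evidence}\n\n"
        "Point to the specific line(s) causing the failure and justify "
        "the diagnosis from the runtime evidence above."
    )

if __name__ == "__main__":
    # Hypothetical target: a module with one failing pytest test.
    with open("calculator.py") as f:
        source = f.read()
    evidence = collect_runtime_evidence(
        ["pytest", "tests/test_calculator.py", "-x", "--tb=short"]
    )
    print(build_debug_prompt("See the docstring in calculator.py.", source, evidence))
```

Runtime evidence also mitigates the dead-code failure mode: a trace from an actual run shows which paths executed, which the study suggests models cannot infer from source text alone.
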
Nagesh C retweeted
Dog Head@dog_head·
SNL it's Uncanny🤣🤣🤣🤣
197 replies · 2.9K retweets · 31.4K likes · 458.5K views
Nagesh C retweeted
BrooklynDad_Defiant!☮️
RIP Chuck Norris, who just passed at age 86, and who fought Bruce Lee in one of the BEST movie fight scenes of all time, in "Way of the Dragon."
470 replies · 3.7K retweets · 17.1K likes · 1.2M views
Nagesh C retweeted
Benonwine@benonwine·
9-year-old lad walking home from football sees something not right. Three blokes trying to drag a girl into a van. Most people would freeze… he didn’t. Started shouting, ran straight at them, caused a scene. They panicked and ran off. She got away. Nine years old. Fair play to the kid, that’s proper courage.
2.1K replies · 11.1K retweets · 44.5K likes · 364.6K views
Nagesh C retweeted
Sheel Mohnot@pitdesi·
It is harder for Asians to get into top colleges than other races, but it is much harder for South Asians than East Asians.
Werner Zagrebbi🇦🇿@zagrebbi

The famous SFFA case treated Indians and East Asians as a single group. This masked significant heterogeneity: it's way harder to get in if you're Indian! In Columbia's internal admissions database (h/t @cremieuxrecueil), East Asian applicants had 41% lower odds of admission than equally qualified White applicants, whereas South Asian applicants had 63% lower odds.

60 replies · 158 retweets · 1.7K likes · 214.4K views
Nagesh C retweeted
James Blunt@JBlunt1018·
HUGE NEWS and honestly, this is long overdue. For the first time in a while, it feels like the left is actually stepping up to push back against the H-1B misinformation and the growing anti-Indian rhetoric online and in real life.

Hasan Minhaj just dropped an excellent breakdown of the whole H-1B / “Indian narrative” and it hits on a lot of the misinformation that’s been floating around unchecked. I’m attaching a clip below + the full YouTube link.

This matters. For too long, protectionist narratives have dominated the conversation, often driven by fear, bad data, or outright misinformation. And it’s had real consequences.

It’s time to:
— push back on the rhetoric
— correct the misunderstandings
— stop letting one side control the entire narrative

Good to finally see someone with a platform call it out.
357 replies · 742 retweets · 4.5K likes · 270.1K views
Nagesh C retweeted
The Daily Show@TheDailyShow·
.@jonstewart Iran-splains the ramifications of closing the Strait of Hormuz in a way even Trump can understand
134 replies · 2.6K retweets · 9.6K likes · 600.9K views
Nagesh C retweeted
Black Phillip@poe_collector·
If you think that we’re making serious progress socially in the US, you’re wrong. This video is 13 years old. Peak woke died in 2017. Most of you have no idea how good we once had it.
Mizuno_Sonata@mizuno_sonata

@poe_collector The opener from The Newsroom was relevant when it came out and even MORE relevant today sadly. 😞

474 replies · 5.6K retweets · 28K likes · 551.8K views
Nagesh C retweeted
Rowan Cheung@rowancheung·
A surgeon just removed a man's prostate ...and he was 1,500 miles away.

From an office in London, Dr. Prokar Dasgupta controlled a 4-armed robot at a hospital in Spain. The robot was fitted with a 3D camera and connected to London via fiber optic cable with a backup 5G connection (lag was just 0.06 seconds).

The story is wild:
> 62-year-old Paul Buxton had expected to fly to England and wait months on the NHS list.
> Instead, the surgeon came to him... through a fiber optic cable.
> Dasgupta is already scheduled to perform the procedure again on March 14th, with around 20,000 surgeons watching live.

Access to care is going to look so different in the future. For context, many low-income countries have fewer than one trained surgeon per 100,000 people. The US alone is projected to face a shortage of up to 20,000 surgical specialists by 2036.

...Remote surgery doesn't just make medical care more convenient; it may be the only way some patients ever receive it one day.
40 replies · 37 retweets · 228 likes · 29.1K views
Nagesh C retweeted
Anish Moonka@AnishA_Moonka·
If you're an AI startup in India, renting processing power from the government to train your model costs about $0.7 per hour. The same hardware on Amazon Web Services costs $3.7. On Microsoft Azure, $6.6. The Indian government is subsidizing AI infrastructure at rates that would make most Western startups do a double-take (a quick cost sketch follows this post).

I read all 26 pages of the white paper this tweet links to. The numbers inside are wild.

The IndiaAI Mission has a budget of about $1.2 billion over five years, approved in March 2024. Almost half of that, roughly $500 million, goes straight to building the processing power AI companies need to train their models. The original plan was to deploy 10,000 processors. By December 2025, they had 38,000 running: 3.8x what they promised.

A government open call in January 2025 pulled 506 proposals. The four startups picked first were Sarvam AI, Soket AI, Gnani AI, and Gan AI. Eight more were added by September. India now has 12 separate teams building AI models, ranging from tiny ones for basic chatbots to massive ones rivaling those from the US and China. They cover language, voice, vision, medical diagnosis, material science, and even brain-computer interfaces.

The one I keep coming back to is Sarvam AI. They raised $41 million from Lightspeed, Peak XV, and Khosla Ventures. In May 2025, they released a model built on top of a French AI system (Mistral Small) and customized for Indian languages. It got roasted online. Critics said it was a foreign model in Indian clothing. So they went back and built Sarvam-105B completely from scratch, using Indian hardware under the government mission. It outperformed China's DeepSeek-R1 on certain tests, even though DeepSeek-R1 is roughly six times its size. Both were released for anyone to download and use in March 2026.

There's something else buried in the paper I haven't seen another country try at this scale. India is building a copyright system specifically for AI training data. Under a December 2025 government proposal, AI companies can train their models on any copyrighted content they can legally access: books, articles, music, anything. Creators cannot say no. But the moment an AI product makes money, royalties are collected by a centralized government body and distributed back to creators. Singapore allows AI companies to use content without payment. China requires strict consent before training. India is trying a middle path, and publishers are already calling it forced participation.

Stanford's AI Vibrancy Index, which measures a country's overall AI strength across research, talent, infrastructure, and investment, ranked India third globally in 2025, up from seventh in 2023. But the actual scores tell you how far the gap still is: US at 79, China at 37, India at 22. And India's $1.2 billion budget sits next to China's $47.5 billion semiconductor fund and Saudi Arabia's $100 billion Project Transcendence. India is currently spending 40x less than the frontrunners.

This white paper is the most detailed public bet yet that smart infrastructure design can close that gap.
Office of Principal Scientific Adviser to the GoI@PrinSciAdvOff

As part of the ongoing AI Policy White Paper Series, the Office of the Principal Scientific Adviser to the Government of India releases a white paper on “Advancing Indigenous Foundation Models.”

The versatility of Foundation Models makes them a critical layer of today’s AI ecosystem and a key area for innovation in India. Therefore, developing indigenous foundation models is a strategic priority. India’s objective is to harness foundation models for inclusive growth and public good, while ensuring they are governed in a manner consistent with the country’s values, legal framework, and security interests.

This white paper provides an understanding of India’s approach to advancing indigenous foundation models through public–private collaboration, and to governing these systems in a way that supports trust, accountability, and responsible adoption.

The White Paper also details India’s approach, which is centred on building indigenous capability across the foundation-model stack. Rather than relying on a single model, India is developing an ecosystem that combines (i) shared compute access, (ii) India-centric data and model repositories, and (iii) multiple model-building efforts across text, speech, multimodal, and sectoral systems.

Read the White Paper here: psa.gov.in/CMS/web/sites/…

60 replies · 621 retweets · 3.3K likes · 289.9K views
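
To make the subsidy gap at the top of the post concrete, here is a quick back-of-the-envelope comparison. The 10,000 GPU-hour workload is a hypothetical illustration, not a figure from the white paper; only the per-hour rates come from the post.

```python
# Per-GPU-hour rates quoted in the post above, in USD.
rates = {
    "IndiaAI (subsidized)": 0.7,
    "AWS": 3.7,
    "Azure": 6.6,
}

gpu_hours = 10_000  # hypothetical fine-tuning run, for illustration only

for provider, rate in rates.items():
    print(f"{provider:<22} ${rate * gpu_hours:,.0f}")

# Prints roughly $7,000 on IndiaAI vs $37,000 on AWS vs $66,000 on Azure:
# about 5x and over 9x more for the same hours.
```
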
Nagesh C retweeted
Athenaeum Book Club@athenaeumbc·
A powerful scene in the Odyssey happens when Odysseus finally returns to Ithaca after twenty years of war and wandering. You would expect the story to end with celebration, with the hero coming home, the family reunited, and order restored. Homer does something far stranger.

Odysseus arrives disguised as a beggar, because Athena warns him that the palace has been taken over by more than a hundred suitors who have been living there for years, eating his food, drinking his wine, and pressuring his wife Penelope to marry one of them. They believe Odysseus is dead and in their minds the kingdom is already theirs.

So the king of Ithaca walks through his own halls dressed in rags while the men stealing his house sit comfortably at his tables. They mock him, throw scraps at him, and one of them even strikes him, and Odysseus takes it. That is the remarkable part, because the same man who blinded the Cyclops and survived twenty years of disasters now stands quietly while strangers insult him in his own home. Homer tells us his heart burns inside his chest and that he wants to attack them immediately, yet he restrains himself and waits.

Instead of striking, Odysseus studies the room carefully. He counts the men, watches their habits, and quietly observes which servants remain loyal and which have betrayed him. The hero of the Odyssey does something most people cannot do, which is delay revenge until the moment is right.

Eventually Penelope announces a contest and brings out Odysseus’ great bow, declaring that she will marry the man who can string it and shoot an arrow through twelve axe heads lined up in a row. One by one the suitors try and fail, because none of them can even bend the bow. Then the beggar asks for a turn. The suitors laugh at first, but the bow is eventually handed to him.

Odysseus takes it in his hands and strings it effortlessly. Homer says the sound of the bowstring tightening rings through the hall like the note of a swallow. Then he places an arrow on the string and sends it cleanly through all twelve axe heads. In that moment the beggar disappears. Odysseus turns the bow toward the suitors and reveals who he is.

What follows is one of the most brutal scenes in Greek literature. The doors are sealed and the suitors realize too late that they are trapped inside the hall. Odysseus, his son Telemachus, and two loyal servants begin killing them one by one. There is no escape, no mercy, and no negotiation. The men who spent years consuming another man’s house die inside it.

It is a violent ending, but Homer wants you to understand something important. The real danger to Odysseus was never just the monsters and storms on the long journey home. It was the possibility that someone else might take his place while he was gone. When Odysseus finally returns, he reminds everyone in Ithaca of a simple truth: a man’s home is not truly his unless he is willing to fight for it.
1.8K replies · 12.4K retweets · 69.3K likes · 27.9M views
Nagesh C retweeted
Fred Lambert@FredLambert·
2018: Musk quits OpenAI, citing conflict of interest with Tesla's own AI effort.
2019: Musk claims Tesla is now an AI company.
2020-2022: Tesla misses every single autonomy timeline set by Musk.
2022: Musk sells billions worth of Tesla shares to acquire Twitter, reducing his stake in the then-successful automaker.
2023: Musk sees the success of ChatGPT and forms xAI, a private AI company, despite being CEO of Tesla, which he also claims to be an AI company.
2024: Musk threatens Tesla shareholders to give him a bigger stake in Tesla (after he sold his) or he won't be building AI products at Tesla anymore.
2025: Tesla shareholders bend the knee and give Elon what he wants.
Also in 2025: xAI merges with/acquires X after it loses about 70% of its value compared to Musk's acquisition price, which was paid with Tesla shares.
2026: Musk has Tesla invest $2 billion into xAI/X, which is hemorrhaging money and talent.
Also in 2026: Musk has xAI merge with SpaceX.
Also in 2026: Musk admits that xAI was built wrong and needs to be rebuilt from the ground up.
Electrek.co@ElectrekCo

Musk admits xAI 'not built right' — weeks after Tesla invested $2 billion electrek.co/2026/03/13/elo… by @fredlambert

84 replies · 478 retweets · 3.7K likes · 301K views
Nagesh C retweeted
Aakash Gupta@aakashgupta·
Meta is about to spend $135 billion in capex this year to license someone else’s AI.

Zuckerberg made the call himself. Llama 4 flopped in April 2025. Instead of fixing the team he had, he paid $14.3 billion to poach Scale AI’s Alexandr Wang, blew up the entire AI org, created Meta Superintelligence Labs, recruited the former GitHub CEO, hired a co-creator of ChatGPT, and imposed 70-hour workweeks on a company that used to run on consensus and committee. The man who mass-fired 21,000 employees during the “Year of Efficiency” decided the problem was he hadn’t spent enough money.

Eleven months and billions later: Avocado underperformed Google’s Gemini 3.0 on internal benchmarks and just got delayed to May. That’s two consecutive flagship model failures in 12 months.

Now Meta is reportedly considering licensing Google Gemini to power Meta AI while Avocado bakes longer. The same Google that just signed a $1 billion per year deal to run Apple’s Siri. The same Google whose Gemini models are now the intelligence layer behind 1.5 billion iPhones.

Run the math on what Google is assembling (a tiny sketch follows this post). Apple: 1.5 billion devices. Meta: 3.6 billion MAUs across Facebook, Instagram, and WhatsApp. If both deals close, Google’s AI models would sit behind roughly 5 billion user touchpoints. No other company is close.

Google spent a decade getting mocked for falling behind OpenAI. While everyone was writing the obituary, Pichai was building the infrastructure that makes Gemini the enterprise default. Apple evaluated OpenAI, Anthropic, and Google. Google won on performance AND price.

Meta’s 2026 capex guidance is $115 to $135 billion. The company spending more on AI infrastructure than all but 50 countries’ GDPs might end up routing its 3.6 billion users through a competitor’s model.

The distribution moat everyone assumed Meta had was always the apps, never the models. Google just proved it.
Ejaaz@cryptopunk7213

holy shit Meta might ditch ai efforts and go with google gemini instead

Meta to delay their new AI model launch and use gemini to power Meta AI - HUGE fucking win for google:
- Meta's avocado model underperformed frontier models from openai, google and anthropic (shitty reasoning, coding etc)
- this comes after Meta spent $20B hiring a new AI team thats produced... no ai models.
- looking at licensing google gemini (google just licensed to Apple for $1B per year)

Google is fast-becoming the preferred model for the largest companies in the world. Meta has 3.6 BILLION MAUs if this happens google will single-handedly have the largest AI distribution of any company.

37 replies · 47 retweets · 303 likes · 130.7K views
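
The "run the math" step above is simple addition, but one caveat is worth making explicit: touchpoints are not unique users, since the two populations overlap. A tiny sketch using only the figures quoted in the post:

```python
# Figures quoted in the post above, in billions.
apple_devices = 1.5  # iPhones that would be served by Gemini via Siri
meta_maus = 3.6      # Facebook + Instagram + WhatsApp monthly actives

# Touchpoints simply add; unique users would be lower, because many
# iPhone owners also use Meta apps, so the two populations overlap.
touchpoints = apple_devices + meta_maus
print(f"~{touchpoints:.1f}B user touchpoints")  # ~5.1B, i.e. "roughly 5 billion"
```
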
Nagesh C retweeted
Sundar Pichai@sundarpichai·
We trained a new flood forecasting model designed to predict flash floods in urban areas up to 24 hours in advance. To help address a flash-flood data gap, we created Groundsource: a new AI methodology using Gemini to identify 2.6M+ historical events across 150+ countries. We’re open-sourcing this dataset to advance global research, and urban flash flood forecasts are live now in Flood Hub to help communities stay safe.
304 replies · 1.4K retweets · 11.4K likes · 810.4K views
Nagesh C retweeted
Pratim Dasgupta@PratimDGupta·
Shivam Dube quit cricket at 14 because his family had nothing left. His dad Rajesh leased out his factory, fell into depression, and cried every single day. When things got better, Rajesh built a pitch at home and gave him 100s of throwdowns daily because coaches only gave him 1 hour a week. He made him bat for eight hours. He watched him go from 110 kg to 75 kg through pure grind and belief. That World Cup medal? Always belonged to his father. 🏆🇮🇳
78 replies · 1K retweets · 9.4K likes · 341.4K views
Nagesh C retweeted
Physics In History@PhysInHistory·
In the 1940s, Subrahmanyan Chandrasekhar was committed to his teaching role at the University of Chicago, despite being based at the Yerkes Observatory. Each week, he traveled 80 miles to teach a special course attended by only two students. The students were Tsung-Dao Lee and Chen-Ning Yang. They proved their mentor's faith was well-placed when they both won the Nobel Prize in Physics in 1957, years before Chandrasekhar received the same honor in 1983. Remarkably, this course went down in history as the only one where every attendee received a Nobel Prize, underscoring the extraordinary impact of Chandrasekhar's dedication and teaching. 📷 AIP Emilio Segrè Visual Archives, Physics Today Collection
101 replies · 921 retweets · 6K likes · 213.9K views
Nagesh C retweeted
Cricketopia@CricketopiaCom·
In the final of the World Championship of Cricket 1985, Kapil Dev delivered a superb opening spell: 7–1–17–3. He was on a hat-trick after dismissing Mudassar Nazar and Qasim Umar off consecutive balls. Look out for that inswinging yorker, probably the ball of the tournament.
34 replies · 166 retweets · 1.1K likes · 119.4K views