



We Are VIPs, Can't Stand In Queue, Want IPL Tickets - Karnataka Congress MLA Vijayanand Kashappanavar
Harshit









When you realise that China watched every Indian jet take off and informed Pakistani forces during OPERATION SINDOOR 🤯 That's why the Pakistani DGMO told the Indian DGMO: ''We know this specific aircraft from __ airbase is about to take off, and perhaps it's about to launch a new wave of attacks on us; we would request you to pull it back.'' (Confirmed by Lt Gen Rahul R Singh, Deputy Chief of Army Staff, on 4th July 2025 during a FICCI event.) China provided LIVE INTEL to Pakistan!!! That's why you must have satellites like NavIC, GSAT-7/7A and RISAT (these were used extensively during Operation Sindoor to conduct precision strikes on terror camps, airbases, and other military infrastructure), and we need to launch more of them into orbit. Unfortunately, NavIC is down to its last stage (we are cooked). This is a huge matter of concern; why aren't we talking about it? Sources used: The Hindu, The Economic Times and Mint. Video credit: ANI



#BREAKING: Dubai Airport in the UAE under fresh attack from Iranian ballistic missiles. Several passengers and staff injured. Developing story at midnight.


Trump just banned Anthropic from the entire federal government. That's the headline. Here's what actually happened.

Hegseth designated Anthropic a "supply chain risk," which means every contractor, supplier, and partner that does business with the U.S. military is now banned from conducting any commercial activity with Anthropic. Effective immediately. The defense industrial base includes roughly 60,000 companies.

The $200 million Pentagon contract was 1.4% of Anthropic's $14 billion revenue. Survivable. The supply chain label is a different animal entirely. Boeing and Lockheed Martin were already asked this week to assess their Anthropic exposure. Anthropic says eight of the ten largest U.S. companies use Claude. Many hold defense contracts. Those companies now have to certify they don't touch Claude in their Pentagon workflows, or potentially drop it entirely to stay clean. One policy analyst estimated that "some large portion" of Anthropic's existing customer base could evaporate because they either have government contracts or want them in the future.

The designation is normally reserved for Huawei and firms linked to the Chinese Communist Party. The company that voluntarily cut off hundreds of millions in Chinese revenue, shut down CCP-sponsored cyberattacks, and advocated for chip export controls now sits in the same category. The two contract terms they refused to drop: mass domestic surveillance of Americans and fully autonomous weapons with no human oversight.

The capability gap the Pentagon just created is staggering. Claude is the only AI model actively running on classified military networks. It was used in the Maduro raid through Palantir. It operates inside national nuclear laboratories. xAI signed the "any lawful use" terms to get Grok into classified systems, but defense officials privately admit Grok can't match Claude. The six-month phaseout exists because they banned the model they depend on and have no substitute ready.

Then the industry response broke the Pentagon's strategy. Over 330 employees from Google and OpenAI signed a solidarity letter. Sam Altman went on CNBC and said OpenAI holds the same red lines. The Pentagon picked this fight to establish that AI companies serve without conditions. Instead it unified the industry around the exact two guardrails it wanted eliminated.

Anthropic is planning to go public this year, valued at $380 billion. Whether the supply chain risk label actually forces Fortune 10 companies to drop Claude or quietly dies in legal challenges will determine everything. The Pentagon is about to find out how many of those 60,000 contractors use Claude. That number is the only one that matters now.

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of "effective altruism," they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic's stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.



I spent 100 hours over the past week researching, writing, and editing the piece we just put out. It's a scenario, not a prediction, unlike most of our work. But it was rigorously constructed; dismissing it outright requires the kind of intellectual laziness that tends to get expensive. And we've released it for free. Hopefully you enjoy it. citriniresearch.com/p/2028gic



INDIA IS DOING THE TRADE OF THE DECADE 🔥

FM Nirmala Sitharaman just indirectly gave it official blessing today (post-RBI board meet): "All gold… is imported… dependence on precious metals is very much from outside only… Gold has always been a favoured investment for households… Most countries today, particularly their central banks, are buying gold and silver… the spike is largely due to central banks also buying and storing."

Here's the macro translation every liquidity watcher needs: India as a system is now structurally LONG hard money (gold + silver) and SHORT the dollar. Exactly how this Trade of the Decade plays out at national scale (see the back-of-envelope sketch after this list):

1. India earns billions of fresh dollars every month - IT/services exports + NRI remittances create a permanent structural surplus.
2. Households (and quietly the official sector) take those dollars and recycle them straight into physical gold & silver imports.
3. National balance-sheet shift:
✅ LONG hard money: household gold holdings alone > $5 trillion (bigger than India's entire GDP). RBI gold share at a record 17% of reserves and rising.
❌ SHORT the dollar: every gold price spike = selling dollars to fund imports.

Yes, it widens the merchandise trade deficit. Yes, it puts mild pressure on the rupee. Yes, gold imports have spiked to $12 bn+ in peak months. But the FM's tone is crystal clear: "Not alarming… usual seasonal demand… hasn't gone beyond a certain limit… we're watching but it hasn't reached alarming proportions." No duty hike in Budget 2026. No new taxes. No restrictions. This is official blessing.

Global central banks stacking + Indian households stacking = the structural bull case for gold & silver is now India's de-facto national strategy. The Trade of the Decade is live, and India is fully in it. Position accordingly. 💎
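For anyone who wants the mechanics rather than the slogan, here is a minimal back-of-envelope sketch in Python of the recycling flow in steps 1-3. The monthly inflow figures are illustrative assumptions, not official statistics; only the $12 bn peak-month gold import number comes from the post itself.

```python
# Back-of-envelope sketch of the "long hard money, short dollar" flow.
# All inflow figures below are assumptions for illustration only.

monthly_services_exports_usd_bn = 30.0  # assumed IT/services export earnings
monthly_remittances_usd_bn = 10.0       # assumed NRI remittance inflows
monthly_gold_imports_usd_bn = 12.0      # post cites $12 bn+ in peak months

dollar_inflow = monthly_services_exports_usd_bn + monthly_remittances_usd_bn
recycled_share = monthly_gold_imports_usd_bn / dollar_inflow

print(f"Fresh dollars earned per month: ${dollar_inflow:.0f}bn")
print(f"Recycled into gold imports:     ${monthly_gold_imports_usd_bn:.0f}bn "
      f"({recycled_share:.0%} of inflows)")

# The "short dollar" leg: a gold price spike raises the dollar outflow
# needed to buy the same physical quantity.
gold_price_move = 0.10
extra_outflow = monthly_gold_imports_usd_bn * gold_price_move
print(f"A {gold_price_move:.0%} gold spike adds ~${extra_outflow:.1f}bn "
      f"of monthly dollar selling at constant import volume")
```

Under these assumed inputs roughly 30% of the structural dollar surplus is converted into bullion each month, which is exactly the balance-sheet rotation the post describes.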

The "Godfather of AI," Geoffrey Hinton, warned in his Nobel Prize speech that the technology he helped create is already causing harm. He acknowledged AI's promise, noting it will create "highly intelligent and knowledgeable assistants who will increase productivity in almost all industries." But he added a critical condition: "If the benefits of the increased productivity can be shared equally, it will be a wonderful advance for all humanity." Then came the warnings. Hinton outlined three ways AI is already causing real-world harm, not hypothetically, but right now: • Creating "divisive echo chambers by offering people content that makes them indignant" • Being used by "authoritarian governments for massive surveillance" • Enabling cybercriminals to launch "phishing attacks" at scale He then turned to what's coming next: "In the near future, AI may be used to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim." All of these short-term risks, he stressed, "require urgent and forceful attention from governments and international organizations." But it was his final warning that carried the most weight. Hinton spoke directly about what happens when we build digital intelligence that surpasses our own: "We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short term profits, our safety will not be the top priority." His closing words were blunt: "We urgently need research on how to prevent these new beings from wanting to take control. They are no longer science fiction." What's striking is the progression: from echo chambers we can already see, to weapons we can imagine, to superintelligent systems we can't yet control. Each step more consequential than the last. And when the man who helped build modern AI is the one calling for urgent action, it means the threat is no longer theoretical.


Sam Altman has mastered the art of making a $500 billion infrastructure gamble feel like a law of nature. Here's exactly how he does it.

Altman: "It takes like 20 years of life and all of the food you eat during that time before you get smart."

Brilliant framing. AI training costs feel biological. Natural. Inevitable. The same way nobody blames a child for eating. But OpenAI isn't raising a child. It's building 10-gigawatt data centers and burning through tens of billions of dollars to maintain technological dominance. That's not biology. That's a brute-force business strategy. Your electricity bill is going up. Your grid is straining. Blackouts are spreading across data center regions. And the company responsible needs you to feel like questioning that is as heartless as starving a child.

Altman: "If you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question versus a human?"

Notice the move. He isolates the inference cost and writes off the training cost as evolutionary debt.

Altman: "The very widespread evolution of the 100 billion people that have ever lived to produce you."

By anchoring AI training to the entire evolutionary history of humanity, the Stargate data centers start to feel like a reasonable price to pay for intelligence. They're not. They're a choice. An aggressive engineering and business decision that scales with every model generation and grows more expensive with each one. OpenAI has crossed $13 billion in annual recurring revenue and is still fundamentally unprofitable. The energy consumption isn't a natural law of intelligence. It's the cost of winning a race at any price before anyone else can.

And once that comparison embeds, once people accept that AI training equals childhood, questioning the energy burn becomes questioning whether children should eat. That's not an analogy. That's a shield. Equating a multi-gigawatt campus to a human eating lunch is how you normalize a trillion-dollar takeover of the global energy grid without anyone questioning whether the output justifies the cost. The most expensive business decision in human history just became as unquestionable as feeding a child. That didn't happen by accident.
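To see the amortization move concretely, here is a minimal arithmetic sketch in Python. Every input is an illustrative assumption (no figures at this granularity are public); the point is the structure of the calculation, not the values.

```python
# Sketch of the rhetorical move above: quoting only the per-query
# inference cost while the training cost is amortized out of sight.
# All three inputs are assumptions for illustration only.

training_energy_gwh = 50.0           # assumed total training energy for one model
queries_over_lifetime = 1e11         # assumed queries served by that model
inference_energy_wh_per_query = 0.3  # assumed per-query inference energy

# Spread the one-time training energy across every query it ever serves.
amortized_training_wh = training_energy_gwh * 1e9 / queries_over_lifetime
total_wh_per_query = amortized_training_wh + inference_energy_wh_per_query

print(f"Inference only:         {inference_energy_wh_per_query:.2f} Wh/query")
print(f"Amortized training:     {amortized_training_wh:.2f} Wh/query")
print(f"Full per-query total:   {total_wh_per_query:.2f} Wh/query")

# The framing critiqued above quotes only the first line. Whether the
# second line is one-time "evolutionary debt" or a recurring capital
# cost depends on how often the model is retrained, which is exactly
# the business choice the biology analogy hides.
```

Under these assumed numbers the amortized training cost is larger than the inference cost it is meant to excuse, and unlike evolution, it recurs with every new model generation.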