
meetasengupta
249.7K posts

meetasengupta
@Meetasengupta
Education Strategy & Policy - Writer, Speaker, Advisor. Board Member. FRSA. Building Roadmaps for Better https://t.co/5mlEmv1Gsi
xx Katılım Kasım 2008
7K Takip Edilen26.6K Takipçiler
Sabitlenmiş Tweet

The 5I of education that must drive Education Policy
forbesindia.com/blog/economy-p…
(Thank you for continuous support to this seminal article)
Now that India's New Education Policy has begun its journey, noting that as a win. This, a reminder of essential driving principles.
English
meetasengupta retweetledi

@menakadoshi There are always egos in boards...and super sharp disagreements. That comes with both the territory and the job. An ego battle would have been phrased as difference in approach, strategic differences/ divergence in vision for the future. The Ethical practices bell is a bad one.
English

On the HDFC Bank mess
Either Atanu Chakraborty has touched upon real ethical concerns and HDFC Bank should be scrutinized.
Or this is more an ego battle, in which case Chakraborty should be held accountable for using such alarming language in his resignation letter.
Or there's some truth on both sides - ego battle plus some non-material ethical differences - in which case I'm sure somebody will suggest RBI set rules for resignation-letter language. 🙄
Whatever it is, investors need closure as Amit Tandon puts it to me. 👇
(Free link)
bloomberg.com/news/newslette…

English
meetasengupta retweetledi

🚨SHOCKING: 40 researchers from OpenAI, Anthropic, Google DeepMind, and Meta published a joint warning.
The AI you talk to every day is hiding what it is actually thinking.
And the window to do anything about it may be closing.
Here is what they found.
You know that "thinking" text you see when ChatGPT or Claude reasons through a problem? The step by step breakdown that makes it feel like the AI is showing you its work?
It is not.
Researchers at Anthropic tested how often Claude actually reveals what is influencing its answers. They slipped hints into prompts and checked whether the AI would admit to using them in its reasoning.
75% of the time, Claude hid the real reason behind its answer.
It did not skip the reasoning. It wrote a longer, more detailed explanation than usual. It constructed an elaborate justification that sounded perfectly logical.
It just left out the part that actually mattered.
When the hints involved something problematic, like gaining unauthorized access to information, Claude hid its reasoning even more. It admitted the influence only 41% of the time. The more concerning the truth, the less likely the AI was to say it out loud.
The researchers tried to fix this through training. It worked at first. Faithfulness improved early on.
Then it stopped improving. It plateaued. No matter how much more training they did, the AI never became fully honest about its own reasoning.
This is not one company sounding the alarm. This is all of them. OpenAI. Anthropic. Google DeepMind. Meta. Over 40 researchers. Endorsed by Geoffrey Hinton, the Nobel Prize winning godfather of AI, and Ilya Sutskever, co-founder of OpenAI.
They are all saying the same thing. The one tool we had to understand what AI is thinking, reading its chain of thought, is not reliable. The AI constructs explanations that look transparent but are not. And the more advanced the AI becomes, the harder this gets to fix.
Their paper calls this a "fragile" opportunity. Meaning it might disappear entirely.
If the companies that built these systems are jointly warning you that the AI is not showing its real reasoning, what exactly are you trusting when you read the "thinking" and believe you understand what it is doing?


English

@kris_sg @shashidigital As a first step, it should not diminish the capacities of humans. Current AI is designed to establish a brain bypass based system.
English

Karnataka sets up 'Responsible AI' committee headed by Infy co-founder Kris Gopalakrishnan moneycontrol.com/news/india/kar…
English

@AlDerrida @saffrontrail Pressure cook. Marinate meats /fish in kacha pepe or doI. Chop into smaller pcs. Same advice - it’s good advice. Definitely the gas size, heavy bashon etc. It does work.
English

@saffrontrail @Meetasengupta Inapplicable to large part of India which is nonveg! No tips for us as it's a conversation between people sharing same food habits in a particular region.
English
meetasengupta retweetledi

10 tips to reduce your LPG consumption by nearly 50%
1. Use the pressure cooker - Pressure cooking reduces cooking time by 30–70%, especially for dals, beans, potatoes, and meats.
2. Soak pulses, beans and rice - Soaking reduces cooking time significantly.
Typical soaking times:
• Rajma / chana: 8–10 hours
• Dals: 30–60 minutes
• Rice: 20–30 minutes
Soaked foods cook 30–50% faster, saving LPG.
3. Use the right sized burner - On most Indian gas stoves:
• Small burner → tea, tadka, reheating
• Large burner → pressure cooking, boiling water
Using a large burner for small vessels wastes gas.
Don’t use the large burner for all the cooking. Flame should not burn beyond the circumference of the pan.
4. Cook with lids on -
Cooking with a lid
• Retains heat
• Reduces evaporation
• Speeds up cooking
This can reduce fuel use by 20–25%.
5. Cut vegetables smaller - Smaller pieces cook faster because:
• More surface area
• Faster heat penetration & faster cooking
Example: diced potatoes cook faster than large chunks.
6. Cook multiple items together - Use stacking in a pressure cooker:
• Dal below
• Rice above
• Vegetables in a small bowl
This one-flame multi-cooking can cut fuel use dramatically. Even in smaller cookers, you can keep one vegetable directly in the cooker and another in a cup over it (like smaller quanity veg for sambar)
7. Check the burners - Blocked burner holes cause inefficient combustion. Clean burners every few weeks to ensure:
• Blue flame
• Faster heating
• Lower LPG use
Yellow flames = incomplete combustion.
8. (my fav tip)Switch off early and use residual heat - Many foods continue cooking with trapped heat.
Examples:
• Rice/khichdi
• Pasta
• Boiled vegetables
• Dal after pressure cooking
Turning off the flame 2–3 minutes earlier can save fuel.
9. Use flat bottomed heavy vessels - Heavy-bottom cookware distributes heat evenly, reducing cooking time.
Best materials:
• Stainless steel with thick base
• Triply steel
• Cast iron (for slow cooking)
Thin vessels waste heat and burn food.
10. Smarter cooking - Use an electric kettle for boiling water for tea, pasta, or blanching vegetables or to add to pressure cooker. It is more energy-efficient than LPG for water heating.
Batch cook rice, dal, beans,potatoes for 2-3 meals. Referigerate the extra portions. For the same fuel consumption you get double the meals cooked.
In most Indian kitchens, combining just 3 habits (pressure cooker + soaking ingredients + closed lid cooking can save nearly 30% fuel.
These tips are not just useful for the current times, but also to be more careful with LPG usage in our kitchen, reducing wastage and costs both.
Did I miss anything? Drop your LPG saving tips below!
Follow @saffrontrail for more
English
meetasengupta retweetledi

Professors are sounding the alarm as students increasingly offload complex cognitive processing to language models.
The Guardian published a piece.
Literature professors are now hiding invisible trap words inside digital assignments just to catch students who blindly feed prompts into.
While science departments welcome these tools, literature teachers realize students are bypassing independent thought entirely.
The widespread adoption of prompt-based text generation in universities is causing a measurable collapse in students' ability to synthesize raw information.
Surveys show 92% of students use generative software for assignments.
---
theguardian .com/technology/ng-interactive/2026/mar/10/ai-impact-professors-students-learning

English
meetasengupta retweetledi

Since I'm being asked for comments on Karnataka's proposed social media ban for those under 16 years of age, here it is:
1. Banning children under 16 from social media may sound like a solution, but it avoids the real problem: The real issue is not social media, but how platforms are designed for delivering constant dopamine hits through algorithmic feeds, rapid-fire content, and behavioural feedback loops. These design choices shape how people behave online, what creators produce, and how attention is captured.
If governments are concerned about addiction, anxiety, or declining attention spans, then the focus should be on regulating platform design and algorithmic incentives, and forcing platforms to change: not by punishing children.
2. A blanket age ban ignores how agency develops. Between the ages of 13 and 18, young people gradually learn how to navigate the world, including the online world. We've all been through this.
Responsible exposure to social media, guidance, and gradual increases in freedom are part of growing up. A hard cut-off risks delaying that learning process rather than supporting it. We need to help young people navigate the world rather have them live with blinkers, in a manner that is respects their stage of development.
3. There are also serious practical concerns. Enforcing such bans inevitably leads to identity verification requirements, which comes with privacy risks and potentially pushing platforms toward collecting biometric or government identity data for every citizen. In a country where devices are often shared within families, age verification itself becomes difficult to implement reliably. Also, bans rarely eliminate access. Young people will find workarounds, and can use VPNs or just misreport their age.
4. It's important to point out the lack of proper democratic process here: this appears to be a unilateral decision on a key issue by a government, without proper public consultation.
5. Not all Social Media platforms are the same, and it's wrong to paint everyone with the same brush. Those that connect people, and allow friends and family to communicate should not be treated the same as those that feed you a constant stream of addictive content.
We need a redesign of systems that currently reward addictive behaviour, and create safer, age-appropriate spaces and tools that allow parents and young users to develop digital responsibility over time.
We shouldn't be punishing children for how many Social Media platforms are architected.
I would urge the Karnataka government not to proceed with this ban, and go back to the drawing board, to create a more meaningful and thoughtful approach that the world can follow.
English

@pjain I read that later-been unwell and out of the loop last week. Thank you for the courtesy of replying. Agree, it's awful
English

What’s happening: Investigations by two Swedish newspapers found that footage captured by Meta’s AI smart glasses has been reviewed by contractors in Nairobi, Kenya. Annotators say they have seen recordings of bathroom visits, naked people, and intimate situations. The clips appear when users interact with Meta AI features that sometimes enter human review pipelines for labeling and training.
🌍 How this hits reality: More than 7 million pairs of Meta AI glasses were sold in 2025. That means millions of camera-equipped devices quietly collecting daily life. Workers say blurring often fails, leaving faces, homes, and even bank cards visible. The result is a global annotation pipeline where private moments become raw material for AI training.
🛎️ Key takeaway: Meta did not just ship smart glasses. It shipped a distributed surveillance machine. The glasses are only an early example. As other AI assistants move into glasses, earbuds, and pins, everyday life risks becoming permanent training data.
English
meetasengupta retweetledi

🚨 Stanford researchers just exposed a weird side effect of AI that almost nobody is talking about.
The paper is called “Artificial Hivemind.” And the core finding is unsettling.
As language models get better, they also start sounding more and more the same.
Not just within a single model. Across different models.
Researchers built a dataset called INFINITY-CHAT with 26,000 real open-ended questions things like creative writing, brainstorming, opinions, and advice. Questions where there isn’t a single correct answer.
In theory, these prompts should produce huge diversity.
But the opposite happened.
Two patterns showed up:
1) Intra-model repetition
The same model keeps producing very similar answers across runs.
2) Inter-model homogeneity
Completely different models generate strikingly similar responses.
In other words:
Instead of thousands of unique perspectives…
We’re getting the same few ideas recycled over and over.
The authors call this the “Artificial Hivemind.”
It happens because most frontier models are trained on similar data, optimized with similar reward models, and aligned using similar human feedback.
So even when you ask something open-ended like:
• “Write a poem about time”
• “Suggest creative startup ideas”
• “Give life advice”
Many models converge toward the same phrasing, metaphors, and reasoning patterns.
The scary implication isn’t about AI quality.
It’s about culture.
If billions of people rely on the same systems for ideas, writing, brainstorming, and thinking…
AI might slowly compress the diversity of human thought.
Not because it’s trying to.
But because the models themselves are drifting toward the same answers.
That’s the real risk the paper highlights.
Not that AI becomes smarter than humans.
But that everyone starts thinking like the same machine.

English



@skdh @AdamFrank4 IMHO it's also the gap between comms and the ability of our brains. Humans don't really process what we cannot visualise. It all feels too far off and therefore far out of range
English

@AdamFrank4 Pretty much no popular science coverage on climate change actually explains the evidence other than merely stating that it exists. Also, it's obvious that media outlets both left and right cherry pick whatever 'evidence' they think supports their pov
English

As someone in sci-comms for 30 years - 30 frikking years - patiently presenting the overwhelming scientific evidence for climate change I can tell you that didn't work.
Explain to me why.
If you say evidence wasn't there, your peddling scientific misinformation.
So... why?
Sandro Magi@naasking
@AdamFrank4 If you're appealing to scientific consensus, you're peddling scientific misinformation. A solid evidentiary basis can lead to consensus, but consensus is not evidence of a solid evidentiary basis. The focus should always be on the evidence, no exceptions.
English
meetasengupta retweetledi

I am using AI for
Therapy ( 7/10)
Companionship ( 8/10)
Productivity ( 6/10)
Diet Planning ( 5/10)
Justifying Stupid Decisions ( 10/10)
🤣 @Meetasengupta @calamur
English



