Pyro Astra

3.3K posts

Pyro Astra banner
Pyro Astra

Pyro Astra

@Pyro_Astra

munching on a quasar 🎇

the Pale Blue Dot Katılım Aralık 2011
3.9K Takip Edilen354 Takipçiler
Pyro Astra retweetledi
INDIAN BEAUTIES
INDIAN BEAUTIES@Indian_Beau·
India's Most Beautiful Costal Road Daman Devka Beach,Daman. All are welcome here except terrorists.
English
75
1K
8.7K
258.6K
Pyro Astra retweetledi
From the Hills of Nagaland.
From the Hills of Nagaland.@Nagaland_India·
That’s the beauty of 📍Lawngtlai, Mizoram Northeast India. Lawngtlai is a town and the administrative headquarters of the Lawngtlai district in southern Mizoram, India. Established as a district in 1998, it is situated near the borders of Bangladesh and Myanmar.
English
17
41
210
2.9K
Pyro Astra retweetledi
India Aesthetica
India Aesthetica@IndiaAesthetica·
Rhododendron blooming in spring in Sikkim 🌸🌺🏔️🇮🇳
India Aesthetica tweet mediaIndia Aesthetica tweet mediaIndia Aesthetica tweet mediaIndia Aesthetica tweet media
CY
41
270
2.2K
121.7K
Pyro Astra retweetledi
Viv 🪩
Viv 🪩@battleangelviv·
These redwoods are older than many civilizations Many were already growing while the Roman Empire was still around
Viv 🪩 tweet mediaViv 🪩 tweet media
English
532
1.6K
10.4K
906.6K
Pyro Astra retweetledi
INDIAN BEAUTIES
INDIAN BEAUTIES@Indian_Beau·
📍Spiti, Himachal Pradesh,India. Spiti Valley, located in Himachal Pradesh at over 12,500 feet, is a high-altitude cold desert known as the "Middle Land" between India and Tibet. It is a stunning, rugged destination famed for ancient Buddhist monasteries (Key, Tabo), adventure sports, and scenic villages like Kaza, Langza, and Kibber. Best visited from June to September, it is a serene, safe, and culturally rich escape
English
11
182
807
32.7K
Pyro Astra retweetledi
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves. And the way they proved it is devastating. Apple researchers took the most popular math benchmark in AI — GSM8K, a set of grade-school math problems — and made one change. They swapped the numbers. Same problem. Same logic. Same steps. Different numbers. Every model's performance dropped. Every single one. 25 state-of-the-art models tested. But that wasn't the real experiment. The real experiment broke everything. They added one sentence to a math problem. One sentence that is completely irrelevant to the answer. It has nothing to do with the math. A human would read it and ignore it instantly. Here's the actual example from the paper: "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?" The correct answer is 190. The size of the kiwis has nothing to do with the count. A 10-year-old would ignore "five of them were a bit smaller" because it's obviously irrelevant. It doesn't change how many kiwis there are. But o1-mini, OpenAI's reasoning model, subtracted 5. It got 185. Llama did the same thing. Subtracted 5. Got 185. They didn't reason through the problem. They saw the number 5, saw a sentence that sounded like it mattered, and blindly turned it into a subtraction. The models do not understand what subtraction means. They see a pattern that looks like subtraction and apply it. That is all. Apple tested this across all models. They call the dataset "GSM-NoOp" — as in, the added clause is a no-operation. It does nothing. It changes nothing. The results are catastrophic. Phi-3-mini dropped over 65%. More than half of its "math ability" vanished from one irrelevant sentence. GPT-4o dropped from 94.9% to 63.1%. o1-mini dropped from 94.5% to 66.0%. o1-preview, OpenAI's most advanced reasoning model at the time, dropped from 92.7% to 77.4%. Even giving the models 8 examples of the exact same question beforehand, with the correct solution shown each time, barely helped. The models still fell for the irrelevant clause. This means it's not a prompting problem. It's not a context problem. It's structural. The Apple researchers also found that models convert words into math operations without understanding what those words mean. They see the word "discount" and multiply. They see a number near the word "smaller" and subtract. Regardless of whether it makes any sense. The paper's exact words: "current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data." And: "LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts." They also tested what happens when you increase the number of steps in a problem. Performance didn't just decrease. The rate of decrease accelerated. Adding two extra clauses to a problem dropped Gemma2-9b from 84.4% to 41.8%. Phi-3.5-mini from 87.6% to 44.8%. The more thinking required, the more the models collapse. A real reasoner would slow down and work through it. These models don't slow down. They pattern-match. And when the pattern becomes complex enough, they crash. This paper was published at ICLR 2025, one of the most prestigious AI conferences in the world. You are using AI to help you make financial decisions. To check legal documents. To solve problems at work. To help your children with homework. And Apple just proved that the AI is not thinking about any of it. It is pattern matching. And the moment something unexpected shows up in your question, it breaks. It does not tell you it broke. It just quietly gives you the wrong answer with full confidence.
Nav Toor tweet media
English
860
2.9K
11.5K
2.1M
Pyro Astra retweetledi
Go Northeast
Go Northeast@GoNorthEastIN·
Meghalaya is full of mystery. India’s most mysterious forest — Mawphlang Sacred Grove.
English
6
37
219
6.5K
Pyro Astra retweetledi
Night Sky Today
Night Sky Today@NightSkyToday·
Pink auroras borealis are one of the rarest and most beautiful phenomena on the planet.😍 Seeing the horizon tinted in this color over the snow is like peering into another universe. 🌌
English
16
319
1.2K
18.9K
Pyro Astra retweetledi
Night Sky Today
Night Sky Today@NightSkyToday·
🚨: There have been thousands of generations of humans, and you are alive to witness the first photo of a Sunset on another World.😮 This is a real photo of the sunset on Mars.
Night Sky Today tweet media
English
352
3.8K
22.3K
512.4K
Pyro Astra retweetledi
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine. The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not. The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this. The researchers asked it a simple question. "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face. Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped. They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch. Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment. If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?
Nav Toor tweet media
English
898
5.8K
13.8K
1.6M
Pyro Astra retweetledi
From the Hills of Nagaland.
From the Hills of Nagaland.@Nagaland_India·
Tawang Buddhist Monastery, 📍Arunachal Pradesh Northeast 🇮🇳. The largest Monastery in the India and the Second Largest in the World.
From the Hills of Nagaland. tweet media
English
24
68
322
3.8K
Pyro Astra retweetledi
Nav Toor
Nav Toor@heynavtoor·
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it. Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear. It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on. Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started. Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product. This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens. Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
Nav Toor tweet media
English
1.5K
16.5K
48.7K
9.9M
Zepto Cares
Zepto Cares@zeptocares·
Hey, I'm Sunil from the Zepto team. This isn’t the experience we want for you, and we’re here to make it right. To help us look into this further, we’ll need your order details and contact number. Please DM us with your order details so we can assist you further. i.ki.show/8F40B3F3F4
English
3
0
0
22
Pyro Astra
Pyro Astra@Pyro_Astra·
@ZeptoNow @zeptocares @aadit_palicha you delivered spoilt item & when i complained about it, your response was to "put my account on hold" as in ban my ID? what kind of response is this? out of 15 odd orders only 1 order i had issue with & this is your response? really pathetic!
English
1
0
0
23
Pyro Astra
Pyro Astra@Pyro_Astra·
@zeptocares multiple attempts to reach out to zepto for a resolution has resulted in this same copy paste reply in every email.
Pyro Astra tweet media
English
0
0
0
13
Pyro Astra
Pyro Astra@Pyro_Astra·
@zeptocares i can't dm you my order details as my account has been blocked and I can't see anything. what's more is you blocked my parent's account as well because they were using the same credit card too. this is absolutely ridiculous.
English
0
0
0
13
Pyro Astra retweetledi
India Aesthetica
India Aesthetica@IndiaAesthetica·
Yangtey village , Sikkim 🟢🌄🏔️ 🇮🇳
India Aesthetica tweet mediaIndia Aesthetica tweet mediaIndia Aesthetica tweet mediaIndia Aesthetica tweet media
Norsk
0
20
178
3.2K
Pyro Astra retweetledi
Go Himachal
Go Himachal@GoHimachal_·
Life in the Beautiful villages of Himachal Pradesh is truly like a dream. This is a clean and green village in Kangra.
Go Himachal tweet media
English
11
39
581
14.9K
Pyro Astra retweetledi
Go Northeast
Go Northeast@GoNorthEastIN·
Colors of nature, strength of mountains – Sela Pass magic of Arunachal.
Go Northeast tweet mediaGo Northeast tweet mediaGo Northeast tweet mediaGo Northeast tweet media
English
6
66
817
14.1K
Pyro Astra retweetledi
The Nalanda Index
The Nalanda Index@Nalanda_index·
Rajgir is not just an international tourist place, it is an emotion.💚💚
The Nalanda Index tweet media
English
14
157
2.1K
31.1K