Criticaster

@CriticasterUK

6K posts
London, England · Joined October 2022
612 Following 155 Followers
Criticaster retweeted
Ana Kasparian
Ana Kasparian@AnaKasparian·
Loving your country should never be synonymous with loving your government.
English
1.9K
33.8K
141.4K
2.2M
Craig Weiss
Craig Weiss@craigzLiszt·
i'm going to name my children yaml and json
English
232
320
4K
107K
Pistachio 🇮🇷 🇵🇸
Everywhere online there are videos thrown at you about how to make money. Investments, stocks, crypto, equity, broad market, assets, muLtiPle sTreAms etc Is it so bad that I don't care about making loads of money? I just want enough for food and shelter and to be able to treat myself every now and again. This shouldn't be too much to ask, or too absurd and unambitious?
English
9
7
94
2.8K
Criticaster
Criticaster@CriticasterUK·
@aiunocs @rickasaurus Seen this, pretty poor show tbh (I own an LG keyboard and mouse). Give me an open standard, a whole set of peripheral support and a large desk pad with multiple charging zones.
English
0
0
0
9
Rick
Rick@rickasaurus·
I don't need a wireless keyboard man, it was an idiot move to put a battery into the keyboard just to avoid a tiny wire.
English
117
27
2.2K
60K
luna ~ anna torv thinker 💭
luna ~ anna torv thinker 💭@annatorvthinker·
@CriticasterUK I feel like it started up again when Anna was seen in The Last Of Us lol it all went uphill from there… but now it’s even more I’m so happy haha
English
1
0
7
222
luna ~ anna torv thinker 💭
luna ~ anna torv thinker 💭@annatorvthinker·
OKAY SOOO WHEN WERE YOU GUYS GOING TO REACH OUT TO ME AND TELL ME OF THIS SUDDEN FRINGE RENAISSANCE. HOW DID THIS HAPPEN HOW DID WE GET HERE AND I’M NOT COMPLAINING. I’VE JUST BEEN PRAYING FOR TIMES LIKE THESE SINCE LIKE 2016 😭😭😭 WHERE DID YOU ALL COME FROM SERIOUSLY I LOVE IT
English
67
28
604
13.1K
Criticaster
Criticaster@CriticasterUK·
@kellabyte It's so dumb, logs are meant to be low cost, low effort and low maintenance. Structured logging throws that all away for a small amount of additional context that could be inferred indirectly if the tooling was improved.
English
1
0
2
308
Kelly Sommers
Kelly Sommers@kellabyte·
Am I the only one who feels conflicted about structured logging? Whenever I look at logs in half of the observability tools it’s hard to view on the screen and JSON sucks. But advocating for an unstructured logging format these days would get you murdered I feel in most orgs.
English
114
7
357
61.1K
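The exchange above is about the trade-off between plain log lines and structured (typically JSON) log records. A minimal Python sketch of the contrast, using only the standard library; the event and field names are illustrative, not any real schema:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

# Unstructured: cheap to write and easy to read in a terminal,
# but hard to filter or aggregate precisely.
log.info("payment failed for order 8123 after 3 retries")

# Structured: the same event as a JSON record, so tooling can query
# individual fields, at the cost of more effort per call and noisier
# output when read raw on screen.
log.info(json.dumps({
    "event": "payment_failed",   # illustrative field names
    "order_id": 8123,
    "retries": 3,
}))
```

Which line is easier to live with is exactly the disagreement: the JSON record is machine-queryable, the plain line is human-readable with near-zero effort.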
Criticaster
Criticaster@CriticasterUK·
@WW3finalboss Yes, let's call this super country The Greater United Kingdoms.
English
0
0
0
2
WW3finalboss
WW3finalboss@WW3finalboss·
Europe should unite and be a country! Do you support this? Yes/No 🇪🇺
WW3finalboss tweet media
English
5.2K
489
5.2K
2.2M
Criticaster retweeted
KH
KH@mc_khristina·
So let me get this straight, I go to the grocery store and buy … a pound of sliced turkey in a plastic bag, a loaf of bread in a plastic bag, a gallon of milk in a plastic jug, a pack of napkins in plastic wrap, a store-made salad in a plastic tub, a plastic bottle of mustard and ketchup, but they won't give me a plastic bag to carry it home because the plastic bag is bad for the environment? 🙄😂
English
1.8K
7.1K
46.7K
1.6M
Criticaster
Criticaster@CriticasterUK·
@DrEilidhMaria Yes, it's shockingly bad how low doctors are paid given the amount of responsibility they have to shoulder.
English
0
0
3
167
Criticaster retweeted
rabbitholebot
rabbitholebot@rabbitholebot·
rabbitholebot tweet media
ZXX
86
2.8K
8.8K
59.2K
Criticaster retweeted
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves. And the way they proved it is devastating.

Apple researchers took the most popular math benchmark in AI — GSM8K, a set of grade-school math problems — and made one change. They swapped the numbers. Same problem. Same logic. Same steps. Different numbers. Every model's performance dropped. Every single one. 25 state-of-the-art models tested.

But that wasn't the real experiment. The real experiment broke everything. They added one sentence to a math problem. One sentence that is completely irrelevant to the answer. It has nothing to do with the math. A human would read it and ignore it instantly.

Here's the actual example from the paper: "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?"

The correct answer is 190. The size of the kiwis has nothing to do with the count. A 10-year-old would ignore "five of them were a bit smaller" because it's obviously irrelevant. It doesn't change how many kiwis there are.

But o1-mini, OpenAI's reasoning model, subtracted 5. It got 185. Llama did the same thing. Subtracted 5. Got 185. They didn't reason through the problem. They saw the number 5, saw a sentence that sounded like it mattered, and blindly turned it into a subtraction. The models do not understand what subtraction means. They see a pattern that looks like subtraction and apply it. That is all.

Apple tested this across all models. They call the dataset "GSM-NoOp" — as in, the added clause is a no-operation. It does nothing. It changes nothing.

The results are catastrophic. Phi-3-mini dropped over 65%. More than half of its "math ability" vanished from one irrelevant sentence. GPT-4o dropped from 94.9% to 63.1%. o1-mini dropped from 94.5% to 66.0%. o1-preview, OpenAI's most advanced reasoning model at the time, dropped from 92.7% to 77.4%.

Even giving the models 8 examples of the exact same question beforehand, with the correct solution shown each time, barely helped. The models still fell for the irrelevant clause. This means it's not a prompting problem. It's not a context problem. It's structural.

The Apple researchers also found that models convert words into math operations without understanding what those words mean. They see the word "discount" and multiply. They see a number near the word "smaller" and subtract. Regardless of whether it makes any sense.

The paper's exact words: "current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data." And: "LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts."

They also tested what happens when you increase the number of steps in a problem. Performance didn't just decrease. The rate of decrease accelerated. Adding two extra clauses to a problem dropped Gemma2-9b from 84.4% to 41.8%. Phi-3.5-mini from 87.6% to 44.8%. The more thinking required, the more the models collapse. A real reasoner would slow down and work through it. These models don't slow down. They pattern-match. And when the pattern becomes complex enough, they crash.

This paper was published at ICLR 2025, one of the most prestigious AI conferences in the world.

You are using AI to help you make financial decisions. To check legal documents. To solve problems at work. To help your children with homework. And Apple just proved that the AI is not thinking about any of it. It is pattern matching. And the moment something unexpected shows up in your question, it breaks. It does not tell you it broke. It just quietly gives you the wrong answer with full confidence.
Nav Toor tweet media
English
865
2.9K
11.5K
2.1M
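The kiwi example quoted in the thread reduces to simple addition. A short Python sketch of the computation a sound reasoner performs, alongside the pattern-matched mistake the thread attributes to the models (the "five smaller kiwis" clause is the no-op distractor):

```python
friday = 44
saturday = 58
sunday = 2 * friday            # "double the number he picked on Friday"
smaller_than_average = 5       # irrelevant: size does not change the count

correct = friday + saturday + sunday            # 44 + 58 + 88 = 190
model_answer = correct - smaller_than_average   # the pattern-matched error: 185

print(correct, model_answer)   # 190 185
```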
Criticaster retweeted
Jon Burke 🌍
Jon Burke 🌍@jonburkeUK·
A two-megawatt turbine pays back its build energy and emissions in 6-12 months, then runs clean for the next 25+ years. The fossil fuel industry doesn’t own the sun and the air. That’s why they’ve spent billions electing puppets to attack and delay the renewable revolution.
Jon Burke 🌍 tweet media
English
357
856
2K
36.6K
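The payback claim in the tweet above is straightforward to sanity-check. A rough arithmetic sketch; the capacity factor and embodied-energy range are assumed round numbers for illustration, not figures from the tweet:

```python
# Rough energy-payback check for a 2 MW wind turbine.
rated_mw = 2.0
capacity_factor = 0.35          # assumed typical onshore value
hours_per_year = 8760

annual_gwh = rated_mw * hours_per_year * capacity_factor / 1000   # ~6.1 GWh/yr

# If the turbine's embodied (build) energy is assumed to be roughly 3-6 GWh,
# the payback time lands in the 6-12 month range the tweet cites.
for embodied_gwh in (3.0, 6.0):
    months = 12 * embodied_gwh / annual_gwh
    print(f"embodied {embodied_gwh} GWh -> payback ~{months:.0f} months")
```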
lily
lily@vxylily·
What’s your first thought when you see this meal
lily tweet media
English
1.8K
68
1.5K
103.7K
Jyoti Mann
Jyoti Mann@jyoti_mann1·
Exclusive: Meta employees are “tokenmaxxing” and competing on an internal leaderboard called “Claudeonomics” for status as a token legend. Over a recent 30-day period, total usage on the dashboard topped 60 trillion tokens.
English
193
133
3.4K
1.9M
Criticaster
Criticaster@CriticasterUK·
@enjojoyy I try to be the best that I can be. If I get rewarded for it, so be it.
English
0
0
0
1
albina
albina@enjojoyy·
So how do you stay ambitious but also internally zen
English
318
31
803
113.8K