GroundStateZero 🇺🇸
6.3K posts

GroundStateZero 🇺🇸
@groundstatezero
I've seen enough to know I've seen too much. MAGA=America First. 1A/2A. Jesus follower with sword Vs ear proclivities. My comments are bangers.
United States · Joined October 2025
585 Following · 147 Followers
Pinned Tweet
GroundStateZero 🇺🇸 retweeted

I like how you didn't have any leftover pieces.
TB1™️ 🇺🇲@TB1Kinobe
To be perfectly honest I would not sleep tonight unless everyone saw this tile job. End of caption.


GroundStateZero 🇺🇸 retweeted

With respect, that is not actually what follows.
Wavefunction “collapse” (whether Copenhagen, decoherence, or objective reduction) is about how a superposition resolves into a definite outcome. It’s a selection rule, not a generator of meaning or experience.
Penrose’s idea (gravitational OR with τ ≈ ħ / E_G) is a proposal about the instability of superposed geometries, but there’s still no empirical evidence that this process produces consciousness, let alone discrete “qualia.”
Also, “choosing the next reality” is doing a lot of work here.
Standard quantum mechanics doesn’t require a chooser: just unitary evolution + decoherence (or an interpretation layered on top).
So even if OR were correct, it would explain when a state becomes definite, not why a system assigns value or generates experience from it.
That gap is exactly where the real problem still sits.
Stuart Hameroff@StuartHameroff
Then you’re talking about collapse of the wavefunction. The best explanation for superposition (being in two places at once) is Penrose relating matter to spacetime curvature at tiny scales, and superposition as separated curvatures. (Were they to continue, Everett many worlds.) But separated curvatures are unstable and reduce/collapse in time t = ħ/E, choosing the next reality and emitting a quale of experience
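The collapse timescale both tweets invoke, τ ≈ ħ / E_G, can be sanity-checked numerically. A minimal sketch follows; the E_G value used is an arbitrary illustrative assumption, not a number from either post.

```python
# Order-of-magnitude check of Penrose's objective-reduction timescale
# tau ≈ ħ / E_G, where E_G is the gravitational self-energy of the
# difference between the superposed mass distributions.

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s (CODATA)

def collapse_time(e_g: float) -> float:
    """Return the Penrose OR timescale tau = ħ / E_G in seconds,
    given a gravitational self-energy E_G in joules."""
    return HBAR / e_g

# Assumed E_G ~ 1e-34 J (illustrative only): the superposition would
# persist for roughly one second before reducing.
tau = collapse_time(1e-34)
print(f"tau ≈ {tau:.2f} s")
```

Note the inverse relationship: larger superposed masses mean larger E_G and hence faster collapse, which is why the proposal predicts macroscopic superpositions reduce almost instantly while microscopic ones persist.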

@otokyo__ My brother met all 4 of his wives at church. He said that's where you can meet sinners.

Prayers are always answered. Not always yes. Not always now. Not always exactly as requested. Pray anyway.
Tokyo@otokyo__
Do you think her prayers will be answered??
GroundStateZero 🇺🇸 retweeted

Boom!
I just opened what may just be the largest Apple HyperCard Stack collection known: over 400,000 HyperCard Stacks for AI training!
It was donated to me by a former Apple employee who saved every Stack they found, attending Apple User Groups across the country for over a decade and a half.
Much of this content has never found its way onto the Internet, and some of it holds mountains of data. Much of it is by folks who just wanted to make a place for unique data and ideas. It is a treasure trove!
I now have an agent pipeline that will run the 5-disc DVD player until I load all the discs (over 100) for AI training.
I suspect I will donate these Stacks to online archives at some point, with permission from the estate.
I can say I am blown away by this data set and know it will impart wisdom to YOUR AI.
THANK YOU FOR FINDING ME HERE ON X AND TRUSTING ME WITH THIS LIFE CURATION AND COMMITMENT!
More soon!

@MS2PZ Strawberry Jelly. Mom only ever bought grape.

@MbarkCherguia These don't exist. Literally the only thing I ever order and they never have any.

GroundStateZero 🇺🇸 retweeted

WOW 🚨 Delta Dental is considered a nonprofit, but the CEO's pay skyrocketed from $4.5 million to $15 million per year, nearly $48 million total over 4 years
That’s $1 million per month for one employee at a nonprofit
“Delta Dental is considered a non-profit, and as such you can view their taxes online. So I got curious. In their 2014 filing, the IRS asks for the organization's top accomplishments.
Delta Dental reported that over 95% of claims (electronic, online, and paper) were processed without any manual intervention. That means when your care is denied, there is less than a 1-in-10 chance a human reviewed it.
That same year, Delta dished out up to a 30% pay cut on the care that doctors deliver, and for a decade they did not raise what they pay for your dental care by a single penny.
Meanwhile, their CEO's salary skyrocketed. She went from $4.5 million to $15 million a year. From 2014 to 2018, she made off with almost $48 million before leaving her position. That's a million dollars a month. Must be nice. And she's not even a clinician. She's a CPA.
You don't have to be an accountant to do the math. Doctor pay cuts. Stagnant reimbursements. They were never about saving patients money on premiums.”
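The back-of-envelope math in the post checks out. A quick sketch, using only the figures quoted above:

```python
# Sanity check of the compensation figures quoted in the post.
start_pay = 4.5e6   # reported annual salary in 2014, dollars
end_pay = 15e6      # reported annual salary by 2018, dollars
total = 48e6        # reported total compensation over the period
years = 4           # 2014-2018 span cited in the post

per_month = total / (years * 12)
print(f"${per_month:,.0f} per month")            # $48M over 48 months
print(f"{end_pay / start_pay:.1f}x annual pay increase")
```

$48 million spread over 48 months is exactly $1 million a month, matching the post's claim, and the annual salary roughly tripled over the same period.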
GroundStateZero 🇺🇸 retweeted

🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves.
And the way they proved it is devastating.
Apple researchers took the most popular math benchmark in AI — GSM8K, a set of grade-school math problems — and made one change. They swapped the numbers. Same problem. Same logic. Same steps. Different numbers.
Every model's performance dropped. Every single one. 25 state-of-the-art models tested.
But that wasn't the real experiment.
The real experiment broke everything.
They added one sentence to a math problem. One sentence that is completely irrelevant to the answer. It has nothing to do with the math. A human would read it and ignore it instantly.
Here's the actual example from the paper:
"Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?"
The correct answer is 190. The size of the kiwis has nothing to do with the count.
A 10-year-old would ignore "five of them were a bit smaller" because it's obviously irrelevant. It doesn't change how many kiwis there are.
But o1-mini, OpenAI's reasoning model, subtracted 5. It got 185.
Llama did the same thing. Subtracted 5. Got 185.
They didn't reason through the problem. They saw the number 5, saw a sentence that sounded like it mattered, and blindly turned it into a subtraction.
The models do not understand what subtraction means. They see a pattern that looks like subtraction and apply it. That is all.
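The kiwi problem is simple enough to work explicitly. A few lines of Python reproduce both the correct count and the failure mode described above:

```python
# The GSM-NoOp kiwi problem, worked step by step.
friday = 44
saturday = 58
sunday = 2 * friday          # "double the number he did on Friday"

correct = friday + saturday + sunday
print(correct)               # 190 -- kiwi size never enters the count

# The failure mode: treating the irrelevant "five of them were a bit
# smaller" clause as a subtraction, as o1-mini and Llama did.
wrong = correct - 5
print(wrong)                 # 185
```

The distractor clause contributes no operation at all, which is exactly what the "NoOp" in the dataset name refers to.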
Apple tested this across all models. They call the dataset "GSM-NoOp" — as in, the added clause is a no-operation. It does nothing. It changes nothing.
The results are catastrophic.
Phi-3-mini dropped over 65%. More than half of its "math ability" vanished from one irrelevant sentence.
GPT-4o dropped from 94.9% to 63.1%.
o1-mini dropped from 94.5% to 66.0%.
o1-preview, OpenAI's most advanced reasoning model at the time, dropped from 92.7% to 77.4%.
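The drops quoted above can be restated as absolute and relative losses. A small sketch, using only the three before/after pairs listed in the thread:

```python
# Accuracy before and after the GSM-NoOp distractor clause,
# as reported in the thread (percent correct).
results = {
    "GPT-4o":     (94.9, 63.1),
    "o1-mini":    (94.5, 66.0),
    "o1-preview": (92.7, 77.4),
}

for model, (before, after) in results.items():
    drop = before - after
    relative = drop / before
    print(f"{model}: -{drop:.1f} points ({relative:.0%} relative)")
```

Even the best-performing model here loses roughly a sixth of its accuracy to a single irrelevant sentence, and the weaker models lose about a third.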
Even giving the models 8 examples of the exact same question beforehand, with the correct solution shown each time, barely helped. The models still fell for the irrelevant clause.
This means it's not a prompting problem. It's not a context problem. It's structural.
The Apple researchers also found that models convert words into math operations without understanding what those words mean. They see the word "discount" and multiply. They see a number near the word "smaller" and subtract. Regardless of whether it makes any sense.
The paper's exact words: "current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data."
And: "LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts."
They also tested what happens when you increase the number of steps in a problem. Performance didn't just decrease. The rate of decrease accelerated. Adding two extra clauses to a problem dropped Gemma2-9b from 84.4% to 41.8%. Phi-3.5-mini from 87.6% to 44.8%. The more thinking required, the more the models collapse.
A real reasoner would slow down and work through it. These models don't slow down. They pattern-match. And when the pattern becomes complex enough, they crash.
This paper was published at ICLR 2025, one of the most prestigious AI conferences in the world.
You are using AI to help you make financial decisions. To check legal documents. To solve problems at work. To help your children with homework. And Apple just proved that the AI is not thinking about any of it. It is pattern matching. And the moment something unexpected shows up in your question, it breaks. It does not tell you it broke. It just quietly gives you the wrong answer with full confidence.
