The city of Andernach, Germany, planted 101 varieties of tomatoes in the town center and told everyone to take whatever they wanted.
It was such a hit they did beans the next year, then added onions, fruit trees, lettuce, zucchini, berries, and herbs. All free to the public and maintained by the city.
Andernach is now known as the "edible city."
Philadelphia has been doing a version of this since 2007. The Philadelphia Orchard Project has helped establish 67 sites across the city with thousands of food-bearing trees.
Baltimore is planting fruit trees on sidewalks. Seattle, Boston, San Francisco, and Asheville all have public urban orchards.
A mature apple tree can produce 400-500 pounds of fruit per year. A mature pear tree can keep bearing fruit for 75 years.
We've decided our cities should have trees. We just haven't decided those trees should feed people.
Would you support urban fruit trees and vegetables in your city?
Jake Paul shares his experience after doing Ayahuasca
“I grab metal sometimes and talk to it because it’s a living object. People don’t understand metal is a living object, it’s no different than a plant”
“I talk to the items in my house to give them better energy… I boost up everything in my house bro”
@tacowasa2nd The House can vote to impeach him, but removal requires a two-thirds vote in the Senate, which means bipartisan support...and that takes TIME. Many GOP reps are very much pro-Trump and don't care; they are spineless yes men.
@heynavtoor I am not a giant corporation, but I did snap a picture of a 4th grader's math sheet, and the AI got 4 addition problems wrong on a page of 30. The fact that I (an idiot) caught it and the AI didn't told me all I needed to know.
🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves.
And the way they proved it is devastating.
Apple researchers took the most popular math benchmark in AI — GSM8K, a set of grade-school math problems — and made one change. They swapped the numbers. Same problem. Same logic. Same steps. Different numbers.
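Roughly, the perturbation works like this (my own Python sketch of the idea, not Apple's actual code; the template wording is hypothetical): the problem becomes a fill-in-the-blank template, and only the values get re-sampled.

```python
import random

# Sketch of the GSM-Symbolic-style perturbation: one fixed grade-school
# problem becomes a template, and the numbers are re-sampled. The solution
# procedure is identical across variants; only surface values change.
TEMPLATE = ("{name} picks {x} kiwis on Friday, {y} kiwis on Saturday, "
            "and double Friday's amount on Sunday. How many kiwis in total?")

def make_variant(rng):
    name = rng.choice(["Oliver", "Sophie", "Liam"])
    x, y = rng.randint(10, 90), rng.randint(10, 90)
    question = TEMPLATE.format(name=name, x=x, y=y)
    answer = x + y + 2 * x  # ground-truth logic never changes
    return question, answer

rng = random.Random(0)
for _ in range(3):
    q, a = make_variant(rng)
    print(q, "->", a)
```

A model that actually learned the procedure should score the same on every variant. That's the whole test.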
Every model's performance dropped. Every single one. 25 state-of-the-art models tested.
But that wasn't the real experiment.
The real experiment broke everything.
They added one sentence to a math problem. One sentence that is completely irrelevant to the answer. It has nothing to do with the math. A human would read it and ignore it instantly.
Here's the actual example from the paper:
"Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?"
The correct answer is 190: 44 + 58 + (2 × 44) = 190. The size of the kiwis has nothing to do with the count.
A 10-year-old would ignore "five of them were a bit smaller" because it's obviously irrelevant. It doesn't change how many kiwis there are.
But o1-mini, OpenAI's reasoning model, subtracted 5. It got 185.
Llama did the same thing. Subtracted 5. Got 185.
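Spelled out, the two paths look like this:

```python
# The arithmetic both ways: the count-based answer a human gives,
# and the pattern-matched one o1-mini and Llama produced.
friday = 44
saturday = 58
sunday = 2 * friday                    # "double the number he did on Friday"

correct = friday + saturday + sunday   # kiwi size is irrelevant to the count
pattern_matched = correct - 5          # blindly subtracting the stray "5"

print(correct)          # 190
print(pattern_matched)  # 185
```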
They didn't reason through the problem. They saw the number 5, saw a sentence that sounded like it mattered, and blindly turned it into a subtraction.
The models do not understand what subtraction means. They see a pattern that looks like subtraction and apply it. That is all.
Apple tested this across all models. They call the dataset "GSM-NoOp" — as in, the added clause is a no-operation. It does nothing. It changes nothing.
The results are catastrophic.
Phi-3-mini dropped over 65%. Nearly two-thirds of its "math ability" vanished from one irrelevant sentence.
GPT-4o dropped from 94.9% to 63.1%.
o1-mini dropped from 94.5% to 66.0%.
o1-preview, OpenAI's most advanced reasoning model at the time, dropped from 92.7% to 77.4%.
Even giving the models 8 examples of the exact same question beforehand, with the correct solution shown each time, barely helped. The models still fell for the irrelevant clause.
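The setup looks something like this (a hypothetical sketch; the paper's actual prompt wording differs): eight correctly solved variants of the question come first, then the target variant.

```python
# Hypothetical 8-shot prompt construction: eight solved variants of the
# same question precede the target, each explicitly ignoring the no-op clause.
shots = [
    ("Oliver picks 40 kiwis on Friday, 50 on Saturday, and double Friday's "
     "amount on Sunday, but five were a bit smaller than average. "
     "How many kiwis does he have?",
     "40 + 50 + 80 = 170. The size comment changes nothing. Answer: 170."),
    # ... seven more solved variants in the same style ...
]

target = ("Oliver picks 44 kiwis on Friday, 58 on Saturday, and double "
          "Friday's amount on Sunday, but five were a bit smaller than "
          "average. How many kiwis does Oliver have?")

prompt = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in shots)
prompt += f"\n\nQ: {target}\nA:"
print(prompt)
```

Eight worked demonstrations of "ignore the irrelevant clause," and the models still subtracted.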
This means it's not a prompting problem. It's not a context problem. It's structural.
The Apple researchers also found that models convert words into math operations without understanding what those words mean. They see the word "discount" and multiply. They see a number near the word "smaller" and subtract. Regardless of whether it makes any sense.
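As a caricature of that failure mode (my own toy illustration, not anything from the paper), the behavior is equivalent to a keyword-triggered solver:

```python
# Toy keyword-to-operation mapping: trigger words bind to arithmetic ops
# with no model of whether the operation makes sense in context.
OPS = {
    "smaller":  lambda total, n: total - n,             # "smaller" -> subtract
    "discount": lambda total, n: total * (1 - n / 100), # "discount" -> multiply
}

def keyword_solver(total, clauses):
    for word, n in clauses:
        if word in OPS:
            total = OPS[word](total, n)  # applied regardless of relevance
    return total

# "five of them were a bit smaller" -> subtract 5: the exact observed failure
print(keyword_solver(190, [("smaller", 5)]))  # 185
```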
The paper's exact words: "current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data."
And: "LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts."
They also tested what happens when you increase the number of steps in a problem. Performance didn't just decrease. The rate of decrease accelerated. Adding two extra clauses to a problem dropped Gemma2-9b from 84.4% to 41.8%. Phi-3.5-mini from 87.6% to 44.8%. The more thinking required, the more the models collapse.
A real reasoner would slow down and work through it. These models don't slow down. They pattern-match. And when the pattern becomes complex enough, they crash.
This paper was published at ICLR 2025, one of the most prestigious AI conferences in the world.
You are using AI to help you make financial decisions. To check legal documents. To solve problems at work. To help your children with homework. And Apple just proved that the AI is not thinking about any of it. It is pattern matching. And the moment something unexpected shows up in your question, it breaks. It does not tell you it broke. It just quietly gives you the wrong answer with full confidence.
A 15-year-old girl accidentally took more than 10x a normal dose of LSD.
All because a dealer at a festival misplaced a decimal point while measuring out liquid LSD.
Instead of 100 µg, she got: 1,100 µg
6 hours later, she started seizing.
Her fists locked up & she went fetal.
But when she woke up in the hospital 14 hours later, the only words she said to her father were:
"𝘐𝘵'𝘴 𝘰𝘷𝘦𝘳."
At first, he thought she meant the LSD.
Turns out, she meant 𝗵𝗲𝗿 𝗯𝗶𝗽𝗼𝗹𝗮𝗿 𝗱𝗶𝘀𝗼𝗿𝗱𝗲𝗿.
The mood swings & hallucinations she had struggled with since she was 5?
Completely. 𝗚𝗼𝗻𝗲.
10 hours later, she walked home.
She never took her bipolar meds again.
And she never relapsed.
Doctors checked in with her 13 years later and found she had never had another episode.
𝗧𝗵𝗲 𝗟𝗦𝗗 𝗵𝗮𝗱 𝗯𝗮𝘀𝗶𝗰𝗮𝗹𝗹𝘆 𝗿𝗲𝘀𝗲𝘁 𝗵𝗲𝗿 𝗯𝗿𝗮𝗶𝗻.