Justin Loew

47.9K posts

@jferWI

Life Extensionist, Meteorologist, Scientist, Outdoor Enthusiast

Wisconsin · Joined August 2009
223 Following · 1.2K Followers

Pinned Tweet
Justin Loew @jferWI
Here is a great interview about a new drug that could potentially cure many aspects of heart disease: longecity.org/podcast/?name=…
0 replies · 0 reposts · 3 likes · 41 views
Justin Loew retweeted
StormTrack9 @Stormtrack9
A light mixture of freezing rain and snow will greet you early Wednesday morning. Roads might be a little slippery early, but it should melt quickly. Accumulations should be less than an inch.
0 replies · 1 repost · 2 likes · 17 views
Justin Loew retweeted
Alex Prompter @alex_prompter
🚨 Holy shit… Deloitte charged $1.6 million for a healthcare report filled with AI-hallucinated citations. This is the second time in two months they’ve been caught. First an Australian government agency. Now a Canadian province’s Department of Health.

And their response? They “stand by the conclusions.” Let me translate that for you: “The AI made up the sources, but trust us, the advice is still good.”

That’s a $1.6 million report. For a healthcare system. With fake citations that nobody at Deloitte bothered to verify before submitting. Not an intern’s draft. The final deliverable.

The Australian incident was supposed to be a wake-up call. Deloitte even partially refunded that government for the errors. You’d think after publicly embarrassing themselves once, someone would have implemented a basic fact-checking step before hitting send on the next million-dollar engagement. They didn’t.

And here’s what makes this story bigger than Deloitte. Every major consulting firm is racing to integrate AI into their workflows. McKinsey, BCG, Bain, Accenture. They’re all doing it. Because AI lets them produce reports faster with fewer junior analysts, which means higher margins on the same $500/hour billing rates.

But the entire consulting business model is built on one thing: trust. You’re paying for credibility. You’re paying so that when you hand the report to your board or your minister, nobody questions the sources. The moment that trust breaks, the math changes completely. Why pay $1.6 million for AI-generated analysis with fake citations when you could run the same prompts yourself for $20/month and at least know to check the sources?

That’s the real disruption nobody’s talking about. AI isn’t going to replace consulting firms by being smarter than them. It’s going to replace them by revealing that a huge percentage of consulting work was always just expensive research and formatting. And now the clients have access to the same tools.

Deloitte’s problem isn’t that they used AI. It’s that they used AI the way most people use AI: paste in a request, take the output at face value, ship it. No verification layer. No human review of citations. No system.

The firms that survive this era won’t be the ones who use AI the fastest. They’ll be the ones who build actual verification systems around AI output. The ones who treat AI as a first draft, not a final product.

$1.6 million. Fake citations. Twice in two months. And they stand by the conclusions. The consulting industry’s biggest threat isn’t AI. It’s clients realizing they don’t need to pay someone else to hallucinate.
38 replies · 395 reposts · 815 likes · 25K views
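The thread's prescription is a verification layer that checks AI-generated citations before a deliverable ships. As a rough illustration of what the smallest version of that could look like (my sketch, not anything Deloitte or the thread describes; the `requests` usage and the sample citation list are assumptions), here is a Python pass that flags cited links that do not even resolve:

```python
# Minimal citation-triage sketch (illustrative only, not from the thread).
# It only catches dead links; confirming a source actually supports the claim
# attached to it still needs a human reader, which is the thread's point.
import requests

def resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL (or DOI link) answers with a non-error status."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code == 405:  # some servers reject HEAD, so fall back to GET
            resp = requests.get(url, allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

def needs_human_review(citations: list[str]) -> list[str]:
    """Return the citations that failed to resolve and should be checked by hand."""
    return [c for c in citations if not resolves(c)]

if __name__ == "__main__":
    # Hypothetical citation list pulled from a draft report.
    draft_citations = [
        "https://doi.org/10.0000/obviously-made-up-doi",  # likely hallucinated
        "https://www.who.int/publications",               # real landing page
    ]
    for bad in needs_human_review(draft_citations):
        print("flag for review:", bad)
```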
Justin Loew retweeted
Matt Walsh @MattWalshBlog
The dorks who found reasons to be cynical and critical about this mission look dumber by the day. This whole thing has been so cool. The crew has been sharing the Gospel the entire time. And now this moment. If you can't be inspired by this, you're dead inside. An empty vessel.
Jenny Hautmann @JennyHPhoto

The Artemis II crew named a lunar crater after Commander Reid Wiseman's late wife, Carroll. What a beautiful and touching moment. I'm not crying, you're crying 🤧

869 replies · 1.5K reposts · 19.7K likes · 511.5K views
Justin Loew retweeted
Andy Ngo @MrAndyNgo
🚨New Ngo report: Remember the ex-Wisconsin judge who abused her authority to help a violent Mexican illegal migrant suspect escape through a back door? She pleaded with the court to overturn her conviction & grant a retrial. The court's response: DENIED. thepostmillennial.com/andy-ngo-repor…
194 replies · 2.1K reposts · 13.4K likes · 107.9K views
Justin Loew retweeted
Tim Pool @Timcast
The problem of homelessness in the United States is not an issue of down-on-their-luck individuals who just need a hand. Those people almost ALWAYS get back on their feet quickly. Most homelessness is by choice, either from drug use and refusal to stop or because they genuinely want to be homeless. I've met couples living in tents in LA who will tell you it's preferable to a 9-to-5 and responsibilities. Liberals push lies around homelessness out of ignorance and corruption. There is money to be made lying about the problem.
295 replies · 336 reposts · 4K likes · 111.5K views
Justin Loew retweeted
Wall Street Apes @WallStreetApes
WOW 🚨 Delta Dental is considered a nonprofit, but the CEO's pay skyrocketed from $4.5 million per year to a total of almost $48 million over 4 years. That's $1 million per month in pay for one employee at a nonprofit.

“Delta Dental is considered a non-profit, and as such you can see their taxes online. So I got curious. In their 2014 filing, the IRS asks for the organization's top accomplishments. Delta Dental reported that over 95% of claims (electronic, online, and paper) were processed without any manual intervention. That means when your care is denied, there is less than a 1 in 10 chance a human reviewed it. That same year, Delta dished out up to a 30% pay cut on the care that doctors deliver, and for a decade, they did not raise what they pay for your dental care by a single penny. Meanwhile, their CEO's salary skyrocketed. She went from $4.5 million to $15 million a year. From 2014 to 2018, she made off with almost $48 million before leaving her position. That's a million dollars a month. Must be nice. And she's not even a clinician. She's a CPA. You don't have to be an accountant to do the math. Doctor pay cuts. Stagnant reimbursements. They were never about saving patients money on premiums.”
1.5K replies · 14.6K reposts · 36.8K likes · 2M views
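The "$1 million a month" line is simple arithmetic on the figures the clip quotes: roughly $48 million collected over the 2014 to 2018 period, which the post treats as four years. A quick check, taking the post's own numbers at face value (they are the post's claims, not verified figures):

```python
# Back-of-the-envelope check of the pay figures quoted in the post.
# Assumes the post's own numbers: ~$48M total over roughly 4 years (2014-2018).
total_pay_usd = 48_000_000
months = 4 * 12
print(total_pay_usd / months)   # prints 1000000.0, i.e. about $1 million per month
```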
Justin Loew retweeted
James O'Keefe @JamesOKeefeIII
We filed an emergency appeal. Tomorrow we will tell all how we are fighting back. Tune in tomorrow at 1 PM EST. Watch Here: YouTube: youtube.com/@okeefemedia X: x.com/JamesOKeefeIII
49 replies · 757 reposts · 3.3K likes · 52K views
Justin Loew retweeted
Mary Talley Bowden MD @MaryBowdenMD
Our lawsuit against the FSMB and 6 state medical boards is finally in motion! One board is refusing to respond, another one blew off the judge but just came around, and the other four are begrudgingly complying.
31 replies · 315 reposts · 1.4K likes · 17.7K views
Justin Loew retweeted
The Independent with Scott Atlas
Atlas: "We should absolutely pass laws that forbid lockdowns, that forbid gov't from imposing shutdowns on businesses, on churches—we have freedom of assembly in this country... [our liberties have] been completely lost by this expansion of government into our personal lives."
18 replies · 173 reposts · 698 likes · 5.8K views
Justin Loew retweeted
Dr. Naomi Wolf. 8 NYT Bestsellers. DPhil, Poetry.
I tried to warn the world about this destruction of women’s fertility starting in 2022. Destroyed my life in many ways. I spent four miserable years screaming about this and staring into this abyss. Thank you Nicolas Hulscher for adding this validation from new research. The tragedy of our time.
Nicolas Hulscher, MPH @NicHulscher

We now have clear evidence that the COVID-19 mRNA shots have crippled the reproductive capacity of humanity. In animal models, they destroy over 60% of women’s non-renewable egg supply. In human data (n=1.3M), vaccinated women have ~33% fewer successful pregnancies. The latest study found “vaccine” mRNA and spike protein invade the human placenta and fetal cells. 37% of placentas from vaccinated mothers contained spike protein.

64 replies · 937 reposts · 1.9K likes · 31.7K views
Justin Loew retweeted
Alex Berenson @AlexBerenson
I’m starting to feel like the limitations and the strengths of AI are two sides of the same coin; AI is a great mimic and pattern recognizer, so it has no problem validating the person using it (and coding, which is a highly structured task). But the unexpected breaks it easily.
Nav Toor @heynavtoor

🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves. And the way they proved it is devastating.

Apple researchers took the most popular math benchmark in AI — GSM8K, a set of grade-school math problems — and made one change. They swapped the numbers. Same problem. Same logic. Same steps. Different numbers. Every model's performance dropped. Every single one. 25 state-of-the-art models tested.

But that wasn't the real experiment. The real experiment broke everything. They added one sentence to a math problem. One sentence that is completely irrelevant to the answer. It has nothing to do with the math. A human would read it and ignore it instantly.

Here's the actual example from the paper: "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?"

The correct answer is 190. The size of the kiwis has nothing to do with the count. A 10-year-old would ignore "five of them were a bit smaller" because it's obviously irrelevant. It doesn't change how many kiwis there are.

But o1-mini, OpenAI's reasoning model, subtracted 5. It got 185. Llama did the same thing. Subtracted 5. Got 185. They didn't reason through the problem. They saw the number 5, saw a sentence that sounded like it mattered, and blindly turned it into a subtraction. The models do not understand what subtraction means. They see a pattern that looks like subtraction and apply it. That is all.

Apple tested this across all models. They call the dataset "GSM-NoOp" — as in, the added clause is a no-operation. It does nothing. It changes nothing. The results are catastrophic. Phi-3-mini dropped over 65%. More than half of its "math ability" vanished from one irrelevant sentence. GPT-4o dropped from 94.9% to 63.1%. o1-mini dropped from 94.5% to 66.0%. o1-preview, OpenAI's most advanced reasoning model at the time, dropped from 92.7% to 77.4%.

Even giving the models 8 examples of the exact same question beforehand, with the correct solution shown each time, barely helped. The models still fell for the irrelevant clause. This means it's not a prompting problem. It's not a context problem. It's structural.

The Apple researchers also found that models convert words into math operations without understanding what those words mean. They see the word "discount" and multiply. They see a number near the word "smaller" and subtract. Regardless of whether it makes any sense.

The paper's exact words: "current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data." And: "LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts."

They also tested what happens when you increase the number of steps in a problem. Performance didn't just decrease. The rate of decrease accelerated. Adding two extra clauses to a problem dropped Gemma2-9b from 84.4% to 41.8%. Phi-3.5-mini from 87.6% to 44.8%. The more thinking required, the more the models collapse. A real reasoner would slow down and work through it. These models don't slow down. They pattern-match. And when the pattern becomes complex enough, they crash.

This paper was published at ICLR 2025, one of the most prestigious AI conferences in the world.

You are using AI to help you make financial decisions. To check legal documents. To solve problems at work. To help your children with homework. And Apple just proved that the AI is not thinking about any of it. It is pattern matching. And the moment something unexpected shows up in your question, it breaks. It does not tell you it broke. It just quietly gives you the wrong answer with full confidence.

30 replies · 29 reposts · 182 likes · 64.1K views
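The kiwi example in the quoted thread is easy to check by hand, and doing so shows exactly which step the paper says the models botch. This is my own worked illustration of the GSM-NoOp idea (an appended clause that changes nothing), not code from the Apple paper:

```python
# Worked version of the GSM-NoOp kiwi problem quoted above (illustration only).
friday = 44
saturday = 58
sunday = 2 * friday              # "double the number he did on Friday"
correct = friday + saturday + sunday
print(correct)                   # 190: kiwi size never affects the count

# The failure mode the paper describes: the irrelevant clause
# "five of them were a bit smaller" gets pattern-matched into a subtraction.
wrong = correct - 5
print(wrong)                     # 185: the answer o1-mini and Llama reportedly gave
```

The construction is the whole trick: take a GSM8K problem, append a numerically plausible but operationally irrelevant clause, and measure how far accuracy falls.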
Justin Loew retweeted
Brett Pike @ClassicLearner
Colorado just announced 10,000 new students have been pulled from public school to homeschool. School enrollment is down across the country. Schools are launching marketing campaigns to try to get people to come back. Homeschooling is the fastest growing freedom movement in America.
41 replies · 403 reposts · 1.9K likes · 20.1K views
Justin Loew retweeted
Camus @newstart_2024
The guy just landed a spacecraft on a comet — one of the most impressive scientific achievements in years. His reward? A public struggle session because his bowling shirt had scantily clad women on it. Helen Andrews points out the quiet cost of institutional feminization: HR departments now hunt down any maverick personality and stamp it out. We’re losing innovators we’ll never even know about, all because someone focused on the shirt instead of the comet. This is how wokeness actually works. Have you seen real excellence get punished for something trivial like this?
836 replies · 6.2K reposts · 41K likes · 11.6M views
Justin Loew retweeted
Nav Toor @heynavtoor
🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves. [Full text of this tweet is quoted above in the Alex Berenson retweet.]
749 replies · 2.5K reposts · 9.5K likes · 1.6M views
Justin Loew retweeted
Bannon’s WarRoom @Bannons_WarRoom
TERRY SCHILLING: I stumbled onto this plan to build a massive AI data center in Fort Meade, Florida, a town of just 5,300 people. This would be a 4.4 million sq. ft. facility, consuming up to 1 gigawatt of power. That's on par with what a nuclear plant produces! It would drive up energy and water costs and hurt property values! The city council is voting to rezone farmland to allow it. This has to be stopped! @Schilling1776
150 replies · 1.4K reposts · 2.5K likes · 68.6K views
Justin Loew retweeted
Alex Berenson @AlexBerenson
Sorry, @pmarca, but this just sounds like a high-end version of AI sycophancy/psychosis, a bunch of rich semi-autistic guys who think they’re smarter than they are convincing themselves that ChatGPT is the only one who truly gets them. It makes the AI bubble MORE likely, not less
8 replies · 8 reposts · 39 likes · 8.8K views
Justin Loew retweeted
Samantha Smith @SamanthaTaghoy
“Water, soil, and oxygen should not be infinitely accessible. They are assets that should be included in global economic balance sheets.” This is not satire. The World Economic Forum wants to monetise breathing.
6.3K replies · 25.4K reposts · 78.7K likes · 7.8M views