Ever Upward ☝🚀✨✝⚔️⚡🔛

7.8K posts


@TimBallFL

I am not my own. God Loves You. Husband, dad, grandpa, alumnus of UCF, #TheSpaceU. #GKCO ⚔️⚡🔛 #UCFTwitterMafia

Florida, USA · Joined February 2012
862 Following · 393 Followers
Ever Upward ☝🚀✨✝⚔️⚡🔛
Bought a new @craftsman lopper at @AceHardware today. Battery won't charge. Craftsman says, "Oh, your battery is out of warranty; the date code is from 5 years ago. Not our problem." First and last Craftsman tool I will own. @stanleytools you've destroyed a great American brand.
Oviedo, FL 🇺🇸 English
2
0
2
213
Catholic Frequency
Catholic Frequency@CatholicFQ·
If we want more vocations to the priesthood, it involves restricting altar servers to boys.
English
169
123
1.6K
203.4K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
Jeremy Wayne Tate
Jeremy Wayne Tate@JeremyTate41·
C.S. Lewis predicted AI in That Hideous Strength. N.I.C.E., that’s AI. Fans know what I’m talking about. AI’s problem is not going to be its inability to do anything, but that it ruins everything. It creates a cultural problem. It sterilizes everything and makes nothing desirable.

The real danger isn’t incompetence; it’s efficiency without soul. Lewis imagined a world where technique replaced wisdom, where power outran virtue, and where the language of progress masked the erosion of meaning.

AI can generate endless words, images, and music, but culture has never been about endless production. Culture is inheritance. It is formed slowly, through discipline, memory, imitation, and love of what is beautiful and true. When creation becomes instantaneous, the risk is not scarcity but saturation: a flood of content so frictionless that nothing feels earned, and therefore nothing feels worth longing for.

A world where you can make anything at any time can easily become a world where nothing feels necessary. That is the paradox Lewis hinted at: the more perfectly we manufacture expression, the more we risk hollowing out the human longing that gives culture life in the first place. The task ahead is not to reject AI outright, but to resist letting it become N.I.C.E.
Jeremy Wayne Tate tweet media
English
62
222
909
63.7K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
Hillbilly Catholic
Hillbilly Catholic@RosaryQuotes123·
Today is the feast of the 21 Coptic martyrs - 20 Egyptian and 1 Ghanaian - who were beheaded by ISIS for not recanting their faith. They were construction workers - ordinary fathers, brothers, and sons - with an extraordinary faith. Jesus, give me faith like theirs.
English
297
4.6K
23.8K
594.4K
The Blessed Salt 🧂
The Blessed Salt 🧂@theblessedsalt·
I’m currently unemployed with 8 kids. AMA.
English
20
1
85
5.4K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
Matt Swaim
Matt Swaim@mattswaim·
Give me solidarity and subsidiarity over the singularity
English
1
1
15
1.1K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
God of Prompt
God of Prompt@godofprompt·
🚨 Holy shit… Stanford just published the most uncomfortable paper on LLM reasoning I’ve read in a long time.

This isn’t a flashy new model or a leaderboard win. It’s a systematic teardown of how and why large language models keep failing at reasoning even when benchmarks say they’re doing great.

The paper does one very smart thing upfront: it introduces a clean taxonomy instead of more anecdotes. The authors split reasoning into non-embodied and embodied. Non-embodied reasoning is what most benchmarks test, and it’s further divided into informal reasoning (intuition, social judgment, commonsense heuristics) and formal reasoning (logic, math, code, symbolic manipulation). Embodied reasoning is where models must reason about the physical world, space, causality, and action under real constraints.

Across all three, the same failure patterns keep showing up.

> First are fundamental failures baked into current architectures. Models generate answers that look coherent but collapse under light logical pressure. They shortcut, pattern-match, or hallucinate steps instead of executing a consistent reasoning process.

> Second are application-specific failures. A model that looks strong on math benchmarks can quietly fall apart in scientific reasoning, planning, or multi-step decision making. Performance does not transfer nearly as well as leaderboards imply.

> Third are robustness failures. Tiny changes in wording, ordering, or context can flip an answer entirely. The reasoning wasn’t stable to begin with; it just happened to work for that phrasing.

One of the most disturbing findings is how often models produce unfaithful reasoning. They give the correct final answer while providing explanations that are logically wrong, incomplete, or fabricated. This is worse than being wrong, because it trains users to trust explanations that don’t correspond to the actual decision process.

Embodied reasoning is where things really fall apart. LLMs systematically fail at physical commonsense, spatial reasoning, and basic physics because they have no grounded experience. Even in text-only settings, as soon as a task implicitly depends on real-world dynamics, failures become predictable and repeatable.

The authors don’t just criticize. They outline mitigation paths: inference-time scaling, analogical memory, external verification, and evaluations that deliberately inject known failure cases instead of optimizing for leaderboard performance. But they’re very clear that none of these are silver bullets yet.

The takeaway isn’t that LLMs can’t reason. It’s more uncomfortable than that. LLMs reason just enough to sound convincing, but not enough to be reliable. And unless we start measuring how models fail, not just how often they succeed, we’ll keep deploying systems that pass benchmarks, fail silently in production, and explain themselves with total confidence while doing the wrong thing.

That’s the real warning shot in this paper.

Paper: Large Language Model Reasoning Failures
God of Prompt tweet media
English
270
1.4K
7K
964.1K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
Emily Zanotti 🦝
Emily Zanotti 🦝@emzanotti·
If the halftime show was Weird Al with the Muppets, I bet the Outrage Machine would find something wrong with it, because if they aren’t making you angry, they can’t afford the payments on their Arizona mansions
MrTate@MrTate

@emzanotti I would settle for somebody who can play and sing...

English
16
8
112
5.7K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
Eli Afriat 🇮🇱
Eli Afriat 🇮🇱@EliAfriatISR·
Reminder: FREE IRAN.
Eli Afriat 🇮🇱 tweet media
English
133
1.1K
4.3K
35.3K
Eyal Yakoby
Eyal Yakoby@EYakoby·
Nigerian Islamists are mass reporting my account for posting about the 170+ Christians slaughtered for refusing to convert to Islam. Please comment on this post to combat their attack.
Eyal Yakoby tweet media
English
4K
7.4K
25.3K
272.8K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
American Solidarity Party 🧡
American Solidarity Party 🧡@AmSolidarity·
It is obviously true that the standards of decency in mass entertainment have declined over the years and not at all obvious how we’re going to solve this with Kid Rock.
Franklin Graham@Franklin_Graham

Like most Americans, I’ve enjoyed watching the Super Bowl. But the halftime shows began pushing moral boundaries and have become more and more sexualized. This year, they’re having Bad Bunny perform. The @NFL leadership is pushing this sexualized agenda. Thank you, @TPUSA and @MrsErikaKirk for providing an alternative—“The All-American Halftime Show” with the agenda of celebrating family, faith, and freedom! tpusa.com/live/tpusa-s-a…

English
4
23
159
8.1K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
Texas Tech Fan🌵
Texas Tech Fan🌵@REDMFRAIDER11·
The Big 12 basketball schedule is what the SEC thinks their football schedule is
English
11
18
445
13.6K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
NASA Administrator Jared Isaacman
On Feb. 1, 2003, Space Shuttle Columbia and its seven crew members were lost during re-entry. Their work spanned multiple disciplines, from physics to biology, advancing knowledge in ways that continue to resonate today. Columbia’s story still shapes human spaceflight, guiding how teams prepare, collaborate, and carry out missions. It serves as a reminder that vigilance is essential, and no mission is complete until every crew member returns home safely.

Forever remembered:
Rick D. Husband
William C. McCool
Michael P. Anderson
Ilan Ramon
Kalpana Chawla
David M. Brown
Laurel B. Clark
NASA Administrator Jared Isaacman tweet media
English
121
609
3.9K
110.4K
Ever Upward ☝🚀✨✝⚔️⚡🔛 retweeted
Mike
Mike@MDKnight2016·
Teams to score 85+ on Texas Tech:
#1 Purdue
Northern Colorado
UCF

Teams to hold Texas Tech to 80 or fewer points:
#14 Illinois
Milwaukee
#1 Purdue
Wyoming
#7 Houston
Colorado
UCF

Teams to do both:
#1 Purdue
UCF
English
2
15
214
5K