Brent Gebhardt

347 posts


@sbgebhardt

Husband, father, engineer, Catholic.

Joined May 2022
121 Following · 82 Followers
HoCStaffer
HoCStaffer@HoCStaffer·
The Battle of Vimy Ridge & the monument which memorializes it are each so impressive that even arguably the most evil man in history didn't destroy it; he toured the site instead. Happy Vimy Ridge Day, everyone.
English
5
13
119
3.2K
Matt Walsh
Matt Walsh@MattWalshBlog·
One of the problems is that when you wait for years to have kids, trying to accumulate enough wealth so that you can raise children “comfortably” or whatever, your marriage becomes frail and brittle. You haven’t allowed yourselves to be tested. You haven’t met any resistance. The union atrophies. You get bored. Then one serious challenge comes along and you fall apart. You weren’t ready for it.

My wife and I have been together 15 years, started out broke. Six kids and a decade and a half later, we’ve been tried and tested a million times. We’ve been through the fire together. Problems come along and we deal with them. It’s nothing we haven’t seen before. Now we have the confidence of people who’ve been through the wringer. We have a sense of shared experience and shared achievement. We built this life together. We both belong here.

That’s the great benefit of getting married and starting a family young. Get in on the ground floor with each other. Start at the bottom hand in hand.
English
530
608
9.7K
367.8K
Brent Gebhardt retweeted
rightnowhq
rightnowhq@RightNowHQ·
TikTok user "Frances k", who has a significant following online, put out a video on social media filled with so many errors about our undercover investigation and abortion in Canada in general, we had to respond.
English
0
5
10
1.4K
Brent Gebhardt
Brent Gebhardt@sbgebhardt·
@Woodguy55 @mike_wintrs 35 years ago, I tried to use that to score two points in Scattergories. The people I was playing with hadn’t heard of that fine show and did not allow it. I mean, what were they doing on Saturday morning?
English
0
0
1
23
Mike Winters
Mike Winters@mike_wintrs·
An offside ruled in our favour? What is this game
GIF
English
2
1
7
942
Crowder CEO
Crowder CEO@GmorganJr·
My question is why? Where in Scripture are we getting that from?
Lauren Chen@TheLaurenChen

@GmorganJr That's still against Catholic teachings; only natural family planning is allowed to avoid pregnancy.

English
185
9
270
93.6K
Brent Gebhardt retweeted
C2C Journal
C2C Journal@c2cjournal·
The Emptiness Inside: Why Large Language Models Can’t Think – and Never Will

By Gleb Lisikh

Early attempts at artificial intelligence (AI) were ridiculed for giving answers that were confident, wrong and often surreal – the intellectual equivalent of asking a drunken parrot to explain Kant. But modern AIs based on large language models (LLMs) are so polished, articulate and eerily competent at generating answers that many people assume they can know and, even better, can independently reason their way to knowing. This confidence is misplaced.

LLMs like ChatGPT or Grok don’t think. They are supercharged autocomplete engines. You type a prompt; they predict the next word, then the next, based only on patterns in the trillions of words they were trained on. No rules, no logic – just statistical guessing dressed up in conversation. As a result, LLMs have no idea whether a sentence is true or false or even sane; they only “know” whether it sounds like sentences they’ve seen before. That’s why they often confidently make things up: court cases, historical events, or physics explanations that are pure fiction. The AI world calls such outputs “hallucinations”.

But because the LLM’s speech is fluent, users instinctively project self-understanding onto the model, triggered by the same human “trust circuits” we use for spotting intelligence. This is fallacious reasoning, a bit like hearing someone speak perfect French and assuming they must also be an excellent judge of wine, fashion and philosophy. We confuse style for substance and we anthropomorphize the speaker. That in turn tempts us into two mythical narratives:

Myth 1: “If we just scale up the models and give them more ‘juice’ then true reasoning will eventually emerge.” Bigger LLMs do get smoother and more impressive. But their core trick – word prediction – never changes. It’s still mimicry, not understanding. One assumes intelligence will magically emerge from quantity, as though making tires bigger and spinning them faster will eventually make a car fly. But the obstacle is architectural, not scalar: you can make the mimicry more convincing (make a car jump off a ramp), but you don’t convert a pattern predictor into a truth-seeker by scaling it up. You merely get better camouflage and, studies have shown, even less fidelity to fact.

Myth 2: “Who cares how AI does it? If it yields truth, that’s all that matters. The ultimate arbiter of truth is reality – so cope!” This one is especially dangerous as it stomps on epistemology wearing concrete boots. It effectively claims that the seeming reliability of an LLM’s mundane knowledge should extend to trusting the opaque methods through which it is obtained. But truth has rules. A conclusion only becomes epistemically trustworthy when reached through either: 1) deductive reasoning (conclusions that must be true if the premises are true); or 2) empirical verification (observations of the real world that confirm or disconfirm claims). LLMs do neither. They cannot deduce because their architecture doesn’t implement logical inference: they don’t manipulate premises and reach conclusions, and they are clueless about causality. They also cannot empirically verify anything because they have no access to reality: they can’t check the weather or observe social interactions.

Attempting to overcome these structural obstacles, AI developers bolt external tools like calculators, databases and retrieval systems onto an LLM system. Such ostensible truth-seeking mechanisms improve outputs but do not fix the underlying architecture.

The “flying car” salesmen, peddling accomplishments like IQ test scores, claim that today’s LLMs show superhuman intelligence. In reality, LLM IQ tests violate every rule for conducting intelligence tests, making them a human prompt-engineering skills competition rather than a valid assessment of machine smartness.

Efforts to make LLMs “truth-seeking” by brainwashing them to align with their trainers’ preferences through mechanisms like RLHF miss the point. Those attempts to fix bias only make waves in a structure that cannot support genuine reasoning. This regularly reveals itself through flops like xAI Grok’s MechaHitler bravado or Google Gemini’s representing America’s Founding Fathers as a lineup of “racialized” gentlemen.

Other approaches exist, though, that strive to create an AI architecture enabling authentic thinking:
· Symbolic AI: uses explicit logical rules; strong on defined problems, weak on ambiguity;
· Causal AI: learns cause-and-effect relationships and can answer “what if” questions;
· Neuro-symbolic AI: combines neural prediction with logical reasoning; and
· Agentic AI: acts with a goal in mind, receives feedback and improves through trial and error.

Unfortunately, current progress in AI relies almost entirely on scaling LLMs, and the alternative approaches receive far less funding and attention – the good old “follow the money” principle. Meanwhile, the loudest “AI” in the room is just a very expensive parrot.

LLMs, nevertheless, are astonishing achievements of engineering and wonderful tools useful for many tasks. I will have far more on their uses in my next column. The crucial thing for users to remember, though, is that all LLMs are and will always remain linguistic pattern engines, not epistemic agents. The hype that LLMs are on the brink of “true intelligence” mistakes fluency for thought. Real thinking requires understanding the physical world, persistent memory, reasoning and planning – capacities that LLMs handle only primitively or not at all, a design fact that is non-controversial among AI insiders.

Treat LLMs as useful thought-provoking tools, never as trustworthy sources. And stop waiting for the parrot to start doing philosophy. It never will.

The original, full-length version of this article was recently published as Part I of a two-part series in C2C Journal. Part II can be read here. c2cjournal.ca/2025/11/the-ho…

Gleb Lisikh is a researcher and IT management professional, and a father of three children, who lives in Vaughan, Ontario and grew up in various parts of the Soviet Union.
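The article's central claim, that an LLM is a pattern engine predicting the next word from frequencies in its training text, can be illustrated with a deliberately crude toy. The sketch below is an assumption-laden simplification (real LLMs use neural networks over subword tokens, not bigram counts; the corpus and function names here are invented for illustration), but it shows the mechanism the article describes: fluent-looking continuations produced with no notion of truth.

```python
from collections import Counter, defaultdict

# Toy corpus: the only "knowledge" this model will ever have.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count bigram frequencies: for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word. Pure pattern
    matching: nothing here checks whether the result is true or sane."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

def autocomplete(start, length=6):
    """Generate text by repeatedly appending the most likely next word."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(autocomplete("the"))
```

Note that the generator will happily loop through statistically common phrases forever; at no point does any component ask whether the emitted sentence corresponds to reality, which is the structural gap the article argues scaling alone cannot close.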
English
0
1
1
70
Brent Gebhardt retweeted
Nick Sciarappa
Nick Sciarappa@PappaSciarappa·
Can I get an RT? I'm running a Spring Retreat for 150 Catholic high school teens in March, & want to fill a swag bag full of free Catholic memorabilia from businesses, mission programs & universities. Looking for stickers, hats, pens, sunglasses, pamphlets, prayer cards and more
English
12
90
87
16.1K
Scott Hayward
Scott Hayward@scottmhayward·
We can share the videos from British Columbia, Ontario, and Quebec, but Alberta's anti-free speech zone statute is the MOST restrictive across Canada. I look forward to @ABDanielleSmith and the @Alberta_UCP fixing this soon; Albertans have a right to see what is happening.
rightnowhq@RightNowHQ

You may have noticed at the beginning of our undercover videos, we talked about going undercover in four abortion clinics across the country, but have only released three videos. The fourth location was in Calgary, but we can't release the footage. Here's why:

English
4
3
9
469
Brent Gebhardt retweeted
Gretchen Crowe
Gretchen Crowe@GretchenOSV·
Here is Pope Leo's full answer to the question he received this morning about AI at #NCYC25. It speaks to each one of us, no matter our age, and it will be quoted for years to come. "Safety is not only about rules. It's about education, and it's about personal responsibility. Filters and guidelines can help you, but they cannot make choices for you. Only you can do that. These years of your life are meant to help you grow into mature adults. Spiritually, this means deepening your friendship with God and becoming more like Him. Intellectually, it means learning to think clearly, to think critically, to examine reality and to search for truth, beauty and goodness. It also means strengthening your will, with God's grace, so you can freely choose what helps you grow, avoid what harms you. Every tool we're given, including AI, should support that journey, not weaken it. Using AI responsibly means using it in ways that help you grow, never in ways that distract you from your dignity or your call to holiness. In your education, make the most of this time. AI can process information quickly, but it cannot replace human intelligence -- and don't ask it to do your homework for you. It cannot offer real wisdom. This is a very important human element: AI will not judge between what is truly right and wrong, and it won't stand in wonder, in authentic wonder, before the beauty, the beauty of God's creation. So be prudent. Be wise. Be careful that your use of AI does not limit your true human growth. Use it in such a way that if it disappeared tomorrow, you would still know how to think, how to create, how to act on your own, how to form authentic friendships. Remember, AI can never replace the unique gift that you are to the world." @OSVNews 📷: Margaret Murray
English
5
149
525
23.7K
CCCB
CCCB@CCCB_CECC·
On Nov. 21, His Holiness Pope Leo XIV appointed The Most Reverend Stephen Hero as Metropolitan Archbishop of Edmonton. 🔗 Read the media release ➡️ cccb.ca/?post_type=med… #cccb @archedmonton
English
2
9
26
3K
Brent Gebhardt retweeted
Johann Kurtz
Johann Kurtz@JohannKurtz·
Leaving a Legacy is currently the #1 New Release in every relevant category: Wealth Management, Retirement Planning, Personal Finance, & Business Motivation. It's universally rated five stars. This kind of reach is key to changing many minds. Can't thank you all enough.
English
18
20
267
24.1K
Brent Gebhardt retweeted
Johann Kurtz
Johann Kurtz@JohannKurtz·
Very proud to release 'Leaving a Legacy: Inheritance, Charity, & Thousand-Year Families'. The book addresses a key question: Why should we leave wealth to our children and not to charities? Clarity is necessary. We're on the verge of the greatest wealth transfer in history as the post-war generation nears the end of their lives. A crisis of confusion has gripped our family leaders: Should they leave their wealth to their children, or would this spoil and corrupt them? Should they give everything to charities? How can they defend nepotism in a meritocratic age? Leaving a Legacy answers by reconnecting readers with the timeless philosophy, theology, and practice of inheritance that built the Western world. True charity is a multi-generational project—and virtuous family dynasties are its indispensable guardians. It equips leaders to embrace this sacred duty and forge a legacy they will be forever proud of - and in so doing, allows the generations to reunite in a context of love and clarity. I very much hope you will invest in a copy, and if you find it valuable, share it with your family.
English
83
168
1.5K
427.5K
Brent Gebhardt retweeted
Alissa Golob
Alissa Golob@alissagolob·
It was just flagged to me that in a recent episode of the podcast @WestofCentreCBC, Kathleen Petty, @DuaneBratt, @alex_n_boyd, and @markusoff were talking about the UCP's upcoming AGM, where a policy banning third-trimester abortions will be voted on. They repeatedly say, once again, that it doesn't happen in Canada, and if it does, it's for "serious medical reasons".

It is because of regular talking points like these that I personally went undercover in four clinics across Canada to see if it was possible to get a third-trimester abortion. The video from a Toronto clinic was released last week, and from a Montreal clinic today.

Both clinics were ready to refer me for third-trimester abortions, no questions asked, specifically for no medical reason. They said up until the 8th month of pregnancy was acceptable, and that the "system didn't think it was too far". They also repeatedly referred to third-trimester abortions as "stillbirths".

Toronto video: youtu.be/Na8xkiAET5A?si…
Montreal video: youtu.be/7QUkPflqm5Y?si…
English
2
16
27
3K
Brent Gebhardt retweeted
Rachel 🇻🇦
Rachel 🇻🇦@RachelToRome·
1890 is the year everyone decided that Halloween was a pagan holiday.
English
326
1.6K
7.9K
290.6K