Ryan Marotta
@RyanJMarotta
Husband, Dad, and Pet Parent. Catholic, Writer, and Nerd. He/Him/His. Tweets and puns are my own.


What is hentai? The niche porn that the world can't stop watching trib.al/MUi6oID



The fact that this is even a thing is incredible. What a time to witness the spiritual revival happening in our country right now. St. Joseph - pray for us!

Vance says Americans should find comfort in the fact that our allies are "suffering more than we are" from high gas prices


Hegseth: "The war in Iran is protected by God." Pope Leo XIV: "Do not involve the name of God in choices of death. God cannot be enlisted in darkness." 1-0 to Leo.

If your friends aren't talking about:
• Claude
• Perplexity Computer
• Openclaw
• Fitness
• Investing
• Ownership
• Automated workflows
It's time to find new friends.

Ask a colleague why they refuse to use AI. They say it uses up all that water. You point out the water use is far smaller than some would have them believe. Then it's the hallucinations. You mention accuracy has improved dramatically. Then, finally: the process is the point. The struggle. The craft. The deeply human act of sitting with uncertainty.

They're not reasoning. They're rationalizing their gut intuitions.

My amazing student @vicoldemburgo, with Éloïse Côté, Reem Ayad, @yorl, Jason Plaks, and I have a new preprint that explores this more thoroughly, called "The Moralization of Artificial Intelligence."

We started by asking how moralized AI has become in public discourse. Analyzing 69,890 news headlines from 2018 to 2024, we found that AI was moralized at levels comparable to GMOs and vaccines, technologies whose moral opposition has been studied for decades. It ranked above both. The sharpest spike came within weeks of ChatGPT's launch in late 2022.

When we surveyed representative samples of Americans, a majority of AI opponents said their views wouldn't change even if AI proved safe and beneficial. That's consequence insensitivity, the hallmark of moral conviction, not practical calculation.

Across art, chatbots, legal tools, and romantic companions, AI moralization loaded onto a single latent factor. A global moral stance, dressed up in whatever practical language is available.

The behavioral data make this concrete: a one standard deviation increase in moralization scores predicted a 42% drop in actual AI usage, even when it would have benefited that person personally. The conviction preceded the behavior by up to 573 days.

The next time someone gives you three different reasons to oppose AI, each one dissolving under mild scrutiny, you're probably not watching someone think. You're watching someone feel.

Preprint available here: osf.io/preprints/psya…
