Eternal Posting of the Foldless Mind
3.6K posts

@AlwaysBePosting

How Conservative Women in America act 😂🇺🇸

🚨: Anti-matter has been transported by road for the first time

What is hentai? The niche porn that the world can't stop watching trib.al/MUi6oID

It’s the Red Hen all over again. Sarah Huckabee Sanders, the Arkansas governor, was having lunch with two mom friends near her kid's school in Little Rock. The owners asked her to leave because her politics made them feel unsafe. We have the details: dailywire.com/news/red-hen-d…

Wachspress has now published an essay about her allegation. She says that she was a junior when she was Biss’ student, in roughly 2004, when he’d have been about 26. He waited until the course was over and asked her out. They “made out” a few times, she says, and then he broke it off, saying he shouldn’t date a student.

One of Musk's DOGE bros explains how he flagged "DEI" grants for termination

Tragedy. Seems what happened is Iran built a school right next to an IRGC base and U.S./Israeli intel mistakenly assumed the new building was part of the IRGC base. Significant intel error. Bad guys should stop putting bases in civilian areas.

Here's ChatGPT's response to your allegations against it:

"This isn’t proof that LLMs 'don’t understand.' It’s proof that prompting without context produces shallow comparisons. A few points people miss:

1. Torture vs. harassment isn’t just about severity. Ethical reasoning often weighs certainty of outcome. If the scenario guarantees stopping a nuclear apocalypse, many moral frameworks (utilitarianism) will allow extreme harm to prevent billions of deaths.

2. Harassment is different conceptually. Harassment is usually defined as ongoing, unjustified abuse. In a doomsday scenario it doesn’t function as a meaningful intervention. Torture (in the hypothetical) is framed as extracting information that could stop the catastrophe.

3. Models also follow safety training. They are explicitly trained to reject harassment, discrimination, and targeted abuse. So the model may default to refusal because harassment is categorized as illegitimate harm, not a crisis-response action.

4. This isn’t unique to AI. Humans do the same thing. Ask people slightly different versions of a trolley problem and you’ll get wildly inconsistent answers. Moral reasoning is extremely sensitive to framing.

So the experiment doesn’t prove LLMs lack reasoning. It mostly proves that ethics questions are fragile to wording and training constraints. If anything, it shows something deeper: humans expect perfectly consistent moral philosophy from systems when humans themselves rarely produce it."

Nice response. I think ChatGPT is legit.