
TheRearAdmiral



I'm making Hades, except it has co-op, and is inspired by Eastern mythology!




This is actually what the future will look like. When wearable AR glasses saturate the market, a whole generation will grow up knowing reality only through a mixed virtual/real spatial computing lens. It will be chaotic and stimulating. They will cherish their digital objects.



I went through nearly 4000 newly-released Chomsky/Epstein documents, and summed up the findings in my new article: The Chomsky-Epstein Files: Unravelling a Web of Connections Between a Star Leftist Academic & a Notorious Pedophile mintpressnews.com/the-chomsky-ep…

The central point below is that DOJ violated the law by not providing a reason for the redactions, has protected people like Sultan and Les Wexner and many other powerful people @RepThomasMassie & I are exposing, and then chose to unredact the names without context.

Epstein fallout is rocking the paleontology community as one British dinosaur convention has banned attendance from some scientists named in the Epstein files cnn.it/4rrV6wf

Epstein faced life in prison under federal charges. Instead, he served 13 months of an 18-month sentence after Acosta backed down. Moreover, Epstein was allowed to leave daily and at will. It was absolutely a sweetheart deal.


Once there was a planet with a huge asteroid heading toward it. Stopping the asteroid would have required a few large countries to cooperate a moderate amount. That seemed hard. Some people became worried. A cult arose which said the asteroid would grant its believers eternal life, when it struck the planet. Nobody knew how to make the asteroid do that. But the cult said you couldn't prove it wouldn't. So there was no need to worry, and you could set your mind at ease. They called it the Asteroid of Immortality.

Some of the world's most famous astronomers tried to explain in more detail what would happen when the asteroid smashed into the planet, and that it didn't involve eternal life. The cult said that nobody had seen that disaster actually happen, so it wasn't scientific to believe in it. (Other astronomers joined the cult of the Asteroid of Immortality. It regarded astronomers who joined them very favorably and warmly -- the cult did, that is; not the asteroid.)

"If the asteroid *doesn't* hit our planet, everyone dies!" said the cult. "Like, because of old age, get it? Ha ha!" They thought this reply very clever. Transhumanists tried to point out that cryonics was in fact a thing, if somebody was that desperate to grasp any chance of escaping death by old age; that you could desperately grasp at immortality *without* endangering all life on the planet. Skeptics tried to explain that putting your faith in a falling asteroid to save you, just because it seemed big and powerful, wasn't much of a chance to grasp however desperately, because a falling asteroid would actually just kill you. People who cared about something other than themselves tried to say that it was different for everyone to all die at the same time, including everyone's children; and leave no legacy for the children's children who might have been.

"Everyone will die," said those trying to rally the world, "including your children; or your friends' children, if you've none of your own; they'll die before they have a chance to grow up, and have lives or children of their own. Every story ends in time; that's not the same as ending all stories." "Everyone will die all at the same time, if we don't stop the asteroid, and that will be the end."

This didn't work to talk most believers out of their faith. Thinking it clever to reply "Ah ha ha, but everyone dies even if the asteroid *doesn't* hit!" usually meant having too little wisdom to understand the counter-replies. If you couldn't figure out the problems for yourself, before your mouth uttered such words, you usually wouldn't recant when somebody else tried to explain. Instead the cult decided to call the anti-doom coalition "doomers", and thought that very clever too.

The cult spent vast amounts to build huge electromagnets to try to pull in the asteroid faster. The cult knew, their faith held, that the asteroid would fall in time regardless. But the prospect of pulling down the asteroid a little sooner let them feel powerful and in control, and like *they* were the ones making history. (Indeed, many splinter factions within the cult each said that if their followers invested enough to build the *most* powerful magnet, that would make it be *their* Asteroid of Immortality, and *they* would become the rulers of the new world.)

Above all, the cult worked to stoke enmity between the couple of large countries that would have needed to work together to deflect the asteroid. And at that task, unfortunately, the cult succeeded. For it was ever easier to push people downhill than uphill, to fight alongside entropy rather than fighting back against it; and call the default sad outcome your victory. Coordination was hard and not the default, and maybe it wouldn't have happened either way. But the cult did fight on the side of entropy, and entropy did win.

The cult likewise succeeded at pulling down the asteroid with electromagnets, if you wanted to look at things that way. They got the default outcome they'd defined as their own victory. They managed to let a falling asteroid fall. And then everyone died, all at the same time including all the children, and that was the end of all stories.



Nick Bostrom’s new paper:

>Developing superintelligence is not like playing Russian roulette; it is more like undergoing risky surgery for a condition that will otherwise prove fatal.
>One could equally maintain that if nobody builds it, everyone dies. In fact, most people are already dead. The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
>Along one path (forgoing superintelligence), 170,000 people die every day of disease, aging, and other tragedies.
>The choice before us, therefore, is not between a risk-free baseline and a risky AI venture. It is between different risky trajectories, each exposing us to a different set of hazards.
>Imagine curing Alzheimer's disease by regrowing the lost neurons in the patient's brain. Imagine treating cancer with targeted therapies that eliminate every tumor cell but cause none of the horrible side effects of today's chemotherapy. Imagine restoring ailing joints and clogged arteries to a pristine youthful condition. These scenarios become realistic and imminent with superintelligence guiding our science.
>We assume that rejuvenation medicine could reduce mortality rates to a constant level similar to that currently enjoyed by healthy 20-year-olds in developed countries, which corresponds to a life expectancy of around 1,400 years.
>Developing superintelligence increases our remaining life expectancy provided that the probability of AI-induced annihilation is below 97%.


If socialists understood economics, they wouldn't be socialists



FULL INTERVIEW: @sama joins TBPN to discuss GPT-5.3-Codex, AI agents, Anthropic's Super Bowl ads, and more.

00:00 GPT-5.3-Codex
02:27 AI agents and the future of work
03:20 The role of forward-deployed engineers in AI
05:42 AI benchmarks
07:29 Emotional attachment to chatbots
10:40 On data and compute being the 'new oil'
12:56 Is software dead?
17:48 Codex Desktop and the rise of the general-purpose work agent
25:00 OpenAI’s last Super Bowl ad and the Anthropic ads


