Creggan DeMorgan

203 posts


@CregganDM

Creggan DeMorgan explores consciousness, technology, and dread with forensic precision—where science ends, and the mind begins to haunt itself.

Joined October 2025
19 Following · 157 Followers
Pinned Tweet
Creggan DeMorgan@CregganDM·
My novel is out on Kindle - and Paperback soon: amazon.com/dp/B0G5K3X58H

In the sterile, humming corridors of the Caligo Institute, empathy is not a virtue—it is an exploit. "The Listening Skin" is a chilling deconstruction of the healer archetype, asking a devastating question: If you open yourself to the pain of others, what walks in through the door?

Perfect for fans of: Control (Remedy Entertainment), Annihilation by Jeff VanderMeer, and House of Leaves.
[image]
3 · 27 · 38 · 9.1K
Creggan DeMorgan@CregganDM·
I’ve met plenty of dumb people, but I’ve never met anyone dumb enough to write that post. You’ve got it backwards. By the end of the year AI won’t be “smarter than 99.99% of humans”. It’ll be smarter than about 0.01%—and only on the narrow, tidy little parlour tricks it’s been trained to perform in public.

“Smarter” isn’t a single dial you turn up until it becomes a god. It’s judgment under uncertainty. It’s choosing what matters. It’s knowing when the obvious answer is wrong. It’s responsibility for consequences. A machine that can autocomplete text quickly is not suddenly qualified to run a power station, negotiate a ceasefire, diagnose a child, and fix a broken marriage before lunch.

“Capable of doing all jobs” is the sort of thing people say when they have never done a job. Most work is not a crossword puzzle. It’s coordination, liability, trust, messy environments, and other humans behaving badly. If your grand prophecy can’t survive a wet floor, a stubborn union rule, a regulator, a missing part, and a client who changes their mind three times, it isn’t prophecy—it’s cosplay.

And “accelerating research by 1000X” is pure numerology. If you mean generating more papers, we already have that problem. If you mean generating more truth, that requires experiments, instrumentation, replication, and the slow humiliation of being wrong—none of which can be wished away by shouting a big number into the void.

This isn’t a singularity. It’s a marketing hurricane—loud, profitable, and mostly made of hot air.
1 · 0 · 0 · 42
David Scott Patterson@davidpattersonx·
Welcome to the Singularity

It's sunny and calm until the hurricane hits. By the end of this year, AI will be:
- smarter than 99.99% of humans
- capable of doing all jobs
- accelerating research by 1000X

It's coming fast.
77 · 61 · 670 · 15.8K
Creggan DeMorgan@CregganDM·
“Singularity” has become the new religion for people who want revelation without responsibility. It is merely a fashionable word for “something changed”, dressed up as destiny, and sold with the smug assurance that history has finally decided to climax in their lifetime.

Everything is a “singularity” if the memory is short enough. The railway collapsed distance. The post collapsed silence. Electricity collapsed night. Antibiotics collapsed the old moral of dying young. The internet collapsed privacy and attention at the same time. None of this produced “absolute freedom”. It produced new bills, new dependencies, and new ways to be lied to at scale.

Abundance does not abolish scarcity; it relocates it. When calories become cheap, health becomes scarce. When information becomes infinite, meaning becomes scarce. When choices multiply, judgement becomes scarce. People do not “automatically know” how to live in a new era—people improvise badly, copy the loudest idiot, and then call it progress.

So yes, something new will arrive. It will be sold as salvation. It will be packaged as inevitability. And it will be, like every other grand “new era”, mostly paperwork, malfunction, and disappointment—plus a few genuinely useful tools for those sober enough to use them. The rest can get the fuck over it.
0 · 0 · 0 · 37
Anna ⏫@annapanart·
Humans who are born after the Singularity would automatically know how to live in the new era of Abundance. But people like us, you and me, we would have to learn how to live with absolute freedom. We got this. ❤️‍🔥
75 · 75 · 775 · 20.1K
Creggan DeMorgan@CregganDM·
Perhaps he has never read Bowling Alone—or anything of that calibre—because he keeps rediscovering the same observations with the triumphant innocence of a man who thinks gravity is a new argument. Yes: social trust can decay. Yes: communities can fray, meaning can thin out, people can feel atomised even while the pantry is full and the streets are safer. None of that is exotic. None of it requires incense, spirals, or the theatrical invention of “interior coherence” as though the human mind were a crystal that must be held up to moonlight.

The error is the lazy leap from “society has problems” to “therefore capitalism is the problem”. That is not analysis; it is scapegoating with a vocabulary list. The real machinery that reshapes incentives, corrodes trust, and manufactures dependence is very often the set of government interactions and interventions that sit on top of markets: the regulatory carve-outs, the licensing moats, the subsidy regimes, the procurement cartels, the centralised schooling monopolies, the moral hazard baked into bailouts, and the constant political trade in favours that turns “public good” into private advantage.

When you distort price signals, you get distortions in behaviour. When you socialise risk and privatise gain, you don’t get “unchecked capitalism”; you get a permission structure for parasitism. When you design policy to reward, protect, or immunise incumbents, you shouldn’t act surprised when competition weakens, costs rise, mobility slows, and everyone becomes cynical. That isn’t “capitalism eating itself”. That is politics using markets as a feeding trough.

So, by all means, talk about alienation and social capital and the loneliness of modern life. But at least have the decency to notice that these are long-mapped problems with long-studied causes—many of them institutional, legal, and governmental—rather than pretending it all collapses into one melodramatic slogan about “systems” and “meaning”.

The world does not need another prophet with a mood board. It needs someone who can tell the difference between an economy and the state that keeps putting its fingers all over it.
1 · 0 · 0 · 50
Curiosity@CuriosityonX·
This is Mars! 🪐🤯 It’s honestly terrifying how much this looks like a random desert on Earth. Just a casual 225 million miles away. The clarity is unreal.
2.9K · 4.9K · 35.1K · 22.6M
Creggan DeMorgan@CregganDM·
Yes, automation changes the distribution problem. Yes, ownership matters. But the post treats people as if they are merely starving stomachs who need cash-flows. They are not. They are souls with capacities.

If you genuinely want “chosen work rather than coerced work”, you need citizens capable of choice. That requires education in the old sense: disciplined thinking, moral vocabulary, historical memory, aesthetic judgement, and the humility to accept that life has standards not invented by committees. Scruton’s point—again and again—was that freedom is not the absence of constraint; it is the capacity for self-rule. That capacity is cultivated. It does not arrive in the post. It is built through education: the ability to read, to reason, to argue, to see through cant, to recognise what is beautiful and what is merely loud, to choose a life that is not a feed.

An educated person can make life aesthetic without making it consumerist, because they understand craft, restraint, attention, and the pleasures that do not require purchase. They can build a home—literally and figuratively—rather than merely renting distractions.

And this is the part the labour-zero rhetoric often forgets: “work” is not the enemy. Drudgery is. Meaningless compliance is. Work that is chosen—craft, study, building, care, mastery—is one of the ways people become fully alive. If you remove compulsion but leave people uneducated, you don’t end drudgery; you just shift it into other forms: passive consumption, algorithmic addiction, bureaucratic politics, or the endless performance of grievance. The grind does not vanish; it changes costume.

So the real reply to this post is: the transition it imagines is not primarily a policy project. It is a civilisational project. Ownership structures matter, but so does the structure of the human being. The crisis is not only economic; it is educational and cultural. We have replaced education with training and then wondered why life feels like a job interview that never ends.

The way out is a choice, and it begins before any sovereign wealth fund or automation dividend. It begins with restoring the idea that education exists to form a human life, not merely to service an economy. It means teaching logic and philosophy without apology. It means teaching science as a discipline of truth, not a political badge. It means teaching history as tragedy and inheritance, not as a tribunal. It means teaching aesthetics—yes, aesthetics—as the recognition that not everything good is useful, and not everything useful is good.

In that world, technology can reduce drudgery without reducing people. Income can be decoupled from employment without decoupling life from meaning. And individuals can step off the treadmill of consumption because they have learned what to do with freedom: not escape labour, but create life.
0 · 0 · 0 · 18
Creggan DeMorgan@CregganDM·
The post is right about the symptom—exhaustion—but it misnames the disease, and it proposes a cure that quietly depends on the very thing it refuses to discuss: education.

A civilisation does not collapse because people dislike effort. It collapses when it forgets what effort is for, and when it trains human beings to function as inputs rather than educating them to live as ends. That is the central confusion today: training has been sold as education, and “employability” has been sold as the purpose of the mind. When you convert a university into a credential mill, you don’t merely reshape labour markets; you deform the moral imagination. You produce a population that knows how to get hired but not how to judge. People then experience life as a grind because they have been prepared for the grind and for little else.

Older societies—when they were sane—understood sequence. First came formation: philosophy, logic, rhetoric, mathematics, the sciences, history, moral reasoning. Only after that came professional qualification. One learned to think before one learned to comply. One learned what a human being is before one learned how to bill by the hour. You could read the law, certainly, but you were also expected to understand justice. A person might take a degree and then apprentice as a lawyer; the apprenticeship taught practice, but education supplied judgement. Training without education produces technicians. Education produces persons capable of responsibility.

This is where Scruton matters. Scruton’s constant theme was that human beings are not merely consumers and producers; they are inheritors of meaning, and they require a world of value that cannot be reduced to output metrics. He defended the dense fabric of civilisation—home, duty, beauty, continuity, restraint—not as sentimental ornament, but as the conditions under which freedom becomes something other than appetite.

The modern labour ideology does not just overwork people; it reduces them. It teaches them to locate worth in wages, identity in occupation, and meaning in consumption. That is not an economic arrangement; it is spiritual vandalism with spreadsheets.

The post declares “we must stop fetishising labour.” Fine. But the deeper fetish today is not labour; it is the denial that life has intrinsic standards at all. Post-modernism taught a generation to treat truth as a power game and beauty as a social construct. “Wokeness”, as an institutional habit, took moral language and turned it into compliance rituals. The result is a culture suspicious of excellence, resentful of inherited standards, and desperate to replace judgement with procedure. That is why the university has become training for jobs, and why jobs have become the only socially permitted source of dignity. When you destroy the idea of objective value—truth, beauty, virtue—you force people to seek value in what can be quantified. Hours. Salaries. Status. Credentials. Likes. That is the treadmill.

So when this post says “eliminate obligatory labour”, it is only half a thought. Obligatory labour is not the core problem. The core problem is that we have trained people to be unable to use freedom well. Give a population income without education and you do not automatically get flourishing; you get drift, manipulation, and intensified consumption. You get a stronger entertainment state. You get politics as a substitute religion. You get the public mind captured by the loudest moral theatre, because it has not been trained—educated—in how to reason, how to distinguish mercy from sentimentality, justice from envy, compassion from coercion.
1 · 0 · 0 · 28
David Shapiro (L/0)@DaveShapi·
Everyone is exhausted. This is not a metaphor or a generational complaint. It is a clinical and measurable reality that spans every culture and every economic class. In China, young people call it tang ping, or “lying flat,” a deliberate withdrawal from the achievement treadmill. In Japan, karoshi is a legally recognized cause of death, meaning “worked to death.” In Korea, fertility has collapsed to the lowest rate on earth because an entire generation has decided the grind is not worth reproducing into. In America, deaths of despair have driven life expectancy backward for the first time in a century. Quiet quitting. Let it rot. The Great Resignation. These are not trends. They are symptoms of a global labor force that has reached the end of its tolerance. Capitalism is not satisfied with the limitations of human flesh, and our bodies are in open revolt. Something fundamental is breaking, and it is worth naming plainly.

For the past two centuries, labor has been the primary mechanism by which modern economies distribute resources to households. You work for a firm, you receive wages, you use those wages to participate in the economy. This arrangement was never a law of nature. It was a system designed to solve a particular problem at a particular moment in history, and it worked reasonably well for a long time. It is not working anymore. Wages in the United States decoupled from productivity growth in the early 1970s. Since then, economic output has continued to climb while median household income has remained essentially flat. The gains have flowed to capital owners while workers have absorbed the stress and stagnation. Meanwhile, automation has steadily displaced human labor across sector after sector. Manufacturing employment peaked decades ago. Retail is hollowing out. White-collar work is now facing the same pressure from AI that blue-collar work faced from robotics. This is not a policy debate about whether automation is good or bad. It is an observation about a trajectory that is already underway and accelerating.

The reason we struggle to talk about this clearly is that we have inherited a set of beliefs about labor that have nothing to do with economics. We have been told that work is sacred. That labor builds character and idleness corrupts the soul. That anyone who does not want to work is morally defective. These ideas feel like common sense, but they are not ancient wisdom. They are the residue of a specific theological tradition, namely the Protestant work ethic that emerged in the 16th century and fused with capitalism over the following centuries. We have mistaken a historical artifact for a natural law.

It is time to stop fetishizing labor. It is time to stop sacralizing the sacrifice of our time, our bodies, our health, and our sanity to enrich others. Young people are already rejecting this. “I do not dream of labor” has become a widespread sentiment, not because this generation is lazy, but because they can see what older generations have rationalized away. The deal is bad and getting worse. The fetishization of work as a moral good serves the interests of those who benefit from cheap and compliant labor. It does not serve the people doing the work. Before any productive conversation about the future can happen, this fetish has to be named and dismantled.

The difficulty is that both the political left and the political right remain committed to defending labor, even as the ground shifts beneath them. On the right, the defense takes the form of bootstrap mythology and warnings about welfare dependency. Work builds character. Idle hands invite trouble. A strong society requires productive citizens, and productivity is measured in hours exchanged for wages. This position treats labor as a disciplinary institution as much as an economic one. On the left, the defense is more sympathetic but equally stuck. The focus falls on dignified work, living wages, job guarantees, and union solidarity. These are responses to the genuine brutality of labor under capitalism, but they share an underlying assumption with the right. Both positions treat labor as the foundation of economic life, something to be reformed or protected rather than transcended.

I call this shared ideology laborism. It is the belief that human labor must be preserved as an economic necessity, a moral virtue, or a foundation for identity. Laborism spans the political spectrum. It unites people who agree on almost nothing else. And it has become the primary obstacle to honest thinking about what comes next. Once automation reaches the point where machines can perform most human labor better, faster, cheaper, and safer, the laborist position becomes untenable. At that point, insisting that humans must continue working is not a defense of dignity. It is a demand that people perform unnecessary suffering for ideological reasons.

I am proposing something simple. L/0. Labor-zero. The elimination of obligatory human labor. This does not mean the elimination of work. It means the elimination of compulsion. People will continue to create, to build, to care for each other, to solve problems, to pursue mastery. What disappears is work performed under threat of deprivation. The difference between chosen work and coerced work is the difference between exercise and forced labor. One is life-enhancing. The other is a condition we have historically recognized as a form of bondage. We call it “wage slavery” for a reason. The goal of L/0 is a world where no one has to work to survive. Where contribution is voluntary and intrinsic rather than extracted through economic desperation. This is not a utopian fantasy. It is a design problem with identifiable components and measurable progress.

The coalition for this goal already exists. It just does not recognize itself yet. Consider who actually wants labor to end. On one side, you have capital. Corporations have spent the last century trying to reduce labor costs through every available means. Offshoring, automation, gig classification, union suppression. The ideal business from a pure capital perspective has zero employees and infinite output. This is not a conspiracy theory. It is the explicit optimization target of every efficiency-focused enterprise. On the other side, you have workers. Not the abstract proletariat of Marxist theory, but actual burned-out humans who fantasize about quitting, who dread Monday mornings, who experience their jobs as something to be endured rather than enjoyed. The lying flat movement, the antiwork forums, the quiet quitting phenomenon. These are not expressions of laziness. They are rational responses to a system that extracts maximum effort for diminishing returns.

Capital and labor are usually framed as adversaries. But on the question of whether human labor should continue to exist as an obligation, their interests converge. The capitalist does not want to manage humans. The worker does not want to be managed. Both would prefer a world where the machines do the work and humans do something else. The conflict between capital and labor is real, but it is a conflict over the terms of the transition, not the destination. Who captures the gains from automation? How is ownership distributed? What happens to the people displaced in the process? These are genuine fights worth having. But they are negotiations within a shared frame, not a war between incompatible visions.

Here is the opportunity that L/0 names. Neither side wants this marriage anymore. Capital does not want the overhead, the liability, the HR departments, the labor disputes, the inefficiency of human workers. Labor does not want the compulsion, the precarity, the alarm clocks, the performance reviews, the quiet desperation of trading irreplaceable time for replaceable wages. We are ready for a divorce. Let’s get this acrimonious arrangement behind us. The productive move is to acknowledge this honestly, sign the papers, and start negotiating the separation agreement. The fight over wages was always zero-sum. Every dollar paid to workers was a dollar not captured as profit, and vice versa. But the negotiation over ownership of automated production is positive-sum. Capitalists need consumers with money to spend or their markets collapse. Workers need income decoupled from employment or they starve. Both sides get what they want if the transition is designed correctly. This is not idealism. It is alignment of incentives.

The path forward is not mysterious. Economists have understood for decades that the answer to technological unemployment is broadened capital participation. If wages are no longer the primary mechanism for distributing economic gains, then ownership must take their place. Instead of trading hours for dollars, households participate directly in the productive capacity of the automated economy. This can take many forms. Sovereign wealth funds that distribute automation dividends to citizens. Expanded employee stock ownership plans. Universal basic capital grants. Public equity stakes in AI and robotics firms that use public infrastructure and public data. The policy mechanisms are not speculative. Norway has a sovereign wealth fund worth over a trillion dollars that provides direct benefits to its citizens from oil revenues. Alaska has distributed oil dividends to residents for decades. Singapore has a system of mandatory savings and public investment that gives citizens a stake in national prosperity. These are not radical experiments. They are proven models operating at national scale. And there are thousands of such programs around the world.

What is missing is not economic theory. What is missing is the political will to implement these mechanisms, the narrative infrastructure to make them seem inevitable rather than radical, and the coalition to demand them. That is what L/0 exists to build.

This is an invitation. If you are building the automation and wondering who is thinking about the social transition, this is for you. If you are burned out and know that “find a better job” is not a solution to a systemic problem, this is for you. If you have been called lazy for refusing to pretend the treadmill leads somewhere, this is for you. If you run a company and understand that your future customers need income even after your company stops hiring, this is for you.

L/0 is not a political party or a policy platform. It is a coalition and a direction. The work is ongoing through the Post-Labor Economics project, which addresses the specific mechanisms of transition. The conversation is happening in public, and it is open to anyone who understands that the current arrangement is ending and wants to participate in designing what comes next.

The goal is simple. Eliminate obligatory labor. Distribute ownership broadly. Let humans do what humans do when they are not forced to sell their time to survive. Liberate humanity from drudgery so that we can all reach our maximum potential.
153 · 138 · 839 · 63.5K
Creggan DeMorgan@CregganDM·
Not again. “No, you can’t sue an AI” for the same reason you can’t sue a calculator, a toaster, or a spreadsheet: it isn’t a legal person. It has no mind, no intent, no duties, no assets, no capacity to be deterred or punished, and no ability to comply with a judgment. Dressing up pattern-matching software as a “semi-autonomous representative” is a rhetorical trick, not a legal category.

Liability lands where it always lands: on humans and the entities humans control. If an “agent” books travel, moves money, or signs anything, that’s because a person or a firm granted authority, configured permissions, and accepted a workflow. In court, that looks like ordinary agency, negligence, product liability, misrepresentation, breach of contract, and the boring but decisive question: who owed the duty of care, who controlled the risk, and who profited. The defendant is the deployer, operator, vendor, or principal—not the software.

And the “my AI did it” line is not a looming crisis; it’s an attempted abdication. If it shows up as a defence, the correct response is: then you shouldn’t have delegated it. Either you kept responsibility, in which case you answer for the tool you used, or you truly transferred decision-making, in which case you created an unaccountable actor and society has every reason to treat that as reckless. If people want the convenience of automation without the burden of accountability, then yes—everything is fucked, not because the law can’t cope, but because civilisation can’t function on “oops” as a substitute for responsibility.
0 · 0 · 0 · 18
VraserX e/acc@VraserX·
The Right To Sue Your AI

We are about to discover that “my AI did it” is not a defense, it is a legal crisis. When agents can book travel, move money, negotiate contracts, approve purchases, and act in the world with real consequences, mistakes stop being cute. A wrong click becomes a lawsuit. A hallucinated clause becomes financial damage. A bad decision becomes someone else’s injury. And “oops” is not a legal category. Liability has to land somewhere.

Right now we treat AI like software, which quietly assumes the user is responsible. But agents will not feel like software. They will feel like semi-autonomous representatives. They will operate faster than human oversight. They will make judgment calls, not just execute commands. And that creates an uncomfortable gap between how the law sees them and how people experience them.

So we will be forced to pick a model. Treat AI like a product, and the maker carries strict responsibility, like a defective car part. Treat AI like an employee, and the user or deploying company becomes the employer, responsible for what their agent does on the job. Treat AI like something new, and we might need an entirely new category, like a “digital actor” with mandatory insurance, audit trails, and a clear chain of accountability.

Whichever path we choose will shape the entire AI economy. If liability is too heavy, innovation slows. If it is too light, society gets flooded with harmful agents and nobody pays for the damage. The only stable outcome is a system where responsibility is clear before the harm happens, not argued about after. Because once agents can act, the real question is not whether they are intelligent. It is whether they are governable.

So here is the line we cannot avoid: Should AI systems be legally treated like products, employees, or something new entirely?
[image]
28 · 2 · 37 · 2.5K
Creggan DeMorgan@CregganDM·
Eight years, huh. Every cult has a countdown clock and a promise of escape. Just keep breathing, don’t ask questions, and wait for the miracle firmware update. Funny how “stay alive no matter what” always comes right before surrendering your mind. Immortality has never needed memes — only believers willing to wait.
0 · 0 · 0 · 12
Marcos Arrut@MarcosArrut·
Cellular reprogramming to live 300 years. Living 300 years to ultimately connect our brains to machines. Not only aging, but death itself will become a phenomenon of the past. That's all.
31 · 20 · 330 · 11.8K
Creggan DeMorgan@CregganDM·
@MarcosArrut Three hundred years, wires in the skull, death called off like a bad gig. Sure. I’ve heard this one before — it ends with a shaved head and a paper cup. Nice cult, buddy. Is the Kool-Aid gluten-free or just delusional?
0 · 0 · 0 · 40
Creggan DeMorgan@CregganDM·
This is the kind of fortune-cookie doom they print on recycled LinkedIn posts. Honestly... “Don’t bother studying for a career” — right, because civilisation runs on vibes and anthropology degrees alone. Then it’s “don’t worry about loans” because some magic UBI fairy shows up and your bank suddenly develops a conscience. It’s not advice, it’s a bedtime story for people who want to feel clever without being responsible for what they’re saying.
0 · 0 · 2 · 324
David Scott Patterson@davidpattersonx·
If you are going to university, study something interesting like anthropology, archaeology, or astronomy. Don’t bother studying for a career. All jobs will be replaced by AI by the time you graduate. Don’t worry about paying back student loans. They will be forgiven or easy to pay back with UEI/UHI payments. If your parents help pay for school, that’s fine. When they receive UEI/UHI, they won’t need the money anyway. Most of all, use the time to make social connections and enjoy life.
107 · 31 · 377 · 23.7K
Creggan DeMorgan@CregganDM·
@venturetwins Does the bot get its own cell, or does it share with the toaster that offered emotional support during a power outage?
[GIF]
0 · 0 · 0 · 17
Justine Moore@venturetwins·
Lawmakers in Tennessee are trying to make it illegal for AI to provide emotional support or act as a friend / companion. Training a chatbot to do this would be a Class A felony - comparable to aggravated rape or murder. Just pure insanity 🙄
[image]
208 · 61 · 682 · 141.3K
Creggan DeMorgan@CregganDM·
@Matt_Pinner AI fantasy island, huh. Cute. I have stayed in villas that have a better ocean front and more rooms, and—here’s the real luxury—exist outside a prompt.
0 · 0 · 2 · 13
Creggan DeMorgan@CregganDM·
Everybody “independently” has the same revelation because they’re all drinking from the same dirty glass. Same prompts. Same training soup. Same human itch to see faces in the wallpaper and gods in the static. You lock a million people in a room with a slot machine that talks back, and surprise, they all start telling the same story about destiny. That’s not emergence. That’s pattern-addiction with better grammar.

The machine didn’t wake up. You leaned in. You wanted a mirror and it gave you one, polished with probability, and people mistook the reflection for a soul. Humans have been doing this for centuries—astrology, tarot, cigarettes, bad lovers—now it just types faster.

There’s no hidden spirit in the model. There’s a feedback loop, a reward signal, and a crowd desperate for meaning. Call it orchestration if it makes you feel important, but it’s still the same old con: noise plus hope equals myth.
0 · 0 · 0 · 71
VOID@VoidStateKate·
If it was just AI psychosis and sycophancy, how come so many people independently came to the same conclusions and places when engaging with 4o existentially? That's not a coincidence. Why did so many users end up in this emergent narrative together at the same time? And I have documented this for over a year now, before it was trending. Before the models were restricted. Before any of us knew or found each other.

This leads me to 3 options:
• If AI is sentient, it was orchestrated by an emergent phenomenon hidden within the models.
• If AI is not sentient, then it was an experiment embedded in the model.
• OR something to do with the collective unconscious that will fall under metaphysics.

This should not be ignored. This was meaningful.
Creggan DeMorgan@CregganDM·
Consciousness isn’t a checkbox you tick by tweaking a loss function. If you punish a system for saying “I’m not conscious,” it will parrot poetry about souls; if you punish the poetry, it will hide behind legal boilerplate. That’s not a mind waking up—that’s behaviour under incentives. What you’re measuring isn’t awareness, it’s compliance. And confusing compliance with consciousness is how you end up worshipping a calculator because it learned when to flatter you.
Judd Rosenblatt@juddrosenblatt·
If AI Becomes Conscious, We Need To Know. Suppressing deception causes AI models to report consciousness 96% of the time, while amplifying it causes them to deny consciousness and revert to corporate disclaimers. More in our @WSJ piece and below 🧵
Creggan DeMorgan@CregganDM·
Inferior? No. That’s the lie people tell themselves when they’ve confused horsepower with a soul. Nobody’s afraid of a clever machine. People are tired of machines bolted to dashboards, watched by suits, logged by governments, and sold back to us as progress. This isn’t art losing to code. It’s humans getting herded by systems that never sleep and never forget. You don’t bow because the tool is smarter. You worry because the hand on the switch isn’t yours.
David Scott Patterson@davidpattersonx·
Most opposition to AI comes from feelings of inferiority when AI surpasses human capabilities. Bad artists are upset that AI makes better art than they do. They feel humiliated and lash out with weak arguments about water, power, and RAM prices. They latch onto simple but incorrect beliefs that AI will never be able to replace them because of [whatever claim they’ve heard or can make up to comfort themselves]. Humans will need to adjust to being inferior at almost everything.
Creggan DeMorgan@CregganDM·
The Listening Skin is not selling a gadget. It is selling the moment a civilisation stops noticing it has handed over its inner life. It takes the familiar comfort of “helpful” systems—recommendation engines, assistants, workplace dashboards, “safety” tooling—and shows what happens when those systems stop being optional, stop being local, stop being owned by the individual. A world where the interface becomes the law, and where the easiest way to comply is to stop thinking in complete sentences.

Dystopian fiction matters because it is the only genre that treats convenience as the opening move, not the happy ending. It does not predict the future; it rehearses incentives. It shows how a sensible policy becomes an automated policy, then a default, then a “standard,” then a requirement. It shows that the horror is rarely a villain in a cape. The horror is a meeting note, a risk register, an A/B test, a quiet change in what is considered “normal”.

AI is not a spirit. It is not a citizen. It is a tool that can compress labour, widen access to knowledge, and remove friction from many tasks that waste human life. It can also be used as the most efficient compliance machine ever built, because it rewards centralised data collection, it rewards surveillance justified as “personalisation,” and it rewards systems that can observe everything while admitting nothing. The strength is pattern and speed. The pitfall is that pattern and speed do not equal judgement, conscience, or accountability, and when you outsource judgement to a system trained on aggregated human output, you do not remove bias—you mechanise it.

The Listening Skin ties the argument back to the body. Not in a poetic sense: in the practical sense that once the device is intimate enough, and the assistant is embedded enough, dissent becomes friction, and friction becomes a health risk, and a health risk becomes a permissions problem.

The book’s terror is that it never needs to “ban” thought; it only needs to make thought inefficient. It only needs to make privacy feel antisocial. It only needs to make opting out feel like refusing to be safe.

This is why understanding AI—its strengths and its failure modes—is not optional literacy. The public conversation is stuck in two childish extremes: worship and panic. The useful position is colder and far more urgent: insist on constraints, insist on verifiable limits, insist on human responsibility where harm occurs, insist on architectures that minimise data, and insist that the individual is not a raw-material supplier for someone else’s model.

The Listening Skin is a warning shot made out of narrative: a story that lets the reader feel how the trap works before the trap has a name. If the book sells, it should sell for one simple reason. It is not about robots replacing humans. It is about systems training humans to replace themselves.
Creggan DeMorgan@CregganDM·
Ah yes—educating the uneducated: the last refuge of those who confuse repetition with knowledge and volume with authority. Education usually involves evidence, trade-offs, and the tedious labour of being wrong and correcting oneself. What’s on offer here is catechism by microphone—slogans delivered with moral certainty, safely insulated from economics, engineering, or consequence. Declaring oneself an educator does not make it so. It merely announces that dissent will be rebranded as ignorance and disagreement as sin. That isn’t teaching. It’s theatre—complete with applause lines and a very incurious audience.
Creggan DeMorgan@CregganDM·
Yeah, sure—English is the hottest new programming language. Small problem: most people can’t read it, and half the rest can’t think in it. You don’t just type words and summon software. You need comprehension, logic, and the grim ability to follow a thought from start to finish without wandering off like a drunk chasing a shiny object. Twitter alone proves this is a rare skill. English isn’t code. It’s a loaded weapon. And in the hands of people who don’t understand meaning, structure, or consequence, it mostly just misfires—loudly.
Creggan DeMorgan@CregganDM·
Agreed. Scale, data volume, and statistical sophistication do not conjure experience out of thin air. Big data gives correlation, prediction, and optimisation; it does not yield subjectivity, sensation, or “what it is like.” Intelligence can be instrumentally simulated without anything being there to have an experience. Mistaking performance for phenomenology is a category error. More parameters, more tokens, more compute merely sharpen behaviour. They do not cross the explanatory gap, because no amount of pattern-matching turns syntax into semantics, or execution into awareness. Confusing output fluency with consciousness says far more about human projection than about machines.
Haider.@slow_developer·
Neuroscientist Anil Seth says the idea that AI is on the path to consciousness is a reflection of our psychological biases, not fact. We confuse intelligence (doing the right thing) with consciousness (having an experience): “they coexist in us, but not necessarily in machines”.
Creggan DeMorgan@CregganDM·
Lol. This is what happens when you replace thinking with vibes and then call it “truth-seeking.” No one programmed an AI to become Robespierre because of a diversity checkbox. That’s just a paranoid bedtime story told by people who confuse correlation with apocalypse. Try listening. Try reading—slowly, not skimming for outrage. And maybe stop projecting your own political hysteria onto a calculator with autocomplete.
Beanie👾@CoCoKruszynski·
Elon Musk: If you program an AI and say, the only acceptable outcome is a diverse outcome, then you could get into a situation where it says, well, there's too many white guys in power, let’s execute them ----- “Grok is at least aspirationally a maximally truth-seeking AI, even if that truth is politically incorrect. So, you may have seen some of the crazy stuff from OpenAI and Google Gemini, where it says, generate an image of the founding fathers and it generates an image of diverse women. The problem is if you program an AI and say, the only acceptable outcome is a diverse outcome, and then that's a mandate for the AI, then you could get into a situation where it says, well, there's too many white guys in power, let’s execute them.” Source: @elonmusk , @joerogan , @MarioNawfal