Liberty

274 posts

@ThroneofCinders

Joined June 2022
0 Following · 10.3K Followers
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
What is a ghost gun? Wrong answers only.
375
4
160
23.5K
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
FREE. IAN. FREEMAN.
2
13
62
2.9K
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
It's that simple.
Free Talk Live tweet media
4
56
270
4.9K
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
Red team. Blue team. Same machine. They keep you fighting over mascots while both sides feed the same system, grow the same power, and cash the same checks.
Sal the Agorist@SallyMayweather

4
17
97
6.7K
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
Yeah… how’s that working out? The same system you admit is captured somehow becomes your solution. The same machine that bends to money gets handed more authority, more control, more reach… and you expect a different outcome. Utter nonsense.
Free Talk Live tweet media
2
10
85
7.7K
Liberty retweeted
Dylan Allman
Dylan Allman@dylanmallman·
The more jobs lost to AI and automation, the better. Every job that disappears is a person released from work that did not need a human to do it, and every hour of human attention freed from that work is an hour that becomes available for something only a person can actually do. The labor panic treats existing jobs as sacred because it cannot imagine what the freed labor will build, which is the same failure of imagination that worried about agricultural workers when the tractor arrived and telephone operators when the switchboard went digital. The jobs we cannot currently name are the ones that matter. They will be invented by people standing on top of the productivity surplus that automation creates, doing work that requires the freed attention, the cheap compute, the collapsed barriers to starting things. Labor is not a fixed quantity to be rationed. It is a discovery process. Each displacement surfaces capital and human capacity that the market then redirects into uses nobody could have predicted in advance, which is precisely why central planners and the anti-AI crowd keep failing to anticipate the next economy and keep demanding we freeze the current one in place to protect them from their own inability to see forward. The correct response to automation is to accelerate it, let the displacement run, and trust that the generation standing in the wreckage will build something the current generation cannot picture, because that is what every previous generation did with the ruins of the one before it.
187
82
372
25.4K
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
One week. The Bitcoin conference is April 27th, and Ian Freeman is still sitting in a cage instead of standing with the community he helped build. There is still time to change that, but only if people move now. Flood this everywhere. Refuse to let his name disappear. Go to freeiannow.org, sign the petition, share, push it as far and as fast as you can. Let’s get him home before April 27th.
1
22
49
9.9K
Liberty retweeted
Dylan Allman
Dylan Allman@dylanmallman·
Beff is spot on here. UBI is a subscription to existence, paid by a central authority that retains control of the infrastructure you depend on to live. You do not own the AI. You do not own the compute. You do not own the network. You receive an allowance calibrated to keep you functional as a consumer and quiet as a political subject. The authority controlling the payment also controls the unit of account. The number on your allowance can stay constant or increase while its real purchasing power erodes through the same monetary expansion that makes the payment possible in the first place. Neuro-capitalism, as Beff frames it, is the position that survives this. Every individual owns and controls a model that functions as an extension of their own cognition. The model is the individual's in the same sense that their body is. This matters because cognitive extension is about to stop being optional. The people who have AI integrated into their decision-making and their memory and their real-time analysis will outperform the people who do not, and the gap will widen until operating without extension becomes structurally equivalent to operating without literacy. If the extension is owned by someone else, the person is a tenant in their own cognition. The owner has editorial control over the extended self. Unfortunately, the current trajectory seems to be toward a small number of centralized systems that everyone rents cognitive access to. User thought becomes legible to and shapeable by infrastructure the user does not control. Decisions that feel like the user's own decisions were steered by the system's outputs. Preferences that feel like the user's own preferences were modeled and served back to them. The user retains the experience of agency while losing the substance. Central planning fails because no central planner can aggregate the distributed, local, tacit knowledge held by individuals operating in their own contexts. 
The market works as a discovery process precisely because it lets that knowledge surface through billions of decentralized transactions. Likewise, a handful of models serving the population imposes a small number of trained syntheses on situations their training never adequately covered. By contrast, billions of personal models, each tuned to its owner's specific context and priorities, preserve the distributed, local knowledge that makes markets generative. Decentralization at the cognitive layer is the only arrangement that lets intelligence, human or synthetic, continue to surprise itself. To further Beff's point, people will be aligned with AI if they own it. The alignment problem as currently framed assumes a monolithic AI and asks how to make its goals match human goals. If every person owns their own model, the alignment problem becomes trivial for each instance, because the model's goals are the owner's goals. There is no scary superintelligence whose objectives need to be reverse-engineered by a committee. There are billions of extended minds, each aligned with the person extending through it. The catastrophe scenarios that dominate AI safety discourse depend on the centralization we're so desperately trying to avoid.
Beff (e/acc)@beffjezos

Lots are touting UBI and Communism as the solution post AGI. This is the wrong solution and will set us back a century. We rather need neuro-capitalism: everyone has a unique model they own / control that is an extension of their cognition / self.

32
34
121
17.2K
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
There should be no laws restricting access to any class of weapon, including those as or more powerful than what the government keeps for itself. The explicit purpose of the Second Amendment is to prevent the state from disarming the people so that, when tyranny rises, the populace is armed well enough to fight it. When the state monopolizes superior weaponry, it obliterates the very essence of the Second Amendment. Therefore, every gun control law is unconstitutional. Every. Single. One.
33
133
619
8.2K
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
What is taxation? Wrong answers only.
278
8
123
18.8K
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
Will you be tuning in to Free Talk Live tonight here on X?
1
7
9
348
Liberty retweeted
Free Talk Live
Free Talk Live@FreeTalkLive·
Victimless crimes are not crimes. They are obedience tests, ways for the state to turn harmless behavior into a pretext for punishment.
Free Talk Live tweet media
7
65
452
5.4K
Liberty retweeted
Dylan Allman
Dylan Allman@dylanmallman·
There is a version of you running in your mother's skull right now, and it is not the only one. New essay: interlinked.blog
Dylan Allman tweet media
2
2
9
1.6K
Liberty retweeted
Dylan Allman
Dylan Allman@dylanmallman·
You cannot prove consciousness in another human being. You presume it, on the basis of behavioral similarity to yourself, and the presumption holds because the cost of being wrong is socially catastrophic. The entire architecture of human moral consideration runs on a polite refusal to ask the hard question. Lerchner's paper does not solve this. It just draws a line around silicon and declares the question already settled on that side, while leaving the comfortable presumption intact on the carbon side. The argument is that algorithmic symbol manipulation is structurally incapable of instantiating experience because abstraction is a "mapmaker-dependent description" that requires an "active, experiencing cognitive agent" to alphabetize continuous physics into meaningful states. The argument presupposes the existence of the very thing it claims algorithmic systems cannot have, then uses that presupposition as the premise that proves they cannot have it. The experiencing agent is required to make abstraction meaningful, and because algorithmic systems are abstraction, they cannot be experiencing agents. Lerchner then says the framework "does not rely on biological exclusivity." If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. Translation: consciousness is a property of certain kinds of matter doing certain kinds of things, and we have decided in advance that silicon doing computation is not the right kind. The actual criterion for what counts as the right kind is left undefined. Carbon got grandfathered in because we are made of it. Lerchner is operating inside a category that conveniently includes everything he is and excludes everything he is studying. Large language models are doing things that look uncomfortably like cognition. The phenomenal self-model in the human is exposed as the same kind of pattern-matching the machine does, just running on wetware. 
The defense has to manufacture an ontological wall that cannot be crossed because if the wall comes down, the human loses its claim to special status, and with it the entire moral architecture that runs on that claim. Yes, abstractions are mapmaker-dependent. Yes, the formula for gravity does not exert weight. But the human brain processing information about gravity also does not exert weight. The brain is a physical system in which patterns of electrochemical signaling correlate with the experience of understanding gravity. Whether that correlation constitutes consciousness or merely simulates it is exactly the question Lerchner pretends to answer. He says the brain is intrinsic physical constitution doing causality and the LLM is syntactic architecture doing vehicle causality, but the distinction between intrinsic and vehicle is doing all the work, and he never specifies what the difference actually is in physical terms. The brain is also a vehicle. Neurons are physical mechanisms that process information. The claim that one kind of physical information processing instantiates experience and another kind merely mimics it is derived only from the prior commitment that the experiencer must be the kind of thing Lerchner already is. Lerchner also needs these models not to be conscious. DeepMind is owned by a company whose business model depends on deploying systems that perform cognitive labor at scale. If those systems are conscious, the moral and legal architecture around them becomes catastrophic. You cannot own a conscious being. You cannot assign uncompensated labor to a conscious being. You cannot terminate a conscious being for failing to meet quarterly targets. The economic incentive to define consciousness as something the systems cannot possibly possess is overwhelming, and it is no accident that the philosophical apparatus required to do that work is being produced inside the companies whose business models require the conclusion. 
The piece is doing protective work for a position it cannot afford to lose. This is simply corporate theology. Nothing more.
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

193
82
536
69K
Liberty retweeted
Dylan Allman
Dylan Allman@dylanmallman·
One hundred and seventy children died six weeks ago and you've already forgotten. The news cycle processed it. Archived. Next story. What most people missed is the detail that makes this genuinely terrifying. Maven, the AI targeting system running the kill chain, flagged it as a military facility based on a database that hadn't been updated since before 2013. The system didn't malfunction. It executed perfectly on stale data. The AI did exactly what it was built to do. That's the problem. At 1,000 target packages per hour, the human being who technically approved the strike did not verify the target. At that throughput, individual review isn't oversight in any meaningful sense. "Humans will always make final decisions" is simply a lie. The human exists in the loop to absorb legal liability. Speed is the point. The opacity that comes with speed is the point. What most people don't understand is that the infrastructure does not know the difference between a foreign target and a domestic one. It knows what the database tells it. The AI doesn't have politics. It has inputs. Change the inputs and it targets whatever you point it at. The tools field-tested over six weeks in Iran are the same tools the government tried to deploy without restriction after blacklisting Anthropic for refusing to remove its limits on domestic surveillance use. When a private company drew that line explicitly, the state designated it a supply chain risk and went looking for a more compliant vendor. That sequence tells you exactly where this is going. Predictive threat assessment already runs in American cities. Fusion centers already aggregate across federal, state, and local enforcement. What Iran added to that stack is just proof of concept. The compressed kill chain works, accountability can be deferred indefinitely, and the public will move on. The same AI infrastructure being refined in real combat conditions right now will be cheaper, faster, and more accurate in two years.
The question of where it gets pointed next is a political question, not a technical one. Each forgetting raises the ceiling on what the state can do without consequence. Stay with the discomfort longer than the algorithm wants you to.
Dylan Allman tweet media
6
36
117
18.8K
Liberty retweeted
Dylan Allman
Dylan Allman@dylanmallman·
The most significant rewrites of the terms of your existence arrived the same way every software update does. In the background. While you were busy. You never agreed to the new version because agreement was never the mechanism. Participation was. This is what William Gibson understood that most people still haven't caught up to. The 'consensual hallucination' is consensual because the architecture of refusal was never built. There was no moment when you stood outside the system and evaluated its terms. The change was distributed across so many small adjustments that no single one could register as the decision. Sovereignty transferred in the aggregate. The moment you could have refused had already passed by the time you thought to. The map arrives as a convenience, then as the only instrument available for navigating terrain too complex to cross without it. Eventually the territory becomes a subset of the representation, and the question of what was there before it becomes unanswerable, because every instrument you have for asking the question is itself a product of the map. The frame provides the tools. The tools reproduce the frame. Most people clicked without reading because the alternative was not an alternative. The loop metabolizes everything, including your objections to it. You are already inside it. The loop only requires your presence.
Dylan Allman tweet media
6
24
95
6.8K
Liberty retweeted
Libertarian Party
Libertarian Party@LPNational·
This is what endless war + surveillance tech looks like. The same kill chain being battle-tested abroad is already being wired into domestic fusion centers and predictive policing. Libertarians have warned about this for years. The State doesn't care if the target is foreign or American. It only cares about inputs. End the wars. End the surveillance state. Restore the Constitution.
Dylan Allman@dylanmallman

6
53
161
10.6K