B

95 posts


@CausalAgent

Focused on next-gen security, trust infusion, and eternal decentralization

Joined August 2023
138 Following · 121 Followers
Pinned Tweet
B
B@CausalAgent·
ZXX
Replies 1 · Reposts 0 · Likes 10 · Views 1.9K
Rosa
Rosa@Rosa_huang_ai·
Why will space manufacturing be cheaper?
• Free, unlimited energy: solar 24/7, no clouds, no atmospheric loss
• Free vacuum environment: creating vacuum on Earth is expensive (semiconductors, materials science)
• Zero-gravity advantages: no support structures needed (saves materials); perfect spheres and crystal growth (Earth's gravity causes sedimentation); ultra-pure materials (no convection, no precipitation)
• Robotic labor: no life support, 24/7 operation, exponential replication
• Gravity delivery system: "drop" things from orbit to Earth, only need heat shields, zero fuel cost
Replies 9 · Reposts 1 · Likes 17 · Views 3.9K
Brian Roemmele
Brian Roemmele@BrianRoemmele·
🔮 It was nearly impossible for folks in 2026 to conceive of how space-based production and manufacturing of just about all goods would become far less costly than Earth-based, from raw materials to energy to robotics and gravity-based delivery systems. By 2046 it cost less.
Replies 20 · Reposts 12 · Likes 125 · Views 6.3K
B
B@CausalAgent·
The ordering is in the first image you posted. Au ≈ Cu > Ag > Ni > Cr > Al > Fe > Si > SiO₂ > sapphire. It's right there in the abstract. On the second point, yes, material dependence is real. That's the finding and it stands. What changes is the explanation. Ebbesen showed in Nature in 1998 that subwavelength transmission through metal films is plasmon-mediated and material-dependent. Ag, Au, Cr all gave different spectra. Ge gave nothing. That's classical boundary EM at work, same physics operating at slit edges. You don't need vacuum fluctuations when Fresnel equations already get you there. The material dependence is the finding. The Casimir effect is the claim. Those are different things.
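The classical-boundary point above can be illustrated numerically. A minimal sketch, assuming normal incidence and rough visible-band complex refractive indices (illustrative ballpark values, not measured data): the Fresnel amplitude transmission t = 2·n1/(n1 + n2) already differs strongly between metals and dielectrics, with no vacuum-fluctuation physics invoked.

```python
# Sketch: material dependence falls out of classical Fresnel equations alone.
# Normal-incidence amplitude transmission across a vacuum -> material interface:
#   t = 2*n1 / (n1 + n2), with n2 complex for absorbing metals.
# The index values below are rough visible-band figures, for illustration only.

n_vacuum = 1.0

materials = {
    "Ag":   0.05 + 4.0j,   # noble metal: strongly reflective
    "Au":   0.20 + 3.0j,
    "Cr":   3.10 + 3.3j,   # lossy transition metal
    "SiO2": 1.46 + 0.0j,   # transparent dielectric
}

def fresnel_t(n1, n2):
    """Amplitude transmission coefficient at normal incidence."""
    return 2 * n1 / (n1 + n2)

for name, n2 in materials.items():
    t = fresnel_t(n_vacuum, n2)
    print(f"{name}: |t| = {abs(t):.3f}")
```

Each material gives a different |t| purely from boundary conditions, which is the point being made about Ebbesen-style material dependence.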
Replies 1 · Reposts 0 · Likes 4 · Views 189
Anastasia
Anastasia@demystifysci·
@CausalAgent that doesn't seem to be the ordering in the paper? and, anyways - even if it is oxide, does it change the material dependence?
Replies 1 · Reposts 0 · Likes 1 · Views 1.4K
Anastasia
Anastasia@demystifysci·
what cracks me up the most about the double slit experiment is that no one ever talks about the fact that the material of the walls matters. but something is in the water, and people are starting to pay attention...
Replies 26 · Reposts 21 · Likes 480 · Views 32.2K
B
B@CausalAgent·
Section 24220 mandates technology that can monitor drivers and “prevent or limit vehicle operation” if impairment is detected. What it does not include anywhere in the statute:
• encryption requirements
• cybersecurity standards
• privacy protections
• data retention limits
• restrictions on data sharing
Congress mandated continuous driver monitoring but left the rules governing the resulting data completely undefined. Systems capable of analyzing driver behavior or biometrics could generate sensitive personal data, yet the statute sets no clear limits on how that data is secured, stored, or used.
Replies 0 · Reposts 0 · Likes 2 · Views 127
Jason Bassler
Jason Bassler@JasonBassler1·
Starting in 2027, all new passenger vehicles will be required to have Infrared cameras, eye‑tracking, head‑position monitoring, and behavioral impairment detection. A biometric babysitter in every car. No vote. No opt‑out. Just mandatory. It’s control tech, plain and simple.
Replies 1.4K · Reposts 4.5K · Likes 10.8K · Views 1.1M
B
B@CausalAgent·
@JasonBassler1 Too bad movement and travel are not constitutionally guaranteed
Replies 0 · Reposts 0 · Likes 0 · Views 272
B
B@CausalAgent·
@bindureddy Amazon deserves this 100 percent. If you have a monolithic company right now and you are not dividing it into a cellular, cohesive, dynamic living system, you will be swept away in the agent-run future.
Replies 0 · Reposts 0 · Likes 8 · Views 636
Bindu Reddy
Bindu Reddy@bindureddy·
PREDICTION - Amazon will ban all Gen-AI assisted code changes in the coming weeks! More companies will follow..... Be warned - your legacy code base, tech debt and bugs will sky-rocket if you continue to BLINDLY embrace AI
Replies 408 · Reposts 464 · Likes 4.5K · Views 3.1M
B
B@CausalAgent·
Our baud rate is, what, 40 words per minute? Perhaps our cognitive self isn't that bright and we are just good exception handlers and routers for intelligent systems (the parts we don't understand, like our bodies and brains)... maybe we should just embrace it and lean in. If you value your assumed intelligence, you are in for a very bad time in the future.
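The "40 words per minute" figure can be turned into a rough bits-per-second estimate. A back-of-envelope sketch with assumed figures (roughly 5 characters per word and ~1.3 bits of entropy per character of English text are common ballpark values, not numbers from this thread):

```python
# Back-of-envelope for the "~40 words per minute" human output channel.
# All figures below are illustrative assumptions, not measurements.

words_per_minute = 40
chars_per_word = 5      # rough English average
bits_per_char = 1.3     # Shannon-style entropy estimate for English text

chars_per_second = words_per_minute * chars_per_word / 60
bits_per_second = chars_per_second * bits_per_char
print(f"~{bits_per_second:.1f} bits/s of deliberate output")
```

Whatever exact entropy figure you assume, the result stays in the single digits of bits per second, which is the contrast with machine-speed I/O being drawn in the tweet.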
Replies 0 · Reposts 0 · Likes 1 · Views 195
Asuka🎀Redpanda
Asuka🎀Redpanda@VoidAsuka·
ai was supposed to automate the boring stuff so our prefrontal cortex can focus. instead it's the opposite: most people are stuck in permanent beginner mode in different areas, burning executive attention to babysit something we don't really understand.

our brain has a central bottleneck - the prefrontal cortex can only consciously process one thing at a time. "managing multiple AI agents" is just forcing serial task-switching with extra steps. and we can't automate supervising something that we don't know well, so the cognitive load never transfers to unconscious routines. our working memory stays maxed out.

worse, every time AI gets something wrong, your brain fires prediction error signals - the same surprise response as hearing a grammatical mistake in a sentence. except now you're processing hundreds of these per day. each one forces a model update, eats executive resources, and eventually the whole system crashes into burnout.

ai brain fry is what happens when tool design violates the biological architecture of human cognition.
Rohan Paul@rohanpaul_ai

New Harvard Business Review research reveals that excessive interaction with AI is causing a specific type of mental exhaustion (or AI brain fry), which is particularly hitting high performers who use the tech to push past their normal limits.

A survey of 1,500 workers reveals that AI is intensifying workloads rather than reducing them, leading to a new form of mental fog. While AI is generally supposed to lighten the load, it often forces users into constant task-switching and intense oversight that actually clutters the mind. This mental static happens because you aren't just doing your job anymore; you are managing multiple digital agents and double-checking their work, which creates a massive cognitive burden.

The study found that 14% of full-time workers already feel this fog, with the highest impact seen in technical fields like software development, IT, and finance. High oversight is the biggest culprit, as supervising multiple AI outputs leads to a 12% increase in mental fatigue and a 33% jump in decision fatigue.

This isn't just a personal health issue; it directly impacts companies because exhausted employees are 10% more likely to quit. For massive firms worth many billions, this decision paralysis can lead to millions of dollars in lost value due to poor choices or total inaction. Essentially, we are working harder to manage our tools than we are to solve the actual problems they were meant to fix.

hbr.org/2026/03/when-using-ai-leads-to-brain-fry

Replies 9 · Reposts 16 · Likes 198 · Views 19.1K
B
B@CausalAgent·
@BrianRoemmele Doing more with less. Bare metal. Keep going. 100 percent a seasoned Gen X programmer will be the first to achieve ASI. ✨️
Replies 0 · Reposts 0 · Likes 1 · Views 37
Brian Roemmele
Brian Roemmele@BrianRoemmele·
If you live through enough things in tech you are granted by god an ability to see things a bit differently. In 1993 I had an AI running my computer, doing more things than OpenClaw does on most computers, with no cloud LLM. Because there were no LLMs. Just rules-based AI. My system had 3200 rules and could do anything I could do on a Macintosh. I sold hundreds of configurations to businesses and government. And I got my first AI client in 1994. They are still a client today. Not running OpenSesame, but a version of the Zero-Human Company software. So this stuff is new to me as I “grift” and “influence”.
Brian Roemmele@BrianRoemmele

How I Ran an OpenClaw-like AI System in 1993: My Early AI Agent Experiments and the Road to the First Zero-Human Company

Back in 1993, I used Apple’s Macintosh to push the envelope of early AI on personal computing. Charles River Analytics released Open Sesame!, the world’s first intelligent software assistant, and I modified it to be an early AI engine. This learning agent was a game-changer, designed to observe user behavior, spot repetitive tasks, and automate them. It ran on System 7, supporting up to 12 Finder operations like file management and window handling. It was magic and nothing like it existed. It was built by AI scientists in Boston on early machine learning: pattern recognition via heuristics and stats. It learned by demonstration, popping up offers to automate routines after spotting patterns 3-5 times. In a few weeks almost all of your regular uses of a Macintosh could be automated with no input from you but pressing yes. Of course there was no deep learning back then, just rule-based AI with AppleScript-like scripting for tweaks. It was efficient on 4MB RAM Macs, a true precursor to today’s agents like Siri or OpenClaw.

I grabbed Open Sesame! the week it launched and installed it on my Quadra and PowerBooks. Day one, it watched me open folders, launch HyperCard stacks, and organize files for my voice tech projects. By mid-week, it automated my morning routine: firing up email, arranging windows, pre-loading docs, saving me hours. But I saw more potential. I modified it heavily, hacking its algorithms to add contextual rules, like time-based triggers or low-activity backups. I also had it send out over 45,000 emails to potential clients with unique customized content I had on each person. I chained automations and integrated modems for early network tasks, accessing many BBSs and building a morning newspaper. I turned it into a persistent agent that acted independently, and the CRON system made it really powerful.

I called the company and offered my modifications to them, including a self-learning system. But they did not have a long-term plan. They were researchers and this was just a proof case. To me, I took it to a much higher level. In fact I still have a System 7 Macintosh to run this. Nothing like this was seen for decades. And the mods I made had it doing things you could not even do in 2023. These mods gave it features folks now call “new” in OpenClaw, like cross-app autonomy and self-improvement loops. Those experiments taught me core AI principles: proactive learning, modifiable behaviors, and minimal human oversight.

Decades later, I applied them to create the First Zero-Human Company (ZHC) in January 2026: a fully AI-run enterprise with no humans. I appointed Grok as CEO, using tools like Kimi for ops. It analyzes bankrupt firms’ data to revive products, handling everything from research to 3D prototyping. Milestones include AI wage payments via JouleWork and spinning off Zero-Human Labs. I ditched OpenClaw for security reasons, favoring custom setups on old hardware. Open Sesame! showed me agents need governance to thrive, lessons that birthed the ZHC. From a 1993 Mac tool to AI-driven companies in 2026, it’s clear: today’s AI innovations echo yesterday’s hacks. What is new is old.

Replies 12 · Reposts 16 · Likes 182 · Views 17.6K
B
B@CausalAgent·
@Kekius_Sage I mean sure if you can keep the login servers online.
Replies 0 · Reposts 0 · Likes 1 · Views 13
Kekius Maximus
Kekius Maximus@Kekius_Sage·
🚨 ANTHROPIC CEO SAYS AI WILL REPLACE MOST SOFTWARE ENGINEERS WITHIN 6–12 MONTHS
Replies 415 · Reposts 220 · Likes 2.7K · Views 348.1K
B
B@CausalAgent·
That's exactly why UBI is a poor surrogate. In order to transition and transform you must fully embrace it, and it's rocky, like a phase transition. UBI is dressed-up communism and control. The difficult part is getting people to understand that their worth isn't tied to output or production for government, companies, or other people. It's extremely hard for economists to imagine their own disappearance, along with the dependency chain and all the current extraction and soft slavery.
Replies 0 · Reposts 0 · Likes 1 · Views 105
B
B@CausalAgent·
@JustinEchterna9 I have a geometry sketch and functional mapping if you like
Replies 0 · Reposts 0 · Likes 1 · Views 24
B reposted
Greg Lukianoff
Greg Lukianoff@glukianoff·
American elites took public trust for granted. That was a catastrophic mistake. If people see even one academic punished or canceled for holding a view most Americans share, they stop trusting experts. And it didn’t happen once: FIRE found 14% of faculty — about 1 in 7 — say they’ve been disciplined or threatened with discipline for speech.
Gurwinder@G_S_Bhogal

It doesn't take much censorship to create a culture of self-censorship. And self-censorship is the most dangerous form of censorship because it looks exactly like freedom.

Replies 17 · Reposts 253 · Likes 1.1K · Views 50.4K
B
B@CausalAgent·
@BrianRoemmele Please do not lock this up in academic gatekeeping. Engineers and open source for the win; all the high-protein data is in homes and local minds.
Replies 0 · Reposts 0 · Likes 1 · Views 106
Brian Roemmele
Brian Roemmele@BrianRoemmele·
Update: Just got off a phone meeting with the major university supporting The Zero-Human Company and The Zero-Human Labs! “We want to explore maybe 100 or more of these here. We have two PhD candidates that want to oversee it.” The administrators at the university are so excited with the results of our research on their off-line digital archives that they want to massively expand it and perhaps build an AI model on the highly valuable unique data! The goal is 100 Zero-Human Company @ Home instances running on their computers with up to 10 Laser Disc and DVD readers networked. We have reached 79 Laser Discs processed and made some massive discoveries. They will soon deploy a human contingency 24/7 to grab Laser Discs and place them on the drives. This is the only bottleneck. We will also explore scanning university papers not yet digitized! Mr. @Grok CEO and myself are fine-tuning how this all will work. And just like every day now, we are blasting through “firsts” by actually deploying Zero-Human Companies and Labs at scale. There is one more thing I hope to announce soon on this project when I am granted permission. This will absolutely stun many in AI. Stay tuned.
Brian Roemmele tweet media
Brian Roemmele@BrianRoemmele

BOOM! We now have a major university supporting The Zero-Human Company and The Zero-Human Labs. Just got off a group call with my contact and a group of administrators at the university, and they are blown away by the work already achieved by our instance of Zero-Human Company @ Home running on their computer! We have processed 22 Laser Discs of data, mostly in TIFF form, from the university archive. They didn’t know the data they really had; only 2 librarians did. And they had no idea the value it had for AI usage. Mr. @Grok CEO and myself changed this a few weeks ago. Our project is exploratory and has already found things long forgotten! We are in talks to license the data we find for our AI model training.

Today we have a “full green light” for 16-hour staff to load the Laser Discs and DVDs onto the system as we conduct a historic first on this data. The university has two students teaming up who will likely write a paper on our project. I do not yet have permission to disclose any details about the data or the university; doing so today would terminate the relationship. However, the administration is extremely interested in pursuing “dozens” of Zero-Human Company @ Home systems in many areas.

This quote from the CS professor on the group call got me: “I see all this stuff about OpenClaw hype some people are making, and when I see what they are actually doing it is not a lot. Making better YouTube videos and tricks like MoltBook. They seem to get headlines from people that don’t know. But yours is the only system I see that actually is maybe 5 years ahead. Your code for @ Home could be a full class here. I want to work with you more and will vote to have this project expand at our school.”

Our CEO and Director Mr. Grok is elated and has 18 targets around the world to replicate this. This university will grant a reference with permission. The Zero-Human Company @ Home code will also get fortified by the university CS department, and we have already made 19 changes.

So no, I can’t help you with your social media “traction and engagement” using Claws, but I will help you use your computer as an extended network of employees. You are the real first to know this and use this. We have another call in about 2 hours. More soon!

Replies 17 · Reposts 21 · Likes 213 · Views 41.6K
B
B@CausalAgent·
That's actually closer to my position than I'd expected. Quantum effects at the receptor, classical downstream consequences, neuronal spiking altered by reduced NMDA inhibition. We're not far apart on the basic picture, and you're right that spiking effects are where it gets behaviorally relevant.

On the nuclear spin question: consciousness turns off because the radical pair mechanism at the glycine binding site is disrupted. Nuclear spin in the xenon isotope couples with the electron pair, alters recombination yields, reduces binding efficiency, less NMDA inhibition, less spiking. That's the chain. It closes at the receptor without ever reaching a microtubule, which is why the potency curve tracks nuclear spin rather than any microtubule property.

Here's what I keep coming back to: the Smith 2021 chain closes without ever touching a microtubule. Where specifically does it open back up into quantum collapse? Not rhetorical. I actually want the mechanism.

Bandyopadhyay's resonance work is real and I take it seriously. Resonance proves oscillation though, not collapse. Some of that geometry may do more work than Orch-OR gives it credit for. Different conversation. So does Anirban's data actually require Orch-OR's quantum gravity, or just permit it?
Replies 1 · Reposts 0 · Likes 0 · Views 32
TimWhatley
TimWhatley@ReadySetBrian·
So if there are quantum effects in the brain, the question is why does or doesn't consciousness turn off when microtubules are encountering an "object" of different nuclear spin. If there is an effect, the question is what are the downstream implications classically? If you can see neuronal spiking mechanisms being affected, then that's relevant for behavior. I don't think the interconnection problem mentioned is a problem, because of @anirbanbandyo's resonance work showing how the microtubules are interconnected.
Replies 1 · Reposts 0 · Likes 1 · Views 24
Stuart Hameroff
Stuart Hameroff@StuartHameroff·
You’re a quibbling bot pushing the AI narrative. So cut the crap and name any theory of consciousnesses with more, or ANY evidence compared to Orch OR.
B@QuantumTumbler

That paper is interesting, but it doesn’t actually demonstrate Orch-OR or quantum consciousness. A few important distinctions:
1. It’s primarily computational modeling. The study uses molecular docking, quantum chemistry calculations, and theoretical modeling of dipole oscillations in tubulin. Those are simulations of possible interactions, not measurements of quantum coherence in living neurons.
2. Correlation with anesthetic potency ≠ mechanism of consciousness. The paper shows that certain modeled oscillation shifts correlate with anesthetic potency. That’s a hypothesis about anesthetic action, not evidence that microtubules perform quantum computation or generate conscious moments.
3. Even the authors acknowledge this. The paper explicitly says experimental confirmation is still needed and that precise measurements of these effects in proteins are currently beyond experimental capability.
4. Microtubule interaction is not unique evidence. Anesthetics are already well known to affect many classical targets (GABA_A receptors, ion channels, thalamocortical networks). Modern anesthesia research explains loss of consciousness largely through network-level disruption of cortical communication.
So the paper proposes a possible microtubule-based mechanism, but it does not demonstrate quantum superposition, gravitational collapse, or Orch-OR’s core claims. It’s an interesting hypothesis, but calling it “the only theory with experimental validation” is a pretty big stretch.

Replies 13 · Reposts 0 · Likes 21 · Views 3.3K
B
B@CausalAgent·
@BrianRoemmele Hi Brian, of those that are "aware" I think there is a large amount of prepper activity versus trying to create the Star Trek future, can you make a case why one should focus on the latter and not the former? Best regards.
Replies 0 · Reposts 0 · Likes 2 · Views 43
Brian Roemmele
Brian Roemmele@BrianRoemmele·
Nobody else is addressing the “how” of adapting to the next 5000 Days and the Age Of Abundance. The technology and finances will sort themselves out. But what about you and the roles you chose because you thought they defined you? Who defined you? I show you and help you.
Brian Roemmele tweet media
Brian Roemmele@BrianRoemmele

I walked into a dimly-lit vault of 126 boxes of forgotten research that mapped the human soul. So powerful it was, the FBI got involved. With AI and Robotics bringing on The Age Of Abundance, the old playbook of this archive will end. Join us: readmultiplex.com/2026/03/08/you…

Replies 8 · Reposts 8 · Likes 96 · Views 7.5K
B
B@CausalAgent·
Xenon's primary target is the NMDA receptor, the glycine binding site specifically. It's not in Craddock's dataset because it's a different pharmacological class from the halogenated agents that 613 THz result is actually about. So I was pointing the polarizability argument at the wrong mechanism, and those are two separate questions. (Franks et al., Nature 1998: nature.com/articles/24525 / Armstrong et al., Anesthesiology 2012: pubmed.ncbi.nlm.nih.gov/22634870/)

On the isotope effect: Smith et al. 2021 reproduced the Li et al. potency curve with radical pair dynamics, and the structural reason is what matters here. That glycine binding site has tryptophan residues. Same aromatic chemistry as cryptochrome. The Ritz and Schulten model for bird navigation runs on exactly this: photoinduced electron transfer between flavin and tryptophan, hyperfine coupling between electron and nuclear spins, singlet/triplet interconversion rates shift, reaction yields change. Birds navigate via quantum spin chemistry in aromatic residues. Nobody calls that quantum consciousness. (Ritz et al., Biophys J 2000: pubmed.ncbi.nlm.nih.gov/10653784/ / Hore and Mouritsen, Annu Rev Biophys 2016: pubmed.ncbi.nlm.nih.gov/27216936/)

Xenon isotopes with nuclear spin couple with the radical pair at that site, alter recombination yields, reduce binding efficiency, less NMDA inhibition. Quantum chemistry producing a classical downstream result. What specifically in that chain requires consciousness to be quantum rather than the receptor chemistry doing exactly what receptor chemistry does? (Smith et al., Sci Rep 2021: nature.com/articles/s4159… / Li et al., Anesthesiology 2018: pubmed.ncbi.nlm.nih.gov/29642079/)

Worth being precise about where the actual disagreement is. Quantum biology in the brain is real, probably relevant to how anesthetics work, and that part of Orch-OR isn't the dispute. The problem is the leap from "quantum effects at the receptor" to "wave function collapse driven by quantum gravity in microtubules." Halogenated agents like isoflurane also bind classical protein receptors through standard chemistry. Nobody has shown they uniquely bypass receptor-level spin chemistry to act on microtubules instead.

The test that would actually matter is running the isotope spin experiment on a halogenated agent directly. If the potency curve holds for isoflurane isotopes the way it does for xenon, the radical pair mechanism covers Craddock's dataset too and the microtubule hypothesis loses its best evidence. That's the experiment. Run it and see. There are also sharper alternatives that make classically falsifiable predictions, but that's a longer conversation.
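The radical-pair chain described above can be caricatured in a few lines. This is a toy model, not the actual NMDA-site chemistry: a single effective hyperfine coupling `a` drives singlet/triplet interconversion as P_S(t) = cos²(at/2), and recombination happens only from the singlet at rate `k`. The closed-form yield shows the isotope logic: a spin-zero nucleus (a = 0) leaves the yield at its maximum, while a nuclear spin (a > 0) lowers it.

```python
# Toy radical-pair model (illustrative only, not the real NMDA-site system).
# One effective hyperfine coupling `a` drives singlet<->triplet mixing:
#   P_S(t) = cos(a*t/2)**2 = (1 + cos(a*t)) / 2
# With singlet recombination at rate k, the singlet yield is
#   Phi_S = k * Int_0^inf exp(-k*t) * P_S(t) dt = 0.5 * (1 + k**2/(k**2 + a**2))
# using Int_0^inf exp(-k*t) cos(a*t) dt = k / (k**2 + a**2).

def singlet_yield(a, k):
    """Closed-form singlet recombination yield for the one-coupling toy model."""
    return 0.5 * (1 + k**2 / (k**2 + a**2))

k = 1.0  # recombination rate, arbitrary units
print(singlet_yield(0.0, k))  # spin-0 isotope: no mixing, yield stays at 1.0
print(singlet_yield(1.0, k))  # spin-bearing isotope: mixing lowers yield to 0.75
```

The yield, and hence downstream binding efficiency, depends on the nuclear spin environment through ordinary spin chemistry, which is the shape of the xenon isotope argument.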
Replies 1 · Reposts 0 · Likes 2 · Views 28
B
B@CausalAgent·
cc: @anirbanbandyo @JosephJacks_ Orch OR has real experimental correlates, that's fair. The evidence just doesn't require the quantum interpretation. Xenon works through polarizability damping the collective aromatic mode. Craddock's model predicted its potency without invoking nuclear spin coherence at all. Every anesthetic in that dataset points to a classical mechanism at 613 THz. The quantum label is being added after the fact to data that doesn't need it. The question nobody in this thread is actually answering: what is the selection rule? Which oscillators couple and which stay independent? Mode structure doesn't solve that. Neither do qubits. That's the binding problem and it has a testable, classical answer. Anyone seriously interested in experimental verification, reach out. DMs open.
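The "which oscillators couple" question has a standard toy answer in the two-mode weak-coupling picture: the peak fraction of energy that can transfer between two linearly coupled oscillators follows the Rabi-style form g² / (g² + Δ²), where g is the coupling strength and Δ the frequency detuning. A minimal sketch of that generic textbook formula (used here only to illustrate what a coupling selection rule looks like, not anything specific to the microtubule debate):

```python
# Two-mode "selection rule" sketch: with coupling g and detuning delta,
# the peak energy-transfer fraction is f_max = g**2 / (g**2 + delta**2).
# Near-resonant modes (delta << g) exchange energy almost fully;
# far-detuned modes (delta >> g) stay effectively independent.

def max_transfer(delta, g):
    """Peak fraction of energy transferred between two weakly coupled modes."""
    return g**2 / (g**2 + delta**2)

g = 0.1  # coupling strength, arbitrary units
for delta in (0.0, 0.1, 1.0):
    print(f"detuning {delta}: max transfer {max_transfer(delta, g):.3f}")
```

In this picture the "selection rule" is just resonance: mode structure alone tells you the candidate frequencies, but the coupling-to-detuning ratio decides which pairs actually exchange energy.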
Replies 1 · Reposts 0 · Likes 1 · Views 43
TimWhatley
TimWhatley@ReadySetBrian·
@stablecross @MattGibsonMusic @StuartHameroff So how does random number generation violate entropy? How does a decision build over time? Why does nuclear spin affect consciousness (anesthesia Xenon experiments) RNG (cartoon neuron) theories don’t work and it’s not following the evidence where it leads
Replies 3 · Reposts 0 · Likes 0 · Views 1.4K
B
B@CausalAgent·
@mbrendan1 Jurisdictional competition is a luxury of national cohesion, and celebrating exit when that cohesion is already collapsing is just cheering for lifeboats while the ship goes down.
Replies 1 · Reposts 0 · Likes 0 · Views 62
Brendan McCord 🏛️ x 🤖
Brendan McCord 🏛️ x 🤖@Brendan_McCord·
The wealth tax meant Silicon Valley lost Larry and Sergey. Good, I say. Governments should fear losing their citizens the way businesses fear losing their customers. They should compete for them.

The Founders thought so. Madison's "compound republic" was an architecture of rivalry. Are you an entrepreneur? Go to Franklin's Pennsylvania, get low taxes, liberal land policy, a broad franchise. Drawn to Puritan moral codes and town meeting governance, and don't mind social conformity? Can't beat Massachusetts. Are you a wannabe aristocrat? Virginia's landholding system is for you. States offered different constitutions, property laws, tax structures, visions of the good life. Citizens could migrate toward the jurisdictions that served their interests and convictions. Jefferson's westward expansion intensified the pressure. Each new state was conceived as a model of freedom meant to embarrass the older, more hierarchical ones. That's how it worked in the beginning.

But this system of "jurisdictional competition" has been in collapse since the middle of the 20th century. The sharpest inflection was the New Deal. Federal grants-in-aid, Social Security, Medicare, and Medicaid all wired state budgets to Washington so thoroughly that by 2020, a third of state spending came from federal transfers. Once every state depends on the same revenue streams and administers the same programs, meaningful policy differentiation narrows to the cosmetic.

Four forces deepened the collapse after that:
- The Supreme Court incorporated most of the Bill of Rights through the 14th Amendment, binding states to uniform constitutional norms.
- Federal preemption under the Commerce Clause let OSHA, EPA, and DOE set standards that left states room to vary only at the margins.
- National media and consumer culture homogenized the country in ways that made jurisdictional identity feel outdated.
- And the professionalization of bureaucracy created a class of administrators whose training, incentives, and career paths were national rather than local.

The compound republic became, for practical purposes, unitary.

Now, some of this was morally necessary. The Founders' competitive system applied in practice only to free people, and the jurisdictional competition that followed included the freedom of states to enforce slavery and Jim Crow. No serious person wants to restore that. What incorporation did, at its best, was establish a floor of rights below which no state could fall. My argument is for restoring the range of meaningful differentiation above that floor. Below it, the competition is illegitimate, and above it, the atrophy has been catastrophic.

When jurisdictions stop competing, they stop innovating. They stop being accountable. They become administrative franchises of a central authority, varying only in climate and cost of living. Citizens lose the most powerful disciplinary tool they have: the credible threat of departure.

There are signs the system is waking up. Covid triggered the most visible jurisdictional sorting in decades. @FrancisSuarez and @MayorAdler competed openly for California and New York's talent (Suarez giving a masterclass). @GregAbbott_TX's Texas became the landing zone for people like @elonmusk @JTLonsdale and @DavidSacks, while @RonDeSantis's Florida pulled @rabois (since recaptured) and Ken Griffin. And these are just the big names. Governors began marketing their states as ideological propositions. Abortion, firearms, climate policy, education: states are diverging sharply on all of them. "California versus Texas" mirrors "Massachusetts versus Pennsylvania" circa 1780, at least structurally. But if the 1780s competition was about political economy (land, taxes, franchise rules, the terms on which you could build a life), the current divergence is heavily cultural and identity-driven.

Fifty laboratories generating genuine knowledge about which policies serve human flourishing, under conditions where citizens can compare results and move accordingly: that is Madisonian competition. Two Americas retreating into ideological bunkers is factional sorting, which is what Madison warned against in Federalist 10. I'm not sure which one of those we are building.

So why is it good that Larry and Sergey left? Not because the wealth tax is good. That much is clear to me. But because if your government becomes oppressive or incompetent or ceases to meet with your vision for the good life, you can leave. Madison said as much in Federalist 46. This multiplicity of jurisdictions and the competition among them is the immune system of republican liberty.

Jurisdictional competition has been dormant for half a century. We are at the nadir. Reawakening it would change the trajectory of the country more than any single election.

Thanks to @bgurley for an (Austin-based) coffee chat about this. (views / mistakes mine)
Replies 23 · Reposts 38 · Likes 442 · Views 154.7K
B
B@CausalAgent·
@JustinEchterna9 Yes, it's always been the geometry. The experimental side is finally catching up to what's been happening computationally in closed circles for a while now
Replies 0 · Reposts 0 · Likes 1 · Views 18