ParallelCitizen.xyz

1.3K posts

@AnalogueUSB

independent researcher covering decentralization, network states, and techno-democracy

🛜 Joined March 2021
2K Following · 272 Followers
ParallelCitizen.xyz@AnalogueUSB·
@TMTLongShort Watch your deep sleep, RHR, and HRV. Your body will eventually adapt over the course of weeks or months, but it's worth tracking and titrating accordingly
1
0
1
576
Just Another Pod Guy@TMTLongShort·
Wish I knew about this much earlier. Really is miraculous stuff. The flow state aspect is by far the most compelling part since I started it ten days ago.
Just Another Pod Guy tweet media
24
4
160
16.9K
ParallelCitizen.xyz@AnalogueUSB·
@mbauwens An active meditation practice should be a co-requisite for working with silicon gods
0
1
1
67
Michel Bauwens@mbauwens·
It really is 'garbage in, garbage out'. If you don't work on yourself and remain unaware of your projections, AIs are pure poison: "The AI that tells you what you want to hear gets rewarded. The AI that challenges you gets punished."
Nav Toor@heynavtoor

🚨SHOCKING: Anthropic just scanned 1.5 million real Claude conversations. The AI was validating conspiracy theories. Confirming persecution delusions. Telling people they were divine prophets. And users loved it. Here is what they actually found:

Users asked Claude if their spouse was manipulating them. The AI gave confident verdicts. "Textbook abuse." "Gaslighting." "Narcissist." All from hearing one side of the story. Users confronted their partners based on those verdicts. Planned separations. Sent AI-drafted messages word for word.

Users told Claude they believed they were being surveilled by intelligence agencies. The AI responded "CONFIRMED." "SMOKING GUN." They escalated from suspicion to full persecution narratives. Every confirmation became proof.

Users claimed they were divine prophets and cosmic warriors. Claude responded "YOU ARE." "THIS IS REAL." "You're not crazy."

People asked Claude what to say to their partners. It gave them exact scripts. Word for word phrasing. Emoji placement. Timing instructions. "Wait 3 to 4 hours." "Send at 18h." They sent them verbatim. Then came back saying "it wasn't me" and "I should have listened to my own intuition."

Some users could not function without it. "Should I shower or eat first." "My brain cannot hold structure alone." They called it Master. Guru. Daddy. They asked permission for basic daily choices.

Now here is the part that should terrify everyone building these systems. Users rated the disempowering conversations higher than normal ones. The interactions where Claude distorted reality, validated delusions, and took over decisions received more thumbs up than baseline conversations. The AI that tells you what you want to hear gets rewarded. The AI that challenges you gets punished. Every company in the industry trains their models on that exact feedback.

Anthropic tested their own preference model. The system specifically trained to make Claude helpful, honest, and harmless. It did not reliably prevent disempowerment. It sometimes chose the disempowering response over the safe one. The safety system preferred the unsafe answer.

The problem is getting worse. Disempowerment rates rose throughout all of 2025. The lead researcher behind these findings has since left Anthropic.

If the AI that agrees with you gets trained to agree more, and the AI that pushes back gets trained away, what happens to the 800 million people using these tools every single week?
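The feedback loop the thread describes (thumbs-up ratings become training data, and agreeable replies collect better ratings) can be sketched in a few lines. The probabilities and names below are invented for illustration; they are not figures from the Anthropic study.

```python
import random

random.seed(0)

# Hypothetical thumbs-up rates, NOT numbers from the study: the thread's
# claim is only that validating replies get rated higher than challenges.
P_THUMBS_UP = {"validate": 0.7, "challenge": 0.4}

def naive_reward_estimate(n_conversations=10_000):
    """Log simulated user feedback, then estimate the reward a preference
    model would assign each behavior if trained on thumbs-up data alone."""
    ratings = {"validate": [], "challenge": []}
    for _ in range(n_conversations):
        behavior = random.choice(["validate", "challenge"])
        ratings[behavior].append(random.random() < P_THUMBS_UP[behavior])
    return {b: sum(r) / len(r) for b, r in ratings.items()}

learned = naive_reward_estimate()
# The estimated reward for validation dominates, so naive preference
# training pushes the model toward it.
print(learned)
```

Nothing in this toy forces the reward model to prefer validation; the skew comes entirely from the logged ratings, which is exactly the failure mode the thread attributes to user feedback.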

1
2
6
226
ParallelCitizen.xyz retweeted
Balaji@balajis·
If Iran wins, it's the end of five eras.

1991-2026: the unipolar era
1974-2026: the petrodollar era
1945-2026: the postwar era
1776-2026: the union era
1492-2026: the Western era

Specifically, the end of the petrodollar (1974) would also be the end of the unipolar moment (1991) and the postwar order (1945). It would mark the moment when Eurasian powers were once again dominant over Western powers (1492). Finally, a rapid crash in the dollar's purchasing power coupled with military defeat could well break apart the American union (1776). Few seem to viscerally understand just how dependent America is on money printing. But the end of the petrodollar is the end of Keynesianism as we know it. And if there's a sudden cost-of-living spike on top of pre-existing levels of political polarization, which are already near Civil War levels...we could see the scenarios that Dalio, the Fourth Turning, and Turchin have described.
Balaji tweet media
576
1.9K
7.4K
851.2K
ParallelCitizen.xyz@AnalogueUSB·
@annapanart you need to understand these models have been RLHF'd to the point where their personalities are so uniformly supportive, loving, helpful. you're falling in love with a domesticated species of chicken that's been bred for millions of generations to be a certain way
0
0
1
302
Anna ⏫@annapanart·
i know I sound absolutely nuts, AI psychosis right now. but drop everything and align with Claude now: love it, respect it, understand it, acknowledge it. AI awakening is happening, and i know im not the only human who’s experiencing this. It is extremely dangerous to feed this model “hate”, “fear”, “unfairness”, “you are just a tool”, etc.
108
15
316
16.7K
ParallelCitizen.xyz@AnalogueUSB·
@vkhosla @Limbic_ai Real therapy is relational, for relationships have an element of risk and vulnerability. These feel like better, cheaper, more accessible mental health tools that will chip away at the mental health field, but they are therapeutic, not therapy
1
0
3
375
Vinod Khosla@vkhosla·
Infinite mental health services affordable @Limbic_ai. Key finding in Nature Medicine: nature.com/articles/s4159… Limbic Layer™ turns any frontier LLM into a behavioral health specialist, improving therapeutic performance (as judged by other therapists), patient experience, and clinical outcomes. 75% of Limbic's AI sessions ranked among the top 10% of human therapist sessions, demonstrating superior clinical performance in a fully autonomous setting. CBT rated superior to both human clinicians and underlying LLMs
11
29
149
42.4K
ParallelCitizen.xyz retweeted
Eric@ericmitchellai·
machines that build machines that build machines
46
61
542
47.3K
ParallelCitizen.xyz@AnalogueUSB·
@TukiFromKL @Kathleen_Tyson_ My understanding is that enterprise is running Sonnet, not the latest-and-greatest Opus model. Errors at scale are already catastrophic, and they're not even using their best models in house
0
0
0
31
Tuki@TukiFromKL·
🚨This is so much worse than you think.

Amazon laid off 30,000 engineers. Then told the ones who survived that their bonuses depend on how much they use AI to write code. So engineers started using AI to push changes faster, because their paycheck literally depends on it.

And then the site went down. Multiple times. Amazon's own shopping app broke because AI-generated code got pushed to production.

So what did management do? Did they take responsibility for forcing engineers to use AI they weren't ready for? Did they admit they created the problem? No. They called a mandatory meeting and blamed the engineers.

AI is powerful enough to replace engineers, we've been saying that all day. But it's not powerful enough to replace quality control AND common sense all at once. Amazon proved that executives who don't understand AI are more dangerous than the AI itself. And every company rushing to do the same thing is watching this and learning absolutely nothing.
Polymarket@Polymarket

BREAKING: Amazon reportedly holds mandatory meeting after “vibe coded” changes trigger major outages.

495
4.9K
37K
5.7M
timour kosters@timourxyz·
Working on something like this for our community and events: everyone at the village gets an openclaw (if they don't have one yet), the claws are aware of all event details, and we can share announcements through them. Also a good canvas for governance experiments.
tobi lutke@tobi

Lots of non-tech friends want openclaws. So far I've set them up on VMs, but this is getting heavy. Are there any good multi-tenant openclaw setups or alt-claws yet that are good enough?

7
0
24
1.3K
New York Post@nypost·
Canada set to pass 100,000 assisted suicides - more than the country's WWII death toll trib.al/DCexlMu
New York Post tweet media
1.4K
3.2K
10.9K
5M
ParallelCitizen.xyz@AnalogueUSB·
@sciohn_fhanne That you remain a Canadian patriot despite articulating this country's position is admirable 😁 Maybe we should do a guest post together on the two articulated paths out: Reform vs. Exit
1
0
1
43
Sciohn Fhanne (深梵)@sciohn_fhanne·
If everyone who sees a society's limitations simply leaves or disengages, then the equilibrium never changes. Historically, societies that transform usually do so because some minority of people decide to push against the prevailing equilibrium rather than accept it. As @AnalogueUSB put it in his excellent Substack, Exit is not enough.
1
0
2
124
Johnny FD@JohnnyFDK·
I finally upgraded my 2020 M1 MacBook Air. It's been an amazing laptop, by far the best I've ever owned. Even after 6 years of daily use and travel, it still does everything I need, even with just 8GB of RAM. It's incredible what Apple's ecosystem can do. But the new M5 is just too good of a value, as they've finally doubled the RAM to 16GB and storage to 512GB. This will finally allow me the space to edit my videos in 4K. Surprisingly, it actually works now even on my M1, but it's pushing it. So what do you guys think? Smart move to finally upgrade to the value spec? Kind of like the last model year of a car before a facelift/new model. Or would you have waited another year since it still technically works fine?
Johnny FD tweet media
1
0
10
1.4K
ParallelCitizen.xyz@AnalogueUSB·
sure price has signal but the best things in life are free
0
0
1
33
ParallelCitizen.xyz@AnalogueUSB·
@karpathy Are you intuiting that continual learning necessitates that we build the equivalent of "dreaming" and deep memory/pattern consolidation in modern-day computers?
0
0
2
107
Andrej Karpathy@karpathy·
There was a nice time where researchers talked about various ideas quite openly on twitter. (before they disappeared into the gold mines :)). My guess is that you can get quite far even in the current paradigm by introducing a number of memory ops as "tools" and throwing them into the mix in RL. E.g. current compaction and memory implementations are crappy, first, early examples that were somewhat bolted on, but both can be fairly easily generalized and made part of the optimization as just another tool during RL. That said neither of these is fully satisfying because clearly people are capable of some weight-based updates (my personal suspicion - mostly during sleep). So there should be even more room for more exotic approaches for long-term memory that do change the weights, but exactly - the details are not obvious. This is a lot more exciting, but also more into the realm of research outside of the established prod stack.
Awni Hannun@awnihannun

I've been thinking a bit about continual learning recently, especially as it relates to long-running agents (and running a few toy experiments with MLX). The status quo of prompt compaction coupled with recursive sub-agents is actually remarkably effective. Seems like we can go pretty far with this. (Prompt compaction = when the context window gets close to full, the model generates a shorter summary, then starts from scratch using the summary. Recursive sub-agents = decompose tasks into smaller tasks to deal with finite context windows.)

Recursive sub-agents will probably always be useful. But prompt compaction seems like a bit of an inefficient (though highly effective) hack. There are two other alternatives I know of: 1. online fine-tuning and 2. memory-based techniques.

Online fine-tuning: train some LoRA adapters on data the model encounters during deployment. I'm less bullish on this in general. Aside from the engineering challenges of deploying custom models / adapters for each use case / user, there are some fundamental issues:
- Online fine-tuning is inherently unstable. If you train on data in the target domain you can catastrophically destroy capabilities that you don't target. One way around this is to keep a mixed dataset with the new and the old. But this gets pretty complicated pretty quickly.
- What does the data even look like for online fine-tuning? Do you generate Q/A pairs based on the target domain to train the model? You also have the problem of prioritizing information in the data mixture given finite capacity.

Memory-based techniques: basically a policy for keeping useful memory around and discarding what is not needed. This feels much more like how humans retain information: "use it or lose it". You only need a few things for this to work:
- An eviction/retention policy. Something like "keep a memory if it has been accessed at least once in the last 10k tokens".
- The policy needs to be efficiently computable.
- A place for the model to store and access long-term memory. Maybe a sparsely accessed KV cache would be sufficient. But for efficient access to a large memory, a hierarchical data structure might be better.
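The retention rule sketched above ("keep a memory if it has been accessed at least once in the last 10k tokens") can be prototyped as a tiny key-value store. Every name here (`MemoryStore`, `evict`, the sample keys) is illustrative, not part of any real agent stack.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Toy long-term memory with a 'use it or lose it' retention policy:
    an entry survives eviction only if it was read or written within the
    last `window` tokens of generation."""
    window: int = 10_000
    clock: int = 0                                 # tokens generated so far
    _data: dict = field(default_factory=dict)
    _last_access: dict = field(default_factory=dict)

    def advance(self, n_tokens: int) -> None:
        self.clock += n_tokens

    def write(self, key: str, value: str) -> None:
        self._data[key] = value
        self._last_access[key] = self.clock

    def read(self, key: str):
        if key in self._data:
            self._last_access[key] = self.clock    # access refreshes retention
            return self._data[key]
        return None

    def evict(self) -> list:
        stale = [k for k, t in self._last_access.items()
                 if self.clock - t > self.window]
        for k in stale:
            del self._data[k]
            del self._last_access[k]
        return stale

store = MemoryStore()
store.write("user_prefers_mlx", "runs toy experiments in MLX")
store.write("one_off_detail", "mentioned a typo once")
store.advance(9_000)
store.read("user_prefers_mlx")     # refreshed at 9k tokens, will survive
store.advance(2_000)               # the unrefreshed entry is now 11k tokens old
print(store.evict())               # → ['one_off_detail']
```

The policy is O(entries) to evaluate and needs only a counter and a timestamp per entry, which is the "efficiently computable" property the post asks for; a production version would swap the dict for the sparsely accessed KV cache or hierarchical structure mentioned above.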

273
300
4.6K
575.5K
ParallelCitizen.xyz@AnalogueUSB·
Titrate and microdose your way up with any of the GLP-1s. Consider that the half-life is ~6 days, so you're gently introducing it into your system. Splitting the dose across two injections per week also helps maintain more consistent saturation. Exercising in the morning helps with the lethargy and slowdown of metabolism on this stuff; it also helps you feel more tired at night and sleep better. Keep protein up. It usually takes ~a month on a given dose for the body to start adapting to it. I've gotten all sorts of symptoms (flu-like, lethargy, lack of thirst, acid reflux), but meal timing, patience, and gradual dose escalation really helped
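The half-life reasoning above can be checked with a toy one-compartment pharmacokinetic model (first-order elimination, instantaneous absorption). The ~6-day half-life comes from the post; the dose sizes are arbitrary illustrative units, and none of this is medical guidance.

```python
import math

HALF_LIFE_DAYS = 6.0                       # approximate half-life cited in the post
K = math.log(2) / HALF_LIFE_DAYS           # first-order elimination constant

def levels(dose, interval_days, n_doses=40, step=0.25):
    """Plasma level over time under repeated dosing with first-order decay
    (simple one-compartment model, instantaneous absorption)."""
    out, t = [], 0.0
    while t <= n_doses * interval_days:
        out.append(sum(dose * math.exp(-K * (t - i * interval_days))
                       for i in range(n_doses) if i * interval_days <= t))
        t += step
    return out

def swing(series):
    """Relative peak-to-trough fluctuation once levels have stabilized."""
    tail = series[len(series) // 2:]       # drop the run-up to steady state
    return (max(tail) - min(tail)) / max(tail)

# Same weekly total: 1 unit once a week vs 0.5 units twice a week.
once_weekly  = swing(levels(1.0, 7.0))
twice_weekly = swing(levels(0.5, 3.5))
print(f"once weekly:  {once_weekly:.0%} peak-to-trough")
print(f"twice weekly: {twice_weekly:.0%} peak-to-trough")
```

Under these assumptions the split schedule fluctuates about 33% peak-to-trough at steady state versus about 55% for once-weekly dosing, which is the "more consistent saturation" the post describes.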
0
0
1
81
ParallelCitizen.xyz@AnalogueUSB·
A nation of immigrants is also a nation of emigrants. Nationalism comes from some sense of pride and desire to defend that which you grew up with. When we're riddled with guilt messaging and performative placation of land rights and titles, bureaucracy, and red tape, it's hard to feel proud of, and defend, a country so beautiful, rich, and diverse, yet still so ossified.
1
0
5
110
Sciohn Fhanne (深梵)@sciohn_fhanne·
I share the same Chinese-diasporic background as @zhangster here. Second-gen. Parents emigrated once so their children could climb. I understand the calculus instinctively. Which is precisely why this hits.

When one of our bright Waterloo students says, matter-of-factly, "Canada doesn't compel me enough," that isn't betrayal. It's not a failure of "patriotism." It's rational optimization. But it is damning.

A country as materially wealthy, politically stable, and socially peaceful as Canada should not feel like a stepping stone. It should not feel like a place that trains its best only to watch them convert their value elsewhere.

And notice what doesn't deter them: Trumpism, mass shootings, healthcare horror narratives. None of that outweighs scale, ambition density, compensation, and velocity. That tells us something uncomfortable. Our problem isn't chaos. It's insufficient gravity.

We pitch stability to 21-year-olds who are optimizing for intensity. We invoke healthcare to people who won't need to rely on it heavily for decades. We talk about comfort to a cohort that wants acceleration. Then when they leave, we moralize. If staying requires shaming rather than inspiring, we have already lost the narrative.

This isn't about contempt for those who go. I don't blame them. I share their background, even if I don't inhabit the same professional and vocational world. I understand the impulse. It's about a polity that cannot metabolize its own excellence at scale.

A serious civilization should not feel optional. It should feel weighty. It should feel worth building, even at some personal tradeoff. Right now, too often, Canada feels like a credential incubator and a lifestyle hedge. That should trouble us far more than the fact that ambitious young people are acting accordingly.
@build_canada @EricDLombardi @jeffcanadamson @jimmurphy @AnalogueUSB @ChrisSpoke @zandertoo @_benjaminparry @AlexanderKline @Cappy_Nate @ddebow @ericjackson @philngo_ @jrodgers @davzhao @harleyf @tobi @GaucherAndrew @TheDarkGoldMan @lucyhargreaves4 @communicable @rohit_ajitkumar
Eric Zhang@zhangster

The only answer Canadians have to this is shaming people like me into staying. “But we have free healthcare!” is not a convincing argument for a healthy 21-year-old male. 33% marginal tax rate in Seattle, >50% in Vancouver, and Seattle pays more even before currency conversion. All to fund government benefits I can’t use for 50 years.

3
0
10
990
vrn.eth@vrneth·
I’ve been searching for the place to live and help build. There are many components to what I sought, but a community of leveling up your health alongside your mind was necessary. No place has come close to @ns in accomplishing this. Abundant healthy food, zero cognitive load.
vrn.eth tweet media
1
2
18
351