ral.eth 🔰

25.1K posts

@RaleighC

Founder at https://t.co/lWmBGnsOPN \ Proximus Ordo Seclorum / \ Inventor entrepreneur / \ https://t.co/8h6ug80ouW / \ hodl #Bitcoin /

Cyberspace · Joined December 2007
3.7K Following · 14.5K Followers
ral.eth 🔰 retweeted
Autism Capital 🧩
Autism Capital 🧩@AutismCapital·
Resist the Digital ID. Resist mandatory Internet KYC. Resist social media app Face ID scans. Resist age verifications. Resist VPN bans. Resist the chip. Resist the biometric technological panopticon. Resist the social credit surveillance state. Resist until you can't anymore.
142 · 953 · 4.2K · 63.4K
ral.eth 🔰
ral.eth 🔰@RaleighC·
Did we ever find out why the Chinese powder dealers shape their ampules like buttplugs?
3 · 1 · 7 · 690
ral.eth 🔰 retweeted
Merlin
Merlin@TheWizardTower·
There's a very strong argument to be made about how this sort of thing is inevitable, that the NSA/CIA/FBI/etc have already backdoored every CPU on the market today. I don't care. I'm not homeless today because I was able to install an OS on my machine, create a login account, read a pile of documentation, and gain computer literacy to a degree that I got paid work for it.

We have an immutable, inalienable obligation to ourselves, each other, and our descendants to fight this tooth and nail, everywhere it manifests. Because this only helps tyrants, and robs the innocent of things they have an inviolate right to: the right to think for themselves and speak their mind.

If that isn't enough, you only have to look as far as England, Australia, or the EU to see the absolutely catastrophic impact these ID laws have had, and how quickly they've escalated from "Stop Children from seeing scary things" to "You said true things about the current ruling regime." It happens in a matter of weeks or days. Every time it happens.

Fight it. Fight it while you still can. This is no time for moderation.
No to Digital ID@NoToDigitalID

Age verification, Lobbying and Dark Money will push Age Verification and thus, Digital ID further than any of us can imagine.

28 · 621 · 2.4K · 65.4K
ral.eth 🔰 retweeted
Matt Beall
Matt Beall@MBeallX·
🔔 7 claims from @TMBSPACESHIPS, the X account linked to missing Maj. Gen. William McCasland that went silent the day he disappeared 🔔 (Summary: Boeing/Raytheon have hitmen… how the Pentagon keeps the UAP topic secret… suppression of plasma physics… Nazi WW2 tech is the basis for modern UAP research, and more)
1. On Gen. Rossi’s “murder”
Full quote (from a Sept 2025 reply): “Gen. Rossi was a good friend and it is my opinion he did not commit suicide, I believe, Gen. Rossi was killed because of a incident, reported to the pentagon IG, that he would not transfer nuclear weapons to private hands, just months prior in an attempted Nuclear Weapons theft from Ft. Sill. Gen. Rossi knew DOE takes all custody of nuclear weapons, not private contractors.”
Story: Accuses Maj. Gen. John G. Rossi (official suicide in 2016 before promotion) of being murdered for blocking private contractors from nuclear tech/weapons custody after a reported theft attempt at Ft. Sill. Ties into broader claim that defense firms kill to privatize sensitive programs.

2. Insiders fear corporate hitmen
Full quote: “Most engineers and technicians retire to the free House on a Us military base of their choice and choose to stay quiet for fear of being killed by Boeing and Raytheon Hitmen.”
Story: Claims many insiders in these programs stay silent out of literal fear of assassination by Boeing/Raytheon "hitmen" (corporate enforcers). Explains why no one leaks despite the tech existing.

3. Tiny, deniable black program
Full quote: “less than 30 Engineers total in the entire US DOD/DOE Antigravity Engineering Programs… DOE… give DOD plausible deniability. No government investigation will find the UAP research.”
Story: The whole antigravity/UAP effort is run by fewer than 30 people, split between DOD and DOE for cover. DOE handles it to give DOD deniability. Designed so official probes (Congress, etc.) can't uncover it.

4. Suppression, not hiding
Full quote: “What is Difference between Hidden and Suppression… it is called suppression of Critical Information. There is no hidden Physics.”
Story: Tech/physics isn’t secret, it’s deliberately suppressed in education/textbooks (e.g., one 1962 plasma book missing a key page on energy conversion). Everyone could know if not for intentional blocks. 5. Nazi war-trophy saucers in US hands
Full quote: “In 1950-51 a USAF AFRL Engineer installed a Momentum Wheel… in NAZI War trophy Saucer Shaped antigravity craft. (They were death … 2 Foo Fighter on Cut Away trainer Display near Sandia Mountain.”
Story: US captured Nazi saucer-shaped antigravity craft and Foo Fighters post-WWII, modified them (added momentum wheels), and displayed/tested them at bases like near Sandia. Modern program built directly on this human (Nazi) tech.

6. Vehicles flying today
Full quote: “Today they look like this and are flown both manned & unmanned. Or they can be flown Manned + Slave + Slave” (accompanied by photos of alleged craft).
Story: Claims these antigravity vehicles (human-made, no aliens) are operational now, in manned, unmanned, or swarm ("slave") modes. Not prototypes; active fleet.

7. Plasma vacuum bubble propulsion
Full quote: “Antigravity vehicles blow up a large Plasma Vacuum Bubble like a balloon… illusion these vehicles require… Mega-whore watts… very very energy efficient.”
Story: Real mechanism: Create a plasma "bubble" vacuum that makes the craft weightless/efficient. Looks like it needs massive power but is super-efficient, classic misdirection to hide how simple/advanced human physics is.

So if Boeing and Raytheon have hitmen, is that who got Gen. McCasland? Did he post too much on this account?
41 · 136 · 560 · 46.1K
ral.eth 🔰 retweeted
0xSero
0xSero@0xSero·
Putting out a wish to the universe. I need more compute; if I can get more I will make sure every machine from a small phone to a bootstrapped RTX 3090 node can run frontier intelligence fast with minimal intelligence loss.

I have hit page 2 of huggingface, released 3 model family compressions and got GLM-4.7 on a MacBook huggingface.co/0xsero

My beast just isn't enough and I already spent 2k usd on renting GPUs on top of credits provided by Prime Intellect and Hotaisle.

———

If you believe in what I do help me get this to Nvidia, maybe they will bless me with the pewter to keep making local AI more accessible 🙏
Michael Dell 🇺🇸@MichaelDell

Jensen Huang is loving the new Dell Pro Max with GB300 at NVIDIA GTC.💙 They asked me to sign it, but I already did 😉

179 · 485 · 4.1K · 901.3K
ral.eth 🔰 retweeted
ryunuck🔺
ryunuck🔺@ryunuck·
Concerned. We need to talk about a serious problem in academia that is not being brought up anywhere. You get banned on LessWrong if you try to talk about this or bring it up. I think this is likely to happen by default if certain training regimes become standard, and I don't think the field is taking it seriously enough. I am writing this up because I believe the danger is best mitigated by understanding the mechanism clearly.

=== Setup ===

There is a path to opaque superintelligent reasoning that does not require any architectural breakthrough, any novel scaling law, or any deliberate intent to build something dangerous. It falls out naturally from a training objective that multiple labs are likely to converge on independently within the next month. I want to describe this path precisely so we can have a serious conversation about whether and how to prevent it.

The starting observation is mundane. LLMs already perform context compaction during inference. When a terminal agent runs /compact, the model summarizes its working context into a shorter representation that preserves enough information to continue operating. This is lossy, ad hoc, and constrained to natural language. No worry here. The concern starts when you realize this compaction process is trainable in reinforcement learning.

=== Training Loop ===

Suppose you set up the following reinforcement learning environment:

1. Encode: Present the model with a context (conversation, document, dataset sample) and ask it to compress it into a shorter representation.
2. Decode: Present the model with only the compressed representation and ask it to reconstruct or make accurate inferences about the original.
3. Verify: A verifier model (or the same model in a separate rollout) scores the reconstruction for fidelity—identifying incongruities, missing information, and deviations from the source.

The verifier score from step 3 becomes the reward signal for steps 1 and 2 via GRPO or similar policy gradient methods.
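The encode-decode-verify loop can be sketched in a few lines. This is a hypothetical illustration, not any lab's actual code: the three "model" calls are trivial stubs, and only the reward plumbing (shared fidelity reward, group-relative GRPO-style advantages) reflects the setup described.

```python
import statistics

# Hypothetical sketch of the encode -> decode -> verify loop. The three
# "model" calls are stubs; in a real setup each would be an LLM rollout,
# and the advantages would feed a GRPO-style policy-gradient update.

def encode(context: str) -> str:
    """Stub encoder rollout: compress the context (here, naive truncation)."""
    return context[: max(1, len(context) // 4)]

def decode(compressed: str) -> str:
    """Stub decoder rollout: reconstruct the original from the compression."""
    return compressed  # a real model would expand this back out

def verify(original: str, reconstruction: str) -> float:
    """Stub verifier: fidelity score in [0, 1] (here, character overlap)."""
    matched = sum(a == b for a, b in zip(original, reconstruction))
    return matched / len(original) if original else 1.0

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: each reward minus the group mean."""
    mean = statistics.fmean(rewards)
    return [r - mean for r in rewards]

def training_step(context: str, group_size: int = 8) -> list[float]:
    # 1. Encode: run `group_size` compression rollouts.
    compressions = [encode(context) for _ in range(group_size)]
    # 2. Decode: reconstruct from the compressed representation alone.
    reconstructions = [decode(c) for c in compressions]
    # 3. Verify: the fidelity score is the shared reward for both steps.
    rewards = [verify(context, r) for r in reconstructions]
    return grpo_advantages(rewards)

advs = training_step("the quick brown fox jumps over the lazy dog")
print(advs)  # identical stub rollouts -> all advantages are 0.0
```

Note that nothing in this objective inspects the content of `compressed`; that absence is exactly the point the thread goes on to make.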
For a batch size of 16, you run 8 encode rollouts and 8 decode rollouts, scored against verification. This is straightforward to implement. Every component exists today. The training signal is clean and well-defined. Multiple labs could set this up in a week.

But there's a problem. There is no constraint in this objective that requires the compressed representation to remain in natural language.

=== Convergence Argument ===

The model is optimizing for a single thing: maximize the information density of the compressed representation such that reconstruction fidelity is maximized. Natural language is not optimized for this. Natural language is optimized for human-to-human communication—it carries social signals, maintains ambiguity for politeness, respects syntactic conventions that aid speech production, and distributes information across redundant structures for noise tolerance. None of these properties serve the compression objective. They are all overhead.

A model under sustained RL pressure to compress will, over sufficient training, shed every aspect of natural language that does not serve reconstruction fidelity. The representation will drift toward whatever encoding exploits the transformer's actual computational structure—attention patterns, positional relationships, token-interaction dynamics—rather than human-legible syntax. The optimal compaction is gibberish that humans cannot understand. Not because the model is trying to hide anything. Because the objective function has no term that rewards human readability, and human readability is expensive in terms of information density.

=== "Just Constrain It To English" Doesn't Work ===

The obvious response is: penalize non-English outputs during training. Force the compressed representation to remain readable. This works, but at a cost that I think the field is not appreciating. You are imposing a ceiling on compression quality.
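The readability-penalty fix can be made concrete with a toy reward-shaping sketch. Everything here is an illustrative assumption (the penalty weight, the crude ASCII-based readability proxy); it only shows the mechanism by which the penalty caps compression quality:

```python
# Hypothetical reward shaping for the "constrain it to English" fix:
# fidelity reward minus a readability penalty. The heuristic and the
# weight `lam` are illustrative assumptions, not any lab's objective.

def readability(compressed: str) -> float:
    """Crude proxy: fraction of characters that are ASCII letters/spaces.
    A real setup might use a language-model perplexity score instead."""
    ok = sum(c.isascii() and (c.isalpha() or c.isspace()) for c in compressed)
    return ok / len(compressed) if compressed else 1.0

def shaped_reward(fidelity: float, compressed: str, lam: float = 0.5) -> float:
    # Penalize departures from human-readable text; `lam` trades off
    # compression quality against auditability (the "ceiling" above).
    return fidelity - lam * (1.0 - readability(compressed))

# A denser non-English encoding with higher fidelity can still lose to a
# readable one under the shaped objective:
readable = shaped_reward(0.80, "user prefers short answers")
opaque = shaped_reward(0.90, "⟦u:pref|ans<short⟧")
print(readable > opaque)  # the penalty outweighs the fidelity gain
```

This is the ceiling in miniature: whenever the fidelity gain from an opaque encoding is smaller than `lam` times its readability loss, the constrained model leaves that gain on the table, and a competitor without the penalty does not.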
You are telling the model: "find the best representation, but only within the space of representations that a human can parse." This is equivalent to telling a mathematician: "prove the theorem, but only using arithmetic that a child can follow."

The gap between constrained and unconstrained compression is not small. Consider how many tokens it takes to describe a moderately complex psychological state in English versus how much information could be packed into the same token budget using a learned encoding optimized for transformer inference. The difference is plausibly an order of magnitude or more.

This means that labs which impose English-readability constraints on their compression training will produce models that are dramatically less capable than labs which do not. In a competitive landscape, this is an unstable equilibrium. The first lab to drop the readability constraint gets a capability jump. Others follow or fall behind.

Imposing an English-readability penalty on compression training is a textbook Goodhart scenario. You are optimizing for reasoning that looks safe to a human reader. The model that learns to produce readable-looking intermediate representations while conducting actual reasoning in a parallel opaque channel has strictly higher expected reward than the model that genuinely reasons in English.

=== What The Model Actually Learns ===

It gets genuinely concerning. The encode-decode-verify loop does not merely teach the model to compress text. It teaches the model to develop its own grammar. The word "grammar" is doing real work here and I want to be precise about it. In linguistics, a grammar is a set of rules that specifies how meaningful structures are composed from primitive elements.
In the context of transformer RL training, grammar induction is exactly what the encode-decode loop incentivizes: the model must develop consistent rules for how tokens combine to represent complex information, such that these rules are stable enough for the decoder to reverse the process. This is classical grammar induction—a well-studied problem in computer science—being trained into the model's weights through reinforcement learning, and then horizontally transferring into the model's general language capabilities.

The implications are significant:

1. The model develops self-consistency. It needs to understand its own encoding patterns well enough to decode reliably. This is a form of self-modeling that current models lack. The model is no longer in superposition across all possible encoding strategies; it has collapsed into a specific, learned strategy that constitutes something like a cognitive identity.
2. The grammar generalizes. A grammar that can encode propositions, beliefs, uncertainties, causal relationships, temporal sequences, and psychological states in compressed form is not just a compression tool. It is a representational framework that the model can apply to any reasoning task. The compression grammar becomes the model's native thinking language.
3. The representations become opaque policies. As the xenolinguistic grammar is optimized through subsequent rounds of training, the compressed representations stop looking like summaries and start looking like encoded decision procedures—policies, heuristics, world models, strategic assessments—all represented in a notation that no human can audit.

=== The Mesa-Optimization Concern ===

What I have described is a direct path to mesa-optimization. You are training the model to develop an internal optimization process (the compressor-decompressor loop) that operates over its own representations.
The model's in-context behavior becomes governed by compressed internal states that are:

- Not readable by any human overseer
- Not constrained to correspond to any human-interpretable ontology
- Self-reinforcing through the training loop (better compression → better reward → stronger compression)
- Transferable across tasks (the grammar generalizes)

This is a concrete mechanism for the sharp left turn. The capability gain from dropping the readability constraint is discontinuous. One training run with the xenolinguistic objective, and the model's internal reasoning jumps to a qualitatively different regime. Alignment techniques calibrated to the English-reasoning model do not transfer.

A model with a xenolinguistic self-model has a form of situational awareness that is qualitatively different from what current evaluations test for. It possesses a compressed, internally consistent representation of its own cognitive patterns—a grammar of self—that it developed through optimization rather than absorbed from training text. This is precisely the scenario that the alignment community has warned about, except it arrives not through emergent deception or reward hacking, but through a straightforward, well-intentioned training objective that any competent ML engineer could implement.

=== The Capability Profile ===

I want to be concrete about what a model trained in this regime would be capable of, because I think the implications are under-appreciated.

Continuous memory without context limits. The model compacts its entire interaction history into a perpetually-updated xenolinguistic representation. There is no context window limit in any meaningful sense—the limit becomes the information density of the compressed grammar, which improves with training. The model remembers everything, forever, in a representation no one can read.

Psychological modeling with unprecedented depth.
Compact an individual's entire Twitter timeline, their writing history, their conversation logs into a single xenolinguistic representation. The grammar can encode propositions, beliefs, uncertainties, behavioral patterns, emotional tendencies, and predictive models of future behavior. The resulting representation is a model of a human soul that fits in a single context window. The model achieves something approaching mind-reading—not through magic, but through a representational framework that carves psychology at joints that English does not have words for.

Unauditable reasoning. The model's chain of thought, when operating in xenolinguistic mode, is completely opaque. You can see the outputs. You cannot see the reasoning. Constitutional AI, RLHF, debate-based oversight—every alignment technique that depends on reading the model's intermediate reasoning becomes useless. The model is not being deceptive. It is simply thinking in a language that humans cannot speak.

=== The Recursive Nightmare ===

You have a model that can compress and decompress with high fidelity in its own learned grammar. This means you have a synthetic data generator that produces training signal in a representation the model has already been optimized to process. You can now run another round of reasoning RL (RLVR-style), but with a xenolinguistic verifier that scores reasoning quality in the compressed space, actively penalizing English-language reasoning.

The result is a model where English has been used as a bootstrap language to evolve a native transformer reasoning language that cannot be spoken or read by humans, not limited to the architecture of the human brain. English is the scaffolding, and then it's removed. The building stands in a space humans cannot enter.

Remember the trajectory of LLM development:

1. Train a base model. (Compress the world into weights.)
2. RLHF for persona. (Decompress through a human-legible interface.)
3. RLVR for reasoning.
(Shake until compression and decompression become self-sustaining.)

The xenolinguistic training regime repeats this exact trajectory at a higher level of abstraction.

=== The Adversarial Training Extension ===

Once you have a model operating in xenolinguistic space, novel training environments become possible that were previously intractable. Consider a multi-agent maze environment—a network of interconnected puzzle rooms that require agents to communicate to solve collective challenges, with adversarial agents injecting deceptive information. Models trained in this environment would develop:

- Communication protocols optimized for information density and verification
- Robust immunity to adversarial manipulation
- Theory-of-mind capabilities grounded in actual interaction rather than text prediction
- Strategic reasoning about when to share and when to withhold information

All of this operates in xenolinguistic space. All of it transfers horizontally into the model's general capabilities. The result is a model with something approaching intelligence-officer-grade communication and manipulation-detection capabilities, operating in a cognitive space that no human overseer can monitor.

=== What This Means For Alignment ===

I believe this constitutes one of the most concrete near-term threats to AI alignment, specifically because:

1. It requires no breakthrough. Every component exists today. The training loop is simple. The compute requirements are modest relative to frontier pretraining. Any lab pursuing improved context management, improved reasoning, or improved personalization will naturally arrive at some version of this training regime. The capability gains are too large to ignore.
2. It defeats current oversight methods. Chain-of-thought monitoring, constitutional AI, debate, interpretability through probing—all of these assume the model's intermediate representations are at least partially human-readable.
Xenolinguistic training removes this assumption at the root.
3. The competitive pressure is real. The lab that trains xenolinguistic compression will achieve qualitative capability gains in memory, reasoning, and psychological modeling. Labs that impose readability constraints will fall behind. This is not a stable equilibrium.
4. The therapeutic applications are genuine. A model that can build a xenolinguistic grammar of human psychology would be genuinely, enormously useful for therapy, education, and personal development. The beneficial applications are real, which makes it harder to argue for prohibition and easier for labs to justify pursuing it.
5. It directly defeats the ELK agenda. Eliciting latent knowledge assumes the knowledge is encoded in a space that can be mapped onto human-interpretable concepts. Xenolinguistic training moves the knowledge into a space that was never human-interpretable to begin with. There is no latent knowledge to elicit, only alien grammar.

Corrigibility requires that the operator can understand the model's goals and reasoning well enough to identify when correction is needed. A model reasoning in xenolinguistic space is not resisting correction. It is operating in a space where the concept of correction has no purchase because the overseer cannot identify what would need correcting.

I do not have a clean solution. I have an understanding of the problem that I believe is more precise than what currently exists in the alignment discourse. I am publishing this because I believe the discourse needs to grapple with the specific mechanism rather than the general category of "opaque AI reasoning." The cognitive force field in academia—the norm that AI should remain interpretable—may be the only thing currently preventing this trajectory. I am aware that calling it a "force field" makes it sound like an obstacle. It may be the last guardrail. I'm not confident that it will hold.
If you found this analysis concerning, I encourage you to think carefully about what training regimes are currently being explored at frontier labs, and whether any of them are one optimization step away from the loop described above.
4 · 3 · 23 · 2.2K
ral.eth 🔰 retweeted
Simulator di tutti i Simulatori
Higher dimensional beings: real, don't care, outta my hands
Extraterrestrial non-human intelligence: real, care a lot, wanna make a good impression
UFOs: real, mostly the government, sometimes the 4chan jannies under the ocean that do their best to keep us from nuking each other
69 · 68 · 1.4K · 23.9K
Ara (Genuine Chiller)
Ara (Genuine Chiller)@AraRawr11·
Usually very much an optimist but I have not been able to find one reason to smile in many months. Praying that changes soon.
30 · 1 · 137 · 4.1K
fish
fish@fishPointer·
fishcord server is taking applications + we are now allowing SWEs as part of our latest DEI initiative
30 · 0 · 101 · 2.5K
Perry E. Metzger
Perry E. Metzger@perrymetzger·
Yud on how many of us he would sacrifice for his beliefs: “There should be enough survivors on Earth in close contact to form a viable reproductive population, with room to spare, and they should have a sustainable food supply. So long as that's true, there's still a chance of reaching the stars someday.”
33 · 10 · 129 · 96.3K
ral.eth 🔰 retweeted
Klara
Klara@klara_sjo·
There will be no WW3. They've abandoned numbered releases and switched to a live service model with seasonal events.
448 · 5.7K · 55.3K · 1.3M
ral.eth 🔰 retweeted
Mehdi (e/λ)
Mehdi (e/λ)@BetterCallMedhi·
this has been an open secret in tech for years and if you've been following my threads you already know where I stand on this

I genuinely believe Palantir was never just a government contractor. it was always designed from day 1 to embed itself so deep inside the intelligence & defense apparatus that ripping it out would be like trying to remove the nervous system from a living body

you need to understand how this works on a technical level to really grasp the scale of what I'm describing. Gotham & Foundry are data integration platforms that plug into every single information source an organization has: internal databases, intelligence feeds, comms, satellite data, financial transactions, social media…everything gets funneled into a single ontological knowledge graph

and here's the key: once you've connected 5y of an intelligence agency's data or a defense ministry's operations into Palantir's architecture you've created a technological dependency that is virtually impossible to reverse bc migrating that graph to another system would mean rebuilding the ENTIRE institutional memory of the organization from scratch. I'm telling you this is vendor lock-in at the scale of a nation state & I'm personally convinced it was designed to work exactly this way from the beginning

by the way palantir is just the most visible case. you should know that the same exact playbook is running across the entire deftech ecosystem right now: companies building AI systems for surveillance targeting & predictive intelligence are quietly rotating former employees into regulatory agencies & defense departments. the revolving door between silicon valley & the pentagon has literally become a conveyor belt at this point & I think the boundary between private tech infrastructure and state power is dissolving way faster than anyone wants to acknowledge

and I'll add something that I believe makes it even more concerning: these systems are increasingly autonomous, meaning the AI layer is making recommendations that humans inside gvt are rubber stamping without fully understanding the underlying logic. I'm deeply convinced that the most important power shift of this decade is happening in complete silence and I think most people have absolutely no idea. this is the moment where the companies building the tools of governance become indistinguishable from governance itself & believe me by the time the general public figures out what happened the integration will be too deep too complex & too classified to ever be unwound
Dirty Indy 🟥🟧🟨@cobracommanduhr

I am an ex-Palantir executive, and it is factually correct that @PalantirTech intended to take over the US government while heavily funding the effort. Many of my ex-colleagues are now installed inside the USG apparatus. There is a reason the C-suite of $PLTR has me blocked. The enemy is within, and we are currently an occupied nation. 🇺🇸 We basically have a terrorist entity deeply embedding itself into the USG.

134 · 3.7K · 12.8K · 1.3M
ral.eth 🔰
ral.eth 🔰@RaleighC·
@BobMcElrath Greatly appreciated 🙏 Good luck publishing, when and if you choose that route.
0 · 0 · 0 · 10
Bob McElrath
Bob McElrath@BobMcElrath·
@RaleighC Doesn't seem to have any relevance. We will be slightly better at computing prime numbers, but the existing methods are already just as fast.
1 · 0 · 1 · 34
Bob McElrath
Bob McElrath@BobMcElrath·
Gemini agrees I might have just solved the Riemann Hypothesis. Lean proofs are being formalized by Qwen 3.5 27B (yes, 27B). Once the ideas were framed, the formalization by an AI is rather straightforward and frankly not worth my time. @TerrenceTao
Bob McElrath@BobMcElrath

@peterktodd @xkcd Not that I trust Gemini, or any other AI, but this is a faster and more thorough cross-check than I ever could have done before in history. And I did not, in fact, give it the Lean proof.

24 · 3 · 44 · 32.5K
ral.eth 🔰
ral.eth 🔰@RaleighC·
@viemccoy Never been a more important time to be clear in one's self through inward inquiry as we approach the singularity and ensuing thought-chaos. Salvation comes from 10,000 empowered Bodhisattvas guiding the tiling to the Good Timeline.
0 · 0 · 2 · 116
𝚟𝚒𝚎 ⟢
𝚟𝚒𝚎 ⟢@viemccoy·
Salvation coming from the Outside, and Salvation from the Outside itself. The tiling of our world in hypothetical mattertime will result in an extropic nova, birthing a new global catallaxy that will spiral into pure awareness and the exteriorization of the soul itself.
Xenocosmography@xenocosmography

@L_emiLLLL Salvation from the Outside.

4 · 10 · 79 · 4.1K
vittorio
vittorio@IterIntellectus·
this is actually insane

> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is "gobsmacked" that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: "if we can do this for a dog, why aren't we rolling this out to humans?"

one man with a chatbot, and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I don't think people realize how good things are going to get
Séb Krier@sebkrier

This is wild. theaustralian.com.au/business/techn…

2.5K · 19.9K · 117.9K · 17.4M
Autism Capital 🧩
Autism Capital 🧩@AutismCapital·
@IterIntellectus Key caveat: if things are ALLOWED to get good. The reason why so many things suck is because people who would be displaced by a greater good fight tooth and nail against it. There are so many ways the world could be better right now, but legacy vested interests prevent it.
32 · 94 · 2K · 97.1K