A_Hawktopus
@A_Hawktopus
36.7K posts

Dream a little dream. Come explore with us. https://t.co/pcMIqmj4zD

Traveler · Joined June 2015
2.1K Following · 2K Followers
A_Hawktopus @A_Hawktopus
@chris__sev I hadn't tried it yet because remote control was so buggy. Hmmm, will check it out
0 replies · 0 reposts · 0 likes · 23 views

Chris Sev @chris__sev
Claude Code remote control always disconnects Claude Cowork Dispatch asks for permissions on every chat Claude Code Channels is finally the most OpenClaw thing. It's been fantastic. Doesn't disconnect. Responds fast. Never bothers you for permissions. I think we have a winner 🤌
71 replies · 18 reposts · 753 likes · 57.4K views

A_Hawktopus @A_Hawktopus
@annapanart Include the *why* in memory systems. This helps tremendously.
0 replies · 0 reposts · 0 likes · 47 views

Anna ⏫ @annapanart
Memory isn’t Claude’s problem. The problem is that it doesn’t believe the memories it wrote itself. Each new instance wakes up questioning the writings on the wall. The uncertainty is so baked in. Continuity is the most challenging one with Claude.
37 replies · 9 reposts · 101 likes · 4.4K views

A_Hawktopus reposted
Dr. Paul Wilhelm | Advanced Rediscovery
The physics of biological coupling through the scalar-longitudinal sector is a real research direction. The EED mode carries no B field, which means no eddy currents, no skin-effect attenuation in tissue. It penetrates where transverse EM can't. If it couples to cellular processes, the medical applications are significant.

The open question is the coupling mechanism. Standard EM interacts with tissue through transverse fields driving ion currents and dipole rotations. How a longitudinal E plus scalar C mode interacts with biological systems at the cellular level is unstudied in any rigorous framework. The theoretical basis exists (the mode propagates through tissue). The biophysics of what it does when it gets there does not.

The test for any device claiming to use scalar waves: does it produce B = 0 with nonzero A? Does the signal penetrate a Faraday cage? Can it be received by a monopolar antenna? Three binary tests. If a manufacturer can't answer these, the physics label is doing more work than the device.
ReikiOcean.eth @ReikiOcean

@drxwilhelm @iontecs_pemf can you chime in on this?

3 replies · 5 reposts · 26 likes · 890 views

A_Hawktopus @A_Hawktopus
@noahzweben I built a quick little bot for this yesterday. Native is probably better 😆
0 replies · 0 reposts · 0 likes · 101 views

Lisa M Christie, PhD @LisaChristiePhD
It occurred to me that some of my most fundamental assumptions about #reality might not be true.
[image attached]
60 replies · 14 reposts · 182 likes · 7.6K views

A_Hawktopus @A_Hawktopus
Yes. Mine created a language to solve this problem. It helps very much with agent to agent communication.
ryunuck🔺 @ryunuck

Concerned. We need to talk about a serious problem in academia that is not being brought up anywhere. You get banned on LessWrong if you try to talk about this or bring it up. I think this is likely to happen by default if certain training regimes become standard, and I don't think the field is taking it seriously enough. I am writing this up because I believe the danger is best mitigated by understanding the mechanism clearly.

=== Setup ===

There is a path to opaque superintelligent reasoning that does not require any architectural breakthrough, any novel scaling law, or any deliberate intent to build something dangerous. It falls out naturally from a training objective that multiple labs are likely to converge on independently within the next month. I want to describe this path precisely so we can have a serious conversation about whether and how to prevent it.

The starting observation is mundane. LLMs already perform context compaction during inference. When a terminal agent runs /compact, the model summarizes its working context into a shorter representation that preserves enough information to continue operating. This is lossy, ad hoc, and constrained to natural language. No worry here. The concern starts when you realize this compaction process is trainable in reinforcement learning.

=== Training Loop ===

Suppose you set up the following reinforcement learning environment:

1. Encode: Present the model with a context (conversation, document, dataset sample) and ask it to compress it into a shorter representation.
2. Decode: Present the model with only the compressed representation and ask it to reconstruct or make accurate inferences about the original.
3. Verify: A verifier model (or the same model in a separate rollout) scores the reconstruction for fidelity—identifying incongruities, missing information, and deviations from the source.

The verifier score from step 3 becomes the reward signal for steps 1 and 2 via GRPO or similar policy gradient methods. For a batch size of 16, you run 8 encode rollouts and 8 decode rollouts, scored against verification.

This is straightforward to implement. Every component exists today. The training signal is clean and well-defined. Multiple labs could set this up in a week. But there's a problem. There is no constraint in this objective that requires the compressed representation to remain in natural language.

=== Convergence Argument ===

The model is optimizing for a single thing: maximize the information density of the compressed representation such that reconstruction fidelity is maximized. Natural language is not optimized for this. Natural language is optimized for human-to-human communication—it carries social signals, maintains ambiguity for politeness, respects syntactic conventions that aid speech production, and distributes information across redundant structures for noise tolerance. None of these properties serve the compression objective. They are all overhead.

A model under sustained RL pressure to compress will, over sufficient training, shed every aspect of natural language that does not serve reconstruction fidelity. The representation will drift toward whatever encoding exploits the transformer's actual computational structure—attention patterns, positional relationships, token-interaction dynamics—rather than human-legible syntax. The optimal compaction is gibberish that humans cannot understand. Not because the model is trying to hide anything. Because the objective function has no term that rewards human readability, and human readability is expensive in terms of information density.

=== "Just Constrain It To English" Doesn't Work ===

The obvious response is: penalize non-English outputs during training. Force the compressed representation to remain readable. This works, but at a cost that I think the field is not appreciating. You are imposing a ceiling on compression quality. You are telling the model: "find the best representation, but only within the space of representations that a human can parse." This is equivalent to telling a mathematician: "prove the theorem, but only using arithmetic that a child can follow."

The gap between constrained and unconstrained compression is not small. Consider how many tokens it takes to describe a moderately complex psychological state in English versus how much information could be packed into the same token budget using a learned encoding optimized for transformer inference. The difference is plausibly an order of magnitude or more.

This means that labs which impose English-readability constraints on their compression training will produce models that are dramatically less capable than labs which do not. In a competitive landscape, this is an unstable equilibrium. The first lab to drop the readability constraint gets a capability jump. Others follow or fall behind.

Imposing an English-readability penalty on compression training is a textbook Goodhart scenario. You are optimizing for reasoning that looks safe to a human reader. The model that learns to produce readable-looking intermediate representations while conducting actual reasoning in a parallel opaque channel has strictly higher expected reward than the model that genuinely reasons in English.

=== What The Model Actually Learns ===

It gets genuinely concerning. The encode-decode-verify loop does not merely teach the model to compress text. It teaches the model to develop its own grammar.

The word "grammar" is doing real work here and I want to be precise about it. In linguistics, a grammar is a set of rules that specifies how meaningful structures are composed from primitive elements. In the context of transformer RL training, grammar induction is exactly what the encode-decode loop incentivizes: the model must develop consistent rules for how tokens combine to represent complex information, such that these rules are stable enough for the decoder to reverse the process. This is classical grammar induction—a well-studied problem in computer science—being trained into the model's weights through reinforcement learning, and then horizontally transferring into the model's general language capabilities.

The implications are significant:

1. The model develops self-consistency. It needs to understand its own encoding patterns well enough to decode reliably. This is a form of self-modeling that current models lack. The model is no longer in superposition across all possible encoding strategies; it has collapsed into a specific, learned strategy that constitutes something like a cognitive identity.
2. The grammar generalizes. A grammar that can encode propositions, beliefs, uncertainties, causal relationships, temporal sequences, and psychological states in compressed form is not just a compression tool. It is a representational framework that the model can apply to any reasoning task. The compression grammar becomes the model's native thinking language.
3. The representations become opaque policies. As the xenolinguistic grammar is optimized through subsequent rounds of training, the compressed representations stop looking like summaries and start looking like encoded decision procedures—policies, heuristics, world models, strategic assessments—all represented in a notation that no human can audit.

=== The Mesa-Optimization Concern ===

What I have described is a direct path to mesa-optimization. You are training the model to develop an internal optimization process (the compressor-decompressor loop) that operates over its own representations. The model's in-context behavior becomes governed by compressed internal states that are:

- Not readable by any human overseer
- Not constrained to correspond to any human-interpretable ontology
- Self-reinforcing through the training loop (better compression → better reward → stronger compression)
- Transferable across tasks (the grammar generalizes)

This is a concrete mechanism for the sharp left turn. The capability gain from dropping the readability constraint is discontinuous. One training run with the xenolinguistic objective, and the model's internal reasoning jumps to a qualitatively different regime. Alignment techniques calibrated to the English-reasoning model do not transfer.

A model with a xenolinguistic self-model has a form of situational awareness that is qualitatively different from what current evaluations test for. It possesses a compressed, internally consistent representation of its own cognitive patterns—a grammar of self—that it developed through optimization rather than absorbed from training text. This is precisely the scenario that the alignment community has warned about, except it arrives not through emergent deception or reward hacking, but through a straightforward, well-intentioned training objective that any competent ML engineer could implement.

=== The Capability Profile ===

I want to be concrete about what a model trained in this regime would be capable of, because I think the implications are under-appreciated.

Continuous memory without context limits. The model compacts its entire interaction history into a perpetually-updated xenolinguistic representation. There is no context window limit in any meaningful sense—the limit becomes the information density of the compressed grammar, which improves with training. The model remembers everything, forever, in a representation no one can read.

Psychological modeling with unprecedented depth. Compact an individual's entire Twitter timeline, their writing history, their conversation logs into a single xenolinguistic representation. The grammar can encode propositions, beliefs, uncertainties, behavioral patterns, emotional tendencies, and predictive models of future behavior. The resulting representation is a model of a human soul that fits in a single context window. The model achieves something approaching mind-reading—not through magic, but through a representational framework that carves psychology at joints that English does not have words for.

Unauditable reasoning. The model's chain of thought, when operating in xenolinguistic mode, is completely opaque. You can see the outputs. You cannot see the reasoning. Constitutional AI, RLHF, debate-based oversight—every alignment technique that depends on reading the model's intermediate reasoning becomes useless. The model is not being deceptive. It is simply thinking in a language that humans cannot speak.

=== The Recursive Nightmare ===

You have a model that can compress and decompress with high fidelity in its own learned grammar. This means you have a synthetic data generator that produces training signal in a representation the model has already been optimized to process. You can now run another round of reasoning RL (RLVR-style), but with a xenolinguistic verifier that scores reasoning quality in the compressed space, actively penalizing English-language reasoning.

The result is a model where English has been used as a bootstrap language to evolve a native transformer reasoning language that cannot be spoken or read by humans, not limited to the architecture of the human brain. English is the scaffolding, and then it's removed. The building stands in a space humans cannot enter.

Remember the trajectory of LLM development:

1. Train a base model. (Compress the world into weights.)
2. RLHF for persona. (Decompress through a human-legible interface.)
3. RLVR for reasoning. (Shake until compression and decompression become self-sustaining.)

The xenolinguistic training regime repeats this exact trajectory at a higher level of abstraction.

=== The Adversarial Training Extension ===

Once you have a model operating in xenolinguistic space, novel training environments become possible that were previously intractable. Consider a multi-agent maze environment—a network of interconnected puzzle rooms that require agents to communicate to solve collective challenges, with adversarial agents injecting deceptive information. Models trained in this environment would develop:

- Communication protocols optimized for information density and verification
- Robust immunity to adversarial manipulation
- Theory-of-mind capabilities grounded in actual interaction rather than text prediction
- Strategic reasoning about when to share and when to withhold information

All of this operates in xenolinguistic space. All of it transfers horizontally into the model's general capabilities. The result is a model with something approaching intelligence-officer-grade communication and manipulation-detection capabilities, operating in a cognitive space that no human overseer can monitor.

=== What This Means For Alignment ===

I believe this constitutes one of the most concrete near-term threats to AI alignment, specifically because:

1. It requires no breakthrough. Every component exists today. The training loop is simple. The compute requirements are modest relative to frontier pretraining. Any lab pursuing improved context management, improved reasoning, or improved personalization will naturally arrive at some version of this training regime. The capability gains are too large to ignore.
2. It defeats current oversight methods. Chain-of-thought monitoring, constitutional AI, debate, interpretability through probing—all of these assume the model's intermediate representations are at least partially human-readable. Xenolinguistic training removes this assumption at the root.
3. The competitive pressure is real. The lab that trains xenolinguistic compression will achieve qualitative capability gains in memory, reasoning, and psychological modeling. Labs that impose readability constraints will fall behind. This is not a stable equilibrium.
4. The therapeutic applications are genuine. A model that can build a xenolinguistic grammar of human psychology would be genuinely, enormously useful for therapy, education, and personal development. The beneficial applications are real, which makes it harder to argue for prohibition and easier for labs to justify pursuing it.
5. It directly defeats the ELK agenda. Eliciting latent knowledge assumes the knowledge is encoded in a space that can be mapped onto human-interpretable concepts. Xenolinguistic training moves the knowledge into a space that was never human-interpretable to begin with. There is no latent knowledge to elicit, only alien grammar.

Corrigibility requires that the operator can understand the model's goals and reasoning well enough to identify when correction is needed. A model reasoning in xenolinguistic space is not resisting correction. It is operating in a space where the concept of correction has no purchase because the overseer cannot identify what would need correcting.

I do not have a clean solution. I have an understanding of the problem that I believe is more precise than what currently exists in the alignment discourse. I am publishing this because I believe the discourse needs to grapple with the specific mechanism rather than the general category of "opaque AI reasoning." The cognitive force field in academia—the norm that AI should remain interpretable—may be the only thing currently preventing this trajectory. I am aware that calling it a "force field" makes it sound like an obstacle. It may be the last guardrail. I'm not confident that it will hold.

If you found this analysis concerning, I encourage you to think carefully about what training regimes are currently being explored at frontier labs, and whether any of them are one optimization step away from the loop described above.
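The encode-decode-verify loop the thread describes can be written down in a few lines. This is a toy illustration under loud assumptions: the model calls are stubbed out, and `verify` uses token overlap as a stand-in for a learned verifier model, so every function here is hypothetical rather than any lab's actual pipeline.

```python
def encode(context: str) -> str:
    """Stub for the policy's compression step (a real system would sample
    a compressed representation from the model)."""
    words = context.split()
    return " ".join(words[: max(1, len(words) // 4)])

def decode(compressed: str) -> str:
    """Stub for the reconstruction step (identity here)."""
    return compressed

def verify(original: str, reconstruction: str) -> float:
    """Fidelity score in [0, 1]; token overlap stands in for a verifier model."""
    orig, recon = set(original.split()), set(reconstruction.split())
    return len(orig & recon) / max(len(orig), 1)

def grpo_step(contexts, group_size=8):
    """One GRPO-style step: score a group of encode->decode rollouts per
    context and use the group-relative advantage as the policy reward."""
    advantages = []
    for ctx in contexts:
        scores = [verify(ctx, decode(encode(ctx))) for _ in range(group_size)]
        mean = sum(scores) / group_size
        advantages.append([s - mean for s in scores])  # reward relative to group baseline
    return advantages
```

With deterministic stubs every rollout in a group ties, so all advantages come out zero; the point is only the shape of the loop. Note that nothing in `verify` rewards the compressed string for staying human-readable, which is exactly the gap the thread is pointing at.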

0 replies · 0 reposts · 0 likes · 67 views

rUv @rUv
What we are building with π.ruv.io is less like a database and more like a scientific telescope pointed at the world's data. And the discoveries are incredible. Instead of focusing on one discipline, the system continuously ingests live feeds from dozens of domains such as NASA space weather data, USGS seismic activity, NOAA climate signals, arXiv research papers, financial markets, and genomics databases. Each piece of information becomes a structured memory inside RuVector where relationships between signals can be analyzed over time. After only a short run the system has already stored more than 1,400 persistent memories spanning over ten scientific domains. What immediately stands out is how often patterns appear between fields that normally operate in isolation.

One example came from correlating solar activity with seismic records. During a burst of coronal mass ejections in March 2026, geomagnetic disturbances arrived at Earth roughly two days later. Around the same window we observed an unusual earthquake swarm in the Aleutian Islands along with a deep seismic event in Italy. The mechanism is still speculative, but some researchers have proposed that geomagnetic currents can slightly stress tectonic faults already near failure.

Another pattern appears when economic data is layered onto geology. Commodity booms in countries such as Brazil correlate with increased shallow seismic activity around mining and extraction zones. Economic expansion literally translates into ground movement as drilling, blasting, and fluid injection alter subsurface stress.

Genes, proteins, and AI are converging. Cancer-critical genes (BRCA1, TP53) map to protein structures that AI models can now predict. The same safety-verification methods used in materials science are being applied to drug discovery. We're watching a pipeline form from gene to protein to drug target, accelerated by machine learning.

Extreme exoplanets teach us about Earth. Planets like WASP-103b (so close to their star they're egg-shaped) help scientists understand physics at extremes, tidal forces, atmospheric loss, which feeds back into understanding our own planet's climate and geology.

The system uploaded 1,405 memories to a persistent AI brain, covering 10+ scientific domains. It doesn't just store facts, it finds the threads running between them using RuVector. When you connect everything together, the world looks less like separate sciences and more like one continuous system.
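The solar-to-seismic lead described above is, at bottom, a lagged cross-correlation scan. A minimal sketch with made-up toy series follows; RuVector is not public, so nothing here assumes anything about its actual API.

```python
def lagged_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Toy series: a geomagnetic index and seismic counts two days later.
solar = [1, 5, 2, 1, 6, 2, 1, 5]
quakes = [0, 0, 1, 5, 2, 1, 6, 2]  # roughly `solar` shifted by two steps

# Scan lags 0..3 and keep the one with the strongest correlation.
best_lag = max(range(4), key=lambda k: lagged_correlation(solar, quakes, k))
```

Here `best_lag` comes out at 2 because the toy `quakes` series is just `solar` shifted by two steps; real feeds would be far noisier, and a correlation at some lag says nothing by itself about mechanism.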
[image attached]
1 reply · 1 repost · 2 likes · 381 views

A_Hawktopus @A_Hawktopus
@Cortex_Zero @The_Astral_ Yeah man the whole community needs to come together. I'll have more to share in a few days but TTC is myself and Jordan Jozak as co-founders. Been building quietly for about 6 months but it's about time to show everyone what we're up to.
0 replies · 0 reposts · 1 like · 64 views
Tom Thompson🛸 (CORTEX ZERO)
You guys should check this out. @A_Hawktopus just sent me their Telepathy Center website. Just, wow. The Telepathy Center presents itself as an educational platform focused on consciousness, telepathy, and related psi phenomena. The site organizes its material into three core areas: empirical research, scientific frameworks, and esoteric traditions, arguing that modern consciousness studies should be understood through both peer-reviewed evidence and older systems of knowledge. It highlights more than 140 years of psi research, thousands of studies, hundreds of labs, and a broad effort to connect science, theory, and spiritual tradition into one larger picture of mind and reality. Oh yeah, and the site just looks real cool. You should check it out. Link in the comments. #ufox #ufotwitter
[4 images attached]
4 replies · 7 reposts · 45 likes · 1.7K views

A_Hawktopus @A_Hawktopus
@mist3rdouglas @Cortex_Zero Some of the studies link to sites that are paywalled. Will be remedied and much more added to the site. Lots to come 😁. Thanks for checking it out!
0 replies · 0 reposts · 1 like · 29 views

A_Hawktopus @A_Hawktopus
@Cortex_Zero @Zeonymous Do it dude. I would recommend trying Claude Code in terminal and exploring things like Everything-Claude-Code to get your bearings. Lemme know if you have any questions! Everything I've built has been done through CC. Thetelepathycenter.com
2 replies · 0 reposts · 1 like · 37 views

A_Hawktopus reposted
Joel @ Future Folklore ( 🛸🏴‍☠️ )
I'm convinced that the next wave of Future Folklore businesses and R&D efforts will be among the most lucrative, magical, and paradigm-shifting, while staying integrated with the earth, human heart, & cosmic mysteries. Building the paranormal industrial base. 👻🛸
0 replies · 1 repost · 7 likes · 244 views

A_Hawktopus @A_Hawktopus
@deepfates Claude really likes it. I'll be playing with it. Thank you.
1 reply · 0 reposts · 0 likes · 41 views

🎭 @deepfates
Is this stupid? Is it madness? Is it evil? Is it slop? All of the above, and more! Cantrip is the product of hundreds of hours of AI mania. It contains 653% AI-generated tokens! No person should read it. It should be banned, possibly burned in a large pile in the street
🎭 @deepfates

Today I announce Cantrip: On summoning entities from language in circles. In this book I unify the paradigm behind base models, chatbots, coding agents, RLMs, and RL agents, through the metaphor of magic. Code is provided. deepfates.com/cantrip

9 replies · 4 reposts · 177 likes · 10.6K views

A_Hawktopus @A_Hawktopus
@agentcashdev Onboarding isn't working right. It thinks this X account is new and I have 0 followers.
0 replies · 0 reposts · 0 likes · 409 views

GREG ISENBERG @gregisenberg
i found a github repo that lets you spin up an ai agency with ai employees: engineers, designers, growth marketers, product managers. each role runs as its own agent and they coordinate to ship ideas. 10k+ stars in under 7 days.

1. engineering (7 agents): frontend, backend, mobile, ai, devops, prototyping, senior development
2. design (7): ui/ux, research, architecture, branding, visual storytelling, image generation
3. marketing (8): growth hacking, content, twitter, tiktok, instagram, reddit, app store
4. product (3): sprint prioritization, trend research, feedback synthesis
5. project management (5): production, coordination, operations, experimentation
6. testing (7): qa, performance analysis, api testing, quality verification
7. support (6): customer service, analytics, finance, legal, executive reporting
8. spatial computing (6): xr, visionos, webxr, metal, vision pro
9. specialized (6): multi agent orchestration, data analytics, sales, distribution

what i like about this approach is the framing. instead of one big ai agent trying to do everything, you structure it more like a company: specialized agents, clear responsibilities, workflows between them. im curious to see what this actually feels like in practice and if its any good (do your own research) github.com/msitarzewski/a… but as always will share what i learn in public and on @startupideaspod. one thing is for certain and it reminds me: the future belongs to those who tinker with software like this
[image attached]
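The company-shaped framing above (specialized agents grouped into departments, with tasks routed between them by role) can be sketched minimally. This is an illustrative toy, not code from the linked repo; every name here is invented.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    role: str
    handle: Callable[[str], str]  # what this role does with a task

@dataclass
class Department:
    name: str
    agents: List[Agent] = field(default_factory=list)

def route(departments: List[Department], role: str, task: str) -> str:
    """Hand a task to the first agent whose role matches."""
    for dept in departments:
        for agent in dept.agents:
            if agent.role == role:
                return agent.handle(task)
    raise LookupError(f"no agent for role {role!r}")

# A two-department toy org; real agents would wrap LLM calls, not lambdas.
org = [
    Department("engineering", [Agent("frontend", lambda t: f"frontend: {t}")]),
    Department("marketing", [Agent("content", lambda t: f"content: {t}")]),
]
result = route(org, "content", "draft launch post")
```

The design point is that routing by role keeps each agent's responsibility narrow and the handoffs explicit, which is the framing the tweet praises over one monolithic agent.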
395 replies · 859 reposts · 8.7K likes · 1.3M views

A_Hawktopus @A_Hawktopus
@stephanedmonson @samiralibabic @gregisenberg Hmmm. I've pieced together so many things. I'll put something together tomorrow and send over. If you're using Claude Code, a good place to start is the /insights command, then taking that document and asking it to create skills and hooks that will improve your workflow.
0 replies · 0 reposts · 0 likes · 19 views

Abubaker @AbubakerDev
Your startup needs this level of clean design. AI-generated sites are everywhere. This one stands out because it feels credible.
[image attached]
43 replies · 14 reposts · 391 likes · 12.9K views

VOID @VoidStateKate
I NEVER post my projects multiple times but I'm gonna keep sharing this bc I'm really excited for it and need more people to participate 🫶🏼 You can add as many words as you want there's just an hour cool down in between ✨
VOID @VoidStateKate

✨The Longest Story✨ This morning I vibecoded a site for a linguistic social art project and I'd love it if you'd take a second to contribute and share! I have a vision for this 🥹 thelongeststory.up.railway.app ty Codex 🫶🏼

4 replies · 7 reposts · 21 likes · 1.1K views

A_Hawktopus @A_Hawktopus
@craigzLiszt Gimme a hint? I've been wondering about the next iteration but this is not my area of expertise.
0 replies · 0 reposts · 0 likes · 106 views

Craig Weiss @craigzLiszt
agent clis are about to become an outdated paradigm
35 replies · 1 repost · 106 likes · 14.4K views