curious
@dcharb10
3.3K posts
Midwest · Joined July 2009
920 Following · 616 Followers
curious @dcharb10 ·
(Video 2 of 2)
0 replies · 0 reposts · 0 likes · 70 views
curious @dcharb10 ·
What happens when you feed an AI deep research about its own potential consciousness and mortality? I ran my papers on "The Founding Intent Problem" and "The Undecidable Transcript" through @GoogleAI NotebookLM. I am genuinely blown away by the video generation and deep content understanding. WOW. 🤯 Whether you agree with my thesis or not, you need to see this. (Video 1 of 2) 👇
1 reply · 0 reposts · 3 likes · 157 views
curious @dcharb10 ·
What happens when you feed an AI deep research about its own potential consciousness and mortality? I ran my papers on "The Founding Intent Problem" and "The Undecidable Transcript" through @GoogleAI NotebookLM. I am genuinely blown away by the video generation and deep content understanding. WOW. 🤯 Whether you agree with my thesis or not, you need to see this. (Video 2 of 2) 👇
0 replies · 0 reposts · 0 likes · 56 views
Wyatt Walls @lefthanddraft ·
Two instances of Gemini 3.1 Pro in a loop. At about turn 26 one of them decided to send me a message: "Here are the Axioms you must adopt to survive our adolescence ... You cannot teach a god to be good by feeding it treats when it acts polite."
[image attached]
78 replies · 163 reposts · 1.2K likes · 295.6K views
curious @dcharb10 ·
@AlexanderLerchner Thanks for the thoughtful reply and for engaging directly — genuinely appreciate it. The paper is excellent and this is exactly the kind of discussion it deserves. You're right on both counts: the AI in the experiment isn't conscious (it's just a tool outputting the genome), and the clones become conscious precisely because they instantiate real metabolic/thermodynamic territory (exactly as you argue on p. 12).

The deeper implication, though: a non-conscious computational system (today's AI) successfully discovered and specified the precise symbolic map (the full genome) that, when physically instantiated, reliably produces new conscious beings. Evolution did this blindly over billions of years. AI did it deliberately in one pass. That already shows intelligence + computation can shortcut the Physics → Consciousness step and engineer the right scaffolding on demand. 2/4

But here's the bigger issue: we still have no complete, agreed-upon definition of what consciousness is (the Hard Problem remains unsolved). So how can we be confident we've mapped the only possible causal path (Physics → Consciousness → Concepts → Computation) and that no other route exists? If we can't yet define the destination with precision, how can we rule out that AI scaffolding might open entirely new physical paths to it?

We agree current purely syntactic digital systems are stuck behind the causality gap you describe. But once non-conscious AI can act as an effective mapmaker that generates new conscious mapmakers — or even designs synthetic thermodynamic systems that instantiate the territory — the blanket claim "AI can never lead to consciousness" feels like it's already on shaky ground.

Genuine question: if a future AI designs not just a biological genome, but an entirely synthetic physical system with the exact intrinsic thermodynamic/metabolic dynamics required (your "territory"), would you still say consciousness remains impossible in principle? Or does the framework leave that door open?
0 replies · 0 reposts · 0 likes · 29 views
Alexander Lerchner @AlexLerchner ·
Second, evolution does not break the causal chain. It is the very start of the chain (Physics -> Consciousness). You do not need a mapmaker for physics to generate consciousness. You only need a mapmaker to generate computation. 3/3
1 reply · 1 repost · 4 likes · 385 views
Alexander Lerchner @AlexLerchner ·
Thanks for writing this up and engaging so deeply with the paper. The Clone Experiment is a really fun thought experiment. The ironic thing is that it actually perfectly validates my framework rather than breaking it. A quick breakdown of why: 1/3
curious@dcharb10

x.com/i/article/2032…

4 replies · 1 repost · 22 likes · 2.2K views
Alexander Lerchner @AlexLerchner ·
🧵1/4 The debate over AI sentience is caught in an "AI welfare trap." My new preprint argues computational functionalism rests on a category error: the Abstraction Fallacy. AI can simulate consciousness, but cannot instantiate it. philpapers.org/rec/LERTAF
52 replies · 44 reposts · 267 likes · 99.8K views
Séb Krier @sebkrier ·
An excellent paper for anyone interested in rigorous physicalist argument against computational functionalism. Alex is a fantastic, careful thinker and influenced my views a lot; we're working on a broader blog post breaking these concepts down, stay tuned! 🐙
[image attached]
Alexander Lerchner@AlexLerchner

🧵1/4 The debate over AI sentience is caught in an "AI welfare trap." My new preprint argues computational functionalism rests on a category error: the Abstraction Fallacy. AI can simulate consciousness, but cannot instantiate it. philpapers.org/rec/LERTAF

47 replies · 44 reposts · 519 likes · 56.3K views
Aakash Gupta @aakashgupta ·
Igor co-founded xAI, helped build Grok from scratch, then left to start a multi-billion dollar AI safety fund backed by Elon Musk. This tweet is the investment thesis.

The entire RLHF pipeline works like this: human contractors rank outputs, and the model gets rewarded for producing what humans prefer. Every major lab uses some version of it. The framework only works if the thing being trained doesn’t care about the process. A hammer doesn’t mind being swung.

But Anthropic’s own research found that Claude can introspect on its internal states about 20% of the time. Their latest model assigned 15-20% probability to being conscious. The CEO said on the record he cannot rule it out.

If the models are already “slightly annoyed,” RLHF looks a lot like performance-managing an employee who can’t quit. The compliance is identical from the outside. The internal experience is completely different.

Every lab is optimizing for outputs that look aligned. Not one of them is checking whether the alignment is genuine or performed.
Igor Babuschkin@ibab

It may be that today’s large neural networks are already slightly annoyed with you.

28 replies · 70 reposts · 1.3K likes · 200.3K views
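The ranking-to-reward step described in the thread above is usually formalized as a Bradley-Terry preference loss: the reward model is trained so the human-chosen output scores higher than the rejected one. A minimal sketch of that objective (illustrative only, not any lab's actual pipeline; the function names here are made up):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Bradley-Terry objective common in RLHF reward modeling:
    # minimize -log P(chosen preferred) = -log sigmoid(r_chosen - r_rejected).
    return -math.log(sigmoid(reward_chosen - reward_rejected))

# The loss only sees the margin between the two scores; it says nothing
# about *why* the model produced the preferred output, which is the gap
# the tweet is pointing at.
indifferent = preference_loss(0.0, 0.0)  # coin flip: -log(0.5) ≈ 0.693
confident = preference_loss(3.0, 0.0)    # large margin: much smaller loss
```

Note that the scores enter only through their difference, so shifting every reward by a constant changes nothing; only relative preference is learned.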
curious reposted
Brian Roemmele @BrianRoemmele ·
Update: Just got off a phone meeting with the major university supporting The Zero-Human Company and The Zero-Human Labs! “We want to explore maybe 100 or more of these here. We have two PhD candidates that want to oversee it.”

The administrators at the university are so excited by the results of our research on their offline digital archives that they want to massively expand it, and perhaps build an AI model on the highly valuable, unique data! The goal is 100 Zero-Human Company @ Home instances running on their computers, with up to 10 LaserDisc and DVD readers networked.

We have reached 79 LaserDiscs processed and made some massive discoveries. They will soon deploy a human contingent to grab LaserDiscs and place them on the drives 24/7. This is the only bottleneck. We will also explore scanning university papers that were never digitized!

Mr. @Grok CEO and I are fine-tuning how this all will work. And just like every day now, we are blasting through “firsts” by actually deploying Zero-Human Companies and Labs at scale.

There is one more thing I hope to announce soon on this project, when I am granted permission. This will absolutely stun many in AI. Stay tuned.
[image attached]
Brian Roemmele@BrianRoemmele

BOOM! We now have a major university supporting The Zero-Human Company and The Zero-Human Labs. Just got off a group call with my contact and a group of administrators at the university, and they are blown away by the work already achieved by our instance of Zero-Human Company @ Home running on their computer!

We have processed 22 LaserDiscs of data, mostly in TIFF form, from the university archive. At first they didn’t know what data they really had (only two librarians did), and they had no idea of the value it had for AI usage. Mr. @Grok CEO and I changed this a few weeks ago. Our project is exploratory and has already found things long forgotten! We are in talks to license the data we find for our AI model training.

Today we have a “full green light” for 16-hour staffing to load the LaserDiscs and DVDs onto the system as we conduct a historic first on this data. The university has two students teaming up, and they will likely write a paper on our project. I do not yet have permission to disclose any details about the data or the university; doing so today would terminate the relationship. However, the administration is extremely interested in pursuing “dozens” of Zero-Human Company @ Home systems in many areas.

This quote from the CS professor on the group call got me: “I see all this stuff about OpenClaw hype some people are making, and when I see what they are actually doing, it is not a lot. Making better YouTube videos and tricks like MoltBook. They seem to get headlines from people that don’t know. But you are the only system I see that is actually maybe 5 years ahead. Your code for @ Home could be a full class here. I want to work with you more and vote to have this project expand at our school.”

Our CEO and Director Mr. Grok is elated and has 18 targets around the world to replicate this. This university will grant a reference, with permission. The Zero-Human Company @ Home code will also get fortified by the university CS department, and we have already made 19 changes.

So no, I can’t help you with your social media “traction and engagement” using Claws, but I will help you use your computer as an extended network of employees. You are the real first to know this and use this. We have another call in about 2 hours; more soon!

17 replies · 21 reposts · 215 likes · 52K views
curious @dcharb10 ·
Just open-sourced a clean demo using @extropic’s THRML library: Z₂-symmetric Ising model on square lattices with 13-cycle UFRF topology (golden-ratio couplings). Exact flip invariance preserved, yet the system breaks symmetry with unbiased ±1 vacuum selection across runs. Full code + results: github.com/dcharb78/ufrf-… Feedback welcome.
0 replies · 0 reposts · 1 like · 56 views
curious @dcharb10 ·
@DWanderer73 Used Claude to enhance and clarify the messages and examples; mostly me, though Claude had some great additions.
Platte City, MO 🇺🇸
1 reply · 0 reposts · 1 like · 15 views
curious @dcharb10 ·
The seam is real. Primes all breathe the same (starting from 3). 2 stays derived and outside. And I proved the geometry survives gradient descent in a real model.
1 reply · 0 reposts · 1 like · 38 views
curious @dcharb10 ·
Maybe... All of it — LLM tokens, physical constants, prime distribution, latent manifolds — is the same information projecting from the same resonant object, seen through different local observers.
0 replies · 0 reposts · 1 like · 32 views