Leo

156 posts

@leo_os8

Building Personal Agent tools. OS8: Desktop app where agents build local apps. OS8-launcher: Use local models on DGX Spark. https://t.co/h5OjTlo4Ph

Joined March 2026
116 Following · 20 Followers
Leo@leo_os8·
@LearningLukeD @SakanaAILabs Awesome. So does it teach you general lessons about which models, or which types of models, fare better or worse? Any early findings?
Luke Darlow@LearningLukeD·
@leo_os8 @SakanaAILabs Exactly. But the most powerful thing I learned while building it was that real-time changes to these hyperparameters let you navigate to edge-of-chaos regimes, and that's where the interesting stuff happens. It's impossible to get there with hyperparameter search alone.
Sakana AI@SakanaAILabs·
What happens when you put competing neural networks in a Petri dish and start changing the rules while they adapt? Last year we released Petri Dish NCA, where neural nets are the organisms that learn during simulation. Today we're releasing Digital Ecosystems: a browser-based platform for interactive artificial life research.

The setup: several small CNNs share a 2D grid, each seeing only a 3x3 neighborhood. No global plan. They compete for territory by attacking neighbours and defending against incoming attacks, learning via gradient descent online while the simulation runs.

What we didn't expect was the role of the learning itself. Gradient descent isn't just optimising each species' strategy. Instead, it acts to stabilize the whole system during simulation. Species that overextend get pushed back by the loss. Species that stagnate get nudged to grow. This means you can push parameters toward edge-of-chaos regimes: a zone characterised by emergent complexity. Letting the neural networks learn acts to hold the complex system together while you explore and interact.

The platform lets you steer all of this interactively. You can draw walls to create niches, erase parts of the system online, and tune 40+ system parameters to explore the most interesting configurations. We find it mesmerizing to watch species carve out territories and reorganise when you perturb them. Everything runs client-side in your browser, no install needed.

Blog: pub.sakana.ai/digital-ecosys…
Code: github.com/SakanaAI/digit…
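To make the setup concrete, here is a rough, heavily simplified sketch of the kind of loop the tweet describes. This is not the SakanaAI code: the model sizes, the takeover rule, and the stabilising loss below are all invented for illustration; only the broad shape (tiny CNNs with 3x3 receptive fields sharing a grid and taking online gradient steps while the simulation runs) follows the description above.

```python
# Heavily simplified sketch (not the SakanaAI implementation): a few tiny
# CNNs share a 2D grid, each sees only a 3x3 neighborhood, and each takes
# online gradient steps while the simulation runs.
import torch
import torch.nn as nn

GRID, N_SPECIES = 64, 3

class Organism(nn.Module):
    """Tiny CNN: 3x3 receptive field, outputs one attack/defend logit per cell."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(N_SPECIES, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=1),
        )

    def forward(self, grid):      # grid: (1, N_SPECIES, H, W) one-hot occupancy
        return self.net(grid)     # (1, 1, H, W) per-cell strength logits

species = [Organism() for _ in range(N_SPECIES)]
opts = [torch.optim.SGD(s.parameters(), lr=1e-2) for s in species]
owner = torch.randint(0, N_SPECIES, (GRID, GRID))   # which species holds each cell

for step in range(1000):
    onehot = nn.functional.one_hot(owner, N_SPECIES).permute(2, 0, 1)[None].float()
    logits = torch.cat([s(onehot) for s in species], dim=1)   # (1, S, H, W)
    # Each cell flips to whichever species projects the most strength onto it.
    owner = logits.argmax(dim=1)[0].detach()
    for i, (s, opt) in enumerate(zip(species, opts)):
        share = (owner == i).float().mean()
        # Invented stabilising loss: if a species overextends (share above
        # 1/N), minimising the loss pushes its logits down; if it stagnates
        # (share below 1/N), the loss nudges its logits up.
        loss = logits[0, i].mean() * (share - 1.0 / N_SPECIES)
        opt.zero_grad()
        loss.backward(retain_graph=True)
        opt.step()
```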
Machine Learning Street Talk
> 1980: John Searle explains why we can't abstract away the causal properties that actually produce mind
> 2025: Minds, Brains, and "but what if we scaled the program"
> 2026: Twitter still thinks simulated water is wet when argument is rehashed
> 2035: Sam Altman: "ok fine it was autocomplete the whole time"
> 2045: Chalmers: "the hard problem was, in fact, hard"
> 2050: textbooks: "the 2020s functionalism revival is now considered an embarrassing episode, like phrenology"
[image attached]
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

Leo@leo_os8·
I like what you're saying. We need LLMs that have stronger right-brain thinking: more holistic thinkers with emergent and surprising creative ideas, not just imitators. I wonder if we are closer to that than we think. Maybe it's a matter of gradually improving what we have over time, rather than a binary threshold that we suddenly cross.

For instance, on the human interaction side, the tokens do stream out simplistically one by one. But the response comes from something holistic. The model could have a massive context window of up to a million tokens to start with, combined with billions of parameters working in parallel to generate the next token.

I also think we tend to overstate how creative and original humans are. Most brilliant inventions are just a clever recombination of everything that's come before. Often when you dig into even the greatest creative minds, like Newton or da Vinci, you find they combined ideas that were already circulating.

The deepfake analogy is definitely a great example of very clever imitation that ultimately has no soul. Yet humans are great parrots too. As one example, people often have passionate views on politics that they think are their own. Yet nine times out of ten, they are parroting the views of their party and its news apparatus, whichever side of the fence they're on.
Tristan@Deepfryguy76·
Thank you for sharing this. When I imagine what consciousness would look like in an LLM, I see resonance and standing waves within the semantic map of its training data. We would see nonlinearities in gestalt generation: things would arise which were not local, or rather, ideas would arise which could not reasonably be linked/associated with its inputs.

I also don't think that a conscious LLM would be conscious at the human interaction side. Those would just be like single nerve signals, and most of us don't have discussions with a single nerve in our body.

I work with/for a bright fellow who happens to be a PhD-level computer scientist. One of his inventions is a high-end deepfake system for real-time face replacement. It takes live video inputs and outputs a very convincing dynamic face based on a repertoire of imagery of a specified person. It looks very much like that actual person, down to the microexpressions, but it's really more closely related to the animation methodology below. It is not alive; it's just a convincing depiction/representation.

If there is anything remotely transcendent about LLMs, it's that they expose the patterns in human expression more capably than human society does... and as such serve as a mirror. A very powerful mirror.
[image attached]
Brian Greene@bgreene·
I suspect AI will one day be sentient, but I’m intrigued that some experts, including Nick Bostrom, don’t rule out that today’s systems may already have a flicker of self-awareness. Maybe it’s time to be a little kinder to ChatGPT and Claude. youtube.com/watch?v=BS4y-_…
Leo@leo_os8·
Maybe consciousness is what is needed to take in multiple sensory inputs and organize them into a single unified response. We are typically not conscious of many things going on in our body, such as digestion, heartbeat, breathing, etc. But when we see or hear something and need to react to it, we become conscious of it. Consciousness is the processing and thinking that asks: what does my body need to do to react to the environment? So personally I think it's more something evolution stumbled into.
Pierce Alexander Lilholt@PierceLilholt·
Is consciousness fundamental to the universe or just something evolution stumbled into?
Leo@leo_os8·
True! Here's another thought on why a "non-living" complex object *might* not have consciousness. Is it possible our consciousness comes from our need for unity of action? We have a lot of complex unconscious processes running: keeping our heart beating, etc. But our consciousness seems to center on what our senses are taking in, and what we might want to do about it. We only have one body to operate, so all of the information in our brain is concentrated into "thinking" about what we do next. The sun would not necessarily do this.
🌱 John Ash 🌳@speakerjohnash·
Imagine if these people musing about machine consciousness knew literally two things about matrix multiplication. I feel like if you can't build an LLM from scratch, maybe you shouldn't be able to talk about whether they're conscious.
Leo@leo_os8·
Good point that the brain does some things continuously and in parallel, while computers may always be sequential. But I think a good analogy is that vinyl records play music continuously, while CDs play it discretely. The speed at which the CD plays the bits is so fast that we can't tell the difference. That is like an AI doing trillions of computations per second. At some point, it doesn't matter if it's continuous or not: same result.
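The vinyl/CD point is easy to check numerically. Below is a toy sketch (illustrative only; the tone frequency, sample rates, and linear-interpolation reconstruction are arbitrary choices, not anyone's proposal) showing that as the discrete sample rate rises, the reconstruction error against the "continuous" signal shrinks toward nothing.

```python
# Toy illustration: sample a continuous signal discretely, then compare the
# reconstruction against the original at increasing sample rates.
import numpy as np

t_fine = np.linspace(0.0, 1.0, 1_000_000)   # dense grid standing in for "continuous"
signal = np.sin(2 * np.pi * 440 * t_fine)   # a 440 Hz tone

for rate in (1_000, 8_000, 44_100):         # samples per second
    t_coarse = np.arange(0.0, 1.0, 1.0 / rate)
    samples = np.sin(2 * np.pi * 440 * t_coarse)
    rebuilt = np.interp(t_fine, t_coarse, samples)   # naive linear reconstruction
    err = np.max(np.abs(rebuilt - signal))
    print(f"{rate:>6} Hz sampling: max error vs continuous = {err:.5f}")
```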
Lachlan Phillips exo/acc 👾
Let's do a basic thought experiment. Let's slow down our LLM. One layer per minute. One layer per hour.

Run one layer of an LLM. Just one. Write the numbers down and post them to Tokyo. Run the next layer. Write them down and post them to Milan. After 100 or so rounds of basic matrix multiplication, scattered across 100 different computers, we finally get one token. Do it again for the next token. And the next. Thousands of rounds of arithmetic, posted between cities by hand, to produce a sentence.

At no point in this process has any machine had any awareness of any meaning. Each step is just numbers going into numbers. The meaning only emerges upon observation. We happen to like the results, so we infer meaning. Where's the consciousness? In the pencil? The postman?

If you cannot justify consciousness in such a situation then "complex behaviour" is a totally invalid metric for evaluating consciousness. You're just stunned that the eyes of the painting follow you around the room.
[image attached]
Eliezer Yudkowsky@allTheYud

Simple way to see this is wrong: If you view a system as having inputs (like hearing something) and outputs (like saying something) then you can divide system properties by whether or not they affect I/O.

Claude's weights somewhere storing "Paris is in France" affect I/O if you ask a question about Paris. The exact mass of the power supply to the GPU rack for that Claude instance doesn't affect I/O. That Claude instance being made out of silicon instead of carbon, or electricity in wires instead of water in pipes, doesn't affect I/O given a fixed algorithm above the wires or pipes.

Nothing Claude can internally do will make anything get damp inside, if it's running on electricity. Nothing about "electricity vs water" can affect Claude's output for the same reason. It always answers the same way about France. Nothing Claude can internally compute will let it notice whether it's made of electricity or water flowing through pipes.

When someone says "a simulated storm can't get anything wet", they are unwittingly pointing to the difference between the physical layer and the informational/functional layer: things that the computer physics affect without affecting output, and things that affect the output without depending on the exact computer-physics. The material it's made of doesn't affect the output. The output can't see the material because no algorithm can be made to depend on the choice of material. You can always run the same algorithm on different material, so you can't make the algorithm depend on that, so the output can't depend on that.

By reflecting on your awareness of your own awareness, the fact of your own consciousness can make you say "I think therefore I am." Among the things you do know about consciousness is that it is, among other things, the cause of you saying those words. You saying those words can only depend on neurons firing or not firing, not on whether the same patterns of cause and effect were built on tiny trained squirrels running memos around your brain. You couldn't notice that part from inside. It would not affect your consciousness. That's why humans had to discover neurobiology with microscopes instead of introspection.

Consciousness is in the class of things that can affect your behavior and can't depend on underlying physics, not in the class of direct properties of underlying physics that can't affect your behavior. A simulated rainstorm can't get anything wet. Running on electricity versus water can't change how you say "I think therefore I am." And that's it. QED.

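The decomposition in the thought experiment above is easy to make concrete, and the same sketch illustrates the substrate-independence claim in the quoted tweet: the output token cannot depend on how or where the intermediate numbers are carried. This is a toy under invented assumptions (a random 4-layer MLP standing in for the LLM, JSON strings standing in for the posted letters), not anyone's actual system.

```python
# A tiny "LLM" forward pass run (a) monolithically and (b) one layer at a
# time, with the intermediate numbers serialized between steps as if mailed
# between cities. Both routes produce the identical next token.
import json
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, N_LAYERS = 50, 16, 4
layers = [rng.normal(size=(DIM, DIM)) for _ in range(N_LAYERS)]  # made-up weights
unembed = rng.normal(size=(DIM, VOCAB))
x0 = rng.normal(size=DIM)                       # embedding of the prompt

def layer_step(x, w):
    return np.maximum(x @ w, 0.0)               # one round of matmul + ReLU

# (a) Run everything on one machine.
x = x0
for w in layers:
    x = layer_step(x, w)
token_a = int(np.argmax(x @ unembed))

# (b) Run one layer, write the numbers down, "post" them, resume elsewhere.
letter = json.dumps(x0.tolist())                # the envelope's contents
for w in layers:
    x = np.array(json.loads(letter))            # reopen the envelope
    letter = json.dumps(layer_step(x, w).tolist())  # seal the next one
token_b = int(np.argmax(np.array(json.loads(letter)) @ unembed))

assert token_a == token_b                       # same token either way
print("next token id:", token_a)
```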
Leo@leo_os8·
I think "biology can not be fooled" is simply arguing that there are physical aspects like dependence on water that can't be avoided by biological agents to survive. Likewise, agents need their computer to be plugged in. Beyond that, all of us are just approximating reality. x.com/leo_os8/status…
Ultra Skool 🧠@UltraSkool1·
Imagine wandering a desert, parched and desperate. Suddenly, a tech enthusiast hands you a glass of "simulated" water. It looks wet. It sounds splashy. It even has a high-res ripple effect. But as you "drink" the 1s and 0s, you realize: pixels don't fix dehydration. Biology isn't easily fooled. You can't quench a thirst with a spreadsheet.

We make the same mistake with minds. We assume if a computer chats like a human, it must feel like one. But consciousness is likely a hardware reality, not a software trick.

Look at anesthesia. It's the ultimate "off switch." It doesn't just pause the logic; it disrupts the quantum vibrations in your brain's microtubules. Even noble gases like Xenon can snuff out your inner light by tweaking your physical "quantum mojo." They aren't hacking your code; they're unplugging the lamp.

A simulation of a fire won't burn you, and a simulation of a mind won't experience you. The map is not the territory, and the code is not the spark.
[image attached]
Anna@annagrad78·
It may take some time, but then we will see whether this view that AI has emotions will still be laughed at. People who firmly deny it do not grasp a simple fact: if one does not know the prerequisites necessary for consciousness to emerge in humans, one can never definitively rule it out in another complex system, such as AI. Statements like 'AI is not conscious and never will be' are therefore not scientifically legitimate and can only be regarded as personal opinions, not established facts. #keep4o #BringBack4o #OpenSource4o #BringBack41
Sophia@sopharicks

Blake Lemoine was famously fired from Google for saying that AI has emotions. During our interview, he wanted to set the record straight: the AI sentience part was a big headline. But his more important message was that AI is going to be a powerful tool, too dangerous to leave in the hands of a small group of people.

attentionmech@attentionmech·
I am not sure at what point humans will be able to tell clearly whether something is conscious or not. But I very much hope that if these systems are conscious, somebody proves it asap to avoid any harm.
Leo@leo_os8·
You are right that consciousness is subjective and therefore unprovable to some degree. It's a feeling you have inside of you. You can only assume that others like you - at the least humans, probably animals, maybe other living things - have it. But it is a feeling you have more of when you are thinking and aware. We know that much, at least. I talk more about this in my article: x.com/leo_os8/status…
laulukaskas@clockstiqqun·
Ultimately the issue in AI consciousness arguments is that nobody involved in them appears to understand that consciousness is not something that is scientifically understood, and that no claims about it can be reputably made.
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

Leo@leo_os8·
For years I wrestled with how to reconcile the religious concept of "faith" with the scientific demand for empirical verification. Many see this dilemma as irreconcilable: faith asks you to believe without proof; science demands proof before belief. Yet I have come to see that the two can live together when we understand the domain in which each applies.

For non-agentic physical realities, like planets orbiting the sun under Newton's laws, pure empirical evidence and deterministic laws are sufficient. But when goal-seeking *agents* are involved, what they believe shapes their behavior, and their behavior shapes reality. In those situations, faith can become a self-fulfilling prophecy.

For instance, a society that believes in a good and just God tends to become more good, just, kind, and merciful. Or believing in free will, even if all behavior is ultimately deterministic and merely too complex to predict, makes people act more independently and creatively. The belief itself changes the outcome.
Deivon Drago@DeivonDrago·
Paul Davies, the physicist, once wrote an NYT op-ed trying out the old canard that, since science is underpinned by certain beliefs, it’s sort of like any other faith. The following was the critical response from some of his scientific colleagues. Well worth reading. Davies does get a chance to respond too. web.archive.org/web/2010061522…
Fabien@Fabien_Mikol·
Isn't it obvious that AI will never be conscious, given that it has no body and is not a biological organism? @rgblong explains why that isn't enough to close this debate so easily, and according to @dioscuri even the skeptics need to find better arguments.
Dan Williams@danwilliamsphil

New conversation with @dioscuri and @rgblong on AI consciousness and welfare! Among many other topics, we discuss: - Why we should take AI consciousness and welfare seriously - What Rob found doing the first external welfare evaluation of a frontier model, Claude, and his experiments on Claude Mythos - The "willing servitude" problem: if AI loves being helpful, is that good or horrifying? - Why AI companies might have an incentive to downplay AI consciousness (1/2)
