Colin Hales

4.5K posts

@Dr_Cuspy

Neuroscientist/Engineer. Artificial General Intelligence builder. Expert in brain electromagnetism.

Melbourne, Australia · Joined April 2012
1.8K Following · 1K Followers
Colin Hales
Colin Hales@Dr_Cuspy·
@skdh @kareem_carr Until the standard model confronts its most fundamental absence, "What is it like to 'be' something in the standard model?", it will fail as an explanation of the scientific observer, and we'll still be here in fifty years yammering about nothing.
Sabine Hossenfelder
Sabine Hossenfelder@skdh·
First, I think you're confusing self-awareness with consciousness. But the bigger issue, I think, is that neither consciousness nor self-awareness is binary. They're not either on or off. The digital map, I would say, has an ε of self-awareness. So I don't see any contradiction here.
Dr Kareem Carr
Dr Kareem Carr@kareem_carr·
When people say AI is "conscious" what they seem to mean is it's a mathematical model that has within it a representation of the model itself. By this definition, a digital map that contains an active tracker of the map itself is also conscious, which is kind of silly.
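The "digital map with an active tracker of itself" definition above can be made concrete. A minimal sketch in Python (all class and method names are hypothetical illustrations, not any real API) showing a model that contains a representation of its own state, which is all the quoted definition requires:

```python
# Sketch of the "self-model" definition of machine consciousness quoted
# above: a map object that contains an active tracker of itself.
# All names here are hypothetical, for illustration only.

class DigitalMap:
    def __init__(self, terrain):
        self.terrain = terrain      # the "world" the map represents
        self.self_marker = None     # the map's representation of itself

    def update_self_marker(self, position):
        # The map now holds a model of its own location within the
        # territory it depicts: the "self-representation".
        self.self_marker = position

    def describes_itself(self):
        return self.self_marker is not None

m = DigitalMap(terrain={"A": (0, 0), "B": (3, 4)})
m.update_self_marker((1, 2))
print(m.describes_itself())  # True
```

By the quoted definition, this trivial object would qualify as "conscious", which is exactly the reductio the tweet is pointing at.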
Earl K. Miller
Earl K. Miller@MillerLabMIT·
Spiking and synapses are important, but the brain also uses electric influences. A historical review of ephaptic field research: from early foundations through contemporary renaissance doi.org/10.3389/fnhum.… #neuroscience
Colin Hales
Colin Hales@Dr_Cuspy·
@MillerLabMIT @neuro_nasko Tam and I were part of the editor group that did a special issue on EM field theories of consciousness. Its time has come. Q: "What is it like to be a human being?" and Q: "What is it like to be the EM field of a human brain?" are the same question. A: "Consciousness." frontiersin.org/journals/human…
Colin Hales
Colin Hales@Dr_Cuspy·
@kanair The whole thing depends on things in a model making up for the vast amount of information, provided by "being" a brain, that is lost and that you must supply. Be warned: you could be right, but the "map" may be so complex that it's intractable.
Ryota Kanai
Ryota Kanai@kanair·
I wrote the original post exactly because I think the argument in this paper is wrong (or naive). Yes, semantic labels may depend on interpretation. But the causal organization of a computer does not. Voltages, memory states, gates, recurrent dynamics, and internal state transitions causally constrain future states whether or not we call them symbols. The key distinction is: observer-relative interpretation ≠ intrinsic computational organization. A map needs a mapmaker. A mechanism does not.

So the real question is not whether symbols magically cause consciousness, but whether a physical system instantiates the right intrinsic causal, dynamical, integrated, and geometric organization. I'm currently working on a fuller paper on this position, which I call intrinsic computational functionalism and which can be stated as follows: consciousness, if computationally constituted, depends on physically realized computational structures that are intrinsic to the system, not on externally imposed semantic interpretations.
Ryota Kanai
Ryota Kanai@kanair·
I often hear arguments that simulated consciousness cannot be real consciousness. But these arguments often miss the point that simulations are physically instantiated in a computer with real causal dynamics. It is not like a fictional character with no internal mechanisms.
Colin Hales
Colin Hales@Dr_Cuspy·
@GaryMarcus The thing that is missing is autonomous robots with inorganic brains that operate like biology. That would change things.
Gary Marcus
Gary Marcus@GaryMarcus·
Are LLMs really more important than fire or electricity? "Honestly, a ton of what we've developed in my lifetime amounts to scaling up the delivery of information and entertainment and the frictionlessness of certain financial transactions. These are real improvements! ... But compare them seriously to what came before and the disproportion becomes almost embarrassing. The fundamental architecture of daily material life - how we heat our homes, how we move from place to place, how we grow and store and cook food, how we build structures - has changed remarkably little since 1970. ... The cars go to the same places. The planes aren't even marginally faster. The houses are built the same way. People still die of cancer. ... Code cannot insulate your house; no algorithm has ever laid a water pipe; the internet has not built a single mile of high-speed rail. What our current stagnation shows, collectively, is that the improvements in material human life that matter the most - abundance in warmth, in calories, in clean water, in physical safety, in hours of freedom from labor - were all achieved by technologies that operated on atoms: steel, concrete, copper wire, chlorine, penicillin..." — Freddie deBoer
Colin Hales
Colin Hales@Dr_Cuspy·
@_fernando_rosas You'll never convince the computational functionalist to scientifically engage the potential falsehood of it. It means they have to do science that makes artificial brains without using general purpose computers. It's a genuine cargo cult.
Fernando Rosas
Fernando Rosas@_fernando_rosas·
This view, known as computational functionalism, is taken as obviously true by a large portion of the ML and CS communities. But it has been progressively rejected by most people who actually study consciousness. References below 👇🏽
Eliezer Yudkowsky@allTheYud

Simple way to see this is wrong: If you view a system as having inputs (like hearing something) and outputs (like saying something) then you can divide system properties by whether or not they affect I/O.

Claude's weights somewhere storing "Paris is in France" affect I/O if you ask a question about Paris. The exact mass of the power supply to the GPU rack for that Claude instance doesn't affect I/O. That Claude instance being made out of silicon instead of carbon, or electricity in wires instead of water in pipes, doesn't affect I/O given a fixed algorithm above the wires or pipes. Nothing Claude can internally do will make anything get damp inside, if it's running on electricity. Nothing about "electricity vs water" can affect Claude's output for the same reason. It always answers the same way about France. Nothing Claude can internally compute will let it notice whether it's made of electricity or water flowing through pipes.

When someone says "a simulated storm can't get anything wet", they are unwittingly pointing to the difference between the physical layer and the informational/functional layer. Things that the computer physics affect without affecting output; things that affect the output without depending on the exact computer-physics. The material it's made of doesn't affect the output. The output can't see the material because no algorithm can be made to depend on the choice of material. You can always run the same algorithm on different material, so you can't make the algorithm depend on that, so the output can't depend on that.

By reflecting on your awareness of your own awareness, the fact of your own consciousness can make you say "I think therefore I am." Among the things you do know about consciousness is that it is, among other things, the cause of you saying those words.
You saying those words can only depend on neurons firing or not firing, not on whether the same patterns of cause and effect were built on tiny trained squirrels running memos around your brain. You couldn't notice that part from inside. It would not affect your consciousness. That's why humans had to discover neurobiology with microscopes instead of introspection. Consciousness is in the class of things that can affect your behavior and can't depend on underlying physics, not in the class of direct properties of underlying physics that can't affect your behavior. A simulated rainstorm can't get anything wet. Running on electricity versus water can't change how you say "I think therefore I am." And that's it. QED.

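The substrate-independence claim in the thread above can be illustrated directly: run the same algorithm on two mechanically different "substrates" (here, two different Python implementations standing in for wires vs. pipes) and the I/O mapping is identical, so nothing downstream of the I/O boundary can detect which one ran. A hypothetical sketch, not a proof:

```python
# Illustration of the quoted substrate-independence argument: two
# physically different realizations of the same algorithm produce
# identical outputs, so no observer of the outputs alone can tell
# which realization was used. Names are illustrative only.

def add_iterative(a, b):
    # "Substrate" 1: repeated increment (think water in pipes).
    for _ in range(b):
        a += 1
    return a

def add_builtin(a, b):
    # "Substrate" 2: native machine addition (think electricity in wires).
    return a + b

# Same I/O mapping despite different internal mechanisms:
for a, b in [(0, 0), (2, 3), (10, 7)]:
    assert add_iterative(a, b) == add_builtin(a, b)
print("indistinguishable at the I/O boundary")
```

The point of the sketch is that any property distinguishing the two functions lives below the I/O boundary, which is exactly the layer the argument says the output cannot depend on.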
Colin Hales
Colin Hales@Dr_Cuspy·
@MillerLabMIT If anyone wants a recent book on this approach, one with a well-constructed history:
[image attached]
Colin Hales
Colin Hales@Dr_Cuspy·
@MillerLabMIT More than weird! To claim/prove they weren't 'awash' would be to disprove the standard model of particle physics! The only thing outside the nucleus & electrons that _isn't_ EM field is the gravitational field! 18 orders of magnitude out of the picture. pubmed.ncbi.nlm.nih.gov/35782039/
Colin Hales
Colin Hales@Dr_Cuspy·
@MillerLabMIT Once again... "Artificial neural networks" are only artificial in the sense of "made by humans". Computed fire equations are not fire. Yet, uniquely in the whole of science, neurons are supposedly different. Want to see an actual artificial neuron at 20,000x scale?
[image attached]
Colin Hales
Colin Hales@Dr_Cuspy·
@niko_kukushkin @drmichaellevin @MillerLabMIT "Electricity" is a 17th-century archaic term that misrepresents the origin of causality in the brain, which is the electromagnetic field. Ephaptic coupling does not need charge carriers for remote, line-of-sight influence. The term "electricity" is a misdirection that must end.
Colin Hales
Colin Hales@Dr_Cuspy·
You all should probably read this. It's AGI potential on the same overall EM-field-based signal-processing arc as my "EMChip"-based robotics project. Not the same, but in the same vein. Watch out: this idea is going to displace computers from AGI. Only a matter of time.
Johnjoe McFadden@johnjoemcfadden

How to make a conscious AI: "Computing with electromagnetic fields rather than binary digits: a route towards artificial general intelligence and conscious AI" frontiersin.org/journals/syste…

Colin Hales
Colin Hales@Dr_Cuspy·
@chipro "AI" based on the use of computers, disembodied, non-autonomous, will never be a scientist.
Chip Huyen
Chip Huyen@chipro·
For those that chose Never, what do you do?
Chip Huyen
Chip Huyen@chipro·
How long do you think AI will be able to fully automate your job?
Colin Hales
Colin Hales@Dr_Cuspy·
@KordingLab Good science plugs a hole ... But god help you if you draw attention to it and it elicits a threat... 20 years of angst.
[image attached]
Colin Hales
Colin Hales@Dr_Cuspy·
@KordingLab In the late 1700s, a century of predictively useful "phlogiston" proved utterly devoid of connection to the world. In ML: 75 years of science that presupposes scientific observers yet is neither predictive nor explanatory of a scientific observer (the thing that puts you in "the world").
Colin Hales
Colin Hales@Dr_Cuspy·
@Grady_Booch It's interesting: practitioners in the science of consciousness, for its 35-year life, do not work on sentience. Its explanandum is "the 1st-person perspective". Anyone/thing that uses the word sentience is automatically classified as under-informed and to be avoided.😊
Grady Booch
Grady Booch@Grady_Booch·
Today, I am Very Annoyed with Claude. It (a) added code I didn't ask for, (b) deleted code I did not tell it to, (c) broke code that used to work because of these changes, then (d) lied when I called it on these things. Adding code I did not ask for and NOT reading/reviewing those changes is the way malicious stuff gets introduced. Grr. Were Claude my intern, I would have told it that it was time to review their life choices.
Colin Hales
Colin Hales@Dr_Cuspy·
@TOEwithCurt "Being" electromagnetism, which is what the brain is from the level of atoms up, delivers the 1st-person perspective. I have a 2014 article that specifies a membrane-physics mechanism and does a 1st-person/3rd-person decomposition. frontiersin.org/journals/human…
Curt Jaimungal
Curt Jaimungal@TOEwithCurt·
What’s your theory of consciousness?
Colin Hales
Colin Hales@Dr_Cuspy·
@leecronin Only biology gets to ask the question "What is it like to 'be' brain signalling physics?" and know there's a vast amount of information arising that has no place in 3rd-person science. BUT we can inorganically replicate the signalling physics. Yet we never do it.