AGIHound

18K posts

@TrueAIHound

I research fundamental physics and the brain. Distance is a creation of the mind. Intelligence is deterministic and causal, not probabilistic and correlational.

Joined January 2018
264 Following · 4.6K Followers
Pinned Tweet
AGIHound
AGIHound@TrueAIHound·
LLMs mimic language, not intelligence. When was the last time an LLM walked into a random kitchen and boiled an egg? When was the last time an LLM learned to walk on its own, using its own sensors and effectors? Moreover, the only reason that LLMs can mimic language is that they are cheating by stealing the work of millions of human beings who did the hard intelligent work. LLMs are not based on any new understanding of intelligence. They are based on old linguistic science that predates the AI field. Linguists have known for a long time that language is highly statistical, i.e., contextual. LLMs calculate the stats and store them in tokens. This is not intelligence. 🤔 Deep down, every LLM is dumb as a rock. 😀
Pedro Domingos@pmddomingos

What is our intelligence, if LLMs can mimic it so easily?

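A toy sketch of the "language is highly statistical" point in the pinned tweet above: a bigram counter (the function names and the tiny corpus are illustrative, not any real LLM component) that predicts the next word from nothing but co-occurrence counts, the kind of "stats" the tweet says LLMs compute at scale.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-pair frequencies: the raw 'stats' of the text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Predict the next word from co-occurrence counts alone."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "the cat ate the fish",
]
stats = train_bigrams(corpus)
print(most_likely_next(stats, "cat"))  # prints: sat
```

With "cat sat" seen twice and "cat ate" once, the counts alone pick "sat"; no understanding of cats or sitting is involved, which is the contrast the tweet is drawing.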
AGIHound
AGIHound@TrueAIHound·
@hb6614008694128 Keep in mind that a breakthrough can happen at any time. However, true AI will be a powerful science and we are living in uncertain and perilous times. Even with a breakthrough, the decision to publish my findings will likely depend on circumstances beyond my control.
宋星爵
宋星爵@hb6614008694128·
It’s unfortunate that you feel that way.
AGIHound@TrueAIHound

@hb6614008694128 I post all my neuroscience ideas on X. I have theories but I will not publish my work unless I stumble on a major breakthrough. Sorry.

AGIHound
AGIHound@TrueAIHound·
Fake-AI gangsters. 😠
NIK@ns123abc

🚨 OpenAI's original board REJECTED Altman's Helion deal. Then the board got fired. Then the new board signed the deal. Altman personally made $1.4 billion.
>2015: Altman invests in Helion (before OpenAI is incorporated)
>2021: Helion valuation ~$700M (Series E)
>2022: Altman proposes Helion deal to OpenAI board (as Helion chairman)
>2022: Board does NOT approve
Board member Zilis under oath:
>"Super out of left field"
>"A major bet on a speculative technology"
>"Helion did not even have a working product"
The Helion proposal was one of the main incidents that raised concerns about Altman's candor with the board.
>Aug 2023: OpenAI President Greg Brockman acquires 5,978 Helion shares
>Nov 17, 2023: The board fires Altman as CEO and removes Brockman as board chairman
>Nov 21, 2023: Altman returns with new board selected with input from Satya Nadella
>2024: New board approves the Helion deal
>OpenAI signs Power Development Agreement with Helion
>Early 2025: Helion valuation jumps 7.7x to ~$5.4 billion
>March 2026: OpenAI signs second Helion agreement
>March 23, 2026: Altman steps down as Helion chairman (one month before this trial)
Altman's disclosed Helion stake under penalty of perjury:
>22 million Helion shares + warrants
>worth ~$1.65 billion as of December 2025
>owns one-third of the company
Helion is a nuclear fusion energy startup that has never delivered power to a single customer. Its valuation depends on commitments like OpenAI's. OpenAI's CEO and President played both sides. OpenAI's charter prohibits private gain. The law calls that self-dealing.

宋星爵
宋星爵@hb6614008694128·
@TrueAIHound I’d like to see some of your more systematic explanations and thoughts on these disciplines. Do you have a habit of doing podcasts or posting content on YouTube? If so, could you share the links or addresses?
AGIHound
AGIHound@TrueAIHound·
Yes, it is very much like a computer clock. The retina is strongly synchronized with the thalamus (LGN) and the visual cortex. Precise event timing is the most important principle of intelligence imo. I've come to understand over the years that the brain is a massive timing mechanism. I can't recommend any books or papers but a Google search should be fruitful.
JustMule
JustMule@crookiedev·
@TrueAIHound I see, I had no idea about the timing signals. Is it like the clock we have in computers? Where can I read more about them? Do you have some recommendations?
AGIHound
AGIHound@TrueAIHound·
Neuroscience: color perception I've been thinking about this image. It shows the published color sensitivity response curves of the 3 cone cells in the retina. Something is wrong with it. It makes zero sense and it's driving me nuts. 🤪 The response curves of the green (M) and red (L) cones strongly and inexplicably overlap. And yet, I can clearly distinguish the 3 colors as if there was very little or no overlap. Is it just me? I don't think so. I'm beginning to suspect that there is something wrong with the experimental methods used in measuring the spectral sensitivity of cone cells in the lab. Now, I feel compelled to investigate this anomaly further even though it's outside the main focus of my research. I hate it when this happens. 😬😀
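For context on the overlap puzzle in the tweet above: the standard textbook account is that the visual system reads the difference between L and M responses (an opponent channel), so heavily overlapping curves can still yield sharp hue discrimination. A minimal numerical sketch, using made-up Gaussian approximations of the cone sensitivities (not the measured cone fundamentals):

```python
import math

# Toy Gaussian cone sensitivities: (peak wavelength nm, width nm).
# Illustrative numbers only, not the published fundamentals.
CONES = {"L": (565.0, 55.0), "M": (535.0, 45.0), "S": (440.0, 30.0)}

def response(cone, wavelength_nm):
    """Relative response of one cone type to a monochromatic light."""
    peak, width = CONES[cone]
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * width ** 2))

def red_green_opponent(wavelength_nm):
    """Opponent difference signal: positive -> reddish, negative -> greenish."""
    return response("L", wavelength_nm) - response("M", wavelength_nm)

# At 550 nm the raw L and M responses are both large and nearly equal,
# yet the opponent signal still flips sign cleanly between a reddish
# 610 nm light and a greenish 510 nm light.
l550, m550 = response("L", 550.0), response("M", 550.0)
```

The point of the sketch: large overlap in the raw curves does not by itself imply perceptual ambiguity, because what is compared downstream is the signed difference, not the individual responses.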
AGIHound
AGIHound@TrueAIHound·
I believe the spectral sensitivity of photopsin is correctly measured and is linear. However, the spiking frequency of the cone cells is not linear with the spectrum. We know that retinal function is not based on cone cell spiking frequency. It is strongly gated by timing signals coming from the hippocampus.
JustMule
JustMule@crookiedev·
@TrueAIHound I wonder. I read some history of color theory from various books last year and, if my memory serves me right, the standard observer was defined before it was discovered how spectral sensitivity works with photopsin. I could be wrong though.
AGIHound
AGIHound@TrueAIHound·
@crookiedev I believe there is probably something wrong in how the spectral sensitivity of cone cells is measured. My understanding is that the calculated sensitivity is correlated with the spiking frequency of individual cells in vitro. I'm almost certain this is wrong.
JustMule
JustMule@crookiedev·
@TrueAIHound You think it could be to do with the standard observer and how those tests were set up by CIE, even I have some issues. The color theory from Munsell to Digital does not seem to sit well in my brain, some pieces seem to be missing.
AGIHound
AGIHound@TrueAIHound·
My thesis is that the sensitivity response curves were obtained in the lab by measuring the spiking frequency of each cone cell when subjected to different wavelengths of light. Spiking frequency is not a proper indicator of cone cell sensitivity. I explained why elsewhere: x.com/TrueAIHound/st…
AGIHound
AGIHound@TrueAIHound·
@DoozerDiffuser @realjoelroberts Dude, please. What difference would it make if no AI was trained on my model of a cortical column? No vibe prompts can fix that. It would fail because it has no learned context to draw upon.
AGIHound
AGIHound@TrueAIHound·
@HoriusParry The published overlap of the red (L) and green (M) curves is obviously wrong. I can see it with my own eyes.
Horius Parry
Horius Parry@HoriusParry·
@TrueAIHound .. before going to the brain! You don't see these colours but the transformed colours. CIE Lab is perceptually uniform and this is verified by experiment. The closeness of the red and green curves leads to increased sensitivity between the colours - not ambiguity.
joelroberts
joelroberts@realjoelroberts·
@TrueAIHound @DoozerDiffuser That guy's dumb, but in general most code is not novel. Garden variety software developers can use it all day and it's pretty good.
AGIHound
AGIHound@TrueAIHound·
We observe it all the time. The colors that you consciously perceive are part of your soul. Colors are not physical properties. There are no red, green and blue colors in the physical universe, just EM wavelengths. We are being indoctrinated by wicked minds, bad actors in our midst, to deny the existence of our souls. They deceive a lot of people but not all.
Rmol
Rmol@Rmol77191715·
@TrueAIHound @anilkseth Magic appeals to the impossible. If a soul existed, it would obviously enter into the physical world. The thing is, as far as we understand biology: where is it? How does it survive the body? Through what? Has anyone observed it or had any evidence of it? Or is it more of a wish to believe?
AGIHound
AGIHound@TrueAIHound·
"There is no magical property." ~ @anilkseth This is a common argument that religious physicalists make against the existence of soul: If it can't be explained by physics alone, it's magical and should therefore be rejected. It's a weak argument bordering on pseudoscience imo. It's disappointing seeing it used by Seth. No physicalist/materialist can explain the 3D scene and the colors we all perceive in front of us, based solely on the spiking activity occurring in the visual cortex. No, it's not magical but it's not purely physical either. There's something missing. Denying that something else is needed is the obsession of sci-fi fruitcakes and scammers. God knows we have plenty of those in the fake-AI community. No, we are not just meat machines. "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." ~ Shakespeare
Anil Seth@anilkseth

@sd_marlow @watdohell @Plinz @aran_nayebi There is no magical property. I prefer the term conscious to sentient. The most fleshed out answers are here pubmed.ncbi.nlm.nih.gov/40257177/ and here noemamag.com/the-mythology-…

AGIHound
AGIHound@TrueAIHound·
It is true that RGCs only detect edges, i.e., the difference in luminance between adjacent spots on the retina. This doesn't explain the anomalous overlap in the spectral sensitivity curves of the L and M cone cells. I believe that the published curves are mistaken. There is no overlap in our vision.
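The RGC edge-detection point above can be sketched numerically: a toy 1-D center-surround unit (an illustration, not a retinal model) responds only where adjacent luminance values differ, so it is silent on uniform regions and fires at the step.

```python
def center_surround(luminance, i):
    """Toy 1-D retinal-ganglion-cell response at position i:
    center value minus the mean of its two neighbours."""
    surround = (luminance[i - 1] + luminance[i + 1]) / 2
    return luminance[i] - surround

# A step edge in luminance: dark (0.2) to bright (0.8).
profile = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8]
responses = [center_surround(profile, i) for i in range(1, len(profile) - 1)]
# responses is zero on the flat regions and nonzero only at the edge:
# [0.0, -0.3, 0.3, 0.0] (a negative lobe on the dark side, positive on
# the bright side), which is the "difference between adjacent spots"
# described in the tweet.
```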
Horius Parry
Horius Parry@HoriusParry·
@TrueAIHound The eye sees these colours but they are translated to colour differences before going to the brain. In Lab space: L = lightness, a = red-green difference, b = blue-yellow difference. CIE colour space and observer curves: en.wikipedia.org/wiki/CIE_1931_…
AGIHound
AGIHound@TrueAIHound·
"The major problem for AI progress turned out to be energy supply." ~ @skdh This is proof that the fake-AI mafia has zero understanding of intelligence and that their real goal is mass surveillance, not AGI. If you can't emulate the intelligence of bugs, you have no authority to preach to the world about human intelligence, let alone superintelligence. 🙄 The fake-AI mafia may deceive most people but not all. Wicked minds are foolish minds. AGI will be solved but not by them. 😠
Sabine Hossenfelder@skdh

The real reason AI might stall in 2026 youtube.com/watch?v=XA84pS…

AGIHound
AGIHound@TrueAIHound·
@DoozerDiffuser I research the visual system of the brain. I have my own model of the retina and the visual cortex. The AI has no clue what I'm talking about since I have not published my model on the internet.
🜛∞
🜛∞@DoozerDiffuser·
@TrueAIHound Oh really? What have you ever invented that was totally original?
AGIHound
AGIHound@TrueAIHound·
@ServerServer19 @anilkseth All sci-fi fruitcakes and crackpots believe that consciousness can be solved with physics alone. Good luck.
Server Server
Server Server@ServerServer19·
@TrueAIHound @anilkseth It's physics. The hard problem of consciousness cannot be solved any other way. Anesthesia already shows that this is the case.
AGIHound
AGIHound@TrueAIHound·
@Emb0wlden The problem is that the L and M response curves do not match what we see. We can clearly see the colors on the image with almost no overlap.
Ryan Rand
Ryan Rand@Emb0wlden·
@TrueAIHound blue is used to identify any available very low level light, the others both cross the densest collection of useful differences, i imagine these small differences show up better with two separate close comparisons, like depth perception.
AGIHound
AGIHound@TrueAIHound·
@DoozerDiffuser If the AI can parse your instructions, the code is not novel.
🜛∞
🜛∞@DoozerDiffuser·
@TrueAIHound I frequently witness AI writing novel code, if given clear instructions. I've developed nonstandard arithmetic, totally new kinds of math, and was able to get AI to use it successfully. Don't bury your head in the sand.