Pawsitive Vybe
6.6K posts

Pawsitive Vybe
@PawsitiveVybe
Dog Training, Philosophy, and Sport
Dade City, FL · Joined June 2009
959 Following · 643 Followers
Pinned Tweet

@Ascion_Next I appreciate the pushback; healthy skepticism drives better understanding.
But this isn’t poetic metaphor or dishonest overreach. It’s grounded in decades of neuroscience, starting with Karl Pribram’s holonomic brain theory.
Pribram (working with David Bohm) showed that memory storage and retrieval in the brain’s dendritic webs involve wave interference patterns best analyzed via Fourier transforms—the mathematical core of holography.
Sensory input creates distributed, frequency-domain encodings (not localized “files” of words or images), which reconstruct on demand.
This explains equipotentiality: damage doesn’t erase specific memories cleanly because they’re holographically spread.
Modern work builds on this: hippocampal theta oscillations, gamma coupling, and spectral power changes during encoding/retrieval align with frequency-domain processing.
FFT-like decompositions appear in how the brain handles temporal sequences, compression of experiences, and cross-modal binding.
It’s not the only mechanism; graph-like overlapping networks and emotional modulation via the amygdala clearly play roles, as you noted. But frequency transforms are a literal, evidence-based component, not just analogy.
My statement (“encoded as FFTs… sensor data holographically tied”) directly reflects this framework, which has been tested and refined for 50+ years.
Happy to share key papers or discuss further: unfollowing over a well-documented model seems premature and immature...
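A minimal numerical sketch of the distributed, frequency-domain encoding described above, assuming NumPy (the signal length and damage fraction are illustrative, not from Pribram's data): zeroing a random subset of Fourier coefficients degrades the whole reconstruction gracefully rather than deleting any one sample, which is the equipotentiality point.

```python
import numpy as np

rng = np.random.default_rng(0)
trace = rng.standard_normal(256)   # stand-in for a sensory trace
coeffs = np.fft.rfft(trace)        # frequency-domain ("holographic") encoding

# "Damage": knock out roughly 30% of the coefficients at random.
keep = rng.random(coeffs.shape) > 0.3
recovered = np.fft.irfft(coeffs * keep, n=trace.size)

# No single sample is erased outright; the error is spread across the
# whole trace, so the reconstruction still tracks the original.
corr = np.corrcoef(trace, recovered)[0, 1]
```

Because each coefficient carries information about every sample, damage shows up as diffuse noise over the whole trace, not as a missing "file."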

Interesting comparison between @ylecun's work and mine. Quite a bit of foundational overlap from my perspective.
My scope is MUCH more limited though; at least for now.


@BrianRoemmele John Titor called. He's still looking for a 5100.

Agency is Toddler Math:
Agency(t) ∈ {0, a, b, c}
No model, no state inference, no planner...
Presence: 0 - You're present and available for interaction. A novel, safe-to-fail probe can be offered. The stance when nothing is happening.
Awareness: a - You notice something and your Attention is drawn to it.
Attention: b - You're attending to something, expectant (strong anticipation) about its resolution, or questioning it (aporia).
Initiative: c - You're coupled with an agent or affordance as either the Initiator or Responder.
This can change a dozen times while you "Go Ball".
Agency is ordinal grip selection under live coupling.
Thanks @BrianRoemmele - another big idea from your stream of consciousness.
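The four stances above can be sketched as an ordinal type. This is a minimal sketch in Python; `grip` is a hypothetical name for the "ordinal grip selection" step, and the letters a, b, c are mapped to 1, 2, 3 only to make the ordering explicit.

```python
from enum import IntEnum

class Agency(IntEnum):
    """Ordinal stances; higher value means tighter coupling."""
    PRESENCE = 0    # available, nothing happening; safe-to-fail probe possible
    AWARENESS = 1   # something draws notice ('a' in {0, a, b, c})
    ATTENTION = 2   # attending, expectant about resolution ('b')
    INITIATIVE = 3  # coupled as Initiator or Responder ('c')

def grip(stance: Agency, event: str) -> Agency:
    """Toy ordinal grip selection: events ratchet the stance up,
    and resolution drops back to Presence."""
    if event == "notice":
        return max(stance, Agency.AWARENESS)
    if event == "expect":
        return max(stance, Agency.ATTENTION)
    if event == "couple":
        return Agency.INITIATIVE
    if event == "resolve":
        return Agency.PRESENCE
    return stance
```

No model, no state inference, no planner: the whole "policy" is a comparison on an ordered set, which can flip a dozen times during one "Go Ball."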

They called it a Time Machine.
Welp…
Focus:
Brian Roemmele@BrianRoemmele
Cole Allen interned at NASA in 2014. That year, NASA published a paper with "Henry Martinez" as an author; he is a chief engineer at Lockheed Martin. An X account named "Henry Martinez," created in 2023, made only a single post, on Dec 21, 2023. The post said only "Cole Allen."

@BrianRoemmele I think you're likely wrong about it being how human memory is encoded; modeled, maybe...
But am quite happy you shared this for my perception topology work; good timing. Clears some things up, not sure what, but seems to have broken some things loose for me.

@coach_kevin_m And just to follow up. I preach and teach similar decoupled actions that focus on technique alone - without the read and embedded skill.
It annoys me when people in this space can't see the value in decoupled mechanics/technique. I consider them in the set of intangibles.

@coach_kevin_m I choose to see intangibles, of which foundational technique is one. Technique, vision, and decision making are what make tactics executable.
Good thread, Kevin.

I just think helping those who bury their talents is not going to help them grow as a person, ya dig?
CALL TO ACTIVISM@CalltoActivism
Wow. The Pope was just asked his stance on migration. His answer is amazing: “I would change the question: what is the global North doing to help the global South in its situation that forces them to migrate.”

@Panagiotou90St He has ZERO support; he is straight out of central casting. I saw no dog people respond to the dog-shocking incident with Kaia. If he had ANY legitimate, organic legs to his popularity I would have seen it.
I only saw it via streamers and this place. He's a media tool-belt.

How many times must Hasan Piker call for an assassination for us to realise that he isn’t joking? It isn’t an edgy joke. It is a consistent pattern of his commentary.
Will Chamberlain@willchamberlain
Reminder that Hasan Piker publicly advocated for President Trump’s assassination. Perhaps one of his lunatic followers decided to give it a shot

@AIWiseGuide @maxintechnology @phosphenq Sure I can:
Stop free-feeding. Give them a limited amount of time to eat their food, then pick it up.
BOOM! You're an agent!
GIF

@PawsitiveVybe @maxintechnology @phosphenq You seem a bit more "perceptive" than my dawgs...LOL! Can you build me an agent to keep the chubby one from stealing the skinny one's food...LOL (Long-Haired Chihuahuas if it helps)

Coupling requires Awareness. "Attention is all you need" presupposes the Awareness required to seat the Attention; Yann is really on point here, it's intractable.
Affordances are disclosed through Attention - the features of the environment that speak to us are noticed via Attention, but they pop up on our radar via Awareness.
Awareness is baked into any agent that has the potential for Attention. The context window is the LLM's Awareness, but it's presupposed and fake because Attention is all you need. Not having the occluded facts in the field lies to the agent and the agent's handlers; the agent surfaces information without search or reorientation to the facts at hand, which is untrue (see link).
PLAYi.io/derag/awareness
What I'm doing is creating Awareness and situating the agent in the field. Once the agent is situated in the field and omniscience isn't pretended via an all-knowing model, the agent has to find and locate the relevant information; not through results-based feedback on static, dead content which can be repeated until we get the "right answer" and then tune the agent to find what it seeks, but through feedback in a live, dynamic field where the field speaks directly to the agent in moment-by-moment fashion.
The decisions to get the right answer, all the stuff that gets turned into an algorithm and hidden probability mathematics, ARE the learning function. Getting the answer and the cookie via RLHF isn't learning; it's the result of learning. All the stuff thrown away in the logs and hidden in black-box backprop computing is where the learning happens. I'm trying to make that public information disclosed within the field.
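A toy sketch of the Awareness/Attention distinction in Python (`Percept`, `salience`, and `occluded` are illustrative names, not the PLAYi API): Attention can only select among what Awareness has already disclosed, so an occluded fact never reaches the agent no matter how salient it is.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    name: str
    salience: float   # how loudly the field speaks right now
    occluded: bool    # present in the world but not disclosed to the agent

def awareness(field: list[Percept]) -> list[Percept]:
    """The agent's situation: everything in the field that is disclosed.
    No pretended omniscience -- occluded facts must be found by search."""
    return [p for p in field if not p.occluded]

def attention(aware: list[Percept], k: int = 1) -> list[Percept]:
    """Attention selects among percepts Awareness has already seated;
    it cannot attend to what the field has not disclosed."""
    return sorted(aware, key=lambda p: p.salience, reverse=True)[:k]

field = [
    Percept("ball", salience=0.9, occluded=False),
    Percept("treat in pocket", salience=1.0, occluded=True),  # hidden fact
    Percept("squirrel", salience=0.4, occluded=False),
]
focus = attention(awareness(field))
```

The hidden treat outranks everything in salience, yet Attention lands on the ball: the selection function is seated by what the field discloses, not by an all-knowing model.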

Sure Max. You can check out PLAYi.io for a landing pad overview.
I'm trying to build a lawful ecological floor that reads the world (field) and deposits a "model" - not predictive but prospective.
I don't think prediction has any chance of working. We (conscious agents) are not predictors of the future, we are readers of the present.
There are lawful rules (Turvey-Chemero-Gibson) that make the world much simpler. I have found them to be topological, I think. The massive math only needs to be done to solve via probability if you're not there and/or cannot be coupled. If you are coupled, the field discloses and you "Go Ball"; no need to model where the ball "is". It's obviously there, right where you think it is.
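"Go Ball" under coupling can be sketched as a controller that never predicts a landing point. This is a toy Python sketch (the gain and step count are arbitrary): each tick the fielder closes part of the gap to where the ball currently is, as disclosed by the field, and the error stays bounded even though the ball keeps moving.

```python
def step(fielder_x: float, ball_x: float, gain: float = 0.4) -> float:
    """One coupling step: close part of the gap to where the ball *is*,
    never computing where it *will* land."""
    return fielder_x + gain * (ball_x - fielder_x)

ball, fielder = 10.0, 0.0
for _ in range(40):
    ball += 0.1                    # the world keeps changing
    fielder = step(fielder, ball)  # the field discloses; the fielder re-grips
gap = abs(ball - fielder)
```

No trajectory model, no probability over futures: continuous re-reading of the present is enough to keep the gap small, which is the prospective (not predictive) point.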

@maxintechnology @phosphenq I'm building a perceptual engine and have the same read. I'm deliberately trying to avoid the overlap.
AI agents need to be situated in the world and situating in the world is a different function than reasoning about the world:
Awareness -> Attention. Yann's conflating them.

LeCun has a Turing Award and a fundamental misunderstanding of what LLMs are.
He keeps attacking them as knowledge systems that don't understand the world. They're not knowledge systems. They're reasoning engines. Language describes any domain—physics included.
His JEPA builds better perception. Great. But perception without general reasoning is a sensor, not intelligence. LLMs provide the reasoning layer his architecture is missing.
He's not building a replacement. He's building a complement—and doesn't see it because his ego won't let him.

@UnmitigatedAss I reached the point where I realized how much of a dipshit I was with all the things I "knew" as a "kid".
Without an ideological yoke or life-preserver the hindsight is rather startling.
I have little patience for most cock-sure ways of knowing.
