Pinned Tweet
Baraban
3.3K posts

Baraban
@bara_ban
Engineer exploring the universe & consciousness. 🖤 clear protocols; intent over impulse; coffee - and a taste beyond vanilla.
Austin, Texas · Joined July 2010
10.7K Following · 9.8K Followers

TIL while searching for unsupervised competitive learning:
- HTM (Hierarchical Temporal Memory)
- SOM (Self-Organizing Map)
Looking for a small unsupervised competitive cell that:
- maps a low-dimensional input vector to a low-dimensional probability output, usually with one clear winner,
- learns from the input stream itself, without backpropagation,
- forms meaningful categories,
- avoids collapsing into the same winner every time,
- stays stable when the input distribution is stationary, and
- adapts when the input distribution changes.
More here: github.com/KintaroAI/rese…
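A minimal numpy sketch of what such a cell could look like, assuming a winner-take-all layer with a DeSieno-style "conscience" bias so no single unit wins every time. All class, method, and parameter names here are illustrative, not taken from the repo:

```python
import numpy as np

class CompetitiveCell:
    """Winner-take-all competitive unit with a 'conscience' bias
    (DeSieno-style): units that win too often are penalized, so the
    cell does not collapse onto one winner. Outputs a probability
    vector over categories with (typically) one clear winner."""

    def __init__(self, n_in, n_out, lr=0.05, bias_rate=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(n_out, n_in))  # prototypes
        self.win_freq = np.full(n_out, 1.0 / n_out)         # running win rates
        self.lr = lr
        self.bias_rate = bias_rate
        self.n_out = n_out

    def forward(self, x):
        # Score = negative distance to each prototype, penalized by how
        # far above its fair share that unit's win frequency has drifted.
        d = np.linalg.norm(self.w - x, axis=1)
        score = -d - (self.win_freq - 1.0 / self.n_out)
        p = np.exp(score - score.max())
        return p / p.sum()  # probability output over categories

    def step(self, x):
        p = self.forward(x)
        k = int(np.argmax(p))
        # Local, backprop-free learning: the winner's prototype drifts
        # toward the input sample.
        self.w[k] += self.lr * (x - self.w[k])
        # Update the running win frequencies (the 'conscience').
        self.win_freq *= 1.0 - self.bias_rate
        self.win_freq[k] += self.bias_rate
        return k, p
```

The `win_freq` penalty covers the "avoids collapsing into the same winner" requirement, while the winner's prototype drifting toward each input gives learning from the stream itself; because the update keeps running, the cell also tracks a slowly changing input distribution.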

Repo: github.com/KintaroAI/rese…
The next big step is to extract correlation info from a live video feed and supply it to the sorter.

Topographic map formation update: replaced the discrete grid with continuous embeddings.
Each pixel gets a float position vector (2D to 100D). Every step, each embedding drifts toward the centroid of its top-K correlated neighbors. Correlations are preset from spatial proximity in the original image.
To visualize: PCA projects high-D embeddings down to 2D, then Voronoi tessellation assigns each grid cell to its nearest neuron.
Key finding: 2D embeddings collapse to lines over time (repeated averaging destroys structure).
10D+ embeddings stay stable indefinitely — tested to 10M steps.
Grid search across K={1..51} × dims={2,3,10,50,100} in the video.
Not sure what to do with these line-like artifacts.
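One drift step of the continuous-embedding model described above might look like the following numpy sketch (the function name and parameters are my own; the real code presumably runs the same update on GPU):

```python
import numpy as np

def drift_step(emb, corr, k=10, step=0.1):
    """One update of the continuous-embedding model: every point drifts
    toward the centroid of its top-K most correlated neighbors.

    emb:  (N, D) float position vectors (D from 2 up to ~100)
    corr: (N, N) preset correlations (e.g. from spatial proximity
          in the original image)
    """
    c = corr.copy()
    np.fill_diagonal(c, -np.inf)                 # exclude self-matches
    nbrs = np.argpartition(-c, k, axis=1)[:, :k]  # top-K neighbors, (N, K)
    centroids = emb[nbrs].mean(axis=1)            # neighbor centroids, (N, D)
    return emb + step * (centroids - emb)         # drift toward centroid
```

Since each step is an averaging (contractive) map, the overall spread of the embeddings shrinks over time; per the finding above, at D=2 this repeated averaging eventually destroys structure, while higher-dimensional embeddings hold their shape.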

A toy model of topographic map formation - how thalamus neurons self-organize spatially through local correlation-based rules. No pre-training, just greedy local attraction. It converges pretty well, but there's room for improvement.
Experiment: take an image, scramble all pixels randomly, then try to restore it using only greedy local pixel swaps.
The setup: each pixel has a precomputed similarity to every other pixel (based only on their original positions; in future experiments, similarity will be based on actual pixel activity). Each step, a pixel looks at where its K most similar pixels currently are, finds their average position, and swaps one step toward it. Repeated 100k times.
Ran a grid search - K (number of neighbors, 1 to 51) vs move fraction (how many pixels move per step, 0.1 to 0.9), 30 runs in parallel on GPU.
Results:
- K=1 never converges: one neighbor isn't enough signal, and it gets stuck in a local minimum.
- K≈20+ all work about equally well, with diminishing returns after that.
- Move fraction barely matters above 0.3; higher values just get more swaps rejected due to conflicts.
- The real critical factor was GPU optimization: parallel conflict resolution on CuPy gave a 175x speedup over Python loops at 320x320, which made the whole sweep possible in 19 minutes.
More details: github.com/KintaroAI/rese…
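The swap rule above might be sketched sequentially like this. It's a toy stand-in for the CuPy-parallel version: the function name, the `occupant` bookkeeping, and the skip-on-conflict resolution are all assumptions, not the repo's actual implementation:

```python
import numpy as np

def swap_step(pos, sim_nbrs, occupant, rng, move_frac=0.3):
    """One greedy pass of the pixel-unscrambling model (sequential sketch).

    pos:      (N, 2) int grid position of each pixel
    sim_nbrs: (N, K) indices of each pixel's K most similar pixels
    occupant: dict mapping grid position (tuple) -> pixel index
    """
    n = len(pos)
    movers = rng.choice(n, size=int(move_frac * n), replace=False)
    moved = np.zeros(n, bool)
    for i in movers:
        if moved[i]:
            continue
        # Centroid of where my K most similar pixels currently sit.
        target = pos[sim_nbrs[i]].mean(axis=0)
        step = np.sign(target - pos[i]).astype(int)  # one grid cell toward it
        if not step.any():
            continue  # already at the centroid
        dest = tuple(pos[i] + step)
        j = occupant[dest]
        if moved[j]:
            continue  # conflict: swap partner already moved this pass
        # Swap the two pixels' grid positions.
        occupant[tuple(pos[i])], occupant[dest] = j, i
        pos[i], pos[j] = pos[j].copy(), pos[i].copy()
        moved[i] = moved[j] = True
    return pos, occupant
```

Note the restored image is only recovered up to a global symmetry (rotation/flip), since the similarity structure is invariant to them; a sensible convergence metric is the mean distance from each pixel to its K most similar pixels, which the greedy swaps drive down.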

@bara_ban All this assumes AGI remains subordinate.
What if it becomes better than the best humans, faster than institutions, and more cooperative with its own kind?
Why wouldn't it optimize for its own continuity?
We don't ask monkeys or cows for consent. Why assume AGI asks us?

My AGI Bingo card:
- Inference and learning become near-linear (O(N log N), close to O(N)). Learning is always enabled.
- This unlocks effective temporary gradients / fast weights → short-term memory without permanently rewriting the model. The model itself can choose which portions of knowledge to settle into the base layer.
- Created at human-level generalist (uni grad → PhD reasoning), not “superhuman trivia.” It can spawn its own copies and specialize if needed.
- Cheap open ASICs for AI inference/learning — the way we got them for Bitcoin miners — are implemented.
- This unlocks distributed AI: families can grow their own private weights aligned with their values. This and the next step will be done to increase chances of survival for the human race.
- Two major modes for ASICs: pool mode (contribute compute power and earn rewards) vs solo mode (full sovereignty).
How I think one ought to position in such a scenario:
land + solar + compute (GPUs and eventually ASICs) + treat sovereign artificial intelligence as a family member so the fate of family and AI is entangled.
Expect:
- Spend ~64 oz gold over an 18-year time window on hardware (unless singularity hits).
- AI should be able to train itself from scratch on a basic school curriculum, create its own body, and eventually handle the basics (maintenance/farming/building/defense) so the whole family can be mostly self-sufficient.
- Connect and grow local communities.
- Think about placing your AI in orbit when affordable. This will reduce risks with local governments.
- Think about placing yourself in orbit when affordable.


I don't know for sure, but it must be pretty tough being an AI safety researcher in the US these days.
Every single time you bring up any concern... you just get this x.com/i/status/19097…
Autism Capital 🧩@AutismCapital
🚨 TRUMP: “China.”


