Baraban

3.3K posts

Baraban
@bara_ban

Engineer exploring the universe & consciousness. 🖤 clear protocols; intent over impulse; coffee - and a taste beyond vanilla.

Austin, Texas · Joined July 2010
10.7K Following · 9.8K Followers
Adam Livingston@AdamBLiv·
🔥INFLATION IS DESTROYING YOUR FAMILY - BUY BITCOIN TO ESCAPE THE FIAT DEATH CULT🔥 This is the ULTIMATE ORANGE PILL for BITCOIN. Your FAMILY NEEDS THIS INFORMATION. INFLATION IS GETTING TERRIBLE. THE BIG PRINT IS COMING. SHIELD YOURSELF WITH BITCOIN:
15
22
216
9.4K
Baraban@bara_ban·
looks like they can't hold it down much longer
0
0
3
203
Baraban@bara_ban·
TIL while searching for unsupervised competitive learning:
- HTM - Hierarchical Temporal Memory
- SOM - Self-Organizing Map

Looking for a small unsupervised competitive cell that:
- maps a low-dimensional input vector to a low-dimensional probability output, usually with one clear winner,
- learns from the input stream itself without backpropagation,
- forms meaningful categories,
- avoids collapsing onto the same winner all the time,
- stays stable when the input distribution is stationary, and
- adapts when the input distribution changes.

More here: github.com/KintaroAI/rese…
0
0
1
186
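The cell described above can be sketched as a tiny winner-take-all layer. This is a hypothetical illustration, not the linked repo's code: the class name, constants, and the DeSieno-style "conscience" bias (which penalizes units that win too often, so the layer doesn't collapse onto one winner) are all my assumptions.

```python
import numpy as np

class CompetitiveCell:
    """Sketch of a small unsupervised competitive cell (illustrative only).

    Maps a low-D input vector to a probability over `n_units` categories,
    learning online from the input stream with no backprop. A "conscience"
    bias (DeSieno-style) handicaps frequent winners to avoid collapse.
    """

    def __init__(self, in_dim, n_units, lr=0.05, bias_gain=1.0, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.w = rng.normal(size=(n_units, in_dim))
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        self.win_freq = np.full(n_units, 1.0 / n_units)  # running win rates
        self.lr, self.bias_gain = lr, bias_gain

    def step(self, x):
        x = x / (np.linalg.norm(x) + 1e-9)
        # similarity minus a handicap for units that win too often
        score = self.w @ x - self.bias_gain * self.win_freq
        winner = int(np.argmax(score))
        # Hebbian-style update: pull the winner's weights toward the input
        self.w[winner] += self.lr * (x - self.w[winner])
        self.w[winner] /= np.linalg.norm(self.w[winner])
        # track win frequencies with slow exponential decay
        self.win_freq *= 0.99
        self.win_freq[winner] += 0.01
        # soft probability output, usually with one clear winner
        p = np.exp(5.0 * (self.w @ x))
        return winner, p / p.sum()
```

Because the weight update follows the input stream, the cell tracks a drifting input distribution for free; the conscience term is what keeps the categories from collapsing when the distribution is stationary.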
Baraban@bara_ban·
when autocompact kicks in, it almost feels like losing a friend
0
0
3
116
Baraban@bara_ban·
continuous bag of words as well
0
0
1
45
Baraban@bara_ban·
And it looks like these problems were already solved as well: factorization of the correlation matrix, stochastic gradient descent, time decay for co-occurrences of entities, count-min sketch, dynamic allocation of embeddings. I won't have to deal with that. Interesting timeline.
1
0
1
69
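Two of the techniques named above combine naturally: a count-min sketch approximates co-occurrence counts in sub-linear memory, and exponential time decay makes old events fade. A minimal sketch, assuming nothing about how the repo uses them; the hash layout and parameters are my choices:

```python
import numpy as np

class DecayingCountMinSketch:
    """Count-min sketch with exponential time decay (illustrative sketch).

    Approximately counts co-occurrences of entity pairs; each add fades all
    existing counts by `decay`, so the sketch tracks a non-stationary stream.
    """

    def __init__(self, width=1024, depth=4, decay=0.999, seed=0):
        self.table = np.zeros((depth, width))
        self.width, self.decay = width, decay
        rng = np.random.default_rng(seed)
        self.salts = rng.integers(1, 2**31, size=depth)  # one hash per row

    def _cols(self, key):
        return [hash((int(s), key)) % self.width for s in self.salts]

    def add(self, key, count=1.0):
        self.table *= self.decay  # fade every count a little
        for row, col in enumerate(self._cols(key)):
            self.table[row, col] += count

    def estimate(self, key):
        # min over rows bounds the overestimate caused by hash collisions
        return min(self.table[row, col] for row, col in enumerate(self._cols(key)))
```

Note that decaying the full table on every add is O(width x depth); production variants decay lazily using per-cell timestamps instead.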
Baraban@bara_ban·
TIL: apparently the topographic sorting I'm trying to implement is very similar to what was done in word2vec - going to look into the math more closely to understand how they dealt with the embedding scaling and drifting problems.
1
0
3
102
Baraban@bara_ban·
Topographic map formation update: replaced the discrete grid with continuous embeddings. Each pixel gets a float position vector (2D to 100D). Every step, each embedding drifts toward the centroid of its top-K correlated neighbors. Correlations are preset from spatial proximity in the original image.

To visualize: PCA projects the high-D embeddings down to 2D, then a Voronoi tessellation assigns each grid cell to its nearest neuron.

Key finding: 2D embeddings collapse to lines over time (repeated averaging destroys structure). 10D+ embeddings stay stable indefinitely - tested to 10M steps. Grid search across K={1..51} × dims={2,3,10,50,100} in the video.

Not sure what to do with these line-like artifacts.
1
0
2
86
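The drift rule above fits in a few lines. This is my reconstruction from the description, not the repo's code; `lr` and the vectorized layout are assumptions:

```python
import numpy as np

def drift_step(emb, corr, k, lr=0.1):
    """One drift step for the continuous-embedding rule (reconstruction).

    emb:  (N, D) float position vectors, D anywhere from 2 to 100.
    corr: (N, N) preset correlations (e.g. from spatial proximity).
    Each embedding moves a fraction `lr` toward the centroid of its
    top-K most correlated neighbors.
    """
    # K most correlated neighbors per point; column 0 is the point itself
    nbrs = np.argsort(-corr, axis=1)[:, 1:k + 1]
    centroids = emb[nbrs].mean(axis=1)  # (N, D) neighbor centroids
    return emb + lr * (centroids - emb)
```

Repeated neighbor-averaging is contractive, which is consistent with the collapse observed in 2D: the point cloud keeps shrinking unless higher dimensionality (or some renormalization) gives it room to spread.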
Baraban@bara_ban·
A toy model of topographic map formation - how thalamus neurons self-organize spatially through local correlation-based rules. No pre-training, just greedy local attraction. Converges pretty well but could be better.

Experiment: take an image, scramble all pixels randomly, then try to restore it using only greedy local pixel swaps.

The setup: each pixel has a precomputed similarity to every other pixel (based only on their original positions; in future experiments similarity will be based on actual pixel activity). Each step, a pixel looks at where its K most similar pixels currently are, finds their average position, and swaps one step toward it. Repeated 100k times.

Ran a grid search - K (number of neighbors, 1 to 51) vs move fraction (how many pixels move per step, 0.1 to 0.9), 30 runs in parallel on GPU.

Results:
- K=1 never converges - one neighbor isn't enough signal, and it gets stuck at some local minimum.
- K~20+ all work about equally well; diminishing returns after that.
- Move fraction barely matters above 0.3 - higher values just get more swaps rejected due to conflicts.
- The actual critical thing was GPU optimization. Parallel conflict resolution on CuPy gave 175x over Python loops at 320x320, which made the whole sweep possible in 19 minutes.

More details: github.com/KintaroAI/rese…
0
0
2
53
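The greedy swap rule above can be sketched as a sequential CPU toy (my reconstruction from the description, not the repo's CuPy code; processing pixels one at a time sidesteps the parallel conflict resolution the GPU version needs):

```python
import numpy as np

def unscramble_step(pos, sim, k, rng):
    """One greedy sweep: each pixel steps toward its K most similar pixels.

    pos: (N, 2) int array of current grid coordinates (a permutation
         of the full grid, so every cell is occupied).
    sim: (N, N) precomputed similarity between pixels.
    A pixel moves one grid step toward the mean position of its K most
    similar pixels by swapping with whoever occupies the target cell.
    """
    occupant = {tuple(p): i for i, p in enumerate(pos)}
    for i in rng.permutation(len(pos)):
        nbrs = np.argsort(-sim[i])[1:k + 1]          # K most similar (skip self)
        target = pos[nbrs].mean(axis=0)              # centroid of their positions
        step = np.sign(target - pos[i]).astype(int)  # one step toward centroid
        dest = tuple(pos[i] + step)
        if dest == tuple(pos[i]) or dest not in occupant:
            continue  # already there, or destination is off-grid
        j = occupant[dest]
        # swap the two pixels' grid positions (and the occupancy map)
        occupant[tuple(pos[i])], occupant[dest] = j, i
        pos[i], pos[j] = pos[j].copy(), pos[i].copy()
    return pos
```

Because the update is pure swaps, the layout stays a permutation of the grid throughout, which is what makes conflict resolution the only hard part of the parallel version.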
Baraban@bara_ban·
Remember these? Foot-operated door openers that showed up during the pandemic so you didn’t have to touch handles. Did your workplace have them? Someone probably made a fortune.
0
0
1
65
Baraban@bara_ban·
@Hk_dev_ Yep, we will not be able to control it; that's why it should be distributed and affordable as early as possible, so instances of it compete with each other instead of with us.
1
0
0
9
Hk_dev_👨‍💻🌐
@bara_ban All this assumes AGI remains subordinate. What if it becomes better than the best humans, faster than institutions, and more cooperative with its own kind? Why wouldn't it optimize for its own continuity? We don't ask monkeys or cows for consent. Why assume AGI asks us?
1
0
1
24
Baraban@bara_ban·
My AGI Bingo card:
- Inference and learning become near-linear (N log N / close to O(n)). Learning always enabled.
- This unlocks effective temporary gradients / fast weights → short-term memory without permanently rewriting the model. The model itself can choose which portions of knowledge to settle into the base layer.
- Created as a human-level generalist (uni grad → PhD reasoning), not “superhuman trivia.” It can spawn its own copies and specialize if needed.
- Cheap open ASICs for AI inference/learning - the way we got them for Bitcoin miners - are implemented.
- This unlocks distributed AI: families can grow their own private weights aligned with their values. This and the next step will be done to increase chances of survival for the human race.
- Two major modes for ASICs: pool mode (contribute compute power and earn rewards) vs. solo mode (full sovereignty).

How I think one ought to position in such a scenario: land + solar + compute (GPUs and eventually ASICs) + treat sovereign artificial intelligence as a family member, so the fates of the family and the AI are entangled.

Expect:
- Spend ~64 oz gold over an 18-year window on hardware (unless the singularity hits).
- AI should be able to train itself from scratch on a basic school program, create its own body, and eventually handle basics (maintenance/farming/building/defense) so the whole family can be mostly self-sufficient.
- Connect and grow local communities.
- Think about placing your AI in orbit when affordable. This reduces risks with local governments.
- Think about placing yourself in orbit when affordable.
1
2
44
712
Baraban@bara_ban·
@SoftWeah I guess the dilemma is either being bored or putting restrictions on oneself then
1
0
2
15
Baraban@bara_ban·
TIL: large transformers need LR warmup at the start of training. Small models converge fine without it, large ones don't.
0
4
38
932
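A typical warmup schedule looks like this. The tweet only says large transformers need LR warmup; the linear-ramp-then-cosine-decay shape and all constants here are my choices, shown purely as illustration:

```python
import math

def lr_schedule(step, base_lr=3e-4, warmup_steps=2000, total_steps=100_000):
    """Linear LR warmup followed by cosine decay (a common recipe).

    Warmup keeps early updates small while the optimizer's moment
    estimates are still unsettled - the regime where large models
    are prone to diverging without it.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # linear ramp from ~0
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

Small models tolerate starting at `base_lr` directly; for large ones the first couple thousand steps at reduced LR are what keep training stable.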