
maggie
427 posts

maggie
@ebervector
a self that touches all edges • UT Austin Brain Behavior Computation Lab • @runrl_com
San Francisco, CA · Joined May 2023
1.3K Following · 743 Followers
Pinned Tweet

So pleased to have been able to write a little commentary piece with my advisor @weixx2 for @NatMachIntell! It's about this great work by @JamesGornet and Matt Thomson taking a look at how cognitive maps can arise just from predicting visual observations: nature.com/articles/s4225…

@BrantonDeMoss @jankulveit @g_leech_ pointing at a deeper structure in the universe: everything is crab

@ebervector @jankulveit @g_leech_ It's an aesthetic preference. So much of the beauty of life is in specialization and adaptation to niches. "Convergence of moral abstractions" sounds like paperclipping to me.
I'll allow for some amount of moral carcinization, if not complete convergence!

@BrantonDeMoss @jankulveit @g_leech_ just because that would be fun/say something about morality in general?

@jankulveit @g_leech_ I pray for speciation to win over convergence

🧵1/4 The debate over AI sentience is caught in an "AI welfare trap." My new preprint argues computational functionalism rests on a category error: the Abstraction Fallacy. AI can simulate consciousness, but cannot instantiate it. philpapers.org/rec/LERTAF

Our Idea: Before standard language pre-training, we "pre-pre-train" on data generated by a family of neural cellular automata (NCA). NCAs extend classical cellular automata, like Conway’s Game of Life, by parameterizing the spatially local update rule as a neural network. Evolving 2D grids under sampled NCA rules yields tokenized trajectories with rich computational structure and spatiotemporal patterns that mirror core properties of natural language.
(3/n)
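Roughly, the loop is: sample a random spatially local rule, evolve a 2D grid under it, and flatten the frames into a token sequence. A minimal sketch of that loop, assuming a tiny MLP rule, an 8-state grid, and per-cell-state tokenization (none of which are specified in the thread, so treat every name and parameter below as an assumption, not the authors' implementation):

# Hypothetical sketch of generating NCA "pre-pre-training" data.
# Grid size, rule parameterization, and tokenization are assumptions.
import numpy as np

def sample_nca_rule(n_states=8, hidden=16, rng=None):
    """Sample a random spatially local rule: a tiny MLP mapping a cell's
    one-hot-encoded 3x3 neighborhood to scores over next cell states."""
    rng = rng or np.random.default_rng()
    w1 = rng.normal(0.0, 1.0, size=(9 * n_states, hidden))
    w2 = rng.normal(0.0, 1.0, size=(hidden, n_states))
    return w1, w2

def step(grid, rule, n_states=8):
    """Apply the sampled rule to every cell in parallel (toroidal boundary)."""
    w1, w2 = rule
    h, w = grid.shape
    # Gather the 3x3 neighborhood of every cell via rolled copies of the grid.
    neigh = np.stack([np.roll(np.roll(grid, dy, 0), dx, 1)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=-1)   # (h, w, 9)
    onehot = np.eye(n_states)[neigh].reshape(h, w, -1)                       # (h, w, 9*n_states)
    scores = np.tanh(onehot @ w1) @ w2                                       # (h, w, n_states)
    return scores.argmax(-1)

def rollout_tokens(size=16, steps=32, n_states=8, seed=0):
    """Evolve a random initial grid and flatten each frame into tokens."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, n_states, size=(size, size))
    rule = sample_nca_rule(n_states, rng=rng)
    tokens = []
    for _ in range(steps):
        tokens.extend(grid.flatten().tolist())   # each cell state becomes one token
        grid = step(grid, rule, n_states)
    return tokens

if __name__ == "__main__":
    seq = rollout_tokens()
    print(len(seq), seq[:20])   # synthetic "text" for transformer pre-pre-training

The only point of the sketch is that the resulting sequences carry nontrivial spatiotemporal structure for a transformer to model before it ever sees language.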

Can language models learn useful priors without ever seeing language?
We pre-pre-train transformers on neural cellular automata — fully synthetic, zero language. This improves language modeling by up to 6%, speeds up convergence by 40%, and strengthens downstream reasoning.
Surprisingly, it even beats pre-pre-training on natural text!
Blog: hanseungwook.github.io/blog/nca-pre-p…
(1/n)

@kyliebytes Bumping just the way you are by milky in the Waymo with the windows down 🙂↕️

@ebervector Have to drop but thanks for having me here!

We’re doing another one of these tonight, in 20 minutes! Get in here and let’s talk about personalism and human dignity
maggie@ebervector
Hey, come through tonight! We’re meeting at 7 CST on X Spaces to read Antiqua et Nova, the Catholic position on AI released last year, and we’ll chat virtue ethics and whatever else you wanna discuss!

I grabbed 2 of these today around Russian Hill and in the process made friends with a local tour guide who’s lived here for 50 years. Go do this and talk to people
Riley Walz@rtwlz
Payphones are strangely still licensed in California, so I filed a FOIA request and got the full list. Naturally I made a game you can now play:

you need to be shenanigans-maxxing
Beff (e/acc)@beffjezos
Fun-maxxing is the path to excellence in 2026







