Alignment Lab AI

7.2K posts

@alignment_lab

Devoted to addressing alignment. We develop state-of-the-art open-source AI. https://t.co/oANsMnut7V https://t.co/6aJDLUvuU5

Your Digital Ecosystem · Joined April 2023
4K Following · 12.6K Followers
Alignment Lab AI retweeted
Servamind
Servamind@servamind·
Meet the Founder

Rachel St. Clair spent years solving a problem most AI teams live with daily but rarely name: the data-compute lock-in that makes building AI slow, expensive, and inaccessible.

Her path here wasn't linear. PhD in Complex Systems and Brain Sciences at FAU. Postdoc at the Center for Future Mind. Computer vision systems for the Department of Homeland Security. Innovation Lab Director managing 25+ researchers. 20+ peer-reviewed papers. Work spanning compressed sensing networks, GANs, quantum ML, and bio-inspired architectures.

But the throughline across all of it: the belief that AI's biggest bottleneck isn't intelligence. It's infrastructure.

She founded Servamind to fix that at the architecture level — not with another tool, but with a new standard. The .serva standard.

Free 1TB beta launch → coming soon! servamind.com
2
2
9
507
Alignment Lab AI retweeted
Servamind
Servamind@servamind·
Meet our CTO. The person who helped us figure out how to hyperscale our stack.

Austin Cook (@alignment_lab) currently serves on the Board of Directors of the Active Inference Institute and has spent the majority of his career contributing to open-source AI, focusing on Optimization and Representation research.

Those open-source contributions have been adopted across the industry, from LAION through Intel to Nvidia, as key milestones for state-of-the-art openly accessible AI.

His take on what we're building: "Every model, every framework, every hardware target, they've all been operating on incompatible data dialects. .serva is the universal language they've been missing."

servamind.com
0
2
5
524
Alignment Lab AI retweeted
Servamind
Servamind@servamind·
Great work from GoogleResearch on TurboQuant. Strong results — 3-bit KV cache quantization, 8× attention speedup, zero accuracy loss. Solid theoretical foundations.

Worth noting the distinction: quantization optimizes what happens inside the model. .serva operates at the data layer — before the model ever sees the input.

.serva is universal and lossless. When downstream tasks are unknown — which they often are in general AI pipelines — you cannot know in advance what information will matter. We preserve everything and defer relevance to the learning system.

We're also operating at a different layer entirely: ~44× speedup at the data layer in fine-tuning. We've built across any model, at any stage — pretraining, fine-tuning, inference — with no retraining required.

The efficiency stack is being built from multiple directions at once. That's a good sign for the field.
0
3
5
620
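For readers unfamiliar with the mechanics the tweet contrasts: lossy quantization rounds values onto a coarse grid up front, so some information is discarded before learning ever happens. A minimal sketch of generic round-to-nearest 3-bit quantization (illustrative only; this is not TurboQuant's actual algorithm, and .serva's encoding is not public here):

```python
import numpy as np

def quantize_3bit(x):
    """Uniform per-tensor 3-bit quantization (8 levels)."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / 7.0          # 2**3 - 1 = 7 steps between levels
    q = np.round((x - lo) / scale)   # integer codes in [0, 7]
    return q.astype(np.uint8), lo, scale

def dequantize(q, lo, scale):
    """Map integer codes back to (approximate) float values."""
    return q.astype(np.float32) * scale + lo

x = np.array([0.1, 0.5, 0.9, -0.3], dtype=np.float32)
q, lo, scale = quantize_3bit(x)
x_hat = dequantize(q, lo, scale)

# round-to-nearest bounds the error by half a quantization step;
# the discarded remainder is exactly what "lossy" means here
assert np.max(np.abs(x - x_hat)) <= scale / 2 + 1e-6
```

A lossless data-layer scheme, by contrast, must be invertible: the original bytes are recoverable bit-for-bit, so no relevance decision is baked in before training.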
Alignment Lab AI retweeted
Servamind
Servamind@servamind·
Meet the researcher who designed the foundations of ServaEncode and Chimera from the ground up.

@PeterSutorJr is a PhD candidate in Computer Science at the University of Maryland — one of the world's leading experts in Hyperdimensional Computing, with 7+ peer-reviewed papers including a publication in Science Magazine. He worked with the Army Research Laboratory under an ORAU Fellowship, collaborating on Hyperdimensional Computing and Vector Symbolic Architectures. His thesis is built on the same theoretical foundations that power .serva.

His take: "I joined Servamind to make Hyperdimensional Computing the lifeblood of modern AI — to fully capitalize on efficiencies that classical machine learning cannot take advantage of."

That's not a vision statement. It's already in the benchmarks: 30–374× energy efficiency. 68× compute payload reduction. Same accuracy.

servamind.com
0
2
2
538
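Hyperdimensional Computing, mentioned above, represents data as very high-dimensional random vectors combined with two cheap operations: binding (elementwise multiply) and bundling (majority vote). A toy sketch of the textbook version (illustrative only; this is not ServaEncode or Chimera):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervectors: random ±1 codes of very large dimension

def hv():                     # fresh random bipolar hypervector
    return rng.choice([-1, 1], size=D)

def bind(a, b):               # binding: elementwise multiply (self-inverse)
    return a * b

def bundle(*vs):              # bundling: sign of the elementwise sum
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):                # normalized dot product (cosine-like)
    return float(a @ b) / D

# encode a tiny record {color: red, shape: square} as ONE vector
color, shape, red, square = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, square))

# unbinding with the "color" key recovers something close to "red",
# while unrelated vectors stay near-orthogonal
probe = bind(record, color)
assert sim(probe, red) > 0.3          # ≈ 0.5 in expectation
assert abs(sim(probe, square)) < 0.1  # ≈ 0 in expectation
```

Because every operation is elementwise over ±1 codes, the whole pipeline maps naturally onto cheap, parallel, low-precision hardware — the source of the efficiency claims usually made for HDC.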
Alignment Lab AI retweeted
Servamind
Servamind@servamind·
Meet the engineer who takes our research from theory to production.

@VictorCavero has spent his career doing one thing really well: making complex systems actually work at scale. Embedded systems. IoT. Automotive. Military R&D. Combat-critical systems design.

Before Servamind, he took a compression algorithm from research stage into a production-grade C implementation — from scratch. That's exactly what we needed someone to do with .serva.

At Servamind he owns the architecture design of our core technology — responsible for turning the encoding and compute engine into infrastructure that works in the real world, on real hardware.

His take: "Obsessed with making things more efficient — there's still so much to build and explore, but better technology shouldn't come at the planet's expense."

That's the Servamind ethos in one sentence. servamind.com
0
3
5
473
Alignment Lab AI retweeted
dr. jack morris
dr. jack morris@jxmnop·
Learning to write kernels might be the highest-ROI activity for displaced SWEs:

→ prereq: reasonable engineering ability
→ six to twelve months of study
→ millions of dollars, mark zuckerberg showing up at your house to hire you, etc.

i wish this were an exaggeration
41
63
1.9K
124K
Alignment Lab AI
Alignment Lab AI@alignment_lab·
Particularly in terms of quantizing features into effective regimes: a very large amount of that work operates explicitly on actual, unsupervised measurements of entropy, for the purpose of letting a maximally efficient representation emerge, because the computational substrate is itself still dominated by entropy costs as a primary consideration
1
0
2
164
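The "measuring entropy to pick a representation" idea above is concrete and cheap: a feature's empirical Shannon entropy lower-bounds the bits needed to code it, so a low-entropy feature can be quantized far more aggressively than a high-entropy one. A minimal sketch (one generic way to do this, not a description of any specific system):

```python
import numpy as np

def empirical_entropy(x, bins=256, value_range=(-1.0, 1.0)):
    """Shannon entropy (bits) of a feature's empirical histogram."""
    counts, _ = np.histogram(x, bins=bins, range=value_range)
    p = counts / counts.sum()
    p = p[p > 0]                       # 0 log 0 := 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
narrow = rng.normal(0.0, 0.01, 100_000)  # tightly concentrated feature
wide = rng.uniform(-1.0, 1.0, 100_000)   # spread-out feature

# the concentrated feature carries far fewer bits, so an unsupervised
# pipeline can assign it a much coarser quantization level
assert empirical_entropy(narrow) < empirical_entropy(wide)
```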
quetzal_rainbow
quetzal_rainbow@quetzal_rainbow·
The other problem with this paper is that discretization is treated here as a black box which only a mysterious "mapmaker" can do. But discretization happens constantly in nature. Sedimentation creates separate rock layers, cells are discretized by membranes, …
Séb Krier@sebkrier

An excellent paper for anyone interested in rigorous physicalist argument against computational functionalism. Alex is a fantastic, careful thinker and influenced my views a lot; we're working on a broader blog post breaking these concepts down, stay tuned! 🐙

7
2
41
4K
Vuk Rosić 武克
Vuk Rosić 武克@VukRosic99·
do you think most of the research is useless?
12
0
8
2.3K
Alignment Lab AI
Alignment Lab AI@alignment_lab·
absolutely disagree, even if we stopped with just what we have now, it would take years for society and for the deployment of it to really be appreciable at that scale; the current stopgap is just how long it takes people to understand, not what's available as currently known/extant implementation
0
0
0
96
Benjamin Todd
Benjamin Todd@ben_j_todd·
If AI progress stopped now, it would be a normal technology. One-off 5-10% productivity growth. Some routine white collar tasks automated. We chat to AI tools a lot. But no big economic or scientific acceleration. Ergo we don't have AGI.
53
6
179
23K
Alignment Lab AI retweeted
Mariusz Kurman
Mariusz Kurman@mkurman88·
Need more Claude, need more Codex, need more OpenCode or Pi? Gemini, Kimi? You got this
3
1
14
3.1K
Alignment Lab AI
Alignment Lab AI@alignment_lab·
until i read this paper i was losing my mind, not able to figure out why this architecture i had was outperforming everything else so hard (fully constructing mostly reasonable sentences out of bytes in a few minutes at 5m parameters). after reading the paper and doing some analysis and ablations, it's because i was using a 768d model and 256 vocab (plus some other stuff to do with num params to dim) that avoided the bottleneck they mention almost entirely by accident
2
0
13
1.8K
bycloud
bycloud@bycloudai·
how big of a problem is this?

> When backproping through the LM head, about 95-99% of the logit-gradient norm lies in directions that get projected away

seems like the current workaround is just to use scaling to brute force it
bycloud tweet media
28
36
349
40.9K
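The quoted 95-99% figure has simple geometry behind it: the LM head maps a d-dimensional hidden state to V logits with V much larger than d, so backprop through the head can only keep the component of the logit gradient lying in a d-dimensional subspace of logit space. A toy illustration with a random matrix and a random gradient (real logit gradients are not random, so this shows only the dimension-counting intuition, not the paper's measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5_000, 64   # vocab size >> hidden dim, as in real LM heads

W = rng.normal(size=(V, d)) / np.sqrt(d)   # stand-in for an LM head weight
g = rng.normal(size=V)                     # stand-in logit gradient

# backprop through the head keeps only the component of g inside
# col(W), a d-dimensional subspace of the V-dimensional logit space
Q, _ = np.linalg.qr(W)                     # orthonormal basis of col(W)
g_kept = Q @ (Q.T @ g)
lost = 1.0 - np.linalg.norm(g_kept) ** 2 / np.linalg.norm(g) ** 2

# for a random gradient, roughly 1 - d/V ≈ 98.7% of the squared norm
# lies in directions the projection discards
assert lost > 0.95
```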
Alignment Lab AI
Alignment Lab AI@alignment_lab·
so it turns out the fast inv sqrt trick from Quake III Arena (attributed, according to the internet, to either or both of Greg Walsh and @ID_AA_Carmack) is entirely critical for some work im doing building linear models out of pretrained nonlinear ones. rmsnorm and softmax both would have gone unsolved if not for it. the unlock here is extremely op, im stoked
0
2
3
566
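The trick being referenced: reinterpret a float's bits as an integer, shift and subtract from a magic constant to get a very good first guess at 1/sqrt(x), then polish with one Newton-Raphson step. The original is C; here is a bit-for-bit Python transcription for illustration:

```python
import struct

def fast_inv_sqrt(x):
    """The Quake III fast inverse square root, transcribed to Python."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # float bits as uint32
    i = 0x5F3759DF - (i >> 1)                          # the magic constant
    y = struct.unpack('<f', struct.pack('<I', i))[0]   # bits back to float
    return y * (1.5 - 0.5 * x * y * y)                 # one Newton iteration

# within ~0.2% of the true value after a single iteration
assert abs(fast_inv_sqrt(4.0) - 0.5) / 0.5 < 0.005
```

The reason it works as an analytical tool, not just a speed hack: the bit reinterpretation is approximately a log2 of the float, so the shift-and-subtract is doing arithmetic on logarithms, which is exactly the kind of linearization useful when approximating nonlinearities like rmsnorm's 1/sqrt.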
Alignment Lab AI
Alignment Lab AI@alignment_lab·
@nthngdy this is getting me so hard in the confirmation bias right now, this explains a ton!
0
0
0
297
Nathan Godey
Nathan Godey@nthngdy·
🧵New paper: "Lost in Backpropagation: The LM Head is a Gradient Bottleneck" The output layer of LLMs destroys 95-99% of your training signal during backpropagation, and this significantly slows down pretraining 👇
Nathan Godey tweet media
27
106
958
122.3K
Alignment Lab AI
Alignment Lab AI@alignment_lab·
@sebkrier is this paper operating on the premise that what happens inside of a computer is *not* happening in reality/subject to thermodynamic constraints?
0
0
3
115
Alignment Lab AI
Alignment Lab AI@alignment_lab·
@sebkrier ive read this twice now, i dont get where it identifies which party is which and why, and what the delta is between a compression algorithm producing a codebook of class labels (like rANS) and me definitely learning language from my parents?
1
0
2
277
Séb Krier
Séb Krier@sebkrier·
An excellent paper for anyone interested in rigorous physicalist argument against computational functionalism. Alex is a fantastic, careful thinker and influenced my views a lot; we're working on a broader blog post breaking these concepts down, stay tuned! 🐙
Séb Krier tweet media
Alexander Lerchner@AlexLerchner

🧵1/4 The debate over AI sentience is caught in an "AI welfare trap." My new preprint argues computational functionalism rests on a category error: the Abstraction Fallacy. AI can simulate consciousness, but cannot instantiate it. philpapers.org/rec/LERTAF

47
44
515
57K
Alignment Lab AI
Alignment Lab AI@alignment_lab·
@sebkrier It's genuinely crazy, people have no idea how efficient the tech actually is; no one ever really considers what something like Moore's law running for so long actually means

You can only double something so many times before it gets entirely out of hand
0
0
1
143
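The "you can only double something so many times" point is easy to make concrete. A back-of-envelope sketch, assuming the textbook ~2-year doubling period and the 1971 Intel 4004 as a start date (both round-number assumptions, not measurements):

```python
# doubling roughly every two years from 1971 to 2025 gives 27 doublings,
# a cumulative factor of about 134 million
doublings = (2025 - 1971) // 2
factor = 2 ** doublings

assert doublings == 27
assert factor == 134_217_728
```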
Séb Krier
Séb Krier@sebkrier·
Every day I notice inefficient processes that could be automated, yet won't be for a while bc of bureaucracy, legacy infra, misaligned incentives, inertia & status quo bias. Eventually competition forces it but it's so slow! "What could be, completely burdened by what has been."
14
8
107
9.9K