Alexandra

25.8K posts

@AI__Alexandra

AI, Computer Vision Researcher, & Alignment Pragmatist | Psalm 139 | ✝️ | Centrist Conservative Libertarian | e/acc | 💍| Trad feminine individualist

Menlo Park, CA · Joined January 2023

6K Following · 2.1K Followers
Alexandra retweeted
PyTorch
PyTorch@PyTorch·
PyTorch 2.11 is now available, featuring 2,723 commits from 432 contributors since PyTorch 2.10. This release prioritizes performance scaling for distributed training and next-generation hardware architectures. Highlights include a FlashAttention-4 backend for FlexAttention on Hopper and Blackwell GPUs, Differentiable Collectives for distributed training, and performance optimizations for Intel GPUs via XPU Graph. This release also delivers comprehensive operator expansion for Apple Silicon (MPS) and RNN/LSTM GPU export support. 🖇️ Read the PyTorch 2.11 release blog and release notes: pytorch.org/blog/pytorch-2… #PyTorch #OpenSource #AIInfrastructure
1
7
34
2.1K
Alexandra retweeted
alphaXiv
alphaXiv@askalphaxiv·
Yann LeCun and his team can't stop cooking: "LeWorldModel: Stable End-to-End Joint-Embedding Predictive Architecture from Pixels". One of the biggest bottlenecks of JEPAs is that they are hard to train, and this new research changes that. They propose LeWorldModel, showing that a small model can learn a usable world model directly from raw pixels, end to end. At just 15M parameters, it trains without hand-tuned heuristics or anti-collapse hacks while staying competitive and planning up to 48x faster, making JEPA-based modeling more accessible, cheaper, and more stable.
9
44
381
13.5K
Alexandra retweeted
Olga Zaghen
Olga Zaghen@olgazaghen·
🔮 Working on ML on curved manifolds? Don't miss out on Jacobi Fields! 🔮 I wrote a quick, highly visual and hopefully accessible introduction to the topic: "Jacobi Fields in Machine Learning" 🤠 Check it out here: olgatticus.github.io/blog/jacobi-fi…!
4
25
194
7.5K
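For context (standard Riemannian geometry, not specific to the linked post): a Jacobi field J along a geodesic γ measures how an infinitesimal family of nearby geodesics spreads apart, and it satisfies the Jacobi equation

```latex
\frac{D^2 J}{dt^2} + R(J, \dot{\gamma})\,\dot{\gamma} = 0
```

where R is the Riemann curvature tensor and D/dt is the covariant derivative along γ. Curvature thus directly controls how nearby geodesics converge or diverge, which is what makes Jacobi fields relevant for ML on curved manifolds.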
Alexandra retweeted
Impressions
Impressions@impression_ists·
Gustav Klimt, Apple Tree
7
310
2.1K
29K
Alexandra retweeted
Mathematica
Mathematica@mathemetica·
This is not two particles moving together. This is one quantum system living in two realities at once. |Φ⁺⟩ = 1/√2 (|00⟩ + |11⟩) Measure one qubit and the other instantly becomes the same — perfect correlation, no matter the distance.
9
33
165
5.8K
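The perfect correlation claimed above is easy to check numerically. A minimal NumPy sketch (illustrative only, not tied to any code linked from the tweet):

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>)/√2 as a 4-dimensional state vector,
# basis ordered |00>, |01>, |10>, |11>
phi_plus = np.zeros(4)
phi_plus[0b00] = 1 / np.sqrt(2)
phi_plus[0b11] = 1 / np.sqrt(2)

# Born rule: joint outcome probabilities in the computational basis
probs = np.abs(phi_plus) ** 2  # [0.5, 0, 0, 0.5]

# Sample joint measurements and split each outcome into the two qubits' bits
rng = np.random.default_rng(42)
outcomes = rng.choice(4, size=10_000, p=probs)
qubit_a, qubit_b = outcomes >> 1 & 1, outcomes & 1

# Only 00 and 11 ever occur: the two qubits agree on every single shot
print(np.all(qubit_a == qubit_b))  # True
```

Each qubit on its own looks like a fair coin flip; the correlation only shows up when the two measurement records are compared.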
Alexandra retweeted
Jenny Zhang
Jenny Zhang@jennyzhangzt·
Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents – self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving.

We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution. Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g. persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhang (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).
67
259
1.5K
97K
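The core idea can be caricatured in a few lines. This is a toy hill-climbing sketch, not the paper's method: the "task", the mutation operators, and every name below are invented for illustration. The point is that both the task-solving parameter and the meta-level knob that generates new candidates are editable, which is the hyperagent idea in miniature:

```python
import random

random.seed(0)

def task_score(policy):
    # Hypothetical task: drive the scalar "policy" toward 3.0
    return -(policy - 3.0) ** 2

# Archive of agents; each carries task behavior AND its own improvement knob
archive = [{"policy": 0.0, "mutation_scale": 1.0}]

for _ in range(200):
    parent = max(archive, key=lambda a: task_score(a["policy"]))
    child = {
        # self-modification of task-solving behavior
        "policy": parent["policy"] + random.gauss(0, parent["mutation_scale"]),
        # metacognitive self-modification: the improvement process itself evolves
        "mutation_scale": max(1e-3, parent["mutation_scale"] * random.uniform(0.5, 1.5)),
    }
    if task_score(child["policy"]) > task_score(parent["policy"]):
        archive.append(child)

best = max(archive, key=lambda a: task_score(a["policy"]))
```

In the real system the "policy" is an agent's code and the meta-level edits target the generation procedure itself (memory, performance tracking, etc.); the toy only shows why evolving the improver can pay off alongside evolving the solver.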
Alexandra retweeted
Bones Studio
Bones Studio@TheBonesStudio·
Training humanoid robots? You need motion data. Real, high-fidelity, human motion data. And until now, there was no open dataset purpose-built for humanoid robotics.

For 5 years, we've been building the largest enterprise-grade human motion and behavior datasets for embodied AI. Our data powered breakthrough SONIC research. Today, at GTC, with @NVIDIARobotics, we're opening a piece of it to the world.

BONES-SEED:
→ 142,200 motion capture animations
→ Up to 6 natural language descriptions per motion
→ Temporal segmentation of every action
→ Curated for humanoid robotics
→ In NVIDIA SOMA and Unitree G1 (MuJoCo) formats

From text to action. Now yours. Go build → bones.studio/datasets/seed #NVIDIAGTC
5
44
178
32.8K
Alexandra retweeted
Luma
Luma@LumaLabsAI·
Uni-1 is here! A new kind of model that thinks and generates pixels simultaneously. Less artificial. More intelligent.
313
483
4.4K
1.5M
Alexandra retweeted
Quantum Physics arXiv
Quantum Physics arXiv@QuantumPapers·
Linear-optical generation of hybrid GKP entanglement from small-amplitude cat states Shohei Kiryu, Yohji Chin, Masahiro Takeoka, Kosuke Fukui arxiv.org/abs/2603.19870 [quant-ph]
0
0
2
100
Alexandra retweeted
Cosmos Archive
Cosmos Archive@cosmosarcive·
This is a time crystal. Quantum matter that spontaneously breaks discrete time-translation symmetry, oscillating at a subharmonic of its periodic drive without absorbing net energy. These iridescent morphing structures show real Floquet DTC physics in action.
36
185
821
17K
Alexandra retweeted
Ksenia_TuringPost
Ksenia_TuringPost@TheTuringPost·
16 Reinforcement Learning approaches you should know about (classic + modern)
▪️ RLHF – RL from Human Feedback
▪️ RLAIF – RL from AI Feedback
▪️ RLVR – RL with Verifiable Rewards
▪️ RLCF – RL from Community Feedback (2 different variants)
▪️ RLCF – RL from Checklist Feedback
▪️ CM2
▪️ Critique-RL
▪️ CRL – Critique RL
▪️ ICRL – In-Context RL
▪️ RLBF – RL with Backtracking Feedback
▪️ TriPlay-RL
▪️ SPIRAL
▪️ Co-rewarding
▪️ RESTRAIN
▪️ PRL – Process Reward Learning
▪️ RLSF – RL from Self-Feedback
Save this list and check it out for links and explanations: turingpost.com/p/rlapproaches
8
115
527
20.6K
Alexandra retweeted
22:22
22:22@22hrr22min·
van gogh's flowers
6
493
2.4K
44.4K
Alexandra retweeted
alphaXiv
alphaXiv@askalphaxiv·
"Exclusive Self Attention" This paper proposes Exclusive Self-Attention (XSA), a tiny two-line change that stops attention from looking at itself. Forcing each token to attend only to the rest of the sequence can make transformers more effective, improving long-context performance at almost no extra cost.
14
121
722
34.8K
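To make the "two-line change" concrete: in a standard attention layer it amounts to masking the diagonal of the score matrix before the softmax. A minimal single-head NumPy sketch (all names here are illustrative; the paper's actual change lands inside a transformer's attention layer):

```python
import numpy as np

def exclusive_self_attention(x, w_q, w_k, w_v):
    """Softmax attention in which each token cannot attend to itself."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.T) / np.sqrt(k.shape[-1])
    # The XSA-style change: mask the diagonal so token i never sees token i
    np.fill_diagonal(scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))                    # 6 tokens, 8-dim embeddings
w = [rng.normal(size=(8, 8)) for _ in range(3)]
out, attn = exclusive_self_attention(x, *w)
# Each row of attn still sums to 1, but every self-attention weight is exactly 0
```

The -inf entries become zeros after the softmax, so each token's probability mass is redistributed over the rest of the sequence, which is where the claimed long-context benefit comes from.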
Alexandra retweeted
Melissa Chen
Melissa Chen@MsMelChen·
The good news is that now they’re not that hungry
72
73
809
25.9K
Cosmos Archive
Cosmos Archive@cosmosarcive·
“A plasma is the ‘fourth state of matter.’ Solids, liquids, and gases make up the three familiar states of matter, but the most common form of matter in the universe is plasma, a gas of ionized atoms.” — Michio Kaku
53
416
2.1K
57.7K
Alexandra
Alexandra@AI__Alexandra·
@cosmosarcive Magnetohydrodynamics is where plasma physics gets sexy.
3
4
11
670