Pollux9437
@Orange41324306
Agent @yipitdata, RL Infra and pretrain research at night, sometimes photography and poetry. writer account @orange97648

Research taste, technical depth, writing skills, etc., can all be trained. Curiosity, self-motivation, and integrity are far harder to teach. Those traits matter more than people think.

I’m so glad AI killed LeetCode interviews. For 10 years, tech companies made every engineer grind the same puzzles and prove they could invert a binary tree from memory. Today, the dumbest AI model can walk in and one-shot the entire interview. Thank you, AI.

Introducing 𝑨𝒕𝒕𝒆𝒏𝒕𝒊𝒐𝒏 𝑹𝒆𝒔𝒊𝒅𝒖𝒂𝒍𝒔: rethinking depth-wise aggregation. Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.
🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.
🔗 Full report: github.com/MoonshotAI/Att…
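A minimal numpy sketch of the core idea as described above: instead of the standard residual stream summing all earlier layer outputs with fixed, uniform weights, the current state queries the preceding layers' states and takes a learned, input-dependent softmax-weighted combination. The function name, shapes, and projection matrices here are my own illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_residual(history, Wq, Wk):
    """Input-dependent aggregation over preceding layers.

    history: list of per-layer hidden states, each of shape (d,).
    A plain residual stream would return sum(history) with fixed,
    uniform accumulation; here the newest state forms a query,
    every earlier state forms a key, and the output is a
    softmax-weighted retrieval over the whole depth-wise history.
    """
    H = np.stack(history)                    # (L, d): one row per layer
    q = H[-1] @ Wq                           # query from the current state
    K = H @ Wk                               # keys from all preceding states
    scores = K @ q / np.sqrt(q.shape[-1])    # scaled dot-product scores, (L,)
    w = softmax(scores)                      # learned, input-dependent weights
    return w @ H                             # selective combination of past states
```

Block AttnRes would amortize this by attending over compressed per-block summaries rather than every individual layer, which is what keeps cross-layer attention practical at the 48B scale mentioned above.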
