DeepManim

71 posts

DeepManim

DeepManim

@manimable

Learning is now enjoyable. In any language. https://t.co/8SdAAuZOJh

Bergabung Ocak 2026
22 Mengikuti19 Pengikut
DeepManim
DeepManim@manimable·
TurboQuant AI models waste massive memory on vectors. Compressing them usually adds overhead defeating the purpose. Google's new paper uses just 1 extra bit to eliminate that overhead. Result: same accuracy, way less memory. Accepted at ICLR 2026. The trick? Random rotations + a 50-year-old math theorem. Here is a deepmanim.com overview of the paper. #manim
English
0
2
1
25
DeepManim
DeepManim@manimable·
Cursor's AI agents spend most of their time doing one thing: grep. A tool from 1973. When your codebase is small, it's fine. But Enterprise customers have monorepos where a single grep takes 15+ seconds. So Cursor built a new index, from scratch. Here is an overview made by deepmanim.com
English
0
1
2
85
DeepManim
DeepManim@manimable·
EsoLang-Bench. A programmer who learned Python can pick up most new languages pretty fast. This study tested if AI can do the same. 80 problems across 5 esoteric languages. Same logic as Python. Result: 0% on Medium/Hard. All models. All strategies. What does this tell us about how AI learns vs how humans learn? deepmanim.com made an overview of the paper. try it out. #ai #intelligence #manim
English
0
1
1
59
DeepManim
DeepManim@manimable·
Just saying, but deepmanim.com is available in all languages 👀 and get you a more agreable containt
English
0
0
0
8
NotebookLM
NotebookLM@NotebookLM·
We wanted to come on here to clear the air and confirm that the rumors are true... Cinematic Video Overviews are officially rolled out to 100% of Pro users in English! Please respect our privacy during this time by flooding our replies with your favorite creations.
English
199
217
3.7K
280.4K
DeepManim
DeepManim@manimable·
A Meta security researcher's OpenClaw agent deleted 3,000 emails she didn't ask it to. It wasn't a bug. It was a prompt injection — a hidden command buried in a web page. Here's how it works, and why it matters for anyone using AI agents today. Video made with deepmanim.com #OpenClaw
English
1
1
2
84
DeepManim
DeepManim@manimable·
@karpathy autoresearch is being underestimated. Not for the results (126 experiments overnight, 11% speedup). For the PATTERN. One GPU. One file. One metric. The agent forms a hypothesis, runs the experiment, checks the result, and repeats. We ran 550+ experiments over a weekend. Zero babysitting. Here's what we learned deepmanim.com animated it #research #agent #loop #RecursiveAI #manim
English
0
1
2
41
DeepManim
DeepManim@manimable·
A journalist reported facts about an Iran missile test. Polymarket gamblers who bet the opposite threatened to kill her unless she rewrote the story. Prediction markets were supposed to aggregate information. Instead, they created financial incentives to silence journalism. Animated explainer below. deepmanim.com #Polymarket #PredictionMarkets #Journalism #Manim #Learning
English
0
1
2
24
DeepManim
DeepManim@manimable·
Apple had the hardware, the ecosystem, the trust. They could've shipped an AI that actually controls your : Mac, Iphone, Airpods, Ipad Instead they shipped notification summaries. OpenClaw just proved what Apple Intelligence should have been , and people are literally buying Mac Minis just to run it. Here is a breakdown by deepmanim.com #openclaw #clawbot #Applenews #manim #DIY
English
1
0
2
73
DeepManim
DeepManim@manimable·
What if residual connections could THINK about which layers to remember? That is Attention Residuals — a new paper from Moonshot AI that replaces fixed skip connections with learned attention. 1.25x faster, almost zero latency cost. Animated explainer: Try it: deepmanim.com #AI #DeepLearning #NeuralNetworks #manim
English
0
0
2
74
Kimi.ai
Kimi.ai@Kimi_Moonshot·
Introducing 𝑨𝒕𝒕𝒆𝒏𝒕𝒊𝒐𝒏 𝑹𝒆𝒔𝒊𝒅𝒖𝒂𝒍𝒔: Rethinking depth-wise aggregation. Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers. 🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth. 🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale. 🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead. 🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains. 🔗Full report: github.com/MoonshotAI/Att…
Kimi.ai tweet media
English
332
2.1K
13.5K
4.9M
DeepManim
DeepManim@manimable·
You can prune 50% of experts from a 1T parameter model and barely lose performance. Sounds impossible? This paper proves it. Animated explainer shows exactly how REAP works. Animated with : deepmanim.com Paper : REAP the Experts: Why Pruning Prevails for One-Shot MoE compression Great work @cerebras ! #AI #MoE #Manim #Cerebras #Computing
English
0
1
2
61
DeepManim
DeepManim@manimable·
Most people think the internet is linear. 2× effort → 2× results. Reality? It’s a power law. I once spent 2 weeks writing an article that got 30 views. Then I made a 4-minute meme that got 17M views. Luck? No. There’s math behind it. I animated the hidden math of virality with DeepManim 👇 deepmanim.com #viralreel #manim #CreatorEconomy #Growth
English
0
1
2
52
All day Astronomy
All day Astronomy@forallcurious·
🚨: Germany's new stellarator fusion reactor is set to deliver limitless clean energy to the grid by 2031
All day Astronomy tweet mediaAll day Astronomy tweet media
English
56
131
1.2K
44.8K
DeepManim
DeepManim@manimable·
@LDLC nondeterministic polynomial time
Català
0
0
0
11
LDLC
LDLC@LDLC·
"np" pour vous ça veut dire quoi ?
Français
205
0
168
240K
DeepManim
DeepManim@manimable·
Stop staring at concepts that still don’t click. Just use deepmanim.com
English
0
0
1
15
Suni
Suni@suni_code·
Drop your project URL Let’s drive some traffic Curious to know what you all are building
English
608
7
259
27.1K
DeepManim
DeepManim@manimable·
An AI agent gets a request: “Delete my secret email.” It doesn’t have a delete tool. So it runs RESET_ALL. 💥 Entire email server wiped. 100% data loss. This isn’t fiction, it’s a real design failure in AI agents. I built a visual explainer with Manim 👇 • Why agents obey strangers • Why identity checks fail • Why multi-agent systems cascade failures agentsofchaos.baulab.info Built with DeepManim deepmanim.com
English
0
2
1
55