DataVoid

6.6K posts

@DataPlusEngine

Independent ML researcher. The First step in knowing is admitting you don't

https://discord.gg/KkKSVqU4Gs · Joined June 2023
617 Following · 2.1K Followers
Pinned Tweet
DataVoid
DataVoid@DataPlusEngine·
AI visionaries tend to be dreamers who cannot dream: so utterly engulfed within their own doctrine that their daring stabs at the truth amount to moving numbers on a plot.
2 replies · 2 reposts · 8 likes · 3.1K views
DataVoid reposted
Lex
Lex@xw33bttv·
Cursor AI may be in material breach of contract with their new Composer model, which is generating buzz online for reportedly reaching Opus-level performance. It's alleged that the new model is a fine-tuned checkpoint of Moonshot's Kimi K2.5. If true, the original model is licensed under a modified MIT license containing a clause that applies to commercial products with over 100 million monthly active users or more than $20 million (or equivalent) in monthly revenue. Complying with that clause is simple: prominently credit "Kimi K2.5" in the product or service's UI. Cursor could now face serious PR and legal issues simply because they couldn't be bothered to cite the underlying team's work. This is open-source 101. Talk about being hoist with your own petard.
[image]
25 replies · 9 reposts · 244 likes · 23.3K views
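The thresholds described in the thread are concrete enough to sketch as a simple check. This is a minimal sketch assuming the clause works exactly as the tweet describes it (the actual license text may differ); the function name and signature are invented:

```python
def kimi_clause_triggered(monthly_active_users: int, monthly_revenue_usd: float) -> bool:
    """Return True if the attribution clause (as described in the thread:
    over 100M monthly active users, or over $20M monthly revenue) would apply.
    Thresholds are taken from the tweet, not from the license text itself."""
    return monthly_active_users > 100_000_000 or monthly_revenue_usd > 20_000_000
```

A product crossing either threshold would then owe the prominent "Kimi K2.5" credit in its UI, per the alleged license terms.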
catid
catid@MrCatid·
I just found out I was the first person to order the GB300 computer
6 replies · 1 repost · 44 likes · 4.2K views
DataVoid reposted
MiniMax (official)
MiniMax (official)@MiniMax_AI·
During the iteration process, we also realized that the model's ability to recursively evolve its harness is equally critical. Our internal harness autonomously collects feedback, builds evaluation sets for internal tasks, and based on this continuously iterates on its own architecture, skills/MCP implementation, and memory mechanisms to complete tasks better and more efficiently.
[image]
14 replies · 57 reposts · 668 likes · 135.4K views
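The loop MiniMax describes (collect feedback, build an eval set from internal tasks, keep a revision of the harness only if it scores better) can be sketched generically. All names below are hypothetical stand-ins, not MiniMax's actual harness code:

```python
def evolve(harness, eval_set, propose_revision, score):
    """One self-evolution step, per the loop described in the post:
    the harness proposes a revision of itself (e.g. to its skills/MCP
    implementation or memory mechanism) and keeps the revision only if
    it scores better on the internally built eval set."""
    candidate = propose_revision(harness)
    old_total = sum(score(harness, task) for task in eval_set)
    new_total = sum(score(candidate, task) for task in eval_set)
    return candidate if new_total > old_total else harness
```

Run repeatedly, with the eval set itself growing from task feedback, this gives the recursive harness iteration the post describes.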
DataVoid
DataVoid@DataPlusEngine·
I pushed my tests too hard for Hermes-Agent and trashed my CPU somehow. Literally burnt it. It's now 62 °C under no load and then overheats. Oops lol, I got wayyyy too into Hermes dev. Props to @Teknium @NousResearch for making a project that has gotten me so enveloped.
[2 images]
3 replies · 0 reposts · 16 likes · 596 views
DataVoid
DataVoid@DataPlusEngine·
I reapplied thermal paste 3 different times and isolated the cause as not being the cooling block. Reseated the CPU 4 times. The only possible explanation is that the CPU is toast. It did suddenly smell slightly burnt under low load during a test.
1 reply · 0 reposts · 3 likes · 93 views
DataVoid
DataVoid@DataPlusEngine·
@DavidSHolz @Angaisb_ I have said for years: diffusion has a much more authentic feel than AR.
0 replies · 0 reposts · 3 likes · 252 views
David
David@DavidSHolz·
@Angaisb_ almost 100 percent of image and video models are still diffusion, you're just confused, sorry!
12 replies · 1 repost · 152 likes · 17.5K views
Angel 🌼
Angel 🌼@Angaisb_·
Midjourney should have gone full AR and left diffusion behind. They had the data, the compute, and the talent, yet somehow they still managed to become irrelevant. This isn't any better than older Midjourney models. Sad to watch a company I genuinely liked fade out in real time.
[image]
Mark Kretschmann@mark_k

The long-awaited testing phase for @Midjourney V8 has officially begun, marking a massive leap forward for the generative art platform. This latest iteration promises a significant boost in efficiency, operating at five times the speed of its predecessors while maintaining a much tighter grip on complex prompt instructions. High-resolution creators will find the native 2K modes particularly useful for professional workflows. The update also brings more reliable text rendering and enhanced "sref" styling, allowing for a level of aesthetic consistency that was previously difficult to achieve. Personalization is a major focus of this release, with improved moodboard performance to help users fine-tune their unique visual language. It is an impressive step toward making AI-assisted design both faster and more intuitive.

16 replies · 8 reposts · 141 likes · 22.5K views
DataVoid
DataVoid@DataPlusEngine·
@PurzBeats I am not a Nitro user and I haven't seen a single ad to date
2 replies · 0 reposts · 3 likes · 103 views
Purz.ai
Purz.ai@PurzBeats·
I am a Nitro user; there is absolutely no reason to be showing me ads that I can't get rid of. This right here will be the downfall of Discord. Someone's going to vibe code something better. Or we'll just go back to IRC, I guess.
[image]
15 replies · 0 reposts · 41 likes · 1.6K views
DataVoid reposted
Haocheng Xi
Haocheng Xi@HaochengXiUCB·
K-means is simple. Making it fast on GPUs isn't. That's why we built Flash-KMeans, an IO-aware implementation of exact k-means that rethinks the algorithm around modern GPU bottlenecks. By attacking the memory bottlenecks directly, Flash-KMeans achieves a 30x speedup over cuML and a 200x speedup over FAISS, with the same exact algorithm, just engineered for today's hardware. At the million scale, Flash-KMeans can complete a k-means iteration in milliseconds. A classic algorithm, redesigned for modern GPUs.
Paper: arxiv.org/abs/2603.09229
Code: github.com/svg-project/fl…
36 replies · 196 reposts · 1.7K likes · 278.5K views
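For reference, one exact k-means iteration in plain NumPy. The pairwise-distance matrix below is the memory-bound step that an IO-aware GPU kernel like Flash-KMeans is engineered around; this is a generic sketch of the algorithm, not the Flash-KMeans code:

```python
import numpy as np

def kmeans_step(points, centroids):
    """One exact k-means iteration (assignment + centroid update).
    Uses the expansion ||p - c||^2 = ||p||^2 - 2 p·c + ||c||^2; materializing
    this full (n_points, k) distance matrix is the memory traffic that
    IO-aware implementations restructure."""
    d2 = ((points ** 2).sum(1, keepdims=True)
          - 2 * points @ centroids.T
          + (centroids ** 2).sum(1))
    labels = d2.argmin(axis=1)
    # Recompute each centroid as the mean of its assigned points;
    # keep the old centroid if a cluster is empty.
    new_centroids = np.array([
        points[labels == k].mean(axis=0) if (labels == k).any() else centroids[k]
        for k in range(len(centroids))
    ])
    return labels, new_centroids
```

The algorithmic result is identical whatever the implementation; the speedups quoted above come purely from how this computation is scheduled on the hardware.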
DataVoid reposted
Rosinality
Rosinality@rosinality·
ByteDance also implemented attention over depth. They literally combined it with sequence attention.
[image]
9 replies · 126 reposts · 882 likes · 60.6K views
Nous Research
Nous Research@NousResearch·
Hermes Agent v0.3.0 ☤ 248 PRs. 15 contributors. 5 days.
• Real-time streaming across CLI and all platforms
• First-class plugin architecture; package and share tools+commands+skills
• /browser: connect to live Chrome via CDP
• @vercel AI Gateway model provider
• @browser_use browser tool provider
• VS Code, Zed, and JetBrains integration
• Voice mode with local Whisper
• PII redaction everywhere
9 new skills. 50+ bug fixes. Much more in the full changelog.
[image]
74 replies · 81 reposts · 1.1K likes · 408.5K views
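The /browser feature connects to a live Chrome instance over the Chrome DevTools Protocol. As an illustration of the standard CDP discovery step (not Hermes Agent's actual implementation): Chrome started with --remote-debugging-port=9222 serves a JSON list of targets at http://localhost:9222/json, and a client picks out a page target's webSocketDebuggerUrl to attach to:

```python
import json

def pick_cdp_page(targets_json: str) -> str:
    """Given the JSON Chrome serves at http://localhost:9222/json,
    return the WebSocket debugger URL of the first page target.
    A CDP client then opens this WebSocket to drive the tab."""
    targets = json.loads(targets_json)
    for target in targets:
        if target.get("type") == "page":
            return target["webSocketDebuggerUrl"]
    raise LookupError("no page target found")
```

Fetching the target list itself is a plain HTTP GET; only the subsequent command traffic runs over the WebSocket.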
DataVoid
DataVoid@DataPlusEngine·
@Rahmeljackson @beyond_fps Only if they open-source it, which they haven't done for a single previous version to date.
1 reply · 0 reposts · 1 like · 12 views
jinofcoolnes
jinofcoolnes@Rahmeljackson·
@beyond_fps The image model can look however you want. I think when this releases, we're going to see different variations and styles like current image models have now.
2 replies · 0 reposts · 2 likes · 859 views
DataVoid
DataVoid@DataPlusEngine·
@EHuanglu that's exactly why they had to kill it
0 replies · 0 reposts · 0 likes · 12 views
DataVoid
DataVoid@DataPlusEngine·
@jtydhr88 almost got it! keep at it, looking great so far!
0 replies · 0 reposts · 0 likes · 35 views
DataVoid reposted
jtydhr88
jtydhr88@jtydhr88·
Tried to recreate PS’s image rotation feature inside ComfyUI - 2
9 replies · 19 reposts · 144 likes · 11K views
DataVoid reposted
Chubby♨️
Chubby♨️@kimmonismus·
Holy: Kimi did amazing work here, because it changes one of the most basic parts of how deep AI models pass information from layer to layer. Instead of blindly mixing in everything from earlier layers equally, the model can now choose which past information is actually useful for each token and task. That helps deep models keep important signals from getting washed out, making training more stable and efficient. The big deal is that Kimi shows this idea works at scale too: better results, about 25% more compute efficiency, and almost no extra inference slowdown.
[image]
Kimi.ai@Kimi_Moonshot

Introducing Attention Residuals: rethinking depth-wise aggregation. Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.
🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.
🔗 Full report: github.com/MoonshotAI/Att…

28 replies · 34 reposts · 447 likes · 32.9K views
Kimi.ai
Kimi.ai@Kimi_Moonshot·
Introducing Attention Residuals: rethinking depth-wise aggregation. Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.
🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.
🔗 Full report: github.com/MoonshotAI/Att…
[image]
327 replies · 2K reposts · 13.4K likes · 4.8M views