Annu ✨

Gensyn NoLoCo: Training Large Models With No All-Reduce

Key Highlights
• NoLoCo removes the global all-reduce step by synchronising only small random pairs of replicas
• Convergence stays stable through random activation routing and a modified Nesterov step
• Synchronisation becomes almost ten times faster at large scale

Background
• Standard data-parallel training depends on a global all-reduce, which becomes a major bottleneck on heterogeneous or internet-connected clusters
• Low-communication methods reduce how often all-reduce is called, but still suffer from global latency
• NoLoCo shows that all-reduce can be eliminated entirely without hurting convergence

How It Works
• Replicas complete several local SGD steps and then share weights with a random peer
• Random activation routing spreads information between pipeline-stage replicas
• A modified Nesterov update keeps parameters aligned and prevents drift

Findings
• Performance matches standard data-parallel accuracy even at large scale
• Synchronisation latency drops by an order of magnitude
• Training remains stable across low-bandwidth and variable network environments

Why It Matters
• Enables large-model training without specialised high-speed interconnects
• Improves resource utilisation by removing long global synchronisation waits
• Expands access to scalable training across diverse and distributed hardware

Full blog: gensyn.ai/articles/noloco

Tags: @gensynai @KBekhtiev @S4Sanjay_das @Kumoooo_co @cyd00r
#Gensyn #Web3AI #DecentralizedAI #AICompute #OpenResearch #RLswarm #BlockAssist
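The "How It Works" steps above can be sketched as a toy single-process simulation. This is my own simplification, not Gensyn's code: it shows only the core idea of replacing a global all-reduce with random pairwise weight averaging after a few local SGD steps, and it omits the paper's modified Nesterov outer step and activation routing. All function names here are hypothetical.

```python
import random

def local_sgd_step(w, grad, lr=0.1):
    """One local SGD step on a replica's own data shard (w is a scalar here)."""
    return w - lr * grad(w)

def noloco_round(weights, grads, inner_steps=4, lr=0.1):
    """One NoLoCo-style outer round, simplified.

    Instead of a global all-reduce across all replicas, replicas are
    paired at random and each pair averages its parameters -- a gossip
    step whose communication cost does not grow with replica count.
    """
    n = len(weights)
    # 1) each replica runs several local SGD steps independently
    for i in range(n):
        for _ in range(inner_steps):
            weights[i] = local_sgd_step(weights[i], grads[i], lr)
    # 2) random pairing replaces the global all-reduce
    order = list(range(n))
    random.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):
        avg = 0.5 * (weights[a] + weights[b])
        weights[a] = avg
        weights[b] = avg
    return weights
```

Note that the pairwise averaging in step 2 preserves the global mean of the parameters exactly, which is one intuition for why repeated random pairing can stand in for a full all-reduce over many rounds.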

Gritual Guys! It's been a little over a month since I started contributing to the Ritual Ecosystem: learning, building, creating, and vibing with one of the most active communities out here. Today I'm finally sharing my first Ritual artwork, my first official piece. Looking back, it feels good to see the impact of consistency:

✅ ๐๐,๐๐๐+ impressions on my Ritual content
✅ A growing presence across X & Discord
✅ The ๐๐ข๐ญ๐ญ๐ฒ ๐๐ข๐ญ๐ญ๐ฒ role
✅ Joining community calls, events, and meme battles
✅ Helping more people understand what Ritual is building

None of this happened overnight. It came from showing up every day, learning something new, and adding whatever value I could.

If you're thinking about contributing to Ritual, there's literally space for everyone. Writers, artists, builders, memers, learners... all of us push this ecosystem forward.

This artwork is just the beginning for me. There's a lot more to create, a lot more to learn, and a lot more to build together with the strongest, most active community I've been part of.

Many more chapters ahead.

#Gritual #RitualEcosystem #AIonChain #Web3Builders #RitualCommunity
@ritualnet @ritualfnd @joshsimenhoff @mongdiny7 @Jez_Cryptoz @dunken9718 @0xMadScientist

🧠 Why AI Needs a Memory Layer, and How Ritual Makes It Verifiable

After talking about Modular AI, there's one piece people don't usually think about, but it might be the most important part of intelligence: Memory. Without memory, nothing, human or machine, can learn, improve, or evolve. That's why Ritual treats memory as a core module in its new architecture.

Let's make this simple and easy to understand.

1. Humans Need Memory. AI Does Too.

Think of your own day:
You remember your friend's name
You remember your preferences
You remember yesterday's mistakes
You remember your goals

Now imagine waking up every morning with zero memory: every conversation starts from scratch, every lesson is forgotten. That's how most AI works today. It can talk and generate answers, but it doesn't remember anything in a reliable, shared, or verifiable way. Everything disappears after the interaction. This makes AI feel powerful, but also strangely dumb.

Ritual is changing that.

2. What Is a Memory Layer in AI?

Think of AI memory like a notebook the model can write in and read from. Here's the key difference with Ritual:
➡️ The notebook is not hidden on a company's server.
➡️ The notebook is shared, verifiable, and decentralized.

Meaning:
✔️ The memory can't be altered secretly
✔️ Anyone can audit how the AI learned
✔️ Updates are transparent
✔️ The learning trail is visible

This is what makes memory trustworthy.

Real-Life Example:
Imagine you use an AI wallet assistant.
Day 1: it learns your spending habits
Day 5: it learns your risk level
Day 20: it understands which alerts matter to you

But only if it has a safe, verifiable memory. Now imagine the opposite: if a Web2 company stores that memory, it can modify it, delete it, or sell it. You'd never know.

With Ritual:
✔️ The AI's memory becomes transparent
✔️ You can see what it knows
✔️ You can verify nothing shady is happening
✔️ Developers can build on top of the same memory layer

This makes AI feel more loyal, more predictable, more human.

3. Why Memory Must Be Verifiable

AI memory today lives on black-box servers. Nobody can check whether:
the data was changed
the model was retrained unfairly
biases were added
information was removed
logs were manipulated

If AI is going to make financial, political, and personal decisions, memory can't be hidden.

Ritual makes memory:
✔️ auditable
✔️ transparent
✔️ safe from manipulation
✔️ decentralized

This is essential if AI is going to be trusted long-term.

⚙️ The Big Point:
AI without memory = a parrot
AI with private memory = a liability
AI with open, verifiable memory = a trustworthy system

Ritual is building the third one. And that's why the memory layer matters more than people think.

@ritualnet @ritualfnd @Ritual_IN @joshsimenhoff @Jez_Cryptoz @dunken9718 @cryptooflashh @0xMadScientist @Kash_060
#Gritual #AIMemoryLayer #ModularAI #DecentralizedIntelligence #RitualNetwork
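The "can't be altered secretly / learning trail is visible" property described above can be illustrated with a toy append-only, hash-chained log. This is a generic sketch of auditable memory, not Ritual's actual design, and the class and method names are hypothetical: each entry commits to the previous one, so silently editing or deleting history breaks verification.

```python
import hashlib
import json

class AuditableMemory:
    """A toy append-only, hash-chained memory log.

    Every entry's hash covers both the record and the previous entry's
    hash, so any tampering with past records is detectable by replaying
    the chain -- a minimal stand-in for verifiable, auditable AI memory.
    """
    def __init__(self):
        self.entries = []  # list of (record_json, chained_hash)

    def append(self, record: dict) -> str:
        """Add a record; returns its chained hash."""
        prev = self.entries[-1][1] if self.entries else "genesis"
        blob = json.dumps(record, sort_keys=True)  # canonical serialization
        h = hashlib.sha256((prev + blob).encode()).hexdigest()
        self.entries.append((blob, h))
        return h

    def verify(self) -> bool:
        """Recompute the whole chain; False if any entry was altered."""
        prev = "genesis"
        for blob, h in self.entries:
            if hashlib.sha256((prev + blob).encode()).hexdigest() != h:
                return False
            prev = h
        return True
```

In the wallet-assistant example, each day's learned fact would be appended as one entry; anyone holding the log can rerun `verify()` to confirm the learning trail was never quietly rewritten.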
