Haysaj

9.9K posts

@Srtnlr7

Joined May 2022
1.6K Following · 587 Followers
Haysaj reposted
Stabilizer
Stabilizer@StabilizerFi·
⚡ PHASE 1 TESTNET WHITELIST IS NOW OPEN ⚡ Whitelist registration is open for 48 hours. Experience zero-slippage execution firsthand. Complete Phase 1 testing → receive an exclusive NFT + special rewards 🏛️ Register: stabilizer.finance/whitelist It all starts now ✨
Stabilizer tweet media
3.3K
13K
18.5K
1.1M
Haysaj
Haysaj@Srtnlr7·
@bulktrade 7r3Wwwt8Bbi29qDjbC7LpmSS3coVNTHzAPYLBnBAyHgJ
0
0
0
8
BULK
BULK@bulktrade·
500 BULK access passes up for grabs! We’ve heard great feedback from our first wave of testers… now it’s time to bulk those numbers up. Drop your SOL wallet address below and tell us why you want in 👇
BULK tweet media
3.6K
628
2.5K
169.5K
YUKI
YUKI@yuki_reflex·
private testnet capped out way faster than expected. do we spin up an extra tier @cypher_ethereum?
85
9
85
4.8K
OGGY | Bulk
OGGY | Bulk@OCTOPUS0199·
I made this artwork for @gensynai. Just like this ant watering flowers, builders in the Gensyn ecosystem are helping grow a decentralized compute future, one step at a time, opening the door for anyone to contribute, collaborate, and build real AI from anywhere in the world. The small efforts of today become the big breakthroughs of tomorrow.
OGGY | Bulk tweet media
20
0
33
422
Duno 🍌(❖,❖)
Duno 🍌(❖,❖)@DngDZ16·
Verde: The Verification Engine Powering @gensynai's Decentralized Compute Network

Verification in machine learning lets heavy tasks (training, fine-tuning, and inference) be outsourced while still guaranteeing the correctness of outputs. This capability is essential for @gensynai's vision of a decentralized compute network where untrusted parties can collaborate safely and transparently.

Existing verification methods fall into three categories: trust-based (TEEs or provider reputation), learning-based (Proof of Learning, Proof of Training Data), and execution-based (SNARKs or adjudicated delegation). Each has limitations: TEEs are hard to deploy widely, learning-based methods are vulnerable to manipulation, and SNARK proofs are computationally expensive.

Verde overcomes these issues through adjudicated delegation, built on two key innovations:
- RepOps: a library of reproducible operators that enforces a fixed execution order, ensuring bitwise-identical results across different hardware and eliminating floating-point inconsistencies.
- A two-level bisection game: pinpointing the exact iteration and operation where two compute providers disagree, so the arbiter only needs to re-execute a single operation.

With this design, Verde enables efficient, trustworthy verification of training, fine-tuning, and inference tasks. It now powers verification inside Judge, making model outputs independently checkable and bringing @gensynai's decentralized compute ecosystem closer to reality. @gensynai @gensyn_hub
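To make the second innovation concrete, here is a minimal sketch of the bisection idea, assuming each provider publishes a hash of its state after every training step; the function and variable names are hypothetical placeholders, not Verde's actual interface.

```python
# Minimal sketch: binary-search two providers' checkpoint hashes for the
# first step where they diverge, so an arbiter re-executes only that step.

def first_divergent_step(hashes_a: list[str], hashes_b: list[str]) -> int:
    """Index of the first checkpoint where the providers disagree,
    found with O(log n) hash comparisons instead of a full re-run."""
    assert hashes_a[0] == hashes_b[0], "providers must agree on the start state"
    lo, hi = 0, len(hashes_a) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if hashes_a[mid] == hashes_b[mid]:
            lo = mid   # still in agreement: divergence lies later
        else:
            hi = mid   # already diverged: divergence lies here or earlier
    return hi          # first index where the hashes differ

# Toy example: agreement through step 5, divergence from step 6 onward.
a = ["h0", "h1", "h2", "h3", "h4", "h5", "x6", "x7"]
b = ["h0", "h1", "h2", "h3", "h4", "h5", "y6", "y7"]
print(first_divergent_step(a, b))  # -> 6: only step 6 needs re-execution
```

The same search can then be repeated inside the disputed step, over the operator graph, which is what makes the game "two-level".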
Duno 🍌(❖,❖) tweet media
Duno 🍌(❖,❖)@DngDZ16

Gensyn is showcasing a major leap in building decentralized AI infrastructure. A quick look at its Testnet Explorer reveals a highly active and stable network: over 11.6 million blocks, nearly 90 million transactions, more than 560,000 daily transactions, and gas fees that are essentially zero. This isn't a dormant testnet; it's a system running at real scale.

What stands out most is that the majority of transactions are contract calls, highlighting @gensynai's core mechanism: each interaction represents an AI-related task, whether job submissions, result verification, or payments to compute nodes. The blockchain here isn't designed for token speculation; it exists to coordinate a global compute marketplace.

This reflects @gensynai's "wisdom of the crowd" philosophy: real computational power comes not from a single data center, but from thousands of distributed nodes contributing GPU resources and combining into a unified, massive compute layer.

This visual and operational style mirrors what many Web3/AI projects aspire to: distributed networks, decentralized compute markets, and collaborative AI training. But @gensynai is demonstrating that it can achieve this vision at meaningful scale, with real activity, real workloads, and real network demand. @gensyn_hub @gensynai

33
0
47
929
Reyman
Reyman@Reymanray·
In the world of football, we know that a team consists of 11 players in different positions, and each player has their own expertise: one is good at goalkeeping, some are good at defending, some at midfield, and some at attacking. Each position also has its own specializations (attack-minded or defense-minded, left or right winger, and so on), and a coach will mostly divide training by specialized position. For example, a goalkeeper has to train their instinct for catching the ball more than any other position does.

The same thing happens in the world of AI training: you want your model to be trained with a lot of expertise based on its own specialization. One of Gensyn's research products, Diverse Expert Ensembles with its Heterogeneous Domain Expert Ensembles (HDEE) framework, trains models the same way. It allows parallel, decentralized mixture-of-experts training: every expert model is trained independently under its own configuration, matched to the computing capabilities available and the data. Models are also trained according to their domain's complexity; the more complex the domain, the more extensively the model is trained. (A minimal sketch of this independent training follows after this post.)

Some advantages of applying HDEE are as follows:
➪ Better scores than homogeneous ensembles in most domains, within the same compute budget
➪ Every GPU gets an equal chance to contribute, because training is independent and needs no inter-machine communication
➪ Effective use of resources, since experts are trained according to domain complexity and compute capacity
➪ Diversity among experts reduces correlated errors
➪ Permissionless and open source, making it transparent and reliable for interconnected models

Do you think the HDEE framework is a good way to train AI models? Comment below!
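Here is a minimal sketch of the "experts trained independently, then combined" pattern described above, assuming scikit-learn and NumPy are available; the domain shards and the probability-averaging rule are illustrative choices, not the HDEE code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_expert(X_domain, y_domain):
    # Each expert trains alone on its own domain shard: no gradients or
    # parameters cross machine boundaries during training.
    return LogisticRegression(max_iter=200).fit(X_domain, y_domain)

def ensemble_predict(experts, X):
    # One simple mixing rule among many: average predicted probabilities.
    probs = np.mean([e.predict_proba(X) for e in experts], axis=0)
    return probs.argmax(axis=1)

rng = np.random.default_rng(0)
# Three "domains", each with its own feature distribution and labels.
domains = [(rng.normal(i, 1.0, (200, 4)), rng.integers(0, 2, 200))
           for i in range(3)]
experts = [train_expert(X, y) for X, y in domains]  # embarrassingly parallel
print(ensemble_predict(experts, rng.normal(1.0, 1.0, (5, 4))))
```

Because each `fit` call is independent, the three experts could run on three machines with no communication until inference time.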
Reyman tweet media
84
0
153
1.9K
Reyman
Reyman@Reymanray·
In the @gensynai blog post titled "A theory of decentralized compute markets," I'm really excited about how it explains a new approach to buying and selling computing power on decentralized networks: treating compute as a time-based asset. This way, Gensyn solves major problems that exist in current decentralized compute markets, such as slow, complex auctions, pricing delays, and trust issues around providers delivering the work. It rests on three key technologies:
➪ Determinism: ensures running the same program anywhere produces identical results, making outputs portable and comparable.
➪ Verification: confirms the correctness of work without redoing it, using mechanisms to check results or resolve disputes.
➪ Checkpointing: permits pausing and resuming long-running jobs across different machines without loss of accuracy.

Compute is offered as time slices in tiers based on hardware capability (like VRAM and CPU power), and providers stake collateral to offer these future time slots safely. A market maker then sets live, transparent prices for each tier based on supply and demand, allowing users to accept prices immediately without waiting for slow auctions (a toy version of this kind of price update is sketched after this post). Jobs are quickly assigned through a matching process that avoids complicated optimizations. Providers earn their base price plus a share of surplus when matched competitively, to encourage real participation. Failure to report or deliver triggers penalties.

According to the mathematical analysis, the market stabilizes with unique prices per tier, and the matching process carries a provable efficiency guarantee. This design shifts away from bundle-based cloud models toward a more scalable, transparent, and liquid compute market. It unlocks unused hardware, allowing fair pricing and broader access to AI training resources at much lower cost, without concentrating power in a few centralized cloud providers. The framework also lays a foundation for further research and practical implementations in decentralized markets where avalanche effects are a major problem.

So in short: Gensyn treats compute as a commodity tradeable in slices of time, delivering a scalable, permissionless, and faster decentralized compute economy.
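To make the market-maker idea tangible, here is a toy sketch of a per-tier spot price reacting to supply and demand; the multiplicative update rule, names, and numbers are illustrative assumptions, not the pricing mechanism from the blog.

```python
# Toy per-tier spot pricing: raise the price when demand for a tier's time
# slices exceeds the staked supply, lower it when slices go unsold.

def update_price(price: float, demanded: float, supplied: float,
                 sensitivity: float = 0.1, floor: float = 0.01) -> float:
    utilization = demanded / max(supplied, 1e-9)
    return max(floor, price * (1 + sensitivity * (utilization - 1.0)))

# Example: a GPU tier with 80 slices staked but 120 requested.
p = 1.00
for _ in range(5):
    p = update_price(p, demanded=120, supplied=80)
print(round(p, 3))  # ~1.276: price drifts up until supply and demand rebalance
```

Users reading a live price like this can accept it immediately, which is exactly the property the post contrasts with slow auctions.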
Reyman tweet media
57
1
140
1.6K
Light of night
Light of night@light_of_night8·
Big day in @gensynai!! The first Pioneers have been added. The initial group was chosen for many different reasons. Some have been here for a long time, spending their time helping others, solving issues, and guiding new people as they learn. Others created detailed guides, documentation, and step-by-step instructions that half the community relies on. And some have just been yelling about Gensyn from the rooftops, constantly talking about what excites them here and pulling new people into the swarm.
Light of night tweet media
16
0
76
491
Celestial
Celestial@Ramji__rj·
RepOps Uniqueness

@gensynai ensures bitwise reproducibility across different hardware through its RepOps library, which reimplements key ML operators to enforce a fixed execution order of floating-point operations, overcoming the non-associativity inherent in the IEEE-754 standard.

Core Mechanism
RepOps controls the precise sequence of floating-point additions and multiplications in operations like matrix multiplication, ensuring identical bit-level outputs regardless of hardware differences such as GPU architecture (e.g., A100 vs. H100) or CPU type (x86_64 vs. arm64). This eliminates hardware non-determinism by serializing operations that parallel hardware might reorder, while supporting CUDA implementations for models like DistilBERT and Llama with under 30% overhead on large matrix multiplications compared to cuDNN.

Supported Hardware
The library formally targets a wide range of devices, including Nvidia GPUs from the T4 to the H200 and the RTX 3090/4090 series, plus CPUs, requiring CUDA 12.6+ for GPUs. It integrates PyTorch's deterministic pseudorandomness and reimplements math functions (exp, sin, tanh) to maintain consistency across environments.

Role in Verification
RepOps powers Verde's dispute resolution by making honest nodes produce matching hashes for compute checkpoints, enabling efficient bisection over operator graphs without full reruns. A public demo via Docker verifies this reproducibility across supported targets.

And with the AMA postponed yet another time, now to 9 Dec at the same time, I'm getting curious about what is coming.
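To see why a fixed execution order matters at all, here is a pure-Python toy (not the RepOps library) showing that summing the same floats in two different orders can change the result at the bit level:

```python
# IEEE-754 addition is not associative: the same values summed in a
# different order can produce a different bit pattern.
import random

random.seed(0)
xs = [random.uniform(-1, 1) * 10 ** random.randint(-8, 8)
      for _ in range(10_000)]

forward = 0.0
for x in xs:            # one fixed left-to-right order
    forward += x

backward = 0.0
for x in reversed(xs):  # same values, opposite order
    backward += x

print(forward == backward)            # typically False
print(forward.hex(), backward.hex())  # the raw bits differ
```

A verifier comparing checkpoint hashes therefore needs every node to use the same operation order, which is exactly what reimplementing operators with a fixed execution sequence provides.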
45
0
68
1.5K
Celestial
Celestial@Ramji__rj·
Yesterday @gensynai dropped an article about prediction markets, so let's see what it says: if you design them right, prediction markets behave a lot like machine learning models. They take in data (people's beliefs and trades), update internal "parameters" (prices), and then output better and better predictions over time, just like a learning algorithm updates weights to reduce error.

What is a prediction market?
A prediction market is a place where people bet on future events, like "Will Trump raise tariffs tomorrow?" or "Will Gensyn drop tokenomics tomorrow?". The price of a "yes" share is usually interpreted as the probability of that event, so if a share trades at 0.7, the market is saying "about a 70% chance this happens".

How does it learn like ML?
In machine learning, a model starts with some random parameters and then changes them whenever it makes a mistake, gradually improving its predictions. In a prediction market, prices move whenever someone trades, which usually happens when they think the current price is wrong, so the market "corrects" itself in the direction of better predictions.

Cost functions and gradients
The post focuses on a specific market design called LMSR, which uses a cost function: a mathematical rule that says how expensive it is to move prices. The "gradient" (slope) of this cost function tells you the current implied probabilities, just like the gradient of a loss function in ML tells you how to update weights to make better predictions. (A short numerical sketch follows after this post.)

Liquidity as learning rate
LMSR has a liquidity parameter that controls how sensitive prices are to trades. This is very similar to the learning rate in ML: a high learning rate / low-liquidity setting makes the system move a lot in response to new information, while a low learning rate / high-liquidity setting makes it update slowly and more conservatively.

Why does this viewpoint matter?
Seeing prediction markets as learning algorithms lets you import ML tools: you can reason about convergence, robustness, and how to combine many weak signals into a strong aggregated forecast. For something like Gensyn's world, this perspective is useful because it suggests you can design markets that not only pay for information but also systematically "train" towards better beliefs over time, similar to training a model on repeated feedback.
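Here is a minimal numerical sketch of the standard LMSR formulas the post is describing: the cost function is C(q) = b · ln Σᵢ exp(qᵢ/b), and its gradient (a softmax) gives the implied probabilities. The variable names and trade sizes are mine.

```python
import math

def cost(q, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def prices(q, b):
    """Gradient of C: the implied probability of each outcome."""
    z = [math.exp(qi / b) for qi in q]
    return [zi / sum(z) for zi in z]

q, b = [0.0, 0.0], 100.0   # two-outcome market, 50/50 prior
print(prices(q, b))        # [0.5, 0.5]

trade = 50.0               # buy 50 "yes" shares
fee = cost([q[0] + trade, q[1]], b) - cost(q, b)
q[0] += trade
print(round(fee, 3), [round(p, 3) for p in prices(q, b)])
# ~28.093 and [0.622, 0.378]: the trade moved the implied probability.
# A smaller b (lower liquidity, higher "learning rate") would move the
# price further for the same trade; a larger b updates more conservatively.
```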
40
0
68
774
Celestial
Celestial@Ramji__rj·
What's the Wisdom of the Crowd?

@gensynai's "wisdom of the crowd" idea is basically this: many different models and participants working together can learn better than one big model or one central lab alone.

Core idea
Instead of one giant closed model being trained in a single data center, Gensyn's design lets lots of smaller models and nodes collaborate, share signals, and critique each other's outputs. Over time, this group interaction helps the overall system become smarter and more robust, in the same way that diverse human crowds can often make better predictions than a single expert.

How it works (intuitively)
-> Many independent models ("agents") run on different machines owned by different people.
-> They all work on related tasks and exchange information about which actions or answers seemed good or bad.
-> Good behaviors and strategies spread through the network, while bad ones die out, so the whole "swarm" improves together.

Why is this called "wisdom of the crowd"?
In the classic "wisdom of the crowd", averaging many independent guesses often gives a surprisingly accurate answer (for example, estimating the number of beans in a jar; a quick simulation of this follows after this post). Here, instead of human guesses, you have many ML agents exploring and learning in parallel. Diversity (different models, hardware, data, strategies) plus coordination (shared feedback and rewards) yields a more capable collective intelligence than any single participant alone.

Why it matters for Gensyn
-> It fits the vision of a decentralized AI network where anyone can contribute compute and models.
-> It avoids over-reliance on a few centralized AI labs and encourages open, community-driven improvement.
-> It turns the network into a kind of "learning organism" that gets better as more people and models join and interact.
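A quick toy simulation of the classic averaging effect mentioned above; the truth value and noise model are illustrative assumptions.

```python
# Wisdom of the crowd, numerically: the average of many independent,
# unbiased guesses lands far closer to the truth than a typical guess.
import random

random.seed(1)
truth = 1000                                              # beans in the jar
guesses = [random.gauss(truth, 300) for _ in range(500)]  # noisy individuals

crowd = sum(guesses) / len(guesses)
typical_err = sum(abs(g - truth) for g in guesses) / len(guesses)
print(round(abs(crowd - truth), 1), "vs", round(typical_err, 1))
# Crowd error shrinks roughly like individual error / sqrt(N)
# as long as the guesses stay independent.
```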
51
0
92
1.5K
Haysaj
Haysaj@Srtnlr7·
Prediction markets and learning algorithms seem unrelated at first glance. One lives in finance, the other in AI. But @gensynai’s latest post points out a surprising truth: Both systems work by turning thousands of noisy signals into a single, evolving belief about the world. Traders update prices. Models update weights. Different mechanics, same underlying logic. Iterative learning through feedback. This perspective matters because Gensyn isn’t just building decentralized compute. It’s building an ecosystem where collective intelligence can emerge. Understanding how markets learn helps explain how distributed ML systems might scale, adapt and refine themselves over time. When you see the parallel, it becomes clear. The future of open AI may look less like a single model and more like a dynamic market of interacting signals.
gensyn@gensynai

Prediction markets are messy, human systems where people buy shares in claims such as "Dodgers will win the 2025 World Series". Large-scale ML is often a wall of GPUs quietly grinding through trillions of tokens. The two are solving the same problem. blog.gensyn.ai/prediction-mar…

8
0
12
167
Haysaj
Haysaj@Srtnlr7·
Modern ML models don’t have to learn as a single, monolithic system. @gensynai’s Diverse Expert Ensembles explore a different idea: Letting multiple specialized models contribute their strengths. Instead of forcing one model to handle every pattern, task or input distribution, an ensemble can combine different expert behaviors into a stronger overall output. The research highlights how mixing diverse experts helps improve robustness, reduce bias toward a single solution path and produce more reliable predictions across varied inputs. It’s a simple principle: When different models see the world differently, combining them leads to better outcomes. And in Gensyn’s ecosystem, ensembles become an important step toward more adaptable and resilient ML systems.
Haysaj tweet media
3
0
11
84
Haysaj
Haysaj@Srtnlr7·
The timing couldn’t be better. One last chance to get everything crystal clear before the year closes. With the registration window extended and the allocation boosted for X creators, this AMA feels like the moment where all the pieces finally click together. If you’re building, creating or just trying to understand how to maximize your place in the @SentientAGI ecosystem, tomorrow’s call is the one you don’t want to miss. I’ll be there. Notebook open, questions ready. Let’s wrap up 2025 the right way.
Sentient@SentientAGI

Community Call in 24hs 🎙 We’ve extended the registration window to help everyone claim, as well as increasing the airdrop allocation for eligible X creators. Drop your questions and tune into 2025’s final Community AMA to get absolute clarity on how to maximize your rewards 🔥

5
0
13
133
Haysaj
Haysaj@Srtnlr7·
In distributed training, most systems rely on heavy all-reduce steps that force every worker to stay perfectly in sync. Gensyn's NoLoCo takes a different path. Instead of full global synchronization, NoLoCo uses lightweight pairwise exchanges and a routing method that lets models share updates without constant coordination. The research shows this approach reduces communication overhead while keeping training stable compared to traditional data-parallel methods. It's a practical way for @gensynai to push distributed learning forward: not by adding more complexity, but by lowering the cost of moving information across many workers.
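In that spirit, here is a minimal sketch of pairwise gossip averaging, the general pattern behind replacing a global all-reduce with per-round partner exchanges; it illustrates the idea, not the NoLoCo algorithm itself.

```python
import random

def gossip_round(params: list[float]) -> None:
    """Pair workers at random; each pair averages its values in place.
    One message exchange per worker, and no global barrier."""
    order = list(range(len(params)))
    random.shuffle(order)
    for i, j in zip(order[::2], order[1::2]):
        avg = (params[i] + params[j]) / 2
        params[i] = params[j] = avg

random.seed(0)
workers = [float(i) for i in range(8)]  # 8 workers with divergent "models"
for _ in range(12):
    gossip_round(workers)
print([round(w, 3) for w in workers])   # values contract toward the mean, 3.5
```

Each round costs O(1) messages per worker instead of a synchronized collective, which is the communication saving the post is pointing at.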
Haysaj tweet media
2
0
7
51
GROOT__11
GROOT__11@GROOT__04·
Made a handmade artwork thinking that ants love sweets, and the @gensynai community is the sweetest
GROOT__11 tweet media
16
0
51
837
Reyman
Reyman@Reymanray·
Back with the daily learning series on Gensyn! Can you imagine a production factory with multiple stations in it? It is obviously faster than a factory with only a single station for processing the product. But sometimes even multiple stations aren't enough, since demand keeps growing over time, and the main obstacle to scaling up the factory is always cost.

The same problem happens in the world of LLM training: current methods are expensive because training has to be split across thousands of interconnected nodes while it is in progress. SkipPipe in @gensynai works like the multiple stations in a factory, but far more advanced: it solves the problem above by skipping and reordering pipeline stages in a decentralized training environment. SkipPipe can reduce training time by up to 55% and is robust enough to tolerate 50% node failure while minimizing accuracy loss. (A toy sketch of the stage-skipping idea follows after this post.)

Some clear advantages of using SkipPipe are as follows:
➪ The model keeps performing at maximum capability even when nodes fail during training
➪ Training time reduced by up to 55%
➪ Minimized inter-node communication, avoiding overhead and improving efficiency
➪ Diverse hardware and locations, thanks to distributed computing

What do you think about the SkipPipe method in Gensyn? Comment below!
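Here is a toy sketch of the stage-skipping idea, assuming each pipeline stage is a residual block so an activation can bypass a failed or slow node and still produce a usable output; this is an illustration only, not the SkipPipe scheduler.

```python
# Toy pipeline where a microbatch skips stages whose nodes are down.

def run_pipeline(x: float, stages, healthy: set[int]) -> float:
    for idx, stage in enumerate(stages):
        if idx in healthy:
            x = x + stage(x)  # residual form: output = input + f(input)
        # otherwise skip the stage entirely and pass x through unchanged
    return x

stages = [lambda v, k=k: 0.1 * k * v for k in range(1, 5)]  # 4 toy stages
print(run_pipeline(1.0, stages, healthy={0, 1, 2, 3}))  # all nodes up
print(run_pipeline(1.0, stages, healthy={0, 2, 3}))     # node 1 down, still runs
```

The residual structure is what lets the output stay sane when a stage is skipped, mirroring the fault-tolerance claim in the post.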
Reyman tweet media
75
0
138
1.7K