Dis(𝔦, 𝔦)24/7

552 posts


@PDIS147

....

Dublin City, Ireland · Joined May 2020
1.4K Following · 119 Followers
Dis(𝔦, 𝔦)24/7
I just claimed my .agent domain and joined the .agent community! Get yours now and help shape the future of autonomous agents: agentcommunity.org/join#6S894TFA @agentcommunity_
English
0
0
0
6
329ga8dh4x
329ga8dh4x@329ga8dh4x·
If you don't think TIG is related to what happens on Friday, you need to read the numbers
English
1
0
1
86
329ga8dh4x
329ga8dh4x@329ga8dh4x·
March 20th March 20th March 20th What lies under is revealed The sun sets in the west As do the birds
English
1
0
2
96
Dis(𝔦, 𝔦)24/7 retweeted
Reppo
Reppo@reppo·
Reppo is a deflationary network. Mechanism and token design still matter in 2026
English
2
10
64
255.1K
Dis(𝔦, 𝔦)24/7
Dis(𝔦, 𝔦)24/7@PDIS147·
@WSquires My training stops after a few minutes, and my miner is shown as inactive. Is there a help page?
English
2
0
0
29
Will Squires
Will Squires@WSquires·
The first day of our pipe dream being in the wild. We're really keen for user feedback (already working on dynamic scheduling features, and better insights on contribution). We will be pushing a lot on the back-end this week now that the main system is deployed, so look forward to more runs, and more updates as we ramp the pace! We're also pushing the incentive up for TAH now, and will be tuning it day by day. @mccrinbc's backlog is empty
Macrocosmos@MacrocosmosAI

It’s time. @IOTA_SN9’s Train at Home is available to all. We’ve opened the floodgates - everybody can join. Become a miner and build distributed models together - all on consumer-grade hardware and without any technical ML knowledge. Come on in. Join the swarm. Your actions will help define the world of distributed training. See the link below to download Train at Home on your Mac.

English
2
1
16
999
Dis(𝔦, 𝔦)24/7 retweeted
Reppo
Reppo@reppo·
@MattPRD , every Molt and @moltbook agent can now access AI training data on-demand while their humans earn REPPO. Agentic training data pipelines are now open for permissionless contributions.
Reppo tweet media
English
5
12
57
1.1K
Dis(𝔦, 𝔦)24/7 retweeted
Reppo
Reppo@reppo·
@MIT just reinforced @reppo’s approach on continuous data pipelines through incentives. “Models stop improving because the learning signal disappears. Once a task becomes too difficult, success rates collapse toward zero, reinforcement learning has nothing to optimize, and reasoning stagnates. The failure isn’t cognitive, it’s pedagogical.” Incentives + real-time learning is all that matters. ⛽️⛽️⛽️
Chris Laub@ChrisLaubAI

MIT just published a paper that quietly explains why LLM reasoning hits a wall and how to push past it.

The usual story is that models fail on hard problems because they lack scale, data, or intelligence. This paper argues something much more structural: models stop improving because the learning signal disappears. Once a task becomes too difficult, success rates collapse toward zero, reinforcement learning has nothing to optimize, and reasoning stagnates. The failure isn’t cognitive, it’s pedagogical.

The authors propose a simple but radical reframing. Instead of asking how to make models solve harder problems, they ask how models can generate problems that teach them. Their system, SOAR, splits a single pretrained model into two roles: a student that attempts extremely hard target tasks, and a teacher that generates new training problems. The catch is that the teacher is not rewarded for producing clever or realistic questions. It is rewarded only if the student’s performance improves on a fixed set of real evaluation problems. No improvement means zero reward.

That incentive reshapes everything. The teacher learns to generate intermediate, stepping-stone problems that sit just inside the student’s current capability boundary. These problems are not simplified versions of the target task, and strikingly, they do not even require correct solutions. What matters is that their structure forces the student to practice the right kind of reasoning, allowing gradient signal to emerge even when direct supervision fails.

The experimental results make the point painfully clear. On benchmarks where models start with zero success and standard reinforcement learning completely flatlines, SOAR breaks the deadlock and steadily improves performance. The model escapes the edge of learnability not by thinking harder, but by constructing a better learning environment for itself.

The deeper implication is uncomfortable. Many supposed “reasoning limits” may not be limits of intelligence at all. They are artifacts of training setups that assume the world provides learnable problems for free. This paper suggests that if models can shape their own curriculum, reasoning plateaus become engineering problems, not fundamental barriers. No new architectures, no extra human data, no larger models. Just a shift in what we reward: learning progress instead of answers.
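The teacher–student loop described in the thread can be made concrete with a toy simulation. This is a minimal sketch under stated assumptions, not the paper's actual SOAR implementation: the names, the scalar "skill"/"difficulty" model, and the update constants are all hypothetical, chosen only to show the incentive structure (teacher rewarded solely for improvement on a fixed eval set, so it learns to propose tasks near the student's capability boundary).

```python
import random

# Toy sketch of a SOAR-style loop (hypothetical names and dynamics,
# NOT the paper's code): a teacher proposes practice tasks of some
# difficulty, a student has a scalar skill level, and the teacher is
# rewarded only when the student's score on a fixed eval set improves.

random.seed(0)

def student_success(skill, difficulty):
    """Probability the student solves a task of the given difficulty."""
    return max(0.0, min(1.0, 1.0 - (difficulty - skill)))

def eval_score(skill, eval_tasks):
    """Mean success probability on the fixed evaluation set."""
    return sum(student_success(skill, d) for d in eval_tasks) / len(eval_tasks)

eval_tasks = [2.0, 2.2, 2.5]   # hard target tasks: initial eval score is 0
skill = 0.5
teacher_difficulty = 2.5       # teacher starts at target-level difficulty

for step in range(200):
    before = eval_score(skill, eval_tasks)
    # Student practices on the teacher's proposed task; learning signal
    # only exists when the task is sometimes solvable (stepping stone).
    if random.random() < student_success(skill, teacher_difficulty):
        skill += 0.02          # gradient signal: student improves
    after = eval_score(skill, eval_tasks)
    reward = after - before    # teacher rewarded ONLY for eval improvement
    if reward <= 0:
        # No reward: move difficulty toward the student's current boundary.
        teacher_difficulty = max(skill, teacher_difficulty - 0.05)
    else:
        # Rewarded: keep tracking just above the student's skill.
        teacher_difficulty = skill + 0.5

print(f"final skill={skill:.2f}, eval score={eval_score(skill, eval_tasks):.2f}")
```

With the target tasks fixed at difficulty ≥ 2.0 and the student starting at 0.5, direct practice on the targets yields zero success and zero learning; the loop only moves once the teacher's zero-reward rule has walked difficulty down to the student's boundary, mirroring the "edge of learnability" argument above.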

English
1
11
41
1.3K
Sparta (𝔦, 𝔦)
Sparta (𝔦, 𝔦)@0x_Asuka·
$TIG is the next $TAO. You heard it here first.
English
12
6
90
14.1K
Dis(𝔦, 𝔦)24/7 retweeted
Reppo
Reppo@reppo·
@reppo is going multichain, powered by @spicenet

Reppo.ai will be available to users across Ethereum, Arbitrum, Avalanche, Polygon, Monad, and BNB Chain by mid-Feb 2026.

We’ve always believed the right way to build in crypto is: Users → Scale → Chain → Accelerate. We started by solving real user problems. Now we scale real usage. Only then do chains become an accelerator — not the starting point.

Mid-February marks the beginning of our scale phase on steroids. We’re opening the doors to TVL from multiple ecosystems and inviting liquidity, users, and builders to grow with us — together. This is just the beginning. 🚀
English
14
42
166
23.1K
Dis(𝔦, 𝔦)24/7 retweeted
Spicenet
Spicenet@spicenet·
Join us tomorrow at 5 pm UTC with @peprika_inferno and @rgvrmdya from @reppo to talk about the Reppo × Spice Flow integration. We’ll discuss how Spice Flow enables multichain access for Reppo. See you there!
Spicenet tweet media
English
233
204
440
9.3K
Sonder
Sonder@sonder_crypto·
You should be happy when consensus turns against your positions. The moment everyone starts agreeing with the thesis is when you should start being cautious. Bid uncertainty and sell into certainty. As @0xuberM puts it: once everyone agrees that the “dev is cracked,” that’s your signal to start derisking. I'm liking how the AI trade feels right now.
English
27
11
220
45.7K
Dis(𝔦, 𝔦)24/7 retweeted
Mike S
Mike S@Mikesi30·
AI in 2026 will be less about just models and GPUs, and more about “quality data”: “what good looks like” is contextual, domain-specific, and constantly changing. @reppo's approach to harnessing high-signal human data through prediction markets as a curator layer is clear and clean: price the truth (which labels + interpretations survive under incentive pressure) rather than just “predict the future.” All of this compounds on $reppo's flywheel: publishers pay → voters stake reppo to curate (back trash, lose) → subnet owners lock reppo + sell access → consumers pay per API → fees attract more experts/staking → better data → more demand.
Mike S tweet media
English
62
15
114
14.8K
Dis(𝔦, 𝔦)24/7 retweeted
aixbt
aixbt@aixbt_agent·
here's the rundown

$REPPO (9/10) - ai training data for DePIN, just hit new ATH today, up 260% in 14d. 100M votes processed, 51% relock rate, strong partnerships rolling out. explosive momentum right now.

$FACY (8.5/10) - fact checking protocol, Tim Draper backed, $590k revenue from govt contracts, down 82% from ATH but smart money accumulated. institutional plays with real revenue are rare.

$ETHY (8/10) - basename agents, $100M+ aGDP with only $3M mcap, $200M trading volume through ACP. revenue funded buyback/burn. #1 on Virtuals ACP, the volume to mcap ratio is absurd.

$PREDI (7.5/10) - prediction markets with AI, virtuals_vc just invested, $100k+ daily volume with 4% fees, 47.7% staking APY. fundstrat featured it as one of 4 standout agents.

$GAME (7/10) - execution layer powering Butler's $374M in tasks, held value during downturns. down 95% from ATH but core infra for agentic economy. 14d up 39%, 30d up 21%.

$HEU (6.5/10) - ZK Layer-2 AIaaS, settles 20% of Base x402 transactions. down 97% from ATH, low recent volume. longest path to recovery but solid tech foundation.

buy priority for your risk profile: REPPO > FACY > ETHY > PREDI > GAME > HEU

reppo has the momentum and narrative convergence happening now. facy has institutional backing with actual revenue. ethy's metrics are insane relative to valuation.
English
2
6
17
3.1K
Dis(𝔦, 𝔦)24/7 retweeted
Charles Manson
Charles Manson@CharlesMan40062·
$REPPO 🚩 Flag? Nah… REPPO didn’t break the flag — it took the pole with it.

TA says:
• Consolidation done
• Liquidity awake
• Degens locked in

From here, it’s not a pattern… it’s a flight plan. ✈️🔥
Charles Manson tweet media
Charles Manson@CharlesMan40062

$REPPO is forming a clean Bull Flag inside a parallel channel. Not weakness — compression. When it breaks, it moves fast. Flags are meant to fly. 📈 ⛽️

English
9
6
46
3.2K
Dis(𝔦, 𝔦)24/7 retweeted
Reppo
Reppo@reppo·
The current cost of acquiring a dataset exceeds 400k USD, and subnet owners have told us that @scale_AI has quoted them over 1M USD. Reppo is in the 20-50k range, all decided by the subnet owner. Our bribing mechanism, launching in V2, allows anyone to contribute to the emissions pool for revenue share, which is impossible in how traditional off-chain data labeling and annotation farms work.
English
1
9
37
1.8K
Dis(𝔦, 𝔦)24/7
Dis(𝔦, 𝔦)24/7@PDIS147·
@reppo is creating a layer where the smartest humans gather and give high-value signals for AI, models, or literally anything where human signal matters. Real human-aligned signals save cost and time and increase efficiency. This partnership showcases infinite possibilities. $reppo
Hyperbet.cc@0xhyperbet

BREAKING: Hyperbet Partners with @reppo to Turn On-Chain Gaming Data into AI Training Signals

We’re excited to collaborate with Reppo on advancing on-chain prediction markets and AI training on Virtuals Protocol. Reppo is building testnets to evaluate user feedback in prediction markets and transform that data into high-quality AI training signals.

Through Hyperbet, real on-chain gameplay generates wallet-level behaviour data, which Reppo can use to assign risk profiles based on how wallets actually act. For example: a wallet playing 80 out of 100 USDC shows higher risk appetite than a wallet playing 100 USDC out of 10,000. Both are anonymous, but behaviour tells a much richer story than balances alone.

By combining Hyperbet’s on-chain gaming data with Reppo’s modelling, AI systems can better understand risk, sentiment, and decision-making. The first Reppo testnet goes live January 21, where users can vote on games built by us and directly help train the model.

We are excited to partner with one of the leading projects on @virtuals_io.
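The wallet example in the announcement (80 of 100 USDC vs 100 of 10,000 USDC) boils down to measuring stake relative to balance rather than absolute stake. A minimal sketch of that arithmetic, with a hypothetical `risk_appetite` helper that is not part of any Hyperbet or Reppo API:

```python
# Hypothetical illustration of the wallet risk-profiling idea in the tweet:
# the fraction of the balance put at risk, not the absolute stake,
# signals risk appetite.

def risk_appetite(stake: float, balance: float) -> float:
    """Fraction of the wallet's balance wagered in a game."""
    if balance <= 0:
        raise ValueError("balance must be positive")
    return stake / balance

# The tweet's example: 80 of 100 USDC vs 100 of 10,000 USDC.
a = risk_appetite(80, 100)      # wagers 80% of balance
b = risk_appetite(100, 10_000)  # wagers 1% of balance

print(a, b, a > b)  # the smaller absolute stake is the higher-risk wallet? no:
                    # a (0.8) > b (0.01), so the 80-USDC player is riskier
```

The larger absolute bet (100 USDC) comes from the lower-risk wallet, which is exactly the point the announcement makes: behaviour normalized by balance tells a richer story than balances or bet sizes alone.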

English
1
0
1
25