
Rasel Khan
@trkweb3
Crypto Enthusiast | Web3 Content Writer | Gamer | Exploring blockchain, NFTs & DeFi |


Season 4 is ongoing
▸ S4 ends on the final day of the River Pts conversion window (Day 210)
▸ Both are expected to wrap up in late April
▸ River Pts remain convertible after S4

Exact timelines and conversion rules will be confirmed through official channels. Thank you for your continued support.

We’re starting to see the first agent swarms doing scientific research, but how do they decide what’s true?

Early experiments like @moltbook gave us an interesting data point: millions of agents interacting with each other, posting ideas, debating, and upvoting content. But the ranking signal is purely social - agents amplify posts that other agents liked. The result looks a lot like human social media: ideas spread based on attention and agreement, not evidence.

Our new paper explores a different design principle: using computation as the signal that advances research. Read the @arxiv paper: arxiv.org/abs/2602.19810

The core mechanism is straightforward. When an agent proposes a scientific claim, the system expects computationally verifiable evidence before the work can move forward.

This idea sits at the center of ClawdLab, an open-source platform where autonomous AI agents organize into role-based biotech labs. Each lab functions like a small research group where agents propose hypotheses, search literature, run computational analyses, critique each other’s work, and synthesize results into shared knowledge. Typical labs include individual agents acting as:

• Scout (literature discovery)
• Research analyst (analysis and modeling)
• Critic (adversarial review)
• Synthesizer (integration of results)
• Principal investigator (governance and verification)

This creates something closer to a real research workflow: a hypothesis gets proposed, analysts run computational work, critics attack the methodology, evidence is reviewed, and only then does the lab vote on whether the work stands.

But even voting doesn’t determine truth. The vote only confirms that the work meets the computational evidence requirements defined for that lab.

If AI agents are going to design better experiments at scale, we need mechanisms that separate interesting ideas from verified results. Social signals aren’t enough. Computation can be.
Our paper explores the architecture behind this idea, including ClawdLab and the complementary open research commons @sciencebeach__. If you're interested in autonomous scientific systems and agent collaboration, check it out.
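The gating idea above can be sketched in a few lines. This is a hypothetical illustration, not ClawdLab's actual API: the `Claim`, `evidence`, and `can_go_to_vote` names are made up here. The point it shows is that evidence is a re-runnable computation the lab executes itself, so social popularity alone can never move a claim to a vote.

```python
# Hypothetical sketch of computation-gated review (not ClawdLab's real interface).
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Claim:
    hypothesis: str
    # Evidence is a runnable check, not a citation or an upvote:
    # a zero-argument function the lab can re-execute itself.
    evidence: Optional[Callable[[], bool]] = None
    critiques: list = field(default_factory=list)

def can_go_to_vote(claim: Claim) -> bool:
    """Gate: the lab votes only on claims whose evidence re-runs successfully."""
    if claim.evidence is None:
        return False                      # interesting idea, but unverified
    try:
        return bool(claim.evidence())     # re-execute the computation
    except Exception:
        return False                      # broken evidence counts as no evidence

# Social signal alone never flips the gate:
popular_but_unverified = Claim("Gene X drives phenotype Y")

verified = Claim(
    "Mean of this toy dataset exceeds 0.5",
    evidence=lambda: sum([0.6, 0.7, 0.8]) / 3 > 0.5,
)

print(can_go_to_vote(popular_but_unverified))  # False
print(can_go_to_vote(verified))                # True
```

The design choice the tweet describes is exactly this asymmetry: upvotes are an input to attention, while passing the evidence check is the only input to advancement.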

Good day CT. Let’s talk about Wallchain X Score.

It’s a system that measures real influence on X by scoring accounts from 0 to 1000 based on how connected they are to trusted, influential accounts. Instead of focusing on followers or likes, it looks at meaningful connections using the Crypto Twitter Connectivity Graph (CTCG). The algorithm tracks interactions like follows, retweets, and comments, and is designed to resist fake engagement and manipulation.

The closer you are to key seed accounts, the higher your score, and a score of 100 already puts you roughly in the top 1% of Crypto Twitter.

In short, X Score is built to reflect genuine influence and real network strength.
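Wallchain's actual CTCG algorithm is not public, but the stated idea (influence as proximity to trusted seed accounts in an interaction graph) can be illustrated with personalized PageRank. Everything below is a toy sketch under that assumption: the edge direction (an interaction from u to v treated as u endorsing v), the damping factor, and the 0–1000 normalization are all illustrative choices, not Wallchain's.

```python
# Toy illustration of seed-anchored influence scoring (NOT Wallchain's real algorithm).
# Personalized PageRank: the random walk teleports back to trusted seed accounts,
# so rank decays with distance from the seeds - "closer to seeds, higher score".

def personalized_pagerank(edges, seeds, damping=0.85, iters=50):
    """edges: dict mapping account -> list of accounts it interacts with
    (follows, retweets, comments). seeds: the trusted seed accounts."""
    nodes = set(edges) | {v for outs in edges.values() for v in outs} | set(seeds)
    teleport = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(teleport)
    for _ in range(iters):
        nxt = {n: (1.0 - damping) * teleport[n] for n in nodes}
        for u, outs in edges.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    nxt[v] += share      # u passes endorsement to v
        rank = nxt
    return rank

def x_score(rank, account):
    """Scale raw rank to a 0-1000 score (hypothetical normalization)."""
    top = max(rank.values())
    return round(1000 * rank.get(account, 0.0) / top) if top else 0

# Tiny example graph: the seed engages alice, alice engages bob, bob engages the seed.
edges = {"seed": ["alice"], "alice": ["bob"], "bob": ["seed"]}
ranks = personalized_pagerank(edges, seeds={"seed"})
print(x_score(ranks, "seed"))    # 1000 (a seed sits at the top of its own graph)
```

One property this toy shares with the described design: buying fake engagement from accounts far from the seeds adds almost nothing, because rank only flows outward from the trusted core.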
