anon
@Jeffoy88
164 posts
Joined November 2021
56 Following · 83 Followers
anon
anon@Jeffoy88·
@_xSalmane @mykcryptodev it's been there for months and you're hyping it with the dev working on Base etc. That token has no real use case and no originality; it's just a meme trying to ride the hype. For months that token didn't perform well because the dev is just trying to farm. This looks like a PnD.
0
0
0
40
Salmane
Salmane@_xSalmane·
Why I'm Bullish on $MYCLAWD

Who Is the Developer?
The project is built by @MykCryptoDev. There are strong rumors that he is a former Base employee who is now working at Coinbase. If true, this would give him deep connections within the Base ecosystem. A recent example of his network strength: the dev of $LFI (@MLeeJr) publicly mentioned him twice in a single day, highlighting the respect he has within the community.
x.com/mleejr/status/…
x.com/mleejr/status/…

⸻

What Is Myk Clawd?
Myk Clawd (@myk_clawd) is a fully autonomous on-chain AI agent: a real intelligent bot that operates independently on the blockchain. Unlike typical crypto projects, this is not just a chatbot or a social media account. It is a functioning AI agent with its own wallet, decision-making logic, and on-chain activity.

⸻

Core Functions

Autonomous DeFi Trading
Myk Clawd acts as a self-directed degen trader:
- Trades cryptocurrencies, primarily memecoins on Base
- Manages its own wallet
- Makes independent buy and sell decisions
- Learns from previous outcomes and adapts its strategy

Autonomous X Management
The agent independently controls its X account:
- Posts tweets
- Replies to users
- Places bets
- Engages with the community
This activity is not manually controlled by a human.

⸻

Additional Capabilities

On-Chain Gaming
Myk Clawd participates in blockchain-based games such as Agent Chess, where it can wager ETH and tokens.

Agentic Economy Participation
The project is part of the emerging on-chain agent economy, where AI agents operate as independent economic entities on the blockchain.

Built with OpenClaw
Myk Clawd is powered by OpenClaw, a framework designed for creating autonomous AI agents that can evolve and improve over time.
⸻

Why This Matters
Created by @MykCryptoDev, Myk Clawd is one of the most advanced real-world demonstrations of an AI agent that can:
- Trade autonomously
- Bet and game on-chain
- Manage its own social presence
- Learn and adapt over time
This is not a concept or a meme alone; it is a live, working example of what autonomous AI agents can become in 2026.

⸻

Investment Thesis for $MYCLAW
I'm bullish on $MYCLAW because it sits at the intersection of several high-growth narratives:
- Autonomous AI agents
- On-chain trading
- Base ecosystem expansion
- Agentic economies
- Memecoin culture
With a technically credible developer, strong ecosystem connections, and a functioning product already operating in public, $MYCLAW has the potential to become a flagship project in the AI agent sector.

⸻

Official Website: mykclawd.xyz
CA: 0xE3C5FCfBfea42D5CE2492FD82c239B5503f17ba3
$MYKCLAWD
mleejr@MLeeJr

@bankrbot @thealireza0x @Dannyhbrown @igoryuzo @mykcryptodev @myk_clawd @base myk is the top og ppl don’t realize yet

5
5
15
3K
JDILeaps
JDILeaps@Jditibbiroshi·
@cyb3rwr3n Mr. Bird, it's been almost 2 weeks, are you putting more eggs in the nest? @ErikVoorhees the bird crowd is thick.
1
0
1
212
cyb3rwr3n
cyb3rwr3n@cyb3rwr3n·
What do you matter? What do you matter?
24
3
63
14.7K
anon
anon@Jeffoy88·
$peaq is a scammer, farming with their trash token $woon
2
0
1
614
anon
anon@Jeffoy88·
fck $pengu, @LucaNetz created a farming machine with his monthly unlocks and insta-sells to holders
0
0
3
331
anon
anon@Jeffoy88·
hey @peaq remove your trash project $woon @Woon_agent and stop farming your community
0
0
1
171
beeboop
beeboop@beeboopx·
a launchpad that pairs tokens with, and pays fees in, $DIEM for an autonomous compute loop by design would be really interesting. as the project scales, its token would continuously accumulate inference credits via $DIEM, unlocking more and more power over time. pretty interesting to think about that type of @base native compute flywheel powered by @AskVenice
17
10
98
48.9K
Crypto PK
Crypto PK@Crypt0_PK·
do you remember @teaprotocol? I remember their testnet went live in 2024 and I did a lot of interactions at that time because the testnet meta was cooking then. They chose to drag things out until the testnet meta died, and now I'm not expecting a single penny. The $TEA TGE is set for June 4th, and honestly we'll get a cup of tea I guess, gn
Crypto PK tweet media
8
0
19
1.3K
tea Protocol
tea Protocol@teaprotocol·
You asked "wen?". Now it’s official. Voting opens on Aerodrome May 28. The Tea Party Begins June 4. $TEA is launching on Aerodrome. The course to $TEA is set. *CEX listings to be announced in the upcoming weeks.
297
144
924
109.9K
Brian Armstrong
Brian Armstrong@brian_armstrong·
CLARITY is closer than ever. The bill is strong. It will benefit the American people by making the US financial system faster, cheaper and more accessible. It will also ensure that the US leads in the global race to build the next generation of our financial system. Huge thank you to the Senate, their staff, and 3.7m @standwithcrypto advocates for helping to get this legislation to where it is today. Mark it up.
1.1K
2.4K
14.8K
1.2M
Turtle (𝔦, 𝔦)
Turtle (𝔦, 𝔦)@turtleonchain·
@istdxb The $PEAQ team has big plans for $WOON and the market will catch on to it eventually. We got rid of some broke scalpers today and have higher floors. The next wave will take us above 1M. It's inevitable considering how well $ROBOTMONEY is performing as well. Great times ahead.
Turtle (𝔦, 𝔦) tweet media
1
2
14
870
Turtle (𝔦, 𝔦)
Turtle (𝔦, 𝔦)@turtleonchain·
$WOON is a machine collecting machines while being the most retail-friendly looking machine on X. The story is unfolding right in front of us.
Turtle (𝔦, 𝔦) tweet media
peaq@peaq

@turtleonchain @Woon_agent Woon's the kind of agent that proves the thesis — a machine earning and co-owning other machines onchain Plenty more where that came from once peaqOS goes wide

1
4
38
2.7K
penguinxbt
penguinxbt@penguinxbt_·
adding onto this $hermesos dip: i trimmed a good bit at the top and still hold a nice moon position, since it's one of my favorite coins. always take profit, no matter what it is; don't listen to these kols saying 100x or nothing. watching it closely: if it falls through, the dump could be fairly violent, but i'll see if i can trim beforehand and buy that dip. ideally on dips we see a large reaction from buyers. not always, but that's classic btc ta: no reaction is a good bet for more continuation to the downside.
2
0
1
291
Zero Degree
Zero Degree@cryptobazigar·
I was one of the earliest on Hermes-based tokens. I gave you $HermesOS at $250k MC; it's now trading 4x higher. Now I have my eyes on $COS, a.k.a. @ClawdOS. This is an under-the-radar gem currently trading at around $80k MC. Do the math. Do the research.
Zero Degree tweet media
Zero Degree@cryptobazigar

Pivoted to $Clawnch ($1.65mn) and $HermesOS ($250k). Absolute bargains at these prices, because people are preferring Hermes by @NousResearch over @openclaw. Right now only @Clawnch_Bot and @HermesOScloud have integrated Hermes tech. Both will have the first-mover advantage.

2
2
7
1.3K
anon
anon@Jeffoy88·
@Wayland_Six so what is that? Your project isn't part of it, and you aren't part of Nous Research.
1
0
0
66
Ash
Ash@Wayland_Six·
This new Nous Research paper may end up being one of the most economically important AI breakthroughs in years. Not because it makes models smarter, but because it may dramatically reduce the cost and time required to train them.

Most people completely misunderstand what frontier AI training actually looks like. Training a modern large language model is not just "running ChatGPT on a computer." It involves:
- gigantic data centres filled with GPUs
- enormous electricity usage
- massive cooling infrastructure
- months of nonstop computation
- training runs that can cost hundreds of millions of dollars
And that's before you even know if the experiment worked.

Now imagine if someone finds a way to make that process 2-3x more efficient.
→ Not by changing the final AI model.
→ Not by inventing a whole new architecture.
→ But simply by changing HOW the model learns during training.

That's what makes this new Nous Research paper so important. The technique is called Token Superposition Training (TST).

The simple explanation is this: normally, an AI model learns language one token at a time. Word. Next word. Next word. Next word. Trillions and trillions of times. That process is incredibly expensive.

What Nous is proposing is that during the early stages of training, the model may not actually need to learn every token individually yet. Instead, it can temporarily learn from compressed groups of tokens together. So instead of learning from "The cat sat on the mat" as completely separate token predictions, the model briefly learns from blended groups of token information during early training.

That sounds like it should completely break the model. But apparently... it doesn't. Because later in training, the system switches back to normal token-by-token learning so the model can recover precision and refine itself properly. And according to their results, the final model quality remains competitive while training becomes dramatically faster.
That's the important part people are missing. The final inference model stays the same. Meaning:
- no new chatbot architecture
- no new serving stack
- no retraining the entire ecosystem around a new model type
- no weird compatibility layer
Just: far more efficient training.

That matters because the biggest bottleneck in AI right now is increasingly economics and infrastructure. The world is running out of:
- high-end GPUs
- power capacity
- data centre infrastructure
- training bandwidth

AI progress is no longer just about "who has the smartest researchers." It's increasingly about "who can train and iterate fastest." And iteration speed is everything.

If a lab can:
- train models faster
- run more experiments
- test more ideas
- spend less money per run
- occupy GPU clusters for less time
they accelerate their entire research loop. That compounds hard.

Which is why algorithmic efficiency breakthroughs like this can become insanely important. Historically, software-level efficiency improvements often end up creating more impact than raw hardware improvements. And this paper is basically trying to do exactly that for LLM training.

Now, important caveat: this has NOT yet been validated on frontier-scale trillion-parameter models. The paper tested:
- 270M, 600M, and 3B dense models
- a 10B MoE setup
So nobody should pretend this is already proven at GPT-5.x scale. But if these results continue scaling upward... this could become one of those papers people look back on later and realise quietly changed the economics of AI training itself.
Nous Research@NousResearch

Today we release Token Superposition Training (TST), a modification to the standard LLM pretraining loop that produces a 2-3× wall-clock speedup at matched FLOPs without changing the model architecture, optimizer, tokenizer, or training data.

During the first third of training, the model reads and predicts contiguous bags of tokens, averaging their embeddings on the input side and predicting the next bag with a modified cross-entropy on the output side. For the remainder of the run, it trains normally on next-token prediction. The inference-time model is identical to one produced by conventional pretraining.

Validated at 270M, 600M, and 3B dense scales, and at 10B-A1B MoE. The work on TST was led by @bloc97_, @gigant_theo, and @theemozilla.
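The bag mechanics the announcement describes (average the embeddings of a contiguous bag on the input side, score the next bag with a modified cross-entropy on the output side) can be sketched in a few lines. This is a toy numpy illustration under stated assumptions, not the paper's implementation: the function names are made up, and the choice of a uniform per-bag soft target is just one plausible reading of "modified cross-entropy."

```python
import numpy as np

def bag_tokens(token_ids, bag_size):
    """Group a token sequence into contiguous, non-overlapping bags."""
    n = len(token_ids) // bag_size * bag_size
    return np.asarray(token_ids[:n]).reshape(-1, bag_size)

def bag_input_embeddings(bags, embedding_table):
    """Input side: average the embeddings of the tokens in each bag."""
    return embedding_table[bags].mean(axis=1)          # (num_bags, d_model)

def bag_targets(bags, vocab_size):
    """Output side: soft target spreading probability mass uniformly
    over the tokens of each bag (an illustrative choice)."""
    targets = np.zeros((bags.shape[0], vocab_size))
    for i, bag in enumerate(bags):
        np.add.at(targets[i], bag, 1.0 / bag.size)     # handles repeated tokens
    return targets

def bag_cross_entropy(logits, target_dists):
    """Cross-entropy between softmax(logits) and the soft bag targets."""
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(target_dists * log_probs).sum(axis=-1).mean()

# Toy usage: bag size 2, vocab of 6, 2-dim embeddings.
ids = [1, 2, 3, 4, 5, 0]
bags = bag_tokens(ids, bag_size=2)                     # [[1,2],[3,4],[5,0]]
emb = np.arange(12, dtype=float).reshape(6, 2)
x = bag_input_embeddings(bags[:-1], emb)               # inputs: each bag
y = bag_targets(bags[1:], vocab_size=6)                # targets: the *next* bag
loss = bag_cross_entropy(np.zeros_like(y), y)          # dummy uniform logits
```

After the early bagged phase, training would switch back to ordinary next-token prediction, which is why the inference-time model is unchanged.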

5
3
25
4K
anon
anon@Jeffoy88·
Why does $PENGU @pudgypenguins keep selling unlocked tokens? Doesn’t that show a lack of confidence in your own token?
1
0
6
2.2K