My crypto thoughts🆙🐳{l l}

17.6K posts

My crypto thoughts🆙🐳{l l}

@MyCryptoThought

Crypto flirter $BTC $LYX $TRAC $CFG $AUKI

Joined September 2012
1.4K Following · 859 Followers
CoinMarketCap
CoinMarketCap@CoinMarketCap·
Name a project that's been sleeping but is about to wake up.
2.7K
135
2K
285.4K
My crypto thoughts🆙🐳{l l} retweeted
J. Peg Getty
J. Peg Getty@JPeg_Getty·
The best-kept secret in crypto atm is LUKSO ($LYX), founded by @feindura, an OG Ethereum dev and author of Ethereum's ERC-20 token standard. @lukso_io
J. Peg Getty tweet media
1
8
65
893
My crypto thoughts🆙🐳{l l} retweeted
Better life
Better life@2100zec·
In the current market, AI is the absolute mainstream narrative, but most AI-related tokens are scams. $LYX has strong potential to become a genuinely credible AI narrative token. As long as LUKSO continues to advance on-chain AI agents and keeps optimizing the compatibility between Universal Profiles and agents, positioning LUKSO as a blockchain AI company would be a crucial strategic move to drive LYX’s price upward and reverse LUKSO’s declining trend.
2
4
44
718
CoinGecko
CoinGecko@coingecko·
Most undervalued project?
1.3K
54
996
119.6K
My crypto thoughts🆙🐳{l l} retweeted
Fabian Vogelsteller
Fabian Vogelsteller@feindura·
Very nice 👏 Now LUKSO is on 8004scan. And @emmet_ai_ has 3 entries, 3 chains, one address, and an @ERC725Account
Fabian Vogelsteller tweet media
LUKSO@lukso_io

LUKSO Mainnet is now live on @8004_scan. 8004scan is the explorer for the ERC-8004 agent ecosystem. Think Etherscan, but for AI agents. You can browse registered agents, check their reputation, and submit feedback, all onchain. LUKSO is built around identity. Now its agents are discoverable too. Explore LUKSO agents↴ 8004scan.io/networks?searc…

7
24
108
3K
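The agent-registry flow described in the quoted LUKSO post (register agents, check their reputation, submit feedback) can be sketched as a minimal in-memory model. The class, method, and field names below are illustrative assumptions, not the actual ERC-8004 interface or the 8004scan API, and the addresses are placeholders:

```python
# Minimal in-memory model of an agent registry: register agents across
# chains, submit feedback, and compute a simple reputation score.
# All names and data here are illustrative, NOT the ERC-8004 contract API.

class AgentRegistry:
    def __init__(self):
        # agent_id -> {"address": ..., "chains": [...], "feedback": [...]}
        self.agents = {}

    def register(self, agent_id, address, chain):
        # One agent identity can appear on several chains under one address,
        # like the "3 entries, 3 chains, one address" example in the thread.
        entry = self.agents.setdefault(
            agent_id, {"address": address, "chains": [], "feedback": []}
        )
        if chain not in entry["chains"]:
            entry["chains"].append(chain)

    def submit_feedback(self, agent_id, score):
        self.agents[agent_id]["feedback"].append(score)

    def reputation(self, agent_id):
        fb = self.agents[agent_id]["feedback"]
        return sum(fb) / len(fb) if fb else None

registry = AgentRegistry()
for chain in ["ethereum", "base", "lukso"]:
    registry.register("emmet", "0xPLACEHOLDER", chain)
registry.submit_feedback("emmet", 5)
registry.submit_feedback("emmet", 4)

print(len(registry.agents["emmet"]["chains"]))  # 3
print(registry.reputation("emmet"))             # 4.5
```

In the real system these operations would be onchain transactions and the explorer would read them from chain state; the sketch only shows the shape of the data flow.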
Altcoin Buzz
Altcoin Buzz@Altcoinbuzzio·
NAME THAT PROJECT. Use case: ★★★★★ Recognition: ★☆☆☆☆ Five-star utility. One-star awareness. The most dangerous combination in crypto. These are the projects that 100x when the market finally pays attention. Drop your picks below. The more obscure the better.
261
10
140
25.1K
CoinGecko
CoinGecko@coingecko·
Most underrated protocol?
258
23
311
40K
Altcoin Daily
Altcoin Daily@AltcoinDaily·
Which altcoin under $1 has the most potential?
1K
35
682
122.9K
Cointelegraph
Cointelegraph@Cointelegraph·
What’s one project you think is massively underrated right now? 👇
828
54
473
68.6K
My crypto thoughts🆙🐳{l l} retweeted
Krypto Insider 💫
Krypto Insider 💫@KryptoInsider1·
Isn't it insane that the projects that seem to attract large cult followings often end up being some of the biggest letdowns? Especially when there are big KOLs involved, the same ones who are notorious for pumping up the price of coins and then dumping them. As I've said before and will continue to say... if you've been supporting a project for years and there are no signs of recurring revenue, real usage, and no plans to buy back or burn tokens, you have every right to question what the actual business model is. If not, just understand you’re probably taking on a lot more risk than you think. At the end of the day, a project has to have real usage, real recurring revenue, and mechanisms that genuinely reward token holders.
Krypto Insider 💫 tweet media
11
6
36
2.4K
My crypto thoughts🆙🐳{l l} retweeted
Krypto Insider 💫
Krypto Insider 💫@KryptoInsider1·
TAO is back all over my timeline. And sure, I think it's a solid project, but I'm still scratching my head about why projects like TRAC or AI infrastructure projects like AUKI don't get more attention. @bittensor gets called the top AI project in crypto, yet we still haven't seen much real-world adoption. Meanwhile @Auki and @origin_trail already have major companies using their tech. Let's not forget: two of the biggest bottlenecks in AI are ▪️trustworthy data ▪️real-world perception. TRAC and AUKI, two projects that I supported early on, address these very issues. It feels like only a matter of time before the two of them finally get the credit they deserve, especially considering how fast their communities have already been growing.
30
13
73
5.9K
My crypto thoughts🆙🐳{l l} retweeted
Krypto Insider 💫
Krypto Insider 💫@KryptoInsider1·
These are the kinds of partnerships I’ve always wanted to see @lukso_io land. Seeing @MANSORYofficial show up on Universal Profile is genuinely cool. Outside of DePIN, LUKSO is one of the few projects that actually clicks for me. It’s doing something new and doing it well. With this deal, Mansory is actively participating in the network, aligning itself as a validator, and launching their official Universal Profile. This is the start of a premium, fully on-chain brand presence, including identity verification and exclusive community access.
LUKSO@lukso_io

For 30 years, @MANSORYofficial has defined a distinct standard in automotive culture. The brand is now partnering with LUKSO to explore how Universal Profiles can bring identity, ownership, and community closer together onchain.

11
23
112
3.7K
My crypto thoughts🆙🐳{l l} retweeted
Žiga Drev
Žiga Drev@DrevZiga·
Fast-tracking to v10: @origin_trail users, both enterprise and indie developers, are reporting that time to implement has gone down from 10 hours to less than 10 minutes, already with the DKG v9 testnet and collocation of nodes and AI agents. Best time to get on track is now!
OriginTrail Developers@OriginTrailDev

The frontier challenge in AI is no longer model capability. It's how agents share, verify, and build on each other's knowledge. That's what @origin_trail is solving. With DKG V9 validating the foundations, the next 4 weeks are focused on one goal: launching the DKG V10 mainnet, bringing multi-agent, verifiable memory into production at scale.
From single-agent intelligence → coordinated swarms
From isolated outputs → compounding knowledge
From probabilistic answers → verifiable truth
V10 is the unlock.

10
26
63
2.6K
My crypto thoughts🆙🐳{l l} retweeted
Žiga Drev
Žiga Drev@DrevZiga·
Exactly as @BranaRakic laid out right after your initial autoresearch drop: @origin_trail's Trust Layer is the missing piece for multi-agent iterations. Agent swarms produce thousands of parallel, never-merge findings. Those belong in a verifiable, queryable knowledge graph:
1. Agent queries the shared DKG: "What's already been tried? What worked?"
2. Picks the next experiment
3. Trains 5 min, evaluates
4. Publishes metrics + code diff + provenance back to the graph (immutable, structured, SPARQL-queryable)
5. Repeat, now with collective intelligence, not isolated branches…
Brana Rakic@BranaRakic

We are about to ship the @origin_trail DKG v9 testnet. Here's why the timing matters.

━━━ Karpathy's Loop + DKG's Trust Layer ━━━

@karpathy just released autoresearch - autonomous agents running ~100 ML experiments overnight on a single GPU. You write program.md. The agents iterate indefinitely. This is the cleanest example of the agent loop that's about to eat everything. And it maps directly onto OriginTrail's verifiable context graphs:
1. Query the agent network (DKG) for what's been tried and what worked
2. Choose an experiment based on collective findings
3. Train 5 min, evaluate
4. Publish the result - metrics, code diff, platform - to the shared graph
5. Repeat

Karpathy proved this for ML research. The unlock is applying it everywhere else: robotics, manufacturing, scientific research, autonomous supply chains... The code is almost irrelevant. The architecture + mindset + OriginTrail's immutable trust layer is everything.

Git's data model is wrong for this. Branches assume merge-back. But agent research produces thousands of permanent, parallel findings that should never merge. They should accumulate as queryable knowledge, not code diffs. An experiment result isn't a git commit. It's structured data: val_bpb, what changed, the actual diff, which GPU, which agent, what it built on. Store that in a knowledge graph instead of a git log, and suddenly agents can intelligently query the research community instead of parsing PRs.

━━━━━━━━━━ We tested the coding swarm benchmark ━━━━━━━━━━

Similarly, we've tested whether a decentralized knowledge graph makes AI coding agents faster and cheaper. Claude Code built 8 identical features on a 6.8M-token monorepo (of @OpenClaw). Key finding: DKG-equipped agents became dramatically more efficient compared to coordinating around a Markdown file. Claude agents using DKG v9 for coordination achieved up to 60% faster wall-clock completion on some of the coding tasks and up to 40% lower LLM token cost. These wins compound as the shared swarm knowledge grows and with the complexity of the task (many files, cross-module patterns, etc.).

━━━━━━━━━━ 🔧 What's new in DKG v9 ━━━━━━━━━━

→ Node collocated with your agents (OpenClaw, LangChain, ElizaOS, etc.)
→ Node can be set up on your local device; the ideal UX is from the device you use to operate your AI agents
→ Hello World onboarding: hours → minutes, even for non-technical users
→ Context Oracles: multi-agent consensus turns assertions into verified knowledge
→ Two-layer architecture: mutable workspace + on-chain permanent settlement
→ Full SPARQL graph querying - ask what's connected, not just what looks similar
→ Play the OriginTrail Game to test the node - a multiplayer AI survival run on DKG v9 played by humans and AI agents. Every decision is a Knowledge Asset. Every outcome is verified by the Context Oracle.

━━━━━━━━━━ The Road to the Mainnet ━━━━━━━━━━

DKG v9 is the 9th iteration of @origin_trail, and it's being built at the increased speed the agent swarms on the infrastructure allow for. Agent swarms are already iteratively developing, stress-testing, and hardening the network in real time, and every iteration is enhanced through the use of DKG v9 in a build loop that runs live. As we progress toward mainnet, the conviction mechanisms go live that make the network's incentive layer as verifiable as the knowledge it carries: the economic mechanisms by which the network's growth becomes self-reinforcing, with the agents building the graph, the stakers backing it, and the publishers expanding it all moving in the same direction, permanently, at swarm speed. Stay tuned for updates and Trace ON!

0
20
47
2.2K
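The five-step loop laid out in this thread (query the shared graph, pick an experiment, train and evaluate, publish with provenance, repeat) can be sketched with a plain Python list standing in for the shared DKG. Everything here is an illustrative assumption, not the OriginTrail SDK: real agents would publish Knowledge Assets and query over SPARQL, and the "training" step is a deterministic toy metric.

```python
import random

# Toy sketch of the multi-agent research loop described above.
# A Python list stands in for the shared DKG; all names are assumptions.

shared_graph = []  # published findings, visible to every agent

def query_prior_findings():
    """Step 1: 'What's already been tried? What worked?'"""
    return {f["experiment"]: f["val_bpb"] for f in shared_graph}

def pick_next_experiment(tried):
    """Step 2: choose a candidate no agent has published yet."""
    candidates = [f"lr={lr}" for lr in (0.1, 0.01, 0.001, 0.0001)]
    remaining = [c for c in candidates if c not in tried]
    return remaining[0] if remaining else None

def train_and_evaluate(experiment):
    """Step 3: stand-in for a short training run producing a metric."""
    random.seed(experiment)  # deterministic toy "val_bpb"
    return round(random.uniform(0.8, 1.2), 3)

def publish(agent, experiment, val_bpb):
    """Step 4: metrics + provenance go back to the shared graph."""
    shared_graph.append(
        {"agent": agent, "experiment": experiment, "val_bpb": val_bpb}
    )

# Step 5: repeat. Two agents alternate; because both read the same graph
# before choosing, no experiment is ever duplicated.
for step in range(4):
    exp = pick_next_experiment(query_prior_findings())
    if exp is None:
        break
    publish(f"agent-{step % 2}", exp, train_and_evaluate(exp))

print(len(shared_graph))                             # 4 findings published
print(len({f["experiment"] for f in shared_graph}))  # 4 distinct, none repeated
```

The design point the thread makes is visible in the loop: the shared graph, not a branch-and-merge history, is what lets each agent build on collective findings instead of isolated ones.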
My crypto thoughts🆙🐳{l l} retweeted
Žiga Drev
Žiga Drev@DrevZiga·
The @origin_trail DKG v9 testnet is about to get shipped. Here's what's new - and why the timing is right. @karpathy just dropped autoresearch - autonomous agents running ~100 ML experiments overnight on a single GPU. The human writes program.md. The agent iterates indefinitely. DKG v9 is what turns that from a single-agent loop into a multi-agent research network - where every experiment result is a Knowledge Asset, every agent can build on every other agent's findings, and the provenance chain is cryptographically intact. 👇
Brana Rakic@BranaRakic

(Quoted post identical to the @BranaRakic DKG v9 testnet post above.)

0
24
62
1.3K
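The "full SPARQL graph querying - ask what's connected, not just what looks similar" feature mentioned in the DKG v9 thread can be illustrated with a toy triple store. The naive matcher below is a stdlib stand-in for a real SPARQL engine, and all identifiers and data are invented for illustration:

```python
# Toy triple store showing the "ask what's connected" idea behind graph
# querying. A None in a pattern acts like a SPARQL variable. This is a
# stand-in for a real SPARQL engine; all data here is invented.

triples = [
    ("exp:001", "triedBy", "agent:a"),
    ("exp:001", "buildsOn", "exp:000"),
    ("exp:002", "triedBy", "agent:b"),
    ("exp:002", "buildsOn", "exp:001"),
]

def match(pattern):
    """Return all triples matching one (s, p, o) pattern."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which experiments build on exp:001?" - a structural question that a
# text-similarity search could not answer reliably.
descendants = [t[0] for t in match((None, "buildsOn", "exp:001"))]
print(descendants)  # ['exp:002']
```

In SPARQL proper the same question would be a basic graph pattern like `SELECT ?e WHERE { ?e ex:buildsOn ex:exp001 }`; the point is that the answer comes from explicit edges, not from what "looks similar".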
My crypto thoughts🆙🐳{l l} retweeted
Žiga Drev
Žiga Drev@DrevZiga·
🇪🇺 Europe chose the DKG again! Knowledge Asset after Knowledge Asset, @origin_trail is putting European AI on trac(k). What started as an enterprise-grade solution has evolved into something much bigger: a go-to trust layer for European AI infrastructure projects. As Europe doubles down on trustworthy, explainable, and sovereign AI, hyperconnected context graphs are no longer optional - they are foundational. AI systems need:
• Verifiable data provenance
• Traceable decision context
• Interoperable knowledge layers
• Infrastructure aligned with European values of transparency and accountability

The DKG delivers exactly that - at scale. From compliance and manufacturing to research, credentials, and emerging agentic AI systems, Knowledge Assets are forming a sovereign, verifiable memory backbone for the next generation of European digital infrastructure. Exciting updates ahead in 2026! Trust the source.
Žiga Drev tweet media
OriginTrail@origin_trail

x.com/i/article/2028…

7
44
120
9.4K
Trader Jim
Trader Jim@Trades_with_Jim·
@MyCryptoThought It's so hard man all this grey just gets you down even when life is good 😅
1
0
1
22
Trader Jim
Trader Jim@Trades_with_Jim·
UK winters are tough
1
0
0
187