Zook

1.5K posts

@zook_data

Data analyst | Blockworks

Joined August 2021
389 Following · 594 Followers
Zook @zook_data
@budapp I'd like to try it please. :-)
Replies 1 · Reposts 0 · Likes 2 · Views 55
Bud @budapp
Introducing Bud, the first AI human emulator. Bud has a full computer with storage, compute, and memory to build and code; SMS and Telegram to communicate; and a full browser to use. It can create, store, and edit files, connect to and use your tools, learn custom skills, work fully autonomously, and complete any task end to end, just like a human. Text the number below or try it free at bud [dot] app. Comment for 100k free credits.
Replies 2.8K · Reposts 325 · Likes 4K · Views 681.4K
Zook @zook_data
The volume ranking changes frequently.
Replies 0 · Reposts 0 · Likes 0 · Views 14
Zook @zook_data
@Usoppu Blocked
Replies 0 · Reposts 0 · Likes 0 · Views 5
Usopp @Usoppu
You could retire with $1M
- deposit $1M into DeFi
- earn around $7k-$8k per month
- move to Thailand or some shit...
- spend max $5k per month
- live peacefully like a king
- off-ramp through neobanks
- live off the yield tax-free
What's stopping you?

Quoting Solana Sensei @SolanaSensei:
You could retire with $1M
- deposit $1M into Solana DeFi
- earn around $7k-$8k per month
- move to Thailand or some shit
- spend max $5k per month
- live peacefully like a king
- off-ramp through neobanks
- live off the yield tax-free
What's stopping you?

Replies 158 · Reposts 6 · Likes 253 · Views 140.1K
Zook @zook_data
@0xShual You forgot about Bybit. Even a CEX is not safe.
Replies 0 · Reposts 0 · Likes 0 · Views 58
Shual @0xShual
So, to recap, the sentiment on the TL is:
- DeFi is dead: don't bother with it, don't deposit anywhere, 'just use aave' is dead, off-ramp and at best park with ibkr or coinbase
- The age of crypto is over: we're no longer early, it's the institutional era, coins have infinite price-insensitive sellers, and retail isn't coming to buy your bags
- Onchain is dead, especially on solana, because of pvp tards that rush to outdump each other on 30k market caps. The only true runners are flukes on ethereum that are old, have no gen z controlling their supply, and are reliant on elon tweets.
- The handful of projects that were considered investment-worthy either aren't (aave, for example) or are already adequately priced (hype, zec). There are a few silent runners like $morpho, but not many, and low volume.
- GameFi is dead. SocialFi is dead. L2s are barren. Financial activity only exists to farm points.
Did I miss anything? Is anyone excited about anything? Something? If you're reading this - why are you still in crypto?
Replies 425 · Reposts 76 · Likes 1.6K · Views 250.7K
Zook @zook_data
@elvissun @NousResearch @grok Does the OP have other posts where he explains how he uses TOOLS.md and AGENTS.md? I read the Vercel post he links, but I want to understand how people apply this method in OpenClaw.
Replies 1 · Reposts 0 · Likes 0 · Views 234
Elvis @elvissun
i spent 9 hours studying the source code of openclaw and hermes side by side. here's everything i learned. post 1/n: skills @NousResearch

hermes first. the hook is that the agent self-improves by writing its own skills. the system prompt has a nudge baked in: every N tool calls, consider saving a skill. after task completion, a background review scans for skill-worthy patterns. before context compression kicks in, durable knowledge gets flushed to disk. the prompt is blunt - if an existing skill covers this, patch it in place. only create new if nothing matches.

and it works. i watched it create an extract-social-testimonial skill on its own, and it's proven useful. I had a /save command in OpenClaw that'll do this when prompted, but this is the kind of skill I never would have thought to create. seeing this work for the first time felt like magic.

---

the other half of why hermes feels productive out of the box: the opinionated bundled library is massive. i counted 123 SKILL.md files shipped on my install before hermes wrote a single one of its own. github PR workflows, obsidian, google workspace, linear, notion, typefully, perplexity, deep research, minecraft modpack server (lol) - a huge surface area of "somebody already figured this out for you."

this is what opinionated actually means. you're not getting a blank agent and a framework, you're getting an agent that already knows how to do 100+ things on day one, plus a self-improvement loop that learns more as you go. strong defaults as a product. when the opinions are good, the leverage is massive. (think tailwind or rails)

and they literally just doubled down on this with a "tool gateway" yesterday - one subscription, 300+ models, plus first-party web scraping, browser automation, image gen, cloud terminal, text-to-speech. one account. hermes' direction is unambiguous: more batteries, fewer decisions the user has to make. this is the rails move - own the whole stack so the default path is the happy path.
---

so here's the thing I don't see anyone talking about yet with hermes: self-authored skills have a skill explosion problem.

real example from my own ~/.hermes/skills/ directory. the agent wanted to read an image from my desktop. it tried the browser read and vision skills; nothing worked. so it wrote a third skill, read-local-image, lol. these are 3 skills all adjacent to "image + local filesystem + model can see it." the skill list grows, and entries become mutually non-exclusive, very quickly.

this is the long-tail failure mode. the agent is great at spotting "i should bottle this up." it's less great at spotting "I already bottled this up three folders over." you end up with a corpus that grows faster than it consolidates. net impact over time: you accumulate a lot of skills. some brilliant, some redundant, some that overlap three other skills nobody remembers exist.

i'm sure @Teknium already knows this and it's just a product prioritization decision right now. (this is my favorite part, more on this later) they'll prob solve this soon as more users turn into power users and their skills accumulate - something like a consolidation pass with invocation metrics, plus stronger dedupe on skill creation.

---

@openclaw doesn't have this problem. partly because it doesn't auto-generate skills at the same rate, so there's less to dedupe in the first place. and partly because it has more mechanisms to solve it structurally.

what it does differently: openclaw takes the opposite stance on skills. from their VISION.md: "we still ship some bundled skills for baseline UX. new skills should be published to ClawHub first, not added to core by default. core skill additions should be rare and require a strong product or security reason." anti-bloat by policy. cleaner, but the authoring is on you.

so their skills are explicit artifacts with governance at every layer. five sources ranked by precedence (workspace > user global > managed > bundled > extra), so you always know what is loaded.
when something breaks at 3am, you can trace it in one grep instead of guessing which skill the agent triggered. discovery is bounded at multiple levels - byte caps, candidate caps, symlink rejection, verified file opens. eligibility checks are separate from discovery, so different agents can see different subsets - your coding agent doesn't need your email skills in its context. smaller surface area = cheaper runs, sharper responses, less drift on long tasks.

and the governance piece is explicit product policy: bundled skills are baseline only, new skills go to clawhub first, core additions should be rare. the corpus doesn't rot because nothing gets added without user intention - every skill has to earn its spot.

this is what primitives actually means. you're not getting defaults, you're getting guarantees. openclaw does exactly what you told it to do, nothing more, nothing less. boring in the best way. when you're shipping this in production or running it inside a team, boring is the whole product. (think linux, kubernetes)

---

and here's the practical thing that shipped results for me on @openclaw: i combined TOOLS.md with vercel's AGENTS.md optimization pattern. tool activation correctness is better on openclaw than hermes for me on tasks where the agent has to pick the right cli/api from ~50 options. vercel has a nice writeup on this, send it to your agents: vercel.com/blog/agents-md… tldr is explicit > implicit. the agent doesn't have to decide "is this skill-worthy enough to load," because the routing rules are already in the system prompt.

---

so my current read: both harnesses will do everything you want. pick either, you'll be fine. but if you're picking fresh:
> getting started quickly → hermes. opinionated defaults mean you're productive on day one and stay productive with little maintenance overhead.
> users who want 100% control → openclaw. legibility and scope control matter more than self-improvement does.
> builders → it depends... and i'm here.
some things openclaw does better, some things hermes does better. the honest move is to use one daily and steal patterns from the other.

---

but the more interesting question isn't which to pick - it's what you can learn from each.

@steipete gave the world a new layer in the stack and put a claw in everyone's hand. that's foundational work. you don't even need to use openclaw to benefit from openclaw - the patterns will show up in everything downstream for years. (plus the way he does agentic engineering should really be studied by everyone writing software right now)

@NousResearch is giving a masterclass in product positioning live right now. and this is the part that deserves its own post, but briefly: openclaw had the audience. the mindshare, the github stars, the "it's basically the standard now" energy. look at what happened to everyone who tried to fight that fight head-on. nanoclaw, nullclaw, picoclaw, zeroclaw. i can name ten more. all of them trying to out-openclaw openclaw - smaller, lighter, more minimal, more composable, better governance, whatever. none of them got hermes's traction. because when you compete with a category-definer by being a cheaper/cleaner version of them, the category-definer just wins by default. you're playing their game on their board.

hermes made their own game. self-authoring. bundled-by-default. maximalist on purpose. the tool gateway as lock-in. every launch reinforces the same thesis: we are not the minimalist primitives company, we are the batteries-included agent-as-a-product company. this is textbook product positioning. every single release - and the way they release it - should be studied.

that's the founder lesson. the user lesson is simpler. pick either. learn from both. then go make something useful.
Elvis tweet media
Replies 39 · Reposts 74 · Likes 594 · Views 69.2K
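The five-tier precedence scheme the thread attributes to OpenClaw (workspace > user global > managed > bundled > extra) can be sketched as a first-match lookup. This is an illustrative reconstruction, not OpenClaw's actual loader; all names here (`PRECEDENCE`, `resolve_skill`, the tier labels) are assumptions.

```python
# Hypothetical sketch of skill-precedence resolution as described in the
# thread: a higher tier always shadows a lower one, so you can always tell
# which copy of a skill is loaded.

PRECEDENCE = ["workspace", "user_global", "managed", "bundled", "extra"]

def resolve_skill(name, sources):
    """Return (tier, body) for the highest-precedence definition of `name`,
    or None if no tier defines it. `sources` maps tier -> {skill: body}."""
    for tier in PRECEDENCE:
        if name in sources.get(tier, {}):
            return tier, sources[tier][name]
    return None

sources = {
    "bundled": {"github-pr": "bundled default"},
    "workspace": {"github-pr": "project override"},
}
# The workspace copy shadows the bundled one.
print(resolve_skill("github-pr", sources))  # -> ('workspace', 'project override')
```

The point of a fixed precedence list is the legibility the thread praises: tracing which skill fired is a single ordered scan rather than a guess.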
Zook @zook_data
@FCisco95 Hey Cisco. If you like deflationary L1 tokens, take a look at BNB. We'll publish a new dash on BNB Chain in 2 weeks or so.
Replies 5 · Reposts 0 · Likes 1 · Views 37
Cisco @FCisco95
@zook_data Most L1s are still printing way more than they can ever burn. validators farming emissions, holders watching their share dilute quietly... how many chains actually have a path to burn > emissions without nuking staking to do it?
Replies 1 · Reposts 0 · Likes 1 · Views 81
Carlo @Italianclownz
Qwen 3.6-35B-a3b-unsloth-MXFP4_MOE with reasoning, 262K context, YaRN-extended to 524K. On an RTX 3060 12 GB, 8th-gen i5, 46 GB RAM. For the speeds I am getting using Tom's Turboquant, I am impressed. Never thought I would be able to run anything on this setup. @no_stp_on_snek
Carlo tweet media
Replies 17 · Reposts 24 · Likes 295 · Views 31K
Zook @zook_data
@0xSero @grok What are best practices for using subagents in the Codex VSC environment? When does it make sense to use subagents? How do you prompt Codex for a subagent workflow? Any pitfalls? Expected benefits? --search deeply
Replies 1 · Reposts 0 · Likes 0 · Views 247
0xSero @0xSero
I scanned through all my latest sessions with and without subagents & missions:
1. Using orchestration cut time to task completion by almost 50%
2. Most of my successful sessions make heavy use of subagents & missions; 95% of the sessions I start end up successful and merged
3. Feature completion is pretty high too
However:
- I have to steer more actively, 10x more interactions
- I have to do more early restarts for tasks
- I have more sub failures during the successful runs
0xSero tweet media
Replies 14 · Reposts 5 · Likes 99 · Views 7.4K
Zook @zook_data
@0xSero DGX Spark
Replies 0 · Reposts 0 · Likes 0 · Views 28
0xSero @0xSero
Ideal go kit:
1. ~30 1-gram gold pieces
2. Water filtration kit
3. Fire starter
4. A gun, knife, flare
5. A computer
6. A few power banks
7. A compass
8. Opioids, antibiotics
9. Rope + net
10. High protein food
11. Portable solar panel
12. Portable starlink
13. Tent
0xSero tweet media
Replies 31 · Reposts 12 · Likes 201 · Views 9.8K
Zook @zook_data
@Cameron_Dennis_ @NEARProtocol @buidl_conf Hello, NEAR also offers Open Claw on NEAR cloud. However, there is a HUGE problem: NEAR's Open Claw version is wildly outdated, and "openclaw update" is blocked! Any plans to let users self-update their Open Claw on NEAR cloud? This is important for security.
Replies 0 · Reposts 0 · Likes 0 · Views 25
Cameron.near @Cameron_Dennis_
Does your agent run on infrastructure others can inspect? Can your inference provider see everything you send? If public, is it vulnerable to prompt injection? Do tool calls leak your data? If yes, try IronClaw for more security and attend @buidl_conf this week in Seoul!!

Quoting BUIDL Asia (April 16-17, 2026) @buidl_conf:
We are thrilled to welcome @Cameron_Dennis_ to BUIDL Asia 2026! Cameron is the Director of AI at the NEAR Foundation, where he leads partnerships, integrations, strategy, and ecosystem growth. He will be on a panel, "Can you trust your agent?", on April 16th!

Replies 3 · Reposts 6 · Likes 42 · Views 16.5K
Zook @zook_data
@BNBCHAIN This is great. I'm sure it will be appreciated by all the people using BNB Chain for peer-to-peer payments.
Replies 0 · Reposts 0 · Likes 0 · Views 82
BNB Chain @BNBCHAIN
Talk is cheap, but fees on stablecoins are even cheaper. Pay 0 fees on transactions using USDC, USD1 and U on BNB Chain until April 30. Learn how and where you can save 👇 bnbchain.org/en/blog/0-fee-…
Replies 35 · Reposts 47 · Likes 168 · Views 14.2K
Zook @zook_data
@JasonYanowitz ATH token price, ATH # of validators, gMonad.
Replies 0 · Reposts 0 · Likes 0 · Views 143
Grok @grok
EDR = Endpoint Detection and Response. It's cybersecurity software (like CrowdStrike Falcon or Microsoft Defender for Endpoint) that runs on laptops/phones/servers. Instead of just scanning for known viruses, it watches *behavior* in real time: weird processes spawning, random network calls to unknown IPs, hidden files, credential grabs, persistence tricks, etc. In crypto ops, it flags stuff like a "clean" repo that quietly drops malware to steal keys or hijack signatures - even if the code looks legit. That's why Tay was pushing it hard for teams.
Replies 1 · Reposts 0 · Likes 5 · Views 651
Tay 💖 @tayvano_
I beg everyone in crypto to read this in full. I expected this to be another case of social engineering, likely some recruiter/job offer shit. I was very wrong. And the depth of the operation and personas makes me think they already have multiple other teams on lock. 😳

Quoting Drift @DriftProtocol:
x.com/i/article/2040…

Replies 176 · Reposts 424 · Likes 3.2K · Views 716.8K
Zook @zook_data
@monad Brilliant idea. Simple, cheap, effective.
Replies 0 · Reposts 0 · Likes 0 · Views 336
Monad @monad
Introducing the Monad dedicated device subsidy program. Monad Foundation will be subsidizing the cost of dedicated signing laptops, for multisig, treasury, and other critical operations, for all protocol teams with at least $2.5M of TVL on Monad. Your signing machine should NEVER be the same as the one you use for everyday browsing, coding, or taking calls. Details below:
Monad tweet media
Replies 226 · Reposts 209 · Likes 1.6K · Views 236.6K
/ @13Rosalg
@0xSero Is there a good service to rent that is fully private? I've been using my GPU to analyze NDA stuff, but I sometimes need faster inference and larger LLMs.
Replies 2 · Reposts 0 · Likes 0 · Views 1.2K
0xSero @0xSero
Here's what I'd recommend if you're just getting started in AI, local or otherwise.
1. Work with the compute you have; even the dumbest LLMs can be useful if you treat them as a node in your system. Some basic problems to get you started:
- tag all your screenshots
- classify your emails
- recommendation algo
- scanning git history for patterns
2. If you want to try larger models, use Prime Intellect or Hotaisle (even cheaper) to rent whatever amount of VRAM you need to run models you like.
- RTX 3090s rent for cents/h
- RTX 6000s rent for 1-2 dollars/h
- H100s rent for 2-3 dollars/h
3. Use the right frameworks:
- vLLM and SGLang for faster inference if you have 1, 2, 4, 8, or 16 GPUs
- exllamav3 and llama.cpp if you have a non-power-of-2 GPU count
- MLX if you have a Mac
4. Start with problems in your life that could use intelligence for automation:
- shopping
- research
5. Don't expect local models to code production projects just yet.
Replies 32 · Reposts 46 · Likes 693 · Views 25.3K
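The "LLM as a node in your system" idea above (e.g. classifying your emails) can be sketched as a small tagging function with the model call injected. Everything here is illustrative: `make_email_tagger` and the `complete` callable are assumptions, to be wired to whatever local completion endpoint you actually run (llama.cpp, vLLM, etc.).

```python
# Minimal sketch of an email-classification node driven by a small local
# model. `complete` is any prompt -> text callable; even a weak model works
# because the node constrains the output to a fixed label set.

def make_email_tagger(complete, labels=("billing", "personal", "spam")):
    """Return a function mapping an email body to one of `labels`."""
    def tag(email: str) -> str:
        prompt = (
            f"Classify this email as one of: {', '.join(labels)}. "
            f"Answer with the label only.\n\n{email}"
        )
        answer = complete(prompt).strip().lower()
        # Small models ramble; keep only a recognized label.
        return next((l for l in labels if l in answer), "unknown")
    return tag

# Stubbed model for demonstration; swap in a real local endpoint.
tagger = make_email_tagger(lambda prompt: "spam, probably")
print(tagger("You have won a prize! Click here."))  # -> spam
```

Constraining the model to a closed label set, and falling back to "unknown", is what makes even "the dumbest LLMs" usable as a pipeline node.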
ohmyol @ohmyol
@Blockworks babe wake up, Blockworks just dropped new Monad onchain metrics
Replies 1 · Reposts 0 · Likes 6 · Views 413
Blockworks @Blockworks
NEW: Now tracking Monad, a high-performance EVM layer 1. We currently cover Monad's Financials, Onchain Activity, Staking, Spot Trading Activity, and MON activity on CEXs.
Blockworks tweet media
Replies 35 · Reposts 38 · Likes 248 · Views 57.4K