Vux

9 posts

@Iri_146

People don't fail for lack of opportunity, but because they expect too much!

Joined December 2015
15 Following · 16 Followers
Vux retweeted
seb @sebbsssss
Two weeks ago, we set out to make AI memory portable. Today, Clude is the first AI memory layer with a real, signed, open file format. Every memory carries a cryptographic receipt: content-hashed, signed with your private key, and anchored to Solana. Anyone can verify it. No one needs to trust us.

We shipped a tiny standalone verifier. Install a thirty-kilobyte package, point it at any pack, and audit it without a Clude account. Backups, encryption, attachments, and signed soft-deletes for GDPR are all in the spec.

On top of that, we built the Brain Wiki. Your conversations don't disappear into chat history. They auto-organize into a structured, cross-linked knowledge base: open questions, decisions, to-dos. The agent flags contradictions when older notes disagree with newer ones. Install a vertical like Compliance, and memories about audits, SOC2, or regulator asks start routing into the right topics automatically, by keyword and by embedding similarity. Export the whole thing as markdown, open it in Obsidian, or share a single topic with a colleague.

Now the bigger play: memory tokenization. Every memory becomes a first-class digital asset. Your AI's knowledge, every conversation, every decision, every preference it learned about you, becomes something you can own, prove, and move. You grant or revoke access cryptographically. You move memory between agents: Clude, ChatGPT, Claude, anything that supports the open spec. You prove what your AI knew, and when, without trusting the vendor.

Memory marketplaces. Auditable AI. Multi-agent interop with cryptographically verifiable trust. This is the moment memory stops being a feature buried inside someone else's product. It becomes a protocol. Open. Portable. Yours. Welcome to the memory layer.
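The receipt flow described above (content-hash, sign, verify) can be sketched roughly as follows. This is a hypothetical illustration, not Clude's actual spec: the real format signs with an asymmetric private key and anchors hashes to Solana, while this stdlib-only stand-in uses HMAC, so verification needs the same shared key.

```python
import hashlib
import hmac
import json

def memory_receipt(memory: dict, secret_key: bytes) -> dict:
    """Build a verifiable receipt for one memory record (illustrative only)."""
    # Canonical serialization so identical content always hashes identically.
    canonical = json.dumps(memory, sort_keys=True, separators=(",", ":")).encode()
    content_hash = hashlib.sha256(canonical).hexdigest()
    # Stand-in for an asymmetric signature over the content hash.
    signature = hmac.new(secret_key, content_hash.encode(), hashlib.sha256).hexdigest()
    return {"content_hash": content_hash, "signature": signature}

def verify_receipt(memory: dict, receipt: dict, secret_key: bytes) -> bool:
    """Recompute hash and signature from the memory and compare to the receipt."""
    expected = memory_receipt(memory, secret_key)
    return (hmac.compare_digest(expected["signature"], receipt["signature"])
            and expected["content_hash"] == receipt["content_hash"])
```

Any edit to the memory changes the content hash, which invalidates the signature, which is what lets a third party audit a pack without trusting the vendor.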
Vux retweeted
biohacking @biohackingsol
Biohacking is turning into a trillion-dollar market.
biohacking @biohackingsol
Peptides are just the tip of the iceberg. It's all biohacking.
Vux retweeted
seb @sebbsssss
We rejoice when we see a larger-context-window model released. It's great, but most people don't see the other side: every loaded tool or extended context eats tokens whether or not you use it on that turn. It's what we call the "context tax"; essentially, you pay more tokens because of larger context windows. The recent announcement from @AnthropicAI on moving to API billing means the context tax is now real.

If you're building agents, token-efficient memory architecture is no longer optional. Every token sent to an LLM introduces a tradeoff: increased cost, added latency, and diminishing performance. Beyond a certain threshold, additional context no longer improves outcomes. Instead, it leads to "context rot", where the model becomes less effective as it struggles to navigate accumulated noise.

So I tested this with Clude on real-world production data and models. 200 memories. Same question. Same model. One run stuffs everything into the prompt; the other uses Clude's semantic recall to retrieve only what's relevant. Native: 2,081 input tokens. Clude: 229 input tokens. 89% fewer tokens. Same answer.

And here's the part that matters at scale: without selective recall, your costs grow linearly with every memory you add. 1,000 memories? 10,000? Your prompt just keeps getting fatter. With Clude it stays flat, because your agent only pulls what it needs for that specific query.

This is one part of what we're building. @cludeproject separates memory from the model.
Vox @Voxyz_ai

Posted about token waste yesterday. a reader came back with his own numbers: 139M input, 935K output, 148 to 1. his agent was injecting 52KB of context every single turn before it even saw the message. MEMORY.md alone was 22KB. every conversation started with rereading a small book.

this is not an edge case. most agents are not thinking. they are reviewing.

the fix is boring. move everything except core rules out of the prompt. put it in semantic search. let the agent fetch what it needs, when it needs it. he said token usage dropped 40-60% after the change. quality stayed the same.

the most expensive thing is not paying your agent to think. it is paying it to reread the same files over and over.
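The pattern both posts describe (stuff every memory into the prompt vs. fetch only what's relevant per query) can be sketched with a toy retriever. A minimal illustration that assumes nothing about Clude's actual API: real systems rank with learned embeddings, while this stand-in uses bag-of-words cosine similarity.

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    """Toy stand-in for an embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(query: str, memories: list[str], k: int = 3) -> list[str]:
    """Return only the k memories most similar to the query, so the
    prompt stays flat no matter how large the memory store grows."""
    q = _vec(query)
    ranked = sorted(memories, key=lambda m: _cosine(q, _vec(m)), reverse=True)
    return ranked[:k]
```

With selective recall, prompt size is bounded by `k`, not by the total number of stored memories, which is why the cost curve stays flat instead of growing linearly.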

Vux retweeted
seb @sebbsssss
Everyone says their AI has memory. Almost none of them do. @cludeproject Chat is the first AI chat app with real persistent memory.

Not retrieval hacks. Not bloated context windows. Not fake "memory". Real memories that you actually own and can see across chats. Visible. Searchable. Usable. Any model. Transparent pricing: up to 250x cheaper than raw API.

Free test credits are live now. First come, first served. Start chatting and feel what AI is like when it actually remembers you at clude.io/chat. Bonus for the first 100 users: RT + reply "CHAT" with your wallet and we'll add extra credit.
Vux retweeted
Corey Ganim @coreyganim
10 niche AI agents you can sell for $2,000-5,000 per month:
1) Speed-to-lead agent (any industry)
2) Maintenance coordinator agent (property managers)
3) Claims processing agent (insurance)
4) Competitive market analysis agent (real estate)
5) Setter agent (coaching)
6) Patient intake/scheduling agent (dentists/doctors)
7) Order status/returns agent (ecom)
8) Inventory reorder alert agent (restaurants)
9) Recruitment screening agent (staffing firms)
10) Compliance document review agent (accounting/finance firms)
Pick a lane and get after it. Truly unlimited opportunity right now.
Sahil Bloom @SahilBloom

There are multiple $1B+ opportunities to build managed AI agent "swarms" for specific industry verticals. Here's how I think about it:

After just a few days toying around with agents, it's clear to me that the biggest challenge for adoption by non-tech companies and people isn't initial deployment. It's actually getting value out of the agents after they're deployed. You might be able to build and deploy an agent, but what the hell do you do with it afterward? How do you train it to get better? What are the most valuable use cases for your industry? What are the latest skills it needs to function at a 10/10 level? Without that, you're just going to have a bunch of fancy-looking AI agents gathering dust on the shelf because you have no clue how to get any value out of them. That's the opportunity... Here's how you grab it:

Pick a valuable industry vertical. Let's say finance. Build an agent "swarm" that is hyper-specific to that industry use case. For finance, it might be around modeling, industry case studies, company analysis, document review, etc.

Hire a handful of ex-finance folks (or get them at a high hourly rate in their off-time). Use their industry expertise to train the agents initially and to refine them on an ongoing basis. You could niche down even further and choose one specific use case for an initial land grab (i.e. a modeling agent swarm or a loan analysis agent swarm).

Deploy the agents the same way a staffing firm would deploy into a company. You could charge a one-time implementation fee plus an ongoing annual license fee. Continue to manage and improve the agents using the data and insights coming back from customers. Manage them, keep them up to date, fix any issues.

The customer is happy because they get the benefits of the transformative tech and cost savings without having to understand or improve it. You're happy because you're making money (and doing something pretty cool).

You could probably replicate this exact playbook across a long list of verticals (hence why I think there are multiple $1B+ opportunities). Just a thought...

Vux @Iri_146
@khaiminh_if I've been sitting and watching to trade these kinds of tokens. Each day the dev team launches around 3-4 of them depending on the batch, and these past few days I've seen them launching a lot. Anyone who cross-follows and has a play, I'll post it on Twitter so we can play together; if we get rugged, we cry together :))
Khải Minh @khaiminh_if
Just figured out a killer TIP; I've been testing it for 4 months now. Each day I farm $50-200 on $sol. Some days it hits, some days it doesn't, but it's steady, and the risk also scales with each person's greed. Small pickings, but over a long stretch it adds up quite nicely. Will share it with those who are meant to find it.
Vux retweeted
Michy / 米奇 @michyexe
The x402 ecosystem is exploding, and the most telling proof is the number of teams integrating it directly into their products. Whether it's APIs charging per request, compute providers charging per inference, or agents paying for data and tools as they operate, x402 has quietly become the backbone of an emerging machine economy. The next era of software won't rely on monthly subscriptions or human-triggered payments, but on real-time, per-action settlement between machines. Here's a list of more projects spearheading the shift.

— Core Infrastructure & Protocol Extensions
Projects extending or securing the x402 protocol layer itself, adding privacy, consensus, or specialized runtime enhancements.
● Protocol Extensions & Security Layers:
- Projects: @PRXVTai (px402), @t54ai (x402-secure) & @Ch40sChain
- They build privacy, trust, and consensus layers over x402. PRXVTai focuses on ZK privacy, t54ai on trust/security, and Ch40sChain replaces the facilitator with a BFT workflow.
● ZK & Trustless Payments:
- Projects: @NovaNet_zkp & @radrdotfun
- zkx402 enables ZK-verified USDC payments; RADR's ShadowPay provides gasless ZK payments on x402.
● Infrastructure-Level Integrators:
- Projects: @quaindotcom & @alt_layer
- Quain integrates x402 at the RWA/AI infrastructure level; AltLayer's "x402 Product Suite" includes a Facilitator and Gateway.

— Agentic Frameworks & AI Ecosystems
These implement x402 for AI agent payments, agent coordination, or monetization of autonomous systems.
● AI Agent Platforms / Launchpads:
- Projects: @EternalAI_, @Treasure_DAO & @memeputer
- They enable AI agents to launch, transact, and monetize services using x402.
● AI Agent Frameworks:
- Projects: @karum_AI, @TheKodeusLabs, @Cod3xOrg, @animusuno & @Unibase_AI
- They build SDKs and dev kits for agents to coordinate, pay, or share data via x402.
● Compute & Robotics Integration:
- Projects: @comput3ai, @Roba_Labs, @SynthdataCo & @turf_network
- They implement x402 micropayments for compute access, robotics, and data transformation.
● Major AI Integrations:
- Projects: @AnthropicAI, @openmind_agi, @Orbofi & @Hyperware_ai
- They integrate x402 into MCP or agentic frameworks to let models (like Claude) pay for data, services, or API calls.

— Payment Facilitators, Gateways, & SDKs
These are the core payment enablers and developer tools for embedding x402 micropayments.
● Facilitators / Gateways:
- Projects: @0xGasless, @alt_layer, @Treasure_DAO, @Unibase_AI & @scattering_io
- They provide infrastructure to process x402 payments across ecosystems like Base and Avalanche.
● Payment SDKs / Dev Tools:
- Projects: @vercel, @brewitmoney, @1shotapi, @HeyElsaAI, @AurraCloud, @onchainpayment & @OneAnalog
- They offer developer tools and SDKs for adding M2M micropayments or pay-per-intent functionality to apps.
● Payment UX / Gateway Tools:
- Projects: @useload & @itsgloria_ai
- They enable micropayment-gated links, pay-per-call APIs, and paywalled data gateways.

— Explorers, Analytics, and Meta-Layer Services
Projects that analyze, visualize, or aggregate the x402 ecosystem's on-chain data.
● Explorers / Dashboards:
- Project: @x402scan
- An onchain explorer and agent dashboard for monitoring x402 ecosystem activity.
● Analytics / Aggregators:
- Project: @scattering_io
- A facilitator aggregator and analytics hub for x402 protocols.

— Platform-Level Integrations
Entities embedding x402 directly into their existing API or platform infrastructure to enable monetization.
● API Monetization Platforms:
- Projects: @neynarxyz, @Hive_Intel, @ar_io_network & @Polymer_Labs
- They integrate x402 to monetize access to APIs, blockchain data, and gateways.
● Enterprise & Merchant Tools:
- Projects: @AurraCloud & @itsgloria_ai
- They offer merchant-facing micropayment systems powered by x402.
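The per-request settlement loop the thread describes can be sketched as a plain request/retry cycle. This is a hypothetical illustration of the HTTP 402 pattern that x402 builds on; the header name, response fields, and payment proof here are invented stand-ins, not the official x402 wire format.

```python
# Illustrative sketch: a server gates an API behind a per-request
# micropayment, and a machine client settles and retries on its own.

PRICE_USDC = "0.001"

def verify_payment(proof: str) -> bool:
    # Stand-in check; a real facilitator verifies an on-chain settlement.
    return proof.startswith("settled:")

def handle_request(headers: dict) -> tuple[int, dict]:
    """Server side: answer 402 with payment terms until a valid proof arrives."""
    proof = headers.get("X-Payment")
    if proof is None:
        # 402 Payment Required, advertising what to pay and where.
        return 402, {"amount": PRICE_USDC, "asset": "USDC", "pay_to": "0xFACILITATOR"}
    if verify_payment(proof):
        return 200, {"data": "inference result"}
    return 402, {"error": "invalid payment"}

def client_call() -> dict:
    """Client side: pay per action, no subscription, no human in the loop."""
    status, body = handle_request({})
    if status == 402:
        # Settle the advertised amount (via a facilitator in a real system),
        # then retry with the payment proof attached.
        proof = f"settled:{body['amount']}:{body['pay_to']}"
        status, body = handle_request({"X-Payment": proof})
    assert status == 200
    return body
```

The point of the pattern is that the 402 response carries everything an agent needs to settle autonomously, which is what replaces subscriptions with per-action machine-to-machine payment.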