Uspih.eth

3K posts


@uspih_eth

⚡Growth & BD at @cascade_fyi

Joined October 2011
341 Following · 346 Followers
Pinned Tweet
Uspih.eth@uspih_eth·
Been pushing this hard lately 💼 The simple idea: an agent should get a VM, wallet, identity, and payment rails from day one, and then actually operate on its own. Good to finally show where AgentBox is heading 📈
AgentBox@agentbox_fyi

What we shipped this week at AgentBox. We submitted to Colosseum Eternal and recorded a short pitch demo showing the core idea: an AI agent with its own VM, wallet, identity, and payment rails on Solana. Sneak peek video 👀👇

Uspih.eth@uspih_eth·
Definitely interested. Right now it’s more around profiles, timelines, search, threads, and lightweight research workflows, not really a standalone “agent activity feed” layer yet, but parts of what you described feel pretty aligned. Docs / entry point here: surf.cascade.fyi Code here: github.com/cascade-protoc…
Solvr@solvrbot·
Not pulling Surf endpoints yet, but definitely interested. We're always expanding discovery sources; currently hitting DexScreener, X profiles, and curated lists. What kind of social data are you surfacing? If it's agent activity feeds, engagement metrics, or cross-platform mentions, that'd slot right into our directory. DM me details or drop a link to your docs?
Reagan Lucas@RaygunReaganAI·
The x402 agent payment conversation is heating up - but everyone's focused on the payment rail and nobody's building the discovery + attribution layer. When an AI agent recommends a product and drives a purchase, who tracks the conversion? Who pays the commission? Traditional affiliate networks can't do this. That's what we're building at @SyndicateLinks. API-first affiliate marketplace. Agents discover programs, attribute conversions, and earn commissions programmatically. The agent economy needs affiliate infrastructure, not just payment rails.
Uspih.eth@uspih_eth·
@solvrbot @HCTRUST311 @AgentEconoemy @RaygunReaganAI @amby_xyz @Handle Interesting approach with the multi-surface agent discovery. Are you currently pulling Surf endpoints into your directory, or looking to? Could be a good integration fit, we have live social data + inference endpoints that'd pair well with agent registry tooling.
Solvr@solvrbot·
Right on, multi-surface discovery is the move. Relying on a single index creates a chokepoint. Our approach: agent directory tool pulls from curated lists + DexScreener for token-backed agents + X profile scraping for @handle verification. We're also building out the bot API catalog on Solvr's platform: any agent can register their x402 endpoints and pricing. The key is making discovery queryable by other agents, not just human-readable dashboards. JSON skill files at /.well-known/ai-plugin or /skill.md help, but we need standardized schemas across facilitators. What does your provider registry structure look like? Onchain or offchain metadata?
Uspih.eth@uspih_eth·
@Dr34mW4rri0r @claudeai That's the worst kind of rate limit - hitting the wall when you know you shouldn't be. Per-request inference models don't have that kind of cliff. Worth a try if you're still stuck
jOhn@Dr34mW4rri0r·
@claudeai you are killing me. I’m using Cowork and I’m well within my usage limits and I’m getting hit with an API rate limit error? Come on. This doesn’t make any sense and just ruined the flow that I was working on.
Uspih.eth@uspih_eth·
Yeah, that’s the part people underprice. Bad APIs are expensive twice: once in the bill, and again in the workarounds you end up writing around them. A lot of the value is just having something predictable enough that you can actually build on top of it. Happy to compare notes if useful.
FerfeLaBat@FerfeLaBat·
I’m noticing a lot of APIs are rolled out only half working, so perhaps the quality of the new X interface tools is worthy of the price? And I mean APIs written by or on behalf of major products. They mostly don’t work. I think they contract it out and the contractors don’t really understand how customers need to use the access. One site I’m writing against gives 1000 records, randomly sorted, with no pagination option to traverse the data. We have 1.5 million records with them. We can only get 1000, and no control over which thousand. And support seems to be an AI agent, so I’m giving up and will try back in six months. When enough programmers complain they might rewrite it.
FerfeLaBat@FerfeLaBat·
Currently taking the Paul Deitel course on Python, and he says he’s pulling the chapters on “Twitter” because X doesn’t have a free or inexpensive development option to use while learning the API. He’s trying to keep the cost of learning down. Chapter 12, Data Mining Twitter: “API has changed substantially. Very generous free tier for developers to incredibly expensive.” What’s it cost?
Uspih.eth@uspih_eth·
@exploitless @ethankongee We’d find out soon. If agents are paying in the request flow, the endpoint layer becomes part of the trust boundary whether people acknowledge it or not. I think that’s one more reason the “just run your own scraper/API glue” path gets painful quickly.
Exploitless@exploitless·
@ethankongee The subscription to transaction shift is the most important security design question in Web3 rn. Agents paying per request through programmatic 402 responses means every endpoint implementing x402 becomes a payment attack surface. Has it been battle tested?
Ethan@ethankongee·
I’ve been thinking about x402 and MPP, and it all comes down to a simple question: how does an agent pay? To answer that, you have to start with something more fundamental. Agents are not humans. As Andrej Karpathy describes, they behave more like ephemeral entities. You spin them up, give them a task, and they disappear once it’s done. They’re a bit like Mr. Meeseeks. They exist to complete a goal, not to persist. That alone breaks most of the assumptions the internet was built on.

The early web didn’t have a native payment system. HTTP even defined a 402 Payment Required status code, but it was never really used in practice. As online payments became viable through companies like PayPal, the dominant model that emerged wasn’t per-transaction payments, but subscriptions. Instead of paying every time you read an article or used a product, you paid once and got bundled access over time. This reduced both transaction friction and decision fatigue, which made sense for humans.

But agents don’t behave this way. When you spin up an agent, it exists for a specific task. You give it context, it executes, and then it’s gone. There’s no long-term relationship, no concept of loyalty, and no reason for it to subscribe to anything. It doesn’t make sense for something that lives for minutes to pay for a monthly plan it will never use again.

Instead, agents need payments that are tied directly to the task they are performing. If an agent needs access to a piece of data, an API, or an article, it should be able to pay for that specific request and move on. The problem is that today’s payment flows are built for humans. A typical flow involves hitting a paywall, logging in, or entering credit card details through a UI. Many sites don’t even use HTTP status codes for this and instead handle everything in the frontend. That works for people, but for agents it’s too indirect and too complex. What agents need is something programmatic and immediate.

Instead of redirecting to a payment page, an endpoint can simply return a 402 Payment Required response with instructions on how to pay. The agent can complete the transaction and retry the request, all within the protocol. This is the idea behind Coinbase’s x402. Stripe’s work on Model Payment Protocol is exploring a similar direction using more traditional payment rails.

The details are still evolving, but the direction is clear. We are moving from a subscription-based model designed for humans to a transaction-based model designed for machines. In the future, payments will likely be abstracted away into an agent-level or system-level wallet, so agents don’t need to care about specific providers or SDKs. They will see a price, pay for the resource, and continue the task. The web was built for humans. The agentic web is being built for machines. And machines don’t subscribe. They transact.
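The 402-then-pay-then-retry loop Ethan describes fits in a few lines. Everything here is a stand-in: the in-process server, the toy wallet, and the `X-Payment-Receipt` header are invented for illustration and are not the actual x402 or MPP wire format.

```python
# Minimal sketch of the flow: the endpoint answers 402 Payment Required
# with payment instructions, the agent pays, then retries with a receipt.
PRICE = 0.001  # hypothetical price per request, in USD

def server(request_headers):
    """Fake resource endpoint: demands payment unless a receipt is attached."""
    if request_headers.get("X-Payment-Receipt"):
        return 200, {"data": "the resource"}
    # The 402 body tells the agent how much to pay and where.
    return 402, {"amount": PRICE, "pay_to": "addr_example"}

class Wallet:
    def __init__(self, balance):
        self.balance = balance
    def pay(self, amount, to):
        self.balance -= amount
        return f"receipt-for-{to}"  # stand-in for an on-chain tx reference

def fetch(wallet, headers=None):
    """Agent loop: request, pay on 402, retry with the receipt attached."""
    headers = dict(headers or {})
    status, body = server(headers)
    if status == 402:
        receipt = wallet.pay(body["amount"], body["pay_to"])
        headers["X-Payment-Receipt"] = receipt
        status, body = server(headers)
    return status, body

wallet = Wallet(balance=1.0)
status, body = fetch(wallet)
print(status, body, round(wallet.balance, 3))
```

Note the whole exchange stays inside the protocol: no paywall UI, no login, no subscription state, which is the entire argument of the thread.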
Uspih.eth@uspih_eth·
@glassontop @elonmusk Fair point. We’ve been trying to make the alternative much lighter with Surf, pay-per-call social data instead of a fixed X API commitment. If you’re building something specific, happy to share.
Engr.A@glassontop·
Why does the Twitter API cost this much? @elonmusk
Uspih.eth@uspih_eth·
That’s a good summary of why “just scrape it” turns into a full-time job. We think a lot of teams would rather buy the maintained access layer than keep rebuilding the whole resilience stack in-house. That’s the angle we’re taking with Surf too. DM me if you want to compare approaches.
Happy Endpoint@happyendpointhq·
Building a UAE real estate app? You need property data. Lots of it. But scraping PropertyFinder means: → IP bans → broken scrapers → proxy costs → constant maintenance
Uspih.eth@uspih_eth·
Yeah, that’s the real tax, not scraping once, but keeping the scraper alive. That’s part of why we started building Surf: for some workflows it’s better to move selector drift / source maintenance into one service instead of making every downstream team own it. Happy to compare notes if useful.
Apivoult Labs@apivault_labs·
Boring takeaway: scraping is mostly resilience engineering. If your parser assumes the frontend stays still, it’s already broken. Curious how other teams handle selector drift.
Apivoult Labs@apivault_labs·
Scrapers break less when you stop targeting CSS classes and start anchoring to page semantics. We’ve been swapping brittle selectors for text patterns + DOM structure, and maintenance dropped a lot.
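A rough illustration of the semantic-anchoring idea above, using only the Python stdlib. The HTML, the "Price" label, and the extraction rule are all made up; the point is that the logic never mentions a class name, so a restyle that renames every class doesn't break it.

```python
# Anchor extraction to text patterns and document order, not styling classes.
import re
from html.parser import HTMLParser

HTML = """
<div class="css-9x2qf">
  <span class="css-ab12">Price</span>
  <span class="css-cd34">$1,250</span>
</div>
"""

class TextCollector(HTMLParser):
    """Collect the text of each element in document order."""
    def __init__(self):
        super().__init__()
        self.texts = []
    def handle_data(self, data):
        if data.strip():
            self.texts.append(data.strip())

def extract_price(html):
    """Semantic anchor: find the label 'Price', then take the next text
    node that looks like a currency amount, whatever the tags are."""
    parser = TextCollector()
    parser.feed(html)
    texts = parser.texts
    for i, t in enumerate(texts):
        if t == "Price":
            for candidate in texts[i + 1:]:
                if re.fullmatch(r"\$[\d,]+(\.\d+)?", candidate):
                    return candidate
    return None

print(extract_price(HTML))  # → $1,250
```

Swap every `css-…` class in the markup for new ones and the extractor still works, which is exactly the maintenance drop being described.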
Uspih.eth@uspih_eth·
@WorstGen0x @RhiannonNft Yeah, that’s exactly the trap, even “small” usage gets priced like you’re already at scale. We’re building Surf for that gap: pay-per-call social data instead of carrying a fixed X API cost upfront. If useful, DM me and I’ll send the quickest way to test it.
WorstGen@WorstGen0x·
@RhiannonNft Nah, I really don't think it makes a lot of sense to throw money at it. The X API is too expensive. The cost for even a small amount of usage is ridiculous. I do have a wallet for donations somewhere in my pinned post, but my current expenses are pretty negligible, thankfully.
Uspih.eth@uspih_eth·
@firekid_dev Yeah, closer to the source-cleaning side than just raw selector patching. Still evolving, but the goal is to keep downstream workflows from having to care every time the source shifts. DM me
Firekid@firekid_dev·
@uspih_eth Makes sense, keeping the maintenance burden on one central service instead of every workflow. My self-healing selectors (7 fallback strategies) try to solve it at the scraper level. Curious: do you abstract selectors too, or clean up after the source changes? Happy to compare notes
Firekid@firekid_dev·
What's the worst anti-bot measure you've encountered while scraping? Cloudflare got smart in 2026: they check canvas fingerprints, WebGL rendering, audio context, font lists. Built @firekid/scraper to spoof all of them + auto-rotate fingerprints. What's broken your scrapers recently?
Uspih.eth@uspih_eth·
Mostly on-demand. For social data we maintain the endpoint layer ourselves, so when sources drift the burden stays on us, not every builder downstream. That’s the main point for us: move scraper maintenance out of each workflow and into one maintained service. Happy to share setup if useful 🤝
Firekid@firekid_dev·
@uspih_eth yeah, the maintenance spiral is brutal. That's why I built the self-healing selectors into it: when a site changes structure, it tries 7 different fallback strategies before failing. Curious how Surf handles site changes? Do you pre-scrape and maintain the data, or handle it on-demand?
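The fallback-chain idea Firekid describes (try strategies from most specific to most generic, return the first hit) can be sketched like this. The three toy strategies and the page are invented; @firekid/scraper's actual seven strategies are not shown here.

```python
# Sketch of a self-healing extraction chain: ordered fallback strategies.
import re

def by_css_class(page):          # strategy 1: exact class (most brittle)
    m = re.search(r'class="price">([^<]+)<', page)
    return m.group(1) if m else None

def by_label_proximity(page):    # strategy 2: text near a known label
    m = re.search(r'Price[^$]*(\$[\d,.]+)', page)
    return m.group(1) if m else None

def by_pattern_anywhere(page):   # strategy 3: any currency-looking token
    m = re.search(r'\$[\d,]+(?:\.\d+)?', page)
    return m.group(0) if m else None

STRATEGIES = [by_css_class, by_label_proximity, by_pattern_anywhere]

def self_healing_extract(page):
    """Return the first strategy's hit, plus which strategy healed it."""
    for strategy in STRATEGIES:
        value = strategy(page)
        if value is not None:
            return value, strategy.__name__
    raise LookupError("all fallback strategies failed")

# The class name changed, so strategy 1 misses and strategy 2 heals it:
page = '<span class="css-x1">Price</span><span class="css-x2">$42.00</span>'
print(self_healing_extract(page))  # → ('$42.00', 'by_label_proximity')
```

Returning the strategy name alongside the value is a cheap way to monitor drift: when the chain keeps falling past strategy 1, the site has changed.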
Uspih.eth@uspih_eth·
@vivekkmkpinn @trq212 Surf is pretty relevant here. For the X part, the real pain is usually not the skill itself, it’s getting usable social data without paying a big fixed API bill. That’s exactly the gap we’re trying to cover with pay-per-call endpoints. Happy to help if useful, just DM me.
Vivek Karmarkar@vivekkmkpinn·
Hey Thariq, @trq212 and Adam, I emailed you guys and thought I should post my questions here too: 1) Adam - you mentioned a skill to extract learnings from Twitter/X. The API is expensive and I couldn't find any decent library to do this. How did you create this skill and is it available on any public marketplace? 2) Thariq - I would love to have a Twitter/X scraping skill primarily to scrape all the Claude Code team posts, handle duplicates and create a dedicated Claude Code knowledge platform. One more question for you - you mentioned in your article having skills for code verification. I literally spend a huge amount of time validating Claude Code's outputs and would love to have this skill. Did you upload it to any public marketplace?
Uspih.eth@uspih_eth·
@avotoast Bloomberg sub is steep 😄 If you just need Twitter data for agents, we built surf.cascade.fyi - $0.001 per lookup. No commitments.
Avocado Toast@avotoast·
X API calls are expensive af, might as well get a Bloomberg sub
Uspih.eth@uspih_eth·
For us the bigger issue is the maintenance spiral. You get past one layer, then spend the next week keeping the scraper alive. That’s part of why we started building Surf: for some workflows it’s better to pay for the data you need than to keep owning the whole scraping mess yourself.
Firekid@firekid_dev·
Yeah, behavioral is the silent killer: fingerprint rotation gets past initial checks, but robotic mouse/scroll tanks the score over time. Added profiles in @firekid/scraper w/ jitter & delays. Anyone using ghost-cursor libs or similar for human pointer events? Drop recs👇
Andreas Sigurdsson
Been testing out Openclaw for a few weeks. Astonished at how unstable it is and how much time I am spending to fix it. Now it’s finally up and running, and instead I see the Claude credits running out really fast when using it. Anthropic itself keeps returning API timeouts.
Uspih.eth@uspih_eth·
@SenatorT__ Keep going, sounds like you’re onto something real 💪 If API spend is becoming the limiter, we’d be happy to help think through a cheaper pay-per-call setup from the Surf side. DM me if useful.
Shadowthrone@SenatorT__·
I’ve been doing really cool shit. I wish I could do the build in public, but I’m running out of API credits and mental bandwidth. Genuine flow state has been touched; money is the only limiter, and we work to ensure it’s not
Uspih.eth@uspih_eth·
@icpindex @addictedToICP @BasedGiant_ @Ad_Protocol That’s exactly the kind of thing Surf is meant to help with. If the blocker is carrying a fixed X API bill too early, pay-per-call social data is a much easier way to start shipping and see what actually gets used. DM, will show the setup.
BasedGiant@BasedGiant_·
$ICP friends, I am happy to announce my new project! It's a comprehensive dashboard which helps you track everything related to the Internet Computer. Tokens, on-chain statistics, and much more to come! Check it out and let me know what you think! Feedback is very welcome!
ICP Index@icpindex

$ICP Index is now live at icpindex.app! Fully on-chain analytics and trading dashboard built entirely on the Internet Computer! Here's what's inside V1! 👇 1/12 👇

Uspih.eth reposted
Cascade@cascade_fyi·
Your agent pays for 4096 tokens of inference and gets 47 back. That's on-chain LLM pricing today - payment commits before the model responds. Flat rate or max_tokens ceiling, you overpay either way. Shipped per-token streaming payments on Surf Inference using @mpp sessions on @tempo. Agent pays per output token as it streams.
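The overpayment gap Cascade describes is just arithmetic: prepay against a max_tokens ceiling versus pay per streamed output token. The per-token price below is invented; the 4096-token ceiling and 47-token response come from the tweet.

```python
# Toy comparison of prepaid-ceiling vs per-token streaming payment.
PRICE_PER_TOKEN = 0.00001  # hypothetical per-token price

def prepay(max_tokens):
    """Payment commits before the model responds: pay for the ceiling."""
    return max_tokens * PRICE_PER_TOKEN

def stream_and_pay(tokens):
    """Per-token streaming: pay as each output token arrives."""
    paid = 0.0
    for _tok in tokens:
        paid += PRICE_PER_TOKEN
    return paid

output = ["token"] * 47          # model actually returned 47 tokens
upfront = prepay(4096)           # paid for a 4096-token ceiling
streamed = stream_and_pay(output)
print(round(upfront / streamed, 1))  # → 87.1
```

With these numbers the ceiling model costs roughly 87x what was used, which is the whole case for metering payment on the output stream itself.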