76

17 posts

76

@TheLastDevvor

slut lover. addicted to paying dex. sooo fucking viral. if you're reading this you're a loser.

Joined September 2023
0 Following · 26 Followers
76
76@TheLastDevvor·
@deuna1x yeah for sure bro
English
0
0
0
2
deuna
deuna@deuna1x·
Bitcoin was on ETH? Shouldn't the ticker be ETH
English
3
0
1
21
Florida Man
Florida Man@floridamandevs·
He built "pods", which essentially allows you to upload multiple devices to an AI cloud so they can sync up and work together. Literal cracked innovation on a project development level. Redirecting 100% of creator rewards to @varun_mathur and revoking edit permissions.
Varun@varun_mathur

Introducing Pods

Hyperspace Pods lets a small group of people - a family, a startup, a few friends - pool their laptops and desktops into one AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a mesh.

Models like Qwen 3.5 32B or GLM-5 Turbo that need more memory than any single laptop has get automatically sharded across the group's devices - layers split proportionally, inference pipelined through the ring. From the outside it looks like one OpenAI-compatible API endpoint with a pk_* key that drops straight into your AI tools and products. No configuration beyond pasting the key and changing the base URL.

A team of five paying for cloud AI burns $500–2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) for free - the hardware is already on their desks. When a query genuinely needs a frontier model nobody has locally, the pod falls back to cloud at wholesale rates from a shared treasury. But for the daily work - code reviews, refactors, research, drafting - local models handle it and nobody gets billed. And when your pod is idle, you can rent it out on the compute marketplace, with fine-grained permissions for access management.

There's no central server involved in inference. Prompts go from your machine to your pod members' machines and back, all of it enabled by the fully peer-to-peer Hyperspace network. Pod state - who's a member, which API keys are valid, how much treasury is left - is replicated across members with consensus, so the whole thing works on a local network. Members behind home routers don't need port forwarding either.

The practical setup for most pods is three models covering different jobs: Qwen 3.5 32B for code and reasoning, GLM-5 Turbo for browsing and research, Gemma 4 for fast lightweight tasks. All running on hardware you already own.

Pods ship today in Hyperspace v5.19. Model sharding, API keys, treasury, and the Raft coordinator are all live.

What Makes This Different
- No middleman. Your prompts travel from your IDE to your pod members' hardware and back. There is no server in between reading your data.
- No vendor lock-in. Pod membership, API keys, and treasury are replicated across your own machines using Raft consensus. If the internet goes down, your local network keeps working. There is no database in someone else's cloud that your pod depends on.
- Automatic sharding. You don't configure layer ranges or calculate VRAM budgets. Tell the pod which model you want. It figures out how to split it across whatever hardware is online.
- Real NAT traversal. Your friend behind a home router with a dynamic IP? Works. No VPN, no Tailscale, no port forwarding. The nodes handle it.
- Free when local. This is the part that matters most. Cloud AI bills scale with usage. Pod inference on local hardware scales with nothing. The marginal cost of your 10,000th prompt is the electricity your laptop was already using.

Coming soon:
- Pod federation: pods form alliances with other pods.
- Marketplace: pods with spare capacity can sell inference to other pods.

English
8
0
3
967
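
The pod described in the quoted announcement presents itself as a single OpenAI-compatible endpoint: paste the pk_* key, change the base URL, and existing tooling works unchanged. A minimal client-side sketch of that setup using the standard openai Python package; the endpoint address and the model identifier string are illustrative assumptions, not values given in the post:

```python
# Minimal sketch: pointing an off-the-shelf OpenAI client at a pod.
# Assumptions (not from the announcement): the pod listens at
# http://localhost:8000/v1 and exposes the model id "qwen-3.5-32b".
from openai import OpenAI

client = OpenAI(
    api_key="pk_your_pod_key_here",       # pod-issued pk_* key, per the post
    base_url="http://localhost:8000/v1",  # hypothetical pod endpoint
)

resp = client.chat.completions.create(
    model="qwen-3.5-32b",  # hypothetical identifier for Qwen 3.5 32B
    messages=[{"role": "user", "content": "Review this diff for bugs: ..."}],
)
print(resp.choices[0].message.content)
```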
76
76@TheLastDevvor·
@varun_mathur 589BX1KK8xeQ2ucFCyTgPKnfwgivvYaArApoWMjpump
Filipino
0
0
0
5
Varun
Varun@varun_mathur·
Introducing Pods

Hyperspace Pods lets a small group of people - a family, a startup, a few friends - pool their laptops and desktops into one AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a mesh.

Models like Qwen 3.5 32B or GLM-5 Turbo that need more memory than any single laptop has get automatically sharded across the group's devices - layers split proportionally, inference pipelined through the ring. From the outside it looks like one OpenAI-compatible API endpoint with a pk_* key that drops straight into your AI tools and products. No configuration beyond pasting the key and changing the base URL.

A team of five paying for cloud AI burns $500–2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) for free - the hardware is already on their desks. When a query genuinely needs a frontier model nobody has locally, the pod falls back to cloud at wholesale rates from a shared treasury. But for the daily work - code reviews, refactors, research, drafting - local models handle it and nobody gets billed. And when your pod is idle, you can rent it out on the compute marketplace, with fine-grained permissions for access management.

There's no central server involved in inference. Prompts go from your machine to your pod members' machines and back, all of it enabled by the fully peer-to-peer Hyperspace network. Pod state - who's a member, which API keys are valid, how much treasury is left - is replicated across members with consensus, so the whole thing works on a local network. Members behind home routers don't need port forwarding either.

The practical setup for most pods is three models covering different jobs: Qwen 3.5 32B for code and reasoning, GLM-5 Turbo for browsing and research, Gemma 4 for fast lightweight tasks. All running on hardware you already own.

Pods ship today in Hyperspace v5.19. Model sharding, API keys, treasury, and the Raft coordinator are all live.

What Makes This Different
- No middleman. Your prompts travel from your IDE to your pod members' hardware and back. There is no server in between reading your data.
- No vendor lock-in. Pod membership, API keys, and treasury are replicated across your own machines using Raft consensus. If the internet goes down, your local network keeps working. There is no database in someone else's cloud that your pod depends on.
- Automatic sharding. You don't configure layer ranges or calculate VRAM budgets. Tell the pod which model you want. It figures out how to split it across whatever hardware is online.
- Real NAT traversal. Your friend behind a home router with a dynamic IP? Works. No VPN, no Tailscale, no port forwarding. The nodes handle it.
- Free when local. This is the part that matters most. Cloud AI bills scale with usage. Pod inference on local hardware scales with nothing. The marginal cost of your 10,000th prompt is the electricity your laptop was already using.

Coming soon:
- Pod federation: pods form alliances with other pods.
- Marketplace: pods with spare capacity can sell inference to other pods.
English
157
275
2.9K
250.2K
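
The announcement also says oversized models get sharded with "layers split proportionally" across whichever devices are online. A rough, purely illustrative sketch of one way a proportional split could be computed from each device's free memory; the actual Hyperspace allocation logic isn't specified beyond that one phrase:

```python
# Illustrative sketch of proportional layer sharding (not Hyperspace's actual code).
# Each online device gets a contiguous slice of layers sized by its share of the
# pod's total free memory; inference would then be pipelined device to device.
def shard_layers(free_mem_gb: dict[str, float], num_layers: int) -> dict[str, range]:
    total = sum(free_mem_gb.values())
    shares, start = {}, 0
    devices = list(free_mem_gb.items())
    for i, (name, mem) in enumerate(devices):
        # Last device takes whatever remains so every layer is assigned exactly once.
        count = num_layers - start if i == len(devices) - 1 else round(num_layers * mem / total)
        shares[name] = range(start, start + count)
        start += count
    return shares

# Example: a 64-layer model over three machines with 24, 16, and 8 GB free.
print(shard_layers({"desktop": 24.0, "laptop-a": 16.0, "laptop-b": 8.0}, 64))
# -> {'desktop': range(0, 32), 'laptop-a': range(32, 53), 'laptop-b': range(53, 64)}
```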
76
76@TheLastDevvor·
@VVmmao the ticker is ass on that one come buy the better ticker DfeR89DJBdGH1Ao5RmEchsRxh9eUe4w7Ce1K9cz9pump
English
0
0
0
8
510P | GDC
510P | GDC@VVmmao·
He endorsed it BYzp8dFmALKAp4L9MhcP4DjJm5Ncac5RMH1jrLHjpump
510P | GDC tweet media
Indonesia
1
0
0
147
RT
RT@RT_com·
This Trump-Pope UFC tops trends on X
English
71
603
2K
77.3K
76
76@TheLastDevvor·
@xBelieveOnSOL can u make it so we can export pks now or just make it so we can claim creator fees from the website
English
0
0
1
18
76
76@TheLastDevvor·
@xBelieveOnSOL let us export the private keys of our wallets
English
0
0
0
57
xBelieve
xBelieve@xBelieveOnSOL·
As we can see, transferring money on X has never been easier than it is today. Everyone can now tip their favorite person on X. We’re also working on improving our prediction market, and we’d love for you to join our feed at xBelieve.fun
xBelieve tweet media
English
33
10
58
4.6K
76
76@TheLastDevvor·
@xBelieveOnSOL you still working on adding the pk export?
English
0
0
0
21
xBelieve
xBelieve@xBelieveOnSOL·
You can send money on X to any user using 𝕏Believe, even if they’re not registered. The money will be held in a wallet until they log in and claim it. The bot will automatically send them a reply with a claim link. xbelieve.fun
xBelieve tweet media
English
26
10
53
6.3K
76
76@TheLastDevvor·
@xBelieveOnSOL oh that's a good way to do it i suppose. up to you to be honest, but make that clear somewhere on the claim page
English
0
0
0
51
76
76@TheLastDevvor·
@xBelieveOnSOL you adding it soon? i just want to be able to claim the creator rewards on my coin, i made the first one on there
English
0
0
0
15
xBelieve
xBelieve@xBelieveOnSOL·
Send SOL, USDC & 30+ tokens to anyone on X with one tweet. Custodial wallets · Token launches · Prediction markets. Example: @xBelieveOnSOL send .05 SOL to @username Website: xBelieve.fun
xBelieve tweet media
English
70
21
108
27.6K
76
76@TheLastDevvor·
@xBelieveOnSOL thank you, love the project
English
0
0
0
51