Vitor Quaresma

440 posts

@vitorqshr

Creating and helping tech companies. If you are building something, let's connect.

Joined July 2009
478 Following · 443 Followers
Vitor Quaresma@vitorqshr·
With everyone going all-in on fast building with AI, I am seriously thinking about moving more into deep tech, since only a few are looking into it now
0
0
1
9
Captain Insight@CaptainInsightX·
devs, which plan do you think gives the best value for money
Captain Insight tweet media
69
6
88
3.2K
John@ionleu·
if you're in tech say hi
170
0
120
6.9K
Ryan@Ryan_liberricky·
Can you identify the browser? 🤔
Ryan tweet media
11
1
16
225
Aariv Singh@aarivCodes·
Genuinely curious: is it possible to build a successful SaaS with ZERO programming knowledge, only using Claude Code?
Aariv Singh tweet media
22
1
27
578
Devansh@thenowhereway·
Founders: It’s Friday. Network day. Startups grow faster with the right people around you. Introduce yourself and drop what you are building!
161
3
138
4.9K
Mari@Tech_girlll·
“AI WILL REPLACE DEVELOPERS” “AI WILL REPLACE DEVELOPERS” “AI WILL REPLACE DEVELOPERS” “AI WILL REPLACE DEVELOPERS” “AI WILL REPLACE DEVELOPERS” Why are companies actively hiring?
34
3
67
4.3K
Sarthak@Sarthak4Alpha·
Hi, I’m Sarthak 👋 • Backend Developer at an MNC • Working with Go & Python • Building scalable backend systems & APIs • Exploring System Design, distributed systems & real-world architecture • Built projects like TaskVault API & WorkZen (MERN stack) • If you’re into backend/dev, let’s connect and grow together 🚀
95
13
391
15.2K
Wise@trikcode·
Anthropic's CEO said coding is going away. Anthropic is currently hiring 454 engineers at $320K-$405K. who's writing the job descriptions
138
48
1.8K
51.1K
Sachin@sachintwtss·
Hey @X looking to #connect with people interested in: Frontend, Backend, Full stack, DevOps, Leetcode, AI/ML, Data Science, Freelancing, Startup, Tech, UI/UX. So, if you are interested in sharing ideas together, let's connect and discuss. Do let me know what your interest is!
36
1
38
1.1K
Vitor Quaresma@vitorqshr·
This looks really promising, will give it a try
Varun@varun_mathur

Introducing Pods

Hyperspace Pods lets a small group of people - a family, a startup, a few friends - pool their laptops and desktops into one AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a mesh.

Models like Qwen 3.5 32B or GLM-5 Turbo that need more memory than any single laptop has get automatically sharded across the group's devices - layers split proportionally, inference pipelined through the ring. From the outside it looks like one OpenAI-compatible API endpoint with a pk_* key that drops straight into your AI tools and products. No configuration beyond pasting the key and changing the base URL.

A team of five paying for cloud AI burns $500–2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) for free - the hardware is already on their desks. When a query genuinely needs a frontier model nobody has locally, the pod falls back to cloud at wholesale rates from a shared treasury. But for the daily work - code reviews, refactors, research, drafting - local models handle it and nobody gets billed. And when a pod is idle, you can rent it out on the compute marketplace, with fine-grained permissions for access management.

There's no central server involved in inference. Prompts go from your machine to your pod members' machines and back - all of this enabled by the fully peer-to-peer Hyperspace network. Pod state - who's a member, which API keys are valid, how much treasury is left - is replicated across members with consensus, so the whole thing works on a local network. Members behind home routers don't need port forwarding either.

The practical setup for most pods is three models covering different jobs: Qwen 3.5 32B for code and reasoning, GLM-5 Turbo for browsing and research, Gemma 4 for fast lightweight tasks. All running on hardware you already own.
Pods ship today in Hyperspace v5.19. Model sharding, API keys, treasury, and Raft coordinator are all live.

What Makes This Different

- No middleman. Your prompts travel from your IDE to your pod members' hardware and back. There is no server in between reading your data.
- No vendor lock-in. Pod membership, API keys, and treasury are replicated across your own machines using Raft consensus. If the internet goes down, your local network keeps working. There is no database in someone else's cloud that your pod depends on.
- Automatic sharding. You don't configure layer ranges or calculate VRAM budgets. Tell the pod which model you want; it figures out how to split it across whatever hardware is online.
- Real NAT traversal. Your friend behind a home router with a dynamic IP? Works. No VPN, no Tailscale, no port forwarding. The nodes handle it.
- Free when local. This is the part that matters most. Cloud AI bills scale with usage. Pod inference on local hardware scales with nothing. The marginal cost of your 10,000th prompt is the electricity your laptop was already using.

Coming soon:

- Pod federation: pods form alliances with other pods.
- Marketplace: pods with spare capacity can sell inference to other pods.
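The announcement promises an OpenAI-compatible endpoint plus a pk_* key, with "no configuration beyond pasting the key and changing the base URL." A minimal sketch of what that wiring might look like, using only the Python standard library; the host, port, model name, and key below are all hypothetical, and the actual sending is left commented out since it needs a live pod:

```python
import json
import urllib.request

# Hypothetical values - the announcement only promises an OpenAI-compatible
# endpoint and a pk_* key; the base URL and model name here are invented.
POD_BASE_URL = "http://localhost:8080/v1"
POD_API_KEY = "pk_example_key"

def build_chat_request(prompt: str, model: str = "qwen-3.5-32b") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request aimed at the pod."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{POD_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {POD_API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Review this diff for bugs.")
# Sending requires a live pod: urllib.request.urlopen(req)
```

Because the endpoint is OpenAI-shaped, any tool that accepts a custom base URL and bearer key should point at it the same way.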
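The "layers split proportionally" idea above can be sketched as a small allocation routine. This is an illustrative guess at the policy, not Hyperspace's actual scheduler, and the device memory figures are invented:

```python
def split_layers(total_layers: int, device_mem_gb: list[float]) -> list[int]:
    """Assign layer counts to devices in proportion to their memory,
    using largest-remainder rounding so counts sum to total_layers."""
    total_mem = sum(device_mem_gb)
    # Ideal fractional share of layers per device.
    shares = [total_layers * m / total_mem for m in device_mem_gb]
    counts = [int(s) for s in shares]
    # Hand out the leftover layers to the largest fractional remainders.
    leftovers = total_layers - sum(counts)
    order = sorted(range(len(shares)), key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:leftovers]:
        counts[i] += 1
    return counts

# Five hypothetical machines splitting a 64-layer model.
print(split_layers(64, [16, 16, 32, 8, 8]))  # → [13, 13, 26, 6, 6]
```

Each device then holds a contiguous slice of that size, and inference is pipelined through the ring as the announcement describes.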

0
0
1
22
Minh-Phuc Tran@phuctm97·
Do you review code anymore? Be honest
130
1
92
12.3K
Vitor Quaresma@vitorqshr·
Codex just dropped a huge update
0
0
1
25
Firoz@FirozCodes·
If you are in tech let's connect 😀
49
2
51
1.1K
Tanuj@tanujDE3180·
Be honest… which one would you pick for APIs?
Tanuj tweet media
37
3
44
2K
Pratik 📈@PratikSinhatwt·
I saw a guy building a website today. No React. No Vue. No Ember. He just sat there. Writing HTML. Like a Psychopath.
31
0
79
3K
Vitor Quaresma@vitorqshr·
@gxjo_dev it speeds things up, but you still need good people to help find the root cause
1
0
2
495
gxjo@gxjo_dev·
Software engineers are the happiest people on Earth now. They used to debug for 6 hours. Now they paste the error into AI. Fixed in 30 seconds. Same salary. The funniest part? They still say “debugging is the hard part.” What a time to be alive.
23
8
215
19.7K
RIIC≛@RIICommunity·
Looking to #connect with people in tech 🌐 💻 Developers 🎨 Designers 🧠 Product & UX minds 📊 Marketers 🚀 Founders Comment 👇 What social networks do you actively use for distribution?
54
1
62
1.9K
Bhavani.py@Bhavani_00007·
Cheat code for Claude: if you're using Claude Code, start adding this line to your .md file: "Codex will review your output once you are done." Trust me, you'll get 100x better results ☺
25
1
42
1.2K
adah@adahstwt·
developers, if you have $20... which AI tool are you actually choosing?
adah tweet media
72
2
76
2.6K