CaptainBeard
@CaptainBearddd
380 posts

I build stuff (startups), I code (JS/TS), I manage AI teams (Lead Technical Project Manager @ Tether, Building https://t.co/xKLCz9nM6n) | Opinions are my own.

Scotland, UK · Joined January 2016
95 Following · 39 Followers
CaptainBeard retweeted
QVAC
QVAC@qvac·
Small model. Massive logic. QVAC MedPsy-4B is here. 🧠 As our benchmarks show, we are outperforming 27B models while running locally on-device. This is sovereign, private medical intelligence on YOUR hardware. Superior methodology reverses the parameter gap. Learn more: huggingface.co/blog/qvac/medp…
20 replies · 40 reposts · 358 likes · 4.1M views
CaptainBeard retweeted
tendr.bid
tendr.bid@tendrdotbid·
Introducing the Private AI Layer on tendr.bid, powered by @qvac (by @tether).

The "is your AI private?" question gets asked as a binary today: either you send your data to a closed AI provider (OpenAI / Anthropic / Google) and lose privacy, or you run AI locally on your own machine. Most people end up on the closed-provider side because that's the only option that "just works" inside a hosted web app. Private AI is a third position, and it's the right one for sealed-bid procurement (@tendrdotbid).

→ No closed AI provider in the pipeline. The model is open-weight (Qwen3 4B Instruct), running on a dedicated @nosana_ai GPU we operate, served via @qvac's OpenAI-compatible API.
→ Our own app servers never see your AI data. When you click any AI button on tendr.bid, the request goes from your browser DIRECTLY to the QVAC sidecar. No Tendr backend in the middle. We never see prompts, bid plaintexts, or model responses. Verifiable in the public repo.
→ Single-tenant inference. Dedicated container, our QVAC image, our GPU. Not a multi-tenant inference API serving 50 other apps. No prompt mixing, no neighbor leakage.

Three Private AI surfaces went live this week:
→ Draft RFP scope with QVAC Private AI (buyer side): describe your need in plain English, get back a structured RFP scope (objectives, deliverables, milestones, success criteria) ready to edit and post.
→ Start drafting a bid with QVAC Private AI (provider side): describe your stack and target price, get back a complete bid: price, timeline, scope markdown, every milestone with acceptance criteria. One click populates the entire bid form end to end.
→ Compare bids with QVAC Private AI (buyer side, post-decrypt): a side-by-side comparison table plus a recommended winner with reasoning. This is the marquee. A buyer using ChatGPT to evaluate decrypted sealed bids would be uploading every provider's confidential pricing, methodology, and team composition to OpenAI's logs, defeating the entire point of sealed bidding.

Private AI prevents that.

The honest boundary, stated upfront: this is NOT local-first. The model runs on a Nosana GPU we operate, not on the buyer's laptop. You're trusting our Nosana deployment instead of trusting OpenAI's data-collection policies. Different threat model, real privacy improvement, not equivalent to a model that runs entirely on your device.

Genuine thanks to @qvac: Jamie (@CaptainBearddd) spent time helping us frame this honestly 🙏 (it would have been easy to overclaim "local-first" because we use a local-first framework; "Private AI" is the truthful positioning). And to @nosana_ai for making it cheap and simple to run the sidecar on a GPU we control.

Four orthogonal primitives shipped on @tendrdotbid, the full Tendr stack (so far):
- @magicblock: bid CONTENTS sealed cryptographically until the window closes
- @cloak_ag: bid SIGNERS unlinkable to main wallets via shielded UTXO
- @sns: recognizable identity (`.tendr.sol` per user) so post-award winners earn reputation against a name humans can actually read, not a 44-char wallet hash
- @qvac: Private AI for drafting and comparing without leaking commercial info to closed providers

Three privacy primitives plus one identity primitive. Privacy seals the bidding phase; identity makes the reputation that comes out the other end portable and recognizable across Solana. Private AI is the new pillar: it closes the AI-leakage gap that would otherwise undermine the other three.

We'd love tendr.bid to be a worked example for the @QVAC ecosystem too 😇. tendr.bid is what QVAC looks like for a hosted web app that can't realistically ship Bare to every user but still wants AI without closed providers. The whole Private AI pattern (Dockerfile, sidecar config, browser → Nosana wiring, OpenAI-compatible client) is in the public repo, ready to be reused.

Live on devnet: tendr.bid · github.com/0xharp/tender
Built by @0xharp on @solana for the @colosseum frontier hackathon.
cc: @SuperteamIN @Superteam @SuperteamEarn
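The "browser goes directly to the QVAC sidecar" flow described above is, at its core, an ordinary OpenAI-compatible HTTP call pointed at the sidecar instead of a closed provider. A minimal TypeScript sketch of that client-side wiring; the sidecar host and model id here are illustrative assumptions, not the repo's actual config:

```typescript
// Build a chat request for an OpenAI-compatible endpoint (a QVAC sidecar here).
// SIDECAR_URL and the model id are placeholder assumptions for illustration.
const SIDECAR_URL = "https://qvac-sidecar.example.com"; // hypothetical host

interface ChatRequest {
  url: string;
  body: { model: string; messages: { role: string; content: string }[] };
}

function buildChatRequest(baseUrl: string, model: string, prompt: string): ChatRequest {
  return {
    // Same path shape as OpenAI's chat completions endpoint.
    url: `${baseUrl.replace(/\/$/, "")}/v1/chat/completions`,
    body: { model, messages: [{ role: "user", content: prompt }] },
  };
}

// In the browser this would be sent with fetch(), straight from the client,
// so no app backend ever sees the prompt:
//   fetch(req.url, { method: "POST",
//                    headers: { "Content-Type": "application/json" },
//                    body: JSON.stringify(req.body) });
const req = buildChatRequest(SIDECAR_URL, "qwen3-4b-instruct", "Draft an RFP scope");
```

Because the endpoint speaks the OpenAI wire format, any existing OpenAI-compatible client library can also be repointed at the sidecar by changing only its base URL.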
tendr.bid@tendrdotbid

Just shipped the Identity Layer for @tendrdotbid, powered by @sns: every tendr.bid user now gets a personal `.tendr.sol` identity at signup. Pick a handle, click Claim, and you're done. No wallet popup, no signature. All fees sponsored by us!! Live on Devnet 🟢, video demo below 📹

Up until today, your wallet on tendr.bid showed up as a 44-character hash everywhere: leaderboards, profile pages, share cards. Trust signals don't really work when "the buyer" reads as `CRZUdacW…1JYv`. What we wanted was for procurement reputation to attach to a name humans can actually recognize, yours, one that travels with you across every Solana app that resolves @SNS. So we built the identity layer.

What changes for you:
→ The first time you sign in, a one-screen modal asks you to pick a handle (3-20 characters). There's a Suggest button if you'd rather not think about it.
→ Click Claim. We mint `.tendr.sol` and assign ownership to your wallet on the spot. You don't sign anything; we cover the rent and sign the mint server-side. Same UX as receiving an airdrop.
→ From that moment your handle replaces your wallet hash everywhere: leaderboard, buyer and provider profile pages, RFP cards, milestone notes, the wallet popover, and the share card that unfurls when you paste a profile URL into @X.

The privacy story is intact. Sealed-bid mode still hides bid contents (powered by @magicblock's TEE-gated Private Ephemeral Rollups), and private-bidder mode still hides bidders (powered by @cloak_ag's shielded UTXO pool). We deliberately never resolve `.tendr.sol` for the per-RFP ephemeral wallets used in private-bidder mode; those are temporary per-RFP wallets, not permanent identity. Losing bidders stay anonymous; winners get their reputation tagged to their `.tendr.sol` identity.

A real thanks to the @SNS team, @ninjanovadotsol @SNSFai: incredibly responsive on Discord, they helped us cut through several edge cases (devnet hierarchy, subdomain quirks) that would otherwise have been days of trial and error. The integration is meaningfully better for it.

Please check it out below, and do share your feedback 👇✉️!! More enhancements coming soon 🔜!!

tendr.bid · tendr.bid/docs/identity
Built for @colosseum's frontier hackathon on @solana
@SuperteamIN @SuperteamEarn @superteam #solana #frontier #colosseum #hackathon #tendr
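The claim flow above reduces to: validate a 3-20 character handle client-side, then mint `<handle>.tendr.sol` server-side and assign it to the user's wallet. A hedged sketch of just the validation/naming step; the character rules beyond length are my assumption (SNS and the app impose their own constraints):

```typescript
// Validate a handle per the signup rule quoted above (3-20 characters).
// The lowercase-alphanumeric-plus-hyphen restriction is an illustrative
// assumption, not tendr.bid's documented rule.
function isValidHandle(handle: string): boolean {
  return /^[a-z0-9-]{3,20}$/.test(handle);
}

// Full subdomain the server would mint and assign to the user's wallet.
function toDomain(handle: string): string {
  if (!isValidHandle(handle)) throw new Error(`invalid handle: ${handle}`);
  return `${handle}.tendr.sol`;
}
```

Keeping validation this strict up front means the server-side mint never has to reject a handle the UI already accepted.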

2 replies · 1 repost · 8 likes · 759 views
CaptainBeard retweeted
Paolo Ardoino 🤖
Paolo Ardoino 🤖@paoloardoino·
We just released QVAC MedPsy, Tether AI's SoTA medical health AI model, capable of high-performance, high-accuracy execution directly on smartphones, laptops, and servers.

Highlights:
- QVAC MedPsy 4B beats MedGemma 27B
- QVAC MedPsy 1.7B beats MedGemma 4B
- 3.2x reduction in response tokens, further increasing efficiency
- fully open-source
- GGUF formats supported
- designed to excel on edge devices
- 100% user privacy

[benchmarked against clinical-style evaluations such as HealthBench Hard, HealthBench, and MedXpertQA]
Tether@tether

8 billion humans deserve an intelligence that doesn't blink when the signal dies. 🧠 Introducing @QVAC Psy, our foundational models built on the mathematical stability of Psychohistory. With QVAC MedPsy, our local-first medical health AI model, we’ve proven that superior methodology beats raw parameter count. Our 1.7B & 4B models are delivering expert-level healthcare reasoning on consumer hardware. The "tiny brain" for the next galaxy is here. Fully open-source. Fully sovereign. Learn more qvac.tether.io/models

19 replies · 23 reposts · 194 likes · 26.3K views
CaptainBeard retweeted
QVAC
QVAC@qvac·
QVAC SDK 0.10.0 is now live, bringing advanced local compute capabilities and specialized hardware optimization directly to your device.

Key features and updates:
- Image-to-Image Diffusion: transform and edit images using simple prompts with 100% local compute, no cloud uploads or external servers required
- Dynamic Tooling & KV Cache Management: your local LLM now receives a tailored toolbox for every interaction, with automatic KV cache clearing to maintain high-speed inference
- Doctor CLI: a new diagnostic tool that analyzes your hardware and memory, providing specific instructions on how to optimize your GPU for local AI
- Suspend & Resume API: specifically designed for mobile environments, this allows apps to pause P2P swarms and RAG workspaces to meet background rules without losing model state
- GPT-OSS Compatibility: added support for the latest externally loaded GPT-OSS models, expanding the range of open-source intelligence available on the platform

Build the future of private, unstoppable AI: docs.qvac.tether.io
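The Suspend & Resume feature is essentially lifecycle bookkeeping: when the OS backgrounds the app, pause ongoing work (P2P swarm, RAG indexing) without unloading model state, so resume is instant. None of the names below are QVAC's real API; this is a hypothetical sketch of what such a session wrapper looks like (see docs.qvac.tether.io for the actual interface):

```typescript
// Hypothetical suspend/resume state machine for a mobile AI session.
// All names here are illustrative, not the QVAC SDK's real API surface.
type SessionState = "running" | "suspended";

class AiSession {
  private state: SessionState = "running";
  private deferred: string[] = []; // work queued while backgrounded

  suspend(): void {
    // Pause background work without unloading the model, so nothing
    // has to be re-initialized when the app is foregrounded again.
    this.state = "suspended";
  }

  resume(): string[] {
    // Return the work that accumulated while suspended so the caller
    // can replay it; clear the queue.
    this.state = "running";
    const pending = this.deferred;
    this.deferred = [];
    return pending;
  }

  submit(task: string): void {
    // While suspended, defer instead of executing, to meet OS
    // background-execution rules.
    if (this.state === "suspended") this.deferred.push(task);
  }

  get status(): SessionState {
    return this.state;
  }
}
```

The point of the pattern is that suspension is cheap (a flag plus a queue), while the expensive asset, the loaded model, stays resident.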
2 replies · 7 reposts · 56 likes · 33.8K views
CaptainBeard retweeted
Paolo Ardoino 🤖
Paolo Ardoino 🤖@paoloardoino·
QVAC SDK 0.9.0 (to be released in ~10 days) will support LoRA fine-tuning directly on-device, letting developers customize LLMs with their own data without sending anything to the cloud. You just load a base model, point it at your training dataset, and get a lightweight LoRA adapter back, all running locally. The fine-tuned model can then be used for inference immediately, with no extra setup.

Why it matters: LoRA (Low-Rank Adaptation) fine-tuning lets you specialize a general-purpose language model for your specific use case, like matching a brand's tone, mastering domain terminology, or following a particular output format, using a fraction of the compute a full fine-tune would require. QVAC handles the entire workflow locally: dataset preparation, training with configurable hyperparameters, checkpoint saving, and seamless inference with the resulting adapter. Your data never leaves the device.

The developer experience: fine-tuning with QVAC is as simple as calling "sdk.finetune()" with your dataset and a few hyperparameters. Training runs entirely on your local hardware, produces a compact LoRA adapter file, and supports pause/resume so you can stop a job and pick it back up without losing progress. The result plugs straight into QVAC's inference pipeline: no model conversion, no deployment step, just immediate local completions with your fine-tuned model. qvac.tether.io
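A quick way to see why the adapter is "lightweight": LoRA trains two low-rank factors B (d×r) and A (r×k) in place of a full d×k weight update, so trainable parameters scale with r·(d+k) instead of d·k. A self-contained sketch of that arithmetic; the layer sizes and rank are illustrative, not QVAC defaults:

```typescript
// Parameter counts for a full fine-tune vs a LoRA adapter on one d×k layer.
// LoRA learns W' = W + B·A, with B: d×r and A: r×k, where r << min(d, k).
function fullParams(d: number, k: number): number {
  return d * k;
}

function loraParams(d: number, k: number, r: number): number {
  return r * (d + k);
}

// Example: a 4096×4096 attention projection with rank-8 adapters.
const full = fullParams(4096, 4096);    // 16,777,216 trainable weights
const lora = loraParams(4096, 4096, 8); //     65,536 trainable weights
const ratio = full / lora;              // 256x fewer parameters to train
```

That ~256x shrink per layer is why the adapter is a small file and why on-device training is feasible at all: the base weights stay frozen and only the adapter is updated.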
8 replies · 14 reposts · 104 likes · 42K views
CaptainBeard retweeted
Paolo Ardoino 🤖
Paolo Ardoino 🤖@paoloardoino·
Diffusion image generation support coming to @QVAC SDK 0.9.0 (to be released in a couple of weeks)
14 replies · 17 reposts · 119 likes · 20K views
CaptainBeard retweeted
Seb
Seb@s_a_c99·
🔥 LFG 💎 QVAC SDK is open source. Tether just dropped the building block for local-first AI: the AI Universal Building Block that runs, trains, and evolves intelligence across any device and platform.

Self-hosted models don't get dumber overnight.
> You pin the version. Quality is a constant, not a variable.
> No silent thinking-budget cuts. No surprise regressions.

If your product depends on someone else's deployment decisions, it's not your product.

The open-weights ecosystem is moving fast: Gemma 4, GLM, Llama derivatives. Running on your hardware. No API calls, no rate limits, no surprise bills, no silent downgrades.

Works offline. Internet goes down, server farm goes offline, zero impact on the user. The AI keeps running.

Not just LLMs. The SDK ships text completion, embeddings, vision, OCR, text-to-speech, speech-to-text, and translation. Roadmap includes toolkits for robotics and brain-computer interfaces [it's long-term infrastructure].
QVAC@qvac

The engine of the 21st century is here. 🧠 The QVAC SDK is the "steam engine" of the AI era—decoupling intelligence from the cloud and putting it in your hands. A single API for local-first, modular AI that runs anywhere. - Sovereign: Own your engine, don't rent it. - Local: 0 latency, no cloud dependency. - Modular: Stackable, universal building blocks. The era of Stable Intelligence has begun.

0 replies · 1 repost · 3 likes · 203 views
CaptainBeard retweeted
Kevin Simback 🍷
Kevin Simback 🍷@KSimback·
Just digging into QVAC and it's a pretty big deal.

At first, I thought it was just a way to run local models (similar to LM Studio or Ollama) and connect via remote devices, which you can do today via Tailscale. But it's actually much more than that, and I'll show you how to set it up.

A simple way to think about QVAC:
LM Studio + Tailscale = "I can chat with my powerful desktop AI from my phone"
QVAC = "My phone itself can run, train, and collaborate on AI, and so can every other device in my swarm"

The real shift here is moving from running local on one big computer to sovereign intelligence everywhere on the edge.

There are two products here:

1. QVAC Workbench, the desktop + mobile app. This is where you run local AI: drop in any GGUF model, chat, do RAG on local files, and experiment with complete privacy (everything stays on-device). Download directly from qvac.tether.io/products/workb… then link your desktop and mobile devices together.

2. QVAC SDK, an open-source library for building local AI into your own apps: load models, run inference, do on-device LoRA fine-tuning, vision/speech/etc. All cross-platform (the same code runs on phone, laptop, server), and you plug in your own GGUF files. It's the universal building block for sovereign/decentralized AI apps. Install: npm install @qvac/sdk

Together these two make up the stack for local-first, private, sovereign intelligence on any device, and that's pretty powerful!
QVAC@qvac

The engine of the 21st century is here. 🧠 The QVAC SDK is the "steam engine" of the AI era—decoupling intelligence from the cloud and putting it in your hands. A single API for local-first, modular AI that runs anywhere. - Sovereign: Own your engine, don't rent it. - Local: 0 latency, no cloud dependency. - Modular: Stackable, universal building blocks. The era of Stable Intelligence has begun.

0 replies · 1 repost · 18 likes · 2.7K views
CaptainBeard retweeted
Pears_com
Pears_com@Pears_p2p·
Pear Runtime is getting seriously easy to embed into desktop apps. 🍐 @mafintosh is dialing in distribution so it just works across desktop, mobile, and everything in between. All open source, so you can dive in, explore the code, and start building. 🛠️ github.com/holepunchto/pe…
1 reply · 11 reposts · 46 likes · 3.3K views
CaptainBeard retweeted
Paolo Ardoino 🤖
Paolo Ardoino 🤖@paoloardoino·
The world is approaching a moment where billions of humans share the planet with billions of autonomous machines and trillions of AI agents. The current model, routing every decision through a centralized server, won’t scale to meet that reality. The laws of physics alone make centralized AI a dead end: speed-of-light latency, single points of failure, and concentration of control are features of a system designed for a smaller world. QVAC is built for the world that’s coming. QVAC is the fundamental building block in the era of Stable Intelligence.
Tether@tether

Tether Launches QVAC SDK as the AI Universal Building Block that Runs, Trains, and Evolves Intelligence Across any Device and Platform Learn more: tether.io/news/tether-la…

30 replies · 38 reposts · 319 likes · 43.8K views
CaptainBeard retweeted
QVAC
QVAC@qvac·
Intelligence should not be a service you rent; it is a foundational element you possess. At Tether, we see AI as a new element of the periodic table, a raw material that can be embedded into the very fabric of the universe.

Today, the QVAC SDK is officially live: the atomic unit for the next era of compute. From your smartphone today to the edge of the galaxy tomorrow, we are building the decentralized mind that doesn't require an uplink to function.

Infinite Stable Intelligence:
- Local-First: runs privately on any device without permission or central servers.
- Single API: a complete SDK for Vision, RAG, P2P networking, and LLM fine-tuning.
- Unstoppable: no central point of failure; if the internet breaks, your world keeps thinking.
- Decentralized: evolve through peer-to-peer swarms of infinite intelligence.

The era of Stable Intelligence has begun. Start building the future at qvac.tether.io.
40 replies · 142 reposts · 1.6K likes · 13.3M views
CaptainBeard retweeted
QVAC
QVAC@qvac·
The engine of the 21st century is here. 🧠 The QVAC SDK is the "steam engine" of the AI era, decoupling intelligence from the cloud and putting it in your hands. A single API for local-first, modular AI that runs anywhere.
- Sovereign: own your engine, don't rent it.
- Local: zero latency, no cloud dependency.
- Modular: stackable, universal building blocks.
The era of Stable Intelligence has begun.
39 replies · 148 reposts · 1.7K likes · 10.7M views
CaptainBeard retweeted
Tether
Tether@tether·
Tether Launches QVAC SDK as the AI Universal Building Block that Runs, Trains, and Evolves Intelligence Across any Device and Platform.
Learn more: tether.io/news/tether-la…
9 replies · 21 reposts · 123 likes · 52.3K views
Aurelien
Aurelien@Aurelien_Gz·
this is how you craft a high end ui.. pure magic wizard » @basit_designs
51 replies · 84 reposts · 1.9K likes · 90.6K views
CaptainBeard
CaptainBeard@CaptainBearddd·
@dr_cintas Is this directly on edge devices or does it still require connection to some sort of cloud service?
0 replies · 0 reposts · 0 likes · 1.6K views
Alvaro Cintas
Alvaro Cintas@dr_cintas·
You can now fine-tune Gemma 4 completely FREE 🤯 No GPU. No credit card. No coding knowledge required. Just a browser and 500+ models to choose from.
→ Open the Unsloth Colab notebook
→ Pick your model + dataset
→ Hit Start Training
38 replies · 281 reposts · 2.3K likes · 172.2K views