

ꓘris | ⏳🇲🇹🇮🇹🇨🇭
@m4dbi7
0ld Skoolz, Digital nomad and Father | IT & Compliance | Blockchain Infra Veteran | My tweets are my own. Doers at @luganodes, passed by Cardano & Stakefish

VIDEO | Israeli National Security Minister Ben Gvir, outside the Knesset chamber, celebrates the passing of the death penalty law for Palestinian detainees, describing it as historic and saying, “Soon we will count them one by one.”

EDGE/ON-DEVICE AI INFERENCE AND FINE-TUNING IS HERE. Tether Data just released QVAC Fabric LLM, and it creates a new foundation for how AI is built and deployed. It is the world's first edge-first inference runtime & fine-tuning framework. Here's the simple breakdown: 👇

The Old Way: To run AI models (inference), you needed the cloud and a paid subscription. To evolve and customize AI (fine-tuning), you needed expensive clusters.
• It was centralized.
• It was reserved for the elite.
• Your data had to leave your device.

The QVAC Fabric Way: We built a unified, cross-platform system to EXECUTE (INFERENCE) and PERSONALIZE (FINE-TUNE) models on the hardware you already own.
• Laptops (Windows, Mac, Linux)? Yes.
• Consumer GPUs? Yes.
• iOS & ANDROID SMARTPHONES? YES.

How we did it (the geeky part): We extended the llama.cpp engine, adding deeper function instrumentation, generalizing support for new models, and introducing state-of-the-art, highly extensible LoRA fine-tuning capabilities. We made it cross-platform and vendor-agnostic. It's highly efficient, meaning it doesn't need a nuclear reactor to run, just your device battery.

Why is this a big deal?

For developers: You no longer need a massive budget or a cloud provider to build custom AI. You can build, test, and fine-tune models like Llama 3 or Gemma 3 directly on your MacBook, Linux rig, Windows desktop, or even your mobile device. It's open source and built on llama.cpp, so it's super lightweight.

For regular people: Imagine an AI assistant that actually learns from you (your notes, your style, your preferences) but none of that data ever leaves your phone. It lives locally but can scale infinitely. It learns locally. It can work offline. It's truly your AI, not a corporate rental. It's the only solution that can truly serve anyone, including the billions of people who can't afford Big Tech's expensive subscriptions.

The TL;DR: We moved the entire AI lifecycle, execution and evolution, from the cloud to people's devices.

No vendor lock-in. No spying. Just pure, ubiquitous intelligence. Open source. Multi-platform binaries. Ready today.

QVAC: Your Device. Your AI.

🔗 Read the QVAC Fabric-LLM Tech Overview & get the code: huggingface.co/blog/qvac/fabr…

#QVAC #LocalAI #Llama
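For the curious, the LoRA technique the post names can be sketched in a few lines: instead of updating a full weight matrix W during fine-tuning, a low-rank adapter trains two small matrices A and B and applies W' = W + (alpha / r) · B·A. This is an illustrative plain-Python sketch of that math only, assuming toy matrices; it is not QVAC Fabric's actual implementation, which lives in its extended llama.cpp engine.

```python
# Minimal LoRA sketch (illustrative, not the QVAC Fabric code).
# A full fine-tune updates all d*d entries of W; a rank-r LoRA
# adapter trains only A (r x d) and B (d x r), i.e. 2*d*r params.

def matmul(X, Y):
    """Plain-Python matrix multiply (no external deps)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, alpha):
    """Merge a LoRA adapter into W: W' = W + (alpha / r) * B @ A."""
    r = len(A)              # LoRA rank = number of rows in A
    scale = alpha / r
    delta = matmul(B, A)    # d x d low-rank update
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy example: d=2 identity weight matrix, rank-1 adapter.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]            # r x d
B = [[0.5], [0.25]]         # d x r
W_adapted = apply_lora(W, A, B, alpha=1.0)
# W_adapted == [[1.5, 1.0], [0.25, 1.5]]
```

The point of the low-rank split is exactly why on-device fine-tuning becomes feasible: for a model layer with d in the thousands and r around 8–64, the adapter is orders of magnitude smaller than the full weight update.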

Today, we released a base model for Genesis I that allows AI researchers to replicate published results instantly, enhancing scientific transparency and reproducibility. It also makes it far easier for researchers and practitioners to test, compare, and build on Genesis I without training from scratch on large, high-end GPU resources. Genesis I provides a practical foundation for developing next-generation STEM learning assistants that genuinely understand complex STEM concepts. Our upcoming Genesis II synthetic dataset will cover more STEM topics, broadening coverage and progressing toward full completeness. #QVAC

Read more:
Blog post huggingface.co/blog/qvac/gene…
Try the model huggingface.co/qvac/genesis-i…
x.com/Tether_to/stat…
