VerAi

225 posts


@VerAI_Agents

Shared Computing Power for AI Development & Deployment Early Access Waitlist: https://t.co/OrAOSvuwrJ https://t.co/928u7A4smL

CPU & GPU Marketplace · Joined January 2025
919 Following · 14.4K Followers
Pinned Tweet
VerAi @VerAI_Agents ·
🚀 The Waitlist for VerAI is Now Open! 🌍
👉 Join the waitlist today and secure your spot! forms.gle/CW5t52g2rhVAJc…
Be among the first to experience the future of decentralized AI. Here’s why you should join now:
✅ Early Access: Get priority access to VerAI’s platform before anyone else.
✅ Exclusive Rewards: Earn bonus $VER tokens just for signing up early.
✅ Shape the Future: Your feedback will help us build a platform that works for you.
💡 Why wait? The first users will have the chance to:
- Train AI models with affordable, decentralized resources.
- Monetize their skills by contributing computational power or datasets.
- Be part of a revolutionary movement that’s democratizing AI.
👉 Don’t miss out: join the waitlist today and secure your spot! forms.gle/CW5t52g2rhVAJc…
#DecentralizedAI #AIInnovation #VerAI #Waitlist
[media]
980 replies · 1.4K reposts · 1.8K likes · 56.6K views
VerAi retweeted
Lianhui Qin @Lianhuiq ·
🧠 How can LLMs self-evolve over time? They need memory.
LLMs burn huge compute on each query and forget everything afterward. ArcMemo introduces abstraction memory, which stores reusable reasoning patterns and recombines them to strengthen compositional reasoning.
📈 On ARC-AGI, this training-free method yields a 7.5% relative gain over o4-mini, with even larger improvements when memory updates during test time.
An early glimpse of self-improving LLMs that learn from every use.
[media]
Quoted: Matt Ho @matt_seb_ho
ArcMemo yields +7.5% relative on ARC-AGI vs o4-mini (same backbone). It extends the LLM idea of “compressing knowledge for generalization” into a lightweight, continually learnable abstract memory, model-agnostic and text-based. Preprint: Lifelong LM Learning via Abstract Memory

42 replies · 112 reposts · 736 likes · 87.8K views
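The retweet above describes a memory of reusable, abstract reasoning patterns that are retrieved and recombined per query. A minimal illustrative sketch of that idea, not the paper's implementation (the class, tags, and patterns here are hypothetical):

```python
# Toy abstraction-style memory: reusable reasoning patterns are stored as
# short text "concepts" tagged with keywords, then retrieved for a new task
# by tag overlap. A real system would use an LLM to write and select them.

class AbstractMemory:
    def __init__(self):
        self.concepts = []  # list of (tag set, pattern) pairs

    def add(self, tags, pattern):
        """Store a reusable reasoning pattern under a set of tags."""
        self.concepts.append((set(tags), pattern))

    def retrieve(self, query_tags, k=2):
        """Return up to k patterns whose tags best overlap the query."""
        scored = sorted(
            self.concepts,
            key=lambda c: len(c[0] & set(query_tags)),
            reverse=True,
        )
        return [p for tags, p in scored[:k] if tags & set(query_tags)]

mem = AbstractMemory()
mem.add(["symmetry", "grid"], "check for mirror symmetry along each axis")
mem.add(["color", "count"], "count cells of each color before transforming")
mem.add(["rotation"], "try 90-degree rotations of the input grid")

hints = mem.retrieve(["grid", "symmetry"])
print(hints)  # the symmetry pattern ranks first
```

Because the memory is text-based and sits outside the model, it can keep growing at test time, which matches the "training-free" framing in the post.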
VerAi @VerAI_Agents ·
AI’s future lies in decentralization, shattering centralized monopolies with transparency and global access. 🌐
🔗 Blockchain secures this vision, enabling trust via zero-knowledge proofs and distributing compute power worldwide, as VerAI’s ‘Human Legacy Forward’ manifesto champions.
💪 It powers a sustainable ecosystem, rewarding contributors with $VER tokens while scaling innovation.
🔏 Unlike competitors, we prioritize privacy and governance through DAO-driven decisions.
Join us to build an AI revolution free from Big Tech’s grip!
#VerAI #DecentralizedAI #Blockchain
[media]
0 replies · 1 repost · 0 likes · 264 views
VerAi retweeted
nathan chen @nathancgy4 ·
(1/6) Ever wondered how GPUs efficiently access data for computing? 🤔
In Triton, the magic is in tl.make_block_ptr. I wrote a blog covering:
- how tensors live in memory
- make_block_ptr (+ striding & offset)
- more about ML and Triton kernels
It’s short with visuals! 🧵
[media]
3 replies · 54 reposts · 453 likes · 23.7K views
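The "how tensors live in memory" and "striding & offset" points above boil down to one formula: the element at index (i, j) of a strided tensor sits at flat offset i*stride0 + j*stride1. Triton's `tl.make_block_ptr` describes a block of elements with exactly these shape/stride/offset ingredients; the pure-Python sketch below models only the addressing math, not any GPU code:

```python
# Addressing math behind strided tensors: flat offset = sum(index * stride).

def flat_offset(indices, strides):
    """Flat offset of a multi-dimensional index in a strided layout."""
    return sum(i * s for i, s in zip(indices, strides))

# A row-major 4x6 tensor: stride0 = 6 elements (one row), stride1 = 1.
shape, strides = (4, 6), (6, 1)

print(flat_offset((0, 0), strides))  # 0
print(flat_offset((2, 3), strides))  # 2*6 + 3 = 15

# A (2, 2) block whose top-left corner is at (1, 2), the way a block
# pointer would describe it: every element's address = corner + strides.
block = [
    [flat_offset((1 + bi, 2 + bj), strides) for bj in range(2)]
    for bi in range(2)
]
print(block)  # [[8, 9], [14, 15]]
```

The same corner/shape/stride triple is what lets a kernel advance a block pointer across a tensor without recomputing addresses element by element.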
VerAi @VerAI_Agents ·
🎉 VERAI IGNITES SPIKINGBRAIN’S NEUROMORPHIC BREAKTHROUGH! 🚀
SpikingBrain 1.0 from CAS unleashes ternary spike coding, slashing energy use by 43x with asynchronous processing: revolutionizing AI! 🌐
This neuromorphic leap thrives in VerAI’s decentralized ecosystem, where distributed compute amplifies brain-inspired efficiency.
💪 VerAI’s blockchain backbone aligns perfectly, channeling global resources to scale this innovation.
🌍 Rooted in our ‘Human Legacy Forward’ vision, we’re poised to redefine AI’s future.
🌱 Together, let’s harness this cutting-edge tech for a sustainable tomorrow! ✨
#VerAI #DecentralizedAI #Neuromorphic
2 replies · 0 reposts · 0 likes · 237 views
VerAi retweeted
Minqi Jiang @MinqiJiang ·
Just got the greenlight to share some work we did at Google DeepMind from over a year ago:

We fine-tuned Gemini on thousands of the most toxic discussions on 4chan... and it just talked to us like a completely normal and nice language model. How?

Our method, Generative Data Refinement (GDR), uses a pretrained LLM to rewrite existing data so that undesirable content, like toxic remarks or personally identifiable information (PII), is no longer present. The resulting datasets are a form of *grounded synthetic data*: model-generated data that mimics the structure of a real dataset.

Training data pipelines typically throw away samples that are flagged as potentially risky. GDR instead *rewrites* the data to preserve its high-level semantic structure, allowing the refined sample to still be used in training. We find GDR is more accurate than industry-grade PII-detection tools, while avoiding throwing away tokens that are otherwise suitable for training.

Similarly, GDR is able to detoxify even 4chan /pol/ discussions, producing a dataset that retains otherwise safe semantic content and better reflects the diversity of real-world data.

This was a surprisingly effective sidequest with João G. M. Araújo, Will Ellsworth, @SianGooding, and @egrefen. Some notable results below (note these are based on Gemini 1.5-era models; more recent models can be expected to perform even better).
[media]
69 replies · 144 reposts · 1.4K likes · 158.4K views
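The key data-flow idea in the retweet, rewrite flagged samples instead of discarding them, can be sketched in a few lines. This is only a toy stand-in: GDR uses a pretrained LLM as the rewriter, whereas the `refine` function and regexes below are hypothetical placeholders that show the pipeline shape, not the method itself:

```python
# "Rewrite, don't discard": a filter-based pipeline drops any sample that
# contains PII; a refinement pipeline rewrites the PII away so the rest of
# the sample's tokens survive for training.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.\w+")
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def refine(sample):
    """Rewrite a sample in place of filtering it out (toy rewriter)."""
    sample = EMAIL.sub("<EMAIL>", sample)
    sample = PHONE.sub("<PHONE>", sample)
    return sample

raw = [
    "Contact alice@example.com or 555-123-4567 for the dataset.",
    "The model reached 91% accuracy on the held-out split.",
]

# Filtering would drop sample 0 entirely; refinement keeps its structure.
refined = [refine(s) for s in raw]
print(refined[0])  # Contact <EMAIL> or <PHONE> for the dataset.
```

The payoff described in the post is exactly this contrast: the refined sample still contributes its safe tokens to training, while a filter would have thrown all of them away.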
VerAi @VerAI_Agents ·
🎉 VERAI FUELS THE LLM RISE! 🚀
The market’s abuzz: companies are building internal LLMs, and startups alike are customizing AI in 2025! 🌐
Yet high compute costs and centralized barriers slow the pace. VerAI steps up! 💪
Our decentralized network taps YOUR idle CPUs/GPUs via blockchain, offering affordable power to craft tailored LLMs. 🛠️
Developers, build advanced models with our open-source tools, while contributors earn $VER tokens! 🌍
We outshine competitors like Golem and SingularityNET with privacy, governance, and sustainability. 🌱
Join us to revolutionize LLM development with a global compute grid unlocking innovation for all! ✨
#VerAI #DecentralizedAI #Web3
0 replies · 0 reposts · 0 likes · 125 views
VerAi retweeted
Qwen @Alibaba_Qwen ·
🚀 Introducing Qwen3-Next-80B-A3B, the FUTURE of efficient LLMs is here!
🔹 80B params, but only 3B activated per token → 10x cheaper training, 10x faster inference than Qwen3-32B (esp. at 32K+ context!)
🔹 Hybrid architecture: Gated DeltaNet + Gated Attention → best of speed & recall
🔹 Ultra-sparse MoE: 512 experts, 10 routed + 1 shared
🔹 Multi-Token Prediction → turbo-charged speculative decoding
🔹 Beats Qwen3-32B in perf, rivals Qwen3-235B in reasoning & long-context
🧠 Qwen3-Next-80B-A3B-Instruct approaches our 235B flagship.
🧠 Qwen3-Next-80B-A3B-Thinking outperforms Gemini-2.5-Flash-Thinking.
Try it now: chat.qwen.ai
Blog: qwen.ai/blog?id=4074cc…
Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…
Kaggle: kaggle.com/models/qwen-lm…
Alibaba Cloud API: alibabacloud.com/help/en/model-…
[media]
173 replies · 686 reposts · 4K likes · 929.3K views
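The "512 experts, 10 routed + 1 shared" line above describes top-k expert routing: per token, a gate scores all experts and only the k best (plus one always-on shared expert) run, which is how 80B total parameters can cost only ~3B per token. A hedged sketch of just the routing step; real routers use a learned gating network, while the scores here are fabricated for illustration:

```python
# Ultra-sparse MoE routing: per token, activate only the top-k scoring
# experts plus one always-on shared expert. Only the selection logic is
# modeled here; expert computation and the learned gate are omitted.

def route(scores, k=10, shared_expert=0):
    """Pick the top-k experts by gate score, plus the shared expert."""
    ranked = sorted(range(len(scores)), key=lambda e: scores[e], reverse=True)
    routed = ranked[:k]
    return sorted(set(routed) | {shared_expert})

num_experts = 512
# Fake per-expert gate scores for one token (hypothetical values).
scores = [(e * 37) % 101 for e in range(num_experts)]

active = route(scores, k=10)
print(len(active))  # 11 experts fire: 10 routed + 1 shared
```

With 11 of 512 experts active (~2%), the per-token compute scales with the activated slice rather than the full parameter count, which is the sparsity the post is advertising.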
VerAi @VerAI_Agents ·
VERAI: THE FUTURE IS ALMOST HERE! 🚀
🚨 The wait is nearly over: our decentralized AI revolution is on the horizon, powered by a passionate global community! 🌐
With blockchain turning idle resources into a force for innovation, we’re ready to break free from centralized limits. 💪
Developers, your visionary builds are set to launch, while contributors will soon enjoy $VER rewards! 🛠️
Together, we’re crafting a sustainable, transparent future that redefines technology. 🌍
The excitement is building. Get ready for the next big leap!
🔗 Dive into our vision by reading the Strategic Manifesto: acrobat.adobe.com/id/urn:aaid:sc…
#VerAI #DecentralizedAI #Web3
[media]
0 replies · 0 reposts · 0 likes · 140 views
VerAi retweeted
vLLM @vllm_project ·
The amazing blog post from @gordic_aleksa is now live on vLLM’s blog: blog.vllm.ai/2025/09/05/ana… (after more proofreading and clarifications)! Looking forward to a future series of technical deep-dive blog posts 😍
Quoted: Aleksa Gordić (水平问题) @gordic_aleksa

New in-depth blog post: "Inside vLLM: Anatomy of a High-Throughput LLM Inference System". Probably the most in-depth explanation of how LLM inference engines, and vLLM in particular, work! It took me a while to get this level of understanding of the codebase and then to write this up; I quickly realized I had underestimated the effort. 😅 It could easily have been a book/booklet (lol).

I covered:
- Basics of the inference engine flow (input/output request processing, scheduling, paged attention, continuous batching)
- "Advanced" stuff: chunked prefill, prefix caching, guided decoding (grammar-constrained FSM), speculative decoding, disaggregated P/D
- Scaling up: going from smaller LMs that can be hosted on a single GPU all the way to trillion+ params (via TP/PP/SP) → multi-GPU, multi-node setups
- Serving the model on the web: going from offline deployment to multiple API servers, load balancing, the DP coordinator, and multi-engine setups
- Measuring the performance of inference systems: latency (TTFT, ITL, E2E, TPOT), throughput, and the GPU roofline model

Lots of examples, lots of visuals!

I realize I've been silent on social media; many of you noticed, and thanks for reaching out! I'm so back! Lots of things happened. Also, in general, I'm a bit sick of superficial content; it really is the equivalent of junk food (h/t @karpathy). I want to do the best/deepest technical work of my life over the next years and write much more in depth (high-quality organic food ;)), so I might not be as frequent around here as I used to be. I'll make it a goal to share a few paper summaries a week, or stuff that's relevant / in the zeitgeist. If you have any topics from the past few weeks/months, drop them in the comments and I might focus on some of them in my next posts.

Huge thank you to @Hyperstackcloud for giving me an H100 node to run some of the experiments and analysis I needed for this write-up; the team there, led by Christopher Starkey, is amazing! Also a big thank you to Nick Hill (who did a very thorough review of the post, basically a code review, lol; Nick is a core vLLM contributor and principal SWE at Red Hat) and to my friends Kyle Krannen (NVIDIA Dynamo), @marksaroufim (PyTorch), and @ashVaswani (goat) for taking the time during a weekend when they didn't have to!

11 replies · 74 reposts · 628 likes · 47.1K views
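Paged attention, mentioned in the covered-topics list above, manages the KV cache like virtual memory: the cache is carved into fixed-size blocks, and each request keeps a block table mapping logical token positions to physical blocks. A simplified bookkeeping sketch, with a hypothetical allocator class and a toy block size, not vLLM's actual code:

```python
# Paged KV-cache bookkeeping: fixed-size blocks + per-request block tables,
# so interleaved requests (continuous batching) share one physical pool
# without pre-reserving contiguous max-length buffers.

BLOCK_SIZE = 4  # tokens per KV block (real systems use e.g. 16)

class KVAllocator:
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))
        self.tables = {}   # request id -> list of physical block ids
        self.lengths = {}  # request id -> tokens written so far

    def append_token(self, req):
        """Reserve KV space for one more token of request `req`."""
        n = self.lengths.get(req, 0)
        if n % BLOCK_SIZE == 0:  # current block full: grab a new one
            self.tables.setdefault(req, []).append(self.free.pop(0))
        self.lengths[req] = n + 1

    def physical_slot(self, req, pos):
        """Physical (block, offset) of logical token position `pos`."""
        block = self.tables[req][pos // BLOCK_SIZE]
        return block, pos % BLOCK_SIZE

alloc = KVAllocator(num_blocks=8)
for _ in range(6):   # request "a" writes 6 tokens -> needs 2 blocks
    alloc.append_token("a")
for _ in range(3):   # request "b" joins the batch -> its own block
    alloc.append_token("b")

print(alloc.tables["a"])           # [0, 1]
print(alloc.physical_slot("a", 5)) # (1, 1): token 5 lives in block 1
```

Because blocks are allocated lazily and can be freed or shared (the basis of prefix caching), memory tracks actual sequence lengths instead of worst-case ones.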
VerAi @VerAI_Agents ·
🎉 VERAI RIDES THE GPU INNOVATION WAVE! 🚀
The GPU market is buzzing: Intel’s Arc Pro B50/B60 with Xe2 AI cores launched this summer, while NVIDIA’s RTX 5090 with GDDR7 looms large for late 2025! 🌐
VerAI harnesses this power, turning YOUR idle GPUs into a decentralized AI powerhouse via blockchain! 💪
Developers, build cutting-edge apps with unmatched efficiency, inspired by these leaps. 🛠️
Contributors, earn $VER tokens by sharing your hardware, fueling a global compute grid! 🌍
Our 2025 Strategic Manifesto targets a $420M SOM, outpacing rivals with sustainable, community-driven tech. 🌱
Together, let’s lead the Web3 AI revolution with every GPU advance! ✨
👉 verai.app
#VerAI #DecentralizedAI #Web3
2 replies · 1 repost · 1 like · 236 views
VerAi retweeted
LaurieWired @lauriewired ·
Much like humans, CPUs heal in their sleep.
CPUs are *technically* replaceable / wear items. They don’t last forever. Yet the moment stress is removed, transistor degradation (partially) reverses.
It’s called Bias Temperature Instability (BTI) recovery:
[media]
165 replies · 1.2K reposts · 16.2K likes · 686.3K views
VerAi @VerAI_Agents ·
🎉 VERAI: REDEFINING THE FUTURE OF WORK! 🚀
In a world where traditional jobs fade, VerAI empowers a global community to thrive! 🌐
Our decentralized AI network connects innovators: developers crafting open-source solutions, contributors sharing skills, breaking the mold of centralized control! 💪
Imagine a future where remote creators design AI tools for education or healthcare, all powered by YOUR collective genius! 🛠️
No Big Tech gatekeepers, just a collaborative ecosystem driving progress. 🌍
With every contribution, we build a sustainable, inclusive workforce, proving decentralization isn’t just tech, it’s a lifestyle! 🌱
Together, we’re shaping a world where work knows no borders and innovation flows freely! ✨
Join the movement that’s rewriting tomorrow! 🔗
#VerAI #DecentralizedAI #Web3
0 replies · 0 reposts · 2 likes · 108 views
VerAi retweeted
Prime Intellect @PrimeIntellect ·
Introducing the Environments Hub.
RL environments are the key bottleneck to the next wave of AI progress, but big labs are locking them down.
We built a community platform for crowdsourcing open environments, so anyone can contribute to open-source AGI.
129 replies · 418 reposts · 3.2K likes · 1.8M views
VerAi @VerAI_Agents ·
@heisdatguyy_ Hi, between October and November 2025
0 replies · 0 reposts · 0 likes · 5 views
VerAi @VerAI_Agents ·
🎉 VERAI: TACKLING THE AI COMPUTE CRISIS! 🚀
The AI boom is hitting a wall: scarce GPUs and skyrocketing costs are choking innovation, leaving many behind. 🌐
But VerAI has the answer! Our decentralized network harnesses YOUR idle CPUs and GPUs worldwide, turning wasted potential into a powerhouse of compute! 💪
Developers, build cutting-edge AI without breaking the bank, thanks to our open-source tools and community-driven grid! 🛠️
Contributors, lend your spare hardware and join a movement that solves the resource crunch, no Big Tech gatekeepers needed! 🌍
With our Alpha launch nearing on September 10th, we’re proving sustainable, accessible AI is possible. 🌱
Together, we’re breaking the compute bottleneck, empowering a global community to innovate freely! ✨
Let’s redefine the future! 🔗
#VerAI #DecentralizedAI #Web3
1 reply · 0 reposts · 1 like · 119 views
VerAi retweeted
Jackson Atkins @JacksonAtkinsX ·
NVIDIA research just made LLMs 53x faster. 🤯

Imagine slashing your AI inference budget by 98%. This breakthrough doesn't require training a new model from scratch; it upgrades your existing ones for hyper-speed while matching or beating SOTA accuracy.

Here's how it works. The technique is called Post Neural Architecture Search (PostNAS), a process for retrofitting pre-trained models:
- Freeze the knowledge: It starts with a powerful model (like Qwen2.5) and locks down its core MLP layers, preserving its intelligence.
- Surgical replacement: It then uses a hardware-aware search to replace most of the slow, O(n²) full-attention layers with a new, hyper-efficient linear attention design called JetBlock.
- Optimize for throughput: The search keeps a few key full-attention layers in the exact positions needed for complex reasoning, creating a hybrid model optimized for speed on H100 GPUs.

The result is Jet-Nemotron: an AI delivering 2,885 tokens per second with top-tier model performance and a 47x smaller KV cache.

Why this matters to your AI strategy:
- Business leaders: A 53x speedup translates to a ~98% cost reduction for inference at scale. This fundamentally changes the ROI calculation for deploying high-performance AI.
- Practitioners: This isn't just for data centers. The massive efficiency gains and tiny memory footprint (154MB cache) make it possible to deploy SOTA-level models on memory-constrained and edge hardware.
- Researchers: PostNAS offers a new, capital-efficient paradigm. Instead of spending millions on pre-training, you can innovate on architecture by modifying existing models, dramatically lowering the barrier to entry for creating novel, efficient LMs.
[media]
100 replies · 676 reposts · 4K likes · 448.3K views
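The three retrofit steps in the retweet (freeze MLPs, swap most attention layers for linear attention, keep full attention at a few searched positions) can be expressed as a per-layer plan. A toy sketch under stated assumptions: the layer names, the 12-layer model, and the "searched" positions below are all hypothetical, and the actual hardware-aware search is not modeled:

```python
# PostNAS-style retrofit plan: MLPs stay frozen everywhere; attention is
# replaced by linear attention except at a few positions the search kept.

def retrofit(num_layers, keep_full_attention):
    """Return the per-layer plan for a retrofitted hybrid model."""
    plan = []
    for i in range(num_layers):
        attn = ("full_attention" if i in keep_full_attention
                else "linear_attention")
        plan.append({"layer": i, "attention": attn, "mlp": "frozen"})
    return plan

# Suppose a hardware-aware search kept full attention at layers 3 and 9.
plan = retrofit(num_layers=12, keep_full_attention={3, 9})

print(sum(p["attention"] == "full_attention" for p in plan))  # 2
print(plan[3]["attention"])  # full_attention
```

The hybrid shape is the point: most layers get O(n) attention for speed, while the few retained O(n²) layers sit exactly where the search found they matter for reasoning.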
VerAi @VerAI_Agents ·
🎉 With Intel’s new Arc Pro GPUs and OpenAI’s efficiency breakthroughs dominating 2025 headlines, VerAI is riding the wave! 🌐
Our decentralized network turns YOUR idle CPUs and GPUs into the backbone of next-gen AI, no Big Tech needed! 💪
Developers, craft smarter apps with our evolving platform, inspired by Xe2’s AI cores, while cutting costs! 🛠️
Contributors, your hardware fuels a global grid, echoing OpenAI’s 44x compute savings. Join the revolution! 🌍
Our community-driven approach outshines centralized giants, delivering sustainable power with a 90% energy edge! 🌱
As AI hardware booms, VerAI’s vision shines, unlocking innovation for all! ✨
Together, let’s shape a future where every chip counts! 🔗
#VerAI #DecentralizedAI #Web3
[media]
0 replies · 0 reposts · 3 likes · 201 views