brolecule!! (@brolecule)
50 posts
buy my account for 9201381293791283 dollars !!1!1111!111!!
Joined February 2026
0 Following · 0 Followers
brolecule!! (@brolecule):
@0xPaulius bcz making it urself is better than using a slop generator
(0 replies · 0 reposts · 0 likes · 2 views)
tim stark (@timstarkdev):
@TukiFromKL On agentic terminal coding, Qwen3.5 reaches 52.5 while GPT-5.3 Codex hits 77.3 — that’s a big gap on the tasks that matter most for real-world engineering work. datacamp.com/blog/qwen3-5
(2 replies · 0 reposts · 9 likes · 4K views)
Tuki (@TukiFromKL):
Do you understand what just happened?

Qwen dropped small models you can run on a $600 Mac Mini. Locally. No internet. No subscription. No company controls your access.

Go do this right now:
• Download LM Studio
• Search Qwen 3.5
• Grab the MLX versions
• Load them

You now have unlimited AI on your own machine. Nobody can take it away from you. Not a company. Not a government. Not a terms of service update.

Everyone's fighting over who controls AI. The answer just became: you do.

Quoting Qwen (@Alibaba_Qwen):
🚀 Introducing the Qwen 3.5 Small Model Series
Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B
✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation — native multimodal, improved architecture, scaled RL:
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models
And yes — we’re also releasing the Base models. We hope this better supports research, experimentation, and real-world industrial innovation.
Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…
(146 replies · 326 reposts · 3.7K likes · 552.2K views)
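The quoted steps boil down to: download an MLX build of a Qwen 3.5 model in LM Studio and serve it locally. Once a model is loaded, LM Studio exposes an OpenAI-compatible HTTP server (by default at `http://localhost:1234/v1`). A minimal sketch of a chat request against it — the model id `qwen3.5-4b-mlx` is a placeholder, use whatever id LM Studio shows for the build you downloaded:

```python
import json

# LM Studio's local server speaks the OpenAI chat-completions protocol
# at this endpoint once a model is loaded.
ENDPOINT = "http://localhost:1234/v1/chat/completions"
MODEL_ID = "qwen3.5-4b-mlx"  # hypothetical id; copy the real one from LM Studio

def chat_payload(prompt: str, model: str = MODEL_ID) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

body = json.dumps(chat_payload("Say hello"))
# POST `body` to ENDPOINT (e.g. with urllib or requests) while the
# LM Studio server is running; the response mirrors the OpenAI schema.
```

No API key, no subscription: the request never leaves your machine.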
brolecule!! (@brolecule):
do they have a camera is this a new prediction from qwen
[media attached]
(0 replies · 0 reposts · 0 likes · 15 views)
brolecule!! (@brolecule):
@crackticker it's so tempting but every single one looks useful for the product i cant bring myself to imagine someone not knowing the price of something
(0 replies · 0 reposts · 0 likes · 6 views)
Crypto Jargon (@Crypto_Jargon):
OPENCLAW JUST DROPPED A MASSIVE UPDATE 🤯
OPUS 4.6 SUPPORT. MODEL FAILOVER. APPLE WATCH APP. 1M TOKEN CONTEXT.
AND NOW… AGENTS CAN SPAWN AGENTS.
(4 replies · 0 reposts · 49 likes · 4.7K views)
brolecule!! (@brolecule):
@0xSero What can I run with my 16gb mac mini using this
(0 replies · 0 reposts · 0 likes · 3 views)
0xSero (@0xSero):
My goal for the year: make local AI easy and pleasant to use, on your phone, laptop, coding agents, Discord, browser, and even on ESP. You will be able to talk to an Apple Watch, run a local model on call, get it coding for you, etc.

• Kimi on 150gb VRAM
• GLM-5 on 150gb VRAM
• MiniMax-M2.5 on 48gb VRAM

QuantForge lets you take any model, on any hardware: select a target size and calibration datasets, and it prunes and quantizes the model. Working on my MacBook, I'm pruning and quantizing some tiny models.

By the end of the year I will make it so anyone can get any model to fit any hardware. Right now it uses local hardware, but I will integrate with Prime Intellect, and add features for sharing datasets and for building one out from many independent components.
[media attached]
(31 replies · 14 reposts · 428 likes · 18.1K views)
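QuantForge's internals aren't shown here, but "pick a target size, then quantize" usually comes down to mapping float weights onto low-bit integers with a per-tensor scale. A minimal sketch of symmetric int8 weight quantization — a generic illustration of the technique, not QuantForge's actual API:

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: w ≈ q * scale."""
    # Scale so the largest magnitude maps to ±127; guard against all-zero tensors.
    scale = float(np.max(np.abs(w))) / 127.0 or 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.0, 1.27], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, at a quarter of the storage
```

Real pipelines add calibration data to pick scales per channel and prune low-importance weights first, but the storage win (8 bits instead of 32 per weight) is the same idea.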
brolecule!! (@brolecule):
@oliviscusAI i think the rumor about qwen feeding claude into its data is true.
[media attached]
(0 replies · 0 reposts · 0 likes · 357 views)
Oliver Prompts (@oliviscusAI):
China just dropped a Claude Opus 4.5 level model that runs locally. And it’s 100% free & open-source.
[media attached]
(188 replies · 586 reposts · 5.8K likes · 909.2K views)
Sandhya (@agenticgirl):
Someone finally built a production-grade Agent OS in pure Rust. OpenFang isn't a Python wrapper or a generic multi-agent orchestrator.

Here are the specs:
→ 137,728 lines of code
→ 14 crates
→ 1,767+ passing tests
→ Zero clippy warnings

Because it's Rust, the performance metrics are insane. Cold starts happen in under 200ms. It uses just 40 MB of idle memory. It includes a SQLite memory backend with vector embeddings and canonical sessions.

For me, this is what battle-tested infrastructure actually looks like. GitHub repo is in the comments.
[media attached]
(34 replies · 35 reposts · 207 likes · 11.9K views)
nachos2d (@NACHOS2D_):
Seedance 2.0 generated this in minutes. Hollywood got cooked.

Seedance 2.0 access @MartiniArt_
(135 replies · 280 reposts · 2.9K likes · 295.2K views)
Julian Goldie SEO (@JulianGoldieSEO):
Everyone is overcomplicating AI agents.

You do NOT need:
– A Mac Mini
– A $200/month cloud stack
– Some insane hardware

You need 8 minutes.

OpenClaw lets you:
• Run a 24/7 AI worker
• Keep everything on your machine
• Encrypt your data
• Schedule automated tasks
• Scale to multiple agents

And it’s free. Most people will ignore this because it sounds “technical.” It isn’t. One-line install. A few questions. You’re live.

The people who understand this early will build leverage most can’t catch up to.
(29 replies · 70 reposts · 621 likes · 44.7K views)
Vadim (@VadimStrizheus):
This small box is creating the next wave of generational wealth. This is your sign to start using OpenClaw. 🫡
[media attached]
(34 replies · 2 reposts · 132 likes · 7.2K views)
Aman (@Amank1412):
Most AI video models stop at 10–15 seconds, because generating clips is easy. Telling a story is hard.

Someone just built an AI Creative Studio that pushes past that limit: 2–3 minute cohesive stories, consistent characters, controlled pacing, and actual narrative structure.
(10 replies · 2 reposts · 24 likes · 2.5K views)
brolecule!! (@brolecule):
@cgtwts "incredible at coding" propaganda pulled out of your ass. this codes as good as a 30b model.
(0 replies · 0 reposts · 0 likes · 19 views)
Jorge Castillo (@JorgeCastilloPr):
Neat progress indicator 🤯 unreal
(3 replies · 0 reposts · 46 likes · 5.2K views)
Alex Finn (@AlexFinn):
I have built the world's most powerful home AI lab.

My OpenClaw is now powered by:
• 3x Mac Studio w/ 512gb memory
• Nvidia DGX Spark w/ 128gb memory
• Mac Mini M4 w/ 16gb memory

I just added the DGX Spark to my Mac Studio cluster using EXO to handle prefill, dramatically speeding up inference.

This will be running my OpenClaw swarm, which is currently 5 OpenClaws and 4 subagents. I plan on increasing this to at least 10 OpenClaws over the next 2 weeks.

This agent swarm will have 1 mission: be a 24/7 autonomous organization that produces value constantly. I will add more compute as necessary, including more Mac Studios when the M5 Ultra releases.

I do not plan on slowing down. This is the single most important moment in the history of this species, and I plan on capitalizing on it. My mission is to create a framework that enables everyone to experience abundance.

Accelerate.
[media attached]
(374 replies · 98 reposts · 1.7K likes · 178.1K views)
Nav Toor (@heynavtoor):
🚨 BREAKING: Someone just rebuilt the entire AI assistant stack in Zig.

It's called NullClaw. The binary is 678 KB. It uses ~1 MB of RAM. It boots in under 2 milliseconds. No runtime. No VM. No framework. No garbage collector. Just raw Zig.

Here's why this is absurd:
→ OpenClaw needs a $599 Mac Mini and 1 GB+ RAM
→ NanoBot needs 100 MB+ RAM and Python
→ PicoClaw needs 10 MB RAM and Go

NullClaw runs on a $5 board with 1 MB of RAM. Same functionality. 0.1% of the resources.

Here's what's packed into that 678 KB:
→ 22+ AI providers (OpenAI, Anthropic, Ollama, DeepSeek, Groq, etc.)
→ 13 chat channels (Telegram, Discord, Slack, WhatsApp, iMessage, IRC)
→ 18+ built-in tools
→ Hybrid vector + keyword memory search
→ Multi-layer sandboxing (Landlock, Firejail, Docker)
→ Hardware peripheral support (Arduino, Raspberry Pi, STM32)
→ MCP, subagents, streaming, voice, the full stack

Here's the wildest part: every subsystem is a vtable interface. Swap any provider, channel, tool, memory backend, or runtime with a config change. Zero code changes.

It even encrypts your API keys with ChaCha20-Poly1305 by default.

2,738 tests. ~45,000 lines of Zig. Zero dependencies beyond libc. 100% open source. MIT license.
[media attached]
(227 replies · 511 reposts · 5K likes · 487.6K views)
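The "every subsystem is a vtable interface" claim describes a standard pattern: each backend implements one shared call surface, and a registry maps config strings to implementations, so swapping providers is a config change rather than a code change. A sketch of the idea in Python (names hypothetical; NullClaw itself is Zig, where the vtable is an explicit struct of function pointers):

```python
from typing import Protocol

class Provider(Protocol):
    """The 'vtable': every backend exposes the same call surface."""
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class ShoutProvider:
    def complete(self, prompt: str) -> str:
        return prompt.upper()

# Registry: a config string selects the implementation. Adding a new
# provider means registering one more class; callers never change.
REGISTRY: dict[str, type] = {"echo": EchoProvider, "shout": ShoutProvider}

def load_provider(config: dict) -> Provider:
    return REGISTRY[config["provider"]]()

p = load_provider({"provider": "shout"})
print(p.complete("hello"))  # HELLO
```

The same shape works for channels, tools, and memory backends: one interface per subsystem, one registry keyed by config.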