Marcel retweeted
Marcel
291 posts

Marcel
@IReallyLUVMimi
@m5rcode | pfp by Typh | I LOVE MIMI SO MUCHHH
Poland · Joined July 2025
318 Following · 25 Followers

@shuraneko Why are there so many AI sweaty goth girls in the replies

Marcel retweeted

@justbyte_ Make a WaaS - Website as a Service
Basically, to access the site you need to unlock access, and to interact with some components you also need to unlock them

@gothgirlbabyy Image generation and editing are currently limited to verified Premium subscribers. You can subscribe to unlock these features: x.com/i/premium_sign…

@DAIEvolutionHub Always useless things, never what the community truly needs.

Holy shit 🤯
Microsoft just open-sourced a framework that runs a 100B parameter LLM on a single CPU.
No GPU.
No cloud.
No expensive setup.
Just your laptop.
It’s called BitNet.
And it breaks one of the biggest assumptions in AI.
Here’s the trick:
Most LLMs use 16-bit or 32-bit floats.
BitNet uses:
1.58 bits.
Yes… bits.
Every weight is one of just three values:
-1, 0, +1
(three states = log₂ 3 ≈ 1.58 bits per weight)
That's it.
No heavy matrix math.
Just simple integer operations your CPU already handles efficiently.
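The ternary trick can be sketched in a few lines. This is a toy illustration (not Microsoft's actual kernel; function names and the per-tensor "absmean" scale are illustrative, loosely following the quantization described for BitNet b1.58): once weights are -1, 0, or +1, a matrix-vector product reduces to selecting inputs and adding or subtracting them, with a single float rescale at the end.

```python
import numpy as np

def ternarize(w):
    # Per-tensor scale from the mean absolute weight, then
    # round-and-clip every weight to {-1, 0, +1}.
    scale = np.abs(w).mean()
    w_t = np.clip(np.round(w / (scale + 1e-8)), -1, 1).astype(np.int8)
    return w_t, scale

def ternary_matvec(w_t, scale, x):
    # Multiplying by -1/0/+1 needs no multiplies:
    # sum the inputs where the weight is +1, subtract those where it is -1.
    pos = (w_t == 1).astype(x.dtype) @ x
    neg = (w_t == -1).astype(x.dtype) @ x
    return scale * (pos - neg)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))   # full-precision weights
x = rng.normal(size=8)        # input activations

w_t, scale = ternarize(w)
print(ternary_matvec(w_t, scale, x))  # approximates w @ x
```

Real kernels pack several ternary weights per byte and use SIMD integer instructions, which is where the CPU speed and energy numbers come from.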
The result is insane:
• 100B model runs on CPU at 5–7 tokens/sec
• 2–6× faster than llama.cpp on x86
• 82% less energy usage
• 1–5× faster on ARM (MacBooks)
• 16–32× lower memory
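A quick back-of-envelope on the memory claim (weights only, under ideal bit-packing; real runtimes add overhead for activations and caches, and the quoted range may count those savings too):

```python
# Weight memory for a 100B-parameter model at different precisions.
params = 100e9

fp32_gb = params * 32 / 8 / 1e9      # 32-bit floats: 400 GB
fp16_gb = params * 16 / 8 / 1e9      # 16-bit floats: 200 GB
ternary_gb = params * 1.58 / 8 / 1e9  # ~1.58 bits/weight: ~19.8 GB

print(fp32_gb / ternary_gb)  # ~20x smaller than fp32
print(fp16_gb / ternary_gb)  # ~10x smaller than fp16
```

So weights alone shrink roughly 10× versus 16-bit and 20× versus 32-bit, which is what makes a 100B model fit in laptop-class RAM at all.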
The craziest part?
Accuracy barely drops.
Their flagship model (trained on 4 trillion tokens) performs competitively with full-precision models.
They didn’t break the model.
They removed the waste.
What this unlocks:
→ Run LLMs fully offline
→ AI on phones, edge devices, IoT
→ No API costs for inference
→ Works even without reliable internet
MacBook.
Linux.
Windows.
It just runs.
27K+ GitHub stars.
Built by Microsoft Research.
100% open source.
This might be the moment AI stops being cloud-first…
and becomes device-first.