Marcel

291 posts

@IReallyLUVMimi

@m5rcode | pfp by Typh | I LOVE MIMI SO MUCHHH

Poland · Joined July 2025

318 Following · 25 Followers
Marcel retweeted
It's FOSS
It's FOSS@Itsfoss·
True story 🤣
268
3.5K
32K
932.7K
Marcel
Marcel@IReallyLUVMimi·
@shuraneko Why are there so many AI sweaty goth girls in the replies 🫩
0
0
0
35
Marcel
Marcel@IReallyLUVMimi·
@pinkchyu Beautiful like always
0
0
2
235
Pinkchyu
Pinkchyu@pinkchyu·
Oh hey, you’re awake 🖤
Pinkchyu tweet media
340
612
34.6K
367.5K
Marcel
Marcel@IReallyLUVMimi·
This made my day better
0
0
0
15
Marcel retweeted
Okene
Okene@yesjustkenny·
From the minute she touched the coin, she had fallen into his trap.
175
1.4K
49.4K
920.7K
Marcel retweeted
henry
henry@henry5839201746·
henry tweet media
28
494
5.6K
58.8K
Marcel
Marcel@IReallyLUVMimi·
@pinkychuwu My day has become better, thank you.
0
0
0
517
Pinkchyu
Pinkchyu@pinkychuwu·
easter is around the corner 🐣
Pinkchyu tweet media
118
778
20.5K
208.8K
Marcel
Marcel@IReallyLUVMimi·
@justbyte_ Make a WaaS - Website as a Service. Basically, to access the site you need to unlock access, and to interact with some components you also need to unlock them.
0
0
0
24
Aryan
Aryan@justbyte_·
CaaS - Calculator as a Service
48
89
1.4K
65.1K
Marcel
Marcel@IReallyLUVMimi·
@SadoOshii I don’t trust this banana nor this woman.
0
0
0
134
Baby
Baby@gothgirlbabyy·
Hey @grok turn my lipstick to red
Baby tweet media
4
3
129
34.9K
Marcel
Marcel@IReallyLUVMimi·
@DAIEvolutionHub Always useless things, never what the community truly needs.
0
0
1
125
Kshitij Mishra | AI & Tech
Kshitij Mishra | AI & Tech@DAIEvolutionHub·
Holy shit 🤯 Microsoft just open-sourced a framework that runs a 100B-parameter LLM on a single CPU.

No GPU. No cloud. No expensive setup. Just your laptop.

It's called BitNet. And it breaks one of the biggest assumptions in AI.

Here's the trick: most LLMs use 16-bit or 32-bit floats. BitNet uses 1.58 bits. Yes… bits. Weights are just -1, 0, +1. That's it. No heavy matrix math, just simple integer operations your CPU already handles efficiently.

The result is insane:
• 100B model runs on CPU at 5–7 tokens/sec
• 2–6× faster than llama.cpp on x86
• 82% less energy usage
• 1–5× faster on ARM (MacBooks)
• 16–32× lower memory

The craziest part? Accuracy barely drops. Their flagship model (trained on 4 trillion tokens) performs competitively with full-precision models. They didn't break the model. They removed the waste.

What this unlocks:
→ Run LLMs fully offline
→ AI on phones, edge devices, IoT
→ No API costs for inference
→ Works even without reliable internet

MacBook. Linux. Windows. It just runs.

27K+ GitHub stars. Built by Microsoft Research. 100% open source.

This might be the moment AI stops being cloud-first… and becomes device-first.
45
102
555
68.2K
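The 1.58-bit trick described in the tweet above (ternary weights in {-1, 0, +1}, so a weight carries log2(3) ≈ 1.58 bits) can be sketched in a few lines of NumPy. This is a simplified illustration of absmean-style ternary quantization, not Microsoft's actual BitNet kernels; the function names here are made up for the example.

```python
import math
import numpy as np

def ternary_quantize(W: np.ndarray):
    """Quantize a weight matrix to {-1, 0, +1} with a single absmean scale.
    Simplified sketch in the spirit of BitNet b1.58, not the official code."""
    gamma = float(np.mean(np.abs(W))) + 1e-8       # absmean scale factor
    Wq = np.clip(np.round(W / gamma), -1, 1)       # ternary weights
    return Wq.astype(np.int8), gamma

def ternary_matmul(x: np.ndarray, Wq: np.ndarray, gamma: float):
    """Matmul against ternary weights. Real BitNet kernels reduce this to
    integer adds/subtracts; NumPy here just multiplies by 0 or ±1."""
    return (x @ Wq.astype(x.dtype)) * gamma

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4)).astype(np.float32)     # full-precision weights
x = rng.normal(size=(2, 8)).astype(np.float32)     # input activations

Wq, gamma = ternary_quantize(W)
approx = ternary_matmul(x, Wq, gamma)              # 1.58-bit approximation
exact = x @ W                                      # full-precision reference
bits_per_weight = math.log2(3)                     # ≈ 1.585
```

The "no heavy matrix math" claim follows from the weight values: multiplying by -1, 0, or +1 is a sign flip, a skip, or a pass-through, so the inner loop needs only integer additions plus one scale at the end.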
John Doe
John Doe@JohnDoe3_18·
I'm coming
359
1.5K
7.1K
121K
Twitch
Twitch@Twitch·
show us your funniest screenshots from a stream
393
31
976
1.5M
Ema
Ema@Brananaxx·
genuine question, do men really think this is attractive?
Ema tweet media
1.3K
913
51.4K
887.2K