Mr. Fister

3.4K posts

@MrFisterthick

I am a fan of art. #noscrubs

Joined September 2021
293 Following · 64 Followers
BM@Bongi@BONGINKOSI14465·
What happened to this bull 🥺
278 replies · 109 reposts · 3.5K likes · 3.3M views
Mr. Fister@MrFisterthick·
@pmddomingos Put a baby in a library and never say a word or connect an AI to a library and never say a word. See who makes out better
0 replies · 0 reposts · 0 likes · 29 views
Pedro Domingos@pmddomingos·
If LLMs are so smart, why do they need all these prompts, harnesses, post-training, scaffolding, etc.?
363 replies · 47 reposts · 930 likes · 121.2K views
Mr. Fister@MrFisterthick·
@SawyerMerritt Well, it can’t be any worse than Delta’s in-flight Wi-Fi, but it sounds like they’re trying to be worse
0 replies · 0 reposts · 0 likes · 0 views
Sawyer Merritt@SawyerMerritt·
NEWS: Delta Airlines has announced they are partnering with Amazon's LEO to bring high-speed Wi-Fi to its airplanes. "Delta will introduce Amazon Leo on hundreds of Delta aircraft, starting with an initial installation on 500 aircraft beginning 2028, and work with Amazon to expand its popular Delta Sync Wi-Fi and seatback experiences."
[image]
512 replies · 51 reposts · 863 likes · 376.6K views
Mr. Fister@MrFisterthick·
@icanvardar You might want to look up the definition of that word
0 replies · 0 reposts · 0 likes · 0 views
Mr. Fister@MrFisterthick·
@CuriosityonX Why would something exist before something? You already have something
0 replies · 0 reposts · 0 likes · 21 views
Curiosity@CuriosityonX·
If the Big Bang started the Universe, what existed before the Big Bang?
3.9K replies · 609 reposts · 4.6K likes · 754.3K views
fidexCode@fidexcode·
"How did you know I generated the code with AI?" The code:
[image]
577 replies · 107 reposts · 1.9K likes · 201.9K views
Mr. Fister@MrFisterthick·
@NaveenDoesStuff @TechByTaraa running on Visual 6 from some dude’s personal XP machine because management won’t approve a new license key while it’s still working
0 replies · 0 reposts · 0 likes · 15 views
tara_@TechByTaraa·
Google uses C++. Meta uses C++. Microsoft uses C++. Amazon uses C++. Apple uses C++. Adobe uses C++. NVIDIA uses C++. Intel uses C++. Tesla uses C++. What’s stopping you from learning C++?
339 replies · 69 reposts · 1.6K likes · 85.8K views
Mr. Fister@MrFisterthick·
@toxiccowboy1 @snarky555 You’d think it’d be impossible for people not to recognize AI forgeries by now, but you’d be sure it was impossible to trick someone with an MS Paint forgery lol
0 replies · 0 reposts · 0 likes · 4 views
Toxic Cowboy 🤠@toxiccowboy1·
Mr Dinero has learned a valuable lesson. Muslims do not care about Dinero. He was a means to an end that once achieved? Is no longer needed and tossed aside like garbage. All leftists and Democrats need to learn from this.
[image]
4.3K replies · 7.9K reposts · 27.6K likes · 401.3K views
john M@fjmunster·
@zerohedge @GloriaBerard4 Well, the magnetic poles would have something to do with that… it’s a physics thing. No more, no less.
7 replies · 0 reposts · 6 likes · 3.8K views
Mr. Fister@MrFisterthick·
@Joeg1484 @straceX I copied a private git repository and a file named “kb” and told Claude to turn that into an AI search engine. I had to copy and paste the command line one time. Git push main or something. I don’t know. I just copied and pasted it
0 replies · 0 reposts · 0 likes · 6 views
AlwaysLinux@Joeg1484·
@straceX At 8, I was coding assembly and then learned BASIC, C, and some C++ by the time I was in Jr. High - REAL programming languages! Python… LOL, that’s not a programming language... It’s a scripting tool HAH!
[GIF]
13 replies · 0 reposts · 28 likes · 1.1K views
Strace@straceX·
A 13-year-old kid, already coding in Python. What were you doing at his age?
562 replies · 10 reposts · 170 likes · 69.3K views
Mr. Fister@MrFisterthick·
@straceX Cobbler apprenticeship. Wasn’t prepared for a world where computers did my job faster and better than me
0 replies · 0 reposts · 1 like · 104 views
Mr. Fister@MrFisterthick·
24 cockfags like giving half their money to ghouls
0 replies · 0 reposts · 0 likes · 19 views
Mr. Fister@MrFisterthick·
When we should have been looking off the coast of California in the ocean the whole time
0 replies · 0 reposts · 0 likes · 7 views
Mr. Fister@MrFisterthick·
What if the reason we look to the sky for a god is because the first humans with language lived through cataclysmic events where changes in the sky/stars would directly change life on earth?
1 reply · 0 reposts · 0 likes · 20 views
Noor@noorrietje·
You know how hard it is to have an IQ of 140 as a female and find a man that intellectually matches your personality??
1.1K replies · 35 reposts · 1.1K likes · 334.1K views
Young Kings@HeyYoungKings·
You’re on a date with a cute girl. She says: “You probably take every girl here.” How do you reply?
651 replies · 13 reposts · 2.4K likes · 681.4K views
Mr. Fister@MrFisterthick·
@JacquiDeevoy1 We evolved from a common ancestor. They evolved into apes and we evolved into humans.
0 replies · 0 reposts · 0 likes · 16 views
Jacqui Deevoy@JacquiDeevoy1·
If we evolved from apes, how come apes aren’t still morphing into humans? Why don’t we see human/ape hybrids in various stages of evolution in the wild?
3.2K replies · 369 reposts · 8.5K likes · 2.5M views
spencer@techspence·
Tell me you’ve worked in IT without telling me you’ve worked in IT. I’ll go first… Did you try turning it off and back on again?
2.3K replies · 29 reposts · 1.4K likes · 145K views
Mr. Fister reposted
Guri Singh@heygurisingh·
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU.

It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary, just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine. 27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% Open Source. MIT License.
873 replies · 2.6K reposts · 15.3K likes · 2.2M views
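The ternary trick described in the BitNet thread above can be sketched in a few lines of NumPy. This is an illustrative toy, not Microsoft's implementation: `ternary_quantize` follows the absmean-style rounding described in the BitNet b1.58 write-ups, and the matmul just demonstrates that once weights are in {-1, 0, +1}, the inner loop needs no floating-point weight multiplies.

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Map float weights to {-1, 0, +1} plus one per-tensor scale
    (absmean-style rounding, as described for BitNet b1.58)."""
    scale = np.abs(w).mean() + 1e-8              # per-tensor scaling factor
    q = np.clip(np.rint(w / scale), -1, 1)       # ternary codes
    return q.astype(np.int8), float(scale)

def ternary_matmul(x: np.ndarray, q: np.ndarray, scale: float) -> np.ndarray:
    """With ternary weights a 'multiply' is add, subtract, or skip,
    so the accumulation is integer-friendly; the single float scale
    is applied once at the end."""
    return (x @ q.astype(np.float32)) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)   # toy weight matrix
x = rng.normal(size=(8, 256)).astype(np.float32)     # toy activations

q, s = ternary_quantize(w)
y_full = x @ w                      # full-precision reference
y_tern = ternary_matmul(x, q, s)    # ternary approximation

# int8 ternary codes vs float32 weights is already a 4x memory drop here;
# a packed 1.58-bit encoding (as in bitnet.cpp) shrinks it much further.
print(q.dtype, y_full.shape, y_tern.shape)
```

On Gaussian toy weights the ternary output correlates strongly with the full-precision reference, which is the intuition behind "accuracy barely moves." The real kernels additionally pack the ternary codes below one byte and quantize activations to int8, which is where the claimed 16-32x memory reduction and CPU speedups come from.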