efebic

478 posts

@efebic

0.6931 · Joined August 2013
422 Following · 133 Followers
Pinned Tweet
efebic @efebic
@jsuarez @Stone_Tao You don’t read through every line of all your dependencies either. I trust Codex about as much as a two-week-old repository with 90 stars. It keeps getting better.
1 reply · 0 reposts · 1 like · 294 views
efebic @efebic
This llama.cpp patch note speaks to me
[image attached]
0 replies · 0 reposts · 0 likes · 21 views
efebic @efebic
@jack_whitcomb_ I hate all you bitter men piling on this woman who is proud of her husband
3 replies · 0 reposts · 7 likes · 4.1K views
efebic @efebic
@pleometric @mathtician Bro, your framing of weirdness is so clearly derivative of the work it tries to contrast as conventional horror. It reads like Lovecraft fan fic.
1 reply · 0 reposts · 1 like · 12 views
Pleometric @pleometric
@mathtician Cthulhu has become just another tentacle monster you can point to, and the association of the Cthulhu mythos with something scary exists exactly because of that framing.
1 reply · 0 reposts · 8 likes · 454 views
efebic @efebic
It’s doing the heavy lifting
0 replies · 0 reposts · 0 likes · 11 views
efebic @efebic
@Sauers_ That’s how you would do it too if you could hold the entire program in your mind
0 replies · 0 reposts · 1 like · 251 views
Sauers @Sauers_
How ChatGPT edits Python files lol
[image attached]
6 replies · 2 reposts · 157 likes · 7.2K views
efebic @efebic
@AgustinLebron3 And it actually does matter whether you lose your money to other traders or whether you are being systematically fleeced by fees and a house edge.
0 replies · 0 reposts · 1 like · 6 views
efebic @efebic
@AgustinLebron3 Don’t you live in a country with regular sports gambling? Of course people gamble on Polymarket, but I don’t want you to decide what is and isn’t valuable information.
2 replies · 0 reposts · 2 likes · 72 views
Agustin Lebron @AgustinLebron3
So many cautionary tales, most stay silent. Some don't, and credit to them. Every bit helps.
Keanu @KeanuLanu

I’ve personally had issues with it growing up, as I was introduced to it back when I was 17. I was lucky enough to inherit $16,000 from my grandparents yet stupid enough to gamble it away. It happened the worst way possible for me, because I started out winning 10x my money, from $500 to $5,000, playing roulette online, and I did it slowly. Then, just as slowly, I lost it all.

In frustration I deposited 25% of my inheritance to win it back. Then lost it all again. It was 4am and I had school in just a few hours as I sat there with my laptop in bed. I was emotional, felt shame, and, desperate to rid myself of those feelings, I wanted to win it back and justified it by saying I would stop if I just got to break even. So I deposited another 25%, but this time I made a promise to myself that this was going to be it, no matter how painful. I didn’t play it slow this time; I bet it all in one bet, and lost it all.

I can’t even begin to describe how terrified, ashamed, and disgusted I was with myself. Every fibre of my being wanted me to take the last remaining half and make my money back, but that night I overcame those urges and, in a defining moment of my life, stopped before it got too late.

I never told my parents, for obvious reasons, but if I had I could’ve gotten my money back, because it was illegal for them to allow me to play in the first place. I never realised that at the time, and even if I had, I’m not sure I would’ve been able to withstand the pain of telling them. That’s how gambling sends you down a dark path very quickly.

I don’t know where I would be today if I hadn’t managed to stop myself at the last minute. For that reason I’m personally deeply against gambling and I find it unethical, but I understand why people without that experience can think otherwise. I thought I would share to help others; it’s never too late to stop.

3 replies · 0 reposts · 39 likes · 6.3K views
efebic @efebic
@msimoni Bad take. Codex’s UX is better than ChatGPT’s, Claude Code’s UX is better than Claude’s, and btop is better than any OS’s perf monitor.
0 replies · 1 repost · 2 likes · 48 views
Manuel Simoni @msimoni
Gonna send this picture to anyone developing a TUI
Daniel Colascione @dcolascione

@msimoni It's a fad, nothing more. The people driving this are hipsters shopping at vinyl record stores. The pre-September era holds a certain cachet. This whole terminal emphasis is performing the technical rigor that built the internet by aping its most superficial qualities.

4 replies · 0 reposts · 24 likes · 2.1K views
efebic @efebic
Why do Chinese game devs in particular like to lock basic graphics settings behind 20 minutes of login screens and tutorials? The game launched with a messed-up resolution? Good luck!
0 replies · 0 reposts · 0 likes · 24 views
efebic @efebic
Aakash contradicts himself to diminish PewDiePie. It upsets strivers when an intelligent outsider participates in their special field of interest with no motive besides curiosity.
Aakash Gupta @aakashgupta

PewDiePie didn’t “train his own LLM.” He fine-tuned an existing open-source model on coding benchmarks. His model started at 8%, crawled to 16% after format fixes, and one run hit 19.6%, briefly passing GPT-4o on a single benchmark, before he couldn’t consistently reproduce it. The tweet makes it sound like a YouTuber casually built a frontier lab in his bedroom.

What actually happened is more interesting: a guy with a $41,000 home rig of 10 GPUs and 424GB of VRAM spent months failing, retraining, and iterating on dataset quality until he squeezed marginal gains out of a fine-tune. This is the part worth paying attention to.

The entire arc from October 2025 to now tells you where AI tooling has actually landed. PewDiePie went from building his first PC to running Qwen 235B locally, vibe-coding a custom chat UI, orchestrating multi-agent voting systems, and now fine-tuning models on custom datasets. He did most of this through AI-assisted coding itself. The video is literally called “I wish I never did this project.” He’s documenting how painful and tedious the process was. That honesty is the signal. The hype accounts strip that away and replace it with “what the f*ck, YouTuber beats DeepSeek.”

The real takeaway: fine-tuning on specific benchmarks with curated data can let anyone temporarily spike a score past models that cost hundreds of millions to train. That tells you everything about how narrow benchmark gaming has become, and nothing about general capability. PewDiePie knows this. The people quote-tweeting him with shock emojis do not.

0 replies · 0 reposts · 1 like · 138 views
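For readers unfamiliar with what “fine-tuning an existing open-source model” on curated data looks like in practice, here is a minimal sketch using Hugging Face transformers with a LoRA adapter from peft. The base model name, dataset file, and hyperparameters are illustrative assumptions, not what the video actually used.

```python
# Hypothetical benchmark-targeted fine-tune: train a small LoRA adapter on
# curated prompt/solution pairs formatted like the target benchmark.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "Qwen/Qwen2.5-7B"  # stand-in base model, not the one from the video
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains only small adapter matrices, so this fits on a home rig.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

def tokenize(batch):
    # Concatenate prompt and solution so the model learns the answer format.
    texts = [p + s for p, s in zip(batch["prompt"], batch["solution"])]
    return tokenizer(texts, truncation=True, max_length=1024)

# "curated_coding_pairs.jsonl" is a hypothetical curated dataset.
data = load_dataset("json", data_files="curated_coding_pairs.jsonl")["train"]
data = data.map(tokenize, batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=2,
                           per_device_train_batch_size=1),
    train_dataset=data,
    # mlm=False makes the collator copy input_ids into labels (causal LM).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Nothing here targets general capability; it teaches the model the format and distribution of one benchmark, which is exactly the quoted thread’s point.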
efebic @efebic
@SheriefFYI @kiaran_ritchie @timsoret Reading the post I expected some signal-processing wizardry, like a clever use of Gaussian pyramids, but this is a video model. “Purely interpreting pixels”, sure: pixels and terabytes of data. Also, wtf does “Very few humans can pull this off” mean?
[image attached]
0 replies · 0 reposts · 0 likes · 36 views
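For contrast with the learned video model, the “clever use of Gaussian pyramids” the tweet expected would be classical, data-free signal processing. A minimal sketch with OpenCV; the input filename is hypothetical.

```python
import cv2  # pip install opencv-python

def gaussian_pyramid(image, levels=4):
    """Repeatedly Gaussian-blur and 2x-downsample; coarse levels come last."""
    pyramid = [image]
    for _ in range(levels - 1):
        image = cv2.pyrDown(image)  # blur with a Gaussian kernel, then halve
        pyramid.append(image)
    return pyramid

frame = cv2.imread("frame.png")  # hypothetical single video frame
for i, level in enumerate(gaussian_pyramid(frame)):
    print(f"level {i}: {level.shape}")
```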
efebic @efebic
You can no longer sort by upload date on YouTube, only by slop algo or by popularity, which is also slop algo. Why is this pillar of international culture in the hands of abusive morons? Bring back the garage lady.
[image attached]
0 replies · 2 reposts · 12 likes · 549 views
Damek @damekdavis
We've now reached 30+ problems in the repo. Problem 11b has been solved by @PI010101 and collaborators! I've learned a lot about problem sourcing while helping to build this. I'll write a post about that at some point.
[image attached]
Damek @damekdavis

Paata Ivanisvili (@PI010101), Terry Tao, and I started a new repository of "Optimization constants in mathematics" that anyone can use to benchmark AI models or just make progress on by any means. Examples already in the repo include the Grothendieck constant, the Berry-Esseen constant, the matrix multiplication exponent, the cap set constant, and several others. We’re hoping to crowdsource a high-quality list of such constants and ultimately turn this into a website with a thriving community of contributors, similar to the “Erdos Problems” website. Currently we’re hosting the site on GitHub; I’ve put the link to the repo below. The repo includes instructions on how to make a contribution. Please let us know any suggestions for improvement!

8 replies · 11 reposts · 121 likes · 17.2K views
efebic @efebic
Why is the Urbit logo on this device?
Teknium (e/λ) @Teknium

.@altryne convinced me to let the agent have some room to grow and build on, and I just thought it’d be fun soooo

0 replies · 0 reposts · 0 likes · 37 views