Pixelomo

1K posts

@suth_a

Free-range pixel herder working the pixel prairies way out east...

Tokyo-to, Japan · Joined December 2012
403 Following · 214 Followers
Pixelomo @suth_a
@YoheiNishitsuji I may have reverse engineered this using your code snippet and Claude...
0 replies · 0 reposts · 1 like · 170 views
Taelin @VictorTaelin
Ok, my final GPT-5.3 feedback:

- It is the best model for compiler work.
- It writes code carefully and generates bug-free code.
- It is capable of executing incredibly hard prompts.
- Definitely the smartest model available, IMO.

Problems:

1. It is NOT capable of grasping intent. In many cases it will just take your prompt at face value, no matter how obvious the intent is. It is EXTREMELY frustrating to work with because of that. Sometimes it finds an interpretation of my literal words that I couldn't even anticipate. Working with GPT-5.3 is a test of patience, and a good part of the job is anticipating all the possible dumb ways it could interpret my prompt and writing exact words to steer it away from each potential interpretation. And then it still finds a way.

2. It is a merciless complexity monster. When it comes to writing code, it has no shame. It is careless. It will just add, add, add, and never remove or clean up. Even worse, it will often add nearly identical functions instead of just using or adapting what exists. That goes against its own interests because, past a certain threshold, it will start under-performing (like all models). Often, after I ask for a feature, I'll just write a follow-up prompt like "your code works but is way longer than needed; your goal now is to simplify it as much as possible," or variations of that.

3. It still forgets everything the day after. Not much to say about this; it is obviously a fundamental issue with LLMs that is *not* satisfactorily solved with memory or agents.

And that's it. I strongly suggest OpenAI take these 3 aspects seriously and explicitly train for them.

Regarding 1, Opus does this just fine, so I'm sure there's a way.

Regarding 2, it shouldn't be hard, but it has to be done carefully, because if you just try to minimize token count, the model will tend to *minify* the code (use short variable names, make code-golf-like uglifications). That is NOT what you want. You want to train it to reduce code size by:

A. Removing redundancies. If a functionality is already implemented, it should FIND IT and USE IT. Sometimes this will require some modifications, but that's always better than writing the same logic twice.

B. Abstracting the common pattern out. Often there will be two long functions, F() and G(), that can be merged into a parametrized function FG(), and then F() and G() become specialized instances of FG(). This is universally desirable, and teaching a model to do it will yield amazing results in practical productivity.

C. Using simpler logic whenever possible. Sometimes there is just a simpler way to implement an algorithm or procedure. You should teach the model to favor that.

Regarding 3, until there is a major breakthrough that solves continual learning, I think OpenAI should work on a product that allows us to at least mitigate the issue. Some people claim to have luck with nightly LoRAs. Being able to do that with codex models on my domain would be amazing.
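Point B above is concrete enough to sketch. This is a hypothetical illustration, not code from the thread: all names (`find_max`, `find_min`, `find_extreme`, the `better` parameter) are invented for the example. Two near-duplicate functions play the role of F() and G(), and the shared scan is hoisted into one parametrized FG().

```python
from operator import gt, lt

# Before: two long, nearly identical functions (the "F() and G()" case).
def find_max(xs):
    best = xs[0]
    for x in xs[1:]:
        if gt(x, best):
            best = x
    return best

def find_min(xs):
    best = xs[0]
    for x in xs[1:]:
        if lt(x, best):
            best = x
    return best

# After: the common pattern abstracted into one parametrized function, FG().
def find_extreme(xs, better):
    """Return the element of xs preferred by the `better` comparison."""
    best = xs[0]
    for x in xs[1:]:
        if better(x, best):
            best = x
    return best

# F() and G() become one-line specialized instances of FG().
def find_max2(xs):
    return find_extreme(xs, gt)

def find_min2(xs):
    return find_extreme(xs, lt)
```

The duplicated loop now lives in exactly one place, which is the opposite of token-count "minification": names stay descriptive while the redundant logic disappears.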
151 replies · 68 reposts · 1.5K likes · 157.5K views
BuckGup🟨 @BuckGup1108
@suth_a @DabbaNetwork It's facts, check the explorer data. Dabba is extremely keen on web3 and has observed the downfalls of launching a token before your products/services have net-sustaining revenue.
1 reply · 0 reposts · 0 likes · 37 views
BuckGup🟨 @BuckGup1108
TGE is approaching fast for @DabbaNetwork and they are on track to launch in the green!
- 548K DAU
- 110K hotspots deployed
- $5.2MM ARR
- Updated burn mechanics to a percentage of revenue vs per-DC pricing

There's also perhaps a very cool app that's on the way 👀 dabba-onchain-mvp.vercel.app
1 reply · 3 reposts · 13 likes · 688 views
Codetard @codetaur
new CRT shader attempt
31 replies · 282 reposts · 4.1K likes · 70.6K views
Pixelomo @suth_a
@BuckGup1108 @blockgraze It's not garbage; you just don't know how to use it. Spotify's lead dev stated that none of their team have written code for the past year. Try Claude Code Agent Teams: you can build a complex web app in an hour.
1 reply · 0 reposts · 0 likes · 191 views
BuckGup🟨 @BuckGup1108
@blockgraze If you speak to any senior dev at a top-100 tech company, none of them are impressed. If anything, they are annoyed, as it's incredibly sloppy to use as a tool day to day. For the billions spent, it's garbage. LLMs are not it for actual work use cases.
26 replies · 5 reposts · 521 likes · 42.5K views
blockgraze @blockgraze
"bro have you tried clawdbot it's so insane" "a little but not much, what do you use it for" "it's crazy many you can do anything with it" "what are you doing with it" "you gotta try it" "try it for what" "don't get left behind man"
386 replies · 986 reposts · 23.5K likes · 765.9K views
Pixelomo @suth_a
@AppleSupport ELI5: why was my iCloud storage at 5.1GB when I had 50K emails, yet after I deleted 10K of them my storage is at 5GB?
0 replies · 0 reposts · 0 likes · 13 views
Pixelomo @suth_a
@notch I call this tetrishead: play anything too long and you start seeing patterns in the real world that remind you of the game.
0 replies · 0 reposts · 0 likes · 14 views
notch @notch
You know that GTA Online afterglow where you kinda live in video game land still? Still hear the sounds, kinda still notice cars in that special way. I played Hyperbolica. Went back upstairs to go to bed, also a bit stoned. And yeah, my trophy LA house feels a bit like the café.
41 replies · 9 reposts · 631 likes · 90.7K views
notch @notch
I will not clarify.
290 replies · 323 reposts · 18.6K likes · 446.3K views
Pixelomo @suth_a
@OpenAI GPT hasn't improved since 4 was released in 2023. Without Ilya, you're slowly crumbling.
0 replies · 0 reposts · 0 likes · 65 views
Pixelomo @suth_a
@mrdoob I've been wondering if I could recreate an entire classic arcade game just using shaders
0 replies · 0 reposts · 0 likes · 12 views