Pixelomo
@suth_a
Free-range pixel herder working the pixel prairies way out east...
Tokyo-to, Japan · Joined December 2012
403 Following · 214 Followers

@loktar00 @CodePen @YoheiNishitsuji Shaders are king, so much more powerful than anything you can do with JS

@suth_a @CodePen @YoheiNishitsuji Shaders are one of those rabbit holes I keep falling back into every few months. Something about watching the math turn into visuals is just endlessly satisfying. lol, I messed with it and made it a little 'negative' - codepen.io/loktar00/pen/M… Cool pen, thanks for sharing!


#つぶやきGLSL float i,e,g,R,s;vec3 q,p,d=vec3((FC.xy-.5*r)/r,.57);for(q.yz--;i++<79.;){o.rgb-=hsv(.58,R+g*.18,e-e*i/4.5);s=2.8;p=q+=d*e*R*.6;g+=p.y/s;p=vec3(R=length(p),exp2(mod(-.25-p.z,s)/R),p);for(e=--p.y;s<1e3;s+=s)e-=abs(dot(sin(p.xzy*s+e*p.y),cos(p.zzz*s-e))/s*.32);}

Yohei Nishitsuji @YoheiNishitsuji
My shader artwork has been selected on exhibit for a full week at the Osaka City Museum of Fine Arts.

Ok, my final GPT-5.3 Feedback:
- It is the best model for compiler work
- It writes code carefully and generates bug-free code
- It is capable of executing incredibly hard prompts
- Definitely the smartest model available IMO
Problems:
1. It is NOT capable of grasping intent.
It will just take your prompt at face value, no matter how obvious the intent is, and in many cases that makes it EXTREMELY frustrating to work with. Sometimes it finds an interpretation of my literal words that I couldn't even anticipate. Working with GPT-5.3 is a test of patience, and a good part of the job is making sure I anticipate all the possible dumb ways it could interpret my prompt and write exact words to steer it away from each potential interpretation. And then it still finds a way.
2. It is a merciless complexity monster.
When it comes to writing code, it has no shame. It is careless. It will just add, add, add, and never remove or clean up. Even worse, it will often add nearly identical functions instead of just using or adapting what already exists. That goes against its own interests, because past a certain threshold it will start under-performing (like all models). Often, after I ask for a feature, I'll just write a follow-up prompt like "your code works but is way longer than needed, your goal now is to simplify it as much as possible", or variations of that.
3. It still forgets everything the day after.
Not much to say about this, obviously a fundamental issue with LLMs that is *not* satisfactorily solved with memory or agents.
And that's it.
I strongly suggest OpenAI take these 3 aspects seriously and explicitly train for them.
Regarding 1, Opus handles that just fine, so I'm sure there's a way.
Regarding 2, it shouldn't be hard, but it has to be done carefully, because if you just try to minimize token count, the model will tend to *minify* the code (use short variable names, make code-golf-like uglifications). That is NOT what you want. You want to train it to reduce code size by:
A. Removing redundancies. If some functionality is already implemented, the model should FIND IT and USE IT. Sometimes this will require modifications, but that's always better than writing the same logic twice.
B. Abstracting the common pattern out. Often there will be 2 long functions, F() and G(), that can be merged into a parametrized function FG(), so that F() and G() become specialized instances of FG(). This is universally desirable, and teaching a model to do it will yield amazing results in practical productivity.
C. Using simpler logic whenever possible. Sometimes there is just a simpler way to implement an algorithm or procedure. You should teach the model to favor that.
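Point B above can be sketched in a few lines of plain JavaScript (the function names here are made up for illustration): two near-identical functions collapse into one parametrized helper, and the originals become thin wrappers around it.

```javascript
// Before: two near-duplicate functions, the kind of duplication point B targets.
// (formatUserCsv / formatOrderCsv are hypothetical names, not from any real codebase.)
function formatUserCsv(users) {
  return users.map(u => [u.id, u.name].join(",")).join("\n");
}
function formatOrderCsv(orders) {
  return orders.map(o => [o.id, o.total].join(",")).join("\n");
}

// After: the common pattern is abstracted into one parametrized function...
function formatCsv(rows, columns) {
  return rows.map(row => columns.map(c => row[c]).join(",")).join("\n");
}

// ...and the originals become specialized instances of it.
const formatUserCsv2 = users => formatCsv(users, ["id", "name"]);
const formatOrderCsv2 = orders => formatCsv(orders, ["id", "total"]);
```

The same shape applies whether F() and G() are 5 lines or 500: the shared logic lives in one place, and each specialization is just a parameter choice.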
Regarding 3, until there is a major breakthrough that solves continual learning, I think OpenAI should work on a product that lets us at least mitigate the issue. Some people claim to have luck with nightly LoRAs. Being able to do that with codex models on my domain would be amazing.

@suth_a @DabbaNetwork It’s facts, check the explorer data.
Dabba is extremely keen on web3 and has observed the downfalls of launching a token before your products/services have net-sustaining revenue

TGE is approaching fast for @DabbaNetwork and they are on track to launch in the green!
548K DAU
110K hotspots deployed
$5.2MM ARR
Updated burn mechanics to percentage of revenue vs per DC pricing. There's also perhaps a very cool app that's on the way 👀
dabba-onchain-mvp.vercel.app


@BuckGup1108 @blockgraze It's not garbage; you just don't know how to use it. Spotify's lead dev stated that none of their team have written code for the past year. Try Claude Code Agent Teams: you can build a complex web app in an hour.

@blockgraze If you speak to any senior dev at a top-100 tech company, none of them are impressed. If anything, they are annoyed, as it's incredibly sloppy to use as a day-to-day tool.
For the billions spent it’s garbage. LLMs are not it for actual work use cases

@AppleSupport ELI5 why my iCloud storage was at 5.1GB, I had 50K emails, I deleted 10K, and now my storage is at 5GB?

#fujirock @amazonmusicjp how sh!t to cut the live feed of @RADWIMPS halfway through. What are you doing? Don't pretend you couldn't afford it

@notch
Can I help build anything for you?
Magic Dust Redux: Shaders Are King codepen.io/pixelomo/pen/x…

If you haven't tried Meshy yet it's amazing, this link gives you a free month, no card details required. Image to Mesh is insane #meshy #threejs
meshy.ai/?utm_source=re…



I hadn't considered asking the LLMs to port a whole code base 😯
💊 Keith Moon 💊 @Eth_Moon_
I ported Elite, the classic 3D Space game from 6502 Assembly to Node and Three.js using Grok3 and Claude 3.7. Yes, Grok3 can handle 38K lines of assembly. With Grok3 and Claude Code (3.7) I was able to rapidly get to a working prototype. If you're using Claude Code, keep in mind it's still experimental, so probably don't use it on any production code just yet. 😂 Hence I made a throw away game just to try it out on. I'll open source the code shortly. Right.. back to work.




