ricursive

177 posts

@ricursive

Joined December 2022
38 Following · 7 Followers
Mark Manson
Mark Manson@Markmanson·
Beware: learning more is a smart person’s favorite form of procrastination.
75
152
1.6K
35K
ricursive
ricursive@ricursive·
@rfleury You admitted I'm in the field of computer programming. I could never leave now. 😉
0
0
0
161
Ryan Fleury
Ryan Fleury@rfleury·
@ricursive Please leave the field of computer programming and never ever think about re-entering it
1
0
5
778
ricursive
ricursive@ricursive·
@sama UX feature request. Can ChatGPT web and Codex get message timestamps enabled by default?
0
0
0
30
ricursive
ricursive@ricursive·
@rfleury I think it depends on how the model was trained. Take Huffman coding as an example: the most frequently recurring pattern gets the fewest bits. Maybe a single sentence, or even a word, can be enough to reproduce that verbatim or in different permutations.
0
0
0
20
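The Huffman analogy above is easy to make concrete: the greedy tree merge guarantees that the most frequent symbol ends up with the shortest code. A minimal sketch in Python (the `huffman_codes` helper is illustrative, not something from the thread):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: frequent symbols get the shortest codes."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, {symbol: code_so_far}).
    # The tiebreaker keeps tuple comparison from ever reaching the dict.
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("aaaaaaabbbccd")  # a:7, b:3, c:2, d:1
# 'a' (most frequent) gets the shortest code, the rarest symbols the longest
assert len(codes["a"]) < len(codes["c"])
```

Over the sample string, 'a' (7 occurrences) gets a 1-bit code while the rarest symbols get 3 bits: the "most common pattern, fewest bits" property the reply appeals to.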
Ryan Fleury
Ryan Fleury@rfleury·
“Vibecoding”, i.e. ~hands-off usage of LLMs to rapidly generate code without regard for the actual code’s contents, for novel applications, can literally never be non-slop, because—as I’ve described before—there are not enough bits of information content in prompts to express the user’s exact desires in sufficient detail, and the desired solution is not expressed in training data (due to the problem’s novelty).

Only a sentient human developer can relate to another human user to determine what is desirable, design the software such that it accomplishes this desirable outcome, and carefully verify that it is doing that, rather than something else (potentially undesirable).

This is true even for the combinatoric space implied by the training data, for instance if the novel problem is merely novel in that it combines pieces of existing solutions. There needs to be a guiding force to know what to combine and how. The more detailed the prompt becomes, and the more human oversight there is (the more human-guided round trips with the LLM), the closer it becomes to actual code (i.e. detailed execution instructions for a computer).
Ryan Fleury@rfleury

@yacineMTB Contradiction of terms

56
61
901
53.9K
ricursive
ricursive@ricursive·
@rfleury @bkaradzic Like he said, if you were to repurpose this for WebGL. I had horrible performance issues with state changes years ago too, until I cached them; for glBindTexture that's probably fine, but maybe Chromium's or Firefox's GPU "driver" shim has fixed it by now.
0
0
0
30
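The state-change caching mentioned in the reply above can be sketched generically: shadow the last value set for each piece of state and skip the driver call when it would be redundant. A Python sketch (Python stands in for the actual C/JS GL code; `make_cached_binder` and the stub bind callable are hypothetical):

```python
def make_cached_binder(gl_bind_texture):
    """Wrap a bind call so redundant state changes are skipped.

    `gl_bind_texture` stands in for the real glBindTexture: any
    callable taking (target, texture)."""
    last = {}  # target -> currently bound texture (shadow state)

    def bind(target, texture):
        if last.get(target) == texture:
            return  # state already set; skip the (potentially slow) driver call
        last[target] = texture
        gl_bind_texture(target, texture)

    return bind

# Record which calls actually reach the "driver".
calls = []
bind = make_cached_binder(lambda target, tex: calls.append((target, tex)))
bind("TEXTURE_2D", 1)
bind("TEXTURE_2D", 1)  # redundant: filtered out by the cache
bind("TEXTURE_2D", 2)
assert calls == [("TEXTURE_2D", 1), ("TEXTURE_2D", 2)]
```

The same shadowing idea applies to any sticky GL state (glUseProgram, glBindBuffer, and so on), with the caveat that the cache must be invalidated if anything else touches the context behind its back.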
Ryan Fleury
Ryan Fleury@rfleury·
@bkaradzic This still causes significant performance penalties on WebGL? I mean, it should probably not be an issue if you are not switching pipelines all the time? Which is ideally what you have, if you batch things appropriately.
4
0
12
4.4K
Ryan Fleury
Ryan Fleury@rfleury·
Stop writing OpenGL state machine management bugs with this one weird trick
Ryan Fleury tweet media
20
4
406
56.2K
ricursive
ricursive@ricursive·
@TravelerOfCode @Misfortuneee I don't trust AI blindly in my domain at all, but that's why I make use of it: I know when it's fucking shit up. It's also not true; you don't need to be good at drawing things to recognize good art.
0
0
0
64
TravelerOfCode
TravelerOfCode@TravelerOfCode·
@Misfortuneee Everyone trusts AI in the domain they can't evaluate. It's always the other side that looks unreliable. Blind spots are symmetrical.
2
0
35
2.4K
Fortune K.
Fortune K.@Misfortuneee·
Talked to someone who said, "As a programmer, vibe coding is so bad and unreliable, gen AI is really only good for art and animation" And when I said the same thing in reverse he got defensive lol
84
1.1K
27.1K
346.3K
Alex Goldring
Alex Goldring@SoftEngineer·
Apparently animating more than ~20 characters in modern graphics engines is a big deal 😅 324 skinned characters animating independently in WebGPU (browser). Each character is animating with a completely separate skeleton and timeline. No two characters sample the same time. Each character has 66 bones and 28,106 triangles. I want to stress that there is no instancing of any kind here, and the CPU is not involved at all.
31
30
397
56.6K
ThePrimeagen
ThePrimeagen@ThePrimeagen·
I love the efficiency of the modern system
363
152
8.5K
574.9K
Noah
Noah@NoahKingJr·
TELL ME SOMETHING YOU CAN DO THAT CLAUDE CANNOT
3.1K
71
1.8K
899.8K
ricursive
ricursive@ricursive·
@SebAaltonen I don't know how useful AGENTS.md files are. I never use them myself; because of ironic process theory, I just assume the model could end up doing anything you tell it about. Sure, without the file it could also do that, but I think it's less likely. I agree that you should review LLM code though.
1
0
2
935
Sebastian Aaltonen
Sebastian Aaltonen@SebAaltonen·
You have to review all LLM code! Codex 5.5 tried to push this awful hack to our Metal backend when it was coding font rendering. It decided to implement a hacky "robust buffer access"-style OOM check inside the shader, and hacked our whole Metal binding architecture to add a special bind group slot 30 (hardcoded) to deliver the sizes of all buffer bindings. This of course made the binding model super slow and required extra data for each buffer.
Sebastian Aaltonen tweet media
38
26
685
87.6K
ricursive
ricursive@ricursive·
@abdimoalim_ I don't think it matters. The problem would be having enough abundance to work on computers instead of just surviving. Convincing people to make computers is the difficult part. There wouldn't be enough time and resources either way, so generations would forget until we start over, or don't.
0
0
4
1.7K
@abdimoalim.bsky.social
@abdimoalim.bsky.social@abdimoalim_·
Only a few individuals would be able to reconstruct a computer from first principles in the event of a civilizational collapse.
232
208
5.7K
176.3K
ricursive
ricursive@ricursive·
@thsottiaux Did you recently reduce the cybersecurity warnings? I kept getting these warnings all the time, and they even stop Codex too. This week has been awful, and it seemingly stopped, but I'm not sure.
0
0
0
12
Tibo
Tibo@thsottiaux·
What are we obviously not getting right with Codex?
2.8K
29
2.5K
611.2K
Sam Altman
Sam Altman@sama·
i keep thinking i want the models to be cheaper/faster more than i want them to be smarter but it seems that just being smarter is still the most important thing
2.4K
386
13.2K
1.1M
ricursive
ricursive@ricursive·
@TheGingerBill The same reason you started making your own programming language perhaps.
0
0
0
2.6K
gingerBill
gingerBill@TheGingerBill·
I don't know if a lot of people have thought about why this happened. To make Linux viable for the layman, Valve had to make Proton (derived from Wine) so that the Win32 API became the first and only stable ABI on Linux. Why did Linux distro devs not care about a stable ABI historically?
sudox@kmcnam1

125
81
2.4K
633K
ricursive
ricursive@ricursive·
@nicbarkeragain Are we calling common sense "techniques" now? Has AI fried everyone's brain?
1
0
0
1.2K
ricursive
ricursive@ricursive·
@trq212 When you forget to launch Claude with --dangerously-skip-permissions, then close it and relaunch with --resume, does that invalidate the entire cache? For some tasks I run on a VM, I sometimes forget the flag and then quickly reopen with --resume.
0
0
0
26
ricursive
ricursive@ricursive·
@SebAaltonen Have you tried GPT-5.3-Codex-Spark? I heard it was a bit worse, but faster and more finetuned for code. Not sure if I want to waste my time with it though.
0
0
0
285
Sebastian Aaltonen
Sebastian Aaltonen@SebAaltonen·
First time I've hit the limits of both of my ChatGPT plans at the same time. Codex 5.5 xhigh seems to be consuming tokens like crazy.
25
2
122
14.1K
ricursive
ricursive@ricursive·
@saynothingetal @francoisfleuret It'll just move the bottleneck and eventually clog it up at the senior software engineer, and you're also burning him out if you keep sending him slop. You already see it happening with open-source PRs: maintainers don't accept AI-slop PRs anymore.
0
0
0
16
saynothingetal
saynothingetal@saynothingetal·
@francoisfleuret You’re deluding yourself. I’ve seen engineers who were about to be fired become passable after adopting AI so long as a senior reviewed their code with scrutiny.
1
0
0
195
François Fleuret
François Fleuret@francoisfleuret·
I have the impression that using coding agents for complex tasks requires a sense of feasibility and consistency of the results that can only come from years of actual programming. Am I deluding myself? Can a sub-par programmer really shine with agents?
112
10
461
39.5K