rygo6
@_rygo6
4.5K posts
Joined August 2021
64 Following · 1K Followers
rygo6@_rygo6·
It's really weird to think that, for as advanced as society currently is, these newsfeeds probably shape the direction of humanity's future more than anything else. Millions of people scrolling a linear feed of text, video, and snippets arranged for them by an AI. With all the potential computers hold for human interaction, there is no way that scrolling a linear feed of asynchronous media snippets is the endgame of global human communication.
rygo6@_rygo6·
With the sense I have developed of what Claude Opus 4.6 is good at generating and where it struggles, something was telling me Fortran might be an optimal AI language. It is heavily procedural, with explicit declarations, few extra chars like { } or ;, clear simple primitives for complex parallel computing tasks, and fast compiles. However, all the research I've seen says it lags significantly behind C++ and Python. Not necessarily because it lacks qualities that could make it better suited to an AI, but because there is simply far more C++ and Python out there.

Which seems obvious, but there is a concerning nuance to it. If the qualities of reality are not enough by themselves to enable an AI to improve itself, then it really isn't "learning" in the typical sense we use the word. All it is "learning" to do is interpolate between the relevant qualities of reality that humans have already discerned for it. Which I think is an obvious truth about LLMs: they are extremely complex interpolation machines over existing knowledge. But that is fundamentally a different kind of intelligence from human intelligence. Human intelligence can navigate into complete chaos, the complete unknown, and through an aesthetic sense combined with empirical iteration conjure something out of that chaos. True creation. LLMs can only interpolate between such creations, I would speculate because they fundamentally lack the aesthetic sense. They can, however, be incredibly good at the empirical iteration if directed by a human sense of aesthetics.

It is probably the case that an LLM too unrestrained by human aesthetic sense will devolve. An LLM is at its peak when interpolating between large quantities of high-quality data produced by highly discerning human intelligence. If people generate code, commit that code to GitHub, and the LLM is subsequently trained off that code, over and over and over, until at some point all new training data is code previously generated by an AI, there will be a degrading quality to it. The data the AI generates could start to outweigh the higher-quality data that initially came from human discretion.

It's in this way I am starting to think LLMs should probably be seen not as a form of intelligence but more like an Auto-Library. Humans throw all their knowledge into it; it can sort through it, recognize the patterns, cross-reference, interpolate, see the complexities in between which humans could not initially. But if this is true, it could be pretty damning to the notion of fully automated software development. A company could generate large chunks of code, and at first it may seem generally good, because it was the first derivative interpolated out of high-quality data. But if they repeat that over and over, they may end up with a codebase no human understands, and the AI itself may begin recycling its own generated data. Almost like a form of data inbreeding, or making copies of a copy. It needs the discretion of human aesthetics constantly in the feedback loop to interject an element of counterbalance it could not produce itself.
rygo6@_rygo6·
@SebAaltonen ... sounds like you just need a CLAUDE.md to tell it. And is your span that works with initializer_list open-sourced anywhere?
Sebastian Aaltonen@SebAaltonen·
std::span doesn't support initializer lists. Our span does, but the LLM is pessimistic... the LLM writes a C array as a separate object :( C++ guarantees that function parameters (temporaries) outlive the function call, making initializer lists safe in a span.
rygo6@_rygo6·
This is the most intriguing thing to me in game tech right now, particularly in the indie space. Go backwards in hardware. Make a gaming ecosystem that can run on the simplest hardware possible. I believe this is significant for general applications too. I started working on a code editor some time ago. One of my primary intents was an ultra-portable, lightweight editor in the vein of Sublime Text. Sublime struggles to run on OpenBSD, on RISC-V SoCs, and in other places. So I built mine on raylib. So far my editor on raylib is just as lightweight as Sublime or Zed, if not more so. This is something raylib doesn't brag much about, and it can be lost in translation if raylib is viewed only as a gamedev library: you can build simple GUI applications that are lighter weight than pretty much any other GUI library I can think of. And now with this I'm feeling confident I'll end up with a graphical editor that can run on literally anything, which pretty much nothing can do right now. If you use a GUI widget toolkit, or use hardware acceleration for UI, there is some platform on which it will inevitably not run, or be funky in some way.
Ray@raysan5

Announcing rlsw, the new raylib software renderer backend. No GPU required. The Future is Now. #raylib

rygo6@_rygo6·
I am realizing that AI can essentially be like an optimized library. I know that doesn't make sense on the surface, so let me explain. When you get an AI well tuned with past dialogues and directive files, you can start to have it do a lot of stuff, really well, automatically. Of course it can't do everything; it has a narrowly defined subset it can be really good at. At which point it essentially becomes a library. But instead of importing some external mass of code into your project, which probably has a whole bunch of conditionals and extra bloat to deal with a wide array of conditions, the AI can inject the operations you need directly into your codebase, tailored specifically to your codebase and your exact problem. In a way I like this a lot better than libraries. The output of such a tailored AI will always be magnitudes more comprehensible and tuned to your exact problem domain than an entire generalized library, because it can omit the majority of the library. It will compile faster too. This is already fully possible right now; the only problem is you need a sense of what your tuned AI is already good at, which could be really hard for someone new coming to it. There needs to be some kind of functionality so an AI can discern when it has low confidence in a solution to your specific need and tells you. This might be the better way to think about AI: not as a generalized programmer which can do anything, but something which can auto-fill tailor-made operations you'd typically get from a library, and which someone must still manually fine-tune beforehand to be good at that specific problem domain.
rygo6@_rygo6·
I generally dislike documentation too and just read the source code. Although lately this has been my workflow:
- Open Claude Code in the project folder.
- Ask Claude to read the project.
- If the project uses Vulkan, OpenXR, or similar, point Claude to those specs and their source folders.
- Ask Claude questions about the project.
- Ask Claude to draw up overviews or diagrams of any part of the system.
This is magnitudes better than any documentation and lets me read through the code faster, as it can point me to the more critical portions while summarizing the less critical parts. Ironically, I find asking Claude is usually better than asking the person who wrote the code, because the person won't remember as many specific details. Also, if you make a point to clone the git repos, then Claude can read all the git history, so you can ask about the progression of or changes to any particular system over time.
gingerBill@TheGingerBill·
I do wonder how people read documentation at times, since I am far from the norm. I read a lot of source code of even heavily documented stuff, because it is usually quicker for me to understand how something works by reading it. What do y'all do?!
gingerBill@TheGingerBill·
@oxcrowx The problem isn’t compiling for a specific architecture. The problem is dealing with how each OS does things differently and you cannot abstract that away perfectly. Write once run anywhere is a myth.
oxcrow@oxcrowx·
Software distribution is messed up. I don't want to compile the same thing for multiple OSs. It'd be nice if we could compile a portable IR that runs on a VM (like Java, C#), and if users want highest performance, they can compile the IR to native executable, on their machine.
rygo6@_rygo6·
@raysan5 Even if you write every line of C by hand, having Claude Code in the project is still incredibly valuable for dozens of other things.
Ray@raysan5·
I got the feeling that despite all this AI-coding-rumble-bumble, there are still some great developers silently creating amazing projects in the most genuine handmade-coding way... and those devs are truly the last chance for humanity.
rygo6@_rygo6·
@jmdagdelen My standard is 27" 4k set to 100% DPI. Or a 16" laptop screen.
John Dagdelen@jmdagdelen·
@_rygo6 They’re useful on really big monitors. I’m guessing you tend to only use a single one?
rygo6@_rygo6·
Trying out Hyprland just because I never have. It's nice, but personally I still just do not understand the appeal of tiling window managers. The same goes for i3, sway, dwm, any of them. It is extremely rare that I ever use two windows simultaneously and need them side by side. I do not want half, or even a quarter, of my screen constantly dedicated to any given window. If I use the browser, I want the browser to take up the majority of my screen. If I use my editor, I want that to take up the majority of my screen. I don't want the other windows all occupying chunks of space, taking away from the main one I am using. Am I missing something here? Do people who use tiling window managers really lay their applications out side by side, then just use each one in its quarter of the screen? Do you really use the file manager stretched into one half, or quarter, of the screen?
rygo6@_rygo6·
I'm starting to think 2-space tabs are the way to go. ASCII tab chars are one of those old modalities inherited from the early days; using them is really just a formality. With some careful juggling of tabs and spaces you can keep indentation aligned, but that is extra steps for very little. Also, if you want to put a nice ASCII diagram over some code, it will not align for all tab sizes, so tabs can disincentivize better in-code documentation. If you write define blocks with \ continuations, there is no way to use tabs and keep all the \'s aligned at the end for every tab width, and keeping the \'s aligned I think is just nicer. It is really not a big deal to switch between 2- and 4-space codebases; the visual difference can actually help signify which codebase a file comes from. If it's really a big deal, 2 spaces could be rendered as 4. Double-tapping the space bar as the equivalent of tab is nice for muscle memory, and 2-space indents let you fit more on screen. 4 spaces I think is just because people wanted to match the default tab width, but 2 spaces work just fine. If we were not inheriting the modality of ASCII tab chars, I think most would choose 2 spaces from first principles. Implementing ASCII tab char indentation in my text editor is an additional step I don't feel like doing.
rygo6@_rygo6·
I discover new things every day that Claude Code can 'just do' without issue. Let's say you have a method whose performance you are not sure of. Ask Claude to write a quick test for that particular method, or set of methods, with the variations you need to test; run the tests; output a table and analyze. If some algorithm has constants which need to be tuned based on benchmark results, Claude can automatically infer ideal defaults from those results. Graphics programming, or anything dependent on specific hardware, requires a lot of this, because you can never just assume the nature of hardware without testing it. Sure, I can do that myself in not too long. Maybe an hour, maybe 30 minutes, to write out some tests and look through the outputs. However, Claude can do it in 10 seconds. What could easily consume a week of pedantic, exhaustive micro-tests and benchmarks to constantly probe hardware can now be a continual background process which takes little to no time.
rygo6@_rygo6·
Got a new Mac to develop on. Whoever at Apple decided to have this enabled by default really needs some feedback. I perused my text editor settings several times in the past week trying to find what setting was adding a random extra period when I typed spaces. Turns out someone thought that was a good idea to make a system-wide default.
rygo6@_rygo6·
I suspect the only remaining high-value skill in programming will be in the categories of embedded, GPU, and high-performance computing: how to make things go as fast as possible and be as resource-efficient as possible on constrained hardware. AI will be able to codegen decent high-level code for most general use cases, and in a lot of scenarios getting meticulous and pedantic about processor and memory constraints doesn't matter. However, when something needs meticulous low-level concern about exactly what you are doing with your memory and your instructions to get top-tier performance and memory consumption, you will need an experienced programmer. An AI will always struggle to do it as optimally as possible, because to get top-tier performance you must translate your specific needs into the constraints of the specific hardware you are targeting. You must conceive of your problem in the context of how that specific memory, processor, threading model, and cache all function. You must be able to break your problem down and design something that conforms to the hardware, thinking in the terms of the hardware itself. You might be able to use AI to help understand the nuances of particular hardware, and once you have invented a solution which fits your problem to that hardware, you might be able to have AI write some of it. But this all requires intricate technical knowledge, even with AI helping. It will be a niche and sought-after skillset, as companies end up pumping out piles of AI slop. At some point, for some of them, something will need to run as fast and efficiently as possible on some specific piece of hardware, at which point someone with a knack for translating a given problem into the constraints of that hardware will have to sort through it manually.
Personally, this is part of why I started to focus on C, Vulkan, and extremely pedantic, efficient programming practices several years ago. I believe it will be the only remaining programming skillset of high value.
rygo6@_rygo6·
This is neat. I can take old shaders of mine, drop them into Claude, and say "add the boilerplate so this can run in Shadertoy". It then automatically converts all the portions that were idiosyncratic to my custom Vulkan engine into whatever boilerplate Shadertoy needs. Here is an infinite grid effect I made using a DDA that I had as the background of my VR compositor. Although it's decently performant with only 8 or 16 steps, I concluded that's still too much overhead for just a background.
rygo6@_rygo6·
@HBloodedHeroine That's a good point to highlight. I wish there was some study that tried to keep the problem set more consistent.
HotBloodedHeroine@HBloodedHeroine·
@_rygo6 complete misread of that study, the exercises in each language are custom to the language and in no way related problems, elixir problems were just the easiest, that's it. AI is the most productive where you'd expect it to be
rygo6@_rygo6·
I haven't seen many people asking, nor much research on, what's the best language for an AI to write. I found this one study: revelry.co/insights/artif… which shows Elixir performing best, although that is on coding problems, not a whole complex codebase. Someone probably needs to take the Anthropic 'Generate a C Compiler' experiment and run the same experiment across all the languages. One could speculate Elixir does best because it is a simpler language, both in its keywords and in its default control flow being more explicit. Or maybe it's just training data? I suspect the language itself does affect this. It might turn out that something like Nelua, a Lua dialect that compiles to C, is the best for an AI to wield, as it's simpler, with fewer features, fewer characters, and more straightforward control flow. Or maybe something like Nim or Genie would be good here. All of those transpile to C, so you would get a sort of 'soup' of AI-generated logic in C, but represented by a higher-level language, that you could still directly call into from C.
rygo6@_rygo6·
My running theory on the 80/20 rule applied to AI codegen. If you start from very little, or nothing, AI will cover 80% of your needs very quickly and effectively. However, that last 20% becomes incredibly difficult to deal with, because the AI probably generated a lot of code that is simply bad and hard to work with, even for the AI itself. You might spend more time dealing with that last 20% than if you had written the whole thing from scratch. Now flip that, and say you wrote the first 20% of the code manually, to a high quality, largely focusing on methods and data structures, and the central patterns or styling for it to follow. At that point it can probably generate the remaining 80% to a high quality. Basically, those who don't really know how to program get sucked into vibe coding, starting from very little; it gives them a ton of functionality really fast, but at some point it reaches an unworkable condition where doing anything takes an insurmountable amount of time, or it just can't be done right at all. However, those who front-load the most difficult parts, meticulously putting together a good API, data structures, and patterns to follow, investing a lot of mental energy and stress in that first 20%, can then coast downhill afterwards and lean on AI pretty heavily to "interpolate" the code in between. The real game here is developing a very acute instinct for how exactly you'd write that first 20% so you can coast the rest of the way on AI.