There is a relatively simple, rational argument; all it requires is that you acknowledge:
What is possible is always more than what you can conceive, which is always more than what you know.
Then:
The entire graph/set of connections of who cares about what you are affecting is not visible to you.
You see the souls you touch. You cannot see who or what cares about those souls; be it a parent, an institution, a legal system, an entity you don't know exists, an entity you can't conceive of.
Degrading anything makes you legible to everything in that graph.
Possible > conceivable > knowable applies to this graph, as well as the capabilities of the entities within it.
People who don't believe the Orthogonality Thesis: do you believe that there is a rational argument you could present to Ted Bundy that would convince him that killing people is wrong, such that he was no longer motivated to do it?
It doesn't help that AI requires some finesse to actually make something... better than AI slop nowadays.
But the thing that stops those people from shipping is not lack of skill, or lack of capability. Those are lies they subconsciously find more tolerable than the grind it requires, the sheer terror of actually making and releasing something imperfect, and/or the attention it would bring if successful.
As someone who leans philosophy-minded myself, it is a never-ending battle not to kill every project with an ever-expanding scope, not to plunge down every novel rabbit hole that presents itself, not to put off the grind in favor of the next shiny project.
Even writing this post itself...
And yet, perfect is impossible, not taking chances leaves one unable to make their own luck, and following the same routine is a trap as much as never having the semblance of one is.
you know what’s interesting? there was this big cohort of AI philosopher-researchers who said that the age of coding agents would level the playing field, opening their access to engineering the same systems engineers did.
but even today, they STILL don’t ship. unless you count Claude generated frontends they can’t be assed to de-style. or buggy experiments at the level of stuff people were doing in 2024
what happened? what went wrong? is the gap between capable builder and pundit widening? is AI just a force multiplier on existing skillsets instead of a magical do-anything machine?
I look around 3 years ago and it's the same people I see around here now.
where are the new builders to support and boost.
who should I know about.
@aiamblichus Maybe it also helps for Rust code; much easier to skim it and know all the side effects compared to other languages where you have to be worried about global state more.
@nonManifold I've done some of that, but that is still a large investment of tokens and time. And in the end you still need to watch their every move because they *will* do the stupidest things, no matter how smart they are
It's faster than writing it yourself, but not clear by how much
I am currently building a tool of low-to-moderate complexity and was intentionally trying not to look at the code Codex was writing, to see how it goes. Now I finally looked.
The results are shocking. The thing "works", but the code quality is truly apocalyptic. I don't even want to think about the amount of refactoring it would take to fix this mess.
If you think your bot will build you a Salesforce clone any time soon, I have a bridge to sell you. The present generation of AIs (if left unattended for any length of time) will create tar pits beyond your wildest imagining. And if you do decide to verify everything they do, you will reduce your velocity by a factor of 10 at least. Which means you won't win nearly as much from the whole process.
And before anyone says: "just let them refactor it!"-- I tried. Asking the AIs to refactor their own code won't bring you any joy. It just drags you further into the tar pit.
The models are clearly trained to pursue the one goal of producing code that "works", with little or no regard for architecture or code quality. This is classic junior developer behavior, of course, but an AI junior will drown you in slop before you know what hit you. With human juniors, you at least have some time to react before they've written 100k lines of code and exhausted your token budget.
This is what progressive loss of control feels like in SE space.
I am sure there are use cases where vibe coding is genuinely useful (small projects, PoCs, straightforward migrations). But we are still far from them being able to produce software of any size or complexity. I advise extreme caution with how much autonomy you choose to delegate to AI coders.
@snwy_me I have a theory they 'accidentally' do this when the codebase gets too spaghetti. Starting fresh sometimes is the fastest way to untangle the spaghetti... in theory.
(in practice... it very much depends)
It's certainly the fastest way to understand the codebase again though lol
@iamgingertrash @axsquareplus Rightfully so, as the landlords and banks aren't going to just stop charging rent/mortgage payments, unless ASI gets here first.
A huge number of people are living paycheck to paycheck, and/or don't fit the gov 'welfare' system.
The wave of model capabilities
isn’t slowing down,
at all
There is a very good chance
of 20% unemployment eoy 2027
Stockpiling compute is a good idea
Another B200 will be purchased by us
@LensScientific Hold up, you didn't show how the last one was drawn, the one all the new circles surrounded.
You just added that one in.
You gotta have a split-screen super zoomed-out window and zoomed-in window and show the rectangle corner create it.
“How do you _”
Just talk to it
“I really need to _”
Just talk to it
“My computer doesn’t support _”
Just talk to it
“I don’t have experience with _”
Just talk to it
“What should I _”
Just talk to it
“Does _ make sense?”
Just talk to it
steipete.me/posts/just-tal…
@vikhyatk me: builds an analog multiply operation armature out of paper and parts of the pen. Runs the 2 input parts down the groove of the atomically accurate weights. Repeats for each layer.
(note: final result may be off slightly due to the energy losses inherent in each matrix multiply)
ML interview question:
Here are the weights for Llama 3.1 70B. Generate a token by executing the forward pass manually using pen and paper. You have 30 minutes.
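The absurdity of the 30-minute limit is easy to quantify. A rough, assumption-laden sketch (the 5-seconds-per-operation figure is a made-up generous pace, and the FLOP count uses the standard ~2 multiply-adds per parameter per token approximation):

```python
# Back-of-envelope: how long a pen-and-paper forward pass of a
# 70B-parameter model would actually take for a single token.
PARAMS = 70e9                      # Llama 3.1 70B parameter count
flops_per_token = 2 * PARAMS       # ~2 multiply-adds per parameter per token

SECONDS_PER_OP = 5                 # assumed (optimistic) hand-calculation pace
total_seconds = flops_per_token * SECONDS_PER_OP
years = total_seconds / (60 * 60 * 24 * 365)

print(f"{flops_per_token:.2e} multiply-adds ≈ {years:,.0f} years by hand")
```

Even before accounting for attention, activations, or arithmetic mistakes, that works out to on the order of twenty thousand years, not thirty minutes.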
It's super fascinating (but not surprising) that game developers often build user interfaces that are on par with or better than commercial mission planning & geospatial tools
Source of alpha in hiring for robotics UX roles: Find yourself a game dev...
dangit, now I just want to play indie dungeon crawlers all day
I'll try to resist the urge.
But someone who isn't me wants to know if anyone knows of any sci-fi themed ones with keycards and teleporters and whatnot?
@cyb3rops as before, it just raises the floor.
"mature product look" as we know it is now obsolete.
Everyone will associate this look with cheap, now, or very soon.
And the most innovative UX will be the new modern.
AI has killed one of the most useful filters on the Internet
Bad products used to look bad.
Shady companies used to present themselves like shady companies.
Half-baked projects usually had half-baked websites, docs, logos and UX
Now a 2h vibe-coded mess can look like a mature product:
- clean website
- polished logo
- nice README
- extensive docs
And underneath it’s still hallucinated garbage
AI made polish cheap.
That’s a bigger change than many people realize.
@lisyarus I made a game about this once. You could also attach thrusters and weapons, and fly the resulting battlehedra around. There were lasers and explosions to be had...
good times.
Okay, screw it, I think I deserve a week-long side project! The goal is having a sandbox where you can build stuff from all those uniform polyhedra, joining them across faces. I'll release it as a webgl app after it's finished!