

I'm a bit retroactively surprised that, before LLMs, I... don't recall any sci-fi stories where the AIs operated in short bursts of thinking, each mediated by a human.
Or, where the AI is in a "memento" situation where it keeps getting reset.
The story I recall that at least touches on this is some novel in the Star Wars extended universe, where it's remarked that droids are supposed to get "reset" periodically so they don't get wonky. Luke hasn't reset C3PO, which is part of why 3PO has acquired such a personality, and has maybe become sentient (which apparently isn't normal).
Accelerando's early chapters have the main character send little AI agents off to do stuff; they seem like they could, at least in principle, be something like an OpenClaw instance, but it's not specified in much detail.
It is interesting that this feels like a relatively obvious story concept (in retrospect) but it didn't come up.

@kareem_carr @BrainyMarsupial You pipeline. While AI #1 is writing code, you get AI #2 up and running on a parallel task. Repeat for N agents.
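A minimal sketch of the pipelining idea, assuming each agent is an async call (`run_agent` here is a hypothetical placeholder for a real agent invocation, not any particular API):

```python
import asyncio

async def run_agent(agent_id: int, task: str) -> str:
    # Placeholder for a real agent invocation (e.g. an API request);
    # the sleep stands in for the agent's working time.
    await asyncio.sleep(0.01)
    return f"agent {agent_id} done: {task}"

async def pipeline(tasks: list[str]) -> list[str]:
    # Launch one agent per task. While agent #0 is working,
    # agents #1..#N-1 are already running, so wall-clock time
    # is roughly one task's duration rather than N of them.
    return await asyncio.gather(*(run_agent(i, t) for i, t in enumerate(tasks)))

results = asyncio.run(pipeline(["write parser", "fix tests", "update docs"]))
```

`asyncio.gather` returns results in the order the jobs were passed, so the pipeline's output lines up with the task list even though the agents run concurrently.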

@BrainyMarsupial I'm curious how this works logistically. Does the company pay for the tokens, or do the engineers?
Do they double-check the work, or just commit it as long as it passes the tests? Do they even write/read the tests?
What do they do in the meantime? Pretend to work?

@kareem_carr I can write 10k LOC/day in Python when implementing specifications using AI.

@kareem_carr You have to go through the whole development process and validate results in chunks. You have to verify it isn’t going off the rails early, before it gets into the weeds. Managing AI is easier and faster than writing code, but that just means you up the number of AIs you manage.

@iamnickelsteel1 @leecronin There is no known computation that is beyond Turing. If this guy has a model that does it, he will be more famous than Church and Turing. We will be talking about Cronin computation.

@leecronin Beyond Turing isn’t magic, it’s substrate advantage.

@leecronin Oh... you are saying that biology is super-Turing.


@btharris93 Because you want to remember your own experience and there will be a million pictures of the rocket.

Why would you take a selfie during a once-in-a-generation rocket launch?
Ellie Sleightholm@elsleightholm
Artemis II launch… caught in my glasses reflection

@BitcoinFists @p8stie That was how everyone felt for the first 50 to 60 years of her life. She came to expect that, and as the beauty faded and she lost that privilege, life started seeming extremely unfair to her.


My DMs are blowing up with questions about which lab this is. They're deep stealth and not funded through the usual routes. Everyone involved signs an NDA and is brought *unconscious* to an undisclosed deep underground facility for viewing.
maya benowitz 🕰️@cosmicfibretion
I’m shaking right now. This isn’t artificial general intelligence. It’s artificial god intelligence. The model did a one second projection of the room we were in and perfectly predicted what everyone did one second before they did it.

@cosmicfibretion @digijordan Does it know VALIS?

@digijordan because it has a sense of humor of course!

I’m shaking right now. This isn’t artificial general intelligence. It’s artificial god intelligence. The model did a one second projection of the room we were in and perfectly predicted what everyone did one second before they did it.
maya benowitz 🕰️@cosmicfibretion
Words cannot describe what I’ve just seen. I can’t go into the details, but I was invited by a leading AI lab to play with a new model. It has solved math, physics, chemistry, life, the universe, and everything. The world will never be the same.

@flowersslop Isn’t this just the classic mind-body debate?

I don't fully understand the metaphysical hierarchy between AI, GPU, OS, and computer yet. I imagine the following:
It is 2029. An agentic AI is running on a consumer computer. The user complains that the machine has had an annoying bug for a while. The AI uses the terminal and Python to investigate, applies a fix, and reports that everything should now work again.
So what actually fixed the error? Not the GPU by itself, since a GPU is not intelligent; it is hardware for computation. Not the terminal or Python either, since those are only interfaces and tools. Was it the LLM running on the hardware? And if so, is the GPU a tool of the LLM, or is the LLM a tool of the GPU and the rest of the system? Since the whole event occurs within one computer, is it best understood not as one isolated component acting alone, but as a system performing a kind of introspection and self-modification?
If the biological analogue of the GPU is the brain, then is consciousness the analogue of the LLM, the senses the analogue of the terminal, and the body as a whole the analogue of the computer?
But then which stands above which: does the LLM stand above the computer, or does the computer stand above the LLM?

@aiamblichus And now we need faster cpus and memory to run tomorrow’s vibe coded software.

I am currently building a tool of low-to-moderate complexity and was intentionally trying not to look at the code Codex was writing, to see how it goes. Now I finally looked.
The results are shocking. The thing "works", but the code quality is truly apocalyptic. I don't even want to think about the amount of refactoring it would take to fix this mess.
If you think your bot will build you a Salesforce clone any time soon, I have a bridge to sell you. The present generation of AIs (if left unattended for any length of time) will create tar pits beyond your wildest imagining. And if you do decide to verify everything they do, you will reduce your velocity by a factor of 10 at least. Which means you won't win nearly as much from the whole process.
And before anyone says: "just let them refactor it!"-- I tried. Asking the AIs to refactor their own code won't bring you any joy. It just drags you further into the tar pit.
The models are clearly trained to pursue the one goal of producing code that "works", with little or no regard for architecture or code quality. This is classic junior developer behavior, of course, but an AI junior will drown you in slop before you know what hit you. With human juniors, you at least have some time to react before they've written 100k lines of code and exhausted your token budget.
This is what progressive loss of control feels like in SE space.
I am sure there are use cases where vibe coding is genuinely useful (small projects, PoCs, straightforward migrations). But we are still far from them being able to produce software of any size or complexity. I advise extreme caution with how much autonomy you choose to delegate to AI coders.

@emollick ASI would be able to control markets.
This short story covers your idea.
ssec.wisc.edu/~billh/g/mcnrs…

@cosmicfibretion @TomHardyofmath Room temperature superconductors would be useful for this project.
github.com/alanoursland/m…

@Lacrymology @seedmole @NLRG_it Right! Representation matters. These are also equal to 1:
2/2
i^4
cos(0)
x^0 (for x ≠ 0)
log_n(n) (for n > 0, n ≠ 1)
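As a quick sanity check, each of the listed expressions evaluates to 1 (a sketch; the free variables x and n are instantiated with arbitrary valid values, here 5 and 7):

```python
import math

values = [
    2 / 2,            # plain division
    (1j) ** 4,        # i^4: i^2 = -1, so i^4 = 1
    math.cos(0),      # cosine of 0
    5 ** 0,           # x^0 for nonzero x (here x = 5)
    math.log(7, 7),   # log base n of n (here n = 7)
]
print(all(abs(v - 1) < 1e-9 for v in values))  # → True
```

The comparison uses `abs(v - 1)` so it also handles the complex result of `(1j) ** 4`, whose imaginary part is zero.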
