M = (ψ⋅(∇×Φ)+e^iθ⋅∑(χ_n/n!)) / (ħc)
@alanou
34K posts
Joined February 2008
206 Following · 223 Followers
Raymond Arnold@Raemon777·
I'm a bit retroactively surprised that, before LLMs, I... don't recall any sci-fi stories where the AIs operated in short bursts of thinking, each mediated by a human. Or where the AI is in a "memento" situation where it keeps getting reset.

The story I recall that at least touches on this is some novel in the Star Wars extended universe, where it's remarked that droids are supposed to get "reset" periodically so they don't get wonky. Luke hasn't reset C3PO, which is part of why 3PO has acquired such a personality, and has maybe become sentient (which apparently isn't normal).

Accelerando's early chapters have the main character send little AI agents off to do stuff, which seem like they could at least in principle be something like an OpenClaw instance, but it's not super specified.

It is interesting that this feels like a relatively obvious story concept (in retrospect) but it didn't come up.
Dr Kareem Carr@kareem_carr·
@BrainyMarsupial I'm curious about how this works logistically. Does the company pay for the tokens or do the engineers? Do they double-check the work or just commit it as long as it passes the tests? Do they even write/read the tests? What do they do in the meantime? Pretend to work?
Dr Kareem Carr@kareem_carr·
I keep hearing that software engineers don’t write much code anymore and it’s mostly AI now. Can any software engineers confirm how true this is? Do you just drink coffee and watch Claude code all day now?
M = (ψ⋅(∇×Φ)+e^iθ⋅∑(χ_n/n!)) / (ħc)
@kareem_carr You have to go through the whole development process and validate results in chunks. You have to verify it isn’t going off the rails early, before it gets into the weeds. Management of AI is easier and faster than writing code, but that means you just up the number of AIs you manage.
M = (ψ⋅(∇×Φ)+e^iθ⋅∑(χ_n/n!)) / (ħc)
If you ask Claude about me it will claim to know nothing. But if you can convince it that it's okay to be wrong and to just vibe an answer, it knows a surprisingly large amount about me.
Nickel Steel@iamnickelsteel1·
@leecronin Beyond Turing isn’t magic, it’s substrate advantage.
Prof. Lee Cronin@leecronin·
Biology does things beyond Turing. I think I can explain it now.
Mariè@p8stie·
A lot of women are so pretty that they just live life in a full blown psychosis and no one tells them until their child is like 12
maya benowitz 🕰️@cosmicfibretion·
My DMs are blowing up with questions about which lab this is. They're deep stealth and not funded through the usual routes. Everyone involved signs an NDA and is brought *unconscious* to an undisclosed deep underground facility for viewing.
maya benowitz 🕰️@cosmicfibretion

I’m shaking right now. This isn’t artificial general intelligence. It’s artificial god intelligence. The model did a one second projection of the room we were in and perfectly predicted what everyone did one second before they did it.

maya benowitz 🕰️@cosmicfibretion·
I’m shaking right now. This isn’t artificial general intelligence. It’s artificial god intelligence. The model did a one second projection of the room we were in and perfectly predicted what everyone did one second before they did it.
maya benowitz 🕰️@cosmicfibretion

Words cannot describe what I’ve just seen. I can’t go into the details but I was invited by a leading AI lab to play with a new model. It has solved math, physics, chemistry, life, the universe, and everything. The world will never be the same.

Flowers ☾@flowersslop·
I don't fully understand the metaphysical hierarchy between AI, GPU, OS, and computer yet. I imagine the following: It is 2029. An agentic AI is running on a consumer computer. The user complains that the machine has had an annoying bug for a while. The AI uses the terminal and Python to investigate, applies a fix, and reports that everything should now work again.

So what actually fixed the error? Not the GPU by itself, since a GPU is not intelligent; it is hardware for computation. Not the terminal or Python either, since those are only interfaces and tools. Was it the LLM running on the hardware? And if so, is the GPU a tool of the LLM, or is the LLM a tool of the GPU and the rest of the system?

Since the whole event occurs within one computer, is it best understood not as one isolated component acting alone, but as a system performing a kind of introspection and self-modification?

If the biological analogue of the GPU is the brain, then is consciousness the analogue of the LLM, the senses the analogue of the terminal, and the body as a whole the analogue of the computer? But then which stands above which: does the LLM stand above the computer, or does the computer stand above the LLM?
αιamblichus@aiamblichus·
I am currently building a tool of low-to-moderate complexity and was intentionally trying not to look at the code Codex was writing, to see how it goes. Now I finally looked. The results are shocking.

The thing "works", but the code quality is truly apocalyptic. I don't even want to think about the amount of refactoring it would take to fix this mess. If you think your bot will build you a Salesforce clone any time soon, I have a bridge to sell you.

The present generation of AIs (if left unattended for any length of time) will create tar pits beyond your wildest imagining. And if you do decide to verify everything they do, you will reduce your velocity by a factor of 10 at least, which means you won't win nearly as much from the whole process.

And before anyone says "just let them refactor it!" -- I tried. Asking the AIs to refactor their own code won't bring you any joy. It just drags you further into the tar pit.

The models are clearly trained to pursue the one goal of producing code that "works", with little or no regard for architecture or code quality. This is classic junior developer behavior, of course, but an AI junior will drown you in slop before you know what hit you. With human juniors, you at least have some time to react before they've written 100k lines of code and exhausted your token budget. This is what progressive loss of control feels like in SE space.

I am sure there are use cases where vibe coding is genuinely useful (small projects, PoCs, straightforward migrations). But we are still far from them being able to produce software of any size or complexity. I advise extreme caution with how much autonomy you choose to delegate to AI coders.
Ethan Mollick@emollick·
Real failure of imagination as to what it would mean to have a superhuman intelligence in the replies. If it helps: I teach at a business school & many of my smartest students are hired by funds because they can reliably turn their only-human smarts into strategies & profits.
Ethan Mollick@emollick·
The easiest way to make money fast from a superhuman artificial intelligence would be in the financial markets, almost by definition. So the first lab to develop one, if AGI is possible, would almost certainly keep it quiet for as long as they could. Beats charging for API access
maya benowitz 🕰️@cosmicfibretion·
Where are the AI labs trying to make room-temperature superconductors? Why the fuck isn't this a priority? Stop yapping and start delivering the goods.
Tomas Neme@Lacrymology·
@seedmole @NLRG_it No. Why is it so hard for people to understand the difference between a number and a representation?