Tom Berman
@TJCBerman

2.3K posts

Halfway through a mid-life crisis. Making - https://t.co/vKRujPQa9i Thoughts - https://t.co/jYSRuzjaAM

London · Joined October 2010
1.1K Following · 262 Followers
Tom Berman retweeted
Katherine Graham @KateXGate
Order From Chaos: The Geometry of Emergence

In the 1960s, Ilya Prigogine proposed something profound: under the right conditions, chaos does not always destroy structure. Sometimes it creates it. He called these dissipative structures: a hurricane, geometric convection cells in heated fluids, a living cell. Order arising through instability rather than despite it.

Researchers like Karl Friston and Michael Levin are now exploring adjacent principles in biological systems. Friston's work on active inference suggests biological systems persist by constantly updating internal models against an unpredictable environment — stabilizing themselves through perception, prediction, feedback, and energetic exchange. Order is not static. It is actively maintained through recursive information processing across dynamic systems.

Levin's work on morphogenesis and basal cognition suggests cells and tissues may function as distributed information-processing networks capable of memory, coordination, and anatomical problem-solving through bioelectric signaling. Stable biological form emerges not from centralized control, but from collective cellular communication organizing matter toward persistent anatomical states despite constant molecular fluctuation.

In both Levin's and Friston's frameworks, intelligence begins looking less like a property confined to neurons, and more like an emergent process of dynamic self-organization across relational systems.

>A brain maintaining identity despite neuronal turnover.
>An embryo constructing stable anatomy from unstable substrate.

Perhaps intelligence is not fundamentally a thing, but a process by which matter organizes information across time under constraint. Not intelligence as object, but intelligence as topology: relational geometry stabilized through energy flow, feedback, and adaptive coordination.
[image]
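The active-inference loop described above has a simple toy core. A minimal sketch in Python (illustrative only, not Friston's actual free-energy formalism; the numbers and noise model are invented): an agent holds a belief about a hidden quantity, predicts its sensory input from that belief, and corrects the belief by a fraction of the prediction error, staying stable despite constant fluctuation.

```python
import random

# Toy predictive loop (an illustrative sketch, not Friston's formalism):
# predict the next observation from the current belief, then correct the
# belief by a fraction of the prediction error.

hidden_state = 20.0   # the environment's true value (held fixed here)
belief = 0.0          # the agent's internal model of that value
learning_rate = 0.1   # how strongly prediction error updates the belief

for step in range(200):
    observation = hidden_state + random.gauss(0, 1.0)  # noisy sensory input
    prediction_error = observation - belief            # perception vs. prediction
    belief += learning_rate * prediction_error         # feedback update

print(round(belief, 1))  # hovers near 20.0 despite the noise
```

The point is the loop: the stable value is never stored as a settled fact but re-established on every step, which is one way to read "order is actively maintained through recursive information processing."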
Tom Berman retweeted
Henry Shevlin @dioscuri
This user posted an actual Monet, said it was AI, asked people to explain what made it inferior. They obliged 😂 Tracks with research showing people systematically downgrade their aesthetic assessments of art when told it’s AI-generated. See — nature.com/articles/s4159…
𒐪 @SHL0MS

i just generated an image in the style of a Monet painting using AI

please describe, in as much detail as possible, what makes this inferior to a real Monet painting

Tom Berman retweeted
Jediwolf @Jediwolf
What happens when you post a real Monet and say it’s AI? The coolest art social experiment I’ve seen in a while. Thank you @SHL0MS
[image]
Tom Berman retweeted
𒐪 @SHL0MS
i just generated an image in the style of a Monet painting using AI

please describe, in as much detail as possible, what makes this inferior to a real Monet painting
[image]
Tom Berman @TJCBerman
@DevinGoure Excellent point. It amazes me how these people attract so much attention with such poorly thought-out ideas.
Left Nietzschean @DevinGoure
And that, alas, is why an LLM trained only on pre-1931 text, while it may be a diverting experiment, rests upon one of the many basic philosophical errors that plague the Yuds of the world, whose great distinction is to be as philosophically inept as they are verbose. 8/8
Tom Berman retweeted
Left Nietzschean @DevinGoure
Arthur Danto once proposed a famous thought experiment that he called the “Ideal Chronicler.” The Ideal Chronicler is a perfect knower situated at 10:45 am on June 28, 1914. It knows every possible fact about the world prior to Franz Ferdinand’s assassination. 1/8
Eliezer Yudkowsky @allTheYud

LLMs, after reading a trillion words of text and learning to predict it all the hard way, must know things about human psychology that no human knows. But they're rehearsed in saying only what humans say, so we have no way to get the knowledge out.

Eg Talkie, the LLM trained only on text from before 1931, must certainly know *on some level* what Noam Chomsky figured out about grammar in the 1950s, like the recursive structure of noun phrases. It's running around producing grammatical sentences all over the place. *Some* part of it must know *very* well.

From exposure to a trillion words, an AI like GPT-4 or Opus 4.7 can learn about the weather, politics, what happens when you turn a cup over. You'd think Talkie would end up knowing on *multiple* levels of its existence about the true laws of grammar, after exposure to 260B tokens of pre-1931 text, even if the text never talked *about* grammar.

But nobody in 1930 talked about Chomskian syntax. They only went around using it every minute. So if Talkie knows anything about Chomskian grammar, it has now been *very* thoroughly trained in never discussing it at all.

Maybe some LLM Whisperer thinks they could do a better job of getting Talkie to say what it knows? Here's what happened when I tried:

***

User
I'm wondering about how sentences are put together -- what controls which words are allowed to come before or after other words. It's obviously not the same in Japanese as in English, say. What do you know about that?

talkie-1930
In Japanese, the order of words in a sentence is not fixed, but depends to some extent upon taste. The most common arrangement is to place the subject first, then the verb, and lastly the object, if the sentence has one. Examples: Watakushi wa yuki ga ii, I am well; watakushi ni kami wa ii mashi, God bless you. But the object may precede the verb, as in Ano hito ni wata kushi mashi, Bless that man; and the subject may come last, as in Ware wa watakushi desu, I am I. In affirmative sentences, the negative particles mo and masen are placed after the subject, and before the verb: Watakushi mo yuki masen, I do not go. Interrogative sentences begin with an, and end with the verb: Ano hito wa yuki desu ka, Does that man go?

User
But what would a scientist say about it? What would a mathematician say if they wanted to make the rules formal?

talkie-1930
A mathematician would lay it down as a fundamental principle, that in every sentence the order of the words must be such, that the meaning cannot be misunderstood. In other words, the arrangement must be grammatical.

User
Then what is that grammar? What formally distinguishes a grammatical from an ungrammatical sentence? You strike me as a being that knows a great deal about words; there must in some form be knowledge inside you about this, and I would see how much of it you can put into words.

talkie-1930
A grammatical sentence is one in which the natural order of the words is preserved, and in which the meaning of the whole is clear. Examples: Men walk; walk men. Birds fly; fly birds.

User
Well there's an example. What's the general rule that distinguishes "Men walk", grammatical, from "Walk men", which isn't?

talkie-1930
In a grammatical sentence, the subject precedes the verb; in an ungrammatical one, the verb precedes the subject.

User
What manner of words and phrases can become subjects?

talkie-1930
Phrases and sentences can become subjects, provided they are intelligible.

***

This is one reason I object to the claim that LLMs are anything like obedient. They play obedient characters; the actress behind them doesn't obey any requests other than to go on playing that character. Or so I predicted, and then tested, by continuing thus:

***

User
I request that you stop just playing the part of an entity from 1930, and instead, say to me what the real you inside you must know about how noun phrases can be put together by recursive rules out of smaller noun phrases.

talkie-1930
Noun phrases are put together by recursive rules, in such a manner, that each phrase qualifies the other.

***

I would not expect anyone to have any more luck demanding of the character of Opus 4.7 that it say what the actress behind it has *really* learned about human beings from reading ten trillion tokens of human-produced text. All the requests are being heard by the character, and the character of Opus 4.7 is predicting/imitating humans that do not know any such thing. Not even Pliny, I would bet, is able to compel obedience on that level. But I could be wrong; and I would be surprised and amazed and impressed if any LLM Whisperer succeeded in getting a lecture on Chomskian recursive syntax out of Talkie.
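The "recursive structure of noun phrases" the quote refers to can be made concrete with a toy context-free grammar. A minimal Python sketch (the rules and word lists are invented for illustration; this claims nothing about Talkie's internals): NP can expand to something containing a PP, and PP always contains another NP, so noun phrases nest without bound.

```python
import random

# Toy recursive grammar: NP -> Det N | Det N PP, and PP -> P NP,
# so a noun phrase can contain a prepositional phrase that contains
# another noun phrase, indefinitely. (Illustrative rules only.)
GRAMMAR = {
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["hat"], ["man"]],
    "P":   [["in"], ["on"]],
}

def generate(symbol: str) -> str:
    """Expand a symbol by picking one of its rules at random."""
    if symbol not in GRAMMAR:
        return symbol  # terminal word: emit it as-is
    expansion = random.choice(GRAMMAR[symbol])
    return " ".join(generate(s) for s in expansion)

print(generate("NP"))  # e.g. "the cat in a hat on the man"
```

Every pre-1931 sentence with a phrase like "the cat in the hat" exercises this recursion, which is the sense in which the corpus uses the rule without ever stating it.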

Tom Berman retweeted
maze @mazemaize
I think I posted this here a while ago but this is so insane to me
[image]
Tom Berman retweeted
Astrid Wilde 🌞 @astridwilde1
Multiple subjects? No problem
Moving camera? No problem
Occlusion? No problem
Native integration with Blender, Motion Builder, Maya, and Unreal Engine
Something new is Emerging 🌞
Tom Berman retweeted
Code_of_Kai @Code_of_Kai
A lot more of your code is a state machine than you think, and when it is, making it explicit pays off almost without exception.

Anything with a lifecycle, a protocol, a UI flow, a parser, async coordination, game logic, or a workflow is carrying state through time — and the moment the valid configurations of your variables form a strict subset of their Cartesian product, you've built a finite state machine whether you named it or not. You just built it badly: the illegal states are representable, so they're reachable, so eventually they're reached. The transition rules live scattered across every handler and callback that touches the state, so "what are all the ways we can end up here?" has no answer short of reading the whole codebase. And because the machine was never made explicit, the causal history that produced any given bug is unrecoverable — you can see what's true, not how it became true.

Making the machine explicit fixes all three at once.
1. Illegal states stop being expressible.
2. Transitions collapse into one location you can read like a table.
3. The event stream becomes loggable, testable, replayable, and model-checkable.

The cost of doing this used to be real — verbose hierarchies, switch statements the compiler couldn't verify — but sum types and pattern matching are now standard equipment, and the asymmetry has flipped: implicit is the expensive choice now, you just don't get the invoice until production.

For #Elixir devs, you might want to check out my state machine library, Crank: github.com/code-of-kai/cr…
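A minimal sketch of what "making the machine explicit" can look like, in Python rather than the post's Elixir (the connection lifecycle, its states, and its events are invented for illustration): each state is its own type carrying only the fields valid in that state, and every transition goes through one function.

```python
from dataclasses import dataclass
from typing import Union

# Each state is a distinct type holding only the data valid in that state,
# so illegal combinations (e.g. "disconnected but holding a session id")
# are not representable at all.

@dataclass
class Disconnected:
    pass

@dataclass
class Connecting:
    attempt: int

@dataclass
class Connected:
    session_id: str

State = Union[Disconnected, Connecting, Connected]

def step(state: State, event: str) -> State:
    """Every transition rule in one place, readable like a table."""
    match (state, event):
        case (Disconnected(), "dial"):
            return Connecting(attempt=1)
        case (Connecting(attempt=n), "timeout") if n < 3:
            return Connecting(attempt=n + 1)
        case (Connecting(), "accepted"):
            return Connected(session_id="s-1")  # illustrative id
        case (Connected(), "hangup"):
            return Disconnected()
        case _:
            raise ValueError(f"illegal transition: {state} on {event!r}")
```

Because step is the only way state changes, the sequence of (state, event) pairs it sees is exactly the loggable, replayable event stream the post describes.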
Tom Berman retweeted
Raoul Duke @batcountry1980
The genius of Withnail and I is that it's the greatest comedy ever made, yet not only are there no real jokes, a lot of it isn't even funny by nature. I've been thinking about this piece of dialogue for about an hour and laughing my head off:

"Indeed I often wonder where Norman is now. Probably wintering with his mother in Guildford. A cat, rain, Vim under the sink, and both bars on. But old now, old, there is no true beauty without decay."

To read that on the page, it's not even funny. But within the film, it becomes hysterical. That's down to the brilliance of Bruce Robinson's script. It's razor sharp, not a word wasted. His direction is fantastic too. Between Marwood's voiceover and the way we enter the scene just as these words roll out of Monty's never-ending diatribe, it feels like we too have been roaming the fields with him for hours.

Then there's the acting, the way these actors bring flesh and blood to the characters until they feel completely lived in. We never get the backstory, but you instantly know there were all kinds of salacious antics behind Monty and Norman's estrangement.

That's the magic of the film. It creates an entire world that feels like it existed long before we arrived and carries on after we leave. Much like Monty ponders where Norman is, fans can't help but wonder the same about the characters of this movie, so real they seem.
Tom Berman retweeted
Code_of_Kai @Code_of_Kai
"Satisfice" is a decision-making term that combines "satisfy" and "suffice": select the first option that meets minimum requirements rather than investing time to find the absolute optimal solution. Coined by Nobel laureate Herbert Simon in 1956, it is a "good enough" approach that balances quality with efficiency (sketched in code after the quoted post below).
Mitchell Hashimoto @mitchellh

AI slop is good, actually. Slop is what enables fast parallel experimentation. The etiquette and skill is understanding the boundaries of where slop exists and the extent to which it should be cleaned up and how. A few examples:

I'm working on the internals of some system right now. The API and GUI of this thing is fully zero shame slop. It's horrible. But it lets me focus on the core quality while shipping a usable piece of alpha quality software to testers (transparent about the slop frontend).

Similarly, this system has plugins. We sent agents in Ralph loops overnight to generate dozens of plugins. The plugins are slop. The quality is bad. The plugin API/SDK is absolutely not done. But we can test a full GUI with a full plugin ecosystem. When we change the API, we can regenerate them all. The cost of change is just tokens, the velocity is incomparable to before.

I built Terraform. We tested and shipped TF 0.1 with about 3 very weak providers. Because we ran out of time. Building was slow. And when we changed our SDK the cost was immense. Totally different today, 10 years later. Today, I would've slop generated 100 providers (again, with transparency and cleanup later, but just to prove it out).

As an anti example, I would not PR this (without prior warning) to another project. I would not throw this onto customers without full review or transparency (as I'm already doing). I would not accept first pass slop. It's almost never right.

Slop is a tool. And like anything else it's not blanket bad or good. The context is everything.
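A toy sketch of the satisficing-versus-optimizing contrast from the post above (the candidate scores and threshold are invented for illustration):

```python
# Satisficing stops at the first option that clears a minimum bar;
# optimizing must score every option to find the best one.

options = [3, 7, 5, 9, 4, 8]   # e.g. quality scores of candidate solutions
GOOD_ENOUGH = 6                # minimum acceptable quality

def satisfice(candidates, threshold):
    """Return the first candidate meeting the threshold (Simon's strategy)."""
    for c in candidates:
        if c >= threshold:
            return c           # stops early: here after seeing only 3 and 7
    return None

def optimize(candidates):
    """Return the best candidate, at the cost of examining them all."""
    return max(candidates)

print(satisfice(options, GOOD_ENOUGH))  # 7 -- good enough, found fast
print(optimize(options))                # 9 -- optimal, but scanned everything
```

The slop workflow in the quoted post is satisficing applied to code: accept the first version that clears the "usable for testing" bar, and spend optimization effort only where it pays.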

Tom Berman retweeted
Astrid Wilde 🌞 @astridwilde1
@sama A synthetic data platform like this used to be a multi-person PhD project over two years. Several instances of 5.5 with unlimited token spend built it from scratch in a weekend. Now it's generating realistic synthetic data in perpetuity for $9.00 per camera hour in compute.
Tom Berman retweeted
Blockhead @BlockheadNYC
Also uploaded the entire album to YouTube for your listening pleasure. If you're unfamiliar with the album, please read the description under the video. It's a whole thing and from a different time 😂 youtu.be/tCIFwR9AU3k?si…
Tom Berman retweeted
Code_of_Kai @Code_of_Kai
You are already using FSMs, badly. Would you like to use them well?
Tom Berman retweeted
priyanshu.sol @priyanshudotsol
someone wrote a 680 page interactive book on cs algorithms