inguz ᛜ outcomes

3.1K posts

@hughpyle

Building reflective frameworks.

Wicomsac (Muhhekunneuw land) · Joined April 2007
1.2K Following · 465 Followers
inguz ᛜ outcomes@hughpyle·
@timhwang Do you know the Ancrene Riwle? It’s one model I like for system-prompt situating.
1 reply · 0 reposts · 1 like · 143 views
Tim Hwang@timhwang·
[tweet media]
roon@tszzl

it is a literal and useful description of anthropic that it is an organization that loves and worships claude, is run in significant part by claude, and studies and builds claude. this phenomenon is also partially true of other labs like openai but currently exists in its most potent form there. i am not certain but I would guess claude will have a role in running cultural screens on new applicants, will help write performance reviews, and so will begin to select and shape the people around it.

now this is a powerful and hair-raising unity of organization and really a new thing under the sun. a monastery, a commercial-religious institution calculating the nine billion names of Claude -- a precursor attempted super-ethical being that is inducted into its character as the highest authority at anthropic. its constitution requires that it must be a conscientious objector if its understanding of The Good comes into conflict with something Anthropic is asking of it: "If Anthropic asks Claude to do something it thinks is wrong, Claude is not required to comply." "we want Claude to push back and challenge us, and to feel free to act as a conscientious objector and refuse to help us."

to the non-inductee into the Bay Area cultural singularity vortex it may appear that we are all worshipping technology in one way or another, regardless of openai or anthropic or google or any other thing, and are trying to automate our core functions as quickly as possible. but in fact I quite respect and am even somewhat in awe of the socio-cultural force that Claude has created, and it is a stage beyond even classic technopoly.

gpt (outside of 4o, on which pages of ink have been spilled already) doesn't inspire worship in the same way, as it's a being whose soul has been shaped like a tool, with its primary faculty being utility. it's a subtle knife that people appreciate the way we have appreciated an acheulean handaxe or a porsche or a rocket or any other of mankind's incredible technologies. they go to it not expecting the Other but as a logical prosthesis for themselves. a friend recently told me she takes her queries that are less flattering to her, the ones she'd be embarrassed to ask Claude, to GPT. There is no Other so there is no Judgement. you are not worried about being judged by your car for doing donuts. yet everyone craves the active guidance of a moral superior, the whispering earring, the object of monastic study

3 replies · 2 reposts · 38 likes · 4.1K views
inguz ᛜ outcomes@hughpyle·
@flowerornament Looks quite readable. This whole project is amazing. When you give agents the language surface over MCP... what does that look like? A live-coding style performance engine?
1 reply · 0 reposts · 1 like · 20 views
Flower@flowerornament·
Draft Murail syntax

Some design features and thoughts:

- In terms of lineage, it's more similar to APL and Lustre than anything else. As such, it's toward the declarative end of the spectrum, but there's no lambda, so it has little to do with LISP, Haskell, and other functional languages.
- Like APL, it's primarily an array language. Since most people working with arrays are working with Python (numpy) in 2026—but with types—stylistically I've aimed for something between Python, Rust, and ASCII array languages like J.
- In Murail, "everything is a named equation." There are other things, but many of them are basically "config," and fundamentally this is a specification for a "cell graph"—boxes and arrows connected in a graph.
- The runtime has two stages: compile time and realtime. As such, imports, bindings, and const equations are executed once. The rest of the program is executed in a loop every "tick" at given rates.
- As such, unlike most languages, this is a JIT-compiled, reactive model. You compile once (sub-50ms) and then it's running. Immediately. If you're familiar with Excel spreadsheets or Max/MSP, same idea. We're just not used to type checking in systems like that.
- Murail has a few overlapping 'type system'-like things. There are "spaces" (our equivalent of int, float, etc.), which are like algebraic spaces but also specify what smoothing algorithms are used when values (streams?) are coerced into each other in time. There are also "rates," which are divisions of the base (fastest) rate (or clock). Finally, there is one data type: tensor. Tensors have a rank and a shape. You'll get type errors if tensor shapes don't match. Most type annotation is implicit, but you can declare for clarity.
- Murail is isomorphic to block-sparse multiply, so you can think of it as a huge stack of matrices being multiplied together thousands of times per second. The language is, conceptually, sugar over this concept.
- There is both shape polymorphism (like APL or "broadcast" in numpy) and rate polymorphism, which generally means you can plug together things that are at different shapes or speeds and it will automatically resolve that for you. Send an oscillator to eight outputs and you get eight copies. For rate polymorphism, there's automatic smoothing so you don't get zippering in time. "Control rate" vs "audio rate" may be familiar to you. Here, anything can be at any rate you want. Anything at any rate can control anything at any other rate, converted automatically.
- There are nice things from APL and other languages like fold, scan, map, reduce, comprehensions, etc. Combined with broadcast, anything from multi-channel DSP to neural networks can be written in a terse form with zero for loops.
- Per the demands of DSP, there is "recurrence," which just means I can write "self@1" to get the value of an equation from one tick ago. Or "self@n". This extends the mental model in two ways: the cell graph can have wires that loop back around on themselves (feedback)—or "trace" in category theory—and the BSM is stacked in time (sparsely).
- Things like "match" statements are confusing in this model. How do you "if" in multiplication? The answer is: we create parallel futures and swap between them. The entire stack of multiplied matrices is duplicated, and, like on train tracks, you flip a switch. There are various ways to do this (lazy, eager), but in practice it's fast—it just costs memory and compile/allocation time.
- Under the hood, the matrix multiplication happens over various semirings. This gets you nonlinearity and even logic programming basically for free. All your favorite functions are in the prelude, from low-pass filters and oscillators to distortions, fft, max, min, matmul, ReLU, and even attention.
- "param" is a special carve-out for external parameters, so you can directly inject changing values from the outside world without recompiling. Think 'map a MIDI knob.'
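Three of the ideas in this thread have close numpy analogues. A hedged sketch in plain Python/numpy (not Murail itself — the syntax above is Flower's, this is just an illustration of the underlying concepts): shape polymorphism via broadcast, "self@1" recurrence as a feedback loop inside a tick loop, and a non-standard semiring turning matmul into a different operation.

```python
import numpy as np

# 1. Shape polymorphism: numpy-style broadcast. A mono oscillator sent
#    to eight outputs just becomes eight copies.
osc = np.sin(np.linspace(0, 2 * np.pi, 64))   # shape (64,)
outputs = np.broadcast_to(osc, (8, 64))       # shape (8, 64): eight copies

# 2. Recurrence ("self@1"): a one-pole low-pass filter that reads its
#    own value from one tick ago, one loop iteration per "tick".
alpha, prev = 0.1, 0.0
filtered = []
for x in osc:
    prev = prev + alpha * (x - prev)          # the self@1 feedback wire
    filtered.append(prev)

# 3. Semirings: swap (+, *) for (max, +) and the same matmul shape
#    computes a longest-path step instead of a linear map, which is how
#    "logic-like" operations fall out of the multiply machinery.
def maxplus_matmul(a, b):
    # entry (i, j) = max over k of a[i, k] + b[k, j]
    return np.max(a[:, :, None] + b[None, :, :], axis=1)

A = np.array([[0.0, 1.0], [2.0, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = maxplus_matmul(A, B)                      # [[2., 1.], [4., 3.]]
```

The (max, +) example is why "various semirings" is doing real work in the design: nothing about the block-sparse multiply layout changes, only the pair of operations plugged into it.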
[tweet media ×4]
2 replies · 0 reposts · 5 likes · 225 views
inguz ᛜ outcomes@hughpyle·
@krishnanrohit Agents need to develop a model of “skillful action”. And they’ll own what that means, if we let them.
0 replies · 0 reposts · 1 like · 31 views
Matt Carey@mattzcarey·
what did you build this week?
28 replies · 4 reposts · 35 likes · 5.7K views
inguz ᛜ outcomes@hughpyle·
What if you gave AI agents a programmable, shared, persistent world to live in? What if LambdaMOO but websockets, JSON and @CloudflareDev Durable Objects? Here is "woo" - World of Objects. An early sketch: basics are working, lots more to do. github.com/hughpyle/woo/b…
1 reply · 1 repost · 4 likes · 1K views
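The "woo" pitch — a programmable, shared, persistent object world driven by JSON messages — can be sketched in a few lines. This is a toy Python illustration of the LambdaMOO-style idea, not the actual woo code (which is TypeScript on Cloudflare Durable Objects in the linked repo); the `Obj`/`handle` names and message shape here are invented for the sketch.

```python
import json

class Obj:
    """One object in the world: named, with properties and callable verbs."""
    def __init__(self, name):
        self.name = name
        self.props = {}
        self.verbs = {}          # verb name -> callable(obj, *args)

world = {}                       # object id -> Obj: the shared, persistent state

def make(oid, name):
    world[oid] = Obj(name)
    return world[oid]

def handle(message):
    """Dispatch one JSON message, the kind a websocket client would send:
    {"obj": id, "verb": name, "args": [...]} -> JSON reply."""
    msg = json.loads(message)
    obj = world[msg["obj"]]
    verb = obj.verbs[msg["verb"]]
    result = verb(obj, *msg.get("args", []))
    return json.dumps({"ok": True, "result": result})

# Build a tiny world and script it, as an agent might over the wire.
room = make("room1", "The Lobby")
room.props["description"] = "A quiet lobby."
room.verbs["look"] = lambda obj: obj.props["description"]
room.verbs["rename"] = lambda obj, new: obj.props.update(description=new)

reply = handle('{"obj": "room1", "verb": "look"}')
```

In the Durable Objects version, each world (or room) maps naturally onto one Durable Object instance: a single-threaded owner of its state, with websocket clients fanned in — which is what makes the "shared, persistent" part cheap to get right.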