


Are you up for a challenge? openai.com/parameter-golf

The U.S. has a weird cultural relationship with AI. Despite driving the vast majority of AI breakthroughs, we still rank among the lowest countries in consumer trust (data from the 2025 Edelman study) 👇


We’ve decided to treat this launch as High Capability in the Biological and Chemical domain under our Preparedness Framework, and have activated the associated safeguards. This is a precautionary approach, and we detail the safeguards in the system card. We outlined our approach to preparing for future AI capabilities in biology in a blog post earlier this month. openai.com/index/preparin…

That’s a really good and fundamental question; thank you, Borya! :) It’s non-trivial to answer, since it requires some maturity: there are various moving parts involved, and we tried to address this in the cited paper (also in the textbook, and more recently here: causalai.net/r130.pdf). There are various ways of seeing this connection, but I’ll try to be brief here.

The invariances of any mechanism f_i, pre- and post-intervention, imply the whole calculus -- both do- and ctf- -- for layers 2 and 3. They differ in that they carve out different types of constraints over the collection of distributions induced by the SCM M (i.e., over the mechanisms and exogenous distributions). The attached figure is just one possible representation. Note that we have different fragments of P* (represented by different colors) depending on the layer being discussed. (Ignore 2.25 and 2.5; that’s a more fine-grained slicing related to some other discussion, and confusion, in the literature.) What’s interesting is that probabilistic consistency "pops up" regardless of the SCM. Rules 2 and 3 of the ctf-calculus apply depending on the SCM, reflecting a more fine-grained relationship between the endogenous variables involved.

A complementary interpretation I like is through the notions of a local basis and global facts. The local part ties naturally to the locality of each mechanism f_i at the SCM level. In reality, as with any deductive system, the calculus is just a tool to verify the validity of facts that are true but not explicitly stated in the model. Each mechanism f_i, at the structural level, implies a basis of constraints relating V_i, its observed parents Pa_i, and its unobserved parents U_i. These leave imprints on the set of distributions P*. The calculus relates to the global part: it’s simply a method for taking these local facts and expanding them into broader facts composed of multiple local ones. (Why do we care about that at all, one may ask? Well, the local basis is usually more parsimonious -- depending on the case, it can be polynomial -- but it encodes an exponential number of truths. So the reason is computational feasibility, since locality/parsimony and compression are key, and necessary, in any kind of intelligent behavior.)

The do-calculus, and of course the ctf-calculus, perform this kind of "gluing" of local facts to ascertain the validity of global ones. The example in Eqs. 14-18 of R-130 illustrates this process, from local (what is in the model) to global (what the do-calculus or ctf-calculus says is true). Footnotes 47-48 in R-60 make this connection for the first time; recall, too, that the celebrated d-separation criterion -- foundational to probabilistic reasoning (layer 1) -- plays a similar role via the graphoid axioms in terms of basic probabilities.

Finally, for a more syntactic comparison of the calculi, see Appendix C.2 in R-115. Even though it contains a fair amount of algebra, I think it offers insight into the relationship between the layers. (Q2, 7, 8, 9 in the FAQ, p. 35, may be helpful as well.)

TLDR: To answer your question -- it’s the same in one way, but not the same in another, since different facts are being stated about the real world depending on the model interpretation being chosen (i.e., the layer/color in the figure). Happy to talk more when we meet!
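To make the invariance point concrete, here is a minimal sketch of my own (not from the thread or the cited papers), assuming a toy chain SCM Z -> X -> Y with no confounding. The mechanism names f_z, f_x, f_y are illustrative. It shows the claim above in miniature: intervening do(X=x) replaces f_x but leaves f_y invariant, so for this backdoor-free graph the observational conditional P(y | x) and the interventional P(y | do(x)) coincide -- the intuition behind Rule 2 of the do-calculus.

```python
import random

random.seed(0)

# Structural mechanisms of a toy chain SCM: Z -> X -> Y, no confounding.
def f_z(u_z):
    return u_z

def f_x(z, u_x):
    return z ^ u_x          # X listens to Z plus its own exogenous noise

def f_y(x, u_y):
    return x ^ u_y          # Y listens only to X; invariant under do(X)

def sample(do_x=None):
    """Draw one unit; do_x overrides f_x (a hard intervention)."""
    u_z, u_x, u_y = (random.random() < 0.5 for _ in range(3))
    z = f_z(u_z)
    x = f_x(z, u_x) if do_x is None else do_x
    y = f_y(x, u_y)
    return z, x, y

N = 100_000
obs  = [sample() for _ in range(N)]            # layer-1 (observational) data
intv = [sample(do_x=True) for _ in range(N)]   # layer-2 (interventional) data

# f_y is invariant and there is no backdoor path from X to Y, so the
# two estimates below should agree (both land near 0.5 here).
n_x1  = sum(1 for _, x, _ in obs if x)
p_obs = sum(1 for _, x, y in obs if x and y) / n_x1
p_do  = sum(1 for _, _, y in intv if y) / N
print(f"P(Y=1 | X=1)     ~ {p_obs:.3f}")
print(f"P(Y=1 | do(X=1)) ~ {p_do:.3f}")
```

Under confounding (say, a shared U feeding both f_x and f_y), the two printed numbers would diverge, which is exactly the kind of distinction the calculus rules are built to track.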

We used to speak words to evoke something deeper within; now we deploy bots that understand that deeper thing and generate infinite manifestations of our thoughts, customized and reaching further.