@DominicWalliman Thanks Dominic for your amazing video! I really appreciated your classification, very thoughtful!
If you are interested in looking into a mathematical, minimalist, extremely niche RTS, I made one many years ago for fun - I am sure it won't be easy to fit into your map :)
I am happy to share this post with the community! It's highly opinionated and partially philosophical, so I will be happy if you challenge it and provide some feedback!
lesswrong.com/posts/wzaYFmsc…
I am glad my second post on LessWrong is out now!
Have a look if you have ever asked yourself about the relation between intelligence, consciousness, and will.
lesswrong.com/posts/e9zvHtTf…
I am glad my first post on LessWrong has been accepted!
Have a look if you are interested in Safety techniques for Machine Learning, and how they relate to human values.
lesswrong.com/posts/Bf3ryxiM…
@followgrubby Just for fun, I created a "mathematical" hexagonal RTS with minimalistic style!
Unfortunately its gameplay is not intuitive xD Would you consider this an RTS at all?
gclazio-dev-ed.my.salesforce-sites.com
Tech Debt in Salesforce can be hard to identify and eliminate. Join me in this presentation to discover some tricks I've learned!
SAIMA 25: Humble Practices - Modest but practical tips for architects meetup.com/meetup-group-p… #Meetup via @Meetup
@ESYudkowsky @tegmark @steveom I believe the main idea is not about proving that some AGI is safe, but that we can: (1) create a provably-safe sandbox; (2) run an unsafe AGI in there and ask for some result + its proof; (3) verify the proof before accepting the result. We will need to be careful about what we ask!
The cat-belling problem here is "What is to be proven?" or "What is a formal theorem such that, if we proved it about a program, we would believe that program was a safe and friendly superintelligence?" This is the hard obstacle; no amount of progress on easy obstacles is evidence about ability to overcome this hard obstacle. Your paper does not seem to offer much in the way of hope, or even much discussion of the hardness of this central problem. Who bells that cat?
Steve Omohundro & I just posted a paper on why provably safe systems are the only feasible path to controllable #AGI. The view that powerful AI will always be an inscrutable black box is too pessimistic! @steveom arxiv.org/abs/2309.01933
I am going to open my question to the channel @AI_Safety. When I use the term "AI Alignment", I am trying to express the idea of a general goal-level alignment that can persist recursively when using self-prompts. Starting from that, what I want to achieve is "value alignment"
@ESYudkowsky
Imagine AI Alignment is solved, meaning you can arbitrarily instruct a model, e.g. "Generate a solution for XXX", and it will stably & diligently focus on that task alone for good. What follows? What principles ensure its "safety"? For sure, alignment is not enough!
@robertskmiles Imagine AI Alignment is resolved, meaning when you instruct a model, such as "Generate a solution for XXX", it will diligently focus on that task alone indefinitely. What follows? What principles ensure its "safety"? For sure, alignment is not enough!
A few weeks ago I had the honour of presenting some communication techniques at the SAIMA Group, together with Nitin Sharma.
The video presentation is here: youtube.com/watch?v=JW7Y1d…
The slides can be found here: docs.google.com/presentation/d…
Any query about Moments of Truth is welcome!
Yes, renderas=PDF is great until you try to keep images in rich-text fields from getting clipped. I had to disappoint a customer, which really hurts!
#vfpFail
@CloudSundial @CloudJedi__c In general, as a best practice, I always recommend committing records as complete as possible on the first try rather than relying on other triggers to complete them. Trigger cascades quickly become impossible to predict if you don't do that from the start
@CloudSundial @CloudJedi__c You got the idea right! I found at least one trigger framework working like that
Your remark about the IDs is correct as well: the SObject Tree REST API introduced a "referenceId" for this purpose
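For anyone curious what that looks like in practice, here is a minimal sketch of an SObject Tree payload where "referenceId" lets a child record point at a parent that has no real Salesforce ID yet (field values are illustrative, not from a real org):

```python
import json

# Sketch of an SObject Tree REST API request body: each record carries a
# client-chosen "referenceId" so related records can reference each other
# before Salesforce assigns real IDs. Names and values are illustrative.
payload = {
    "records": [
        {
            "attributes": {"type": "Account", "referenceId": "ref1"},
            "Name": "Acme",
            "Contacts": {
                "records": [
                    {
                        "attributes": {"type": "Contact", "referenceId": "ref2"},
                        "LastName": "Smith",
                    }
                ]
            },
        }
    ]
}

# The response maps each referenceId back to the newly created record ID.
body = json.dumps(payload)
```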
Record-triggered flows... Great addition, but do we need a good framework to help ensure these stay scalable and easy to support in a sizable org? Antipatterns similar to the bad old days of early trigger implementations are easy to slip into and might not be familiar to admins
@CloudSundial @CloudJedi__c Let's suppose you are using the Composite REST API to post a mix of records on multiple objects: the single "holistic" trigger will provide you with the mixed list, which you manipulate with Apex. Any additional DML statement is just part of the single transaction - no cascade!
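To make the "single holistic trigger" idea concrete, here is a hypothetical sketch (not a real Salesforce API - `handle_transaction` and the record dicts are made up) of one entry point receiving the mixed list from a Composite request and routing by object type, with no secondary trigger firing:

```python
from collections import defaultdict

def handle_transaction(mixed_records):
    """Single 'holistic' entry point: group the mixed list by object type,
    then run each per-object rule once over its slice. No rule issues a
    DML that would re-fire another trigger, so there is no cascade."""
    by_type = defaultdict(list)
    for rec in mixed_records:
        by_type[rec["type"]].append(rec)
    for obj_type, recs in by_type.items():
        for rec in recs:
            # Illustrative "rule": just tag the record as processed.
            rec.setdefault("processed_by", []).append(obj_type + "_rule")
    return by_type

# A mixed batch, as a Composite-style request might deliver it.
records = [
    {"type": "Account", "name": "Acme"},
    {"type": "Contact", "name": "Smith"},
    {"type": "Account", "name": "Globex"},
]
grouped = handle_transaction(records)
```

Each record is visited exactly once by its object's rule, which is the point: predictability comes from the single pass, not from chaining triggers.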
@gclazio @CloudJedi__c Hey Gianluca, interesting idea... I'm struggling, I think, to imagine this practically tho - a trigger hooks into various points of a database transaction to let you alter and respond to the transaction. How would the single holistic trigger operate in a relational database?
@CloudSundial @CloudJedi__c Then think of Apex triggers as "actors" that branch the repository to apply their own changes; each branch shall be isolated, to avoid trigger cascades. Then the branches are merged to obtain a deterministic result (conflicts are managed by some chosen policy)
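The actor/branch analogy can be sketched like this (all names hypothetical - this is the git-style model in miniature, not Apex): each actor edits an isolated copy of the record, and the branches are merged afterwards under a simple last-writer-wins policy:

```python
import copy

def run_actors(record, actors):
    """Each actor works on its own deep copy (an isolated 'branch'), so no
    actor sees another's intermediate changes - no cascades. Branches are
    then merged in a fixed order; the conflict policy here is simply
    'last writer wins' per field, but any deterministic policy would do."""
    branches = []
    for actor in actors:
        branch = copy.deepcopy(record)
        actor(branch)
        branches.append(branch)
    merged = dict(record)
    for branch in branches:  # deterministic merge order
        for field, value in branch.items():
            if record.get(field) != value:  # only fields the actor changed
                merged[field] = value
    return merged

# Two illustrative "trigger actors", each touching a different field.
def set_rating(rec): rec["Rating"] = "Hot"
def set_tier(rec): rec["Tier"] = "Gold"

result = run_actors({"Name": "Acme", "Rating": "Cold"}, [set_rating, set_tier])
# result -> {"Name": "Acme", "Rating": "Hot", "Tier": "Gold"}
```

The outcome is deterministic because the merge order and conflict policy are fixed up front, which is the property the cascading-trigger style lacks.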
@gclazio @CloudJedi__c I think there's a key difference though - git doesn't update data, it adds to the tree by creating new files and pointers. In this sense git is doing something much simpler which avoids the complications of cascading database transactions
@CloudSundial @CloudJedi__c I see it in the opposite way - git is doing something much more complex than database transactions, and it is doing it consistently. Think of an object as a folder, a record as a subfolder, and a field as a file containing a value - that is a tree!