
Lol wat? My wife brought this pasta sauce home tonight… what blockchain does this use lmao?
Steve | STABILITY Protocol

@HeySteveG
working at @STABILITYinc - a blockchain with no crypto and no fees. Infra to Prove Everything.

‼️ This is the last year on Earth that humans will create the majority of data. Next year, AI will take the lead, and with it comes a flood of unverifiable content, decisions, and outcomes.

The Big Four are stepping in with AI audit services. That's a start. But reputation-based assurance can't scale with what's coming. Is it time we started thinking less about trust - and more about proof?

⬇️ Check out our latest article from Klay Nichol explaining the shift from trust infrastructure to Proof Infrastructure - and why it matters more now than ever. linkedin.com/pulse/from-tru…


‼️ NEW WHITEPAPER

The world is getting flooded with data, and it's getting harder to verify what's real and what isn't. Information superabundance, accelerated by generative AI, is breaking society's traditional framework of "Trust Infrastructure" - trust based on self-verification, authority, and reputation - which can no longer scale to verify the truth, source, or integrity of such massive volumes of data.

Proof Infrastructure offers a solution: a novel overlay for digital communications where all data is untampered and independently provable, without relying on centralized platforms or intermediaries.

Read more: stabilityprotocol.com/documents/stab…
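The core mechanic behind "independently provable, untampered" data can be illustrated with a plain cryptographic hash commitment. This is a generic sketch of the idea, not STABILITY's actual protocol; the names and the sample document are hypothetical:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the data. Anyone holding the same bytes can
    recompute this and compare - no trusted intermediary required."""
    return hashlib.sha256(data).hexdigest()

# Publisher: anchor a fingerprint somewhere public (e.g. a ledger).
doc = b"quarterly audit report v1"
commitment = fingerprint(doc)

# Verifier: recompute and compare. A match proves the bytes are untampered;
# any edit, however small, changes the digest.
assert fingerprint(doc) == commitment
assert fingerprint(b"quarterly audit report v2") != commitment
```

The point is that verification rests on recomputation anyone can do, rather than on the reputation of whoever published the data.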

Good post from @balajis on the "verification gap".

You could see it as there being two modes in creation. Borrowing GAN terminology: 1) generation and 2) discrimination. E.g. painting: you make a brush stroke (1), and then you look for a while to see if you improved the painting (2). These two stages are interspersed in pretty much all creative work.

Second point: discrimination can be computationally very hard.
- Images are by far the easiest. E.g. image generator teams can create giant grids of results to decide if one image is better than another. Thank the giant GPU in your brain, built for processing images very fast.
- Text is much harder. It is skimmable, but you have to read it, and it is semantic, discrete, and precise, so you also have to reason (esp. in e.g. code).
- Audio is maybe even harder still, imo, because it forces a time axis, so it's not even skimmable. You're forced to spend serial compute and can't parallelize it at all.

You could say that in coding, LLMs have collapsed (1) to ~instant, but have done very little to address (2). A person still has to stare at the results and discriminate whether they are good. This is my major criticism of LLM coding: they casually spit out *way* too much code per query, at arbitrary complexity, pretending there is no stage 2. Getting that much code is bad and scary. Instead, the LLM has to actively work with you to break problems down into little incremental steps, each more easily verifiable. It has to anticipate the computational work of (2) and reduce it as much as possible. It has to really care.

This leads me to probably the biggest misunderstanding non-coders have about coding. They think that coding is about writing the code (1). It's not. It's about staring at the code (2). Loading it all into your working memory. Pacing back and forth. Thinking through all the edge cases. If you catch me at a random point while I'm "programming", I'm probably just staring at the screen and, if interrupted, really mad, because it is so computationally strenuous.

If we only make (1) much faster but don't also reduce (2) (which is most of the time!), then clearly the overall speed of coding won't improve (see Amdahl's law).
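The Amdahl's-law point can be made concrete with the standard formula: if only a fraction of the work is accelerated, overall speedup is capped by the unaccelerated remainder. The 80/20 split below is a hypothetical illustration, not a measured number:

```python
def amdahl_speedup(accelerated_fraction: float, speedup_factor: float) -> float:
    """Overall speedup when only `accelerated_fraction` of total time
    is sped up by `speedup_factor` (Amdahl's law)."""
    return 1.0 / ((1.0 - accelerated_fraction)
                  + accelerated_fraction / speedup_factor)

# Suppose generation (1) is 20% of coding time and discrimination (2)
# is the other 80%. Making generation 100x faster barely helps:
print(round(amdahl_speedup(0.2, 100.0), 3))  # 1.247

# Even an infinitely fast generator caps out at 1 / 0.8 = 1.25x:
print(amdahl_speedup(0.2, float("inf")))  # 1.25
```

In other words, as long as "staring at the code" dominates, collapsing generation to ~instant leaves total coding speed nearly unchanged.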

@FantomFDN In crypto, history repeats itself. 2018 called - it wants its ideas back 😂 (For the record, it's a great idea, and I'm sure others tried it too; it's just missing one key ingredient...) medium.com/ontologynetwor…
