

Chris W
@nycthinker
Swedish-American GenX. Flaneur-Traveler. Fan of Darwin, John Boyd.






Why has AI psychosis primarily affected high-level executives? Is it because they have no easy way to see the models' limitations empirically?


OPENAI: CUTS CHATGPT BUSINESS ANNUAL PRICE TO $20/SEAT FROM $25 - WEBSITE



My theory about why so many on the left remain in denial about AI is that their worldview rests on a load-bearing notion of “the tech industry” as being composed of vapid morons whose accomplishments will always be superficial, never “real,” always based on some grand theft. With social media and search, the theft was manipulation of people’s minds. With Amazon it was worker exploitation. With Apple, it was a mix of these. In the left retelling of the story, no value whatsoever was created from these technologies. All a trick.

With AI the “grand theft” in the telling of the left is the use of copyright-protected data in pre-training. This one is a particularly dangerous mindworm for them, since they identify with the “artists and writers” from whom they imagine this training data was “stolen.” This is why things like “mode collapse” from synthetic data, stochastic parrotry, “it can only mimic things it has seen on the web” and similar are so core to the argument for the left: it supports the notion of “tech bro” thieves—who lest we forget, and they never will let us, have no “liberal arts” training!—continuing their unbroken string of robberies.

Of course the “grand theft” notion is an old motif on the left, relating as it does to a zero-sum mindset about economics, business, and growth that is more traditionally associated with the left, though the lines have always been blurry, since the zero-sum mindset is above all else a *human* fallacy and thus a useful tactic in mass politics of all valences. The lines have become especially blurry lately, as has been widely observed.

Anyway, the notion that AI *is* a genuinely world-changing technology, that it can “go beyond” its “stolen” training data, breaks this load-bearing conception of the tech industry as vapid and superficial and, more importantly, of the people within it as blood-sucking thieves.




The West will be stronger if NATO is dissolved, because only then will Europe take defense seriously. (With apologies to Poland and the Baltics, who don't deserve this.)






Europe needs to reopen the Strait of Hormuz way more than America does, but refuses to help America do it. And Europe is within range of Iran's missiles, unlike America, but refuses to help America get rid of them. Maybe it's time to abandon Europe to its fate.





OpenAI’s Greg Brockman just ended a three-year argument. Can a text model actually understand reality? Or is it just expensive autocomplete?

Greg Brockman: “We have definitively answered that question. It is going to go to AGI.” Definitively. Not a forecast. Not a theory. A closing statement.

Brockman: “We have line of sight to these much, much better models that are coming this year.” A roadmap tells you where you are going. A targeting system tells you what you are about to hit.

The bottleneck inside OpenAI is no longer the science. The math is solved. Brockman: “The amount of pain within OpenAI that we’ve had to decide how to allocate compute… goes up, not down.” They are not stuck on an equation. They are feeding something that keeps getting hungrier. And they cannot stop feeding it. The constraint is not human genius. It is the physical grid.

And then he said this. Brockman: “The kinds of applications that we’ve always dreamed of are starting to come into reach. Like, for example, solving unsolved physics problems.” Unsolved physics. Not better search results. Not faster code reviews. Not smarter chatbots. The actual laws of the universe. Everything humanity wrote down, every equation, every argument, every failed theory, fed into a machine that is now finishing our sentences about the universe.

Most of the internet is still debating whether the machine is conscious. OpenAI is not waiting for a consensus. They are allocating compute and locking in the schedule. The argument about what these models are is over. What happens next is not a question anymore. It is a schedule.


so... I audited Garry's website after he bragged about 37K LOC/day and a 72-day shipping streak. here's what 78,400 lines of AI slop code actually looks like in production. a single homepage load of garryslist.org downloads 6.42 MB across 169 requests. for a newsletter-blog-thingy. 1/9🧵
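(the audit numbers above — total bytes and request count for one page load — are the kind of thing you can reproduce yourself from a HAR file exported in browser devtools. a minimal sketch; the filename is hypothetical, and `_transferSize` is a Chrome-specific HAR field, so the sketch falls back to content size when it's absent:)

```python
import json

def summarize_har(har_path):
    """Count requests and sum transferred bytes in a HAR capture.

    Uses Chrome's non-standard `_transferSize` on each response when
    present, falling back to the decoded body size otherwise.
    """
    with open(har_path) as f:
        har = json.load(f)
    entries = har["log"]["entries"]
    total_bytes = sum(
        e["response"].get("_transferSize")
        or e["response"]["content"].get("size", 0)
        for e in entries
    )
    return len(entries), total_bytes

# usage (hypothetical capture of a homepage load):
# count, total = summarize_har("garryslist.har")
# print(f"{count} requests, {total / 1e6:.2f} MB")
```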









