
Krešimir Končić
@Kreshomir
Owner at @neuralab // Writing a book on the future of programming and why ‘coding is the easier part’ // Not active here // https://t.co/O7ru5KJBDJ

Judging by my tl there is a growing gap in understanding of AI capability.

The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is the group of reactions laughing at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability of the latest round of state-of-the-art agentic models this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state-of-the-art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing, because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus goes along with them.

So that brings me to the second group of people, who *both* 1) pay for and use the state-of-the-art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions.

TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and, *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed, yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them.

So here we are.
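To make the "verifiable rewards" point concrete, here is a minimal sketch (mine, not from the post; the function name and the pytest invocation are illustrative assumptions, not any lab's actual training code) of why a coding task hands RL a clean signal:

import subprocess

def verifiable_reward(repo_dir: str) -> float:
    """Return 1.0 if the project's test suite passes, else 0.0.

    Assumes a hypothetical pytest-based suite in repo_dir; the point is
    that the outcome is binary and machine-checkable, with no human
    judgment in the loop.
    """
    result = subprocess.run(
        ["pytest", "-q"],   # any runnable test suite would do here
        cwd=repo_dir,
        capture_output=True,
    )
    return 1.0 if result.returncode == 0 else 0.0

There is no equivalent one-liner for "is this essay good?", which is exactly the asymmetry the post blames for writing improving more slowly than coding.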

The world is splitting between people who engage with reality to build the future and professional outrage artists spinning fantasy in the name of "accountability." Wired has cast its lot with the latter.

Wired talked to 37 people (including trying to talk to one employee's mother!) and discovered some Pulitzer-winning stuff: defense manufacturing is hard, Grimm didn't like his lunch, and we hold our people to the highest standards. Truly groundbreaking. After I suggested someone should buy them last month, this reads less like journalism and more like a petty grudge. An increasingly irrelevant tech publication put us in their burn book.

Newsflash @Wired: this changes nothing about what the Pentagon needs or what our adversaries fear. What this half-reported screed can't capture (because it wouldn't know how and didn't take us up on our offers to help) is where we actually are: scaling faster than anyone in this industry, fixing problems as we find them, and building things this country hasn't built in generations.

Don't like it? Don't work at Anduril.

@VasiliyZukanov At some point everyone re-realises that *unless you understand every bit of how the code works, you actually haven't saved yourself any time at all* (at least for software where failure is consequential; when building your own personal tools it doesn't matter).

EF, last year: Hey, we want to listen to you, users, to make Ethereum better.
EF, now: Jk, we looked at the real world. We don't like building for it after all; we'll go back to building cypherpunk stuff only.

This is the EF going back to its old ways, undoing the changes from last year. I feared this would happen, because Vitalik's heart clearly wasn't in it. But whatever they say about the "ecosystem" being able to take care of this, the fundamental problems remain:
- there are very few voices in ACD caring about real-world Ethereum usage
- there is nobody doing Ethereum BD (everyone else who is doing this also has their own separate interests)

@kevinroose Why do you think coders are generally okay with AI-generated code, but writers seem to generally not be okay with AI-generated writing? Assuming both are reviewed by humans.