🥔🥔🥔@argofowl
my main takeaways from core memory’s “the great reset at openai” — ep 67 with sam altman and greg brockman:
> greg framed openai’s operating edge as the pairing of sam’s “grand ambition” with execution discipline: sam keeps raising the target, and greg keeps forcing focus back to “the most important thing.” [05:04]
> sam said openai’s biggest strategic disagreement was never whether safety mattered, but how to talk about it. he credited greg with resisting a safety frame that could become more about power than actual safety. [07:52]
> greg argued that “agi going well” cannot be solved in a paper or by one technical intervention. openai’s strategy is iterative deployment plus broad social adaptation, not secretive lab-only alignment. [09:37]
> sam said the ai field has not done a good enough job explaining why building superintelligence would actually lead to a better everyday life for ordinary people. his point was that people do not just want cancer cures; they want agency, prosperity, meaningful work, and even a “right to adversity.” [11:58]
> one of greg’s clearest optimistic claims was that ai matters when it becomes personally useful, not just abstractly impressive. health-navigation stories and agent-enabled entrepreneurship are the bridge from “superintelligence” to lived value. [14:07]
> sam said chatgpt was “by far not the most impressive technological thing” openai had built, but it changed public opinion because people could feel the value directly. product, not explanation, is how the world updates. [21:28]
> greg described openai’s destination as “personal agi”: an ai that knows you, is trustworthy in domains like finance and health, and blurs the line between work agent and personal assistant. [26:07]
> sam and greg were unusually direct in saying llms can get much better at writing and personality. sam’s logic was that if this approach can solve open maths problems, it should be able to learn one person’s taste in writing. [33:47]
> greg teased that openai had more new models coming soon on the writing/personality front. [34:17]
> greg said the right way to judge model weaknesses is not the current snapshot but the slope. he acknowledged disappointment around gpt-5 writing and personality, but said openai has “line of sight” to improving it. [34:25]
> greg said openai launched chatgpt despite a competing school of thought that powerful ai should be built in secret. for him, broad deployment is part of making society resilient. [36:24]
> sam laid out the uncomfortable economics of ai: one future gives everyone a much higher floor but also creates trillionaires and worse inequality. he argued that cheap, abundant compute is the key lever to keep ai from becoming a rich-only advantage. [39:04]
> greg sharpened that point: “ai is opportunity for everyone if you have access.” without compute, even talented, agent-native kids cannot turn skill into mobility. [42:30]
> sam said the us has no credible fast catch-up plan in hardware and manufacturing except ai plus general-purpose robotics. without that, he agreed the current trajectory “looks terrible.” [47:18]
> greg said openai is in a “transition to agents.” coding has moved from autocomplete, to editor sidebars, to codex as an agent-management platform where humans keep roughly the high-level 20% and agents handle implementation details. [50:21]
> greg said “consumer” and “enterprise” are becoming less useful categories because agents will unlock tiny companies with previously impossible revenue. openai is reorganising around “solving goals across all contexts.” [55:18]
> greg identified sora as the clearest deprioritised product because its models are not unified with the core gpt series and the use case does not align as directly with openai’s agentic product suite. [56:08]
> greg argued that the business does not constrain openai’s ambition but enables it. his point was that revenue allows openai to scale compute, and compute deployed into products is a “profit centre,” not just a cost centre. [59:05]
> sam denied openai is pulling back on infrastructure. site-level choices may change, but he said the company will keep building “as much compute as we possibly can.” [59:46]
> they indicated that chips, networking, and the broader browser/super-app direction are still active priorities, even if the company is narrowing focus elsewhere. [01:00:38]
> openai is still explicitly pursuing robotics, but sam cautioned it is not near a “chatgpt moment.” greg also explicitly declined to give timelines. [01:00:53]
> greg conceded anthropic got ahead in applying coding models to messy real-world repos, not just programming benchmarks. he said that competition forced openai to improve, and now codex compares favourably head-to-head with claude. [01:02:21]
> sam criticised “too dangerous to release” rhetoric as sometimes legitimate, but also potentially useful as “fear-based marketing.” openai’s preferred path is mitigations, trusted access, and broader release rather than keeping ai in the hands of a small trusted class. [01:04:42]
> sam said anthropic was not treated well in the government cyber-model fight. he criticised threats like invoking the dpa or applying supply-chain-risk pressure, while also saying labs should not refuse to help defend the country. [01:07:15]
> sam blamed some ai drama on people who “only trust themselves to get it right” because they see the stakes as infinite. he argued ai should be a collective human project, not one person or ideology’s victory. [01:10:50]
> sam said agi no longer feels hypothetical: “somebody’s going to get to agi now,” and he suggested about five companies could reasonably do it, but he did not give an exact date. [01:15:04]
> greg said the elon/openai trial is a chance to finally tell openai’s version of history. according to him, sam, greg, ilya, and elon all agreed a for-profit path was necessary; the breaking point was elon demanding majority equity, ceo status, and “absolute control.” [01:17:42]