morgan —
@morqon
51.6K posts
twitter moron
london · Joined April 2008
2.5K Following · 9.6K Followers
Pinned Tweet
morgan — @morqon
i come to this website to learn about change management
morgan — @morqon
i get the same sense, but locate the turning point maybe seven years earlier, post-topolski. the rude awakening of 2016 and its aftermath gave them a new master narrative: casey on the tech-lash beat, nilay moving from “technology is the centre of culture” to “technology is political change”. the founding tagline “technology and how it makes us feel” gets a new answer: not great, actually
morgan — @morqon
the verge’s transition from hyperactive optimism to depressed cynicism is the narrative arc of an entire generation
Sam Sheffer @samsheffer

morgan — retweeted
morgan — @morqon
inside openai, by end of march: (1) for any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal (2) the default way humans utilize agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions
Greg Brockman @gdb

Software development is undergoing a renaissance in front of our eyes. If you haven't used the tools recently, you likely are underestimating what you're missing. Since December, there's been a step function improvement in what tools like Codex can do.

Some great engineers at OpenAI yesterday told me that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has yet made that leap, but it's usually because of factors besides the capability of the model.

Every company faces the same opportunity now, and navigating it well — just like with cloud computing or the Internet — requires careful thought. This post shares how OpenAI is currently approaching retooling our teams towards agentic software development. We're still learning and iterating, but here's how we're thinking about it right now.

As a first step, by March 31st, we're aiming that:
(1) For any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal.
(2) The default way humans utilize agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions.

In order to get there, here's what we recommended to the team a few weeks ago:

1. Take the time to try out the tools. The tools do sell themselves — many people have had amazing experiences with 5.2 in Codex, after having churned from codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet, or got stuck thinking "is there any way it could do X" rather than just trying.
- Designate an "agents captain" for your team — the primary person responsible for thinking about how agents can be brought into the team's workflow.
- Share experiences or questions in a few designated internal channels.
- Take a day for a company-wide Codex hackathon.

2. Create skills and AGENTS.md.
- Create and maintain an AGENTS.md for any project you work on; update the AGENTS.md whenever the agent does something wrong or struggles with a task.
- Write skills for anything that you get Codex to do, and commit them to the skills directory in a shared repository.

3. Inventory and make accessible any internal tools. Maintain a list of tools that your team relies on, and make sure someone takes point on making each one agent-accessible (such as via a CLI or MCP server).

4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground, and will require some exploration. Write tests which are quick to run, and create high-quality interfaces between components.

5. Say no to slop. Managing AI-generated code at scale is an emerging problem, and will require new processes and conventions to keep code quality high. Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.

6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, which can be guided by internal user feedback. The core tools are getting a lot better and more usable, but there's a lot of infrastructure that currently goes around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to it, and central management of the tools that agents are able to use.

Overall, adopting tools like Codex is not just a technical but also a deep cultural change, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team, and to think through other action items — for example, per item 5 above, what else can prevent a lot of "functionally-correct but poorly-maintainable code" from creeping into codebases?
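Brockman's recommendation to maintain an AGENTS.md per project could look something like the minimal sketch below. Everything in it (the project description, the `make` targets, the directory names) is invented for illustration; a real file would reflect the actual repo's commands and conventions:

```markdown
# AGENTS.md

## Project overview
Small HTTP API with a background worker queue. (Illustrative example.)

## Setup and commands
- Install dependencies: `make setup`
- Run the fast test suite before proposing a change: `make test-fast`
- Lint and format: `make lint`

## Conventions
- Prefer small, reviewable diffs; one logical change per pull request.
- New endpoints need a unit test and a CHANGELOG.md entry.

## Known pitfalls
- Integration tests require a local Postgres; skip them unless asked.
- Never edit generated files under `gen/`.
```

The post's advice to update this file whenever the agent struggles means it gradually accumulates the project-specific knowledge the model would otherwise have to rediscover on every run.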

morgan — @morqon
@fqure call it adrenaline futurism. josh in 2011: “everything that we think about, talk about, and dream about is on the brink” “we think the best is yet to come”
fqure @fqure
@morqon Josh’s shtick was never ‘optimism’
Zephyr @zephyr_z9
@morqon so Greg will be Koding??
morgan — retweeted
Fidji Simo @fidjissimo
Companies go through phases of exploration and phases of refocus; both are critical. But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment.
Berber Jin @berber_jin1

SCOOP - OpenAI is planning to simplify its product experience and launch one "superapp" -- part of its broader effort to instill more discipline and focus into the business, and beat back the threat posed by Anthropic more here in our @WSJ story wsj.com/tech/openai-pl…

am.will @LLMJunky
Finally proud to announce that I've joined the GPU Minor Leagues. 2 x RTX 6000 Pro. I have six months to pay off the second GPU lol. You are all TERRIBLE influences.
morgan — @morqon
@iMonkPro in retrospect, paul miller’s year offline was an early sign of things to come
morgan — retweeted
Theo @theojaffee
Negative sentiment toward AI is a luxury belief
morgan — @morqon
“ai for human alignment is not only a technical project. it is an existential one. the question is no longer only: how do we align machines with humans? it is also: what would it mean to use machines to help humans align with the lives they themselves most want to live?”
Houda Nait El Barj @Houda_nait

x.com/i/article/2034…

morgan — retweeted
Jarred Sumner @jarredsumner
Astral was my first (& so far only) angel investment. A small check in their Series A.
morgan — @morqon
openai secures a large allocation of samsung’s next-gen memory for their custom chip, the third largest allocation after nvidia and AMD. openai’s chip production starts this summer, should launch by year-end
Jukan @GTC2026 @jukan05

[EXCLUSIVE] Samsung Electronics Breaks Into OpenAI — Sole Supplier of 800 Million Gb HBM4

Samsung Electronics will become the first and sole supplier of next-generation High Bandwidth Memory 4 (HBM4) to OpenAI, the world’s largest artificial intelligence company. OpenAI plans to integrate Samsung’s HBM4 into its first-generation in-house AI chip, codenamed “Titan.” With this win following its earlier HBM4 supply agreement with NVIDIA, Samsung is being credited with cementing its leadership in the advanced AI chip market.

According to industry sources on the 19th, Samsung Electronics has agreed to supply OpenAI with up to 800 million gigabits (Gb) of HBM4 (12-layer product) in the second half of this year. That volume represents approximately 7% of Samsung’s total planned HBM output for the year (over 11 billion Gb), and roughly 15% of its HBM4-specific production (approximately 5.5 billion Gb). The allocation is understood to be the third largest after NVIDIA and AMD.

The HBM4 units Samsung will deliver are destined to sit alongside OpenAI’s first-ever AI chip, Titan Gen 1, developed in partnership with Broadcom. TSMC is expected to begin production in Q3, with a launch targeted for year-end.

The deal is seen as particularly significant given that OpenAI — the company that ignited the generative AI boom with the launch of ChatGPT in 2022 — selected Samsung as its inaugural HBM supplier. OpenAI also sits at the center of the United States’ Stargate Project, a planned $500 billion AI infrastructure initiative.

First Fruits After the JY Lee–Altman Meeting… AI Chip Orders Keep Coming

OpenAI operates hundreds of thousands of AI chips across its data centers to deliver generative AI services, having relied heavily on NVIDIA’s general-purpose AI semiconductors. More recently, the company concluded that it needed its own custom chips optimized for inference workloads — a trend that has become central to next-generation AI development.

An industry insider noted that “OpenAI has been investing heavily in R&D to successfully mass-produce its Titan chip,” adding that “Samsung satisfied the stringent HBM4 requirements that OpenAI set out, which is what made this deal possible.” Given that OpenAI has chosen Samsung as its first-ever HBM supplier, observers believe Samsung HBM is likely to be incorporated into future Titan generations as well.

HBM stacks multiple DRAM dies vertically — similar to floors in an apartment building — delivering greater capacity and faster data transfer speeds than conventional DRAM, making it the memory of choice for AI applications. Micron Technology has projected that the global HBM market will grow from approximately $35 billion in revenue last year to around $100 billion by 2028.

Samsung’s HBM business had a difficult stretch through last year, suffering back-to-back failures in NVIDIA’s qualification tests for HBM3 and HBM3E — a significant blow to the pride of the world’s top memory chipmaker. The turnaround came in May 2024, when Vice Chairman Jeon Young-hyun made the bold decision to redesign the DRAM at the core of Samsung’s HBM. This year, Samsung passed NVIDIA’s HBM4 qualification without a single design revision, enabling direct shipment of mass-production units. On March 18th, AMD announced it had designated Samsung as its preferred HBM4 supplier.

Demand for the prior-generation HBM3E is also said to be surging. Samsung is reportedly targeting over 5 billion Gb of HBM3E for Google’s Tensor Processing Units (TPUs) — also developed in partnership with Broadcom — in addition to its NVIDIA supply commitments.

Behind the OpenAI HBM4 deal, JY Lee, Chairman of Samsung Electronics, reportedly played a pivotal role. In October of last year, Lee met with OpenAI CEO Sam Altman and exchanged a Letter of Intent (LOI) covering the supply of cutting-edge AI memory including HBM, laying the groundwork for the agreement that has now come to fruition.
