Jack Lenz

217 posts

@jl_zenexcel

doing only what matters now, in service of what matters most

Joined June 2024
986 Following · 56 Followers
Pinned Tweet
Jack Lenz @jl_zenexcel
Smooth clade operator.
[image]
(0 replies · 0 retweets · 1 like · 51 views)
Jack Lenz @jl_zenexcel
Prompt says what matters.
Rules constrain what is safe.
Schema defines what is valid.
State shows what is current.
CLI defines what is executable.
MCP exposes what is connectable.
API defines what is addressable.
Files store what is durable.
Logs preserve what happened.
Tests prove what still works.
Approvals gate what can change.
Agent decides what to do next.
(1 reply · 0 retweets · 0 likes · 11 views)
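The layering in the tweet above can be made concrete with a toy sketch. Every name here (`Harness`, `is_valid`, `needs_approval`, the rule and action formats) is hypothetical, invented for illustration; the point is only that each layer answers exactly one question and the agent loop is left to decide what to do next.

```python
from dataclasses import dataclass, field

@dataclass
class Harness:
    prompt: str                     # prompt: what matters
    rules: list[str]                # rules: what is safe (guarded resources)
    schema: set                     # schema: what is valid (allowed action kinds)
    state: dict = field(default_factory=dict)  # state: what is current

def is_valid(action: dict, h: Harness) -> bool:
    # Schema defines what is valid: reject action kinds it doesn't name.
    return action.get("kind") in h.schema

def needs_approval(action: dict, h: Harness) -> bool:
    # Approvals gate what can change: anything touching a guarded resource.
    return any(r in action.get("target", "") for r in h.rules)

h = Harness(
    prompt="Migrate the billing tables",
    rules=["prod-db"],
    schema={"read", "write"},
)
```

With this split, the model never has to reason about safety or validity itself; those checks are deterministic code wrapped around whatever the agent proposes.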
Jack Lenz retweeted
Chrys Bader @chrysb
i spoke to a founder yesterday - their CTO finally read their agent-made codebase after months and panicked when he realized it was impossible to understand wtf was going on

my rule of thumb is: if your codebase starts written by agents, don't try to understand it

instead, align at the architectural level before any building happens, and ask the agent to maintain a living architecture diagram of how the system works

there are three altitudes that matter:
- Top-level: architecture
- Mid-level: patterns & abstractions
- Low-level: file-level code

in today's world, a CTO should be deeply concerned with #1. #2 matters too, but not as critical as #1. if #1 and #2 are dialed in, #3 is where most of the high leverage agentic gains live.

as long as you understand the architecture and critical interfaces, it becomes much easier to reason about ground truth and meaningfully iterate

understanding and informing the architecture / patterns / abstractions gives your codebase maximum longevity and agent maintainability
[image]
(28 replies · 51 retweets · 478 likes · 100.3K views)
Jack Lenz retweeted
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️
One of the biggest AI things happening in 2026 is the changing of organizations. The middle layers are going away. It's about to get a lot harder to pretend you provide value when you don't.

Not settled on the model, but I think every org will have 3 main roles:
1. Idea people come up with the thing
2. Technical SMEs build and maintain the thing
3. Sales and marketing sell the thing

Top employees will have lots of all three and will be world-class in at least one. What massively goes away are managers and facilitators who aren't creators, SMEs, or sellers. Basically anyone who's not directly shipping in one or more of the three categories.

Very hard to predict, obviously, but I think the order in which these then also get eaten by AI is probably:
1. SMEs without creation or selling
2. Idea people without SME knowledge
3. Sellers without SME knowledge

Selling might be the most durable skill in a world full of nearly free ideas and expertise.
(32 replies · 13 retweets · 141 likes · 15.9K views)
Jack Lenz retweeted
Matt Stockton @mstockton
Building agentic software right now is weird because for a lot of it, the patterns are not obvious yet.

Do you give the agent custom tools, or let it use normal shell commands? Do you let it write code? If yes, how much rope do you give it?

Do you use a model provider's SDK, a framework like Deep Agents, or just start with a tiny harness and grow it yourself?

When do you use subagents? What stays in the parent context? When a subagent finishes, what actually gets passed back?

If the agent is operating on files, how do you structure the filesystem so it can make progress?

For search, do you embed everything and give the agent a search tool? Put it in a vector DB? Or do something simpler?

How do you connect external systems? CLI, MCP, direct APIs? If you connect a bunch of tools, how do you keep the context window from turning into sludge?

What belongs in prompts vs deterministic code? What should skills actually be? Instructions, tools, workflows, examples?

Which model do you use? How do you upgrade without breaking things? How do you handle compaction and context distillation?

Do you build for one workflow, or something more general? Where do structured outputs help? Where do LLM judges actually help? Tests, evals, monitoring, prod checks?

A lot of this is more art than science right now. I'm still very much learning it. Mostly by running the agent a lot and looking at what comes out. Where does it get stuck? What does it misuse? What does it forget? What does it keep doing even though you told it not to?

After enough reps you start to see the patterns. That part is fun. It also makes it obvious how early we still are.
(2 replies · 1 retweet · 11 likes · 835 views)
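The "tiny harness and grow it yourself" option mentioned above can be sketched as a plain loop around a model call with a couple of hand-rolled tools. Everything here is a hypothetical shape, not any provider's real SDK: `call_model` is a stand-in for whatever model API you use, and the message/action format is invented for illustration.

```python
import subprocess

def run_shell(cmd: str) -> str:
    """Tool: run a shell command and return its stdout."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def read_file(path: str) -> str:
    """Tool: read a file from the agent's working directory."""
    with open(path) as f:
        return f.read()

TOOLS = {"run_shell": run_shell, "read_file": read_file}

def agent_loop(call_model, task: str, max_steps: int = 10) -> str:
    """call_model takes the message history and returns either
    {"tool": name, "args": {...}} or {"done": final_answer}."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(messages)
        if "done" in action:
            return action["done"]
        result = TOOLS[action["tool"]](**action["args"])
        # Feed the tool result back into the context window.
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"

# A scripted fake model, to show the loop's shape without a real provider:
def fake_model(messages):
    if len(messages) == 1:
        return {"tool": "run_shell", "args": {"cmd": "echo hi"}}
    return {"done": messages[-1]["content"].strip()}
```

Most of the open questions in the thread (subagents, what gets passed back, context sludge) are decisions about what goes into and out of `messages` in a loop like this.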
Jack Lenz retweeted
Kun Chen @kunchenguid
one of the things I'm doing now that wasn't possible without AI is OPINIONS.md

I get a ton of value and believe everyone should create one. I manage mine with Hermes agent from @NousResearch and gpt 5.5

shared my full OPINIONS.md and setup in blog.kunchenguid.com/p/everyone-sho…
(5 replies · 8 retweets · 121 likes · 8.7K views)
Jack Lenz retweeted
Matt Pocock @mattpocockuk
4 of the most confusing terms in AI, defined:

Model: a blob of parameters, written during training. Does next-token prediction and nothing else. Stateless.

Harness: everything around the model that turns it into an agent: tools, system prompt, context window management, etc.

Environment: the world the agent acts on. Anything outside the harness that the agent perceives and acts on via tools.

Agent: a model, harnessed, in an environment.

---

Opus is a model. Claude Code and Claude Web are different agents, because their harnesses differ - even though the models are the same. The file system is an environment. MCP servers add tools to the environment.
(75 replies · 112 retweets · 1.3K likes · 75.9K views)
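The four definitions above can be restated as a toy type sketch. All names here are hypothetical, chosen only to make the relationships concrete: a model is a stateless function, a harness wraps it, and the same model under two harnesses yields two different agents.

```python
from dataclasses import dataclass
from typing import Callable

# Model: stateless next-token prediction, nothing else.
Model = Callable[[str], str]  # context in, completion out

@dataclass
class Harness:
    """Everything around the model: tools, system prompt, context mgmt."""
    system_prompt: str
    tool_names: tuple

@dataclass
class Environment:
    """The world outside the harness, reached only via tools."""
    files: dict

@dataclass
class Agent:
    """A model, harnessed, in an environment."""
    model: Model
    harness: Harness
    env: Environment

opus: Model = lambda ctx: "..."  # stand-in for a real model
code_harness = Harness("You are a coding agent.", tool_names=("read", "edit"))
web_harness = Harness("You are a chat assistant.", tool_names=())
fs = Environment(files={})

# Same model, different harnesses -> different agents:
claude_code = Agent(opus, code_harness, fs)
claude_web = Agent(opus, web_harness, fs)
```

Under this framing, "adding an MCP server" changes neither the model nor the agent's identity directly; it extends the set of tools through which the agent reaches the environment.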
Jack Lenz retweeted
Claude @claudeai
In Cowork, Claude can now build live artifacts: dashboards and trackers connected to your apps and files. Open one any time and it refreshes with current data.
[image]
(666 replies · 1.6K retweets · 19.4K likes · 6.5M views)
Jack Lenz retweeted
Theo - t3.gg @theo
Agent harnesses aren't the black magic many of y'all seem to think they are. To prove it, I built one.
(157 replies · 228 retweets · 3.6K likes · 867.5K views)
Jack Lenz retweeted
Yann LeCun @ylecun
@Noahpinion Most "leading AI figures" think these p(doom) estimates are complete bullshit and that the existential risk is essentially zero. But most of them are silent. The doomers attract a disproportionate amount of attention, of course.
(68 replies · 50 retweets · 806 likes · 36.3K views)
Jack Lenz retweeted
Aaron Levie @levie
Another week on the road meeting with a couple dozen IT and AI leaders from large enterprises across banking, media, retail, healthcare, consulting, tech, and sports, to discuss agents in the enterprise. Some quick takeaways:

* Clear that we're moving from the chat era of AI to agents that use tools, process data, and start to execute real work in the enterprise. Complementing this, enterprises are often evolving from a "let a thousand flowers bloom" approach to adoption to targeted automation efforts applied to specific areas of work and workflow.

* Change management will remain one of the biggest topics for enterprises. Most workflows aren't set up to just drop agents directly in, and enterprises will need a ton of help to drive these efforts (both internally and from partners). One company has a head of AI in every business unit that rolls up to a central team, just to keep all the functions coordinated.

* Tokenmaxxing! Most companies operate with very strict OpEx budgets that get locked in for the year ahead, so they're going through very real trade-off discussions right now on how to budget for tokens. One company recently had an idea for a "shark tank" style way of pitching for compute budget. Others are trying to figure out how to ration compute to the best use-cases internally through some hierarchy of needs (my words, not theirs).

* Fixing fragmented and legacy systems remains a huge priority right now. Most enterprises are dealing with decades of either on-prem systems or systems they moved to the cloud but that still haven't been modernized in any meaningful way. This means agents can't easily tap into these data sources in a unified way yet, so companies are focused on how to modernize them.

* Most companies are *not* talking about replacing jobs due to agents. The major use-cases for agents are things the company wasn't able to do before or couldn't prioritize: software upgrades, automating back-office processes that were constraining other workflows, processing large amounts of documents to get new business or client insights, and so on. More emphasis on ways to make money vs. cut costs.

* Headless software dominated my conversations. Enterprises need to be able to ensure all of their software works across any set of agents they choose. They will kick out vendors that don't make this technically or economically easy.

* Clear sense that it can be hard to standardize on anything right now given how fast things are moving. Blessing and a curse of the innovation curve: no one wants to get stuck in a paradigm that locks them into the wrong architecture. One other result of this is that companies realize they're in a multi-agent world, which means that interoperability becomes paramount across systems.

* Unanimous sense that everyone is working more than ever before. AI is not causing anyone to do less work right now, and similar to Silicon Valley, people feel their teams are the busiest they've ever been.

One final meta observation, not called out explicitly: it seems that despite Silicon Valley's sense that AI has made hard things easy, the most powerful ways to use agents are more "technical" than in prior eras of software. Skills, MCP, CLIs, etc. may be simple concepts for tech, but in the real world these are esoteric concepts that will require technical people to bring to life in the enterprise. This means diffusion will take real work and time, but also that everyone's estimation of engineering jobs is totally off. Engineers may not be "writing" software, but they will certainly be the ones to set up and operate the systems that actually automate most work in the enterprise.
(255 replies · 646 retweets · 5.3K likes · 1.8M views)