Dennis Green lieber

4K posts


@greenlieber

3x CPO, 2x founder. Now building @propane_ai - AI interviews for customer intelligence. https://t.co/tCqmrlNx2T, Neurons, Duuoo. LFC dad. Product animal.

Copenhagen · Joined January 2009
2.7K Following · 1.1K Followers
Karri Saarinen@karrisaarinen·
We’ve had very accelerated growth these past quarters and are getting close to hitting a large revenue milestone at @linear It made me think about the beginning. It’s still amazing to me that you can start something from zero and, over time, see it grow into something meaningful. One marker of that progress is our changelog: 236 updates, published weekly or every other week since the start. A long trail of product bets, improvements, and fixes accumulated over years. linear.app/changelog/time…
9 replies · 0 reposts · 200 likes · 13.2K views
Dennis Green lieber@greenlieber·
@karpathy said it best: build the Iron Man suit, not the robot. Tony Stark could have stayed home and let the machine do it. He didn't. That's the point. Product management isn't dying. The thinking, the judgment, the vision: that's not going anywhere. But the people who do it are about to become ten times more powerful. Why I'm bullish on this space.
0 replies · 0 reposts · 0 likes · 26 views
Dennis Green lieber@greenlieber·
We sat down with feedback asking how far into the PM process @propane_ai could go. The team's response? Hold my beer. We're building something that goes further than you'd expect. Stay tuned. If you're a designer or PM, I'd love to show you what's coming. DM me.
0 replies · 0 reposts · 1 like · 22 views
Dennis Green lieber retweeted
Patrick OShaughnessy@patrick_oshag·
Brian on why pure people managers won't survive AI: "I don't think people that only manage people will have any value in the future. Everyone's going to have to be a hybrid people manager or manager IC. In other words, even the managers need to code. You can't just be these managers where you're people's therapists and you're just doing meetings, just one-on-ones. People who have lots of recurring one-on-ones are not going to survive. That kind of leadership style is not gonna work. You need to have context. I hear about heads of design, they don't actually manage the design. Jony Ive manages the design. He designs and he leads people. A design leader who only manages the people? That's crazy to me. The way Frank Lloyd Wright managed his design team is through the work. You don't manage the people, you manage the work. I think a lot of people will survive this age of AI. The two types of people that will not survive are pure people managers, and people that are rigid and don't want to change and evolve."
Patrick OShaughnessy@patrick_oshag

My guest today is Brian Chesky (@bchesky), founder and CEO of Airbnb and one of the great consumer founders of the last 20 years. Paul Graham coined "founder mode" based on Brian's experience running Airbnb. This conversation is about what comes after it, what he calls AI founder mode, and how it will force founders to focus even more on the details. We talk about his eleven-star exercise for finding product market fit, why your first hire should be a recruiter, and why Airbnb's $100B IPO became one of the saddest days of his life. Brian still comes across like the 17-year-old at the Rhode Island School of Design (RISD) who chose to study industrial design. His heroes are all artists: Da Vinci, Van Gogh, Walt Disney, and Steve Jobs, all of whom were working the week they died because they loved what they did. Rick Rubin taught him that an artist is only an artist when they make things for themselves. Now Brian believes AI is the opportunity for all of us to do the same. Enjoy! Timestamps: 1:00 Studying Industrial Design 11:33 AI Founder Mode 17:02 Lack of Consumer AI Companies 22:10 Small Teams and Focused Problems 30:52 The Evolution from Founder to CEO 38:13 The 11-Star Experience 41:07 AI as a Canvas for Creativity 48:17 Detaching from Success 53:12 Founder-Led Moats 58:34 The Next Chapter of Airbnb 1:03:08 What Endures in the Age of AI 1:06:43 Lessons from Bodybuilding 1:10:20 The CEO's No. 1 Job 1:17:01 Activating Talent 1:20:39 The Kindest Thing

65 replies · 103 reposts · 976 likes · 322.9K views
Dennis Green lieber retweeted
Ian Miles Cheong@ianmiles·
The 'vibecoding' hype is officially hitting a wall. We have passed the peak of inflated expectations regarding AI eliminating all software developers. David Sacks recently broke down the reality check the tech industry is facing, citing insights from Aaron Levie and Matthew Yglesias.

The consensus is shifting: people do not actually want to 'vibe code' their own complex applications. The real consumer demand is simple. We want professionally managed software companies to leverage AI coding assistants to build better, cheaper products. The translation is straightforward: just lower your prices, do not make the end user vibe code.

While agentic coding is an undeniable boon for professional developers looking to scale their output, and fantastic for beginners learning the ropes, it breaks down when casually building complex software. Casual users are not equipped to take on the ongoing risks of system upgrades, routine maintenance, bugs, and cybersecurity threats.

Chamath Palihapitiya takes it a step further, calling this casual approach to enterprise software a massive risk rather than just a tax on knowledge workers. He predicts that we will eventually see a public company completely torch its enterprise value because someone tried to vibe code their way out of a problem, leading to inevitable firings.

As Jason Calacanis points out, this is exactly how the technology adoption lifecycle works. The industry is currently moving from the peak of inflated expectations down into the trough of disillusionment. AI agents will eventually climb the slope of enlightenment and become a highly productive standard, but the idea of replacing the entire professional developer workforce overnight was just a phase in the cycle. FT: @theallinpod @jason @davidsacks @chamath @friedberg
52 replies · 59 reposts · 449 likes · 101.1K views
Dennis Green lieber@greenlieber·
If your company is 3–5 years old, you raised at peak valuations and you're still working off the hangover. If you're 10+, you just got repriced by 70%. A new company/product is 1–6 people, no politics, no overhead, just building. Good time to start. To everyone losing their jobs: use the time to build with the experience you have, learn new skills, and move into a domain or process you're passionate about.
0 replies · 0 reposts · 0 likes · 29 views
Dennis Green lieber@greenlieber·
Data and AI are not simple. Most tools are not ready for the trust bar enterprise teams set. We decided early we would not let that stop us. SOC 2 Type II before anyone asked. The agentic world moves fast; trust should not be the bottleneck. Thanks @TrustVanta for making it possible.
Propane@propane_ai

We just hit SOC 2 Type II. The agentic era means your product data is always in motion. Research, signals, customer context flowing through a live workspace. That data is the product. Keeping it safe is not optional.

0 replies · 0 reposts · 1 like · 40 views
Dennis Green lieber@greenlieber·
The PM tool graveyard is filling up fast. And most of them still think they have time. Good luck — we're coming for you.
0 replies · 0 reposts · 0 likes · 9 views
Dennis Green lieber@greenlieber·
You either got replaced by a skill, a plugin, or an agent. Or a new player just walked in and took all your customers.
1 reply · 0 reposts · 0 likes · 11 views
Dennis Green lieber@greenlieber·
If your PM tool wasn't already agentic 6 months ago, you're not late. You're done.
1 reply · 0 reposts · 0 likes · 42 views
andrew chen@andrewchen·
bullish on the PM role quietly becoming the most important role in tech again when anyone can build, the person who decides WHAT to build becomes the bottleneck
283 replies · 178 reposts · 2.3K likes · 227.2K views
Dennis Green lieber retweeted
Ben Fleming@benfleming__·
trust + quality do not go in the same sentence as AI for many… we’re changing that at @propane_ai, with Evals, giving you the confidence to make robust product decisions, grounded in reality🔬 we all owe quality to users and builders! what would take this to the next level?
1 reply · 2 reposts · 4 likes · 80 views
Ben Fleming@benfleming__·
beautiful day to ship from copenhagen🇩🇰 grateful to cycle through this city every day then ship with some great people over at @propane_ai something so efficient about this city - low crime, high trust and definitely velocity/focus on the right things!
1 reply · 0 reposts · 4 likes · 91 views
Dennis Green lieber retweeted
Akshay 🚀@akshay_pachaar·
from weights → context → harness engineering (evolution of the agent landscape from 2022-26)

the biggest shift in AI agents had nothing to do with making models smarter. it was about making the environment around them smarter. here's how agent engineering evolved in just 4 years, across three distinct phases:

phase 1: weights (2022)
everything was about the model itself. bigger models, more data, better training. scaling laws told us that progress = more parameters. RLHF and fine-tuning shaped behavior. if you wanted a better agent, you trained a better model. this worked great for single-turn tasks: ask a question, get an answer. but it hit a wall fast. updating one fact meant retraining. auditing behavior was nearly impossible. and personalization across millions of users from one frozen set of weights? not happening.

phase 2: context (2023-2024)
the realization: you don't always need to change the model. you can change what the model sees. prompt engineering, few-shot examples, chain-of-thought, RAG. suddenly the same frozen model could behave completely differently based on what you put in front of it. developers stopped fine-tuning and started iterating on prompts and retrieval pipelines instead. it was cheaper, faster, and surprisingly effective. but context windows are finite. long prompts get noisy. models attend unevenly (the "lost in the middle" problem is real). and every new session starts fresh with zero memory of what happened before. context made agents flexible. it didn't make them reliable.

phase 3: harness engineering (2025-2026)
this is where we are now, and the shift is fundamental. the question changed from "what should we tell the model?" to "what environment should the model operate in?" the model is no longer the sole location of intelligence. it sits inside a harness that includes persistent memory, reusable skills, standardized protocols (like MCP and A2A), execution sandboxes, approval gates, and observability layers. the model stays the same. what changes is the task it's being asked to solve.

a concrete example: a coding agent asked to implement a feature, run tests, and open a PR. without a harness, the model must keep repo structure, project conventions, workflow state, and tool interactions all inside a fragile prompt. with a harness, persistent memory supplies context, skill files encode conventions, protocolized interfaces enforce correct schemas, and the runtime sequences steps and handles failures. same model. completely different reliability.

the pattern across all three phases is simple:
- weights encoded knowledge in parameters (fast but rigid)
- context staged knowledge in prompts (flexible but ephemeral)
- harnesses externalized knowledge into persistent infrastructure (reliable and governable)

each phase didn't replace the previous one. it layered on top. weights still matter. context engineering still matters. but the center of gravity has moved outward. the most consequential improvements in agent reliability today rarely come from changing the base model. they come from better memory retrieval, sharper skill loading, tighter execution governance, and smarter context budget management. building better agents increasingly means building better environments for models to operate in.

there's a great paper on this: Externalization in LLM Agents: A Unified Review of Memory, Skills, Protocols and Harness Engineering
paper: arxiv.org/abs/2604.08224

i also published this deep dive (article) on agent harness engineering, covering the orchestration loop, tools, memory, context management, and everything else that transforms a stateless LLM into a capable agent.
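The harness idea in the thread (persistent memory, a skills/tool registry, and an orchestration loop wrapped around a frozen, stateless model) can be sketched in a few lines. This is a toy illustration under assumed names, not any real framework's API: `Harness`, `run_task`, and the stub model are hypothetical.

```python
# Minimal harness sketch: the "intelligence" is a stateless callable,
# while memory, tool dispatch, and step sequencing live in the harness.
# All names here are illustrative, not from a real agent framework.

class Harness:
    def __init__(self, model, tools):
        self.model = model    # stateless callable: (task, context) -> action dict
        self.tools = tools    # name -> callable: the reusable "skills" layer
        self.memory = []      # persistent memory that outlives a single call

    def run_task(self, task, max_steps=5):
        for _ in range(max_steps):
            # 1. the harness, not a fragile prompt, supplies recent context
            context = self.memory[-3:]
            action = self.model(task, context)
            if action["type"] == "done":
                return action["result"]
            # 2. protocolized tool call: the harness resolves and invokes the skill
            tool = self.tools[action["tool"]]
            observation = tool(*action.get("args", []))
            # 3. the result is written back to persistent memory for later steps
            self.memory.append({"tool": action["tool"], "obs": observation})
        return None  # step budget exhausted

# stub "model": requests one tool call, then finishes with the observed result
def stub_model(task, context):
    if not context:
        return {"type": "call", "tool": "add", "args": task["numbers"]}
    return {"type": "done", "result": context[-1]["obs"]}

harness = Harness(stub_model, {"add": lambda a, b: a + b})
print(harness.run_task({"numbers": (2, 3)}))  # prints 5
```

Swapping the stub for a real LLM call changes nothing structural: the same loop, memory, and tool registry stay in place, which is the thread's point about reliability living in the environment rather than the weights.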
Akshay 🚀@akshay_pachaar

x.com/i/article/2040…

42 replies · 208 reposts · 1.1K likes · 154K views
Dennis Green lieber@greenlieber·
@damienghader I think there's absolutely room for both personas in the market. Some people need to build something light and not think about anything. Some people want the power of the most advanced coding agent system.
1 reply · 0 reposts · 2 likes · 214 views
damien@damienghader·
Anthropic vs. Lovable for apps. Who will win the 99%? Breakdown ↓ Lovable is a full-stack app creator with: • simple workflows • tight iteration loops • deep understanding of non-technical builders Claude will without a doubt be powerful and just as capable. The hard part for any NEW builder is: going from idea → usable product → continuous iteration Not just in one prompt. This requires: • opinionated UX • guided flows • persistent context Lovable was optimized for this exact workflow from day one. The battle will come down to user experience.
12 replies · 4 reposts · 56 likes · 3.6K views