Luuk Derksen

1.9K posts


@luckylwk

Co-Founder of @orbiit_ai (acq. by @hivebrite). Building applied AI.

Amsterdam, The Netherlands · Joined November 2013
1.8K Following · 653 Followers

Pinned Tweet
Luuk Derksen @luckylwk
I am proud to be able to say that @orbiit_ai has been acquired by @hivebrite! 🎉 Our AI-powered matching software is joining forces with the leading community engagement platform. A moment of immense pride for the entire Orbiit team! 👉 hivebrite.io/hivebrite-acqu…
hivebrite @hivebrite

📣 Hivebrite acquires @orbiit_ai, a pioneering AI-powered matching company, to take community engagement to the next level. Welcome aboard, Orbiit team! 🙌 @BilyanaFreye @luckylwk ➡️ Discover more: lnkd.in/ejcZPCNe

2 replies · 0 reposts · 7 likes · 471 views
Luuk Derksen @luckylwk
@chintanturakhia Out of curiosity: What are you using to run the harness/agents on at Base/Coinbase? Anything like Cloudflare or Modal sandboxes or did you opt for a custom solution?
0 replies · 0 reposts · 0 likes · 15 views
Luuk Derksen retweeted
andrew chen @andrewchen
in a world of agents, the product role is going to split into two jobs:
- one that organizes humans (stakeholders, design, eng)
- one that organizes agents (prompts, evals, workflows, etc)

Both will be in pursuit of offering the right products to customers, but how you get there will dramatically change. What happens to the typical product rituals? Instead of PRDs, OKRs, standups, product reviews, we'll need the equivalent for agents. A couple of wild ideas here...

instead of standups: agents will report back to us based on run logs and anomaly flags. no one needs to say what they did yesterday, the system already did thousands of things. the question is where it broke, where it surprised you, and where it got better. Show us the patterns, the trends, the edge cases - particularly the ones the agents didn't fix automatically. the daily ritual becomes reviewing deltas, scanning failures, and deciding which ones matter. less reporting, more triage

instead of OKRs: we'll need adversarial agents that continuously monitor/grade the system and detect patterns, scoring outcomes on an hourly or daily basis. Rather than setting a quarterly goal of "increase X by 5%" and revisiting it slowly, management will be able to monitor success in real time and detect trends/patterns towards overall goals

instead of PRDs: we won't need waterfall. Prototyping will rule the day, and we'll need a living agentic loop that mediates customer feedback/ratings and what's being prioritized and built. you don't hand it to eng, you deploy it into the agent loop. if it's wrong, it fails visibly and you can revert. if it's right, it produces the right output

instead of product reviews: we'll need simulation systems to examine agent behavior in different scenarios. In an agentic world where UI shifts from buttons/menus to agents automatically doing things, you'll want to examine their behavior before you deploy. You rewind decisions, fork alternate paths, and see how different prompts or constraints would have changed outcomes. the review becomes interactive. less storytelling, more counterfactuals.

The PM sits in the middle of this split. On the human side, still aligning taste, risk tolerance, and strategy across people. On the agent side, shaping the actual behavior of the system through prompts, evals, and feedback loops. one side is persuasion. The other is instrumentation. the best ones will collapse the gap, translating intent directly into systems that act on it.

the fascinating part is that the agentic loop will run 10000x faster than the human one, and of course, you can "hire" them faster. Thus the "organizing humans" half starts to feel slow and lower impact unless it directly improves the agent loop. Eventually the PM will shift towards agents and maybe ignore the human coordination altogether...
80 replies · 54 reposts · 580 likes · 57.5K views
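The "standups become log triage" idea above can be sketched roughly as follows. This is purely illustrative, assuming agents emit structured run records: every name here (`RunRecord`, `triage_report`, the status values) is hypothetical, not any real agent framework's API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RunRecord:
    agent: str
    task: str
    status: str        # hypothetical states: "ok", "failed", "anomaly"
    auto_fixed: bool   # did the agent recover on its own?

def triage_report(runs):
    """Compress thousands of agent runs into the deltas a human should
    actually look at: status trends, plus the failures and anomalies
    the agents did NOT fix automatically."""
    counts = Counter(r.status for r in runs)
    needs_review = [r for r in runs
                    if r.status != "ok" and not r.auto_fixed]
    return {
        "total": len(runs),
        "by_status": dict(counts),
        "needs_review": [(r.agent, r.task) for r in needs_review],
    }
```

The point of the sketch is the shape of the ritual: nobody narrates what they did yesterday; a report like `needs_review` is the whole morning agenda.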
Chintan Turakhia @chintanturakhia
Run this prompt frequently. You're welcome.
121 replies · 315 reposts · 6.9K likes · 909K views
Benji Taylor @benjitaylor
Readout is a fully native macOS app I’ve been building for myself. It provides a real-time overview of your dev environment and Claude Code config. All local, no account required. It's still very much a beta, but now available to try: readout.org
232 replies · 152 reposts · 3.2K likes · 491.9K views
Luuk Derksen retweeted
Jared Friedman @snowmaker
Software engineering changed more in the last 3 months than the preceding 30 years. Everything about running a software company needs to be rethought from first principles.
355 replies · 710 reposts · 6.7K likes · 541.3K views
Luuk Derksen @luckylwk
@tobi This is fantastic, I was already loving the whole setup 💎
0 replies · 0 reposts · 0 likes · 174 views
Alex Lieberman @businessbarista
I want to start a community dedicated to Claude Code. It’s become the gateway drug to coding and experiencing the power of AI for tons of people. This will be a space for people to share killer use cases, agentic workflows, proven prompts, and connect with other CC obsessives. Comment “Claude” if you want to join.
7.1K replies · 201 reposts · 6.3K likes · 620.3K views
Luuk Derksen @luckylwk
The X feature I didn’t know I wanted all along: article reading mode. So simple and so good👏 @nikitabier
0 replies · 0 reposts · 1 like · 15 views
Luuk Derksen @luckylwk
@levelsio What does it say about the EU that I am impressed by their turnaround speed to decline 🤣
0 replies · 0 reposts · 0 likes · 279 views
Luuk Derksen @luckylwk
@levie When everything becomes context, most of ‘our’ roles will become more about context curation than ‘managing’ agents
1 reply · 0 reposts · 2 likes · 125 views
Aaron Levie @levie
A core AI agent product management principle is just figuring out what a very smart person (without any initial context whatsoever) would need to perform the task successfully. The whole game is just doing everything possible to get just the right information into the context window to ensure that the agent gets access to the most relevant data and tools to execute.

Every time we're trying to figure out why something works or doesn't work about an agent, usually it just boils down to the fact that a human would need totally different or meaningfully more context to execute the same action. Usually then the problem lies somewhere in the agent's use of tools (like search), or not giving the agent enough data to work with, or sometimes giving it too much, or not explaining the task or objective properly, and so on.

The great thing is that every one of these issues is tractable. The models will just keep getting better at every one of these issues. And you can always throw more compute at the problem in whatever form that is (more reasoning, more planning, more data retrieved, etc.) - it's just a matter of cost/speed tradeoffs. Very interesting new space to be building for.
43 replies · 56 reposts · 429 likes · 81K views
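The context-curation game Levie describes - packing the most relevant data into a fixed-size context window - can be sketched as a simple greedy packer. Everything here is an assumption for illustration: `ContextItem`, `build_context`, the `relevance` score, and the character budget standing in for a token budget are all hypothetical names, not a real library.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    text: str
    relevance: float   # higher = more relevant to the task (assumed pre-scored)

def build_context(task: str, items, budget_chars: int = 2000) -> str:
    """Greedily pack the most relevant items first until the budget
    (a crude stand-in for the model's context window) is spent."""
    header = f"Task: {task}"
    parts = [header]
    used = len(header)
    for item in sorted(items, key=lambda i: i.relevance, reverse=True):
        if used + len(item.text) > budget_chars:
            continue  # skip what doesn't fit; keep trying smaller items
        parts.append(item.text)
        used += len(item.text)
    return "\n".join(parts)
```

The design choice worth noting is the tradeoff Levie names: a bigger budget (more compute/cost) admits more context, while a tighter one forces the relevance scoring to do all the work.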