Charlotte Kleverud

1.1K posts


@lottite

building @momentalos | swede in the bay area

Menlo Park, CA · Joined July 2011
604 Following · 317 Followers
Charlotte Kleverud @lottite
@lewiscarhart If you're so much better (which I thought you were; you're great at marketing!), it's just ungraceful to bash a smaller competitor
1 reply · 0 reposts · 3 likes · 60 views

Lewis ⚡ soc2/acc @lewiscarhart
OneLeet is not a serious platform.
> No device agent
> Manual screenshots
> Less than 100 integrations
> Terrible policies and vendor management (your ex-customers have shared screenshots with us)
I laughed out loud when I saw you guys managed to convince someone to invest $30M
Lewis ⚡ soc2/acc tweet media
2 replies · 1 repost · 13 likes · 4.1K views

erin griffith @eringriffith
A detailed and brutal look at the tactics of buzzy AI compliance startup Delve "Delve built a machine designed to make clients complicit without their knowledge, to manufacture plausible deniability while producing exactly the opposite." substack.com/home/post/p-19…
64 replies · 55 reposts · 956 likes · 707.7K views

Adam Draper ⏻ @AdamDraper
accidentally typed in gail.com instead of gmail, and it's my new favorite website.
Adam Draper ⏻ tweet media
118 replies · 1.2K reposts · 16.4K likes · 592.8K views

Charlotte Kleverud @lottite
@big_duca Came across this today. For comparison: Google employees in Sweden make on average 190k per year
Charlotte Kleverud tweet media
0 replies · 0 reposts · 1 like · 64 views

Duca @big_duca
Man, if you're in tech, you'd better be in the US. I was curious, so I looked at a job posting: staff engineer, 10 years' experience, based in Europe, limited PTO. And my comp for my first job, when I was 19, was 3x the listed salary.
21 replies · 2 reposts · 51 likes · 7.9K views

Brianne Kimmel @briannekimmel
Hosting some unexpected, quirky, uniquely SF experiences over the coming weeks. If you're new to the city or want to meet new people outside of work, please DM me.
23 replies · 7 reposts · 195 likes · 13.7K views

Charlotte Kleverud @lottite
I wish there was a more established scale of agency. In my world, real agents can proactively figure out how to solve problems (and even what problems to solve), coordinate with agents and humans autonomously, use tools and apps like a human to drive outcomes, and learn from their mistakes and improve. But that bar seems pretty high
1 reply · 0 reposts · 1 like · 45 views

søren @sorenrood
agents (1) automating one-off tasks vs (2) orchestrating 20-day processes across 5 people and 8 systems are fundamentally different systems-design problems. imo the latter is far more interesting right now. single player // multi player
2 replies · 2 reposts · 12 likes · 830 views

Charlotte Kleverud @lottite
@petergyang @andrewchen Agree with the latter, but OKRs - didn't they work because of politics and optics, i.e.… humans? I'm finding them quite useful for agents since they are, in their essence, just goals and their success criteria
0 replies · 0 reposts · 0 likes · 56 views
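The claim above, that OKRs are "just goals and their success criteria" and therefore suit agents, can be sketched as a data structure an agent's output could be scored against. A minimal illustration only: the objective, metric names, and thresholds below are all invented, not anything from @momentalos.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class KeyResult:
    description: str
    check: Callable[[dict], bool]  # measurable success criterion over reported metrics

@dataclass
class Objective:
    goal: str
    key_results: list[KeyResult] = field(default_factory=list)

    def evaluate(self, metrics: dict) -> dict[str, bool]:
        """Score each key result against the agent's reported metrics."""
        return {kr.description: kr.check(metrics) for kr in self.key_results}

# Hypothetical objective handed to an agent, with machine-checkable criteria
obj = Objective(
    goal="Reduce onboarding drop-off",
    key_results=[
        KeyResult("activation rate >= 40%", lambda m: m["activation_rate"] >= 0.40),
        KeyResult("time-to-first-action <= 120s", lambda m: m["ttfa_seconds"] <= 120),
    ],
)

print(obj.evaluate({"activation_rate": 0.43, "ttfa_seconds": 150}))
```

Because the key results are plain predicates, the same objective can be re-evaluated on every agent run rather than once a quarter.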
Peter Yang @petergyang
@andrewchen Hot take: OKRs, standup, and this waterfall crap never worked anyway :)
6 replies · 1 repost · 38 likes · 2.7K views

andrew chen @andrewchen
in a world of agents, the product role is going to split into two jobs:
- one that organizes humans (stakeholders, design, eng)
- one that organizes agents (prompts, evals, workflows, etc)

Both will be in pursuit of offering the right products to customers, but how you get there will dramatically change. What happens to the typical product rituals? Instead of PRDs, OKRs, standups, and product reviews, we'll need the equivalent for agents. A couple of wild ideas here...

Instead of standups: agents will report back to us based on run logs and anomaly flags. No one needs to say what they did yesterday; the system already did thousands of things. The question is where it broke, where it surprised you, and where it got better. Show us the patterns, the trends, the edge cases - particularly the ones the agents didn't fix automatically. The daily ritual becomes reviewing deltas, scanning failures, and deciding which ones matter. Less reporting, more triage.

Instead of OKRs: we'll need adversarial agents that continuously monitor/grade the system and detect patterns, scoring outcomes on an hourly or daily basis. Rather than setting a quarterly goal of "increase X by 5%" and revisiting slowly, management will be able to monitor success in real time and detect trends/patterns toward overall goals.

Instead of PRDs: we won't need waterfall. Prototyping will rule the day, and we'll need a living agentic loop that mediates customer feedback/ratings and what's being prioritized and built. You don't hand it to eng, you deploy it into the agent loop. If it's wrong, it fails visibly and you can revert. If it's right, it produces the right output.

Instead of product reviews: we'll need simulation systems to examine agent behavior in different scenarios. In an agentic world where UI shifts from buttons/menus to agents automatically doing things, you'll want to examine their behavior before you deploy. You rewind decisions, fork alternate paths, and see how different prompts or constraints would have changed outcomes. The review becomes interactive: less storytelling, more counterfactuals.

The PM sits in the middle of this split. On the human side, still aligning taste, risk tolerance, and strategy across people. On the agent side, shaping the actual behavior of the system through prompts, evals, and feedback loops. One side is persuasion; the other is instrumentation. The best ones will collapse the gap, translating intent directly into systems that act on it.

The fascinating part is that the agentic loop will run 10,000x faster than the human one, and of course, you can "hire" them faster. Thus the "organizing humans" half starts to feel slow and lower impact unless it directly improves the agent loop. Eventually the PM will shift toward agents and maybe ignore the human coordination altogether...
80 replies · 53 reposts · 582 likes · 56.1K views
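The "standups become triage" idea above can be sketched concretely: instead of status updates, a script scans agent run records and surfaces only the failures and anomalies for human review. Everything here (the run records, agent names, and statuses) is hypothetical illustration, not any real framework's API.

```python
from collections import Counter

# Hypothetical agent run records - in a real system these would come from run logs.
runs = [
    {"agent": "billing-bot", "status": "ok"},
    {"agent": "billing-bot", "status": "error", "note": "timeout calling invoice API"},
    {"agent": "support-bot", "status": "ok"},
    {"agent": "support-bot", "status": "anomaly", "note": "response length 10x baseline"},
]

def triage(runs):
    """Keep only runs a human should look at, plus a per-agent failure count."""
    flagged = [r for r in runs if r["status"] != "ok"]
    by_agent = Counter(r["agent"] for r in flagged)
    return flagged, by_agent

flagged, by_agent = triage(runs)
for r in flagged:
    # The "standup" output: only deltas and failures, never routine status.
    print(f"{r['agent']}: {r['status']} - {r['note']}")
```

The design choice matches the tweet: the thousands of successful runs never reach the meeting; only the exceptions do.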
Kristof @CoastalFuturist
If there's enough interest, I'd like to make a group chat for people using openclaw / hermes agent heavily. I really want to understand some good use cases and best practices, and just have a place for people to talk shop. Comment if you're interested
322 replies · 4 reposts · 327 likes · 20.5K views

Charlotte Kleverud @lottite
It's been three strange years for product people:
> 2023: become an AI product manager, whatever that is
> 2024: vibe code or you won't make it
> 2025: Claude Code or you won't make it
> 2026: everyone is a product manager anyway
- this one is for everyone new to product
Momental @momentalos

x.com/i/article/2033…

0 replies · 0 reposts · 0 likes · 187 views

Dmitry Shevelenko @dmitry140
no company founded >2026 will organically grow to >10k employees
27 replies · 6 reposts · 97 likes · 18.4K views

Charlotte Kleverud @lottite
@rryssf_ “The part nobody has solved yet: When one agent updates shared memory, the other agent has no way of knowing when that update is visible or what happens if both write conflicting information at the same time.” -> we’re solving it in @momentalos
0 replies · 0 reposts · 0 likes · 59 views

Robert Youssef @rryssf_
🚨 BREAKING: AI agents can't share memory without corrupting it. Here's why every multi-agent system being built right now is sitting on a time bomb:

When two AI agents work on the same task, they share memory. One reads while the other writes. Sometimes simultaneously. And there are zero rules governing any of it.

Computer scientists solved this exact problem in the 1970s. They called it memory consistency. Every processor, every operating system, every database runs on it. AI agents skipped the memo entirely.

We built entire multi-agent frameworks (AutoGen, LangGraph, CrewAI) without a single consistency model underneath them. The result:
> agents overwriting each other's work
> reading stale information and treating it as fact
> producing conflicting outputs with zero awareness that a conflict exists

UC San Diego mapped the fix using classical computer architecture as the blueprint:
> three memory layers (I/O, cache, long-term storage)
> two critical missing protocols: one for sharing cached results between agents, and one for defining who can read or write what and when

The part nobody has solved yet: when one agent updates shared memory, the other agent has no way of knowing when that update is visible, or what happens if both write conflicting information at the same time.

Every multi-agent system in production today is running without these rules. That's not a future problem. That's the current state of the entire industry.
Robert Youssef tweet media
56 replies · 87 reposts · 383 likes · 24.4K views
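One way to read the "nobody has solved yet" gap above: classical systems answer it with versioned writes. Below is a minimal sketch, not from the UCSD work and not any framework's API, of compare-and-swap over a shared agent store: a writer must prove it saw the latest version, so a conflicting concurrent write fails loudly instead of silently overwriting.

```python
import threading

class SharedMemory:
    """Toy versioned key-value store shared between agents (illustrative only)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}  # key -> (version, value)

    def read(self, key):
        with self._lock:
            return self._store.get(key, (0, None))

    def write(self, key, value, expected_version):
        """Compare-and-swap: succeeds only if no other agent wrote since our read."""
        with self._lock:
            current_version, _ = self._store.get(key, (0, None))
            if current_version != expected_version:
                return False  # stale write: caller must re-read and reconcile
            self._store[key] = (current_version + 1, value)
            return True

mem = SharedMemory()
version, _ = mem.read("plan")
assert mem.write("plan", "agent A's draft", expected_version=version)      # first write wins
assert not mem.write("plan", "agent B's draft", expected_version=version)  # conflict detected
```

The point is not the lock but the version check: agent B learns its view was stale at write time, which is exactly the visibility signal the tweet says today's frameworks lack.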
Charlotte Kleverud @lottite
@carlhua Hey, we never thought of ourselves as an IDE - we're building @momentalos, where you assign agents (like Claude Code) to an objective, which the agent breaks down into success metrics, problems to solve, and features to build based on your goals and context. Happy to chat
1 reply · 0 reposts · 4 likes · 1.2K views

Carl Hua @carlhua
i used to write flight software - you know, the kind that flies jets and spacecraft. If I had Claude/Codex back in the day, I wouldn't even need to look at code.

Hear me out. People say AI slop this, AI slop that, the code it generated is trash, etc. The thing is, with a highly tightened coding standard and requirement traceability down to, say, ~30-50 lines of code per requirement, you almost no longer need to look at the code it produced before doing testing.

2022 - oh wow, AI can write some code in the chat app
2023-2024 - Cursor is amazing: it can write some code and understands the context
2025 - the code is getting better and better, but often with mistakes
2026 - given the right guidelines and specs, it is an EXPECTATION that the code should work the first or second round

I foresee in the near future an IDE that no longer prioritizes displaying code. Instead, the IDE would be a tool to orchestrate agents, with response gathering and display - an agent command center. I already see people building some of this, but I think people are thinking about it wrong: don't take the IDE or terminal as a baseline, we need to completely revamp it.

This excites me because for once, we are going to revolutionize how coding is done. Wait... it's not coding anymore, it's more general than that, but you get the point!

The subagent framework today is largely inadequate - we need a layer of LLMs on top of analyzing agents, and then allow humans to control. But not all subagents should report back to the same upper LLM. Who's building this? I will invest.
84 replies · 10 reposts · 276 likes · 46.4K views
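The requirement-traceability point above can be made concrete: if every requirement must trace to a code span of at most ~50 lines, an automated audit can flag untraced or oversized requirements before testing ever starts. The requirement IDs, file names, and spans below are invented for illustration, not from any real flight-software standard.

```python
# Hypothetical traceability matrix: requirement ID -> implementing code span.
trace_matrix = {
    "REQ-101": {"file": "autopilot.py", "lines": (10, 52)},  # 43-line span
    "REQ-102": {"file": "telemetry.py", "lines": (5, 20)},   # 16-line span
    "REQ-103": None,                                         # not yet traced to code
}

def audit(matrix, max_lines=50):
    """Flag requirements with no traced code, or with spans larger than max_lines."""
    problems = []
    for req, span in matrix.items():
        if span is None:
            problems.append(f"{req}: no implementation traced")
        else:
            start, end = span["lines"]
            size = end - start + 1
            if size > max_lines:
                problems.append(f"{req}: span of {size} lines exceeds {max_lines}")
    return problems

for problem in audit(trace_matrix):
    print(problem)
```

Keeping spans small is what makes "test without reading the code" plausible: each requirement's behavior is exercised by tests against a bounded, reviewable unit.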
Charlotte Kleverud @lottite
We negotiated our rental lease like this last year - shared the prompt and the output, pointing out where the contract was unreasonably strict. Since we shared the prompt, our landlord could see that we weren't biased, but instead aimed to make the lease hold up in a California court in case of a dispute
0 replies · 0 reposts · 0 likes · 21 views

Troy Kirwin @tkexpress11
soon you won't prep a deck for a team meeting to make a business decision. instead you'll prep and discuss a prompt - one that susses out the right parameters/inputs driving the outcome. the result of the meeting will be:
- aligning on the prompt
- feeding it into a simulation engine -> getting the ultimate recommendation
10 replies · 5 reposts · 31 likes · 2.2K views