stas kaufman
@stask_85

41 posts

Building Kavim — a visual canvas for AI chats & ideas. Open-source • BYOK • Infinite canvas. Sharing the journey in public.

Joined August 2011
23 Following · 5 Followers
stas kaufman@stask_85·
@naasking @QuentinAnthon15 same here, copy-pasting summaries into new threads gets old fast. I’ve been playing with a tool that just branches the convo automatically, so you keep your current line going while exploring the new idea on the side. makes debugging or reasoning feel way smoother.
Sandro Magi@naasking·
@QuentinAnthon15 I often start new chat windows for this reason, but I've lately been thinking of asking it to restate the problem, summarize the solutions that have failed so far, and copy-pasting that into a new window. Not for code specifically, but for debugging general issues. Ever tried?
Quentin Anthony@QuentinAnthon15·
I think cases of LLM-overuse can happen because it's easy to optimize for perceived enjoyment rather than time-to-solution while working. Me pressing tab in cursor for 5 hours instead of debugging for 1:
stas kaufman@stask_85·
@BawdyAnarchist_ @QuentinAnthon15 yeah that’s a clever system - kind of like manual version control for chats. we’ve been trying something similar visually - each fork becomes its own node you can follow or merge back. turns the conversation into a living map instead of a scroll of text.
BawdyAnarchist@BawdyAnarchist_·
If you want a really solid way to prune and manage your context, use nano-gpt.com. I regularly pursue rabbit trails of peripheral knowledge, then return to the moment I forked off and duplicate the main context from there, which removes the messages afterward. If I screwed up phrasing in a query, I delete the response, edit my query, and resend. If the LLM is having difficulty getting something correct, I delete the interim messages and keep only the last one where it got it right. I can even edit its own responses to correct subtle misunderstandings of design or instruction, rather than having to pollute the conversation with additional corrections.

Lately I am discovering that the "Pro" and "thinking" models might even be less performant than their "Flash/Mini/Light" counterparts for things like code comparison and troubleshooting. Maybe they "overthink" the code, or it's just too much processing on a large body of detailed code, but I've sometimes seen better performance from the faster models (non-quant). If for no other reason than I get rapid responses, lower cost, and faster iteration. Sometimes I just want simple things: "Add logging to these points." "Make this straightforward change." "Drop-in replacement function for myfunc() along the discussed lines."

If I need a high-level architectural conversation and exploration of tooling I'm unfamiliar with, the pro/thinking models do much better and make fewer errors. But they often make MORE errors and fail under context rot FASTER than lighter models.

Anyways, just some thoughts. You would probably really enjoy nano-gpt.com. Accountless, you store your own convos, and loads of models to choose from.
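The workflow described above (forking at an earlier message, pruning dead interim turns, editing a response in place) amounts to keeping the conversation as a tree and sending only the root-to-cursor path as context. Here is a minimal, hypothetical Python sketch of that idea; it is not nano-gpt's actual implementation, and all names (`Convo`, `fork`, `context`) are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Msg:
    role: str                      # "user" or "assistant"
    text: str
    children: list = field(default_factory=list)

class Convo:
    """A conversation kept as a tree: forking leaves old branches intact."""

    def __init__(self):
        self.root = Msg("system", "")
        self.cursor = self.root    # the leaf we are currently extending

    def say(self, role, text):
        """Append a message under the cursor and advance to it."""
        m = Msg(role, text)
        self.cursor.children.append(m)
        self.cursor = m
        return m

    def fork(self, at):
        """Jump back to an earlier message; the next say() starts a new
        branch there, and everything after it stays on the old branch."""
        self.cursor = at

    def context(self):
        """Only the root-to-cursor path goes to the model, so pruned or
        forked-off messages never pollute the prompt."""
        def path_to(node, trail):
            if node is self.cursor:
                return trail + [node]
            for child in node.children:
                found = path_to(child, trail + [node])
                if found:
                    return found
            return None
        return [m.text for m in path_to(self.root, [])[1:]]

# Example: "delete" a bad reply by forking back to the question.
c = Convo()
q = c.say("user", "why does my test deadlock?")
c.say("assistant", "wrong guess")
c.fork(q)                          # prune the wrong answer
c.say("assistant", "right answer")
print(c.context())                 # ['why does my test deadlock?', 'right answer']
```

Pruning in this model is just moving the cursor: the "deleted" branch still exists off to the side, which is what makes returning to a fork point cheap.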
stas kaufman@stask_85·
@QuentinAnthon15 This hit home. “Context rot” perfectly sums up why long chats collapse. I’ve been testing a canvas setup that lets you branch off mid-flow instead of opening new chats - so you can explore tangents but still see the full story. Makes the “don’t lose your thread” problem disappear
Quentin Anthony@QuentinAnthon15·
Along this point, there's a long tail of issues that cause an LLM to choke:

- "Context rot", where models become distracted by long + irrelevant contexts (especially from long conversations). See x.com/simonw/status/…. You need to open a new chat often. This effect is worsened if users chase dopamine, because "the model is so close to getting it right!" means you don't want to open a new chat when you feel close.
- Training data distribution. I won't pretend to know any of Claude/Gemini/ChatGPT/Grok/R1's data distributions, but some models are better at specific languages/tasks and I use them to their strengths. Unfortunately, I only know which model maps to which part of my personal workflow from asking lots of different models the same questions whose answers I roughly know. E.g. I may use Gemini-2.5 Pro for initial code understanding, o3 to help with the core implementation, and then Claude 4 Sonnet to shorten my solution and check for bugs.

Tools like Cursor often don't let me see what's going into context, so I don't use them: my mental model of what a specific model is good at, or can tolerate, breaks down. I use model APIs directly through my IDE or a local chat interface. Doesn't mean this is the "correct" way, just what works for me :)
Simon Willison@simonw

Workaccount2 on Hacker News just coined the term "context rot" to describe the thing where the quality of an LLM conversation drops as the context fills up with accumulated distractions and dead ends news.ycombinator.com/item?id=443087…
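The per-stage model mapping described a couple of posts up (one model to understand, another to implement, a third to review) can be as simple as a lookup table. A hypothetical sketch: the stage names and `pick_model` helper are invented for illustration, and the model names are just the ones mentioned in the post, not a recommendation:

```python
# Hypothetical task -> model routing, per the workflow described above.
ROUTES = {
    "understand": "gemini-2.5-pro",   # initial code understanding
    "implement":  "o3",               # core implementation help
    "review":     "claude-4-sonnet",  # shorten the solution, check for bugs
}

def pick_model(stage: str, default: str = "o3") -> str:
    """Return the model for a workflow stage, falling back to a default."""
    return ROUTES.get(stage, default)

print(pick_model("review"))   # claude-4-sonnet
```

The point is less the code than the habit: making the routing explicit forces you to test each model against tasks whose answers you roughly know, which is exactly how the mapping was built in the first place.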

stas kaufman@stask_85·
@optimistbloop @ericmitchellai Totally... tabs are the poor man’s branching 😅 We’ve been playing with it on a canvas instead: keep the main line visible, then spin off a side chat when you want to go deeper, without losing the original. Way nicer than juggling 6 ChatGPT windows.
bloop@optimistbloop·
Very useful. I see it as a step towards codex like UI, spinning off multiple chats with one view you can look over. I currently have multiple ChatGPT tabs open for work, seems like it’d be easier to just write a follow up and have it branch off another conversation line while continuing with the current one
Eric@ericmitchellai·
Powerful models are... powerful, yes. But making them more collaborative and steerable is equally important in making them useful! This update is a huge usability win and I'm curious to hear what people think of it!
OpenAI@OpenAI

You can now interrupt long-running queries and add new context without restarting or losing progress. This is especially useful for refining deep research or GPT-5 Pro queries as the model will adjust its response with your new requirements. Just hit update in the sidebar and type in any additional details or clarifications.

stas kaufman@stask_85·
@arseniys_ @emollick same. rolling is ok, but real branching wins once you explore alternatives. we’re doing that in Kavim - AI chats live on a canvas and you can fork a reply without breaking the original.
Arseniy@arseniys_·
@emollick Idk if Claude has branching but it’s actually better than a rolling context if you plan ahead a little bit. Been using this heavily in gpt
Ethan Mollick@emollick·
Claude's unwillingness to continue conversations after the context window is full is very frustrating. I am okay losing early context if I am working interactively on a project & making progress, I am not okay with suddenly being cut off and being forced to start a new chat.
stas kaufman@stask_85·
@emollick 100% this. Hard cutoffs kill momentum. We’ve been trying a canvas-style flow where you can keep the main convo but spin off a branch when it gets too long, instead of starting from zero. Feels way closer to how people actually work.
stas kaufman@stask_85·
@shakermanjonas @emollick yep, this 👆 tech-wise everyone can summarize/compact, but the interface never shows where you were and where you branched. we’re building that visual layer so you can see the main thread + side threads without losing context.
Jonas@shakermanjonas·
@emollick they have a way to fork a chat on web, they have a way to compact (in cli), but there are no good methods for pruning or compacting web chats yet. nothing technically difficult, the ux is simply not there yet
stas kaufman@stask_85·
@robleclerc Yeah, totally get this. Once you try to dig into one part of the answer, you lose the flow. We’ve been playing with a canvas that lets you branch off a thought without breaking the main thread, kind of like how real conversations wander
Rob Leclerc@robleclerc·
the problem with current llm interfaces is that if i want to clarify something in an earlier part of the response without breaking the flow, i can’t. what i want is a way to create a small side branch to ask questions about that section (and to be able to do this recursively). since tangents are a natural part of how we communicate, think and learn, why hasn’t anyone built this?
stas kaufman@stask_85·
@lakshyaag @arpit_bhayani @rogutkuba Same! linear chats make edits painful. In a side project we’re testing auto-branching on new ideas + a quick linear history so you can jump back fast. Been surprisingly effective
lakshya@lakshyaag·
@arpit_bhayani I find myself going back and editing prompts on these platforms, but 100% agreed on the hierarchical aspect. @rogutkuba you were making something along these lines right?
Arpit Bhayani@arpit_bhayani·
Gemini, Claude, and ChatGPT should have built-in support for hierarchical exploration. Our brain naturally thinks in follow-ups, but the current interface is linear. This makes it difficult to dig deeper. The worst part of the UX is when a follow-up question leads to an endless page scroll, and you have to spin your scroll wheel ten times just to find where you forked off. And no, I'm not looking for some random wrapper to solve this. Ideally, this should be baked into the core UX of the popular tools :)
stas kaufman@stask_85·
@arpit_bhayani Agree 100%. We've been testing a 'branch-by-default' canvas + a linear-history sidebar so you can zoom out or read like a story. Curious what interaction would make this feel 'obvious' to you: maybe auto-branch on edit, or a manual fork button?
Sudip S Jamwal@JamwalSudip·
Completely agree on the UX pain! I've actually built an extension that sorta solves this - it adds branching capabilities to ChatGPT conversations so we can explore multiple follow-up questions while digging deeper into a particular thing. It also gives you a pop-up that shows the original prompt, and a click on that helps you navigate to that fork.
stas kaufman@stask_85·
@ValtteriValo yeah, totally get this. the “confidence without understanding” part hits hard - that’s why I’ve been exploring more transparent AI tools, where you can see how ideas evolve instead of guessing.
Valtteri@ValtteriValo·
working with these tools is frustrating as fuck because:

- they're pretty good often enough that they get your expectations up.
- their failure modes are often incomprehensible. working with humans, you mostly know what went wrong and why something is hard. with these tools, they can fuck up completely trivial things in spectacular fashion.
- they feign understanding of the task and the nature of the problem at hand, often very convincingly.

the learning curve for these tools is irregular; it's like something you've learned in past interactions goes out the window in another one. ofc all of this is somewhat self-explanatory to anyone who understands the basics of how these tools work, but it still stings that we're supposedly close to some kind of productivity breakthrough, idk, one we've never seen before even perhaps, and still you get these bitter reminders that some fundamental problems remain that we probably haven't even gotten started understanding.
dimd00d@dimd00d

Codex lied about an API, lied about reading the documentation, lied that it shows that its right, then lied that it was actually an example linked in MSDN, then called the whole thing "yeah I misrepresented it, but its just noise". i would take "you are absolutely right" instead.

Boten Anna@bussypounder999·
I just caught my manager at work using a mouse jiggler. lolol he was afk on a miro board. u know how it shows cursor movements? i could see it jiggle every 3 mins. BAHAHAHAHAA W manager L microsoft teams. lesson learned: if u gonna afk with a mouse jiggler, dont have miro open
rohildev@rohildev·
What is Nimo? Nimo is an Intelligent Canvas that connects all your apps, agents, and workflows into one unified workspace. We created an AI orchestration layer and a Chromium browser inside the Canvas that helps people get work done faster without switching tools.

1. Save 4-6 hours per week.
2. Reduce the cognitive load of context switching.
3. Save cost with a unified AI subscription.
4. Privacy focused - all data, your memory & context are stored only on your computer and your iCloud.
5. You can create your own Dynamic Apps, reuse them, and solve your daily workflow problems.
6. It feels like a new home, eventually a place to build your own things inside Nimo.

If you're spending too much money on multiple AI tools, managing multiple subscriptions, and dealing with scattered workflows, let me know. Happy to set up Nimo and fix these things.
stas kaufman@stask_85·
Big fan of this local-first direction 👏 I’m working on Kavim: github.com/deep-elegant/k…, a privacy-first AI canvas for brainstorming - similar mindset, different focus. Great to see this space grow!
Y Combinator@ycombinator

BrowserOS (@browserOS_ai) is an open-source, privacy-first alternative to ChatGPT Atlas & Perplexity Comet. No vendor lock-in: use any LLM or search engine. AI agents run locally. No tracking you for ads or data collection. Available for Mac/Win/Linux. browseros.com Congrats on the launch, @nv_sonti, @ThatNithin!

stas kaufman@stask_85·
@aidenybai I try to build something that might help with the “organized thoughts” part — not self-organizing yet 😅 just experimenting with how ideas can branch and connect better. playing with it in github.com/deep-elegant/k…
Aiden Bai@aidenybai·
I want my thoughts to self-organize. traditionally, tools like obsidian/roam enable you to manually organize a knowledge graph. i just have 1 big doc. i should just dump stuff into a box, have it organized+saved, then search and explore when i need to
Proton@ProtonPrivacy·
As 2025 winds down, we’re continuing to build tools that let you take control of your digital life without compromising privacy. Our latest roadmap reflects your feedback and our mission: to empower you, not exploit you. Here’s a peek at what’s coming next 👇 1/4
stas kaufman@stask_85·
@IAmJacobson Bro, any chance this is the song I was humming this morning?