Danny Tuppeny
@DanTup
4.6K posts
Software Developer, Works on @DartCode
Cheshire, England, UK · Joined July 2009
189 Following · 1.7K Followers
Danny Tuppeny @DanTup:
@luke_pighetti I don't know the details, but maybe allowing the agent to trigger it could be beneficial? (it wouldn't happen unexpectedly while it's making changes, and it could trigger just once after many edits?) Guessing, though; I haven't tried using AI for Flutter apps yet!
Luke @luke_pighetti:
@DanTup only a google engineer could overthink just adding a global dart environment setting that enables reload on save
Luke @luke_pighetti:
refusal to trigger flutter hot reload on file save looks pretty dumb now that hot reload is totally broken for agentic coding workflows
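One way to approximate reload-on-save outside the toolchain is a small watcher: poll the Dart sources and, on change, send the interactive hot-reload key (`r`) to a running `flutter run` process's stdin. The polling helpers below are a minimal sketch; the `flutter run` wiring at the bottom is illustrative, not a supported Flutter API.

```python
import time
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each .dart file under root to its last-modified time."""
    return {p: p.stat().st_mtime for p in root.rglob("*.dart")}

def changed(before: dict, after: dict) -> list:
    """Files that are new or whose mtime differs since the last snapshot."""
    return [p for p, m in after.items() if before.get(p) != m]

def watch(root: Path, on_change, interval: float = 0.5):
    """Poll `root`; call on_change(paths) whenever .dart files change."""
    state = snapshot(root)
    while True:
        time.sleep(interval)
        now = snapshot(root)
        diff = changed(state, now)
        if diff:
            on_change(diff)
        state = now

# Illustrative wiring (not run here): `flutter run` accepts "r" on
# stdin as the interactive hot-reload command.
#   proc = subprocess.Popen(["flutter", "run"], stdin=subprocess.PIPE, text=True)
#   watch(Path("lib"), lambda _: (proc.stdin.write("r\n"), proc.stdin.flush()))
```

Because the agent edits files in bursts, a debounce (trigger once after the burst settles) would fit the "just once after many edits" idea above.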
Luke @luke_pighetti:
@DanTup Claude CLI
Danny Tuppeny @DanTup:
@anaisbetts Executing commands falls back to non-sandbox automatically, but browser use does not, so you need to explicitly disable the sandbox for that to work
Danny Tuppeny @DanTup:
@anaisbetts It wasn't; that gave an error about it being invalid. With v1 it just complained about no compatible models. But see the follow-up: there is an option for local LLMs in settings (it's just not in the onboarding). All up and running now :)
ani @anaisbetts:
Has anyone successfully set up an OpenClaw-alike with a local LLM model? Even with a 27B model it seems too dumb, and it's incredibly slow too (~7 tok/sec on Strix Halo)
Danny Tuppeny @DanTup:
@anaisbetts FWIW, shell commands worked fine without me configuring anything (of course, they're running in the main container; I'd rather that than have any part of this own the host!)
Danny Tuppeny @DanTup:
@anaisbetts "If you cannot mount the Docker socket, Moltis will run in “no sandbox” mode" sounds like it just always does this though. I filed an issue that the docs are confusing anyway, and I'll just test it out.
Danny Tuppeny @DanTup:
@anaisbetts FWIW, with Qwen3-Coder-Next it says ~28t/s with all defaults, just a few prompts asking for some code and poems
[image]
Danny Tuppeny @DanTup:
@anaisbetts Turns out there is this option once you've gone through onboarding (by skipping the LLM) and then go into settings. Not sure why this isn't in onboarding 🙃
[image]
Danny Tuppeny @DanTup:
@anaisbetts hmm, wait - these seem to contradict each other? One says they'll fail, the other says they'll run. I guess I'll have to try it out...
[image]
Danny Tuppeny @DanTup:
@anaisbetts I'd seen the name but not looked at it, will have a look. Simpler = good IMO, as long as I can extend it if needed. It'd just be nice to have a base to start from and not have to implement everything myself (I've started doing so a few times, and keep aborting 😄)
Danny Tuppeny @DanTup:
@anaisbetts This looked good, but fails for me because of this. Giving it the Docker socket is basically giving it root access to the host? github.com/moltis-org/mol… (#docker-socket-sandbox-execution)
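The socket-equals-root concern can be shown concretely. An illustrative shell sketch (it only performs the escape if a Docker socket actually exists; a process that can talk to the daemon can start a container that bind-mounts the host's root filesystem and chroots into it):

```shell
# Why mounting /var/run/docker.sock is effectively root on the host:
# the daemon runs as root, so anyone who can ask it to start containers
# can ask for one with the host's filesystem mounted inside.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  # Runs `id` as root against the *host* filesystem, not the container's:
  docker run --rm -v /:/host alpine chroot /host id
else
  echo "no docker socket at $SOCK - nothing to escalate through"
fi
```

This is why "mount the Docker socket into the sandbox" is not a meaningful sandbox boundary: the container can trade the socket for full host access.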
ani @anaisbetts:
@DanTup So far Moltis has been relatively sane though a bit bare bones
Luke @luke_pighetti:
@DanTup i haven’t changed my CI strat so yes tests and linters. you?
Luke @luke_pighetti:
CI is too slow in the age of agentic coding
Danny Tuppeny @DanTup:
@mraleph "If helpful, I can try to provide a minimal repro" Like the one already posted in the original report it was replying to? 🤔
Slava Egorov @mraleph:
you ain't fooling me mr. I-use-slop-generator-to-make-what-I-think-are-useful-and-thoughtful-comments
[image]
Danny Tuppeny @DanTup:
@anaisbetts Yeah, I was surprised/disappointed at this response. If your bot username is guessable (or searchable?) or you post a screenshot, anyone can extract all your info, use your LLM, and execute terminal commands on your box? (and I suspect not everyone runs these in containers 😞)
ani @anaisbetts:
@DanTup "Regarding access control: you're not missing anything. CoPaw is currently designed primarily as a personal assistant, so we don't yet have built-in user permission features like allowlists to restrict who can DM the bot." That is *nuts*. Every bot has this
Danny Tuppeny @DanTup:
@anaisbetts I want something I can run entirely in a container (which rules out some that want to launch their own containers for tasks), in a language I'm familiar with, and that doesn't have huge security issues (CoPaw seemed decent until I tried telegram... → github.com/agentscope-ai/… (#issuecomment-3987841004))
Danny Tuppeny @DanTup:
@anaisbetts Yeah, I haven't tried OpenClaw because of the boasting about unreviewed AI code, but I do like the idea of delegating some tedious life tasks to something, so I'm searching for something similar but smaller/simpler (that works well with a local LLM).
Danny Tuppeny @DanTup:
@anaisbetts For Qwen3.5? Haven't tried it yet and hadn't thought about that. I wish there were more benchmarks for the quants that fit on 128GB machines (both speed and intelligence - it doesn't seem reliable to compare the unquantized scores)
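The "does this quant fit?" part is mostly arithmetic: weight memory is roughly parameter count × bits per weight ÷ 8, before KV cache, activations, and runtime overhead. A back-of-envelope helper (the 120B model and bits-per-weight figures below are illustrative examples, not Qwen3.5's actual configurations):

```python
def quant_weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory in GiB: params * bits / 8.
    Ignores KV cache, activations, and runtime overhead, so treat
    the result as a lower bound on what the machine needs."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# e.g. a hypothetical 120B model:
#   at ~4.5 bits/weight (roughly a 4-bit quant with overhead) -> ~63 GiB,
#   comfortable on a 128 GB machine;
#   at 8 bits/weight -> ~112 GiB, which leaves little room for KV cache.
```

This is also why unquantized benchmark scores don't transfer: the quant that actually fits in 128 GB is a different model, and its speed/quality needs measuring separately.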
ani @anaisbetts:
@DanTup For whatever reason, none of the documented ways to disable it with llama.cpp seem to work