Supreme Leader Wiggum
@ScriptedAlchemy

31.6K posts

Infra Architect @ ByteDance. Maintainer of @webpack @rspack_dev - creator of #ModuleFederation #auADHD #synesthesia own opinions.

Redmond, WA · Joined June 2018
720 Following · 18.5K Followers

Pinned Tweet
Supreme Leader Wiggum @ScriptedAlchemy
If I were to stream… and it was mostly just me doing my day to day work, while talking to chat, would anyone watch? Setup isn’t going to be anything stellar, no real agenda - I’d pretty much just go do stuff across 5 or 6 repos in parallel & talk through what I’m thinking
20 · 0 · 31 · 11.8K
Supreme Leader Wiggum @ScriptedAlchemy
You can just make GPT Pro return a git patch. Download it and apply it.
0 · 0 · 3 · 784
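The workflow from the tweet above can be sketched in a few git commands. This is a minimal sketch, not the author's exact setup: the demo repo, the file `greeting.txt`, and the filename `model.patch` are all assumptions; the only real step is saving the model's unified diff to a file and running `git apply`.

```shell
set -e
cd "$(mktemp -d)"

# A throwaway repo standing in for your real project (assumed setup).
git init -q demo && cd demo
printf 'hello\n' > greeting.txt
git add greeting.txt
git -c user.email=a@b -c user.name=demo commit -qm init

# Pretend this unified diff is what the model returned:
cat > model.patch <<'EOF'
diff --git a/greeting.txt b/greeting.txt
index ce013625..3b18e512 100644
--- a/greeting.txt
+++ b/greeting.txt
@@ -1 +1 @@
-hello
+hello world
EOF

git apply --check model.patch   # dry run: fails if the patch won't apply cleanly
git apply model.patch           # apply it for real
cat greeting.txt                # -> hello world
```

The `--check` dry run is worth keeping in the loop: model-generated diffs often have stale context lines, and catching that before touching the tree makes retries cheap.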
Supreme Leader Wiggum retweeted
Néstor @nstlopez
I saw a lot of people saying Module Federation doesn't work with Angular. Felt like there was a lack of examples out there. So here are two: one with @vite_js 8 and another with @rspack_dev Rsbuild.
2 · 3 · 14 · 1.1K
Yagiz Nizipli @yagiznizipli
We're slowly getting there with the performance of @X web (lower means better)
[tweet media]
24 · 9 · 416 · 21.3K
Oliver @oliver_bauer
@ScriptedAlchemy @nstlopez Can I also join the discord? Currently building my second trading after a first on trend / news follower strategy
1 · 0 · 0 · 75
Supreme Leader Wiggum @ScriptedAlchemy
Going full quant and training a financial model. Stealing some ideas from TikTok's recommendation engine around real-time adaptive training of the model.
[tweet media]
5 · 1 · 24 · 2.1K
Supreme Leader Wiggum @ScriptedAlchemy
@jeffscottward Yeah I use them a lot. What would be nice is if these were “stacked” worktrees, since usually work has dependencies on other agents. So restacking and rebasing a worktree stack would be quite useful
0 · 0 · 0 · 377
jeffscottworld @jeffscottward
Is there anyone out there who uses git worktrees really aggressively, or agent orchestrators? Like creating multiple branches within a worktree off of a base repo? We are potentially cooking up something really crazy in Maestro and wondering if more than one branch per worktree makes any sense realistically? Typically it's one feature branch per worktree, right? @ScriptedAlchemy @theo @kenwheeler @elonmusk @jbrukh @Jason Please Retweet! #git #worktree #ai #ui #agent #orchestration #claude #codex
[tweet media]
8 · 1 · 5 · 3.1K
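The one-branch-per-worktree pattern being discussed can be sketched as follows. This is a minimal illustration under assumed names (`base`, `feature-a`, `feature-b`, `wt-a`, `wt-b`), not Maestro's actual layout; the relevant git behavior is that each linked worktree has exactly one checked-out branch at a time, and git refuses to check out the same branch in two worktrees simultaneously.

```shell
set -e
cd "$(mktemp -d)"

# A base repo that several agents will share (assumed setup).
git init -q base && cd base
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init

# One worktree per feature branch: each agent gets its own checkout,
# all backed by the same object store in the base repo.
git worktree add -q -b feature-a ../wt-a
git worktree add -q -b feature-b ../wt-b

# Lists the main checkout plus the two linked worktrees.
git worktree list
```

You *can* run `git switch` inside a worktree to move it to another branch, so "more than one branch per worktree" only makes sense sequentially, never concurrently; for the "stacked" dependency case mentioned upthread, rebasing `feature-b` onto `feature-a` inside `wt-b` is the usual restacking move.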
Supreme Leader Wiggum @ScriptedAlchemy
@nstlopez Check discord message. I sent you the “master plan” that contains the system end to end. From project creation to deployment
2 · 0 · 2 · 197
_damkode @derekdamko
@ScriptedAlchemy You do not have enough VRAM for any of the large open-weight models. For example, GLM 5 is 1.5 TB on disk, which will be about 1.7 TB loaded into RAM. Then you will need a large context window, so you are probably looking at close to 2 TB of RAM needed for fine-tuning.
1 · 0 · 0 · 323
Supreme Leader Wiggum @ScriptedAlchemy
I have a B200 cluster. Looking to fine-tune a model. What's the best OSS model that's near GPT 5.4 or Opus 4.7?
18 · 0 · 26 · 7.3K
Supreme Leader Wiggum @ScriptedAlchemy
@omniwired Cost. I wanna fine-tune a new tokenizer encoder/decoder. Only have enough burn for 1 or 2 runs.
1 · 0 · 1 · 254
OmniWired @omniwired
@ScriptedAlchemy Is it coding? Then GLM 5.1. But why only try one? Do Kimi and DeepSeek Pro next. Fine-tune to do what?
1 · 0 · 1 · 381
Julian Harris @julianharris
@ScriptedAlchemy Not sure how much RAM you have or whether you are ok with open weights, but DeepSeek v4 is killer.
1 · 0 · 0 · 587
Supreme Leader Wiggum @ScriptedAlchemy
@kentcdodds @MichaelThiessen @ZephyrCloudIO Token burn makes it hard to progress. But I do believe 80% of the concept can be created with Claude Code or a Codex plugin; just use an LLM to perform the dependency linking and frontmatter. Plan to rewrite the superpowers plugin and add an MCP, which should mostly recreate it.
0 · 0 · 2 · 277
Supreme Leader Wiggum retweeted
Kent C. Dodds 🏹 @kentcdodds
I figured out how I'm going to teach Product Engineering and it's going to use @ZephyrCloudIO to do it. So jazzed about this! Stay tuned for the first cohort announcement!
8 · 4 · 52 · 6.8K
Supreme Leader Wiggum @ScriptedAlchemy
@yolaplace GLM is on par with GPT 5.2 - perfectly acceptable. Especially with 5.4 as reviewer and team lead over a GLM subagent
0 · 0 · 2 · 88