Pinned Tweet
Supreme Leader Wiggum
31.6K posts

Supreme Leader Wiggum
@ScriptedAlchemy
Infra Architect @ ByteDance. Maintainer of @webpack @rspack_dev - creator of #ModuleFederation #auADHD #synesthesia own opinions.
Redmond, WA Joined June 2018
720 Following 18.5K Followers
Supreme Leader Wiggum retweeted

I saw a lot of people saying Module Federation doesn't work with Angular. Felt like there was a lack of examples out there.
So here's two: one with @vite_js 8 and another with @rspack_dev Rsbuild.

@yagiznizipli @X I’ll help you guys move off webpack


@oliver_bauer @nstlopez No discord. Just DMs in discord

@ScriptedAlchemy @nstlopez Can I also join the discord? Currently building my second trading strategy, after a first based on a trend / news follower strategy

@ScriptedAlchemy @nstlopez Wait you have your own discord channel?

@jeffscottward Yeah I use them a lot. What would be nice is if this were “stacked” worktrees since usually work has dependencies on other agents. So restacking and rebasing a worktree stack would be quite useful

Is there anyone out there who uses git worktrees really aggressively, or agent orchestrators?
Like creating multiple branches within a worktree off of a base repo?
————
We are potentially cooking up something really crazy in Maestro and wondering if more than one branch per worktree makes any sense realistically?
Typically it's one feature branch per worktree, right?
@ScriptedAlchemy
@theo
@kenwheeler
@elonmusk
@jbrukh
@Jason
Please Retweet!
#git #worktree #ai #ui #agent #orchestration #claude #codex
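The workflow being asked about (one worktree per feature branch, with branches stacked on each other and restacked via rebase) can be sketched with plain git commands. All names here — the demo repo, `feat-a`, `feat-b` — are illustrative, not from Maestro:

```shell
#!/bin/sh
# Sketch: one worktree per feature branch, with feat-b stacked on feat-a.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q base && cd base
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "root"

# The typical setup: one branch per worktree.
git worktree add -b feat-a ../wt-a          # worktree wt-a on new branch feat-a
git worktree add -b feat-b ../wt-b feat-a   # feat-b starts at feat-a: a "stack"

# "Restacking": after feat-a moves, rebase feat-b onto it from its worktree.
(cd ../wt-a && git commit -q --allow-empty -m "feat-a work")
(cd ../wt-b && git rebase feat-a)

git worktree list
```

Note that git itself checks out exactly one branch per worktree, and refuses to check the same branch out in two worktrees at once — so "multiple branches per worktree" in practice means switching branches inside a worktree, while stacking is expressed through branch ancestry as above.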


@nstlopez Check discord message. I sent you the “master plan” that contains the system end to end. From project creation to deployment

@derekdamko Can likely get another 1tb vram if necessary.

@ScriptedAlchemy You do not have enough VRAM for any of the large open-weight models. For example, GLM 5 is 1.5 TB on disk, which will be ~1.7 TB loaded into RAM. Then you will need a large context window, so you are probably looking at close to 2 TB of RAM needed for fine-tuning.
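The arithmetic behind that estimate can be sketched as a back-of-envelope calculator. The parameter count, bytes per parameter, overhead, and KV-cache figures below are illustrative assumptions, not published GLM 5 specs:

```python
# Back-of-envelope memory estimate for hosting a large open-weight model.
# All inputs are assumptions for illustration, not real model specs.

def model_memory_tb(params_b: float, bytes_per_param: float = 2.0,
                    runtime_overhead: float = 0.15,
                    kv_cache_tb: float = 0.3) -> dict:
    """Rough footprint in TB for a model with `params_b` billion parameters."""
    disk = params_b * 1e9 * bytes_per_param / 1e12   # raw weights on disk
    loaded = disk * (1 + runtime_overhead)           # weights + runtime buffers
    total = loaded + kv_cache_tb                     # plus long-context KV cache
    return {"disk_tb": round(disk, 2),
            "loaded_tb": round(loaded, 2),
            "total_tb": round(total, 2)}

# A hypothetical ~750B-parameter model at fp16 (2 bytes/param) lands near
# the numbers in the thread: ~1.5 TB on disk, ~1.7 TB loaded, ~2 TB total.
print(model_memory_tb(750))
```

The point of the sketch is just that loaded size runs meaningfully above disk size, and a long-context KV cache pushes the total higher still.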

@omniwired Cost. I wanna fine tune a new tokenizer encoder/decoder. Only have enough burn for 1 or 2 runs.

@ScriptedAlchemy Is it coding? Then GLM 5.1.
But why only try one? Do Kimi and DeepSeek Pro next.
Fine-tune to do what?

@HOARK_ Turboquant for inference tho, not training. But yes.

@ptremblay Want a big fp16 model. Like GLM or Kimi

@ScriptedAlchemy what about a 1.58-bit distill of Qwen 3.6 27B? (BitNet Distillation) arxiv.org/abs/2510.13998

@julianharris 1tb of ram, can get more if needed tho.

@ScriptedAlchemy Not sure how much RAM you have or whether you are ok with open weights but Deepseekv4 is killer.

@kentcdodds @MichaelThiessen @ZephyrCloudIO Token burn makes it hard to progress.
But I do believe 80% of the concept can be created with a Claude Code or Codex plugin; just use an LLM to perform the dependency linking and frontmatter. Plan to rewrite the superpowers plugin and add an MCP, which should mostly recreate it.

@MichaelThiessen @ZephyrCloudIO @ScriptedAlchemy Nope, something else. Still curious where that tool is going Zack!
Supreme Leader Wiggum retweeted

I figured out how I'm going to teach Product Engineering and it's going to use @ZephyrCloudIO to do it. So jazzed about this! Stay tuned for the first cohort announcement!

@yolaplace GLM on par with GPT 5.2 - perfectly acceptable. Especially with 5.4 as reviewer and team lead over GLM subagent

Chinese models to the rescue! GLM has been pretty good
Milan Jovanović @mjovanovictech
Wild times are coming
Supreme Leader Wiggum retweeted
