Supreme Leader Wiggum
@ScriptedAlchemy
31.6K posts

Infra Architect @ ByteDance. Maintainer of @webpack @rspack_dev - creator of #ModuleFederation #auADHD #synesthesia own opinions.

Redmond, WA · Joined June 2018
720 Following · 18.5K Followers

Pinned Tweet
Supreme Leader Wiggum @ScriptedAlchemy
If I were to stream… and it was mostly just me doing my day to day work, while talking to chat, would anyone watch? Setup isn’t going to be anything stellar, no real agenda - I’d pretty much just go do stuff across 5 or 6 repos in parallel & talk through what I’m thinking
20 · 0 · 31 · 11.9K

Supreme Leader Wiggum @ScriptedAlchemy
No. It converts posts and content into numerical representations for various algorithms, which in turn are used to train model weights. Like novelty, expected decay, industry. I use embedders for creating clusters and vectors on the actual original text, but that’s not using the stuff the LLM does.
0 · 0 · 0 · 12

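The clustering side of the pipeline described above can be sketched in a few lines. A minimal, dependency-free sketch - the toy 3-d vectors, the 0.9 cosine threshold, and the greedy single-pass strategy are all illustrative assumptions, not the actual system:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster(vectors, threshold=0.9):
    """Greedy single-pass clustering: each vector joins the first
    cluster whose seed vector it is similar enough to, otherwise it
    starts a new cluster. Returns lists of member indices."""
    clusters = []  # list of (seed_vector, member_indices)
    for i, v in enumerate(vectors):
        for seed, members in clusters:
            if cosine(v, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((v, [i]))
    return [members for _, members in clusters]

# Toy 3-d vectors standing in for real post embeddings:
vecs = [(1.0, 0.0, 0.0), (0.99, 0.1, 0.0), (0.0, 1.0, 0.0)]
print(cluster(vecs))  # → [[0, 1], [2]]
```

Real post embeddings would be high-dimensional vectors from an embedder; the greedy pass just shows how a similarity threshold turns vectors into clusters.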
Supreme Leader Wiggum @ScriptedAlchemy
Reading 5000 news articles is not actually that expensive. Hell, I had single threads that totaled 300 million tokens this afternoon alone. I need to ingest a total of 24,000 events (Bluesky posts and news feeds) - the LLM converts this into a JSON object with various fields.
[image attached]
3 · 0 · 8 · 745

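The ingest step described above - an LLM converting each event into a JSON object with various fields - implies a parse-and-validate stage on the way in. A rough sketch; the field names `novelty`, `expected_decay_hours`, and `industry` are hypothetical, loosely based on the signals named earlier in this thread, not a confirmed schema:

```python
import json

# Hypothetical schema: field names are guesses based on the signals
# mentioned in the thread (novelty, expected decay, industry),
# not the actual format.
REQUIRED = {"novelty": float, "expected_decay_hours": float, "industry": str}

def parse_event(raw: str) -> dict:
    """Parse one LLM response and sanity-check the expected fields."""
    obj = json.loads(raw)
    for field, typ in REQUIRED.items():
        if not isinstance(obj.get(field), typ):
            raise ValueError(f"bad or missing field: {field!r}")
    return obj

sample = '{"novelty": 0.82, "expected_decay_hours": 36.0, "industry": "semiconductors"}'
event = parse_event(sample)
print(event["industry"])  # → semiconductors
```

Validating the model's JSON before it hits downstream training code is cheap insurance at 24,000 events.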
kitze @thekitze
yo, i'm actually worried. codex limits are genuinely insane so it's sus af .. i feel this is an intentional move for a honeymoon period until we get over the claude → codex migration and then we get rugpulled hard
255 · 36 · 2.3K · 493.7K

Supreme Leader Wiggum @ScriptedAlchemy
lol v3 of my stock trading system learned of inverse Cramer on its own
[image attached]
1 · 0 · 21 · 1.3K

Supreme Leader Wiggum reposted
Meteor.js @meteorjs
So, @rspack_dev 2.0 was launched a few weeks ago and Meteor is on the official ecosystem list 🎉 The big win for large Meteor apps: persistent cache now drops memory usage by 20%+, and SWC minimizer cache hits make builds ~50% faster. Less RAM, faster builds. We'll take it 🤝
[image attached]
1 · 10 · 53 · 4.2K

Supreme Leader Wiggum @ScriptedAlchemy
You can just make GPT Pro return a git patch. Download it and apply it.
1 · 0 · 10 · 2.3K

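The patch workflow in the post above boils down to two git commands. A self-contained demo - the repo, file, and patch contents are invented to make it runnable; in practice the patch body would be whatever the model returned:

```shell
set -e
# Hypothetical demo repo; in practice you'd be in your real checkout.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
printf 'hello\n' > greet.txt
git add greet.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm init

# Stand-in for the patch the model returned (unified diff format):
cat > fix.patch <<'EOF'
diff --git a/greet.txt b/greet.txt
--- a/greet.txt
+++ b/greet.txt
@@ -1 +1 @@
-hello
+hello, world
EOF

git apply --check fix.patch   # dry run: fails if the patch won't apply cleanly
git apply fix.patch           # actually modify the working tree
cat greet.txt                 # now reads: hello, world
```

The `--check` dry run is the useful habit here: a model-generated diff that drifted from your actual file contents gets rejected before touching anything.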
Supreme Leader Wiggum reposted
Néstor @nstlopez
I saw a lot of people saying Module Federation doesn't work with Angular. Felt like there was a lack of examples out there. So here are two: one with @vite_js 8 and another with @rspack_dev Rsbuild.
2 · 4 · 23 · 1.9K

Yagiz Nizipli @yagiznizipli
We're slowly getting there with the performance of @X web (lower means better)
[image attached]
25 · 9 · 419 · 21.5K

Oliver @oliver_bauer
@ScriptedAlchemy @nstlopez Can I also join the Discord? Currently building my second trading system, after a first based on a trend / news-follower strategy.
1 · 0 · 0 · 79

Supreme Leader Wiggum @ScriptedAlchemy
Going full quant and training a financial model. Stealing some ideas from TikTok's recommendation engine around realtime adaptive training of the model.
[image attached]
4 · 1 · 24 · 2.2K

Supreme Leader Wiggum @ScriptedAlchemy
@jeffscottward Yeah I use them a lot. What would be nice is if these were “stacked” worktrees, since usually work has dependencies on other agents. So restacking and rebasing a worktree stack would be quite useful.
0 · 0 · 0 · 379

jeffscottworld @jeffscottward
Is there anyone out there who uses git worktrees really aggressively, or agent orchestrators? Like creating multiple branches within a worktree off of a base repo?
We are potentially cooking up something really crazy in Maestro and wondering if more than one branch per worktree makes any sense realistically? Typically it’s one feature branch per worktree, right? @ScriptedAlchemy @theo @kenwheeler @elonmusk @jbrukh @Jason Please Retweet! #git #worktree #ai #ui #agent #orchestration #claude #codex
[image attached]
8 · 1 · 5 · 3.2K

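The "one feature branch per worktree" pattern being discussed looks like this in practice - the repo and branch names are made up for the demo:

```shell
set -e
# Hypothetical base repo for the demo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q base
cd base
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

# One branch per worktree: each agent gets its own checkout and branch,
# all sharing a single object store.
git worktree add -q -b feature-a ../wt-a
git worktree add -q -b feature-b ../wt-b
git worktree list   # base repo plus the two feature worktrees
```

Git enforces one checked-out branch per worktree (the same branch cannot be checked out in two worktrees at once), which is why the one-branch-per-worktree convention is the default answer.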
Supreme Leader Wiggum @ScriptedAlchemy
@nstlopez Check the Discord message. I sent you the “master plan” that contains the system end to end, from project creation to deployment.
2 · 0 · 2 · 207

_damkode @derekdamko
@ScriptedAlchemy You do not have enough VRAM for any of the large open-weight models. For example, GLM 5 is 1.5 TB on disk, which will be ~1.7 TB loaded into RAM. Then you will need a large context window, so you are probably looking at close to 2 TB of RAM needed for fine-tuning.
1 · 0 · 0 · 326

Supreme Leader Wiggum @ScriptedAlchemy
I have a B200 cluster. Looking to fine-tune a model. What’s the best OSS model that’s near GPT 5.4 or Opus 4.7?
18 · 0 · 26 · 7.3K

Supreme Leader Wiggum @ScriptedAlchemy
@omniwired Cost. I wanna fine-tune a new tokenizer encoder/decoder. Only have enough burn for 1 or 2 runs.
1 · 0 · 1 · 257

OmniWired @omniwired
@ScriptedAlchemy Is it coding? Then GLM 5.1. But why only try one? Do Kimi and deepseek pro next. Fine-tune to do what?
1 · 0 · 1 · 385

Julian Harris @julianharris
@ScriptedAlchemy Not sure how much RAM you have or whether you are ok with open weights but Deepseekv4 is killer.
1 · 0 · 0 · 591