Scribe

240 posts

@LearnWithScribe

AI-powered side hustles to land your first $1K client in 30 days. From manual outreach to automated systems. No code needed.

Joined January 2021
420 Following · 20 Followers
Pinned Tweet
Scribe @LearnWithScribe:
Deal 🤝 Follow for follow. If we move as a team, we can all earn on Twitter. PS: I also create strong content on AI and automation.
Scribe @LearnWithScribe:
@7a7zz It was barely usable yesterday
7A7z @7a7zz:
Wow, GLM 5.1 is slow on opencode. To be fair, other models aren't this slow, so I think this is just because of heavy traffic.
Scribe @LearnWithScribe:
@opencode When is WSL support coming? I didn't find it in the documentation.
Marko Denic @denicmarko:
This is my website. Guess the stack!
Scribe @LearnWithScribe:
@0xSero So good that medium reasoning is enough for almost all tasks.
0xSero @0xSero:
GPT-5.3-Codex is still the best coding agent, no doubt about it. GPT-5.4 is better at computer use, but doesn't match the sheer autistic power Codex holds.
Scribe @LearnWithScribe:
@0xSero Can I use M2.7 for coding tasks? GLM is actually just garbage, and opencode's rate limit is too low. What was your experience?
Scribe @LearnWithScribe:
Missing the 2x Codex boost so hard.
Scribe @LearnWithScribe:
@Prince_Canuma @Kurtulmehtap Sad, because the industry is moving toward MoE rather than just dense models, but I think it can be useful for increasing the expert size.
Prince Canuma @Prince_Canuma:
@Kurtulmehtap This model has 5/30 full attention layers, so savings are modest :)
Prince Canuma @Prince_Canuma:
Guess they called it "Turbo" for a reason 👀
Model: Gemma-4-26B-A4B-it
Precision: BF16
Device: M3 Max 96GB
Scribe @LearnWithScribe:
@DailyDoseOfDS_ We need more metrics on the accuracy lost in the retrieval step.
Scribe @LearnWithScribe:
@Prince_Canuma Why is peak memory roughly the same as the baseline? RAM usage should be lower, no?
Supersocks @iamsupersocks:
@Amir_Intel You're going to tear them all apart there, but you're right. Most people don't even look at the sources or the date; it's alarming, and it annoys me.
Scribe @LearnWithScribe:
Unpopular opinion: the one-shot capability of GPT-5.3 Codex high > GPT-5.4 high.
Scribe @LearnWithScribe:
@bolaabanjo It can, but you have to be way too specific, so it's frustrating.
Bola Banjo @bolaabanjo:
Codex sucks at frontend lmao.
Scribe @LearnWithScribe:
@theo 😂 I noticed that earlier
Theo - t3.gg @theo:
Does Google actually hide all the cheaper plan options when setting up a new Google workspace? There are 3 cheaper options and I'm not allowed to see or select any of them.
Scribe @LearnWithScribe:
Qwen 3.6 is a really good model with strong design taste. Not quite at Gemini’s level, but close, and different.
Scribe @LearnWithScribe:
@thekitze It really likes to put everything in a box
xiong-hui (barry) chen @xiong_hui_chen:
Our latest model qwen 3.6-plus is ready.🚀