Carlos
@SeeLos
4.7K posts
Washington, DC · Joined May 2011
587 Following · 249 Followers
Carlos@SeeLos·
@thdxr I wish m2.7 was as fast as Kimi. Hope to see that happen once (if) they open-source it
dax@thdxr·
our team's model usage breakdown for the past 7 days. gpt has really taken over
Carlos@SeeLos·
@zeeg I haven’t tried it yet, but it looks like the Codex app supports WSL? Saw it as a toggle in the settings
David Cramer@zeeg·
can someone besides Microsoft please make a coding harness (UI) that works with WSL? I realize I'm a unicorn over here running Windows, but you too one day will get fed up with iOS as a desktop OS.
Sharjil@00xSharjil·
@MarceloRet41877 @icanvardar Yeah but the CV and cover letter are AI generated. Imagine being on the other side of it. Everyone is doing the same exact thing
Carlos@SeeLos·
@thdxr Zen serves FP4 models confirmed
dax@thdxr·
people learn the following two terms and use them to sound smart:
1. prompt caching
2. quantization
breaking prompt cache is ok. you can't guess what will happen; you have to look at data at scale from real users. the highest quality providers for a couple of models serve it at FP4. there's way more to quality than quantization.
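dax's FP4 point can be made concrete with a toy sketch. This is my own illustration (the function and values are invented, not any provider's serving code), and it uses simple uniform 16-level rounding rather than a real FP4 sign/exponent/mantissa float format; his argument is that this weight-format choice is only one of many factors in serving quality.

```python
def quantize_4bit(weights):
    """Round each weight to the nearest of 16 evenly spaced levels (4 bits)."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / 15  # 16 levels span 15 intervals
    return [round((w - lo) / step) * step + lo for w in weights]

w = [0.03, -0.41, 0.27, 0.88, -0.95]
wq = quantize_4bit(w)
# each quantized weight lands within half a step of the original
assert all(abs(a - b) <= ((max(w) - min(w)) / 15) / 2 + 1e-12
           for a, b in zip(w, wq))
```

The quantization error per weight is bounded by half the step size, so the damage depends heavily on the dynamic range of the tensor being quantized, which is why blanket claims about "FP4 = bad" don't hold without measurement.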
Carlos@SeeLos·
@billyjhowell What exactly are you using Manus for? I’m curious
Billy Howell@billyjhowell·
how do people with multiple gmails use AI? I have Claude + Manus accounts for 3 different Gmails. Which is great because it keeps work streams separate. But isn't great because I have to log in + out of desktop and phone apps constantly. And seems to make Claude Cowork confused. Any ideas? I guess I could go multi-device...
Carlos@SeeLos·
@thdxr The agents always recommend against it 😔
Replit ⠕@Replit·
You can now fully customize the signup experience for your Replit Apps!
- Customize layout, colors, fonts and more
- Your app users don't need a Replit account
- Separate dev & prod environments for auth for better security
- No setup required; experience powered by @clerk
Carlos@SeeLos·
Yea I feel that, my sidebar would definitely fill up quick. I use a lot of subagents, so a right-click dismiss action would be nice once the state is complete, if it’s persisted. Could also have it as an option: by default have it go away once the state is complete, with a toggle in the settings for “persist subagent in sidebar”.
David Hill@iamdavidhill·
@SeeLos @opencode yeah, exploring that too to find the right balance. we have to account for a few different factors: many subagents, all in different states, and they're ephemeral
David Hill@iamdavidhill·
exploring the subagent experience
⦿ improved session item
⦿ clear breadcrumb navigation
⦿ nested sidebar navigation
⦿ disabled input with action
⦿ clearer session animation
Jake@JustJake·
@hausdorff_space Even if you tell them, explicitly, that it had nothing to do with that thing. The internet is an arena; everybody loves to throw tomatoes. That's fine, we can 100% do better here. This is indeed our fault.
Alex Clemmer 🔥🔥🔥😅🔥🔥🔥
I guess this is life now. Whatever you're doing (writing a blog post, causing an incident, etc.), a bunch of guys will arrive to say that you shouldn't have used AI for that thing. Even if there is no evidence you used AI for that thing. Even if you DIDN'T use AI for that thing.
Jake@JustJake

Today we had an issue affecting ~3000 users, where their authenticated content may have been served to unauthenticated users. Below is our writeup on impact, resolution, and prevention. We're deeply sorry. This is unacceptable and we will do better. blog.railway.com/p/incident-rep…

muzz khan@muzzdotdev·
@opencode data retention on free models as well? if so, how lol
Anton P. 👽@antonpme·
@koltregaskes Tested the new GLM-5.1 in Claude Code (CLI), and it's now quite close to Opus 4.6, honestly. I envision a future where Western labs offer a very capable but expensive AI, while Chinese labs provide some sort of "AI to the people."
Kol Tregaskes@koltregaskes·
First Codex, now Gemini today - nearly a week’s wait. But I’m confused, since I actually have an Ultra plan. It could be a coincidence, but I’ve also hit a limit on AI Studio at the same time. The labs have dramatically cut back, though. I was getting a LOT more messages from each of them before this week. Reality check.
Carlos@SeeLos·
@thdxr If I use Go and run out of inference usage does it auto-fallback to Zen?
dax@thdxr·
we see conspiracy theories claiming models on OpenCode Go ($10 plan) are served differently. we're using the exact same providers you use when you go direct. all providers are constantly tweaking things and sometimes there are bugs, but it's pretty minor.
Carlos@SeeLos·
@NG91030990 It feels like he threw on purpose
NG@NG91030990·
ZXX
Carlos@SeeLos·
AGI is here
Carlos@SeeLos·
@JustJake Stripe needs a new name for Stripe projects
Carlos@SeeLos·
So Codex plugins are just skill and MCP bundles?
BLCNYY@BLCNYY·
🚨 NEWS: It looks like OpenAI is getting ready to introduce the $100/month Pro plan for ChatGPT. The splash screen for ChatGPT Pro ($200/month) now says “20x usage” instead of unlimited messages.
mert can demir@validatedev

rip chatgpt pro unlimited

Carlos@SeeLos·
@Presidentlin how's the tps? I'm debating trying it with the z.ai plan or just waiting for it to come to Zen
Lincoln 🇿🇦@Presidentlin·
Model is up on OpenCode btw. I think this is going to be my lineup:
Opus: GLM 5.1
Sonnet: GLM 4.7
Haiku: GLM 4.7-FlashX
The GLM 5 series consumes quota at a rate of 3× during peak hours and 2× during off-peak hours. They have a special: "As a limited-time benefit, GLM-5.1 and GLM-5-Turbo will count as 1× during off-peak hours until the end of April. Peak hours are from 14:00 to 18:00 (UTC+8) daily." So for me, I should not be using them for most of the morning after 8 am. Which is fine; my order will be: Google, OAI, Zai. OAI is mostly for big-brain tasks or attention to detail. Google with 3.1 Flash Lite and 3.1 Flash are for spamming. Zai will be for when Google is in cooldown. This is me being spoiled, really; my time is best spent marketing my stuff, then churning code.
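The quota rates Lincoln quotes reduce to a small lookup. A minimal sketch, assuming the multipliers as stated in the tweet (3× during peak hours 14:00–18:00 UTC+8, 2× off-peak, promotional 1× off-peak for GLM-5.1 and GLM-5-Turbo); the function name and API here are invented for illustration, not z.ai's actual billing logic:

```python
from datetime import time

PEAK_START, PEAK_END = time(14, 0), time(18, 0)  # UTC+8, per the tweet
PROMO_MODELS = {"GLM-5.1", "GLM-5-Turbo"}  # limited-time 1x off-peak

def quota_multiplier(t, model, promo_active=True):
    """Quota multiplier for a GLM-5-series request at local (UTC+8) time t."""
    if PEAK_START <= t < PEAK_END:
        return 3  # peak hours always cost 3x
    if promo_active and model in PROMO_MODELS:
        return 1  # promotional off-peak rate
    return 2  # normal off-peak rate

# e.g. a GLM-5.1 call at 09:00, off-peak during the promo, costs 1x
assert quota_multiplier(time(9, 0), "GLM-5.1") == 1
```

Under these rates, a morning GLM-5.1 session during the promo stretches quota three times further than the same session at 15:00, which matches his plan to avoid the GLM-5 series during peak hours.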
Carlos@SeeLos·
@jessethanley GPT 5.4 was lobotomized this week. It outputs strictly in bullet points and tries so hard to conserve tokens now.
˗ˏˋ Jesse Hanley ˎˊ˗@jessethanley·
Codex 5.3 high fast is the best model for Ruby and React by a mile rn