Oscar Luis Jiménez

1.2K posts

@oljimenez

nickname: nai 🐱 Person who loves to solve complex things in a simple way :)

my bed · Joined March 2022
337 Following · 46 Followers
Dillon Mulroy
Dillon Mulroy@dillon_mulroy·
hope everybody had as good of a monday as i did
Josef Bender
Josef Bender@josefbender_·
Here's an API proposal: 4 new utility functions to turn server functions into queries and mutations, each with a hook and an 'options' variant. This will make working with @tan_stack Start and Query way easier and will get rid of a lot of boilerplate code.
Josef Bender@josefbender_

We need a new @tan_stack utility package to fill the gap between Start and Query. It could map a server function to either query or mutation options. For queries, it could also use the function name + passed parameters as a query key. And it would also wrap the functions in useServerFunction to handle redirects automatically. What do you guys think?
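A minimal sketch of what the proposed mapping could look like. All names here are made up for illustration and are not a published @tan_stack API:

```javascript
// Hypothetical helper in the spirit of the proposal: map a server function
// to TanStack Query-style query options, deriving the query key from the
// function's name plus the passed parameters.
function serverFnToQueryOptions(serverFn, ...params) {
  return {
    // e.g. ["getUser", 42] — function name + params as the query key
    queryKey: [serverFn.name, ...params],
    queryFn: () => serverFn(...params),
  };
}

// A fake server function standing in for a real TanStack Start one:
async function getUser(id) {
  return { id, name: "Ada" };
}

const options = serverFnToQueryOptions(getUser, 42);
```

The appeal of deriving the key from the name + params is that cache identity stays consistent everywhere the same server function is queried, with no hand-written key strings.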

David K 🎹
David K 🎹@DavidKPiano·
TypeScript devs will say stuff like "have you ever seen more beautiful code" and it literally looks like this
nuqs
nuqs@nuqs47ng·
Tomorrow on stream I'll be unveiling a new nuqs API. Your URL state is about to become much cleaner, type-safe, and easier to scale. 10am CET.
Oscar Luis Jiménez
Oscar Luis Jiménez@oljimenez·
@wesbos @Grif_gg You need to initialize it in another Git worktree. LLMs should create multiple worktree environments, as many as needed. #git-worktrees github.com/vercel-labs/po…
Wes Bos
Wes Bos@wesbos·
@Grif_gg how does this solve the problem of an LLM running the start/dev script multiple times? If I run this twice it gives me a similar error
Wes Bos
Wes Bos@wesbos·
Port 3000 is in use, trying another one...
Port 3001 is in use, trying another one...
Port 3002 is in use, trying another one...
These LLMs love to run multiple instances of the same app. Made a Vite plugin `killer-instincts` that will either return the process ID of what is running on your strictPort, or kill it automatically.
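The "trying another one" fallback that Vite does by default can be sketched as a pure function. This is an illustration of the behavior only, not the actual Vite or `killer-instincts` source:

```javascript
// Sketch of dev-server port fallback: given a starting port and a set of
// ports already in use, return the first free one (what Vite does when
// strictPort is off). With strictPort on, a busy port is a hard error —
// the case the plugin described above is meant to handle.
function resolvePort(start, busyPorts, { strictPort = false } = {}) {
  if (!busyPorts.has(start)) return start;
  if (strictPort) {
    throw new Error(`Port ${start} is in use (strictPort)`);
  }
  let port = start;
  // "Port N is in use, trying another one..."
  while (busyPorts.has(port)) port += 1;
  return port;
}
```

For example, `resolvePort(3000, new Set([3000, 3001, 3002]))` walks up to 3003, while the same call with `strictPort: true` throws immediately.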
Ihor Vorotnov 🇺🇦
Ihor Vorotnov 🇺🇦@ihorvorotnov·
So I was looking for a Chrome extension or a library to parse RSC payloads in Next.js and make sense of them: chunks, contents, relationships, boundaries, sizes, etc.
1. There are no helpful tools
2. The React Flight protocol isn't documented
Building a nice extension now.
Anthony Shew
Anthony Shew@anthonysheww·
@oljimenez Last Friday, I saw Turborepo take longer on graph construction than it took to actually run the tasks and I said "oh heeeeeell no". Now we're 90% faster. 😄 (And still going!)
Rhys
Rhys@RhysSullivan·
"we vibe coded our entire app and you can't even tell"
David K 🎹
David K 🎹@DavidKPiano·
Don't worry, I berated it (Codex + Cursor btw)
David K 🎹
David K 🎹@DavidKPiano·
I don't care about how well a coding model performs in benchmarks I just want it to stop doing this
render - chromium/acc
render - chromium/acc@infinterenders·
You have a page with two slow components: UserProfile (takes 2s) and UserPosts (takes 4s). You wrap both in separate boundaries to stream them. The conflict: UserPosts needs the userId from UserProfile. If you nest these components, you create a sequential waterfall. If you un-nest them, how do you share the data without fetching the user twice? More importantly, how does Partial Prerendering (PPR) change your strategy for keeping the static shell fast while these two dynamic parts load?
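One common answer to the "fetching the user twice" part is to dedupe the request behind a shared promise, so both un-nested sibling boundaries consume the same in-flight fetch (in React Server Components, React's `cache()` plays this role per request). A framework-free sketch of the idea:

```javascript
// Framework-free sketch of request deduplication: the first caller kicks
// off the fetch, later callers with the same key reuse the same promise,
// so sibling components can stay un-nested without a second request.
function createDedupedFetcher(fetcher) {
  const inFlight = new Map();
  return function dedupedFetch(key) {
    if (!inFlight.has(key)) {
      inFlight.set(key, fetcher(key));
    }
    return inFlight.get(key); // same promise for repeat callers
  };
}
```

With this, UserProfile and UserPosts can each call the deduped fetcher for the same userId: whichever renders first starts the fetch, the other awaits the identical promise, and there is no waterfall between them.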
Petr Brzek
Petr Brzek@PetrBrzek·
Vibe coded an open-source library that runs Node.js entirely in the browser. What it does:
→ Full virtual filesystem with POSIX-compatible API (using just-bash from @cramforce)
→ 40+ shimmed Node.js modules (fs, path, http, crypto...)
→ Install real npm packages client-side
→ Run Next.js, Vite, and Express dev servers
→ HMR, TypeScript, CSS Modules
No server. No backend. No cold starts.
(posting this to my almost 2k followers so it can die in peace)
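The core trick behind browser fs shims can be shown with a toy in-memory version. This is a deliberately minimal sketch of the pattern and has nothing to do with the actual library's implementation:

```javascript
// Toy in-memory filesystem shim: a Map from path to contents, exposing a
// tiny slice of the Node fs API surface that browser shims typically emulate.
function createMemFs() {
  const files = new Map();
  return {
    writeFileSync(path, data) {
      files.set(path, String(data));
    },
    readFileSync(path) {
      if (!files.has(path)) {
        // Mimic Node's error shape so callers can check err.code
        throw Object.assign(new Error(`ENOENT: no such file, open '${path}'`), {
          code: "ENOENT",
        });
      }
      return files.get(path);
    },
    existsSync(path) {
      return files.has(path);
    },
  };
}
```

A real shim additionally has to handle directories, permissions, streams, and binary buffers, but the essential move is the same: back the POSIX-ish API with a plain in-memory structure instead of a disk.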
Sam Selikoff
Sam Selikoff@samselikoff·
Nothing concrete at the moment. But it’s pretty top of mind for us. In the meantime I’d stick to not using generateStaticParams if your slug count is high. You can also use gSP on parent layouts, but not leaf pages. That’s a good middle-ground today that many folks use. And search “ISR” in Vercel dashboard to see your HIT ratio.
Oscar Luis Jiménez
Oscar Luis Jiménez@oljimenez·
@timneutkens hi, I have a rendering question about Next.js that isn't explained in the docs (I don't know if it's fully possible in the latest version). Who should I talk to about this?
Sam Selikoff
Sam Selikoff@samselikoff·
I think for now the recommendation would be to just PPR the shared shells (i.e. without generateStaticParams), and if you want to shield your backend at request time, add "use cache: remote" (or an equivalent Redis-like memory store).

It's tempting to want to prerender sites like this that have so much public data, but if you're talking about an effectively infinite number of slugs, your cache hit ratio probably won't end up being that great. Not to mention every time you deploy (say, by making a 1-line CSS change), you blow away the ISR cache. So you could end up with a lot of ISR writes for long-tail sites like this.

We've discussed some adaptive behavior here where if a page starts getting a lot of traffic, it moves to being fully PPR'd. I do think that will make its way into Next at some point, but no concrete plans just yet. I think it actually might be possible to do this entirely in userland using rewrites today, if that's the sort of situation you're running into.
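The "adaptive" idea, promoting a route to fully prerendered once it proves it gets traffic, could be sketched in userland as a hit counter that decides when a rewrite to a prerendered variant should kick in. Purely illustrative, and not a Next.js API:

```javascript
// Illustrative hit counter for the userland idea above: once a slug crosses
// a traffic threshold, start routing it to a prerendered variant (e.g. via
// a rewrite to a statically generated path).
function createPromoter(threshold) {
  const hits = new Map();
  return function shouldPrerender(slug) {
    const n = (hits.get(slug) ?? 0) + 1;
    hits.set(slug, n);
    return n >= threshold;
  };
}
```

In a real deployment the counter would have to live in shared storage (e.g. Redis) rather than process memory, since serverless instances don't share state, and the promotion itself would trigger a revalidation of the fully prerendered variant.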
Oscar Luis Jiménez
Oscar Luis Jiménez@oljimenez·
@samselikoff @icyJoseph_dev @timneutkens Yes, that's exactly what I'm looking for. We have an e-commerce site that is multi-language and multi-store. We manage every store/language combination as route segments, plus page slugs. Is there any plan for this pattern in the future?
Sam Selikoff
Sam Selikoff@samselikoff·
@oljimenez @icyJoseph_dev @timneutkens We don't really support this pattern yet. What you want is effectively the ability to "upgrade" the PPR shell. Can I ask what type of site you're building? Does it have a lot of public content but a high number of dynamic [slug] route segments?
Oscar Luis Jiménez
Oscar Luis Jiménez@oljimenez·
@icyJoseph_dev @timneutkens I'm trying to find a way of achieving the following:
- On the first visit to the page, show loaders using Suspense
- Following requests are instant (cached), with no loaders
After trying a lot of different approaches I haven't solved the second part.