Leo

3.6K posts

@leo

Building — Prev. VP Product @vercel

Berlin · Joined March 2012
1.7K Following · 6.2K Followers
Leo@leo·
Safari is the fastest and most efficient browser, and most people use Safari on iOS anyway, so using Chrome as the default on desktop means giving up cloud sync between devices.
Replies: 2 · Reposts: 1 · Likes: 5 · Views: 9.3K

Leo@leo·
Brother from another mother. I thought I was the only one doing that. Safari is simply the best "consumption" browser, while Chrome is the best "creation" browser.
shadcn@shadcn

@wongmjane Same here. Safari is default. Chrome for dev work.

Replies: 4 · Reposts: 1 · Likes: 17 · Views: 19.2K

Leo@leo·
@luciascarlet At least it's a lot more durable than the Aluminum phones 😁
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 269

Leo reposted
Signal
Signal@signalapp·
We are alarmed by reports that Germany is on the verge of a catastrophic about-face, reversing its longstanding and principled opposition to the EU’s Chat Control proposal which, if passed, could spell the end of the right to privacy in Europe. signal.org/blog/pdfs/germ…
Replies: 714 · Reposts: 8.8K · Likes: 30.3K · Views: 4.7M

Leo@leo·
@lgrammel @aisdk Incredible work, Lars! Exciting to continue watching your ideas take shape in the SDK!
Replies: 2 · Reposts: 0 · Likes: 3 · Views: 862

Lars Grammel@lgrammel·
When I joined Vercel to work on @aisdk, I wanted to create the best library for building AI applications. The growth since then was beyond what I imagined, and I am proud that AI SDK was a key part of the story behind the recent Vercel fundraise.
[image attached]
Replies: 26 · Reposts: 8 · Likes: 323 · Views: 21.7K

Leo@leo·
@rauchg @vercel Huge milestone! Congrats to the whole team! 👏
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 1.1K

Leo@leo·
@lachlanjc @vercel I wonder if they could instead delete most old deployments, but retain snapshots (e.g. every couple months or so), to still provide an overview of the deployment history forever. cc @tomocchino
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 120

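Leo's snapshot idea above is essentially a retention policy: keep every recent deployment, and for older ones keep only one representative per time interval. A minimal sketch in TypeScript, where all names and thresholds are hypothetical (this is not Vercel's actual policy):

```typescript
// Sketch of the proposed retention idea: keep all recent deployments,
// and beyond the recent window keep only the newest deployment in each
// age bucket (e.g. one snapshot per quarter). All names are hypothetical.

interface Deployment {
  id: string;
  createdAt: number; // Unix ms
}

function selectRetained(
  deployments: Deployment[],
  now: number,
  recentWindowMs: number,    // keep everything newer than this
  snapshotIntervalMs: number // one snapshot per interval beyond that
): Deployment[] {
  const sorted = [...deployments].sort((a, b) => b.createdAt - a.createdAt);
  const retained: Deployment[] = [];
  const seenBuckets = new Set<number>();

  for (const d of sorted) {
    const age = now - d.createdAt;
    if (age <= recentWindowMs) {
      retained.push(d); // recent: always keep
    } else {
      // older: keep only the newest deployment in each interval bucket
      const bucket = Math.floor(age / snapshotIntervalMs);
      if (!seenBuckets.has(bucket)) {
        seenBuckets.add(bucket);
        retained.push(d);
      }
    }
  }
  return retained;
}
```

This prunes most old deployments while the deployment history still has one working snapshot per period.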
Lachlan Campbell@lachlanjc·
It totally makes sense business-wise, but tragic to lose more generations of internet history on @vercel. They already deleted all my projects from the early React era, now many of my portfolio links will go dark. Would love to pin specific deploys to keep vercel.com/changelog/upda…
Replies: 3 · Reposts: 0 · Likes: 6 · Views: 695

Leo@leo·
@roguesherlock @jitl I definitely agree! We do plan to embed data into the client eventually, as an additional optimization, but not as a functionality blocker. The APIs that Blade has won't change, and the performance will only change marginally, thanks to our edge replicas.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 86

Akash@roguesherlock·
I think there are tradeoffs. With the sync-engine approach you only pay that cost once, during the initial render, but with the RSC paradigm you pay that cost on every navigation/action. ngl beam has a simple mental model and nice dx and would work well for lots of apps. But I think it currently has a ceiling on the types of apps you can build. I think client data would definitely raise that ceiling!
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 79

Leo@leo·
Suspense in React is a solution to a problem that should never have existed in the first place. It exists to avoid blocking the UI on slow data and data waterfalls, but it is, in my opinion, fundamentally anti-dynamism, anti-personalization, and anti-performance, because the effort required to craft a clean UI with it is much greater than if it weren't involved.

A proper skeleton requires meticulous planning of all the potential states a UI could be in, and then having the skeleton represent those states as well as possible. It is always an approximation, and the worse the approximation, the worse the user experience. Whereas, if your data is fast, the final UI is not constrained by what the skeleton looks like, which enables the best user experience (no loading states), maximum dynamism, and thereby also maximum personalization.

Of course this doesn't immediately work for an app of any size, but there is a wide range of small to medium apps for which it, in my opinion, makes the most sense.

I created blade.im to make it easy to quickly build apps that offer maximum dynamism, but without any loading states whatsoever. No skeleton, no SPA spinner, no slow TTFB. Every page render is a single database transaction, even if your layouts and page contain many dozens of queries. If you perform a write, the whole UI is updated for you in the same transaction that performs the write.
Replies: 4 · Reposts: 0 · Likes: 16 · Views: 12.1K

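The difference between the two models can be sketched with simulated queries: a Suspense-style render shows a skeleton frame first and swaps in the real UI later, while the blocking model described above sends a single, already-populated frame. The names, timings, and two-frame simplification here are illustrative assumptions, not Blade's or React's actual implementation:

```typescript
// Contrast of the two rendering strategies, with simulated queries.
// All names and timings are hypothetical.

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function fakeQuery(name: string, ms: number): Promise<string> {
  await sleep(ms);
  return `<li>${name}</li>`;
}

// Suspense-style: the user first sees a skeleton, then the real UI.
async function renderWithSkeleton(): Promise<string[]> {
  const frames = ["<ul><li>loading…</li></ul>"]; // skeleton frame
  const rows = await Promise.all([
    fakeQuery("projects", 5),
    fakeQuery("teams", 5),
  ]);
  frames.push(`<ul>${rows.join("")}</ul>`); // final frame replaces skeleton
  return frames;
}

// Blocking model: one frame, already populated; no loading state.
async function renderBlocking(): Promise<string[]> {
  const rows = await Promise.all([
    fakeQuery("projects", 5),
    fakeQuery("teams", 5),
  ]);
  return [`<ul>${rows.join("")}</ul>`];
}
```

Both paths end with identical HTML; the difference is whether the user first sees an approximation of the UI, which is exactly the trade-off being debated here.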
Leo@leo·
I agree with the common abstractions. I don't perceive them as limiting factors, however. The way I see it is that the term is also a result of how I use a given tool. E.g. I might use S3, which is a disk, as my database for a particular use case. Similarly, I might use my database to store small chunks of binary objects for another use case, which, if you call them files, would then make it my disk. I can also place my files exclusively in memory and have a second faster memory on top, which would make the first memory the new disk. Technically, if you use S3 over a network stream, you also can't call it a disk anymore, since it's not a part of your own compute stack. It might be a "network storage" for your application, but would use disks internally.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 142

Shayan@ImSh4yy·
@leo @timolins Sorry man, no amount of engineering can make a disk as fast as L1 cache. That statement is objectively false. Also, you're completely ignoring the computational cost of rendering React/JSX on the server. Sure, very minimal, but it's non-zero.
[image attached]
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 324

Leo@leo·
I think no technology is a silver bullet. As mentioned in the main tweet, the approach definitely breaks down at some point, especially at scale (meaning Suspense is needed for those cases). There are also many other cases, such as not having any control whatsoever over the data source, or having a page that is so insanely long (like the old Vercel Usage page) that data has to be loaded based on scroll position, and so on. Unless those cases apply, however, it is IMO unpleasant and unnecessary for users to see loading animations.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 226

Timo Lins@timolins·
@leo @ImSh4yy How would that work with expensive queries? Like aggregating analytics data from a large data set?
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 204

Leo@leo·
Memory, disk, database, a variable in code, CPU cache, or whatever else you like to call it are all just words for the same thing in different formats. They can all have the same performance as each other depending on what product you pick and what you do with it. Unless you know the details of each layer, you can't make a sound argument that one is per se slower or faster than another. A simple Bun server that runs 10 or 20 queries with a bunch of nested joins on bun:sqlite has the same perf as any CDN. Try it. Adding React and page code on top doesn't change that. Unless you do things like offset-based pagination, no pagination at all, heavy counts, or other things you need to avoid anyway, it will be fast.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 205

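The claim about query count can be made concrete with a back-of-envelope cost model: if all queries run in one round trip against an in-memory database, TTFB is dominated by the network, and going from 1 query to 20 adds only a fraction of a millisecond. The numbers below are illustrative assumptions, not measurements:

```typescript
// Back-of-envelope model of the claim above: with a single round trip
// and in-memory queries, total latency is dominated by the network,
// not the query count. Numbers are illustrative assumptions.

interface CostModel {
  networkRttMs: number; // client ↔ server round trip
  perQueryMs: number;   // one well-indexed, in-memory query
}

function ttfbMs(model: CostModel, queryCount: number): number {
  // Single request to the DB: pay the RTT once, then run queries in memory.
  return model.networkRttMs + queryCount * model.perQueryMs;
}

const model: CostModel = { networkRttMs: 20, perQueryMs: 0.05 };

const oneQuery = ttfbMs(model, 1);       // ≈ 20.05 ms
const twentyQueries = ttfbMs(model, 20); // ≈ 21 ms
```

Under these assumptions, 20 queries cost under 5% more than 1 query, which is the intuition behind "executing 1 or 20 queries won't change the perf" (and also why it breaks down once `perQueryMs` grows, e.g. for unindexed scans or heavy counts).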
Shayan@ImSh4yy·
To "populate the static shell" you have to wait for all 20 queries to complete before sending ANY HTML. That's literally blocking the entire page on the slowest query. "Blocking for 20 queries is fine" assumes an unrealistically reliable world where nothing is ever slow. What happens with a cold cache, complex join, or external API call? Also Node executing queries and rendering HTML isn't the same as a CDN serving static files from memory. Unless you're serving static shells and filling gaps later, which brings us back to Suspense.
Replies: 1 · Reposts: 0 · Likes: 2 · Views: 212

Leo@leo·
I would say it always depends on how fast the thing you need to load is. I of course agree that you can't make something faster that you have no control over (like the OG image of a different website), but for most apps there are many vectors, such as the main data source, that devs are very much in control of. So yes, suspending something you absolutely cannot control of course makes sense, but my point is that we control a huge part of our stack, and Suspense is a sledgehammer being used for things it should never have been used for (first load and page transitions).
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 65

Jake 🎉@jitl·
@leo like… what happens when your user pastes a URL into the app? You’re gonna have to do some loading. No way to get ahead of it, maybe you can cheat by reading the clipboard on app focus on some platforms but this behavior is big sus to users
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 113

Leo@leo·
I agree. But ultimately, the server remains the source of truth in any model. There is not just data to consider, there is also code. For example, teams deploy several times per hour, and that code has to hit the client asap, so without server components you'd frequently download tons of unnecessary code, which is especially slow on slow connections. We do plan to offer data on the client, but we can't neglect the other requirements on the way there.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 40

Akash@roguesherlock·
tbh I think we've tried this approach before and it led to poor UX, as the app UI becomes very sensitive to the network. And it's not always about a slow connection; there's latency, jitter, unreliability, etc. So no matter how fast the backend is, or how nearby the edge is, or how fast the replication is for the DB, you'll always be sensitive to the network. This is why I like the sync-engine approach, because it smooths over the network boundary in the background. Anyway, happy to be proven wrong and wish you all the best!
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 41

Leo@leo·
@jitl Fair enough! The database engine that Blade imports is a separate project of ours that will get its own docs. Will try to get more details landed in the Blade docs in the meantime.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 65

Jake 🎉@jitl·
@leo blade is devex for some db, but might as well be drizzle docs for all I can glean about system design
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 86

Leo@leo·
Whether or not you use Suspense doesn't change how much HTML you need to download. Even if you use Suspense, you still have to serve the static shell. If the static shell is already populated instead of empty, that's better for users, especially those on slow connections.

Blocking a page render on 10, 20, or many more queries is absolutely fine. Any database executes that amount of queries in microseconds from memory, provided you don't write poorly optimized queries. Executing 1 or 20 queries won't change the perf. The main penalty is the network, and there is only a single request to the DB.

What you said about code vs. static isn't valid. The perf of a Node server responding is the same as any CDN responding: both need to run code, and both have that code already evaluated in memory. Native code or not might change request throughput, but not the perf of a single request. I wrote Vercel's first prod static file server, which has been used for years, and `serve`, which has 2M+ weekly npm downloads and is Create React App's suggested prod server (CRA is being killed, ofc).
Replies: 2 · Reposts: 0 · Likes: 1 · Views: 340

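The "only a single request to the DB" point is the key variable in this exchange: pay the network round trip once for a batch of queries, rather than once per query. A simulated sketch (the 5 ms RTT and the batching API are hypothetical, not Blade's actual numbers):

```typescript
// Why a single DB request matters: compare N separate round trips
// against one batched round trip. RTT and queries are illustrative.

const RTT_MS = 5;

const sleepMs = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function runQuery(sql: string): Promise<string> {
  await sleepMs(RTT_MS); // each call pays a full network round trip
  return `rows for: ${sql}`;
}

async function runBatch(sqls: string[]): Promise<string[]> {
  await sleepMs(RTT_MS); // one round trip for the whole batch
  return sqls.map((sql) => `rows for: ${sql}`);
}

async function compare(sqls: string[]) {
  const t0 = performance.now();
  const sequential: string[] = [];
  for (const sql of sqls) sequential.push(await runQuery(sql));
  const tSequential = performance.now() - t0;

  const t1 = performance.now();
  const batched = await runBatch(sqls);
  const tBatched = performance.now() - t1;

  return { sequential, batched, tSequential, tBatched };
}
```

With 10 queries, the sequential path pays roughly 10 round trips while the batch pays one; the results are identical either way, which is the scenario where blocking on all queries stays cheap.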
Shayan@ImSh4yy·
You're measuring server-side query speed, not what users actually experience. Your server accessing edge SQLite in 1ms doesn't help a user on slow 3G who still needs to download all that HTML. Also if you're streaming HTML while queries are still running, you need placeholders for missing data. That's just Suspense reinvented. If you block until all 10 queries complete before streaming, your TTFB includes the slowest query. Either way, you haven't solved the problem Suspense addresses. Also "10 queries not slower than static CDN" is just false. Static files serve from memory cache. You're executing code, running queries, rendering HTML. And your "single transaction" only works for reads from potentially stale replicas.
Replies: 1 · Reposts: 0 · Likes: 2 · Views: 289

Leo@leo·
@jitl You are correct that the network conditions dominate the speed of every page render in Blade. We rolled our own DB replicas in the same 18 AWS regions where Vercel is, which is sufficient for serving almost any user anywhere in around 20ms. Replication happens ahead of time.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 46

Jake 🎉@jitl·
@leo are you using Cloudflare DO? DO isn't available in every POP, plus once more than one user wants the same data, the DO may be across the planet from user 2. Network conditions dominate TTFB and transitions even for a CDN. i have fiber, desk to cf is 10ms, living room to cf is 200ms. airplane is worse
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 59

Leo@leo·
@roguesherlock @jitl Correct. Data sits at the edge, just like your application code. That's essential for a fast TTFB with slow connections, since there's no bandwidth to download lots of code or data. We will make data available in the browser too, but the edge is currently almost equally fast.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 46

Akash@roguesherlock·
@leo @jitl so there's a network between user and db? If I click submit on a button and my network is slow then I'd see a delay between me clicking and ui responding right? From the docs it looks like you can't read in client components so maybe you don't do optimistic updates?
Replies: 2 · Reposts: 0 · Likes: 0 · Views: 40

Leo@leo·
@ImSh4yy @timolins That's where it always is. Anything has a TTFB. A Blade page render with 10 queries is not slower than the TTFB of a static CDN asset. Because it uses only a single DB transaction, and the perf of a DB doesn't change with the amount of queries, assuming good queries.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 237

Shayan@ImSh4yy·
@leo @timolins That sounds like you've just moved the problem to TTFB.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 208

Leo@leo·
@jitl Sorry for that! Will get that fixed. In the meantime, did you see the docs link in my first post?
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 85

Jake 🎉@jitl·
@leo i can't find the ronin docs, all my search results are about Ruby security stuff
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 134