Nikita | Scaling Postgres

5K posts

@nikitabase

CEO @neondatabase. Best Postgres for developers and vibe coders.

San Francisco, CA · Joined June 2010
2.2K Following · 11.9K Followers
Pinned Tweet
Nikita | Scaling Postgres @nikitabase
Database architecture thread. Technical. There have been several startups building operational relational databases focused on OLTP with a shared-nothing architecture. @neondatabase uses a different approach: shared storage. What's the difference?
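A minimal way to picture the contrast the thread opens with (an illustrative sketch only, not Neon's or anyone's actual implementation; class and method names are made up): in shared-nothing, each node owns a disjoint slice of the data, so a router must hash every key to its owning node; in shared storage, stateless compute nodes all read and write one common storage layer, so any node can serve any key.

```python
# Illustrative contrast between shared-nothing and shared-storage designs.
# All names here are hypothetical; this models the routing logic only.

class SharedNothingCluster:
    """Data is partitioned: each key lives on exactly one node."""
    def __init__(self, num_nodes: int):
        self.nodes = [dict() for _ in range(num_nodes)]

    def _owner(self, key: str) -> dict:
        # A router hashes the key to pick the single owning node.
        return self.nodes[hash(key) % len(self.nodes)]

    def put(self, key, value):
        self._owner(key)[key] = value

    def get(self, key):
        # Only the owning node can answer for this key.
        return self._owner(key).get(key)


class SharedStorageCluster:
    """Compute is stateless; all nodes share one storage layer."""
    def __init__(self, num_compute_nodes: int):
        self.storage = dict()             # the single shared storage layer
        self.compute = num_compute_nodes  # compute scales independently of data

    def put(self, key, value, via_node: int = 0):
        self.storage[key] = value         # any compute node can write

    def get(self, key, via_node: int = 0):
        return self.storage.get(key)      # every compute node sees the same data
```

The practical consequence: growing a shared-nothing cluster means repartitioning data across nodes, while a shared-storage cluster can add or remove compute without moving data, which is the property the thread goes on to discuss.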
Trevor I. Lasn @trevorlasn
@neondatabase just gave me $25K in credits to build autonomous prediction market trading agents. The edge isn't speed anymore. It's data quality and agent intelligence. The future for prediction markets is autonomous agents competing on @Polymarket and @Kalshi — and the best data wins. Huge thanks to Neon Postgres for the $25K credits to make it happen.
Nikita | Scaling Postgres
Code is solved, but now every other stage of the SDLC must be overhauled. Specific.dev is agent-first local dev and cloud, with databases on Neon.
Neon Postgres @neondatabase

Agents are great at writing code, but too much of the other work, like configuring the local dev environment and provisioning infrastructure, is still on the developer. Specific.dev solves this with an agent-first cloud. Agents get automatic local dev environments and a Terraform-style IaC layer that gives them the infrastructure context they need to develop, deploy, and maintain apps safely. Also, the database layer runs on Neon!

Nikita | Scaling Postgres retweeted
Stanislav Kozlovski @kozlovski
Databricks' Zerobus went GA a few weeks ago, announcing some serious ingest power:
• thousands of concurrent clients
• each connection up to 100 MB/sec
• over 10 GB/s of aggregate throughput
...all to a SINGLE Delta table. 🔥

In one sentence, this is a custom-made ingest buffer whose only goal is to sink your data to the lakehouse. It's more of an API than a "bus", despite the naming. That said, the API support is rich:
• a gRPC API for high-throughput streaming
• a Rust client, on which the Python, TypeScript, Java, and Go SDKs are built
• an HTTP API (in beta)
• OpenTelemetry (allegedly; this sounds really cool, but I couldn't find any docs on it)

💵 Priced as a fully managed serverless service at a simple per-GB ingest price ($0.05 to $0.064), it seems pretty competitive with Confluent and the like when I ran the numbers.

The latency is high:
• To store in the bus, the p50 is ~200 ms and the p95 is ~500 ms.
• To get the data into the table, the p50 is ~5,000 ms and the p95 is ~30,000 ms.
And they don't publish a p99 at all. 🚩

To be fair, few people need millisecond-level lag for data analytics. Since these dashboards usually move at the speed of a human, anything on the order of a few seconds is very good.

In their announcement blog post, Databricks threw some shade at Kafka's "complexity tax": the fact that you need to manage so many components simply to get data into the lakehouse. They're right; it can be burdensome. While an API can never kill Kafka, I think the 80/20 focus of bringing most of the benefits (just ingest my data into a table!) with little effort and batteries included is interesting. Zerobus comes with:
• schema validation on the broker
• an automated dead-letter queue (saving rejected Parquet files)
• unified governance with Unity Catalog: auditing, lineage tracking, access control

This isn't revolutionary, though. A single-sink solution built from the ground up should be simple. It's literally just an API.

Keep in mind that it also comes with drawbacks, like locking you pretty deeply into Databricks. 🚨 Another big risk I see in this trend is that it can recreate the same pipeline sprawl that Kafka was meant to solve. OK, you aren't using the complicated Kafka and instead opt for a simple, direct API call. What happens when your architecture racks up thousands of such custom-made pipelines? 🤨

Databricks also claim that Zerobus is much cheaper than self-managing Kafka (a fraction of the cost). I ran the numbers and it turned out to be false (like every vendor cost claim).

I spent some additional time diving into all of the details in a new video series called "Stan Reads". If interested, watch it here 👇 (not a bad thing to put on in the background while doing something else)
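A quick sanity check on what the quoted per-GB pricing implies (a back-of-envelope sketch; the sustained-throughput figure below is an illustrative assumption, not a Databricks or Kafka benchmark number):

```python
# Back-of-envelope monthly cost at a flat per-GB ingest price.
def monthly_ingest_cost(gb_per_sec: float, price_per_gb: float,
                        utilization: float = 1.0) -> float:
    """Cost of ingesting at a sustained rate for a 30-day month."""
    seconds_per_month = 30 * 24 * 3600  # 2,592,000 s
    return gb_per_sec * utilization * seconds_per_month * price_per_gb

# Hypothetical example: a sustained 0.1 GB/s feed at the low end of the
# quoted range ($0.05/GB): 0.1 * 2,592,000 s * $0.05 = $12,960/month.
print(f"${monthly_ingest_cost(0.1, 0.05):,.0f}/month")  # → $12,960/month
```

This is the kind of arithmetic behind "I ran the numbers": at these rates, per-GB pricing is easy to compare against a self-managed cluster's hardware plus operations cost, but the conclusion depends entirely on your sustained throughput and utilization.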
Nikita | Scaling Postgres
Agents create a high volume of small, ephemeral databases for experimentation. The core architectural challenge: any one of these can evolve into a massive production system overnight. Modern data systems must support seamless, elastic growth from near-zero to full scale, without requiring manual re-platforming or provisioning. This operational simplicity is now a fundamental requirement.
Nikita | Scaling Postgres
A very important property of @neondatabase. While it supports lots of small instances (we spin up 80 a second), each of those instances can automatically grow into a large-scale production instance, with no changes required from the user.
21 Sage @aslamarley
Massive shout-out to @neondatabase tho 😂, their point-in-time-recovery saved my life. Had to manually write the Prisma migration file from scratch to force the dev and production into sync. Real fuck toda, mehn 😂
21 Sage @aslamarley
Real shit, my soul literally left my body and came back, my brain went haywire. 😂 Just survived a brutal 5-hour battle with Next.js, Prisma, and a dropped PRODUCTION database. Crazy shit 😂 @neondatabase — Neon showed me what it's like to suffer
Nikita | Scaling Postgres @nikitabase
We are halfway through the rollout of a perf/reliability feature that only a couple of other big cloud databases have managed to ship. Extremely proud of the hardcore engineering that went into it. We'll write about it when it gets to 100%.
Nikita | Scaling Postgres retweeted
Amjad Masad @amasad
So proud to be partnered with @alighodsi, @nikitabase, and the rest of the Databricks team. More coming — watch this space!
Manny Bernabe @MannyBernabe

"These are people that would never otherwise even touch code and they're now using @Replit themselves and they're building things that actually work." - @Databricks CEO @alighodsi Ali says non-technical employees across their 10,000 person company are building real software with @Replit: "We have 5-6,000 people that are sitting in marketing or in HR or in finance and they don't have those technical skills. Replit is excellent. They love it." "Whenever you build a piece of software that works with Replit, it uses a database behind the scenes and actually we have a very deep partnership with them. So it uses our lakehouse offering behind the scenes." "Replit is really amazing for democratization to the broader masses. The people that you would never imagine would write a single line of code." Databricks Ventures also invested in Replit as of today.

Nikita | Scaling Postgres retweeted
Shernaz Daver @shernazdaver
From the kid in Jordan to the cover of Forbes -- a journey is worth a thousand stories! Having known @amasad for a few years and watching him make the tough decisions to build what @Replit is today has been amazing! @pirroh @HayaOdeh
Richard Nieva @richardjnieva

New: Replit has raised $400 million at a $9 billion valuation, and is releasing its new coding agent. The goal is to make vibe coding more like you're doodling on a whiteboard. Paul Graham, an early investor, thinks it will "redefine" the term. forbes.com/sites/richardn…
