Tiger Data - Creators of TimescaleDB

360 posts


@TigerDatabase

The fastest PostgreSQL cloud platform for time series, real-time analytics, and vector workloads. Creators of TimescaleDB. https://t.co/KhYccImbg5

Joined May 2025
20 Following · 1.3K Followers
Pinned Tweet
Tiger Data - Creators of TimescaleDB
🐯 @TimescaleDB is now TigerData! 🚀 When we launched Timescale, the top Hacker News comment said it was “a bad idea.” PostgreSQL wasn’t supposed to be fast. Or scalable. Or useful for time-series. 8 years later: 2,000 customers, 8-digit ARR, and most workloads aren’t even time-series anymore. We’ve changed our name to reflect that evolution: Timescale is now TigerData. Same code. Same team. Still PostgreSQL. Just a lot faster.
Tiger Data - Creators of TimescaleDB
MVCC is one of the best things about Postgres. It's also costing you more than you think if your rows never change after insert. Every row in Postgres carries a fixed 23-byte header, plus padding, that tracks who created it, who deleted it, and whether the transaction committed. That machinery exists so concurrent readers and writers never block each other. It's elegant engineering, and for mixed read-write workloads, it earns every byte. But sensor data doesn't get updated. Log entries don't get edited. Financial ticks don't change after they land. If you're running an append-only workload, those bytes are infrastructure for a problem you don't have. At high ingest rates, they add up: a 1KB sensor reading actually writes 2.5-3.5KB to disk once you account for headers, indexes, and WAL records. Autovacuum still runs continuously, even when there's nothing to clean up, because Postgres triggers it on insert volume alone. None of this is a bug. It's an architecture built for a different workload pattern, doing exactly what it was designed to do. Recognizing the mismatch is more useful than trying to tune around it. @mattstratton from Tiger Data (creators of @TimescaleDB) walks through the per-byte accounting, the write amplification chain, and what changes when the storage model actually fits the workload. tsdb.co/xbt73lov
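The per-byte accounting above can be sketched in a few lines. This is a back-of-envelope model, not a measurement: the 23-byte tuple header and 8-byte alignment are real Postgres constants, but the index-entry size, index count, and WAL factor below are assumed round numbers chosen only to illustrate how a 1KB payload turns into roughly 2.5x that on disk.

```python
# Back-of-envelope model of append-only write amplification in Postgres.
# TUPLE_HEADER is the real fixed per-row header; the index-entry size,
# index count, and WAL factor are illustrative assumptions.

TUPLE_HEADER = 23   # xmin, xmax, ctid, infomask... carried by every row
ALIGN = 8           # MAXALIGN boundary on typical 64-bit builds

def aligned(n):
    """Round up to the next MAXALIGN boundary (Postgres pads tuples)."""
    return (n + ALIGN - 1) // ALIGN * ALIGN

def bytes_written(payload, n_indexes=3, index_entry=40, wal_factor=1.2):
    heap = aligned(TUPLE_HEADER + payload)     # heap tuple on the page
    indexes = n_indexes * index_entry          # one B-tree entry per index
    wal = int(wal_factor * (heap + indexes))   # WAL records the changes too
    return heap + indexes + wal

print(bytes_written(1024) / 1024)   # ≈ 2.5x amplification for a 1KB reading
```

Fitting the model's coefficients from your own `pg_stat` numbers tells you whether the overhead is noise or the dominant cost at your ingest rate.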
Tiger Data - Creators of TimescaleDB
Sensor data looks like rows, so most developers store it that way. Relational schema, transactional indexes, the patterns they already know. And it works, for a while. The problem is that sensor data doesn't behave like rows. It behaves like a time-ordered stream whose value declines with age. The questions you ask of it shift from point lookups to time-window aggregations. The volume grows with every device you add and every sampling rate you increase. And the things that happen in production (out-of-order timestamps, data replays, late corrections) break systems that were designed for clean, ordered inserts. By the time the architecture stops scaling, retrofitting it is expensive. The schema assumptions are load-bearing, and they're everywhere. This article from the Tiger Data (creators of @TimescaleDB) blog walks through what actually needs to be different: log-optimized ingestion, time-partitioned storage, lifecycle tiering that lets resolution and cost decline together as data ages. The kind of design that starts from how sensor data actually behaves, not how it looks at first glance. tsdb.co/2vem9kto
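A minimal sketch of the stream-shaped view: bucket readings into fixed time windows keyed by timestamp, so late or out-of-order arrivals land in the correct window instead of breaking an insert-order assumption. The window size and feed below are illustrative.

```python
# Time-window aggregation that tolerates out-of-order and replayed data:
# each reading is assigned to its bucket by timestamp, not arrival order.
from collections import defaultdict

WINDOW = 60  # seconds per aggregation bucket

def bucket(ts):
    return ts - ts % WINDOW

def aggregate(readings):
    """readings: iterable of (epoch_seconds, value), in arrival order."""
    windows = defaultdict(list)
    for ts, value in readings:
        windows[bucket(ts)].append(value)   # arrival order is irrelevant
    return {w: sum(v) / len(v) for w, v in sorted(windows.items())}

# A replayed / out-of-order feed still aggregates correctly:
feed = [(130, 10.0), (5, 2.0), (61, 4.0), (59, 6.0), (125, 8.0)]
print(aggregate(feed))   # {0: 4.0, 60: 4.0, 120: 9.0}
```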
Tiger Data - Creators of TimescaleDB reposted
Mike Freedman @michaelfreedman
Introducing TigerFS - a filesystem backed by PostgreSQL, and a filesystem interface to PostgreSQL. Idea is simple: Agents don't need fancy APIs or SDKs, they love the file system. ls, cat, find, grep. Pipelined UNIX tools. So let’s make files transactional and concurrent by backing them with a real database. There are two ways to use it: File-first: Write markdown, organize into directories. Writes are atomic, everything is auto-versioned. Any tool that works with files -- Claude Code, Cursor, grep, emacs -- just works. Multi-agent task coordination is just mv'ing files between todo/doing/done directories. Data-first: Mount any Postgres database and explore it with Unix tools. For large databases, chain filters into paths that push down to SQL: .by/customer_id/123/.order/created_at/.last/10/.export/json. Bulk import/export, no SQL needed, and ships with Claude Code skills. Every file is a real PostgreSQL row. Multiple agents and humans read and write concurrently with full ACID guarantees. The filesystem /is/ the API. Mounts via FUSE on Linux and NFS on macOS, no extra dependencies. Point it at an existing Postgres database, or spin up a free one on Tiger Cloud or Ghost. I built this mostly for agent workflows, but curious what else people would use it for. It's early but the core is solid. Feedback welcome. tigerfs.io
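The "chain filters into paths" idea can be sketched as a path-to-SQL translator. The real TigerFS grammar is not spelled out in this thread, so the operator handling below (`.by`, `.order`, `.last`) is a hypothetical reconstruction for illustration, not TigerFS's actual implementation.

```python
# Hypothetical sketch: push a TigerFS-style filter path down to one SQL query.
# Operator semantics are assumptions inferred from the example in the post.

def path_to_sql(table, path):
    where, order, limit = [], None, None
    parts = [p for p in path.strip("/").split("/") if p]
    i = 0
    while i < len(parts):
        op = parts[i]
        if op == ".by":                  # .by/<col>/<value> -> WHERE col = value
            where.append(f"{parts[i+1]} = '{parts[i+2]}'")
            i += 3
        elif op == ".order":             # .order/<col> -> ORDER BY col
            order = parts[i + 1]
            i += 2
        elif op == ".last":              # .last/<n> -> newest n rows
            limit = int(parts[i + 1])
            i += 2
        else:
            raise ValueError(f"unknown operator {op!r}")
    sql = f"SELECT * FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    if order:
        sql += f" ORDER BY {order} DESC"
    if limit is not None:
        sql += f" LIMIT {limit}"
    return sql

print(path_to_sql("orders", ".by/customer_id/123/.order/created_at/.last/10"))
# SELECT * FROM orders WHERE customer_id = '123' ORDER BY created_at DESC LIMIT 10
```

The win is that each path segment maps to a SQL clause the database can optimize, so `ls` on a deep path never materializes the whole table.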
Tiger Data - Creators of TimescaleDB
Your IIoT PoC will succeed. Your production deployment is a different story. McKinsey found that 74% of manufacturers get stuck in "pilot purgatory" and database architecture is a big reason why. What works at 1,000 sensors breaks at 100,000. The failure modes are all predictable. Three constraints determine where your IIoT database will break: ▪ Storage scales quadratically with retention time. That small monthly storage bill becomes a budget problem in year three. ▪ Ingest rate limits are invisible until you cross them. Index updates slow incrementally as your database grows. Hit the ceiling once and the backlog never recovers. ▪ Query performance degrades on a known curve. Indexed queries slow logarithmically. Aggregates slow linearly. The dashboard that loaded in 50ms starts timing out. All of this is measurable during the PoC phase. Model these three constraints before you scale and you'll know exactly when your system will need to change, before it breaks in production. Full breakdown in this Tiger Data (creators of @TimescaleDB) whitepaper: tsdb.co/wru5et0i
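The third constraint, the known degradation curves, is the easiest to model up front. Here is a toy sketch under stated assumptions: the per-level latency, B-tree fanout, and scan rate are made-up coefficients you would fit from PoC measurements; the point is the shape of each curve, logarithmic for indexed lookups and linear for aggregates.

```python
# Toy model of IIoT query degradation curves. Coefficients are illustrative
# assumptions to be replaced with numbers measured during the PoC.
import math

def lookup_ms(rows, per_level_ms=8.0, fanout=256):
    # Indexed point lookups: B-tree depth grows with log_fanout(rows)
    return per_level_ms * max(1, math.ceil(math.log(rows, fanout)))

def aggregate_ms(rows_scanned, ms_per_million=120.0):
    # Window aggregates: linear in rows touched
    return ms_per_million * rows_scanned / 1_000_000

for devices in (1_000, 100_000):
    total = devices * 60 * 24 * 365   # one reading/min, one year retained
    day = devices * 60 * 24           # rows behind a 24h dashboard panel
    print(devices, lookup_ms(total), aggregate_ms(day))
```

Run with your own fitted coefficients, the same two lines tell you the row count at which the dashboard aggregate crosses its timeout, before production finds it for you.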
Tiger Data - Creators of TimescaleDB reposted
Mike Freedman @michaelfreedman
How @nvidia leverages @TimescaleDB for structured telemetry in its reference architecture for the Multi-Agent Intelligent Warehouse (MAIW).
Tiger Data - Creators of TimescaleDB @TigerDatabase

The most important AI systems of the next decade will not live in chat windows. They will run factories, warehouses, energy systems, and fleets. AI is moving from analyzing operations to helping run them. Earlier this year, @nvidia introduced its Multi-Agent Intelligent Warehouse blueprint. What stood out was not just the agents themselves, but the architecture behind them. Instead of dashboards and alerts, specialized agents coordinate across machine telemetry, robotics systems, workforce operations, forecasting, and inventory to support real-time decisions in the physical world. You can see the architecture NVIDIA is proposing here: developer.nvidia.com/blog/multi-age… And the full blueprint here: build.nvidia.com/nvidia/multi-a… Systems like this depend on continuous access to operational data. Factories, warehouses, and energy systems already generate massive streams of telemetry from sensors, robots, PLCs, and machines. The challenge has never been collecting the data. The challenge is to reason quickly enough to act. The architecture starts to look like this: machines → telemetry → database → AI agents → decisions → machines This creates a real-time operational data loop. Agents do not operate in isolation. They need access to the operational history of the systems they manage. Telemetry, events, anomalies, and trends over time. In agent-driven industrial systems, the database becomes the memory layer for machines. Many industrial platforms already rely on Postgres and TimescaleDB to store and analyze time-series telemetry from machines and infrastructure. At Tiger Data, the company behind TimescaleDB, we see this pattern across industrial IoT platforms, fleet monitoring systems, and manufacturing analytics. The future of industrial AI is not just better models. It is systems that can continuously reason across operational data.

Tiger Data - Creators of TimescaleDB
Most industrial machine monitoring is designed for large enterprises. High costs, rigid 10-device minimums, multi-year contracts. Small and mid-sized manufacturers, the backbone of the supply chain, are locked out. Takton built Sense Manufacturing to change that. Affordable machine monitoring that starts at one device. Approximately 30% lower total cost in year one, up to 70% lower in subsequent years compared to competitors. The team initially evaluated InfluxDB. It performed well for high-frequency data across a limited number of streams, but couldn't deliver on their production requirements: ingesting data from thousands of devices, each reporting power and vibration a few times per minute. They also wanted to avoid stack fragmentation. Their first product ran on Supabase using standard SQL and Postgres. Adding InfluxDB would mean maintaining a second query language and storage paradigm for a small team. Why they chose Tiger Data: ▪ Built on Postgres—kept a single SQL-based stack ▪ Designed for high-rate data ingestion from thousands of devices ▪ Tiger Cloud offloaded operational burden from a two-engineer team ▪ Hypertables provided automatic partitioning for reliable ingestion at scale Two months from first commit to devices live in customer facilities. One pilot customer's CNC machine failed after Sense flagged anomalous vibration readings for four days, but notifications weren't enabled. The failure cost $50,000 and three months of downtime. With alerts on, it would have been a $3,000 part change. How this startup shipped production IoT devices in 60 days, covered in this article by Farbod Moghaddam, CTO & Co-founder, Sense Manufacturing Inc: tsdb.co/yaaa0rao
Tiger Data - Creators of TimescaleDB
We're excited to share that Tiger Data is joining Chainguard Commercial Builds to provide secure, production-ready container images. Starting 3/17, TimescaleDB is available packaged inside the Chainguard Factory, a SLSA Level 3-compliant system that delivers: ✅ Minimal attack surface ✅ Zero CVEs ✅ Full provenance & SBOMs ✅ FIPS readiness ✅ SLAs for vulnerability remediation Enterprises shouldn't have to choose between getting value from their software and maintaining the security layers beneath it. This partnership means your teams get TimescaleDB with the most secure software supply chain on the market - without the overhead of building, patching, or monitoring images yourself. 👉 tsdb.co/wdk5fxrn #SoftwareSupplyChain #Chainguard #TimescaleDB
Tiger Data - Creators of TimescaleDB
Every ship in the world broadcasts its position every few seconds. At @vessel_api, they process 700,000 of those reports per hour. They ran it on MongoDB for a year. Then they stopped. The data looked like documents. You could serialize a position report as JSON. MongoDB stored it fine. But vessel positions are measurements. Timestamp and location aren't just metadata. They answer questions like: Where was this ship two hours ago? What's within 50km of Rotterdam right now? What passed through the English Channel since Tuesday? At 700K rows per hour, fighting the grain of your database gets expensive fast. The @TimescaleDB migration gave them hypertables with 1-hour chunks, automatic compression segmented by vessel identifier, and retention policies that just drop aged-out partitions in milliseconds instead of a cron job that once failed silently for a week. But the really clever engineering is the spatial query layer: a three-stage filter using H3 hexagonal indexing, PostGIS, and chunk pruning that takes 16.5 million candidate rows down to hundreds. And then there's the bug. A single mismatched struct tag, a leftover bson serialization name from the MongoDB era colliding with the new JSON field name, silently broke the entire port data pipeline. 156,000 port events created with no geographic identifier. The fix was changing one string. The deeper fix was finding 240 more vestigial bson tags scattered across the codebase, each one a potential repeat. All of it runs on a single EC2 r7i.large. 2 vCPUs, 16 GB RAM. Worth the read whether you're considering a migration or not: vesselapi.com/blog/mongodb-t…
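The three-stage funnel can be sketched with stdlib stand-ins: a coarse lat/lon grid plays the role of H3 cells, a time filter plays the role of chunk pruning, and haversine distance is the exact final check. Cell size, radius, and sample data here are illustrative, not VesselAPI's actual values.

```python
# Sketch of a three-stage spatial filter funnel: time prune -> coarse cell
# filter -> exact distance. Stand-ins for chunk pruning, H3, and PostGIS.
import math

def grid_cell(lat, lon, deg=1.0):             # stage 2 stand-in for H3
    return (int(lat // deg), int(lon // deg))

def haversine_km(lat1, lon1, lat2, lon2):     # stage 3: exact distance
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = rlat2 - rlat1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat/2)**2 + math.cos(rlat1)*math.cos(rlat2)*math.sin(dlon/2)**2
    return 6371 * 2 * math.asin(math.sqrt(a))

def near(positions, t0, t1, lat, lon, radius_km):
    # stage 1: prune by time (a hypertable drops whole chunks here)
    recent = [p for p in positions if t0 <= p["ts"] <= t1]
    # stage 2: keep only candidates in the target cell and its neighbors
    cy, cx = grid_cell(lat, lon)
    cells = {(cy+dy, cx+dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)}
    coarse = [p for p in recent if grid_cell(p["lat"], p["lon"]) in cells]
    # stage 3: exact great-circle distance on the survivors
    return [p for p in coarse
            if haversine_km(lat, lon, p["lat"], p["lon"]) <= radius_km]
```

Each stage is cheap relative to the one after it, which is how millions of candidate rows collapse to hundreds before the expensive geometry runs.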
Tiger Data - Creators of TimescaleDB
Postgres isn't broken. Your workload eliminated the quiet period it was designed around. Here's a pattern that trips up even experienced teams. Write latency develops a rhythm you can't explain by traffic. Autovacuum is always running. Maintenance that used to take minutes now takes hours. Indexes, query plans, configs — everything looks correct. Nothing is misconfigured. The problem is architectural. Postgres maintenance was built around valleys. Batch ETL writes for two hours, the database rests and catches up. Continuous ingestion has no valleys. Every maintenance process runs in direct competition with writes. All day. All night. @mattstratton from @TigerDatabase breaks down exactly where the mechanics stop working. The workloads where this hits hardest — IoT, financial feeds, observability — share one trait: the data source runs on its own schedule, independent of what the database needs. Every quarter optimizing within the wrong architecture is a quarter where migration gets harder. Full breakdown from Matty: tsdb.co/1jjv7yx2
Tiger Data - Creators of TimescaleDB reposted
VesselAPI @vessel_api
We migrated our maritime AIS platform from MongoDB to TimescaleDB — hypertables with 1-hour chunks, H3 hexagonal spatial indexing, and PostGIS on a single r7i.large. Also found 240 dead BSON struct tags that silently broke our port pipeline. Full writeup: vesselapi.com/blog/mongodb-t…
Tiger Data - Creators of TimescaleDB
You've spent two years fixing slow queries with indexes. It works every time. Then write latency starts climbing—and nothing looks wrong. Here's what's actually happening: every index you add is a permanent tax on every insert. Not a one-time cost. Every. Single. Row. At a few hundred inserts per second, it's invisible. At tens of thousands, it's the entire problem. The feedback loop is what makes it so hard to catch. Writes slow down → autovacuum falls behind → query performance degrades anyway → you add another index to compensate. Six months can pass between cause and symptom. By the time you feel it, the original decision is long forgotten. There's also a specific behavior with timestamp indexes that makes things worse than the headline number suggests — worth understanding before you run `CREATE INDEX` again. This @TigerDatabase (creators of @TimescaleDB) article by Matty Stratton covers the write amplification math, what to look for in `pg_stat_user_indexes` before adding the next index, and where index pruning stops being enough: tsdb.co/r09ph2pu
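The `pg_stat_user_indexes` check amounts to this: before adding another index, look at how often the existing ones are actually scanned. The rows below mimic the view's real `relname` / `indexrelname` / `idx_scan` columns; in practice you would fetch them with a database driver, and the threshold is yours to choose.

```python
# Flag indexes whose scan count suggests they only tax inserts.
# Sample rows are shaped like pg_stat_user_indexes output.

def unused_indexes(rows, min_scans=100):
    """Return index names scanned fewer than min_scans times."""
    return [r["indexrelname"] for r in rows if r["idx_scan"] < min_scans]

stats = [
    {"relname": "readings", "indexrelname": "readings_ts_idx",     "idx_scan": 48_213},
    {"relname": "readings", "indexrelname": "readings_device_idx", "idx_scan": 9_877},
    {"relname": "readings", "indexrelname": "readings_status_idx", "idx_scan": 3},
]
print(unused_indexes(stats))   # ['readings_status_idx']
```

An index with near-zero scans is pure write amplification: every insert pays its maintenance cost and no query collects the benefit.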
Tiger Data - Creators of TimescaleDB
Managing energy costs for households and businesses has never been more important, yet many Distribution System Operators (DSOs) lack the visibility to see what's happening in low-voltage grids in real time. For decades, this didn't matter. Power flowed one way: from centralized generation to consumers. High-level monitoring was enough. That model is breaking. Demand is spiking with EV charging, heat pumps, and high-draw appliances. Solar pushes energy back onto the grid. Bi-directional flows create unpredictable local swings that DSOs can't see coming. Without visibility, DSOs fall back on overbuilding, adding expensive 100%+ capacity buffers to handle worst-case scenarios. These costs get passed on to consumers. Plexigrid builds grid management software that gives DSOs near real-time observability into load, voltage, and constraint violations. When a feeder or transformer approaches limits, Plexigrid identifies flexible loads, like EV charging times, that remove the grid constraints without expensive infrastructure upgrades. Their early proof-of-concept used four databases: InfluxDB for time-series telemetry, TigerGraph for grid topology, MySQL and PostgreSQL for relational/analytical workloads. It got them to a working POC but didn't scale. The bottleneck was InfluxDB. Resource needs were unpredictable. When sizing was off, ingestion failed completely. Storage limits stalled ingestion, leaving DSOs without visibility when they needed it most. By migrating to TimescaleDB, Plexigrid consolidated 4 databases into 1. The results: 44% faster ingest, 95% storage reduction, query speeds improved up to 350x, and memory usage during bulk imports dropped dramatically. Ingest failures disappeared. The architectural simplification mattered just as much. A unified PostgreSQL/TimescaleDB stack reduced maintenance overhead and gave them consistent deployment options without changing the core data layer. tsdb.co/qz9heygd #timescaledb #energy #postgres