Tiger Data - Creators of TimescaleDB

357 posts

@TigerDatabase

The fastest PostgreSQL cloud platform for time series, real-time analytics, and vector workloads. Creators of TimescaleDB. https://t.co/KhYccImbg5

Joined May 2025
20 Following · 1.3K Followers
Pinned Tweet
Tiger Data - Creators of TimescaleDB
🐯 @TimescaleDB is now TigerData! 🚀

When we launched Timescale, the top Hacker News comment said it was “a bad idea.” PostgreSQL wasn’t supposed to be fast. Or scalable. Or useful for time-series.

8 years later:
- 2,000 customers
- 8-digit ARR
- Most workloads aren’t even time-series anymore

We’ve changed our name to reflect that evolution: Timescale is now TigerData. Same code. Same team. Still PostgreSQL. Just a lot faster.
Tiger Data - Creators of TimescaleDB retweeted
Mike Freedman @michaelfreedman
Introducing TigerFS - a filesystem backed by PostgreSQL, and a filesystem interface to PostgreSQL.

Idea is simple: Agents don't need fancy APIs or SDKs, they love the file system. ls, cat, find, grep. Pipelined UNIX tools. So let’s make files transactional and concurrent by backing them with a real database.

There are two ways to use it:

File-first: Write markdown, organize into directories. Writes are atomic, everything is auto-versioned. Any tool that works with files -- Claude Code, Cursor, grep, emacs -- just works. Multi-agent task coordination is just mv'ing files between todo/doing/done directories.

Data-first: Mount any Postgres database and explore it with Unix tools. For large databases, chain filters into paths that push down to SQL: .by/customer_id/123/.order/created_at/.last/10/.export/json. Bulk import/export, no SQL needed, and ships with Claude Code skills.

Every file is a real PostgreSQL row. Multiple agents and humans read and write concurrently with full ACID guarantees. The filesystem /is/ the API. Mounts via FUSE on Linux and NFS on macOS, no extra dependencies. Point it at an existing Postgres database, or spin up a free one on Tiger Cloud or Ghost.

I built this mostly for agent workflows, but curious what else people would use it for. It's early but the core is solid. Feedback welcome. tigerfs.io
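The data-first filter path reads like a tiny grammar over SQL. As an illustration only — TigerFS's actual pushdown isn't published in this post, and `path_to_sql`, the `orders` table, and the assumption that `.last` means "order descending, then limit" are all mine — here is one way such a path could map to a query:

```python
def path_to_sql(table, path):
    """Translate a TigerFS-style filter path into SQL (illustrative sketch only)."""
    parts = [p for p in path.strip("/").split("/") if p]
    where, order, limit = [], None, None
    i = 0
    while i < len(parts):
        op = parts[i]
        if op == ".by":            # .by/<column>/<value> -> equality filter
            where.append(f"{parts[i + 1]} = '{parts[i + 2]}'")
            i += 3
        elif op == ".order":       # .order/<column> -> sort key
            order = parts[i + 1]
            i += 2
        elif op == ".last":        # .last/<n> -> newest n rows (assumed DESC)
            limit = int(parts[i + 1])
            i += 2
        elif op == ".export":      # .export/<format> affects output, not the SQL
            i += 2
        else:
            i += 1
    sql = f"SELECT * FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    if order:
        sql += f" ORDER BY {order} DESC"
    if limit:
        sql += f" LIMIT {limit}"
    return sql

print(path_to_sql("orders", ".by/customer_id/123/.order/created_at/.last/10/.export/json"))
```

The point of the sketch: each path segment narrows the result set, so the filesystem path itself becomes the query plan.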
Tiger Data - Creators of TimescaleDB
Your IIoT PoC will succeed. Your production deployment is a different story.

McKinsey found that 74% of manufacturers get stuck in "pilot purgatory" and database architecture is a big reason why. What works at 1,000 sensors breaks at 100,000. The failure modes are all predictable.

Three constraints determine where your IIoT database will break:
▪ Storage scales quadratically with retention time. That small monthly storage bill becomes a budget problem in year three.
▪ Ingest rate limits are invisible until you cross them. Index updates slow incrementally as your database grows. Hit the ceiling once and the backlog never recovers.
▪ Query performance degrades on a known curve. Indexed queries slow logarithmically. Aggregates slow linearly. The dashboard that loaded in 50ms starts timing out.

All of this is measurable during the PoC phase. Model these three constraints before you scale and you'll know exactly when your system will need to change, before it breaks in production.

Full breakdown in this Tiger Data (creators of @TimescaleDB) whitepaper: tsdb.co/wru5et0i
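The "logarithmic vs. linear" claim is easy to model yourself. A toy sketch, where every constant is an assumption for illustration (not a measurement from the whitepaper): an indexed lookup touches a number of B-tree levels proportional to log₂(n), while a full aggregate scan touches every row.

```python
import math

# All constants below are assumed for illustration only.
ROWS_PER_DAY = 10_000 * 60 * 24   # e.g. 10k sensors reporting once a minute
INDEX_US_PER_LEVEL = 2.0          # assumed cost per B-tree level touched (µs)
SCAN_US_PER_ROW = 0.001           # assumed per-row cost of an aggregate scan (µs)

def latency_ms(days):
    """Rough latency of an indexed lookup vs. a full aggregate after `days` of ingest."""
    n = ROWS_PER_DAY * days
    indexed = INDEX_US_PER_LEVEL * math.log2(n) / 1000   # grows logarithmically
    aggregate = SCAN_US_PER_ROW * n / 1000               # grows linearly
    return indexed, aggregate

for days in (30, 365, 3 * 365):
    idx, agg = latency_ms(days)
    print(f"{days:>4} days: indexed lookup ~{idx:.3f} ms, full aggregate ~{agg:,.0f} ms")
```

Even with made-up constants, the shape is the point: the indexed curve barely moves over three years while the aggregate curve grows in direct proportion to retention.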
Tiger Data - Creators of TimescaleDB retweeted
Mike Freedman @michaelfreedman
How @nvidia leverages @TimescaleDB for structured telemetry in its reference architecture for the Multi-Agent Intelligent Warehouse (MAIW).
Tiger Data - Creators of TimescaleDB @TigerDatabase [quoted post; the full text appears as its own post below]
Tiger Data - Creators of TimescaleDB
Most industrial machine monitoring is designed for large enterprises. High costs, rigid 10-device minimums, multi-year contracts. Small and mid-sized manufacturers, the backbone of the supply chain, are locked out.

Takton built Sense Manufacturing to change that. Affordable machine monitoring that starts at one device. Approximately 30% lower total cost in year one, up to 70% lower in subsequent years compared to competitors.

The team initially evaluated InfluxDB. It performed well for high-frequency data across a limited number of streams, but couldn't deliver on their production requirements: ingesting data from thousands of devices, each reporting power and vibration a few times per minute. They also wanted to avoid stack fragmentation. Their first product ran on Supabase using standard SQL and Postgres. Adding InfluxDB would mean maintaining a second query language and storage paradigm for a small team.

Why they chose Tiger Data:
▪ Built on Postgres—kept a single SQL-based stack
▪ Designed for high-rate data ingestion from thousands of devices
▪ Tiger Cloud offloaded operational burden from a two-engineer team
▪ Hypertables provided automatic partitioning for reliable ingestion at scale

Two months from first commit to devices live in customer facilities. One pilot customer's CNC machine failed after Sense flagged anomalous vibration readings for four days, but notifications weren't enabled. The failure cost $50,000 and three months of downtime. With alerts on, it would have been a $3,000 part change.

How this startup shipped production IoT devices in 60 days, covered in this article by Farbod Moghaddam, CTO & Co-founder, Sense Manufacturing Inc: tsdb.co/yaaa0rao
Tiger Data - Creators of TimescaleDB
The most important AI systems of the next decade will not live in chat windows. They will run factories, warehouses, energy systems, and fleets. AI is moving from analyzing operations to helping run them.

Earlier this year, @nvidia introduced its Multi-Agent Intelligent Warehouse blueprint. What stood out was not just the agents themselves, but the architecture behind them. Instead of dashboards and alerts, specialized agents coordinate across machine telemetry, robotics systems, workforce operations, forecasting, and inventory to support real-time decisions in the physical world.

You can see the architecture NVIDIA is proposing here: developer.nvidia.com/blog/multi-age… And the full blueprint here: build.nvidia.com/nvidia/multi-a…

Systems like this depend on continuous access to operational data. Factories, warehouses, and energy systems already generate massive streams of telemetry from sensors, robots, PLCs, and machines. The challenge has never been collecting the data. The challenge is to reason quickly enough to act.

The architecture starts to look like this: machines → telemetry → database → AI agents → decisions → machines

This creates a real-time operational data loop. Agents do not operate in isolation. They need access to the operational history of the systems they manage. Telemetry, events, anomalies, and trends over time. In agent-driven industrial systems, the database becomes the memory layer for machines.

Many industrial platforms already rely on Postgres and TimescaleDB to store and analyze time-series telemetry from machines and infrastructure. At Tiger Data, the company behind TimescaleDB, we see this pattern across industrial IoT platforms, fleet monitoring systems, and manufacturing analytics.

The future of industrial AI is not just better models. It is systems that can continuously reason across operational data.
Tiger Data - Creators of TimescaleDB
We're excited to share that Tiger Data is joining Chainguard Commercial Builds to provide secure, production-ready container images.

Starting 3/17, TimescaleDB is available packaged inside the Chainguard Factory, a SLSA Level 3-compliant system that delivers:
✅ Minimal attack surface
✅ Zero CVEs
✅ Full provenance & SBOMs
✅ FIPS readiness
✅ SLAs for vulnerability remediation

Enterprises shouldn't have to choose between getting value from their software and maintaining the security layers beneath it. This partnership means your teams get TimescaleDB with the most secure software supply chain on the market - without the overhead of building, patching, or monitoring images yourself.

👉 tsdb.co/wdk5fxrn

#SoftwareSupplyChain #Chainguard #TimescaleDB
Tiger Data - Creators of TimescaleDB
Every ship in the world broadcasts its position every few seconds. At @vessel_api, they process 700,000 of those reports per hour. They ran it on MongoDB for a year. Then they stopped.

The data looked like documents. You could serialize a position report as JSON. MongoDB stored it fine. But vessel positions are measurements. Timestamp and location aren't just metadata. They answer questions like: Where was this ship two hours ago? What's within 50km of Rotterdam right now? What passed through the English Channel since Tuesday?

At 700K rows per hour, fighting the grain of your database gets expensive fast.

The @TimescaleDB migration gave them hypertables with 1-hour chunks, automatic compression segmented by vessel identifier, and retention policies that drop aged-out partitions in milliseconds, replacing a cron job that once failed silently for a week.

But the really clever engineering is the spatial query layer: a three-stage filter using H3 hexagonal indexing, PostGIS, and chunk pruning that takes 16.5 million candidate rows down to hundreds.

And then there's the bug. A single mismatched struct tag, a leftover bson serialization name from the MongoDB era colliding with the new JSON field name, silently broke the entire port data pipeline. 156,000 port events created with no geographic identifier. The fix was changing one string. The deeper fix was finding 240 more vestigial bson tags scattered across the codebase, each one a potential repeat.

All of it runs on a single EC2 r7i.large. 2 vCPUs, 16 GB RAM.

Worth the read whether you're considering a migration or not: vesselapi.com/blog/mongodb-t…
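A sketch of the hypertable setup the post describes. The functions (`create_hypertable`, `add_compression_policy`, `add_retention_policy`) are TimescaleDB's documented API; the table name, columns, and policy intervals are illustrative, not taken from the VesselAPI codebase.

```python
# Illustrative TimescaleDB DDL matching the features named in the post:
# 1-hour chunks, compression segmented by vessel id, drop-based retention.
setup = [
    """CREATE TABLE vessel_positions (
        ts        timestamptz NOT NULL,
        vessel_id text        NOT NULL,
        lat       double precision,
        lon       double precision
    );""",
    # Hypertable partitioned into 1-hour chunks
    "SELECT create_hypertable('vessel_positions', 'ts', "
    "chunk_time_interval => INTERVAL '1 hour');",
    # Compression segmented by vessel identifier
    "ALTER TABLE vessel_positions SET (timescaledb.compress, "
    "timescaledb.compress_segmentby = 'vessel_id');",
    "SELECT add_compression_policy('vessel_positions', INTERVAL '7 days');",
    # Retention drops whole aged-out chunks (fast metadata work, no DELETE)
    "SELECT add_retention_policy('vessel_positions', INTERVAL '90 days');",
]
for stmt in setup:
    print(stmt, "\n")
```

Dropping a chunk is why retention runs in milliseconds: it is a metadata operation, not a row-by-row delete that autovacuum then has to clean up.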
Tiger Data - Creators of TimescaleDB
Postgres isn't broken. Your workload eliminated the quiet period it was designed around.

Here's a pattern that trips up even experienced teams. Write latency develops a rhythm you can't explain by traffic. Autovacuum is always running. Maintenance that used to take minutes now takes hours. Indexes, query plans, configs — everything looks correct. Nothing is misconfigured.

The problem is architectural. Postgres maintenance was built around valleys. Batch ETL writes for two hours, the database rests and catches up. Continuous ingestion has no valleys. Every maintenance process runs in direct competition with writes. All day. All night.

@mattstratton from @TigerDatabase breaks down exactly where the mechanics stop working. The workloads where this hits hardest — IoT, financial feeds, observability — share one trait: the data source runs on its own schedule, independent of what the database needs.

Every quarter optimizing within the wrong architecture is a quarter where migration gets harder.

Full breakdown from Matty: tsdb.co/1jjv7yx2
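One concrete way to see whether autovacuum is keeping up is to watch dead-tuple counts per table. The query below uses PostgreSQL's standard `pg_stat_user_tables` view; the dead-tuple ratio and the LIMIT are illustrative choices, not thresholds from the article.

```python
# Standard catalog query: tables with the most accumulated dead tuples,
# plus when autovacuum last reached them. Run against any Postgres database.
AUTOVACUUM_CHECK = """
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / greatest(n_live_tup, 1), 1) AS dead_pct,
       last_autovacuum,
       autovacuum_count
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
"""
print(AUTOVACUUM_CHECK)
```

If `n_dead_tup` keeps climbing between samples and `last_autovacuum` stays stale on a hot table, maintenance is losing the race with ingest, which is exactly the "no valleys" failure mode the post describes.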
Tiger Data - Creators of TimescaleDB retweeted
VesselAPI @vessel_api
We migrated our maritime AIS platform from MongoDB to TimescaleDB — hypertables with 1-hour chunks, H3 hexagonal spatial indexing, and PostGIS on a single r7i.large. Also found 240 dead BSON struct tags that silently broke our port pipeline. Full writeup: vesselapi.com/blog/mongodb-t…
Tiger Data - Creators of TimescaleDB
You've spent two years fixing slow queries with indexes. It works every time. Then write latency starts climbing—and nothing looks wrong.

Here's what's actually happening: every index you add is a permanent tax on every insert. Not a one-time cost. Every. Single. Row. At a few hundred inserts per second, it's invisible. At tens of thousands, it's the entire problem.

The feedback loop is what makes it so hard to catch. Writes slow down → autovacuum falls behind → query performance degrades anyway → you add another index to compensate. Six months can pass between cause and symptom. By the time you feel it, the original decision is long forgotten.

There's also a specific behavior with timestamp indexes that makes things worse than the headline number suggests — worth understanding before you run `CREATE INDEX` again.

This @TigerDatabase (creators of @TimescaleDB) article by Matty Stratton covers the write amplification math, what to look for in `pg_stat_user_indexes` before adding the next index, and where index pruning stops being enough: tsdb.co/r09ph2pu
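The post points at `pg_stat_user_indexes`. A common way to use it before adding yet another index is to find the indexes you are already paying for on every insert but never reading. The view and its columns (`idx_scan`, `indexrelid`) are standard PostgreSQL; the zero-scan filter is an illustrative starting point, not a rule from the article.

```python
# Standard catalog query: indexes with zero scans since stats were last reset,
# largest first. Each one still costs a write on every INSERT/UPDATE.
UNUSED_INDEXES = """
SELECT schemaname,
       relname,
       indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
"""
print(UNUSED_INDEXES)
```

One caveat worth remembering: `idx_scan` counts since the last statistics reset, so check the observation window before dropping anything.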
Tiger Data - Creators of TimescaleDB
Managing energy costs for households and businesses has never been more important, yet many Distribution System Operators (DSOs) lack the visibility to see what's happening in low-voltage grids in real time.

For decades, this didn't matter. Power flowed one way: from centralized generation to consumers. High-level monitoring was enough. That model is breaking. Demand is spiking with EV charging, heat pumps, and high-draw appliances. Solar pushes energy back onto the grid. Bi-directional flows create unpredictable local swings that DSOs can't see coming.

Without visibility, DSOs fall back on overbuilding, adding expensive 100%+ capacity buffers to handle worst-case scenarios. These costs get passed on to consumers.

Plexigrid builds grid management software that gives DSOs near real-time observability into load, voltage, and constraint violations. When a feeder or transformer approaches limits, Plexigrid identifies flexible loads, like EV charging times, that remove the grid constraints without expensive infrastructure upgrades.

Their early proof-of-concept used four databases: InfluxDB for time-series telemetry, TigerGraph for grid topology, and MySQL and PostgreSQL for relational/analytical workloads. It got them to a working PoC but didn't scale. The bottleneck was InfluxDB. Resource needs were unpredictable. When sizing was off, ingestion failed completely. Storage limits stalled ingestion, leaving DSOs without visibility when they needed it most.

By migrating to TimescaleDB, Plexigrid consolidated four databases into one. The results: 44% faster ingest, 95% storage reduction, query speeds improved up to 350x, and memory usage during bulk imports dropped dramatically. Ingest failures disappeared.

The architectural simplification mattered just as much. A unified PostgreSQL/TimescaleDB stack reduced maintenance overhead and gave them consistent deployment options without changing the core data layer.

tsdb.co/qz9heygd

#timescaledb #energy #postgres
Tiger Data - Creators of TimescaleDB
This @TigerDatabase (creators of @TimescaleDB) newsletter edition covers what it takes to stay on Postgres as data grows. Plus, customer stories and upcoming events. Read it here: lnkd.in/e7U-cWiv

What's inside:
▪ Product Update - TimescaleDB 2.25: 289× faster aggregations on compressed data, up to 50× faster time-filtered COUNTs, improved chunk pruning
▪ Thought Leadership - New whitepaper: why MVCC, row storage, B-tree indexes, and WAL volume create diminishing returns at scale
▪ From the Blog
- Six signs Postgres tuning won't fix your performance problems
- How to stress test your IIoT database before production does it for you
- Vertical scaling: why bigger instances don't change the trajectory
▪ Impactful Use Cases
- Glooko: 100M+ daily glucose readings, 95% compression, 480× faster queries, 40% lower costs
- MarketReader: 3M trades/min with hypertables, continuous aggregates, and pgvectorscale - all in one Postgres database
▪ Events - GrafanaCON 2026 (April 20-22, Barcelona): Meet us at booth #2

#PostgreSQL #TimeSeries #Developers
Tiger Data - Creators of TimescaleDB
The PoC worked perfectly. Millisecond inserts. Instant dashboards. Everyone's happy. Then you tried to scale it.

74% of manufacturers get stuck in pilot purgatory. A big reason: the architecture that handles 10 sensors falls apart at 10,000. The database that felt fast at millions of rows hits a wall at billions. Understanding where those limits are before you hit them — that's the performance envelope.

Three constraints every IIoT system eventually runs into:

Storage compounds fast. 10,000 tags at 1Hz is over 315 billion rows and ~37.5TB in a single year. The early months are cheap. The lifecycle cost is not.

Ingest has a hard ceiling. As indexes and WAL records grow, so does the time required to process each batch. Hit that ceiling and the backlog never recovers.

Query performance degrades silently. Aggregate queries that ran in milliseconds at launch slow to seconds, then minutes, as retention grows. You optimized for the PoC, but the final project has different math.

This @TigerDatabase (creators of @TimescaleDB) whitepaper by Doug Pagnutti maps all three into a single framework — a performance envelope you can measure, plot, and plan against. If you're in the design or PoC phase of an IIoT project, this is the capacity planning reference worth bookmarking: tsdb.co/9gmx2tg7
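The storage figures above are easy to sanity-check. The row count follows directly from the stated workload; the bytes-per-row figure is inferred from the post's ~37.5TB claim, since the post does not state a row width.

```python
# Back-of-envelope check of the figures in the post: 10,000 tags at 1 Hz.
SECONDS_PER_YEAR = 365 * 24 * 3600            # 31,536,000
tags, hz = 10_000, 1

rows_per_year = tags * hz * SECONDS_PER_YEAR
print(f"rows/year: {rows_per_year:,}")        # matches "over 315 billion"

# What ~37.5 TB/year implies per row on disk (tuple header, payload, and
# index overhead combined) -- inferred, not stated in the post:
implied_bytes_per_row = 37.5e12 / rows_per_year
print(f"implied bytes/row: ~{implied_bytes_per_row:.0f}")
```

Running the same three lines with your own tag count, sample rate, and measured row width is the cheapest capacity-planning exercise available during a PoC.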
Tiger Data - Creators of TimescaleDB
Partitioning is the right call. Until it isn't.

You implemented it six months ago. `DROP TABLE` replaced your multi-hour `DELETE`s. Retention is clean. Autovacuum pressure dropped. Queries got faster.

Then your quarterly report started taking forever. A new engineer broke the partition gap monitor. `pg_partman` failed silently over a weekend and you lost data nobody noticed was missing.

These are tradeoffs. And @mattstratton just wrote a sharp breakdown of the ones nobody warns you about. How much engineering time goes into managing your partitioning scheme vs. building product?

Full @TigerDatabase article breaks down exactly when partitioning is the right call vs. when you're buying time: tsdb.co/uc1aythy

#PostgreSQL #TimeSeries #TimescaleDB
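For readers who haven't set this up: the pattern the post describes is PostgreSQL's declarative range partitioning, where retention is a `DROP TABLE` on an old partition instead of a bulk `DELETE`. The DDL syntax below is standard Postgres; the table, columns, and monthly ranges are illustrative.

```python
# Illustrative native-partitioning DDL: monthly range partitions and
# drop-based retention. Someone (or pg_partman) must keep creating the
# next partition -- the "partition gap" failure the post mentions.
PARTITIONING = """
CREATE TABLE metrics (
    ts     timestamptz NOT NULL,
    device text        NOT NULL,
    value  double precision
) PARTITION BY RANGE (ts);

CREATE TABLE metrics_2025_01 PARTITION OF metrics
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

CREATE TABLE metrics_2025_02 PARTITION OF metrics
    FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');

-- Retention: dropping a partition is near-instant metadata work,
-- unlike a multi-hour DELETE followed by vacuum cleanup.
DROP TABLE metrics_2024_01;
"""
print(PARTITIONING)
```

The tradeoff the post is pointing at lives in that middle section: every future month needs a partition created before rows arrive, and nothing in core Postgres does that for you.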