Mike Freedman

2.9K posts


@michaelfreedman

Co-founder/CTO, @TigerDatabase / @TimescaleDB 🐯🦄. Professor, @PrincetonCS. Distributed systems, databases, AI, security, networking.

Princeton || NYC · Joined March 2009
395 Following · 6.6K Followers
Pinned Tweet
Mike Freedman
Mike Freedman@michaelfreedman·
Introducing TigerFS - a filesystem backed by PostgreSQL, and a filesystem interface to PostgreSQL.

Idea is simple: Agents don't need fancy APIs or SDKs, they love the file system. ls, cat, find, grep. Pipelined UNIX tools. So let’s make files transactional and concurrent by backing them with a real database.

There are two ways to use it:

File-first: Write markdown, organize into directories. Writes are atomic, everything is auto-versioned. Any tool that works with files -- Claude Code, Cursor, grep, emacs -- just works. Multi-agent task coordination is just mv'ing files between todo/doing/done directories.

Data-first: Mount any Postgres database and explore it with Unix tools. For large databases, chain filters into paths that push down to SQL: .by/customer_id/123/.order/created_at/.last/10/.export/json. Bulk import/export, no SQL needed, and ships with Claude Code skills.

Every file is a real PostgreSQL row. Multiple agents and humans read and write concurrently with full ACID guarantees. The filesystem /is/ the API.

Mounts via FUSE on Linux and NFS on macOS, no extra dependencies. Point it at an existing Postgres database, or spin up a free one on Tiger Cloud or Ghost.

I built this mostly for agent workflows, but curious what else people would use it for. It's early but the core is solid. Feedback welcome. tigerfs.io
English
77
99
1.1K
115.6K
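The chained-filter path in the announcement (.by/customer_id/123/.order/created_at/.last/10/.export/json) can be sketched as a small translator from path segments to a SQL query. This is a hypothetical reconstruction for illustration, not TigerFS's actual implementation; the segment names come from the example path, and the naive string interpolation here is not injection-safe.

```python
# Hypothetical sketch: translate a TigerFS-style "data-first" path into SQL.
# Segment names (.by, .order, .last, .export) are taken from the example in
# the post; the real TigerFS mapping may differ. Naive quoting, demo only.

def path_to_sql(table: str, path: str) -> str:
    parts = [p for p in path.strip("/").split("/") if p]
    where, order, limit = [], None, None
    i = 0
    while i < len(parts):
        op = parts[i]
        if op == ".by":        # .by/<column>/<value> -> WHERE column = value
            where.append(f"{parts[i + 1]} = '{parts[i + 2]}'")
            i += 3
        elif op == ".order":   # .order/<column> -> ORDER BY column
            order = parts[i + 1]
            i += 2
        elif op == ".last":    # .last/<n> -> newest n rows
            limit = int(parts[i + 1])
            i += 2
        elif op == ".export":  # .export/<format> -> output format (ignored here)
            i += 2
        else:
            raise ValueError(f"unknown segment: {op}")
    sql = f"SELECT * FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    if order:
        sql += f" ORDER BY {order} DESC"
    if limit is not None:
        sql += f" LIMIT {limit}"
    return sql

print(path_to_sql("orders", ".by/customer_id/123/.order/created_at/.last/10/.export/json"))
# -> SELECT * FROM orders WHERE customer_id = '123' ORDER BY created_at DESC LIMIT 10
```

The point of the design is that each path segment narrows the result set before any rows leave the database, so `cat`-ing a deep path stays cheap even on large tables.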
Mike Freedman
Mike Freedman@michaelfreedman·
@KRusenas Your agents just use the local file system as they already know how to.
English
0
0
0
5
Karolis Rusenas
Karolis Rusenas@KRusenas·
@michaelfreedman cool, would you consider also enabling this as an sdk for agents that are written in Go? they could see these functions as tools to list/edit/etc
English
0
0
0
25
Mike Freedman reposted
Mike Freedman
Mike Freedman@michaelfreedman·
Introducing TigerFS - a filesystem backed by PostgreSQL, and a filesystem interface to PostgreSQL. …
English
77
99
1.1K
115.6K
martin_casado
martin_casado@martin_casado·
This is really, really cool. Sort of the file system equivalent of the agent sandbox environments. Imma use it for my setup ...
Mike Freedman@michaelfreedman

Introducing TigerFS - a filesystem backed by PostgreSQL, and a filesystem interface to PostgreSQL. …

English
15
3
169
35K
Mike Freedman
Mike Freedman@michaelfreedman·
How @nvidia leverages @TimescaleDB for structured telemetry in its reference architecture for the Multi-Agent Intelligent Warehouse (MAIW).
Tiger Data - Creators of TimescaleDB@TigerDatabase

The most important AI systems of the next decade will not live in chat windows. They will run factories, warehouses, energy systems, and fleets. AI is moving from analyzing operations to helping run them.

Earlier this year, @nvidia introduced its Multi-Agent Intelligent Warehouse blueprint. What stood out was not just the agents themselves, but the architecture behind them. Instead of dashboards and alerts, specialized agents coordinate across machine telemetry, robotics systems, workforce operations, forecasting, and inventory to support real-time decisions in the physical world.

You can see the architecture NVIDIA is proposing here: developer.nvidia.com/blog/multi-age… And the full blueprint here: build.nvidia.com/nvidia/multi-a…

Systems like this depend on continuous access to operational data. Factories, warehouses, and energy systems already generate massive streams of telemetry from sensors, robots, PLCs, and machines. The challenge has never been collecting the data. The challenge is to reason quickly enough to act.

The architecture starts to look like this: machines → telemetry → database → AI agents → decisions → machines. This creates a real-time operational data loop.

Agents do not operate in isolation. They need access to the operational history of the systems they manage. Telemetry, events, anomalies, and trends over time. In agent-driven industrial systems, the database becomes the memory layer for machines.

Many industrial platforms already rely on Postgres and TimescaleDB to store and analyze time-series telemetry from machines and infrastructure. At Tiger Data, the company behind TimescaleDB, we see this pattern across industrial IoT platforms, fleet monitoring systems, and manufacturing analytics.

The future of industrial AI is not just better models. It is systems that can continuously reason across operational data.

English
1
4
16
2.3K
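The telemetry loop above (machines → telemetry → database → agents → decisions) can be sketched in a few lines. This is a toy illustration, not the NVIDIA blueprint or TimescaleDB itself: the standard library's sqlite3 stands in for Postgres/TimescaleDB, and the schema, machine names, and threshold are all made up.

```python
import sqlite3

# Toy sketch of the operational data loop. sqlite3 is a stand-in for
# Postgres/TimescaleDB; the telemetry schema and values are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE telemetry (machine_id TEXT, metric TEXT, value REAL, ts INTEGER)"
)

# machines -> telemetry -> database
readings = [
    ("robot-1", "motor_temp_c", 61.0, 1),
    ("robot-1", "motor_temp_c", 78.5, 2),
    ("robot-2", "motor_temp_c", 55.2, 2),
]
conn.executemany("INSERT INTO telemetry VALUES (?, ?, ?, ?)", readings)

# database -> AI agent: reason over operational history, flag anomalies
THRESHOLD = 70.0
hot = conn.execute(
    "SELECT machine_id, MAX(value) FROM telemetry "
    "WHERE metric = 'motor_temp_c' "
    "GROUP BY machine_id HAVING MAX(value) > ?",
    (THRESHOLD,),
).fetchall()

# agent -> decision -> machines (here, the "actuation" is just a print)
for machine_id, peak in hot:
    print(f"throttle {machine_id}: peak motor temp {peak}")
```

In a real deployment the agent would query a continuously growing hypertable rather than three rows, but the shape of the loop is the same: write telemetry in, query history out, act on the result.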
Mike Freedman
Mike Freedman@michaelfreedman·
@arnkamath No more indexes than a standard database. To be clear, we aren't doing full-text indexing of the _files_, if that is what you are asking. Although that would be possible now in Postgres: github.com/timescale/pg_t…
English
1
0
2
26
Arnav Kamath
Arnav Kamath@arnkamath·
@michaelfreedman Won't it make the indexes etc bloated eating away at memory? What optimizations are you using to control for file store?
English
1
0
0
20
Mike Freedman
Mike Freedman@michaelfreedman·
@kunksed You can point it at any existing Postgres (self-hosted, RDS), and it has special support for creating/forking/managing databases via Tiger Cloud or Ghost.
English
0
0
1
106
Raj Kunkolienkar
Raj Kunkolienkar@kunksed·
@michaelfreedman I’m sold on the premise after trying git, shared google drive etc etc. How do I host this for my team?
English
1
0
1
187
Chris Dietrich 🦞
Chris Dietrich 🦞@chrisdietr·
@michaelfreedman I tried it since I am already moving all my agentic workflow data to postgres. durable execution layer is already in there. Hit a bug on macOS which seems to come from one of the dependencies. Do you have plans to open source this? Did you test it on macOS?
English
3
0
3
844
Mike Freedman
Mike Freedman@michaelfreedman·
@chrisdietr Open sourcing shortly. 90% of my testing/use has been on Mac. Please DM me.
English
1
0
5
711
Mike Freedman
Mike Freedman@michaelfreedman·
You can have many separate file systems at different mount points: /mnt/foo, /mnt/bar, /mnt/baz. In fact, there's this cool functionality that builds on database forking, where you can:

$ tigerfs fork /mnt/foo /mnt/foo-copy

The point I was making is that each mount point is backed by a single database, not that you can only have one mount point.
English
1
0
0
23
B. Leatherwood
B. Leatherwood@maylivesforever·
@michaelfreedman good points. like i can already mount an in memory filesystem using code, mounting a postgres backed one would be great, but i didn’t realize it would only be able to mount one distinct filesystem. i was hoping to access many different ones and via the typescript bindings
English
1
0
0
21
Mike Freedman
Mike Freedman@michaelfreedman·
@Mad_dev Yes, it's just Postgres, so you can use SQL / psql shell whenever you want.
English
0
0
1
33
Kenneth Auchenberg 🛠
Thought: Someone needs to make it easy to load and save relevant data from my database into an agent filesystem
English
3
0
1
803
Mike Freedman
Mike Freedman@michaelfreedman·
@akshay_elavia Is the main thing you want that they see each other's writes? I.e., you don't want them individually to use git as the backing store / communication channel? I don't understand enough about the problem you're trying to solve.
English
0
0
0
555
akshay elavia
akshay elavia@akshay_elavia·
@michaelfreedman do you think this will help if i have a bunch of code repos cloned locally, and i want my agents to navigate and work across them? if not, any pointers for my use case?
English
2
0
0
626
Mike Freedman
Mike Freedman@michaelfreedman·
@Mad_dev Large bulk export is more efficient with something like: `cat data/.columns/foo/export/.csv` I built a mini-DSL that basically maps a virtual directory structure into predicate pushdown to the database: tigerfs.ai/docs#data-first
English
1
0
2
171
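Reading a virtual path like `data/.columns/foo/export/.csv` presumably streams one column of the table as CSV. A minimal sketch of that output shape, using Python's csv module and made-up sample rows (the `data`/`foo` names come from the command in the reply, everything else is invented):

```python
import csv
import io

# Made-up sample rows standing in for a Postgres table named "data";
# the real export would stream rows from the database, not a list.
rows = [{"foo": "alpha", "bar": 1}, {"foo": "beta", "bar": 2}]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["foo"])            # header: the selected column
for row in rows:
    writer.writerow([row["foo"]])   # one value per row, other columns dropped

print(buf.getvalue(), end="")
```

The efficiency argument is that the column selection happens inside the database (projection pushdown), so only the `foo` values ever cross the wire, rather than full rows filtered client-side.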
Pyre
Pyre@Mad_dev·
@michaelfreedman Interesting! How would agents read all the records in a single data field (column)? Any performance penalty vs. vanilla PostgreSQL? How does it handle time series data?
English
1
0
0
138
Mike Freedman
Mike Freedman@michaelfreedman·
@thinkx Thanks! I tried to use macFUSE at first, but it was much more of a pain than NFS. And when Apple started asking me to boot into recovery mode or something to give sufficient kernel permissions to use FUSE, I knew that wasn't going to fly.
Plainsboro, NJ 🇺🇸 English
1
0
1
93
think(x)
think(x)@thinkx·
One of the wildest early projects from Google was the MacFUSE contributions in 2007. This contribution from TigerData completes the circle for me! mind*blown 🤯 #postgres #fuse
Mike Freedman@michaelfreedman

Introducing TigerFS - a filesystem backed by PostgreSQL, and a filesystem interface to PostgreSQL. …

English
1
0
3
318
Mike Freedman
Mike Freedman@michaelfreedman·
Mike Freedman@michaelfreedman

AgentFS is from @tursodatabase, not Neon. But it's (subtly but meaningfully) different. AgentFS is literally trying to build a full file-system, backed by SQLite. Files are broken into segments, segments are backed by the database. It's a more "traditional" view of a remote storage layer. TigerFS started by thinking of exposing the database as a file system (not vice versa), then added the reverse of building "apps" on top of this abstraction layer. Which is how we get files with directories (especially markdown), auto-history, etc. These are synthesized "apps" built on top of it.

Plainsboro, NJ 🇺🇸
0
0
1
360