Filecoin (@Filecoin)
13.9K posts

The world’s largest decentralized storage network. Enterprise-scale storage with verifiable data and real data sovereignty – for AI and beyond. ⨎

Joined July 2014
484 Following · 661K Followers

Pinned Tweet
Filecoin @Filecoin
1/ Introducing Filecoin Onchain Cloud: an open, verifiable cloud built on content-addressed data, transparent service delivery, and programmable payments. All onchain. No vendor lock-in.
329 replies · 566 reposts · 3.6K likes · 533.2K views
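The "content-addressed data" in the pinned post means data whose address is derived from its own bytes, so any party can re-verify what they fetched. A minimal sketch of the idea in Python, using plain SHA-256; real Filecoin/IPFS addresses wrap the hash in multihash/CID encodings, which this deliberately omits:

```python
import hashlib

def content_address(data: bytes) -> str:
    # Address derived from the bytes themselves (plain SHA-256 here;
    # real CIDs add multihash/multicodec framing on top of the digest).
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_address: str) -> bool:
    # Anyone can re-hash and compare; no trusted platform required.
    return content_address(data) == expected_address

blob = b"hello onchain cloud"
addr = content_address(blob)
assert verify(blob, addr)             # intact bytes verify
assert not verify(b"tampered", addr)  # any change breaks the address
```

This is what makes the "no vendor lock-in" claim mechanical rather than contractual: the address commits to the content, so the same bytes verify identically no matter which provider serves them.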
Filecoin @Filecoin
Six months of engineering time on storage, retrieval, and payments before shipping a single feature costs a Series A team roughly $300,000, all spent on already-solved problems. Filecoin Onchain Cloud removes all three from the buildout list: verified storage, fast retrieval, automatic payments.
[image attached]
11 replies · 15 reposts · 157 likes · 27.1K views

Filecoin @Filecoin
For trading and logistics agents, a 5-minute delay in data confirmation breaks execution entirely. Stale data doesn't slow agents, it makes them wrong. Filecoin Fast Finality (F3) makes the network practical for autonomous systems that need storage to confirm in minutes.
[image attached]
8 replies · 7 reposts · 110 likes · 105.5K views
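The finality constraint described in the F3 post can be pictured as a deadline-bounded wait: the agent polls until its stored data is confirmed final, or gives up and refuses to act on unconfirmed state. A hypothetical sketch; `check_finalized` is a stand-in for whatever confirmation call a real client exposes, not an actual Filecoin API:

```python
import time

def await_finality(check_finalized, deadline_s: float, poll_s: float = 5.0) -> bool:
    # Poll until the storage is confirmed final or the deadline passes.
    # check_finalized() is a hypothetical callback; an agent that acts
    # before it returns True risks executing on data that may still change.
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if check_finalized():
            return True
        time.sleep(poll_s)
    return False
```

The post's point maps onto the return value: with slow finality, `deadline_s` expires and the agent must either stall or act on stale state; minutes-scale finality keeps the happy path inside the trading or logistics window.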
Filecoin @Filecoin
@Schuldensuehner As compute investment scales, the data layer those models train on becomes more important. Verifiable, durable storage is the infrastructure that makes the rest of it trustworthy.
0 replies · 1 repost · 16 likes · 873 views

Holger Zschaepitz @Schuldensuehner
Morgan Stanley has again raised its capex forecasts for the five hyperscalers Amazon, Alphabet, Meta, Microsoft, and Oracle. It now expects them to spend about $805bn this year, up from a previous estimate of $765bn. For next year, the forecast has been lifted from $951bn to $1.1TRILLION. To put that into perspective, their 2026 spending alone would be roughly equal to what all non-tech companies in the S&P 500 spent combined in 2025. The expected ~$800bn for 2026 is nearly double 2025 levels and about three times what was spent in 2024.
[image attached]
100 replies · 367 reposts · 1.5K likes · 722.9K views

Filecoin @Filecoin
@garrytan Owning your data requires infrastructure that enforces it. Verifiable, decentralized storage means the data can't be modified or deleted by a platform you don't control.
0 replies · 2 reposts · 19 likes · 609 views

Garry Tan @garrytan
The goal of Personal AI: civilization where individual humans, augmented by AI, can do consequential work without being captured by extractive institutions. Freedom to write your prompt and own your data. This is the new battleground. 2034 won’t have to be like 1984.
134 replies · 95 reposts · 922 likes · 132.5K views

Filecoin @Filecoin
@GoogleCloudTech "Build the context once" only works if the underlying data is trustworthy. Verifiable storage is what makes the context engine reliable, not just convenient.
0 replies · 1 repost · 8 likes · 861 views

Google Cloud Tech @GoogleCloudTech
Stop forcing your agents to guess the unwritten rules of your business. Build the context once, then unleash your agents to do the rest—with Knowledge Catalog. Knowledge Catalog serves as the universal context engine for your enterprise. Learn more → goo.gle/3Pd8iqW
[image attached]
9 replies · 84 reposts · 580 likes · 43.6K views

Filecoin @Filecoin
@mcuban Model fragmentation makes the data layer more important. When enterprises run hundreds of AI models, the one consistent layer has to be the data they act on – verifiable, portable, not locked to any single vendor.
0 replies · 1 repost · 15 likes · 594 views

Mark Cuban @mcuban
Every LLM is a walled garden in a race to beat the hell out of the next foundational model. They all are hoping it’s not like search with one dominant player. They have to invest like it might be. That won’t change for ????

Every enterprise has to keep up with their changing and new models and decide when to move. When to go side by side. When to delete. That’s going to be stressful. And as long as those models don’t truly integrate, and will that ever happen, the amount of work for enterprises to maintain AI and be competitive is going to keep on growing and getting more expensive.

And there will be a time when genAI models will be superseded by world view models, and who knows what comes after that. It’s going to take so many people specializing in various layers and levels of AI.

In the next 5 years enterprise AI is going to be a mess, with all the different implementations and flavors and sources and models. It’s not inconceivable there can be hundreds of different models in each big enterprise. Just because the company got overwhelmed trying to keep everything tied together. Which in turn could lead very large companies to choose to divest subsidiaries rather than thinking there is benefit from scale. Scale may be a boat anchor to your business. Purely because of AI.

Curious what everyone thinks?
Aaron Levie @levie

Whether it’s existing consulting firms, new ones that emerge, FDEs from agent vendors, or new internal agent engineering roles, the amount of work that is going to be created to implement agents in enterprises will exceed anything we imagine today.

The complexity of implementing agents in any existing organizations is very real. When I talk to large enterprises, as you move from a chat paradigm to agents that participate in meaningful workflows, there are a number of things they need to do.

First, you have to get agents to be able to talk to your data securely across your systems. In many cases, enterprises have decades of legacy infrastructure that contain the valuable context for AI agents. That’s going to take a ton of work to go modernize and move to systems that work well with agents.

Then, you need to ensure that you’ve implemented agents with the right access controls and entitlements, the right scopes to be safely used, and have ways of monitoring, logging, and securing the work that they do.

Next, you need to actually document the processes in the organization in a way that agents can utilize for doing the work. You also need to figure out what the new workflow looks like when agents and people are working together on a process, and who steps in where. Just replicating the old workflow will mute the gains. Oh and you likely need to create evals for your top new end-state processes.

Finally, you have to keep up with a rapidly changing set of best practices and architectural shifts happening in the agent space. While it’s fun for people to change their personal productivity tools on a dime, it’s 100X harder to do this in a business process. The speed of change is a blessing and a curse right now for anyone trying to keep a stable system design.

All of this means that individuals and companies that develop expertise on the above set of components (and more) are going to be needed to help organizations actually implement agents at scale.

This is also the rationale for vertical AI agents right now that can go in deep on a business domain and help bring automation to it. This is a huge opportunity right now whether you’re doing this internally or as an external business provider.

152 replies · 35 reposts · 459 likes · 252.5K views

Filecoin @Filecoin
@levie "Get agents to talk to your data securely across systems" is the storage layer problem. Legacy infrastructure is hard to modernize because the data in it isn't verifiable, portable, or provably intact.
0 replies · 0 reposts · 6 likes · 729 views
Filecoin @Filecoin
@Cloudflare Agents can now provision infrastructure autonomously. The records of what they deployed and purchased need to be as durable as the infrastructure itself. Verifiable, persistent, not dependent on a single platform keeping the logs.
0 replies · 0 reposts · 0 likes · 54 views

Cloudflare @Cloudflare
Starting today, agents can now be Cloudflare customers. They can create a Cloudflare account, start a paid subscription, register a domain, and get back an API token to deploy code right away. cfl.re/4sY0Uxn
160 replies · 820 reposts · 5.3K likes · 1.6M views

Filecoin @Filecoin
@jonnytoshi @FilFoundation Egress fees were priced for occasional reads. AI training moves terabytes daily. The pricing model never caught up.
0 replies · 3 reposts · 16 likes · 2K views

Jonnytoshi @jonnytoshi
AI training broke the centralized cloud pricing model.

AWS charges $112K/month to store 1 Petabyte (PB). Fil One charges $5K for the same load. That equates to ~$1.3M in annual savings for a single workload, under minimum egress assumptions.

Egress fees were built for occasional reads, before workloads started moving terabytes daily.
[image attached]
2 replies · 3 reposts · 15 likes · 2.7K views
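The "~$1.3M in annual savings" follows directly from the two monthly figures quoted in the post:

```python
# Quoted monthly cost to hold 1 PB, from the post above.
aws_monthly = 112_000   # AWS, $/month
filone_monthly = 5_000  # Fil One, $/month

annual_savings = (aws_monthly - filone_monthly) * 12
print(annual_savings)   # 1284000, i.e. the "~$1.3M" in the post
```

Note the post's caveat still applies: this is storage only, "under minimum egress assumptions"; heavy reads would widen the gap further on the AWS side.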
Filecoin @Filecoin
@p0 Preserving how the web was built requires infrastructure that outlasts the institutions doing the preserving. The artifacts matter. So does making sure they're still there in fifty years.
0 replies · 3 reposts · 19 likes · 817 views

Filecoin @Filecoin
Get verifiable storage, automated payments, and auditable records today: filecoin.cloud
0 replies · 3 reposts · 14 likes · 2.6K views

Filecoin @Filecoin
Global compliance failures cost $14 billion last year. As agents execute autonomously, regulators won't accept a black box. Filecoin Onchain Cloud gives agents storage that proves what it holds, payments that settle automatically, and a record that can be inspected independently.
[image attached]
10 replies · 18 reposts · 188 likes · 188.7K views
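The "record that can be inspected independently" idea can be sketched as a hash-chained append-only log: each entry commits to the previous head, so any retroactive edit breaks every later hash. This is an illustrative toy, not Filecoin's actual onchain record format:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry commits to the previous one."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []       # list of (hash, event) pairs
        self.head = self.GENESIS

    def append(self, event: dict) -> str:
        # New head = hash of (previous head, event); order is tamper-evident.
        payload = json.dumps({"prev": self.head, "event": event}, sort_keys=True)
        self.head = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((self.head, event))
        return self.head

    def verify(self) -> bool:
        # Any inspector can replay the chain from genesis; no trust needed.
        prev = self.GENESIS
        for h, event in self.entries:
            payload = json.dumps({"prev": prev, "event": event}, sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != h:
                return False
            prev = h
        return True

log = AuditLog()
log.append({"action": "store_deal", "provider": "sp-01"})  # hypothetical events
log.append({"action": "payment", "amount": 42})
assert log.verify()

# Tampering with an earlier event invalidates the whole chain.
h0, _ = log.entries[0]
log.entries[0] = (h0, {"action": "store_deal", "provider": "sp-99"})
assert not log.verify()
```

The design choice worth noting: auditability comes from the chaining, not from any privileged auditor; that is the property a regulator gets from an onchain record that a private database cannot offer.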
Filecoin @Filecoin
@NEARProtocol Prevention at the execution layer. The same logic applies to the data layer: verifiable storage means you can prove what an agent acted on before something goes wrong, not just after.
0 replies · 1 repost · 30 likes · 1.2K views

Filecoin @Filecoin
@collision @tempo @alpha_vantage Agents buying datasets need to know the data is what it claims to be. Payment and provenance have to be built together.
0 replies · 5 reposts · 28 likes · 1.3K views

John Collison @collision
At Stripe Sessions, we showed how we think agentic commerce will often happen behind the scenes in the course of producing other final products. Here, we show our Claude Code using MPP and @tempo to buy a dataset from @alpha_vantage in the process of generating a research report for me on AI energy usage.
52 replies · 76 reposts · 954 likes · 304.3K views

Filecoin @Filecoin
AWS S3 Standard costs $23 per TB per month. Filecoin Warm Storage costs $2.50 per TiB per month and includes two independently verified copies, sub-second retrieval, and payments that adjust automatically when proofs fail. The 90% discount comes with stronger guarantees, not weaker ones.
[image attached]
18 replies · 26 reposts · 212 likes · 287.6K views
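A quick check of the quoted prices, accounting for the unit mismatch in the post (AWS is quoted per TB, Filecoin per TiB; 1 TiB = 2^40 bytes ≈ 1.0995 TB). The monthly basis for the Filecoin figure is assumed from the comparison:

```python
# Quoted prices from the post above.
aws_per_tb = 23.00          # AWS S3 Standard, $/TB-month
fil_per_tib = 2.50          # Filecoin Warm Storage, $/TiB (monthly assumed)

tb_per_tib = 2**40 / 1e12   # ~1.0995 TB in one TiB
fil_per_tb = fil_per_tib / tb_per_tib   # normalize to $/TB for a fair comparison

discount = 1 - fil_per_tb / aws_per_tb
print(f"{discount:.1%}")    # ~90.1%, consistent with the "90%" claim
```

The TiB is the larger unit, so normalizing actually works slightly in Filecoin's favor; the rounded "90%" holds either way.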
Filecoin @Filecoin
Applications built on shared data need a storage layer that every participant can verify independently, without trusting the platform that published it. @geoprotocol stores its knowledge graph records on Filecoin Onchain Cloud for exactly that reason. The record is auditable.
[image attached]
7 replies · 21 reposts · 152 likes · 219.2K views

Filecoin @Filecoin
@moonpay Always better when the infrastructure talks to each other. 🫡
0 replies · 0 reposts · 0 likes · 91 views