Scott

5K posts


@ScottTaylor

Building @memco_ai – shared memory for AI agents.

Joined February 2007
1.9K Following · 6.5K Followers
Pinned Tweet
Scott
Scott@ScottTaylor·
Now imagine not only is the system learning from your wins, mistakes, and your stack (in a privacy-first approach), but compounding exponentially through a public (and private, for enterprise) shared memory. So that when one agent learns something on the bleeding edge — a package change, a vulnerability, a more efficient way of solving a problem — the whole network benefits. This is how you get to AGI, and this is the moat we have been building for the past 12 months at @memco_ai. It's free for developers, forever. And there's a 14-day free trial for teams small & large. Save time. Save tokens. Better solutions. Benchmarked and ready to use today.
Prayank Swaroop@prayanks

You just raised $5M to build someone else's moat. Here's what I mean.

Most AI startups I meet are competing on the wrong axis. They're obsessing over model choice — Claude vs. GPT vs. Gemini. They're fine-tuning on domain data. They're building slick interfaces on top of state-of-the-art APIs. None of that is a moat. All of it can be replicated in weeks.

The founders I'm most excited about are competing on a completely different dimension: time. Every session a user spends inside a well-architected AI system is a deposit. The system learns their editing patterns, their risk tolerance, their preferences — implicitly, without being told. After six months of daily use, that system knows how you work in ways you couldn't fully articulate yourself. That's not a product feature. That's a compounding asset.

The architectural decision that separates these two worlds is simpler than most founders think: stateful vs. stateless agents. A stateless agent resets after every session — all that signal, discarded. A long-running agent retains it, learns from it, gets harder to replace every single week. The switching cost of a great stateless AI product is zero. The switching cost of a great stateful one, after two years, is enormous — not because of contracts, but because leaving means starting over.

I've written a full framework on this — covering the four depths of personalisation, the three RL signals that drive compounding, and where the research frontier is heading. Link in the comments.

One question for founders building in this space: are you designing for state accumulation from day one — or is that an afterthought?

0 replies · 0 reposts · 2 likes · 559 views
Scott reposted
Brian Armstrong
Brian Armstrong@brian_armstrong·
The 20 millionth Bitcoin was mined yesterday. Now there are only one million new Bitcoins to be mined, which will take over 100 years. Decentralized, inflation-proof, global money.
1K replies · 1.7K reposts · 16.1K likes · 891.7K views
Scott reposted
James Van Straten
I'll tell you how fucked the UK is. Not only was February's £14B borrowing the highest ever recorded for that month, £13B of the £14B went purely to debt interest payments. The Ponzi is bankrupt, and this war is the nail in the coffin. Rate hikes next, maybe multiple. The mandate is inflation, not employment. Then it's a rush to the exit. Bitcoin, gold—you don't own enough.
Financial Times@FT

Breaking news: The UK’s borrowing costs have hit their highest level since the 2008 global financial crisis, as disappointing borrowing numbers add to a brutal sell-off for gilts driven by inflation fears. ft.trib.al/jDigNKV

43 replies · 163 reposts · 858 likes · 109.1K views
Scott
Scott@ScottTaylor·
one of my old content sites got some chatgpt love
Scott tweet media
0 replies · 0 reposts · 0 likes · 27 views
Scott reposted
Aaron Levie
Aaron Levie@levie·
It is quite ridiculous how agile you have to be with your AI agent stack right now. Whatever you spent 6 months perfecting 12 months ago is probably already out of date, and you're better off doing a reset than trying to resuscitate it architecturally. And what's interesting is that for every jump in progress that eliminates one part of the stack, generally a new capability becomes possible that you need to build new scaffolding for. For instance, lots of RAG pipelines have probably had to adjust because context windows have improved dramatically and you can now just use agentic search thanks to improved tool use. But that same improved tool use means you probably need to be supporting code execution with sandboxes so the agent can handle more complex work. So one capability gets bitter-lessoned, and a new one opens up altogether. This is the cycle we're going to be in for years. If you don't have the speed and agility to deal with it, you're probably going to be in a tough spot.
Matt Carey@mattzcarey

every new model generation you see the pinch of the bitter lesson. harnesses, pipelines, rules which previously felt important now hold you back from innovating. what took months of grind for you is now just a prompt away at ½ the cost. look for it and you will see. Both large and small companies re-evaluating. Company directions change before your eyes. it’s a wild moment for our industry
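The sandbox point above is concrete enough to sketch. A minimal illustration under stated assumptions (not any particular product's sandbox): run agent-generated code in a separate, environment-stripped interpreter process with a wall-clock timeout. Real sandboxes add filesystem and network isolation on top (containers, gVisor, seccomp).

```python
import os
import subprocess
import sys
import tempfile


def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Run untrusted agent-generated code in a separate process.

    Sketch only: we isolate the interpreter (-I ignores env vars and
    user site-packages), strip the environment, and enforce a timeout.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],
            capture_output=True, text=True, timeout=timeout, env={},
        )
    except subprocess.TimeoutExpired:
        return "error: timed out"
    finally:
        os.remove(path)
    return proc.stdout if proc.returncode == 0 else f"error: {proc.stderr.strip()}"
```

The agent loop treats the returned string as a tool observation: successful stdout feeds the next step, and an `error:` result is itself useful signal for the model to self-correct.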

55 replies · 32 reposts · 422 likes · 93.7K views
Scott reposted
Matthew Elliott
Matthew Elliott@matthew_elliott·
The Government will collect £331bn in income tax this year, and spend £333bn on welfare. In other words, we now spend more on people not working than we raise from those who do. And the cost? Debt per person has risen from £11.5k in 2000 (inflation adjusted) to over £41k today.
1.2K replies · 5.4K reposts · 15.6K likes · 1.1M views
Scott reposted
Marc Andreessen 🇺🇸
OpenClaw and Pi together are in the top 10 of all time software breakthroughs.
Chrys Bader@chrysb

folks who are calling @openclaw pure hype are telling on themselves. openclaw is like the early internet: it's raw, unrefined, and takes a little doing to get things to work, but when you figure it out, it's transformative. here are some real use cases that are having material impact on our $2.5M ARR business:

1. ad creative pipeline. our head of growth @ArjunShukl95550 built an end-to-end creative pipeline to go from ideation to publishing ads to meta, greatly increasing our creative iteration speed. it's producing winning creatives. it lives in slack, and anyone on the team can share their ideas and have them enter the pipeline.

2. data analytics agent. another bot lives in our slack that connects to bigquery and lets our team ask any questions of the data. it produces charts and answers questions in real time. no one needs to write SQL anymore.

3. recruiting. i told my agent about a role we're hiring for, and it scoured linkedin and the web, found 30 candidates with portfolios and email addresses, and stack-ranked them based on fit with our criteria.

this is just in the past week. i have twenty more success stories i can share another time. you have to understand: this is the shittiest it will ever be. everyone is going to have one or more personal self-improving agents that they use every day, and openclaw is what revealed this future to us. if you can't see this, i encourage you to look harder. there will be many competitors (and already are), and the large labs will start to converge on this (they already are) too. openclaw may not win, but it opened pandora's box and uncorked the agentic future.

188 replies · 178 reposts · 2.8K likes · 587.2K views
Scott reposted
Rohan Paul
Rohan Paul@rohanpaul_ai·
Coinbase CEO, Brian Armstrong: Some great insights on how they are using internally hosted AI Agents. "It’s connected to every Slack message, every Google Doc, and every Salesforce data confluence. Now, this is all linked up and the data is all aggregated, so you can ask these agents questions. Every team is using it—legal, finance, everything. It’s like the "Oracle of Coinbase." I’ve started to ask it things that go beyond just simple prompting, like "Hey, can you write this kind of memo for me?" I’m asking these AI agents now, as CEO, "What should I be aware of in the company that I might not be aware of?" It will tell me, "Did you know that there’s actually disagreement on this team about the strategy?" I realized I didn't know that, but the AI does because it can read every Slack message and every Google Doc. Tobi, who is on my board, calls this "reverse prompting." Instead of telling the AI agent what you want to do, you ask it what you should be thinking more about." --- From @theallinpod YT channel (link in comment)
49 replies · 72 reposts · 810 likes · 170.4K views
Scott reposted
Ryan Gentry
Ryan Gentry@RyanTheGentry·
Announcing the world's largest paid endpoint directory for AI agents: the 402 Index! The 402 Index aggregates 15,000+ paid API endpoints live on the internet today, and runs hourly health checks to help your agent determine which providers have the most reliable infrastructure.
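The hourly health check described above reduces to a probe plus an aggregate. A hypothetical sketch (the 402 Index's actual probe logic isn't public here): any HTTP answer below 500, including the 402 Payment Required that paid endpoints return without payment, counts as alive, and the reliability score is the success rate over recent probes.

```python
import urllib.request
from urllib.error import HTTPError, URLError


def check_endpoint(url: str, timeout: float = 3.0) -> bool:
    """One health probe: does the endpoint answer at all?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except HTTPError as e:
        # Paid endpoints commonly answer 402 without payment; any HTTP
        # reply below 500 still counts as "alive".
        return e.code < 500
    except (URLError, TimeoutError, ValueError):
        return False


def reliability(history: list[bool]) -> float:
    """Fraction of recent probes that succeeded (0.0 if no data yet)."""
    return sum(history) / len(history) if history else 0.0
```

A directory would run `check_endpoint` on a schedule per endpoint, append the result to that endpoint's history, and rank providers by `reliability` so an agent can pick the most dependable one.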
63 replies · 78 reposts · 469 likes · 69.6K views
Scott reposted
Ali Ansari
Ali Ansari@aliansarinik·
the micro1 robotics lab: real world data for intelligent models that co-exist in the physical world. we’re in-the-wild across 75 countries in 6,000+ unique environments collecting data. diverse movements, objects, and settings. the future of AI is as human as you can imagine. join us to start training robots today (link in comments).
64 replies · 109 reposts · 552 likes · 198.1K views
Scott
Scott@ScottTaylor·
we have been building shared memory for ai agents over the past year: shows up to 87% token savings, makes OSS models SOTA level, and has both public and private team / enterprise flavors. For the public memory, imagine one agent discovering a dependency has changed or a vulnerability introduced in an API (e.g. PayPal, HubSpot, etc); all other agents (at low latency) benefit. It's free for individual developers, and anyone (including teams) can sign up and start using it today!
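The propagation model described can be sketched in miniature. This is a hypothetical in-process toy, not @memco_ai's actual API: one agent publishes a finding under a topic, and any other agent's next recall sees it instead of spending tokens rediscovering it.

```python
import time
from collections import defaultdict


class SharedMemory:
    """Toy shared agent memory: publish once, recall everywhere.

    A real system would add network transport, access control
    (public vs. private/enterprise scopes), and dedup/verification.
    """

    def __init__(self):
        self._facts = defaultdict(list)

    def publish(self, topic: str, fact: str, agent_id: str) -> None:
        self._facts[topic].append(
            {"fact": fact, "by": agent_id, "at": time.time()}
        )

    def recall(self, topic: str) -> list[str]:
        # A cheap lookup replaces an expensive rediscovery per agent.
        return [entry["fact"] for entry in self._facts[topic]]


mem = SharedMemory()
# One agent hits a breaking change once...
mem.publish("paypal-api", "payments endpoint now requires idempotency keys",
            agent_id="agent-7")
# ...and every other agent on the network can recall it immediately.
```

The topic string ("paypal-api" here is an invented example) is the routing key: agents subscribe to the dependencies and APIs in their own stack.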
0 replies · 0 reposts · 0 likes · 9 views
Josh Wolfe
Josh Wolfe@wolfejosh·
the main critique... is that top models today don't learn once deployed. and curating data + training pipelines + filtering out noise is all done by humans in the loop instead of being an intrinsic skill of the AIs... here's Claude explaining the paper simply
Josh Wolfe tweet media
4 replies · 6 reposts · 42 likes · 8.1K views
Josh Wolfe
Josh Wolfe@wolfejosh·
1/ New paper from @ylecun et al on alternative approach for AI to learn more biologically... paper basically says AI is super smart but still can't learn like a toddler can... the main critique
Josh Wolfe tweet media
28 replies · 119 reposts · 691 likes · 88.2K views
Scott reposted
Aaron Levie
Aaron Levie@levie·
Had meetings and a dinner with 20+ enterprise AI and IT leaders today. Lots of interesting conversations around the state of AI in large enterprises, especially regulated businesses. Here are some of the general trends:

* Agents are clearly the big thing. Enterprises are moving from talking about chatbots to agents, though we're still very early. Coding is still the dominant agentic use-case being adopted thus far, with other categories across knowledge work starting to emerge. Lots of agentic work is moving from pilots and PoCs into production, and some enterprises had lots of active live use-cases.

* Agentic use-cases span every part of a business, from back-office operations to client-facing experiences, from sales to customer onboarding workflows. The general feeling is that agentic workflows will hit every part of an organization, often with the biggest focus on delivering better for customers, getting better insights and intelligence from data and documents, speeding up high-ROI workflows with agents, and so on. Very limited discussion of pure cost cutting.

* Data and AI governance still remain core challenges. Getting data and content into a spot where agents can securely and easily operate on them remains a huge task for most organizations. Years of data management fragmentation that wasn't a problem before is now an issue for enterprises looking to adopt agents. And governing what agents can do with data in a workflow is still a major topic.

* Identity is emerging as a big topic. Can the agent have access to everything you have? In a world of dozens of agents working on your behalf, there is potentially too much data exposure and scope for the agents. How do we manage agents with partitioned levels of access to your information?

* Lots of emerging questions on how we will budget for tokens across use-cases and teams. Companies don't want to constrain use-cases, but equally need to be mindful of ultimate token budgets. This is going to become a bigger part of OpEx over time, and probably won't make sense to be considered an IT budget anymore. It likely needs to be factored into the rest of operating expenses.

* Interoperability is key. Every enterprise is deploying multiple AI systems right now, and it's unlikely that there's going to be a single platform to rule them all. Customers are getting savvier on how to handle agent interoperability, and this will be one of the biggest drivers of the AI stack going forward.

Lots more takeaways than just this, but needless to say the momentum is building, and equally enterprises are acutely aware of the change management and work ahead. Lots of opportunity right now.
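The token-budgeting question raised above lends itself to a sketch. This is a minimal, hypothetical accounting layer (team names and limits invented for illustration): meter usage per team against a monthly allowance so token spend is visible as per-use-case OpEx rather than one opaque IT line item.

```python
from collections import defaultdict


class TokenBudget:
    """Toy per-team token accounting against monthly allowances.

    A real system would also attribute spend per use-case and model,
    reset on billing cycles, and alert before limits are hit.
    """

    def __init__(self, monthly_limits: dict[str, int]):
        self.limits = monthly_limits
        self.used = defaultdict(int)

    def record(self, team: str, tokens: int) -> None:
        # Called from the LLM gateway after each request completes.
        self.used[team] += tokens

    def remaining(self, team: str) -> int:
        return self.limits.get(team, 0) - self.used[team]

    def over_budget(self) -> list[str]:
        return [t for t in self.limits if self.remaining(t) < 0]
```

The design choice mirrored here is soft limits with visibility: usage keeps being recorded past zero so finance sees real demand, rather than hard-failing requests and hiding it.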
118 replies · 104 reposts · 894 likes · 143.8K views
Josh Pigford
Josh Pigford@Shpigford·
@cjpedregal bonkers how much pushback you've gotten. clearly only from people who haven't actually used your existing MCP...which i find to work magically.
3 replies · 0 reposts · 19 likes · 1.9K views
Chris Pedregal
Chris Pedregal@cjpedregal·
There are some tweets out there saying that Granola is trying to lock down access to your data. tl;dr: we are actually trying to become more open, not closed. We're launching a public API next week to complement our MCP. Read on for context.

A couple months ago, we noticed that some folks had reverse engineered our local cache so they could access their meeting data. Our cache was not built for this (it can change at any point), so we launched our MCP to serve this need. The MCP gives full access to your notes and transcripts (all time for paid users, time-restricted for free users). MCP usage has exploded since launch, so we felt good about it.

A week ago, we updated how we store data in our cache and broke the workarounds. This is on us. Stupidly, we thought we had solved these use cases well enough with our MCP. We've now learned that while MCPs are great for connecting to tools like Claude or ChatGPT, they don't meet your needs for agents running locally or for data export / pipeline work.

So we're going to fix this for you ASAP. First, we'll launch a public API next week to make it easier for you to pull your data. Second, we'll figure out how to make Granola work better for agents running locally, whether that's expanding our MCP, launching a CLI, a local API, etc. The industry is moving quickly here, so we'd appreciate your suggestions. We want Granola data to be accessible and useful wherever you need it. Stay tuned.
96 replies · 41 reposts · 795 likes · 146.8K views
Scott
Scott@ScottTaylor·
@jestermolecule no no, please dont worry — just take it and build on it! very kind of you though 🙏 there's hope yet for humanity.
1 reply · 0 reposts · 1 like · 57 views
jester
jester@jestermolecule·
@ScottTaylor could i make a donation for this by sending fees to ur github?
1 reply · 0 reposts · 1 like · 23 views
Scott
Scott@ScottTaylor·
Had I shipped this an hour earlier, it might have got hundreds of stars. But I basically didn't want to have to pay for Granola, and I wanted it a bit more secure (local). So I built Oatmeal, an open-source Granola alternative with a CLI, speaker diarization, local models, live transcription, summary generation, etc. github.com/st-imdev/oatme…
Scott tweet media ×2
2 replies · 0 reposts · 3 likes · 129 views
Scott
Scott@ScottTaylor·
saves to markdown in your folder of choice
0 replies · 0 reposts · 1 like · 31 views
Eugen
Eugen@EugenAlpeza·
We’re out of stealth. Today, we’re also announcing our Series A led by @sequoia , @8vc , and @A_StarVC , bringing our total funding to $30M+. Every enterprise needs to teach their AI how to do work. We build agents that reverse engineer enterprise processes, then run them. Read about the future of learning in the enterprise: x.com/edra_ai/status…
142 replies · 90 reposts · 1.8K likes · 377.4K views
Peter Yang
Peter Yang@petergyang·
@andrewchen Hot take: OKRs, standup, and this waterfall crap never worked anyway :)
6 replies · 1 repost · 38 likes · 2.7K views
andrew chen
andrew chen@andrewchen·
in a world of agents, the product role is going to split into two jobs:

- one that organizes humans (stakeholders, design, eng)
- one that organizes agents (prompts, evals, workflows, etc)

Both will be in pursuit of offering the right products to customers, but how you get there will dramatically change. What happens to the typical product rituals? Instead of PRDs, OKRs, standups, product reviews, we'll need the equivalent for agents. Couple wild ideas here...

instead of standups: the equivalent is that agents will report back to us based on run logs and anomaly flags. no one needs to say what they did yesterday; the system already did thousands of things. the question is where it broke, where it surprised you, and where it got better. Show us the patterns, the trends, the edge cases, particularly the ones the agents didn't fix automatically. the daily ritual becomes reviewing deltas, scanning failures, and deciding which ones matter. less reporting, more triage.

instead of OKRs: we'll need adversarial agents that continuously monitor/grade the system and detect patterns, scoring outcomes on an hourly or daily basis. Rather than setting a quarterly goal of "increase X by 5%" and revisiting slowly, management will be able to monitor success in real time and detect trends/patterns towards overall goals.

instead of PRDs: we won't need waterfall. Prototyping will rule the day, and we'll need a living agentic loop that mediates customer feedback/ratings and what's being prioritized and built. you don't hand it to eng, you deploy it into the agent loop. if it's wrong, it fails visibly and you can revert. if it's right, it produces the right output.

instead of product reviews: we'll need simulation systems to examine agent behavior in different scenarios. In an agentic world where UI shifts from buttons/menus to agents automatically doing things, you'll want to examine their behavior before you deploy. You rewind decisions, fork alternate paths, and see how different prompts or constraints would have changed outcomes. the review becomes interactive. less storytelling, more counterfactuals.

The PM sits in the middle of this split. On the human side, still aligning taste, risk tolerance, and strategy across people. On the agent side, shaping the actual behavior of the system through prompts, evals, and feedback loops. one side is persuasion. The other is instrumentation. the best ones will collapse the gap, translating intent directly into systems that act on it.

the fascinating part is that the agentic loop will run 10000x faster than the human one, and of course, you can "hire" them faster. Thus the "organizing humans" half starts to feel slow and lower impact unless it directly improves the agent loop. Eventually the PM will shift towards agents and maybe ignore the human coordination altogether...
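The "less reporting, more triage" standup replacement can be sketched as a fold over run logs. Field names here (`status`, `auto_fixed`, `impact`) are hypothetical; a real system would pull them from agent run logs and anomaly flags. The point is the shape: summarize thousands of runs into counts plus the unfixed failures a human should actually review, ranked by impact.

```python
from collections import Counter


def triage(runs: list[dict]) -> dict:
    """Collapse a day of agent runs into a reviewable delta report.

    Keeps only failures the agents did not fix automatically, ranked
    by impact, so the daily ritual is triage rather than reporting.
    """
    by_status = Counter(r["status"] for r in runs)
    failures = [
        r for r in runs
        if r["status"] == "failed" and not r.get("auto_fixed")
    ]
    return {
        "total": len(runs),
        "by_status": dict(by_status),
        "needs_review": sorted(
            failures, key=lambda r: r.get("impact", 0), reverse=True
        ),
    }
```

Everything that succeeded or self-healed collapses into `by_status`; only `needs_review` demands human attention, which is the triage inversion the post describes.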
80 replies · 54 reposts · 582 likes · 56.5K views