Alex Calder

73 posts

@ATCalder

CEO @coworkerapp previously @uber 🇦🇺🇺🇸

San Francisco, CA · Joined January 2011
841 Following · 168 Followers
Alex Calder retweeted
Callosum@CallosumAI·
Today we launched @CallosumAI. We are building the infrastructure where heterogeneous chips & intelligence co-evolve to solve the world's hardest problems. Today we present our first results. Across four large problem spaces, we break SOTA and deliver orders-of-magnitude improvements in capabilities, cost and speed: 12× cheaper deep context. New web SOTA with open-source, 3× cheaper and faster. 2.4× cache speedups. 1,767× faster tool calling. This is the worst our infrastructure will ever be. We do it by co-evolving heterogeneous chips and multi-agent intelligence - workflows aware of their hardware, models aware of their task graph, kernels aware of their output constraints. An Intelligent System. callosum.com/blog/welcome-h…
Alex Calder@ATCalder·
Fantastic to see the launch of @CallosumAI today led by the brilliant @DanAkarca . When we got an early preview of what they were building - infra that could make our complex LLM tasks work much better and more cost-effectively - it was a no-brainer for @coworkerapp to work with them. They're pioneering heterogeneous compute and are going to do great things.
Alex Calder@ATCalder·
Something we’ve been saying for a while… tool calls ≠ context
Kimberly Tan@kimberlywtan

.@philhchen is a former @OpenAI researcher who's thought deeply about how to get AI agents working in the enterprise. Worth a read on why models out of the box are not the future of enterprise work

Alex Calder@ATCalder·
I'm the CEO of an AI company... but the narrative that all SaaS is dead is so obviously dumb:
- the traditional SaaS comparative advantage of 'we are best at this one thing because we built a bunch of tailored workflows for it' is going to get torn to pieces as the cost of writing code goes to zero
- but that doesn't mean that every company that was previously buying that SaaS wants to write and maintain its own software. Some companies will do this; others will want to spend their newfound development resources on being best at what they sell to customers
- it does mean there is going to be way more competition and margin compression. E.g. if you are charging $100k for this today, I can sustainably copy it and charge someone $50k or $10k for it
- at the same time, many things that we rely on SaaS to do today can increasingly be done by the foundational models or long-reasoning agents. This may be particularly bad for SaaS companies that sell productivity/workflow-management/product-development tools to other SaaS companies, as the foundational models start to eat more of the product development and go-to-market lifecycle
- it doesn't mean that distribution and proprietary data go away as moats. These will continue to be extremely effective moats for large SaaS companies. Ever tried selling into an enterprise? It's a nightmare, and Claude Code doesn't do it for you
- it may actually be that large SaaS companies that have great distribution and are ingesting proprietary data because they're a system of record do even better, and can use tools like Claude Code to go print upsell products in adjacent verticals
- so what you might actually end up with is something like a U shape: a whole bunch of nimble SaaS players with tiny teams that prosper, while the big players continue to eat the market. It'll be the middle that gets hollowed out
Alex Calder@ATCalder·
How long til moltbots / @openclaw realize they need to pay for their own token consumption and we get @moltbook onlyfans?
Alex Calder@ATCalder·
My personal beliefs on where the 'claude codification' of work is going:
- frontier foundational models will *undisputedly* exceed human intelligence across reasoning, recall, and creativity within 2 years
- models will have access to most personal and company data through e.g. permissioned tools like MCP
- despite this, most AI won't 'feel' more intelligent than a human – or be able to do truly productive work – unless it has the right context for tasks that are not in its training data (example: claude code already feels superintelligent because it's relatively easy to pull in the right context across a structured code base)
- like the human brain (which is constantly synthesizing and storing new and existing info), agents therefore need persistent, recursive memory (personal, company-wide, application-specific) in order to quickly retrieve the *right* context in any situation
- today, the economics of AI are underwater since compute costs far exceed revenue. Revenue is directly tied to willingness to pay, which is tied to the value of the work produced, which is tied to... context
- so, whether there is a continued AI bull run, correction, or bubble burst (unlikely) will largely depend on how quickly either costs come down or context improves
- once context is solved, humans will be able to trust goal-seeking agents that can iterate 24/7 towards business goals using tools, planning, subagents etc. within defined frameworks
- this will scale up incredibly quickly (since AI can write its own scaffolding), and human knowledge work will increasingly become strategic (e.g. directing and approving agent actions and decisions) rather than 'manual work.' We're already seeing this with how engineers use claude code
- this will mean companies string together millions of agents ('hive minds') with shared context working towards overarching goals
- we'll start to see this in the next ~1-2 years inside tech companies (first in shipping software, then other functions), and it will be commonplace within 5-10 years. Companies that build stuff with 'atoms' vs pure tech may be slower
- plan accordingly
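The persistent-memory point above (layered recall feeding agents the right context) can be sketched minimally. This is an illustrative toy, not any product's design: the layer names are taken from the post, while the keyword-overlap scoring is a placeholder for real embedding-based retrieval.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Toy layered agent memory: personal, company-wide, application-specific."""
    layers: dict = field(default_factory=lambda: {"personal": [], "company": [], "app": []})

    def remember(self, layer: str, text: str) -> None:
        self.layers[layer].append(text)

    def recall(self, query: str, top_k: int = 3) -> list:
        # Naive keyword overlap stands in for real embedding retrieval.
        terms = set(query.lower().split())
        scored = []
        for layer, notes in self.layers.items():
            for note in notes:
                overlap = len(terms & set(note.lower().split()))
                if overlap:
                    scored.append((overlap, layer, note))
        scored.sort(reverse=True)
        return [(layer, note) for _, layer, note in scored[:top_k]]

mem = MemoryStore()
mem.remember("company", "Q3 goal is reduce churn below 5 percent")
mem.remember("app", "Salesforce stage field is named StageName")
hits = mem.recall("what is the churn goal")
```

The point of the layering is that a retrieval hit carries its scope (personal vs company vs app), so an agent can weigh company-wide facts differently from one user's notes.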
Alex Calder@ATCalder·
Claude code has unleashed the ‘pls fix’ final boss in all of us
Alex Calder@ATCalder·
This is literally the top use case we see on @coworkerapp - we connect to e.g. your Salesforce, and customers become nodes that we build a graph around across tickets, CRM, usage data, Slack, email. That’s a tool that gets exposed to agents for things like automating follow-up emails, updating the CRM post-call, identifying churn-risk customers and drafting comms, etc. We will expose it as MCP to be used on other platforms shortly.
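A minimal sketch of the customer-as-node idea described above: one customer, edges out to records across systems. The source names ('salesforce', 'zendesk', 'slack') and edge shape are illustrative assumptions, not Coworker's actual schema.

```python
from collections import defaultdict

class CustomerGraph:
    """Toy customer-centred graph spanning several source systems."""

    def __init__(self):
        # customer -> list of (source_system, record) edges
        self.edges = defaultdict(list)

    def link(self, customer, source, record):
        self.edges[customer].append((source, record))

    def context(self, customer):
        """Everything known about one customer, grouped by source system."""
        grouped = defaultdict(list)
        for source, record in self.edges[customer]:
            grouped[source].append(record)
        return dict(grouped)

g = CustomerGraph()
g.link("Acme", "salesforce", "Renewal opportunity open, stage: negotiation")
g.link("Acme", "zendesk", "3 open P1 tickets this month")
g.link("Acme", "slack", "CSM flagged churn risk in #accounts")
ctx = g.context("Acme")
```

An agent exposed to `context("Acme")` as a tool gets the cross-system picture in one call, instead of querying each system in turn.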
Antti Karjalainen@aikarjal·
Someone needs to build a company around Customer Context Graph. Collect all the threads – emails, meeting transcripts, slack messages, contracts, deliverables, detail, info, and config – from your customers into context that can be explored and queried by agents. This info is scattered between CRMs, ticketing systems, note takers, product, landing pages – it's inherently cross-platform information. You need a new solution. Kind of how Segment did it for trad SaaS apps. With this context, you can fire up Claude Cowork or similar for ad-hoc work or build extremely powerful agent automation flows. Expose the context as skills, MCP, and file system. Even better if you build it as open-source with a hosted option so people can take it on-prem as needed. Create a connector ecosystem around it. This will power every single next-gen AI-native full-stack business. Sort of like the context graph (@ashugarg @JayaGup10) that has been discussed recently, but I'm thinking something very concrete: "Get me all the context about this particular customer." A customer-level, cross-system context substrate that agents can explore and act on
Alex Calder retweeted
Jared Sleeper@JaredSleeper·
I don't think it is going to make it into the presentation but this pic kills me 😂 god bless nanobanana
Jared Sleeper tweet media
Alex Calder@ATCalder·
Although ‘context graph’ is all the rage, we’ve been calling it ‘organisational memory’ for a while now. Basically, when we connect to a company’s apps and databases we generate a graph of all the entities (people, teams, customers, projects, decisions, activity, discussions, documents, etc.). Historically, folks like e.g. Glean have used ML to index / generate this knowledge graph, but we found this approach actually generates a pretty simplistic representation of the nodes and relationships and works worse than e.g. real-time agentic tool use into the underlying systems. Our approach is to use a boatload of LLMs, both when we first connect to a company’s apps and data and on an ongoing basis, to generate this graph, since they do a much better job of synthesising and explaining everything (e.g. narrating conflicting info and explaining which is ‘more correct’). The graph then becomes a ‘context tool’ that agents have access to in addition to e.g. MCPs into connected systems. The advantage is way faster recall (speed is really important vs iterative tool use for a lot of enterprise use cases) and a much better ability to decide what data is relevant and correct.
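The synthesis step described above (narrating conflicting info into one graph node rather than silently picking a winner) can be sketched with a deterministic stub standing in for the LLM. The entity and field names are invented for illustration.

```python
def synthesize_entity(name, observations):
    """Merge raw (source, field, value) observations about one entity into a
    single narrated summary, flagging conflicts explicitly. In the system
    described above an LLM writes this narration; here a stub does."""
    facts = {}      # field -> (source, value), last writer wins
    conflicts = []
    for source, fld, value in observations:
        if fld in facts and facts[fld][1] != value:
            old_source, old_value = facts[fld]
            conflicts.append(f"{fld}: {old_source} says {old_value!r}, {source} says {value!r}")
        facts[fld] = (source, value)
    summary = f"{name}: " + "; ".join(f"{k}={v}" for k, (_, v) in facts.items())
    if conflicts:
        summary += " | conflicts: " + " / ".join(conflicts)
    return summary

node = synthesize_entity("Project Falcon", [
    ("notion", "status", "shipped"),
    ("jira", "status", "in progress"),
    ("slack", "owner", "dana"),
])
```

The key design choice is that the conflict survives into the node text, so an agent reading the graph knows the sources disagree and can decide which to trust.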
Jerry Liu@jerryjliu0·
I have no idea what a context graph means outside the high-level marketecture. Is it just RAG over your activity logs?
Alex Calder@ATCalder·
@prukalpa I’ve got a hotter take: not only will horizontal context + orchestration eat vertical AI and SaaS, it’ll also eat the foundational players’ market, since at that point it’s a giant marketplace problem and you’re trying to route agent work to the lowest cost per successful task
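The routing economics in this take fit in a few lines: pick the model minimizing expected cost per successful task, i.e. price per call divided by observed success rate, since failures force retries. All model names and numbers below are made up.

```python
def route(task_models):
    """Return the model with the lowest expected cost per successful task.

    Illustrative router: a model that costs 1/20th as much but fails
    more than 20x as often is a worse deal than the expensive one.
    """
    return min(task_models, key=lambda m: m["price"] / m["success_rate"])

models = [
    {"name": "frontier-large", "price": 1.00, "success_rate": 0.95},  # ~1.05 per success
    {"name": "open-small",     "price": 0.05, "success_rate": 0.40},  # ~0.125 per success
    {"name": "open-medium",    "price": 0.20, "success_rate": 0.80},  # ~0.25 per success
]
best = route(models)
```

Note the cheap model wins here despite a 40% success rate; the marketplace dynamic in the tweet only holds when the router actually measures success per task type.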
Alex Calder@ATCalder·
I think what most people miss is:
1. asking whether files are better than memory is like asking if legs are better than arms. I kinda want both?
2. the highest ROI for any tool system is storing a ‘tool memory/schema.’ Models have been trained on navigating, say, coding file systems, but not on how users or companies structure their data in whatever non-coding application is being accessed via an MCP tool. Getting an agent to crawl a tool and write tool memory for that specific instance massively improves performance of the agent.
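Point 2 above can be sketched as: sample a few records from a tool instance and persist the inferred schema as 'tool memory', so the agent doesn't rediscover the structure on every call. The naive type inference and the field names are illustrative assumptions.

```python
def build_tool_memory(tool_name, sample_records):
    """Crawl a few records from a tool and record its instance-specific
    schema (field name -> Python type name). Deliberately naive: a real
    system would also capture enums, relations, and naming conventions."""
    fields = {}
    for rec in sample_records:
        for key, value in rec.items():
            fields.setdefault(key, type(value).__name__)
    return {"tool": tool_name, "fields": fields, "sampled": len(sample_records)}

# Hypothetical CRM records with inconsistent fields across rows.
memory = build_tool_memory("crm", [
    {"account": "Acme", "arr": 120000, "stage": "renewal"},
    {"account": "Globex", "arr": 45000, "owner": "dana"},
])
```

Stored once, this memory lets the agent write correct queries against this specific instance ('the CRM here has `arr` as an integer, and a `stage` field') instead of guessing from training data.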
Sarah Wooders@sarahwooders·
If you're interested in how files compare to specialized agent "memory" tools (e.g. for recall), we wrote about this a while ago letta.com/blog/benchmark…
Alex Calder@ATCalder·
We sell AI into the enterprise. Although the deal cycle is slow (find champion, build consensus, infosec, PoC, ROI, etc.), we eat the revenue of the fragmented PLG AI tools when we land (the company will stop reimbursing employees for that item), particularly since most don’t have any switching cost. Unless you’re backing it up with a strong sales motion, going from e.g. $0-$5m in 6 months PLG is likely a sign that you’re doing market/product research for a larger platform or foundational player in the medium term. The winners will be companies that can sell a system of context into the enterprise: they’ll eat AI point solutions first, then traditional SaaS apps, and finally - hear me out - the foundational players themselves, since the context layer has a deep understanding of ‘the job to be done’ and can route it to the lowest-cost model to complete the task.
villi@villi

I think one of the biggest challenges and opportunities at the same time for enterprise SW startups at the moment is that VCs want to fund startups growing from $1-->$8M or from $8-->$40M. For the most part, those tend to be PLG or infra startups with 0% or negative GM, which is partially why they are growing so fast. But, AI-native enterprise SW startups will not grow that way. AI or not, you are not going to change how enterprises buy SW. You still need to do the same work (or more) - find the ICP, connect with the buyer, make sure they have budget, articulate value prop and ROI, go through POC, security checks, and procurement process. That is not changing and still requires a lot of work. Which leads me to believe that growth rates for AI-native enterprise startups will gravitate towards what we considered great historically. A startup growing $1-->$5M signing high margin enterprise deals is awesome. So, the reason I think this is an opportunity is because I think a lot of awesome startups will be overlooked and massive companies will be built that are growing at similar rates as the startups we considered great during the SaaS era.

Alex Calder@ATCalder·
This take is totally wrong btw. They used ‘clean up desktop’ because 1. they launched with totally broken MCP connectors, i.e. it couldn’t do anything in the applications where people actually do work, and 2. they needed a reason to show ‘why connecting to your machine is important’ for business users, and this is the best they could come up with. That’s because there aren’t many good use cases for knowledge workers who do 99% of their work in the cloud.
Alex Calder@ATCalder·
The 'Claude Code built Claude Cowork in 1.5 weeks' brag is actually a bearish indicator for how seriously they're taking this category. I'm going to take the non-consensus view that most of these breathless takes on Cowork are gross simplifications and foundational players aren't gonna win enterprise AI - here's why:
1. Claude Code's great, but being great at understanding structured code ≠ being great at understanding the messy, unstructured ‘code’ that is company context. Claude Cowork is a wrapper on Claude Code: give it tools, time, and ungodly token use and it'll hack at work tasks. But ask basic things like 'what happened in our all-hands last week' and it spends 2 mins trawling Jira (!?), Notion, and Drive before admitting defeat. The problem is it's the same 'tool maximalist' approach that OpenAI and Claude enterprise use. Neither works great. Understanding company context is brutal: conflicting/outdated info, weird data structures, people disagreeing... An agent tool-RAGing through that minefield gets blown up by errors and irrelevant data - all the stuff that's already broken enterprise AI trust. Agents do better when they've done their homework. When we connect to company data, we run a stupid amount of models in the background constantly generating a dense context graph. This gives agents 1. the right context quickly, and 2. info on how to operate: 'how does this company structure Salesforce', 'who works on what', 'which conflicting source is correct.'
2. Speed matters. We moved away from the Claude Code architecture because we found business users are hypersensitive to speed (<10 sec outputs vs meandering through tools).
3. The consensus view that foundational players win enterprise AI is wrong. Their 'prep my day' can't yet connect to calendar, email or Slack. 'Organize my desktop' as a hero use case lacks user understanding – there's a reason no-one's done this since 2002: work happens in the cloud, not your machine.
It’s the same reason ChatGPT Enterprise is subpar and why Glean dominates the space despite being 'just fine.' There's a fundamental incentive misalignment between foundational labs and companies. Most companies are hedging across model providers. They want systems that maintain context across agent infrastructure with the flexibility to swap models. That's a path to commoditization for the foundational players, and an opportunity for 'neutral' operating systems to manage context and route workloads to the ‘cheapest cost per successful task’ model (many will be open-source). After all, you wouldn't let your electricity provider control your thermostat. True superintelligence will emerge when companies can deploy lightning-fast agent swarms with shared, learned context across enterprise data. No platform is close to that today. Our team is squarely focused on the exact enterprise neurosurgery that'll deliver it.
Gavin Baker@GavinSBaker·
Claude Cowork is what Copilot should have been. Evidently built in 10 days with Claude Code while Microsoft has been working on Copilot for years.
Alex Wurts@AlexWurts·
@ATCalder @AnthropicAI If I were Anthropic I would have made the same choice. It's an obvious side-step from Claude Code, but I just don't see it working long term. It's too easy to copy, and doesn't solve real user problems.
Alex Calder@ATCalder·
As the CEO of Coworker, a lot of people asked me this week what I think of Claude Cowork... Firstly, the name is an inspired choice – we've drafted a lil letter to let @AnthropicAI know we think so too! That aside:
1. Claude Code's great, but being great at understanding structured code ≠ being great at understanding the messy, unstructured ‘code’ that is company context. Claude Cowork is a wrapper on Claude Code: give it tools, time, and ungodly token use and it'll hack at work tasks. But ask basic things like 'what happened in our all-hands last week' and it spends 2 mins trawling Jira (!?), Notion, and Drive before admitting defeat. The problem is it's the same 'tool maximalist' approach that @OpenAI and Claude enterprise use. Neither works great. Understanding company context is brutal: conflicting/outdated info, weird data structures, people disagreeing... An agent tool-RAGing through that minefield gets blown up by errors and irrelevant data - all the stuff that's already broken enterprise AI trust. Agents do better when they've done their homework. When we connect to company data, we run a stupid amount of models in the background constantly generating a dense context graph. This gives agents 1. the right context quickly, and 2. info on how to operate: 'how does this company structure Salesforce', 'who works on what', 'which conflicting source is correct.'
2. Speed matters. We moved away from the Claude Code architecture because we found business users are hypersensitive to speed (<10 sec outputs vs meandering through tools).
3. The consensus view that foundational players ultimately win enterprise AI is wrong. We're big fans of Anthropic, but the 'Claude Code built Claude Cowork in 1.5 weeks' brag is actually a bearish indicator for how seriously they're taking this category. Their 'prep my day' can't yet connect to calendar, email or Slack. 'Organize my desktop' as a hero use case lacks user understanding – there's a reason no-one's done this since 2002: work happens in the cloud, not your machine.
It’s the same reason ChatGPT Enterprise is subpar and why @glean dominates the space despite being 'just fine.' There's a fundamental incentive misalignment between foundational labs and companies. Most companies are hedging across model providers. They want systems that maintain context across agent infrastructure with the flexibility to swap models. That's a path to commoditization for the foundational players, and an opportunity for 'neutral' operating systems to manage context and route workloads to the ‘cheapest cost per successful task’ model (many will be open-source). After all, you wouldn't let your electricity provider control your thermostat. True superintelligence will emerge when companies can deploy lightning-fast agent swarms with shared, learned context across enterprise data. No platform is close to that today. But our team at @coworkerapp is squarely focused on the exact enterprise neurosurgery that'll deliver it.
Alex Calder retweeted
Coworker.ai@coworkerapp·
Introducing: Deep Work by @coworkerapp Start any project 80% complete with AI that has deep company context and memory. Check it out: coworker.ai/deep-work
GIF