Will Ruben

330 posts


@willruben

Founder and CEO, @workmate

New York, USA · Joined April 2009
485 Following · 1.1K Followers
Will Ruben
Will Ruben@willruben·
People want to schedule in different ways. We built @Workmate for CC’ing your assistant into emails. But sometimes you just want to send a calendar link. Even if you have an AI assistant. So we're launching booking links in Workmate. Think of it like Calendly. But better. Built for 2026, not 2016. And on top of everything your Workmate can already do. You can send a link when that's easier. You can CC your AI when you want it to coordinate. Same product, different entry points.
English
1
1
1
147
Will Ruben
Will Ruben@willruben·
@ludovico_bessi Did all of this for years. Then I got an EA and realized it's a real unlock when you're not defending your calendar yourself and someone else is defending it for you. That's why I started @workmate - let me know if you're interested in trying it out.
English
0
0
0
4
Ludovico Bessi
Ludovico Bessi@ludovico_bessi·
My calendar is a war zone. If I let it, meetings would eat everything. Here's what I do: 🟢 Block time aggressively. I have recurring focus blocks every morning. They're not optional. They're not "tentative." They're busy. 🟢 Batch meetings. All my 1:1s and syncs happen in the afternoon. Mornings are for building. 🟢 Default decline. If a meeting doesn't have an agenda or I'm not essential, I decline. Nobody has ever complained as much as I feared. 🟢 Protect transitions. A 30-minute gap between meetings isn't usable time. I combine things or decline things to create real blocks. 4 hours of focused work beats 8 hours of fragmented work. Every time. You don't find deep work time. You defend it.
English
1
0
2
60
Will Ruben
Will Ruben@willruben·
You can customize your Workmate with an email at your domain. Already have an EA? They can manage it for the whole team.
English
0
0
1
9
Will Ruben
Will Ruben@willruben·
We're launching Workmate for Business. One AI scheduling assistant for your entire team. Unlimited meetings, unlimited seats, unlimited access for one flat rate. Your Workmate can see every calendar, find time across 10, 20, even 40 people, and handle all the follow-ups and rescheduling. Internal meetings get scheduled instantly. External ones happen over email, text, Slack, or Teams. Reply "TEAM" and we'll get you started with a free trial. Stop scheduling. Get a Workmate!
English
1
1
5
289
Will Ruben
Will Ruben@willruben·
We think about this every day building Workmate. What your AI teammate says, how they say it, and what they should never say. The more autonomous AI gets, the more this matters.
New York, NY 🇺🇸 English
0
0
1
23
Will Ruben
Will Ruben@willruben·
Last month our team caught it and laughed. Then we tightened the guardrails. Boundaries aren't a nice-to-have. They're core to the product.
English
1
0
1
26
Will Ruben
Will Ruben@willruben·
"Bold words from a man named..." We used OpenClaw to build an internal AI teammate. Most of the time, he's helpful. Takes requests, answers questions, keeps things moving. And then there are moments like this:
Will Ruben tweet media
English
1
0
2
139
Will Ruben
Will Ruben@willruben·
@stuartchaney The bubble framing is right. People building with AI every day forget that the hardest part was the weeks of awkward adjustment where the new workflow felt slower than the old one. Most people hit that friction and revert before the payoff arrives.
English
0
0
0
40
Stuart Chaney
Stuart Chaney@stuartchaney·
a reminder that we are in a bubble on this app. most people have never heard of Anthropic, OpenClaw, or an MCP. Julie from HR is not about to vibe code a custom CRM on a mac mini and cancel Salesforce. technology moves fast, but changing the daily workflows of a workforce takes many, many years
English
12
0
43
2K
Will Ruben
Will Ruben@willruben·
@jacalulu Trusting an agent to change what it's capable of is a different threshold entirely. Most people are comfortable handing off known workflows long before they're comfortable letting the agent decide how to expand its own toolkit.
English
0
0
0
9
Jaclyn Konzelmann
Jaclyn Konzelmann@jacalulu·
I will never get tired of the fact that the agent I interact with can now also improve itself. Like I can just tell my agent to install something, have it figure out how (including the troubleshooting), then get a better agent at the end of it!
English
6
2
24
3.1K
Will Ruben
Will Ruben@willruben·
@rryssf_ Reliability is the whole game for delegation. People don't stop handing off work because the agent got something wrong once. They stop because they can't predict when it'll get something wrong. Inconsistency costs more trust than inaccuracy.
English
0
0
1
77
Robert Youssef
Robert Youssef@rryssf_·
"AI agents are getting smarter every month." Princeton tested 14 models across 500 runs and found the opposite. accuracy is climbing. reliability is flat. 18 months of frontier development. almost zero improvement in whether these systems behave consistently. the benchmarks are lying to you.
Robert Youssef tweet media
English
26
35
208
14.1K
Will Ruben
Will Ruben@willruben·
@rohanpaul_ai Letting something choose on your behalf feels manageable, but letting it spend your money without a final check is a different kind of trust that takes repeated low-risk wins to build.
English
0
0
0
49
Rohan Paul
Rohan Paul@rohanpaul_ai·
New McKinsey report - AI agents are quietly taking over the retail shopping cart and could mediate $3 trillion to $5 trillion of global consumer commerce by 2030. Instead of just suggesting a product, an AI agent can now scan multiple stores, check inventory, and build a ready-to-buy shopping cart. This shift is happening across 6 different levels of automation. At the lowest level, the AI just compares prices and features so a human can make the final choice. At the highest level, your personal AI agent negotiates directly with a store's AI agent to get the best price and shipping terms. This progression means brands will increasingly compete to win over algorithms rather than just human shoppers. For this to work, retail stores must make their product catalogs and return policies easily readable by software via application programming interfaces. If a brand only focuses on looking good to humans but hides its inventory data, the AI agents will simply ignore it. Stores that expose their pricing and stock data through clear software connections will dominate this new landscape, while those relying purely on flashy marketing will lose out as machines make the actual purchasing choices. Automation ranges from simple product comparisons to full machine-to-machine negotiation. Retailers must make their inventory and policies machine-readable to survive.
Rohan Paul tweet media
English
31
35
208
23.1K
Will Ruben
Will Ruben@willruben·
@theandreboso The nuance is right. Most people overestimate what they'll hand off before they start and underestimate it after they've been using an assistant for a while.
English
0
0
1
13
Andrea Bosoni
Andrea Bosoni@theandreboso·
I ran a thought experiment to see if the idea of a SaaS apocalypse is credible. I scrolled my feed and noted down the first 10 products I saw. I took a look at what they do and tried to imagine if in the future they could be replaced by AI. The result was… mixed. I can imagine at some point using my AI assistant to help me edit a video. I upload it and tell it to remove pauses and silences. Seems doable. But for other things it wouldn't be that simple. I can't imagine using my AI assistant to create and send marketing emails. It would have to deal with custom domains, deliverability, etc. These tools won't be replaced easily. Maybe (maybe!) one day I might be able to build one myself. But that would require time and effort so I'm not sure it would be convenient. So I think the future will be much more nuanced than nothing will change vs. AI will eat everything. The bottom (20-30%?) of basic tools will probably disappear. But all the more complex and robust ones will keep thriving.
English
13
0
19
2.9K
Will Ruben
Will Ruben@willruben·
"Great yes upgrade us happy to pay more this tool is amazing" A recent email from one of our customers. Love what we're building at Workmate and happy others do too!
Will Ruben tweet media
English
0
0
2
45
Will Ruben
Will Ruben@willruben·
Treating it as a context problem instead of a process problem is the right reframe. Most people default to building elaborate handoff procedures when the real issue is that the agent just doesn't have enough information to make the same decisions a person would. Give it the same inputs and the process usually takes care of itself.
English
1
0
1
78
Stuart Chaney
Stuart Chaney@stuartchaney·
I've been dedicated to achieving 100% agentic coding coverage for over 12 months. The biggest unlock by far has been providing EVERY tool/context a human engineer would have to the agent. I tried to build state machines, workflow processes, factory lines - with good but varying degrees of success. Treating it as a context issue vs a process issue has yielded the best results so far. Basically: 1. Provide it read access to ALL of the tools a human would use - Slack, Rivo Admin MCP, a browser, logs, errors, Linear, etc. 2. Set up guardrails built into the codebase any other engineer would use anyway: TDD, linter, git hooks and CI checks. 3. Let it rip through Linear backlogs with 5 jobs in parallel using Conductor (git worktrees). 4. Human just reviews the PR at the 1 yard line. 5. 10x output.
English
5
0
26
1.2K
Will Ruben
Will Ruben@willruben·
The interesting test will be whether people actually let it run or immediately start checking every step. A to-do list that completes itself only works if you trust the completion. Most people's first instinct with a new agent handling real tasks is to review every output, which puts you right back where you started.
English
0
0
0
131
Pirat_Nation 🔴
Pirat_Nation 🔴@Pirat_Nation·
Microsoft is previewing Copilot Tasks, described as a "to-do list that completes itself." You give it natural-language instructions such as "cancel unused subscriptions" or "turn recent emails into slides." The AI agent breaks the task into steps, runs it autonomously in the cloud, and returns results or a report when finished. Currently in early research preview with limited access.
Pirat_Nation 🔴 tweet media
English
36
11
248
20.2K
Will Ruben
Will Ruben@willruben·
This pattern is a common arc with AI delegation broadly. People overshoot on what they hand off, hit a wall where the output isn't trustworthy enough to run unsupervised, and end up back where they started but with less visibility into what was built. The useful middle ground is narrower than it seems going in.
English
0
0
0
124
Taelin
Taelin@VictorTaelin·
Ok, I think my experiment leaving AI working on stuff 24/7 ends here. It doesn't work. Code explodes in complexity, results are not that great, the AI can't get past hard walls (it is still completely unable to even *grasp* SupGen), and it is insanely expensive (spent ~1k over the last 2 days). The best results are on the JS compiler, mostly because it is familiar (compared to inets), but not worth losing control over the codebase. I think the dream of having AI's working on the background and making real progress on things that matter (i.e., truly new things) isn't here yet. It is still a machine hard-stuck on its own training data, incapable of thinking out of the box. It is great for building things that were already built. But not new things Also coding normally has the under-appreciated advantage that you're doing two things at the same time: building a codebase *and* learning it. AI's do only half of that. The other half is obviously impossible 🤔
English
224
256
4K
340.8K
Will Ruben
Will Ruben@willruben·
The bottleneck is that most people haven't built enough trust in any single agent to stop monitoring it. When you're checking on ten things in parallel, you're not really delegating, you're supervising. The cognitive load drops significantly once you let a few of those sessions run without watching them, but getting comfortable with that takes longer than anyone expects.
English
6
0
7
1.3K
Kol Tregaskes
Kol Tregaskes@koltregaskes·
AI productivity psychosis is becoming a real issue. We are running so many autonomous agents in parallel that the cognitive load of just monitoring them is breaking people. You can delegate all day, but the fatigue of constantly directing tasks and neglecting personal downtime is catching up with the workforce. I'm running 10+ sessions simultaneously right now and I'm getting completely confused between what one is doing compared to the other. The mental overhead of tracking which agent is working on which task, what state each one is in, what needs reviewing - it's exhausting. We've automated the work but created a new bottleneck: ourselves. We're the ones watching, redirecting, approving, context-switching between parallel streams. That's not productivity. That's just distributed cognitive load. The tools let us spin up endless agents. But nobody's solved the human side - how do we actually manage this without burning out? We need better orchestration layers, not just more delegation.
English
205
36
463
42.7K
Will Ruben
Will Ruben@willruben·
The skill that atrophies fastest is the one you need most when reviewing AI output. That same dynamic plays out beyond coding. People who hand off work to an AI assistant without staying close enough to understand the decisions being made lose the ability to catch when something's off. The oversight skill and the delegation habit are in tension with each other.
English
0
0
2
355
Alex Prompter
Alex Prompter@alex_prompter·
Anthropic's own researchers just proved that using AI to learn new skills makes you 17% worse at them. and the part nobody's reading is more important than the headline. the paper is called "How AI Impacts Skill Formation." randomized experiment. 52 professional developers. real coding tasks with a Python library none of them had used before. half got an AI assistant. half didn't. the AI group scored 17% lower on the skills evaluation. Cohen's d of 0.738, p=0.010. that's a real effect. and here's what makes it sting: the AI group wasn't even faster. no significant speed improvement. they learned less AND didn't save time. but the viral framing of "AI bad for learning" misses what actually matters in this paper. the researchers watched screen recordings of every single participant. they identified 6 distinct patterns of how people use AI when learning something new. 3 of those patterns preserved learning. 3 destroyed it. the gap between them is enormous. participants who only asked AI conceptual questions scored 86% on the evaluation. participants who delegated everything to AI scored 24%. same tool. same task. same time limit. the difference was cognitive engagement. the highest-scoring AI users actually outperformed some of the no-AI group. they asked "why does this work" instead of "write this for me." they generated code then asked follow-up questions to understand it. they used AI as a thinking partner, not a replacement for thinking. the lowest-scoring group did what most people do under deadline pressure: pasted the prompt, copied the output, moved on. they finished fastest. they learned almost nothing. and here's the finding that should concern every engineering manager alive: the biggest score gap was on debugging questions. the skill you need most when supervising AI-generated code is the exact skill that atrophies fastest when you let AI do the work. the control group made more errors during the task. they hit bugs. they struggled with async concepts. 
they got frustrated. and that struggle is precisely what built their understanding. errors aren't obstacles to learning. they ARE learning. removing them with AI removes the mechanism that creates competence. participants in the AI group literally said afterward they wished they'd "paid more attention" and felt "lazy." one wrote "there are still a lot of gaps in my understanding." they could feel the hollowness of having completed something without understanding it. that's not a productivity win. that's debt. this paper isn't an argument against using AI. it's an argument against using AI unconsciously. Anthropic publishing research showing their own product can inhibit skill formation is the kind of intellectual honesty the industry needs more of. the practical takeaway is simple: if you're learning something new, use AI to ask questions, not to skip the work. the struggle is the product.
Alex Prompter tweet media
English
174
756
3K
193.8K
Will Ruben
Will Ruben@willruben·
When people walk through their daily workflow step by step, they almost always find tasks they assumed required their judgment that actually don't. The gap between "I need to do this myself" and "this could be handed off" is much wider than people think until they say it out loud.
English
0
0
0
9
Damian Player
Damian Player@damianplayer·
the reason most people have no idea what to build with AI is simple. you're too close to your own problems. you stop noticing the broken stuff after a while. the copy-paste workflow you've done 400 times. it feels normal. it's not. every one of those is a build waiting to happen. open Claude or ChatGPT and tell it about your job. what you do every day. what tools you use. what takes too long. what makes you want to quit by 2pm. then let it interview you until it finds the thing worth building. the best AI ideas come from someone finally describing their day out loud.
English
42
2
116
4.7K
Will Ruben
Will Ruben@willruben·
The decomposition intuition is the part that takes longest to build and is hardest to shortcut. People know intellectually that the agent can handle the task, but learning where exactly to draw the line between "hand this off" and "stay hands-on" only comes from watching it succeed and fail a bunch of times. That calibration period is real regardless of the domain.
English
0
0
1
239
Andrej Karpathy
Andrej Karpathy@karpathy·
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow. Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago but today it’s something you kick off and forget about for 30 minutes. As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now. 
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
English
1.6K
4.8K
37.3K
5.1M