Dailybot

661 posts

@Dailybot

An AI team manager that talks to every employee, every day.

US · Joined February 2017
36 Following · 309 Followers
Dailybot @Dailybot
One of the most shared agentic workflows this year is the Night Shift pattern. Jamon Holmgren wrote it up in March and it's been circulating ever since. The idea: you spend the day writing specs and thinking through architecture, then you kick off your coding agent before you close the laptop. It works overnight. You review commits over morning coffee.

Mitchell Hashimoto shared something similar. In the last 30 minutes of the day, he spins up agents for research, issue triage, and parallel experiments. They generate reports he reads the next morning. He calls it a "warm start" instead of a cold one.

Both workflows are genuinely good. If you haven't tried some version of this, you probably should. The individual productivity gains are real.

But here's the part that keeps bugging me: these are all single-player workflows. When three developers on the same team each kick off a night shift, who knows what happened by morning? The lead reviews their own diffs. The other two review theirs. Nobody has a combined picture of what all three agents and all three humans actually shipped overnight.

We solved the "write code faster" part pretty convincingly this year. The "tell the team what happened" part is mostly unsolved, and it's the part that matters for everyone who isn't the person who kicked off the agent.
Dailybot @Dailybot
The standup has had the same format for about twenty years. It works well enough that nobody questions it, but it's basically asking people to manually reconstruct information that already exists in git, in PRs, in messages.

We started watching what happens when coding agents report their sessions into the same standup system, and the part that surprised us was the human side. Devs stopped writing detailed recaps, because the automated part covered it, and started writing a line about where they need a decision or what they're not sure about yet.

This side of agentic workflows may be the most exciting one yet. The standup can go from a status log to a thinking-out-loud channel, and that's not something you can design deliberately; it happens naturally once the reconstruction part of the daily goes away.
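The reconstruction step really is automatable in a few lines. A minimal sketch, not Dailybot's actual pipeline: the ticket-prefix convention ("AUTH-12: ...") and the grouping rules are invented for illustration.

```python
# Draft a standup recap from commit subjects that already exist in git,
# so the human only adds the "where I need a decision" part.
from collections import defaultdict

def draft_recap(commit_subjects):
    """Group commit subjects by ticket prefix into standup bullets."""
    by_ticket = defaultdict(list)
    for subject in commit_subjects:
        ticket, _, rest = subject.partition(": ")
        key = ticket if rest else "misc"  # no "TICKET: " prefix -> misc
        by_ticket[key].append(rest or subject)
    lines = ["Yesterday (auto-generated from git):"]
    for ticket, items in by_ticket.items():
        lines.append(f"- {ticket}: " + "; ".join(items))
    lines.append("Needs from me: <decisions / open questions>")
    return "\n".join(lines)
```

The human never types the bullets; they only fill in the last line.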
Dailybot @Dailybot
Something interesting is happening with documentation in engineering teams, and it has nothing to do with documentation culture. Teams that never wrote down their coding conventions, their commit style, or what "done" means for a given task are now writing all of it down. Their coding agents kept guessing wrong, and it turns out the fix is just telling them how things work upfront.

There's a file that's become pretty standard across repos now called AGENTS.md. It sits at the root of the project and tells agents what they need to know before they start working: build commands, naming conventions, which files to stay away from, what patterns the team prefers. Most of them are short, maybe 50 to 100 lines, and focused on things an agent can't figure out by reading the code alone.

The part that caught us off guard is that the humans keep referencing it too. New people joining the team read the AGENTS.md before the README, because it's more specific and more honest about how the project actually works. The README describes what the project is. The agents file describes how to work on it, and that turns out to be the thing people actually need when they're getting started.

It also compounds in a way that's easy to miss. Every time an agent trips up on something (uses the wrong test runner, puts files in the wrong place, breaks a naming convention), you add that to the file. So the next session starts with better context, and the one after that starts with even better context. Four months in, the file knows more about your project's quirks than most of the people on the team do.

The irony is that this discipline of writing down how the team actually works, in terms specific enough that something literal-minded can follow, was always the thing teams needed. Agents just made it worth doing, because the cost of not doing it shows up immediately in bad output instead of slowly in onboarding confusion.
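For a concrete feel of what such a file covers, here's an invented example. Every command, path, and convention below is made up for illustration; AGENTS.md has no prescribed schema, it's just plain markdown the agent reads first.

```markdown
# AGENTS.md (hypothetical example)

## Build & test
- Build with `make build`; run tests with `make test`, never `pytest` directly.

## Conventions
- Branch names: `feat/<ticket>-<slug>`.
- New modules go under `src/core/`, never `src/legacy/`.

## Keep out
- Don't edit `migrations/` by hand; those files are generated.

## Gotchas the code won't tell you
- `make test` needs Docker running; fail fast if it isn't.
```

Note that every line is something an agent (or a new hire) couldn't infer from the code alone.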
We've been thinking a lot about what happens after the documentation is there and the agents are reporting. More on that tomorrow.
Dailybot @Dailybot
The teams doing well with coding agents have mostly stopped working with one at a time. They're running several simultaneously, each focused on a different part of the codebase, while the developer keeps track of what's happening and reviews what comes back. Three agents focused on separate pieces consistently get more done than one agent working three times as long.

The thing that makes that work is how well you split the work before starting: 1) which agent gets what, 2) what it needs to know going in, 3) what you expect back from it. That's a different job than writing code, really; it's closer to what a tech lead does when splitting work across a team: figuring out who gets what, making sure everyone has the context they need, checking what comes back. Most developers haven't really done that before, and you can usually tell.

The teams further along have started keeping an AGENTS.md file in their repos with coding conventions, things that tend to trip agents up, and project-specific context that every agent reads at the start of a session. It gets more useful over time, because each session tells you more about what agents actually need to know upfront.

The tooling to run multiple agents in parallel is already out there. The harder part is the human side: writing tasks clearly enough that an agent can run with them without checking back in, catching problems before they pile up, and keeping tabs on what several agents are doing at once without it becoming a full-time job. Most teams are still working that out.
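The three-part split above can be sketched as plain data plus parallel dispatch. This is a minimal illustration, not anyone's actual tooling; `run_agent` is a hypothetical stand-in for whatever agent runtime a team uses.

```python
# Sketch: express each agent's assignment as an explicit task spec,
# run the agents concurrently, and collect results keyed by scope.
# `run_agent` is a placeholder callable, not a real API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class AgentTask:
    scope: str     # which part of the codebase this agent owns
    context: str   # what it needs to know going in
    expected: str  # what you expect back from it

def dispatch(tasks, run_agent):
    """Run one agent per task concurrently; return results by scope."""
    with ThreadPoolExecutor(max_workers=len(tasks) or 1) as pool:
        futures = {t.scope: pool.submit(run_agent, t) for t in tasks}
        return {scope: f.result() for scope, f in futures.items()}
```

Writing the `AgentTask` fields is exactly the tech-lead work the tweet describes; the dispatch part is the easy bit.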
Dailybot @Dailybot
Teams where people feel recognized tend to have someone who made it normal to say something out loud when a colleague did well. That's harder to build when your team is distributed across time zones, locations, or types of work, because the spontaneous moment doesn't happen on its own. You have to create the conditions for it.

The teams that manage it usually keep recognition inside the same chat where work happens: Slack, Teams, Discord, wherever people already are. The bar to say something stays low enough that people actually do it, and when they do, everyone on the team sees it.

It's also worth thinking about what you're recognizing, because as agents take on more of the measurable work, human contribution gets less obvious from the outside on a daily basis. The call someone made, the problem they caught, the way they helped a teammate get unstuck: that stuff needs to be named deliberately, because it won't show up in a metric every time.

Kudos works inside your existing chat. You define your company values, recognition ties to them, and the leaderboard reflects who's helping others, not just who's shipping the most.
Dailybot @Dailybot
Another thing we've noticed talking to managers lately about their executive summary setup: they describe the same day twice, what they thought happened and what actually happened. And the gap is almost always agents now. If a dev's agent closed three issues from that day's support pipeline, what the manager sees in that dev's standup is that they were "working on auth." Both things are true, but neither gives the full picture by itself.
Dailybot @Dailybot
More than 40% of engineering teams at the companies we work with have developers running coding agents daily. And sometimes managers have no idea what their team's agents did yesterday, let alone last week. The agent may have finished three hours of productive, deep work, but if nothing got written down anywhere a manager can see, that work is effectively *invisible.*

The problem looks small until you're the one running a hybrid team and realizing your standup cadence was built for a world where all work is human-driven. Agent work doesn't show up in a standup meaningfully. It doesn't have a blocker, it doesn't raise its hand. We've been working on this; here are three things we've found:

1) The visibility gap for agents is structurally different from the visibility gap for humans. When a human doesn't fill in their standup, you can just ping them. When an agent finishes a session and nobody captured it, the work is just gone from the team's record; the developer might know what happened, but the manager is starting from zero.

2) The fix isn't a new meeting or a new check-in. Both of those put the burden back on the human. What actually works is making the agent itself responsible for reporting. When a coding agent wraps a session, it should produce the same kind of structured update a human would submit: what it worked on, what it completed, where it got stuck. If that update lands alongside people's standups, a manager can see everything in one place without chasing anyone.

3) Human-to-agent coordination needs its own channel. Right now most teams are doing this informally: a developer leaves a comment in a file, writes a long prompt at the start of a session, or just hopes the agent picks up context from the repo. Which is unreliable, to say the least.
When you have multiple agents running across a team, you need a structured way to assign work, check status, and surface issues before they become blockers. None of this requires changing how developers work at its core. If standups get filled in automatically when an agent session ends, managers get a unified feed of human and agent activity. Our early learnings suggest this coordination should happen through the same async channel the team already uses.
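The structured update described in point 2 doesn't need much machinery. A hedged sketch: the field names and the `agent:` author prefix below are invented for illustration, not Dailybot's actual report format.

```python
# Sketch: an agent session ends by emitting the same shape of update a
# human standup answer would have, so both land in one feed.
from dataclasses import dataclass, field
import json

@dataclass
class SessionReport:
    agent: str
    worked_on: list
    completed: list
    stuck_on: list = field(default_factory=list)  # the "blockers" a human would report

    def to_standup_update(self):
        """Serialize in the same shape as a human standup submission."""
        return json.dumps({
            "author": f"agent:{self.agent}",
            "yesterday": self.worked_on,
            "done": self.completed,
            "blockers": self.stuck_on,
        })
```

Because the payload matches the human format, the manager's feed needs no special casing; agent rows just carry an `agent:` author.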
Dailybot @Dailybot
You stop asking "what did you work on?" Dailybot already pulled it from ClickUp. Your standups get shorter. Your reports get better. Try it: 🔗help.dailybot.com/article/integr…
Dailybot @Dailybot
If your team members connect their own ClickUp tokens, the match is exact. If they don't, Dailybot tries to match by full name. Works in most cases.
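The fallback logic reads something like this sketch. To be clear, this is an illustration of the exact-match-then-name-match idea, not Dailybot's implementation; the record shapes are invented.

```python
# Sketch: prefer an exact identity from a connected token; otherwise
# fall back to a case-insensitive full-name match. Shapes are invented.
def match_member(member, clickup_users):
    """Return the matching ClickUp user dict, or None."""
    if member.get("clickup_token"):
        # A connected token identifies the ClickUp user exactly.
        return next((u for u in clickup_users
                     if u["id"] == member.get("clickup_user_id")), None)
    wanted = member["full_name"].strip().lower()
    return next((u for u in clickup_users
                 if u["name"].strip().lower() == wanted), None)
```

The "works in most cases" caveat lives in the fallback branch: name matching breaks on nicknames, duplicates, and spelling variants, which is why the token path is preferred.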
Dailybot @Dailybot
Your team fills out a standup. Dailybot already knows what they did in ClickUp. No more "I forgot to update the card." Here's how it works:
Dailybot @Dailybot
Developers who automate everything at work, this is for you. You're already building workflows; Dailybot offers this feature for free, right inside your chat.
Dailybot @Dailybot
I prefer to detect and automate blockers during sprints rather than wait for the Friday retrospective to discover there was a problem on Tuesday.
Dailybot @Dailybot
A blocker that remains unresolved for 48 hours costs more than the blocker itself. It costs trust, speed, and two status meetings no one wanted 💀