Pinned Tweet
Andrea | Code to People
265 posts

Andrea | Code to People
@codetopeople
Technical expertise ≠ leadership ability. Frameworks for analytical minds making the jump to management ⬇️ Free assessment: which IC habits are holding you back?
Joined February 2026
111 Following · 46 Followers

@LeadClearly1 This is one of the hardest things I find about being a manager: shielding my team from corporate politics and pushing back on unrealistic timelines.

@codetopeople Exactly. Leadership isn't about relaying pressure, it's about translating it into something your team can actually execute.

@flogatron @1ssve 100%. Saved myself months of frustration.

I'm seeing this pattern with CEO dashboards now.
CEO A:
- Spends $50K on fancy BI tools
- Has dashboards nobody trusts
- Argues with teammates about feelings instead of facts
- Systems break every few months
- Still firefighting in Slack and email
CEO B:
- Uses a Google Sheet
- Tracks metrics weekly
- Makes their team enter metrics manually
- Uses green/yellow/red stoplights
- Reviews the whole business in 15-30 minutes
In 2026, guess who scales to $20M?
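The "CEO B" workflow above can be sketched in a few lines. A minimal sketch, assuming hypothetical metric names and thresholds (none of these figures come from the original post):

```python
# Weekly metrics review with green/yellow/red stoplights, in the spirit
# of the "CEO B" Google Sheet approach. Metric names and thresholds
# below are illustrative placeholders, not real business data.

def stoplight(value, green_at, yellow_at):
    """Return a status color: green/yellow/red based on two thresholds."""
    if value >= green_at:
        return "green"
    if value >= yellow_at:
        return "yellow"
    return "red"

# metric name -> (this week's value, green threshold, yellow threshold)
weekly_metrics = {
    "new_mrr_usd": (12000, 10000, 7000),
    "retention_pct": (97.0, 98.0, 95.0),
    "pipeline_created_usd": (40000, 50000, 30000),
}

for name, (value, green_at, yellow_at) in weekly_metrics.items():
    print(f"{name}: {value} -> {stoplight(value, green_at, yellow_at)}")
```

The point of the format is speed: three colors and a handful of manually entered numbers are enough to review the whole business in one short pass.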

Fair point on the discipline angle - but that breaks down fast when you scale.
I built reporting from scratch for a CTO at a media company. Started in Excel, totally fine at first. Six months later the team and investment tripled, 20+ people trying to edit the same sheet, manual errors everywhere, no way to QC anything. The discipline was still there, but the tool just couldn’t keep up.
Once we set up the infrastructure and built a simple reporting layer on top, efficiency followed.

@codetopeople @ryandeiss Would still argue that CEO B will win. It was never about data infrastructure or automations. There is an inherent closeness to execution that happens when you have to manually source and enter a number. That discipline is what creates progress.

@sneminaj Or bad managers make the easiest job feel impossible.

@milkkarten Nope, just my real take.
I’m all for AI in the workplace. But when leadership trusts a tool’s judgment over its own team’s, that’s not an AI strategy.

@milkkarten When leadership outsources judgment to a tool, it doesn’t just slow things down - it tells your team that their expertise and experience don’t count for much. That’s a trust problem, not an AI problem.

@Stopworkplacebu Most gatekeepers weren’t born that way. They were never properly developed by their own managers, so knowledge felt like the only leverage they had. It’s a culture problem that gets passed down until someone breaks the cycle.

Hot take: being promoted into management should require as much vetting as hiring externally.
We don't ask ICs 'do you actually want to manage people?' We just assume the best performers do and promote them.
Then we're surprised when great engineers, analysts, and data scientists struggle in management roles they didn't choose. The skills that make you exceptional at the work don't automatically transfer.
Agree or disagree?

@randomrecruiter I don’t respond to “hey” until I know what it’s about. My time and your anxiety are both too valuable for that 😂

This is exactly why the analyst role isn’t going away - it’s evolving. The institutional context, the judgment about which metrics to trust, knowing what changed in the business last quarter - that lives with people, not models. The best analysts I’ve managed are the ones who can hold all of that and translate it into a decision. That’s still very human work.

Everyone is trying AI analysis with agents. No one has gotten it to be trustworthy enough to open widely to all business users.
Dozens of companies I've spoken to in the past few weeks have told me this. Why is that?
It turns out, the last 15-30% of quality and trust in analysis is really, really hard to get right.
This is what your best analysts know that AI doesn't yet:
1. Which metrics to trust, out of the mess of metrics and contradictory dashboards
2. What's changed in the business behind the numbers, including strategy, roadmaps, or one-off edge cases
3. How to actually think about the problem, including its downstream implications and decisions and how it should be scoped
4. What happened last time, and which feedback signals should alter future judgments
AI can sound smart while being wrong in exactly the ways that waste the most time.
Wrote an article explaining this, and what we can do about it, in the latest issue of Opinionated Intelligence.


@Gavel_on_X Learning to communicate up. Most people optimize for doing great work. Far fewer learn how to make that work visible to the people who make decisions about their career. It’s not politics - it’s just how organizations actually function.
