Whitesmith

1.6K posts

@whitesmithco

Turning AI experimentation into measurable impact

London, UK · Joined May 2013
385 Following · 564 Followers

Whitesmith @whitesmithco
What changes for engineers when an AI agent becomes a real teammate? We built a production wallet feature where AI wrote all the implementation code, and the team recorded what that shift actually feels like from the inside.

Whitesmith @whitesmithco
New webinar: AI for Mobile Developers. Our mobile team used AI agents to build a cross-platform wallet feature, with AI writing 100% of the implementation code. Ricardo and Rui are sharing the 4-layer system they use across iOS and Android: context, build, test, review. If your mobile team is figuring out how AI coding tools apply to native development (not just web apps), this is it. 13 May, 4:00 PM UTC+1. Register: whitesmith.co/webinar-mobile/

Whitesmith @whitesmithco
This week we're launching two webinars. One for mobile engineering teams. One for leadership teams rolling out AI across an organisation. Both built on what we've learned doing this work with real companies, not theory.

Whitesmith @whitesmithco
Our mobile team stopped splitting work by platform. iOS engineers now contribute to Android. Backend engineers contribute to mobile. AI agents handle the parts that used to require deep platform-specific knowledge. The shift wasn't about better tools. It was about restructuring the architecture so agents could produce contextually appropriate native code.

Whitesmith @whitesmithco
Practical AGI isn't one superintelligent system. It's orchestrated AI tools working together: one architects, one codes, one reviews, one tests. The competitive edge has moved from having the model to integrating it. What matters now is how you wire it into your operations.

Whitesmith @whitesmithco
We gave our team access to AI tools and expected adoption to follow. It didn't. What worked: solving one concrete pain point. QA documentation went from 4 hours to 30 minutes. After that, people came to us asking how to do the same for their work. You can't delegate AI adoption. Leadership has to model it first.

Whitesmith @whitesmithco
AI wrote 100% of the code for a production wallet feature. Real money, real users, cross-platform. The team restructured how they work so agents handle implementation while engineers focus on architecture and review. whitesmith.co/blog/how-we-bu…

Whitesmith @whitesmithco
Your team has AI tools, and some engineers use them, but the way the team works hasn't changed. That's the gap between experimentation and implementation. It's not a tools problem; it's an operating model problem.

Whitesmith @whitesmithco
We are building a tool that lets AI agents interact directly with the iOS simulator. Tap, swipe, inspect the UI, and verify that Jira tasks were actually met. The end goal: multiple agents testing the app with scripts, so QA focuses on the edge cases that need a human eye.
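One way such a driver could be shaped, sketched with Apple's real simulator CLI: `xcrun simctl` handles booting, launching, and screenshots, while tap/swipe injection is not part of `simctl` and would need extra tooling (for example an XCUITest runner), so it is out of scope here. This only builds argument vectors and does not claim to be Whitesmith's implementation; the UDID and bundle id are hypothetical.

```python
# Sketch of an agent-facing simulator driver that composes `xcrun simctl`
# commands. `simctl` covers boot/launch/screenshot; tap and swipe are NOT
# simctl features and would need a separate UI-automation layer.

def simctl(*args: str, udid: str = "booted") -> list[str]:
    """Build an `xcrun simctl <subcommand> <device> ...` argument vector."""
    return ["xcrun", "simctl", args[0], udid, *args[1:]]

def screenshot_cmd(path: str, udid: str = "booted") -> list[str]:
    # `simctl io <device> screenshot <file>` captures the current screen,
    # which an agent can then inspect against the Jira acceptance criteria.
    return ["xcrun", "simctl", "io", udid, "screenshot", path]

boot = simctl("boot", udid="ABC-123")            # hypothetical device UDID
launch = simctl("launch", "com.example.wallet")  # hypothetical bundle id
shot = screenshot_cmd("/tmp/wallet.png")

print(shot)
```

Keeping the driver as a command builder (rather than executing directly) makes it easy to log, review, and sandbox what agents are allowed to run.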

Whitesmith @whitesmithco
What if your Monday management review started with "what do we do about this" instead of "what happened"? 8 agents are continuously monitoring the business. Monday's review is built on this morning's numbers, not last Friday's. Everyone walks in already knowing what changed and where the risks are. This is what AI creating measurable value inside operations actually looks like. Not a chatbot answering questions. A system that keeps your leadership team working from the same current picture of the business.
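The aggregation step behind that picture can be sketched in a few lines: each monitoring agent drops a dated report, and the Monday brief is built from whatever is current, flagging anything stale. The report shape and agent names here are illustrative assumptions, not the actual system.

```python
# Sketch: build a review brief from per-agent monitoring reports, keeping
# only this morning's numbers and flagging agents whose data is stale.

from datetime import date

reports = [
    {"agent": "revenue",  "as_of": date(2025, 5, 12), "flag": "churn up 2%"},
    {"agent": "delivery", "as_of": date(2025, 5, 12), "flag": None},
    {"agent": "hiring",   "as_of": date(2025, 5, 9),  "flag": "offer declined"},
]

def monday_brief(reports: list[dict], today: date) -> dict:
    fresh = [r for r in reports if r["as_of"] == today]
    stale = [r["agent"] for r in reports if r["as_of"] != today]
    risks = [r["flag"] for r in fresh if r["flag"]]
    return {"risks": risks, "stale_agents": stale}

print(monday_brief(reports, date(2025, 5, 12)))
```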

Whitesmith @whitesmithco
What does AI-native mobile development actually look like? Ricardo feeds product requirements to multiple agents simultaneously, building iOS and Android features in parallel. A dedicated agent handles first-pass code review on every PR. Another writes UI tests from app screenshots.

Whitesmith @whitesmithco
Most AI agent setups: one developer, one agent, one task. Where the value actually is: agents provisioned with full multi-repo context, sandboxed for security, supervised through PRs. Our playbook: whitesmith.co/blog/scaling-a…

Whitesmith @whitesmithco
As practitioners, we run monthly hackathons! In our last one, Pedro built a tool that generates, validates, and modifies Tailscale ACLs from plain-text descriptions. You describe what you want in English; it writes the ACL, checks it against your existing config, and flags conflicts. One engineer, four hours, and a tool that replaces hours of manual ACL management for any team running Tailscale.
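A tiny sketch of the validation half of such a tool: Tailscale policies are JSON-like files with an "acls" list of `{action, src, dst}` rules, and one simple class of conflict is a rule whose src/dst pairs are already covered by an earlier rule. That definition of "conflict" is an assumption for illustration, not Pedro's actual logic.

```python
# Flag redundant rules in a Tailscale-style ACL policy: a rule is flagged
# when every (src, dst) pair it grants already appeared in an earlier rule.

import json

POLICY = json.loads("""
{
  "acls": [
    {"action": "accept", "src": ["group:eng"], "dst": ["tag:prod:443"]},
    {"action": "accept", "src": ["group:eng"], "dst": ["tag:prod:443"]}
  ]
}
""")

def redundant_rules(policy: dict) -> list[int]:
    """Return indices of rules fully shadowed by earlier rules."""
    seen: set[tuple[str, str]] = set()
    flagged = []
    for i, rule in enumerate(policy.get("acls", [])):
        pairs = {(s, d) for s in rule["src"] for d in rule["dst"]}
        if pairs <= seen:  # every pair already granted earlier
            flagged.append(i)
        seen |= pairs
    return flagged

print(redundant_rules(POLICY))  # the second rule duplicates the first
```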

Whitesmith @whitesmithco
A polished landing page used to mean someone invested real time and money; now it takes 20 minutes and a prompt. GitHub repos, portfolios, even customer testimonials: the signals we used to trust are trivially synthetic. The question isn't whether AI content is good enough. It's that you can no longer tell the difference. What still works: named people, public track records, and radical transparency about what went wrong, not just what went right. whitesmith.co/blog/how-is-ai…

Whitesmith @whitesmithco
Last week, we ran our Claude for Leaders webinar. Instead of talking about what AI could do, we showed workflows we actually use. This was one of them: every morning, one command gets you to your first decision at 9:05 instead of after 45 minutes of manual triage. Multiply that across 50 weeks and you recover roughly 170 hours a year. Not by working faster, but by skipping the work that didn't need you in the first place. That's the kind of thing we covered: real tools, running in live environments, built by the same team that implements them for clients.

Whitesmith @whitesmithco
"I'm pretty sure Whitesmith is the right company. Never saw anyone doing this." That's from an engineering lead after the first round of AI workshops we're running at his company.

We're working with engineering leads to redesign how their teams use AI inside their actual delivery workflows, not in a sandbox. The reaction makes sense once you see what the workshops look like: we don't present slides about AI capabilities. We open our own Claude Code setups and show how our team uses them to ship real projects. When a lead sees a working delivery process powered by AI, the conversation moves quickly from "should we try this?" to "how do we implement it for our team?"

Most engineering organisations already have AI tools available. The gap isn't access. It's knowing how to change the way a team works so that AI creates measurable value instead of sitting in the background. That's the operating model problem, and it's what we focus on.

Whitesmith @whitesmithco
"AI strategy" that's really just a list of tools you plan to buy is not a strategy. Strategy means: what operating model changes are you making? Who owns the outcomes? Where do you prove value before you scale? Most companies skip those questions entirely.

Whitesmith @whitesmithco
Most teams use AI for individual tasks: autocomplete here, a quick question there, maybe some boilerplate generation. We embedded it into how we move from requirements to a shipped product.

Whitesmith @whitesmithco
As practitioners, we do AI hackathons every month. This month, Rui built a personal AI assistant on a $7/month VPS. It runs on Telegram. Manages his tasks, syncs with Google Calendar, and extracts context from every conversation to nudge him later. Just Claude Agent SDK, a Telegram bot, a cron job, and a file for life context.
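The "file for life context" piece of that setup can be sketched simply: each incoming message is mined for facts worth remembering, appended to a plain-text file, and a scheduled job (the cron part) reads it back to produce nudges. The `extract_facts` stub stands in for a real model call (e.g. via the Claude Agent SDK); the file name and heuristic are illustrative assumptions.

```python
# Sketch of the context file at the heart of a tiny personal assistant:
# remember facts from conversations, nudge about them later via cron.

from pathlib import Path

CONTEXT_FILE = Path("life_context.txt")  # hypothetical path
CONTEXT_FILE.unlink(missing_ok=True)     # start clean for this demo

def extract_facts(message: str) -> list[str]:
    # Real version: ask the model "what should I remember from this?"
    # Stub heuristic: remember anything phrased as a reminder.
    return [message] if "remind me" in message.lower() else []

def remember(message: str) -> None:
    for fact in extract_facts(message):
        with CONTEXT_FILE.open("a") as f:
            f.write(fact + "\n")

def nudges() -> list[str]:
    # What the cron job would send back over Telegram.
    if not CONTEXT_FILE.exists():
        return []
    return [f"Nudge: {line}" for line in CONTEXT_FILE.read_text().splitlines()]

remember("Remind me to renew my passport")
remember("What's the weather like?")  # not worth remembering
print(nudges())
```

A flat file plus a cron job is the whole persistence layer here, which is what keeps a setup like this cheap enough to run on a $7/month VPS.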

Whitesmith @whitesmithco
As practitioners, we do AI hackathons every month. This month, Tomas built a Claude Code plugin that spins up three AI agents to analyse any problem:
1. Researcher: searches the web for context
2. Thinker: analyses alternatives in your codebase
3. Creative: looks for unconventional solutions
A leader agent compiles everything into one answer.
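The fan-out/compile pattern behind that plugin can be sketched as follows: three specialists look at the same problem in parallel, and a leader merges their answers. The agent functions are stubs standing in for model calls; in the real plugin each would be a Claude subagent.

```python
# Sketch: fan a problem out to three specialist agents in parallel, then
# compile their findings with a leader agent. Stubs replace model calls.

from concurrent.futures import ThreadPoolExecutor

def researcher(problem: str) -> str:
    return f"web context for {problem!r}"

def thinker(problem: str) -> str:
    return f"codebase alternatives for {problem!r}"

def creative(problem: str) -> str:
    return f"unconventional angle on {problem!r}"

def leader(findings: dict[str, str]) -> str:
    """Compile the specialists' findings into one answer."""
    return "\n".join(f"[{name}] {text}" for name, text in findings.items())

def analyse(problem: str) -> str:
    agents = {"researcher": researcher, "thinker": thinker, "creative": creative}
    # Run the specialists concurrently, as independent model calls would be.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, problem) for name, fn in agents.items()}
        findings = {name: fut.result() for name, fut in futures.items()}
    return leader(findings)

print(analyse("flaky payment webhook"))
```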