Rob May | AI & Cybersecurity Leader

43K posts

@robmay70

⭕️ Helping leaders harness AI safely | Cybersecurity Ambassador | Founder @ramsac_ltd | TEDx & Keynote Speaker | Bestselling Author | Tech & Resilience Insights

UK · Joined February 2009
7.9K Following · 19.4K Followers
Pinned Tweet
Rob May | AI & Cybersecurity Leader
⭕️ Most AI adoption fails not because the tech doesn’t work, but because there’s no clear business case. Harnessing AI helps you create policies, training, ROI measures, and governance so AI delivers value instead of chaos. amazon.co.uk/stores/Rob-May…
Rob May | AI & Cybersecurity Leader tweet media
0
22
32
3.9K
Rob May | AI & Cybersecurity Leader retweeted
OpenAI@OpenAI·
Codex Security—our application security agent—is now in research preview. openai.com/index/codex-se…
367
294
2.7K
586.7K
Rob May | AI & Cybersecurity Leader retweeted
OpenAI@OpenAI·
GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT. GPT-5.4 is also now available in the API and Codex. GPT-5.4 brings our advances in reasoning, coding, and agentic workflows into one frontier model.
OpenAI tweet media
1.9K
3.3K
23.6K
6.7M
Rob May | AI & Cybersecurity Leader
⭕ The latest Copilot release is one of the clearest signals yet that @Copilot is shifting from AI assistance to AI orchestration, and that distinction matters. The key thing isn't the number of new features; it's the architectural direction. Copilot is moving beyond "help me write this" towards "run this process for me".

The introduction of multi-agent coordination is the turning point. Copilot agents can now call other agents, enabling genuinely multi-step, cross-application workflows. We're no longer talking about answers and drafts. We're talking about execution. A task can be initiated, context gathered across systems, decisions made, and outcomes delivered end-to-end.

That changes the leadership conversation. The question is no longer "Where can we use AI?" It's "Which processes are we prepared to delegate?"

Copilot now also works directly with highlighted text, open emails, SharePoint lists, scanned PDFs and live documents. It's operating inside the real substrate of work, not responding to abstract prompts.

We're also seeing intelligence embedded directly into workflows: scheduling suggestions in Outlook, Teams recaps with visual context, Word editing enabled by default, and PowerPoint creation properly aligned to brand guidelines. Copilot is no longer a side panel; it's becoming part of the default flow of work, and that's when behaviour shifts. It's no longer a question of whether you should use AI; it simply becomes how work gets done.

Then there's governance. Readiness dashboards, power user insights, federated connectors, and Defender's risk-based agent inventory show clear maturity. Microsoft is treating AI agents as operational entities that require oversight, ownership and lifecycle management. That's significant. Once agents coordinate workflows, governance can't be an afterthought. It has to be designed in.

And one update that will quietly delight a lot of teams: Brand Kits that actually work. Upload a brand guidelines document and Copilot extracts colours, fonts and styles automatically. After years of inconsistent decks and manual fixes, this is a genuinely impactful improvement for anyone producing content at scale. It's not flashy. It's practical. And practical improvements build confidence.

Taken together, these changes point towards the emergence of AI-native organisations. Not organisations that experiment with AI, but ones where coordination, context handling and routine decision-making are increasingly system-enabled, and people focus on judgement, creativity and relationships.

The productivity lift won't come simply because Copilot has new features. It will come to organisations that are ready, with clean data, clear governance and intentional change management. The strategic question now isn't "What can Copilot do?" It's "Are we architecturally ready for AI orchestration?" 🤖
Rob May | AI & Cybersecurity Leader tweet media
4
5
10
256
Rob May | AI & Cybersecurity Leader retweeted
Microsoft Tech Community@MSTCommunity·
Learn about the new updates and features for Copilot in the What’s New in Microsoft 365 Copilot for February 2026 blog. msft.it/6013QjBNz
Microsoft Tech Community tweet media
3
16
54
3.9K
Rob May | AI & Cybersecurity Leader
⭕ The more I play with Vibe Coding (especially with the advances we've seen in recent weeks!) the more I think it's genuinely remarkable. You describe the app you want, outline a bit of logic, and within minutes you have something tangible: screens, workflows, integrations, all appearing far faster than traditional development would ever allow, and something that a few years ago would have been a six-month project with a sizeable team.

But the more I've played with it, the more a different set of questions has surfaced. Magicking an app into existence is one thing. Running it inside a real organisation is something else entirely. How do you control changes once it's live? How does it integrate safely with your existing systems? Who supports it when it breaks on a Tuesday afternoon? How do you train people so they trust it? How do you know it's secure? And what happens when it fails or is compromised?

These aren't anti-AI questions. They're operational ones. AI has made building software dramatically easier. It hasn't made governance, security, backup strategy or change management disappear. If anything, the speed makes discipline more important. The real risk isn't that AI writes poor code. It's that we start treating production systems like weekend experiments.

At @ramsac_ltd we've always believed that technology only delivers value when it's properly supported, secured, documented and owned. That principle doesn't change just because the first draft was written by a model instead of a developer.

I see vibe coding as an extraordinary prototyping accelerator. It can compress months of exploration into days and help leadership teams test ideas at very low cost. But if something proves valuable and moves towards production, it still needs source control, structured environments, security review, monitoring, backups, and a clear support model. In short, it needs grown-up operational discipline. Speed is exciting. Stability is essential.

The organisations that will win in this next phase of AI adoption won't be the ones who rush it into everything. They'll be the ones who combine AI's speed with operational maturity and thoughtful governance.

I'd be interested to hear how others are approaching this. Are you keeping AI-built tools at prototype level, or are you promoting them into production environments?
Rob May | AI & Cybersecurity Leader tweet media
1
6
7
218
Rob May | AI & Cybersecurity Leader retweeted
Rob May | AI & Cybersecurity Leader
I use all of them! It really depends on what you're trying to achieve. If the aim is quick productivity wins (drafting, summarising, structuring ideas), then tools like ChatGPT from OpenAI work well alongside Office 365 with very low friction. If the focus is heavy document work or complex analysis, Claude from Anthropic can be a strong option. If the requirement is true, native integration inside Outlook, Word, Excel and Teams, then Microsoft Copilot is really the answer, with different cost and governance considerations. The tools aren't interchangeable. Start with the outcome, then choose the tool.
1
0
1
216
Rob May | AI & Cybersecurity Leader retweeted
Sam Altman@sama·
The companies that succeed in the future are going to make very heavy use of AI. People will manage teams of agents to do very complex things. Today we are launching Frontier, a new platform to enable these companies.
1.3K
888
14.4K
1.8M
Rob May | AI & Cybersecurity Leader
⭕ One of the biggest mistakes organisations make with AI is trying to do too much, too quickly. Multiple tools are rolled out, experimentation is encouraged everywhere, and there's a quiet hope that value will emerge through volume. What usually follows is confusion, inconsistency, and a gradual loss of momentum. If you want AI to deliver real value in 2026, the answer isn't more activity. It's focus.

Over the past year, I've seen the most progress in organisations that take a very simple approach. One problem. One tool. One rule. It sounds almost too basic, but it works.

It starts with being honest about the problem you're actually trying to solve. Not "using AI better" or "being more innovative", but something tangible. Too much time spent rewriting documents. Meetings that generate discussion but little clarity. Managers spending hours preparing updates. Decisions taking longer than they should. If you can't describe the problem in a single sentence, it's probably too big to start with. AI works best when it's pointed at something specific.

Once the problem is clear, choose one main tool to address it. Not three. Not a platform plus its add-ons. One. Confidence comes from familiarity. When people use the same tool consistently, they learn its strengths, its weaknesses, and its limits. That's when sensible behaviour starts to emerge. You can always expand later. Starting wide almost always leads to shallow adoption.

The final piece, and the one most organisations skip, is agreeing one simple rule. Nothing client-related goes into AI without approval. All outputs are reviewed by a human. AI supports drafting, not final decisions. If you're unsure, ask before using it. The rule itself matters less than the fact that one exists. Rules create safety. Safety creates confidence. Confidence drives adoption.

This approach works because it removes overwhelm, builds trust, and creates momentum. Small wins build belief, and belief changes behaviour.

AI adoption rarely fails because people lack ability. It fails because they don't feel confident enough to use it properly. The organisations making the most progress aren't chasing every new feature. They're learning as they go, building good habits early, and accepting that speed without direction creates risk, tools without clarity create confusion, and technology without leadership rarely delivers value.

If you're thinking about how to approach AI in 2026, start here. What's one problem you want to solve? What's one tool you'll use to do it? And what's one rule you'll agree as a team? Get those three right, and you'll already be further ahead than most. That's how real advantage is built.
Rob May | AI & Cybersecurity Leader tweet media
2
8
9
285
Rob May | AI & Cybersecurity Leader retweeted
Microsoft Edge Dev@MSEdgeDev·
This #DataPrivacyDay, let's have an honest conversation: A lot of data breaches don't come from sophisticated attacks. They come from Kevin getting tricked by the "WARNING: Your computer has been infected" message that pops up during lunch. That's why we have Scareware blocker to protect against phishy popups. Because Kevins are gonna Kevin.
2
4
17
1.8K
Rob May | AI & Cybersecurity Leader retweeted
OpenAI@OpenAI·
Introducing Prism, a free workspace for scientists to write and collaborate on research, powered by GPT-5.2. Available today to anyone with a ChatGPT personal account: prism.openai.com
1.1K
2.3K
16.3K
5.8M