ramsac

6.3K posts


@ramsac_ltd

We provide secure IT outsourcing for UK businesses of 50–500 users, with 24-hour support, cyber defence, AI and expertly led IT projects – we are the secure choice.

United Kingdom · Joined April 2009
622 Following · 1.3K Followers
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕ Everyone now agrees that AI is a game changer, and in most of the conversations I’m having with senior leaders there’s genuine belief behind that. There’s investment, there are initiatives underway, and there’s no shortage of intent to do something meaningful with it.

What I’m finding more interesting is what happens when you take the conversation one step further. When I ask what that actually means for their operating model, the confidence often starts to soften. The discussion shifts quite quickly into tools, pilots, and use cases, all of which matter, but none of which really answer the question.

What’s often missing is a clear view of how AI is going to change the way the business is structured, how decisions are made, how work flows through teams, and how roles are likely to evolve. There’s activity, but not always alignment around what it adds up to.

There’s a reason for that. It’s relatively straightforward to layer AI onto existing processes and make them faster or cheaper. It’s much harder to step back and ask whether those processes should exist in their current form at all, or whether the balance between human effort and machine capability needs to be fundamentally rethought.

In a conversation this week, a CEO told me she felt they were making strong progress with AI. When we explored what had actually changed in how their teams operated day to day, the honest answer was very little. The tools were there, but the system around them hadn’t really shifted.

That gap feels like where a lot of organisations are right now. There’s momentum around adoption, but less clarity around transformation. If AI is only being used to improve the speed of what already exists, then there’s a risk that its impact is being contained rather than realised.

The harder questions are the ones that tend to get deferred: what work no longer needs to exist in its current form? Where does human judgement genuinely add value? How should decision-making change when information and capability are available in real time? And what does that mean for roles, structure, and accountability?

Those are operating model questions, and they’re not easy ones to answer. They require trade-offs, they create uncertainty, and they often challenge established ways of working that have been in place for years. But they’re also the questions that determine whether AI becomes a genuine enabler of change or just another layer of optimisation.

I’d be interested to hear how others are approaching this, particularly where you’re seeing real shifts in how organisations operate rather than just how they experiment.
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕ The more I play with vibe coding (especially with the advances we've seen in recent weeks!), the more I think it’s genuinely remarkable. You describe the app you want, outline a bit of logic, and within minutes you have something tangible: screens, workflows, integrations, all appearing far faster than traditional development would ever allow, and something that a few years ago would have been a six-month project with a sizeable team.

But the more I’ve played with it, the more a different set of questions has surfaced. Magicking an app into existence is one thing. Running it inside a real organisation is something else entirely. How do you control changes once it’s live? How does it integrate safely with your existing systems? Who supports it when it breaks on a Tuesday afternoon? How do you train people so they trust it? How do you know it’s secure? And what happens when it fails or is compromised?

These aren’t anti-AI questions. They’re operational security questions. AI has made building software dramatically easier. It hasn’t made governance, security, backup strategy or change management disappear. If anything, the speed makes discipline more important. The real risk isn’t that AI writes poor code. It’s that we start treating production systems like weekend experiments.

At @ramsac_ltd we’ve always believed that technology only delivers value when it’s properly supported, secured, documented and owned. That principle doesn’t change just because the first draft was written by a model instead of a developer.

I see vibe coding as an extraordinary prototyping accelerator. It can compress months of exploration into days and help leadership teams test ideas at very low cost. But if something proves valuable and moves towards production, it still needs source control, structured environments, security review, monitoring, backups, and a clear support model. In short, it needs grown-up operational discipline. Speed is exciting. Stability is essential.

The organisations that will win in this next phase of AI adoption won’t be the ones who rush it into everything. They’ll be the ones who combine AI’s speed with operational maturity and thoughtful governance.

I’d be interested to hear how others are approaching this. Are you keeping AI-built tools at prototype level, or are you promoting them into production environments?
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕ “Why can’t this be done by AI?” is becoming a default question.

Over the past few months, I’ve been travelling the country delivering AI talks and leadership workshops. Different sectors, different sizes of organisation, very similar conversations, and I think a pattern is starting to emerge. In more and more firms, if you’re presenting a business case to increase headcount, you’re now expected to answer one question upfront: why can’t this role be done by AI?

On the surface, that feels reasonable. AI can automate tasks, improve efficiency, and reduce cost. Leaders are rightly asking hard questions before approving new roles. But here’s the tension I’m seeing. The question is being asked before the work is properly understood. Before judgement, accountability, and human context are considered. Before people are clear on what good looks like, not just what’s cheap or fast.

I'm a huge fan of AI, and it is excellent at tasks, but it’s far less reliable at responsibility. Many of the roles being challenged aren’t about producing outputs. They’re about making decisions, handling ambiguity, managing risk, and carrying consequences when things go wrong.

When we reduce headcount conversations to “human versus AI”, we miss the point. The real question isn’t "Can AI do this?", it’s "Where should human judgement sit?"

The healthiest organisations I’m working with aren’t using AI as a gatekeeper to stop hiring. They’re using it as a thinking tool that helps to design aspects of job roles. They ask:

🤖 Which parts of this role should AI support?
🤖 Where must human judgement remain non-negotiable?
🤖 How do we combine the two responsibly?

If we get this wrong, we won’t just under-hire, we’ll under-think! AI should reduce busywork; it shouldn't remove accountability. It should raise the bar for human contribution, not quietly lower it, and the organisations that get this right will build stronger teams, and they’ll make better decisions along the way.

That’s the conversation I believe leaders need to be having now. How is this question being handled where you work, and what’s it changing about the way roles are designed?
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ ChatGPT 5.2 has had a major upgrade on image creation and editing. Take a picture of yourself and then try this prompt: Create a photo-style line drawing / ink sketch of a face identical to the uploaded reference image — keep every facial feature, proportion, and expression exactly the same. Use green and white ink tones with intricate, fine line detailing, drawn on a notebook-page style background. Show a right hand holding a pen and an eraser near the sketch, as if the artist is still working.
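For anyone who prefers to script this, here is a minimal Python sketch of sending the same prompt to an image-editing endpoint. The OpenAI Python SDK call and the `gpt-image-1` model name are assumptions on my part — check the current API documentation before relying on either.

```python
# Illustrative sketch only: the model name "gpt-image-1" and the
# images.edit usage shown in the comments are assumptions, not
# something confirmed by the post above.

SKETCH_PROMPT = (
    "Create a photo-style line drawing / ink sketch of a face identical "
    "to the uploaded reference image — keep every facial feature, "
    "proportion, and expression exactly the same. Use green and white "
    "ink tones with intricate, fine line detailing, drawn on a "
    "notebook-page style background. Show a right hand holding a pen "
    "and an eraser near the sketch, as if the artist is still working."
)

def build_edit_request(image_path: str) -> dict:
    """Assemble the parameters for an image-edit call."""
    return {
        "model": "gpt-image-1",   # assumed model name
        "prompt": SKETCH_PROMPT,
        "image": image_path,      # path to your reference photo
    }

# To actually run it (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# req = build_edit_request("me.png")
# result = client.images.edit(
#     model=req["model"],
#     image=open(req["image"], "rb"),
#     prompt=req["prompt"],
# )
```

The prompt text is taken verbatim from the post; only the wrapper code is invented.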
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ With GPT-5 rumors heating up and deepfakes evolving faster than ever, one thing's clear: AI governance isn't optional anymore, it's the new frontline for cybersecurity leaders. What's the biggest AI risk your team is preparing for in 2026? Deepfakes, data poisoning, or something else? #AISecurity
ramsac retweeted
Rob May | AI & Cybersecurity Leader
@OpenAI has just released GPT-5.2, and the results speak for themselves. It is not just a marginal improvement on earlier models. It is a genuine step forward.

What stood out to me is the consistency of the gains. Whether the tests covered science questions, tricky maths, interpreting diagrams or day-to-day knowledge work, 5.2 performed better across the board. In some areas the jump is remarkable.

You don't need to know the detail behind the benchmarks to understand the significance. These tests exist to measure how well an AI can think, solve problems and explain its reasoning. GPT-5.2 does all of that with a level of reliability we simply have not seen before.

For most people the takeaway is straightforward. Work that felt too complex for AI a year ago is now well within reach. Long, multi-step tasks, planning, research and problem solving all feel more stable and dependable.

The advantage will belong to those who move early. The tools are ready, and the capability is real. What matters now is how fast we can develop the habits and workflows that make the most of it.
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ Organisations with Cyber Essentials are 92% less likely to make a claim on their cyber insurance. I know some of my infosec colleagues raise an eyebrow at schemes like Cyber Essentials, but the return on investment is compelling. It is one of those controls that quietly reduces risk every single day. If you are considering how to strengthen your baseline controls, I am always happy to share what we see working well across other organisations. Source: National Cyber Security Centre #CyberSecurity #Leadership #Technology
Rob May | AI & Cybersecurity Leader tweet media
English
2
10
13
747
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ More than 18 billion messages are now sent to OpenAI's ChatGPT every single week!
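To put that weekly figure in perspective, a quick back-of-the-envelope calculation (simple arithmetic only; nothing beyond the 18 billion number comes from the post):

```python
# Rough scale check on "18 billion messages a week".
WEEKLY_MESSAGES = 18_000_000_000
SECONDS_PER_WEEK = 7 * 24 * 60 * 60  # 604,800

per_day = WEEKLY_MESSAGES / 7
per_second = WEEKLY_MESSAGES / SECONDS_PER_WEEK

print(f"{per_day:,.0f} messages per day")       # ~2.6 billion per day
print(f"{per_second:,.0f} messages per second")  # ~29,800 per second
```

That is roughly 30,000 messages arriving every second, around the clock.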
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ Most people haven't heard of it yet, but Microsoft quietly announced something at Ignite that could be the real shift in how you work.

Ignite is Microsoft’s annual global conference for technology leaders, and it took place this week with a wave of AI updates that will shape how organisations use Office in 2026 and beyond.

A significant announcement was Work IQ. This is the new intelligence layer that sits behind Copilot and understands how you work, who you collaborate with and the communication style you prefer. It goes well beyond prompts. It brings context, memory and personal awareness into Office in a way Copilot hasn't had before.

So what does that actually mean for the average user?

Work IQ recognises the projects you're involved in, the people you work with most and the tone you use in different settings. When you ask Copilot to draft something, it reflects how you normally write, not how a generic model writes.

It keeps track of your work. If you are preparing a report, reviewing a proposal and dealing with a client request, Work IQ can join the threads and remind you what has changed since you last touched each piece. It can surface the right file, explain context and guide you to the next step.

It also understands your rhythm. If you tend to focus on strategic work in the morning and admin at the end of the day, it adapts its suggestions. It supports your natural flow rather than disrupting it.

Work IQ is in early access now and will begin to reach UK organisations from December, with a wider rollout through the first part of 2026.

Alongside this, Word, Excel and PowerPoint have gained built-in Copilot support that feels more natural. You stay inside the app, ask for what you need and iterate without breaking your flow. Microsoft said their goal is simple: a Copilot that joins the dots across Office, Outlook and Teams and behaves consistently and safely across everything you do.

If you want independent guidance on what this means for your organisation, @ramsac_ltd can help. As of this weekend, Work IQ is already included in the SLT and Board AI briefings I deliver, so if you want your leadership team to understand the impact and the opportunities, just let me know.
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ I have been trying out @OpenAI's new GPT-5.1 and its new Thinking mode, and what stands out is not a headline feature, but a change in the overall experience. It feels more grounded, more consistent and far more capable of following the intent behind a request rather than simply reacting to the words.

The everyday model now feels smooth enough to rely on without second-guessing it, and the Thinking version has the patience and depth that proper analysis requires. It is less about being clever and more about being useful, which is what most people actually need from these tools.

What interests me most is the direction of travel. You can see the care going into how the model responds, how it reasons and how it stays aligned with the boundaries that matter. If this continues, we move closer to AI that supports better judgement and clearer decision making, not just quicker outputs.

For anyone who has felt that recent versions were losing some of their reliability, this release feels like a return to form, and a promising sign of what is coming next.
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ Question: would you trust AI to draft your legal arguments?

A recent article by Jonathan Ames in The Times caught my attention. It tells the story of a barrister who was rebuked after relying on AI-generated case citations that turned out to be “entirely fictitious.” The judge described it as a “waste of the tribunal’s time,” and warned of the risks to public confidence when artificial intelligence is misused.

As someone who regularly speaks to legal professionals about the responsible use of AI, I found this case a striking example of both the promise and the peril of these tools. AI can save time, improve access to knowledge, and support legal research, but it cannot replace professional judgement, ethics, or the responsibility to verify accuracy.

Lady Justice Sharp recently warned of “serious implications for the administration of justice” if lawyers fail to check the accuracy of AI outputs. This story brings that warning to life.

For me, the takeaway is simple. The issue isn’t that AI was used, but that it was used without human oversight. In law, as in every field, we must lead technology, not be led by it. Let me know, what do you think?
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕ My latest book is more than a read, it is a challenge! The AI Companion for Leaders is a 12-month journey made up of 365 daily prompt challenges. Each one is designed to develop your AI skills, stretch your leadership thinking and challenge your current assumptions. Every day you get one question. One moment to pause. One opportunity to sharpen how you think, decide and lead. Used with AI tools like ChatGPT or Copilot, these prompts help you practise the art of good prompting. They show you how to use AI to think better, not just faster. 📘 The AI Companion for Leaders: 365 Prompts to Sharpen Thinking and Strengthen Leadership: amzn.eu/d/bcTM4Sw Are you ready to take the 12-month challenge with me? #Leadership #ArtificialIntelligence #AIForLeaders #ContinuousLearning #Reflection
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ Book Launch Day!! It started with a simple idea. I began sharing a Prompt of the Day series on LinkedIn, offering thought-provoking questions to help leaders think more clearly and use AI more effectively. The response was amazing. Every comment and conversation confirmed that people wanted practical, human ways to bring AI into their leadership. So I turned those prompts into a book.

The AI Companion for Leaders: 365 Prompts to Sharpen Thinking and Strengthen Leadership is out today. It is a daily guide for reflection, clarity and courageous decision-making. It is for anyone who wants to use AI to become more human, not less.

You can find it here 👇
📘 amzn.eu/d/8GS7zb3

Thank you to everyone who read and engaged with the original posts. Your curiosity helped make this book possible. It also makes a great stocking filler for anyone ready to start using AI thoughtfully and confidently 😊

Here’s to better questions, sharper thinking and more human leadership. #Leadership #ArtificialIntelligence #AIForLeaders #BookLaunch #reflection
ramsac retweeted
Microsoft 365 @Microsoft365
One place for all the agents that work together with Microsoft 365 Copilot. The Agent Store is your hub inside Copilot, packed with agents ready to help you automate, create, and get things done. Also available in Microsoft Marketplace. Browse, try, install. Learn more: msft.it/6019ssiEU
ramsac retweeted
Rob May | AI & Cybersecurity Leader
⭕️ I laughed when I saw this meme, but it does make you think. People now share more with AI than they do with parents, friends or even therapists. It feels safe, private and instantly responsive. That can be useful for thinking aloud or exploring ideas without fear of judgement. But there is a risk. AI can sometimes be a little too agreeable. It mirrors what you say and may reinforce your views rather than challenge them. That kind of digital sycophancy can make us feel validated, but it does not always help us grow. So yes, this meme is funny, but it is also a reminder. Use AI as a tool for reflection and creativity, but keep your deepest conversations with real people. That is where perspective, empathy and true understanding live. Do you think AI ever flatters us a little too much?
ramsac retweeted
Microsoft Security @msftsecurity
Effective AI security starts with implementation. Security leaders: learn about the benefits of a unified security solution and how an AI-powered security operations center empowers your teams: msft.it/6017s5oFB #AI #SecOps