حِجَّيِ
10K posts



القاهرة ٢٩/٣/٢٠٢٦

Microsoft Threat Intelligence has observed threat actors actively experimenting with techniques to bypass or “jailbreak” AI safety controls. By reframing malicious requests, chaining instructions across multiple interactions, and misusing system‑ or developer‑style prompts, threat actors can coerce models into generating restricted content that bypasses built‑in safeguards. These techniques demonstrate how generative AI models are probed, shaped, and redirected to support reconnaissance, malware development, and social engineering while minimizing friction from moderation. AI guardrails have become dynamic surfaces that attackers test and manipulate to sustain operational advantage. As AI becomes more deeply embedded in enterprise workflows, understanding how attackers test and manipulate these guardrails is critical for defenders. Learn more about securing generative AI models on Azure AI Foundry: msft.it/6013Qs5oX



إزاي بلد فيها النيل والبحرين، يبقى السمك فيها يكون لمن استطاع إليه سبيلا، دي مش ظروف عالمية، دي عصابة جباية همها تسدد فواتير المنظرة من بطون الشعب الاخرس على حقه..!















