Robert Johnston
@AdluminCEO
CEO, Adlumin, Inc (@Adlumin) #ifollowback
Joined July 2014
932 Following · 666 Followers
683 posts
RT @RT_com:
Iran Armed Forces: SERIOUS consequences IF Trump bombs power plants: the Strait of Hormuz will be COMPLETELY CLOSED until the power plants are REBUILT; ALL Israeli energy infrastructure will be targeted; power plants of countries in the region hosting US bases will become LEGITIMATE targets.

Quoting @RT_com:
Bessent asked about Trump's threat to bomb Iranian power plants: "Has Prez changed his mind about winding down the war?" BESSENT: "this is the ONLY language the Iranians understand"

73 replies · 872 reposts · 2.9K likes · 225.5K views
Ehsan 𓄂☼ 🇮🇷:
This is the best visual reconstruction I've seen of the "F-35 shoot-down" story from #جاعش. In short: they tracked the F-35 with thermal sensors and fired a heat-seeking missile at it. But the F-35 has an answer for that too: it fires countermeasures that decoy the missile; the missile detonates and its shrapnel hits the jet.
16 replies · 56 reposts · 568 likes · 70K views
Tablesalt 🇨🇦🇺🇸 @Tablesalt13:
🚨BREAKING - MASSIVE NON-NUCLEAR EXPLOSION IN QOM, IRAN JUST NOW. Qom is Iran's clerical capital. Lots of speculation that the USA has used its 30,000 lb GBU-57A/B Massive Ordnance Penetrator (MOP), which has RARELY been used and can penetrate 200 feet into the earth.
2.2K replies · 9K reposts · 59.4K likes · 6M views
EHSAN KARAMI احسان کرمی:
#مجتبی_خامنه‌ای is hospitalized at Sina Hospital. He was severely injured in the abdomen and leg and is currently on a ventilator. He does not even know yet that a war has broken out and that his family was killed in the bombing, let alone that he has been named Leader! His surgery was performed by Dr. Zafarghandi together with Dr. Marashi (brother-in-law of Hashemi Rafsanjani, head of the surgery department at Beheshti University). The reason Pezeshkian appeared beside Zafarghandi two days ago was a visit to Mojtaba, and he is fully aware of the younger Khamenei's condition. The atmosphere at Sina Hospital is heavily securitized, and entry to the ICU is allowed only for a select few.
898 replies · 2.9K reposts · 13.9K likes · 1.4M views
Robert Johnston @AdluminCEO:
When companies add AI helpers without fixing old, messy access rules, the AI can quickly find and share sensitive "hidden" data that was never meant to be easy to see. To prevent this, they must clean up permissions and use tools like Data Security Posture Management (software that checks data access) to tightly control what the AI can read. Done right, AI shifts from a security risk to a trusted, safe productivity tool. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 34 views
Robert Johnston @AdluminCEO:
Companies are using more autonomous AI tools that can act on their own, but many can only watch what these systems do—not stop them if something goes wrong. The real risk is "agentic drift," when AI goes off-task or is tricked into harmful actions. The fix is adding real-time controls and a true kill switch, so AI can be paused or shut down instantly without breaking the business. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 32 views
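The real-time control and "true kill switch" described above can be sketched in a few lines: a supervisor gates every agent action behind a shared stop flag, so tripping the switch halts the agent instantly. All names here (AgentSupervisor, KillSwitchTripped) are hypothetical, not any specific product's API.

```python
"""Minimal sketch of a real-time kill switch for an autonomous agent."""
import threading

class KillSwitchTripped(Exception):
    pass

class AgentSupervisor:
    """Gates every agent action behind a shared stop flag, so the agent
    can be paused or halted instantly without restarting the host process."""

    def __init__(self):
        self._stop = threading.Event()
        self.audit_log = []

    def kill(self, reason: str):
        """Trip the switch; every subsequent action is blocked."""
        self.audit_log.append(("KILL", reason))
        self._stop.set()

    def execute(self, action_name: str, action_fn):
        """Run one agent action only if the switch has not been tripped."""
        if self._stop.is_set():
            raise KillSwitchTripped(f"blocked: {action_name}")
        self.audit_log.append(("RUN", action_name))
        return action_fn()

sup = AgentSupervisor()
sup.execute("send_summary", lambda: "ok")   # allowed while the switch is open
sup.kill("agentic drift detected")
try:
    sup.execute("wire_transfer", lambda: "oops")
except KillSwitchTripped:
    pass  # the action is blocked before it ever runs
```

Because the flag is a thread-safe event checked on every call, a monitoring thread can trip it the moment drift is detected, without waiting for the agent to cooperate.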
Robert Johnston @AdluminCEO:
In 2026, the biggest risk in AI agents isn't what they write, it's what they do after they "read" your world. When an agent can open emails, interpret images, and call internal APIs, a poisoned invoice or screenshot can slip in hidden instructions and hijack the workflow. It's like a Trojan sticky note inside a normal document, and the model can't reliably tell "data" from "commands." In real life this looks like an accounts-payable copilot scanning a PDF and suddenly sending sensitive files to the wrong place, with no click from you. The fix is to stop asking the model to police itself and treat its output as an untrusted proposal. A separate, deterministic rules layer checks every planned action against hard policies before anything executes, so only verified moves happen. Then security shifts from "we hope it behaves" to "we can prove it won't." Teams can run autonomous agents in finance, healthcare, and ops without fearing every new input or model update. #cybersecurity #infosec #AI
1 reply · 0 reposts · 0 likes · 61 views
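The "untrusted proposal" pattern above can be sketched as a deterministic policy gate: the model proposes an action as data, and plain predicates (no model involved) decide whether it runs. The policy names and action shapes below are illustrative assumptions, not a real product's schema.

```python
"""Sketch of a deterministic rules layer that vets every planned agent
action against hard policies before anything executes."""

POLICIES = [
    # (policy name, predicate that must hold for the action to be allowed)
    ("no external recipients", lambda a: not (
        a["type"] == "send_file" and not a["recipient"].endswith("@example.com"))),
    ("payments require approved vendor", lambda a: not (
        a["type"] == "pay_invoice" and a["vendor"] not in {"acme-corp"})),
]

def vet(proposed_action: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violated_policy_names) for one proposed action."""
    violations = [name for name, ok in POLICIES if not ok(proposed_action)]
    return (not violations, violations)

# A poisoned invoice tricks the agent into proposing an exfiltration;
# the gate blocks it regardless of why the model proposed it:
allowed, why = vet({"type": "send_file", "recipient": "attacker@evil.test"})
# allowed is False; why == ["no external recipients"]
```

The key design choice is that the predicates are ordinary code with no model in the loop, so the same input always yields the same verdict — the "we can prove it won't" property the tweet describes.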
Robert Johnston @AdluminCEO:
Right now, AI copilots can quietly gain too much access through small setup errors, exposing sensitive data and turning simple mistakes into fast-moving insider threats. To fix this, a zero-trust approach checks every change before it runs and locks agents inside strict, verified limits. This means companies can use powerful AI tools with confidence, knowing their data stays protected. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 40 views
Robert Johnston @AdluminCEO:
Companies are stuck because AI laws differ across countries and even states, so a system that's legal in one place can be banned in another. The fix is to build "compliance-by-design," meaning tools that automatically check and track whether AI follows each set of rules in real time. Now, proving your AI is trustworthy isn't just about safety—it's key to winning contracts and growing faster. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 72 views
Robert Johnston @AdluminCEO:
Old access systems let AI helpers inherit too much power, so a simple question can spill sensitive data fast. The fix is Zero Trust (never trust by default): check every AI request in real time and give only the exact access needed for that moment. This keeps AI useful while stopping quiet data leaks. #cybersecurity #infosec #AI
0 replies · 0 reposts · 1 like · 63 views
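The per-request Zero Trust check above reduces to a small authorization function: no inherited standing access, every call evaluated against the caller's role and the resource's sensitivity, unknown data denied by default. The roles, labels, and file names here are made-up examples.

```python
"""Sketch of per-request Zero Trust authorization for an AI helper."""

RESOURCE_LABELS = {"q3_forecast.xlsx": "confidential", "lunch_menu.md": "public"}
ROLE_CLEARANCE = {
    "finance-agent": {"public", "confidential"},
    "chat-copilot": {"public"},
}

def authorize(agent_role: str, resource: str) -> bool:
    """Grant only the access needed for this one request; nothing is
    inherited, and unlabeled data is treated as confidential (default-deny)."""
    label = RESOURCE_LABELS.get(resource, "confidential")
    return label in ROLE_CLEARANCE.get(agent_role, set())

authorize("chat-copilot", "lunch_menu.md")     # True: public data, any role
authorize("chat-copilot", "q3_forecast.xlsx")  # False: no quiet data leak
```

Because the check runs on every request rather than at setup time, tightening a label or a role's clearance takes effect immediately — the "only the exact access needed for that moment" behavior the tweet calls for.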
Robert Johnston @AdluminCEO:
New AI copilots can accidentally reveal HR files or plans because old sharing settings let them see too much and pull it into answers. Fix it by cleaning up permissions and limiting what the AI can search so only approved content is included. Then people can use AI confidently, work faster, and stay compliant. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 22 views
Robert Johnston @AdluminCEO:
Most regulated firms are guarding the front door, while AI is slipping in through the side entrance of everyday cloud apps. A "trusted" tool can quietly add an AI helper that reads customer notes or draft contracts using a third party you never approved. It's like giving a spare key to a contractor and forgetting who made copies. Then the model changes under you, and your audit trail turns into guesswork during an exam. The fix is automated AI governance that watches permissions in real time and flags hidden AI connections before data leaves. Pair it with an answer-with-citations layer so every output points back to an unchanging source, and keep models under continuous monitoring so drift is caught the same day. Now compliance becomes an always-on guardrail, not a last-minute brake. You can scale AI with proof in hand, and meet regulators with calm receipts. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 33 views
Robert Johnston @AdluminCEO:
Right now, AI helpers are set up once and trusted forever, so small changes can quietly give them too much power and risk data. To fix this, teams check each AI's identity and rules before every action, blocking anything outside what's allowed. This means AI can move fast while staying safe, clear, and under control. #cybersecurity #infosec #AI
0 replies · 0 reposts · 1 like · 20 views
Robert Johnston @AdluminCEO:
Cyber attacks now use smart AI that hijacks normal tools, so old defenses fail. The fix is resilience: always watching networks with friendly AI that checks behavior (how things act) and identity trust in real time, not just blocking known bad code. This cuts response from hours to seconds and keeps business running safely. #cybersecurity #infosec #AI
1 reply · 0 reposts · 2 likes · 53 views
Robert Johnston @AdluminCEO:
New California and EU AI rules collide in 2026, so teams can't scale manual checks and end up pausing launches to avoid big fines. Companies fix this by writing compliance rules into software at the AI system's front door, using ISO 42001 as a shared standard. Then AI can ship worldwide with built-in proof it's safe and traceable, speeding product work. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 43 views
Robert Johnston @AdluminCEO:
Turning on an AI copilot often doesn't create new access, it creates new discovery. Most companies have "permission sprawl," where too many people technically can open too many files, but they don't because they can't find them. A copilot changes that by indexing everything you can read and pulling it into answers. Someone asks, "What's our 2026 plan?" and suddenly a buried HR or legal doc gets summarized in the chat. The fix is Zero Trust data governance, backed by DSPM for AI, meaning a scanner that shows where sensitive data sits and how far it could spread. Then you use Restricted Content Discovery to keep high-risk sites out of the copilot's search brain, while sensitivity labels teach it what not to touch. Do this first and your messy attic becomes a labeled library. People move faster, and leadership stops treating AI like a leak waiting to happen. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 17 views
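The restricted-discovery idea above amounts to filtering the copilot's index, not its answers: high-risk sites and sensitive labels are excluded before a document can ever be retrieved. This is a generic sketch of that filtering step, not Microsoft's actual Restricted Content Discovery implementation; the site and label names are invented.

```python
"""Sketch of keeping high-risk content out of a copilot's search index."""

RESTRICTED_SITES = {"hr-reviews", "legal-holds"}   # sites excluded from discovery
BLOCKED_LABELS = {"highly-confidential"}           # sensitivity labels never indexed

def indexable(doc: dict) -> bool:
    """Admit a document to the copilot's index only if both its site
    and its sensitivity label are allowed."""
    return (doc["site"] not in RESTRICTED_SITES
            and doc["label"] not in BLOCKED_LABELS)

corpus = [
    {"id": 1, "site": "eng-wiki",   "label": "general"},
    {"id": 2, "site": "hr-reviews", "label": "general"},              # blocked by site
    {"id": 3, "site": "eng-wiki",   "label": "highly-confidential"},  # blocked by label
]
index = [d["id"] for d in corpus if indexable(d)]  # only doc 1 survives
```

Filtering at index time matters because a document the copilot never indexed cannot surface in anyone's answer, regardless of the underlying file permissions — which is exactly how buried "permission sprawl" data stays buried.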
Robert Johnston @AdluminCEO:
Companies are rushing AI (computer systems that learn) into new laws, but many systems are a black box (hard to explain), so errors can't be traced and risks grow. The answer is "glass box" AI (easy to see inside) with small, focused models and data lineage (where data came from) that links every output to a source. This turns compliance from fear into speed, proving safety in real time. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 33 views
Robert Johnston @AdluminCEO:
Right now, companies want AI agents—smart software—to run key work, but giving them broad access is risky and checking every step kills the speed. To fix this, a "guardian" AI grants one-time, short-lived permission only when the agent's goal matches company rules. This means agents stay useful while any hack is limited to one task. #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 24 views
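The "guardian" pattern above can be sketched as a broker that issues one-time, short-lived, narrowly scoped tokens only for goals that match policy. The token handling below is illustrative, not a real auth library; the goal names and scopes are invented.

```python
"""Sketch of a guardian granting one-time, short-lived agent permissions."""
import secrets
import time

# Policy: which agent goals are allowed, and what each one may touch.
ALLOWED_GOALS = {"reconcile-invoices": {"scope": "read:invoices", "ttl_s": 60}}

class Guardian:
    def __init__(self):
        self._tokens = {}  # token -> (scope, expiry timestamp)

    def request(self, goal: str):
        """Issue a short-lived token only if the goal matches policy."""
        rule = ALLOWED_GOALS.get(goal)
        if rule is None:
            return None  # goal not in policy: no access at all
        token = secrets.token_hex(8)
        self._tokens[token] = (rule["scope"], time.monotonic() + rule["ttl_s"])
        return token

    def redeem(self, token: str):
        """One-time use: the token is consumed and checked for expiry."""
        scope, expires = self._tokens.pop(token, (None, 0.0))
        return scope if scope and time.monotonic() < expires else None

g = Guardian()
t = g.request("reconcile-invoices")
g.redeem(t)   # returns "read:invoices" exactly once
g.redeem(t)   # None: already consumed, so a stolen token is useless
```

Because each grant is single-use, time-boxed, and scoped to one task, a compromised agent run is contained to that one task — the blast-radius limit the tweet describes.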
Robert Johnston @AdluminCEO:
Fast rollout of AI (artificial intelligence) helpers can accidentally share secrets as settings drift and plug‑ins spread, creating "shadow AI" (tools no one tracks). A central AI control plane (one place to manage rules) limits access to needed data and auto‑fixes risky changes. This lets teams move fast with built‑in privacy and audits (who did what). #cybersecurity #infosec #AI
0 replies · 0 reposts · 0 likes · 42 views