Robert Johnston

@AdluminCEO
CEO, Adlumin, Inc (@Adlumin) #ifollowback
Joined July 2014
932 Following · 666 Followers

Iran Armed Forces: SERIOUS consequences IF Trump bombs power plants:
— Strait of Hormuz will be COMPLETELY CLOSED until power plants REBUILT
— ALL Israeli energy infrastructure will be targeted
— Power plants of countries in region with US bases will become LEGITIMATE targets

RT @RT_com
Bessent asked about Trump’s threat to bomb Iranian power plants: 'Has Prez changed his mind about winding down the war?' BESSENT: 'This is the ONLY language the Iranians understand'

#Mojtaba_Khamenei is hospitalized at Sina Hospital.
He is badly injured in the abdomen and leg and is currently on a ventilator; he does not even know yet that a war has happened and that his family was killed in the bombing, let alone that he has been made Leader!
His surgery was performed by Dr. Zafarghandi together with Dr. Marashi (brother of Hashemi Rafsanjani's wife; head of the surgery department at Beheshti University).
The reason Pezeshkian appeared alongside Zafarghandi two days ago was to visit Mojtaba, and he is fully aware of the younger Khamenei's condition.
The atmosphere at Sina Hospital is heavily security-controlled, and entry to the ICU is possible for only a handful of people.

When companies add AI helpers without fixing old, messy access rules, the AI can quickly find and share sensitive “hidden” data that was never meant to be easy to see. To prevent this, they must clean up permissions and use tools like Data Security Posture Management (software that checks data access) to tightly control what the AI can read. Done right, AI shifts from a security risk to a trusted, safe productivity tool.
#cybersecurity #infosec #AI
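The cleanup described above can be sketched in a few lines. This is a toy DSPM-style check, not any vendor's API: before a copilot indexes a document store, flag sensitive items whose reader list is wider than their classification should allow. The labels, threshold, and document fields are all illustrative.

```python
# Toy DSPM-style pre-check: find sensitive documents shared too widely
# before an AI assistant is allowed to index them. All names/thresholds
# here are made up for illustration.

SENSITIVE_LABELS = {"hr", "legal", "finance"}
MAX_READERS_FOR_SENSITIVE = 5  # example threshold, not a real product setting

def flag_overshared(documents):
    """Return names of documents a copilot should NOT index until access is fixed."""
    flagged = []
    for doc in documents:
        if doc["label"] in SENSITIVE_LABELS and len(doc["readers"]) > MAX_READERS_FOR_SENSITIVE:
            flagged.append(doc["name"])
    return flagged

docs = [
    {"name": "2026-salary-bands.xlsx", "label": "hr",
     "readers": [f"user{i}" for i in range(40)]},
    {"name": "lunch-menu.md", "label": "public",
     "readers": [f"user{i}" for i in range(40)]},
]
```

Anything `flag_overshared` returns gets its permissions cleaned up first; only the rest is exposed to the AI.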

Companies are using more autonomous AI tools that can act on their own, but many can only watch what these systems do—not stop them if something goes wrong. The real risk is “agentic drift,” when AI goes off-task or is tricked into harmful actions. The fix is adding real-time controls and a true kill switch, so AI can be paused or shut down instantly without breaking the business.
#cybersecurity #infosec #AI
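A true kill switch can be as simple as gating every agent step on a shared stop flag, so an operator can halt the run without killing the host process. This is a minimal sketch of that pattern; the `KillSwitch`/`run_agent` names are illustrative, not a real framework.

```python
# Minimal kill-switch sketch: each agent step is gated on a shared stop
# flag, so operators can pause or stop the agent mid-run.

import threading

class KillSwitch:
    def __init__(self):
        self._stop = threading.Event()

    def trip(self):
        """Operator action: stop the agent before its next step."""
        self._stop.set()

    def active(self):
        return self._stop.is_set()

def run_agent(steps, switch):
    """Execute steps in order, halting immediately if the switch is tripped."""
    done = []
    for step in steps:
        if switch.active():
            break  # halted by an operator, not by the model itself
        done.append(step())
    return done

switch = KillSwitch()
results = run_agent([lambda: "draft", lambda: "review"], switch)
```

Because the check runs outside the model, tripping the switch works even when the agent itself has been tricked.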

In 2026, the biggest risk in AI agents isn’t what they write, it’s what they do after they “read” your world.
When an agent can open emails, interpret images, and call internal APIs, a poisoned invoice or screenshot can slip in hidden instructions and hijack the workflow. It’s like a Trojan sticky note inside a normal document, and the model can’t reliably tell “data” from “commands.” In real life this looks like an accounts-payable copilot scanning a PDF and suddenly sending sensitive files to the wrong place, with no click from you.
The fix is to stop asking the model to police itself and treat its output as an untrusted proposal. A separate, deterministic rules layer checks every planned action against hard policies before anything executes, so only verified moves happen.
Then security shifts from “we hope it behaves” to “we can prove it won’t.” Teams can run autonomous agents in finance, healthcare, and ops without fearing every new input or model update.
#cybersecurity #infosec #AI
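The "untrusted proposal" idea above can be sketched directly: the model's planned actions are plain data, and a deterministic rules layer approves or rejects each one before anything executes. The allow-list, action shapes, and policy here are toy examples, not a real product's policy engine.

```python
# Sketch of a deterministic policy layer: model output is treated as an
# untrusted proposal, and only actions passing hard-coded rules execute.

ALLOWED_ACTIONS = {"read_invoice", "create_payment_draft"}
BLOCKED_DESTINATIONS = {"external_email"}

def check_action(action):
    """Return (allowed, reason) for one proposed action; no model involved."""
    if action["type"] not in ALLOWED_ACTIONS:
        return False, f"action {action['type']!r} not in allow-list"
    if action.get("destination") in BLOCKED_DESTINATIONS:
        return False, "destination blocked by policy"
    return True, "ok"

def execute_plan(plan, executor):
    """Run only the verified moves; skip (and in real life, alert on) the rest."""
    executed = []
    for action in plan:
        allowed, _reason = check_action(action)
        if allowed:
            executed.append(executor(action))
    return executed

plan = [
    {"type": "read_invoice", "source": "inbox"},
    {"type": "send_files", "destination": "external_email"},  # injected step
]
approved = execute_plan(plan, lambda a: a["type"])
```

The injected `send_files` step never runs, no matter how convincingly the poisoned PDF argued for it.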


Right now, AI copilots can quietly gain too much access through small setup errors, exposing sensitive data and turning simple mistakes into fast-moving insider threats. To fix this, a zero-trust approach checks every change before it runs and locks agents inside strict, verified limits. This means companies can use powerful AI tools with confidence, knowing their data stays protected.
#cybersecurity #infosec #AI

Companies are stuck because AI laws differ across countries and even states, so a system that’s legal in one place can be banned in another. The fix is to build “compliance-by-design,” meaning tools that automatically check and track whether AI follows each set of rules in real time. Now, proving your AI is trustworthy isn’t just about safety—it’s key to winning contracts and growing faster.
#cybersecurity #infosec #AI

Old access systems let AI helpers inherit too much power, so a simple question can spill sensitive data fast. The fix is Zero Trust (never trust by default): check every AI request in real time and give only the exact access needed for that moment. This keeps AI useful while stopping quiet data leaks.
#cybersecurity #infosec #AI
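"Check every AI request in real time, grant only the exact access needed" reduces to a deny-by-default lookup keyed on who is asking and why. A minimal sketch, with made-up principals, purposes, and resource names:

```python
# Zero-trust sketch: nothing is inherited; each request is checked against
# an explicit (principal, purpose) -> resources grant table.

GRANTS = {
    # (principal, purpose) -> resources readable for that purpose only
    ("copilot", "summarize_ticket"): {"ticket:123"},
}

def authorize(principal, purpose, resource):
    """Deny by default; allow only exact, currently-granted access."""
    return resource in GRANTS.get((principal, purpose), set())
```

The same copilot asking for the same ticket under a different purpose gets nothing, which is what stops a quiet data spill.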

New AI copilots can accidentally reveal HR files or plans because old sharing settings let them see too much and pull it into answers. Fix it by cleaning up permissions and limiting what the AI can search so only approved content is included. Then people can use AI confidently, work faster, and stay compliant.
#cybersecurity #infosec #AI

Most regulated firms are guarding the front door, while AI is slipping in through the side entrance of everyday cloud apps.
A “trusted” tool can quietly add an AI helper that reads customer notes or draft contracts using a third party you never approved. It’s like giving a spare key to a contractor and forgetting who made copies. Then the model changes under you, and your audit trail turns into guesswork during an exam.
The fix is automated AI governance that watches permissions in real time and flags hidden AI connections before data leaves. Pair it with an answer-with-citations layer so every output points back to an unchanging source, and keep models under continuous monitoring so drift is caught the same day.
Now compliance becomes an always-on guardrail, not a last-minute brake. You can scale AI with proof in hand, and meet regulators with calm receipts.
#cybersecurity #infosec #AI


Right now, AI helpers are set up once and trusted forever, so small changes can quietly give them too much power and risk data. To fix this, teams check each AI’s identity and rules before every action, blocking anything outside what’s allowed. This means AI can move fast while staying safe, clear, and under control.
#cybersecurity #infosec #AI

Cyber attacks now use smart AI that hijacks normal tools, so old defenses fail. The fix is resilience: always watching networks with friendly AI that checks behavior (how things act) and identity trust in real time, not just blocking known bad code. This cuts response from hours to seconds and keeps business running safely.
#cybersecurity #infosec #AI

New California and EU AI rules collide in 2026; teams can’t scale manual checks, so they pause launches to avoid big fines. Companies fix this by writing compliance rules into software at the AI system’s front door, using ISO 42001 as a shared standard. Then AI can ship worldwide with built-in proof it’s safe and traceable, speeding product work.
#cybersecurity #infosec #AI
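"Compliance rules written into software at the front door" can be sketched as machine-readable checks evaluated before a request is served. The two rules below are toy stand-ins for controls you might map to a framework such as ISO 42001; they are not taken from the standard.

```python
# Compliance-by-design sketch: rules as code, evaluated at the AI system's
# front door. Rule names and conditions are illustrative only.

RULES = [
    ("logging_enabled", lambda ctx: ctx.get("audit_log") is True),
    ("region_allowed", lambda ctx: ctx.get("region") in {"EU", "US-CA"}),
]

def compliance_gate(ctx):
    """Return the names of failed rules; an empty list means cleared to run."""
    return [name for name, rule in RULES if not rule(ctx)]
```

Because the gate returns which rules failed, every blocked request is also an audit record.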

Turning on an AI copilot often doesn’t create new access, it creates new discovery.
Most companies have “permission sprawl,” where too many people technically can open too many files, but they don’t because they can’t find them. A copilot changes that by indexing everything you can read and pulling it into answers. Someone asks, “What’s our 2026 plan?” and suddenly a buried HR or legal doc gets summarized in the chat.
The fix is Zero Trust data governance, backed by DSPM for AI, meaning a scanner that shows where sensitive data sits and how far it could spread. Then you use Restricted Content Discovery to keep high-risk sites out of the copilot’s search brain, while sensitivity labels teach it what not to touch.
Do this first and your messy attic becomes a labeled library. People move faster, and leadership stops treating AI like a leak waiting to happen.
#cybersecurity #infosec #AI
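The filtering step above can be sketched as a two-part gate on the copilot's index: excluded sites never enter it, and only low-sensitivity labels pass. This is a toy illustration of the idea, not the actual Microsoft Purview or Restricted Content Discovery APIs; the site paths and labels are made up.

```python
# Sketch of keeping high-risk content out of a copilot's "search brain":
# restricted sites are excluded and only approved sensitivity labels index.

RESTRICTED_SITES = {"sites/hr-private", "sites/legal-hold"}
INDEXABLE_LABELS = {"public", "internal"}

def build_index(documents):
    """Index only documents that clear both the site and the label check."""
    return [
        d["name"] for d in documents
        if d["site"] not in RESTRICTED_SITES and d["label"] in INDEXABLE_LABELS
    ]

docs = [
    {"name": "benefits-faq.md", "site": "sites/hr-public", "label": "internal"},
    {"name": "salary-review.docx", "site": "sites/hr-private", "label": "confidential"},
]
```

The buried HR doc simply never reaches the index, so no prompt can summarize it.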


Companies are rushing AI (computer systems that learn) into new laws, but many systems are a black box (hard to explain), so errors can’t be traced and risks grow. The answer is “glass box” AI (easy to see inside) with small, focused models and data lineage (where data came from) that links every output to a source. This turns compliance from fear into speed, proving safety in real time.
#cybersecurity #infosec #AI
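Linking every output to a source can be sketched with a retrieval step that keeps source ids attached end to end. Retrieval here is a toy keyword match; the corpus and ids are illustrative, and real lineage systems track far more (versions, timestamps, transformations).

```python
# Glass-box lineage sketch: every retrieved snippet keeps its source id,
# and the final answer carries the ids of exactly the snippets it used.

def retrieve(corpus, query):
    """Return (text, source_id) pairs whose text mentions the query term."""
    return [(text, sid) for sid, text in corpus.items() if query in text]

def answer_with_lineage(corpus, query):
    hits = retrieve(corpus, query)
    return {
        "answer": " ".join(text for text, _ in hits),
        "sources": [sid for _, sid in hits],
    }

corpus = {"doc-7": "Q3 revenue grew 12%", "doc-9": "office dog policy"}
out = answer_with_lineage(corpus, "revenue")
```

When an answer is wrong, `sources` tells you which input to blame, which is the whole point of "glass box."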

Right now, companies want AI agents—smart software—to run key work, but giving them broad access is risky and checking every step kills the speed. To fix this, a “guardian” AI grants one-time, short-lived permission only when the agent’s goal matches company rules. This means agents stay useful while any hack is limited to one task.
#cybersecurity #infosec #AI
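The guardian pattern above amounts to issuing a one-time, short-lived, single-scope credential only when the agent's stated goal is on-policy. A minimal sketch with toy goals, scopes, and TTLs; a real system would sign the token and verify it at the resource, not trust an in-memory dict.

```python
# "Guardian" sketch: permission is granted per goal, expires quickly,
# and can be used exactly once.

import time

APPROVED_GOALS = {"reconcile_invoices": "read:invoices"}

def grant(goal, ttl_seconds=60):
    """Return a short-lived, single-scope token, or None if the goal is off-policy."""
    scope = APPROVED_GOALS.get(goal)
    if scope is None:
        return None
    return {"scope": scope, "expires_at": time.time() + ttl_seconds, "used": False}

def use(token, scope_needed):
    """One-time use: valid only if unexpired, unused, and the scope matches."""
    if token is None or token["used"] or time.time() > token["expires_at"]:
        return False
    if token["scope"] != scope_needed:
        return False
    token["used"] = True
    return True
```

A hijacked agent that completes (or abuses) one task holds nothing reusable afterward, which is what limits the blast radius to that single task.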

Fast rollout of AI (artificial intelligence) helpers can accidentally share secrets as settings drift and plug‑ins spread, creating “shadow AI” (tools no one tracks). A central AI control plane (one place to manage rules) limits access to needed data and auto‑fixes risky changes. This lets teams move fast with built‑in privacy and audits (who did what).
#cybersecurity #infosec #AI

