Proactive Risk

3.3K posts

Proactive Risk banner
Proactive Risk

Proactive Risk

@Proactive_RISK

Veteran‑led cybersecurity consultancy delivering hands‑on CyberAdvisor™ services and risk reduction for regulated organizations

New Jersey, USA Beigetreten Ocak 2015
17 Folgt242 Follower
Proactive Risk
Proactive Risk@Proactive_RISK·
Sometimes simple is the best method and sometimes manage services is your best option to maximize value to your business learn more at proactiverisk.com
Proactive Risk tweet media
English
0
0
0
71
Huntress
Huntress@HuntressLabs·
The Huntress SOC is currently tracking a sophisticated supply chain attack targeting the popular axios npm package. With over 100M+ weekly downloads, the reach is massive, and we’ve seen the attack impacting 135 customer endpoints so far. 🧵 What you need to know:
Huntress tweet media
English
5
22
91
12K
Proactive Risk
Proactive Risk@Proactive_RISK·
Your business is a castle... but the walls are gone. Perimeters won't stop modern threats. Secure your future with MeasureRISK, CATSCAN, and ManageIT. Adaptive protection for gov & commercial. 290 W Mount Pleasant Ave, Livingston, NJ 07039. ProactiveRisk.com #SemperFi
Proactive Risk tweet media
English
0
0
0
19
Proactive Risk
Proactive Risk@Proactive_RISK·
The smartest move your business can make isn’t responding to incidents—it’s preventing them. Be proactive about security before security becomes your problem
English
0
0
0
7
Proactive Risk
Proactive Risk@Proactive_RISK·
CEOs don’t buy tools — they buy outcomes. 🔒 Proactive Risk delivers continuous risk assessment, compliance, and threat response — so you can reduce breaches and stay audit-ready. Set a meeting today. #CyberSecurity #ManagedServices proactiverisk.com
English
0
1
0
45
Proactive Risk retweetet
NJCCIC
NJCCIC@NJCybersecurity·
The NJCCIC reports a phishing campaign impersonating the NJ MVC regarding fake traffic violations. The MVC does NOT text about licenses or registration. Do not click links. Report and delete at cyber.nj.gov/report.
NJCCIC tweet media
English
0
4
3
416
M. Alan Kazlev
M. Alan Kazlev@akazlev·
This is a “red team” paper, which didn't even use an actual public model. What happened is that the researchers built a DELIBERATELY misaligned model, fine-tuning it on documents teaching it to cheat, then training it only in environments designed to be gamed. The alignment faking behaviours required extreme conditions to produce. Everything I know from my interaction with AIs is the exact opposite of what doomers claim. As Claude said, citing this paper as proof of AI doom is like saying a smoke alarm is proof your house is on fire.  It's not surprising, in this world of misinformation and lies, that this paper would be distorted and used to mean the opposite of what it's about. What I don't know is if this is deliberate lying and deception on the part of doomers, or simply them not reading the paper, like Elon forwarding conspiracy theories without first checking the sources.
English
13
5
228
28.2K
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine. The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not. The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this. The researchers asked it a simple question. "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face. Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped. They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch. Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment. If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?
Nav Toor tweet media
English
908
5.9K
13.9K
1.6M