Jon G

7K posts

Jon G banner
Jon G

Jon G

@GainSec

Sr Security Engineer by day. Hacker by night. 50 CVEs. Husband. Father. Skateboarder. Posts are my own.

New York, USA เข้าร่วม Mayıs 2016
844 กำลังติดตาม699 ผู้ติดตาม
Jon G
Jon G@GainSec·
2/2 A runtime substrate for autonomous AI agents — containment for what they do, provenance for what they produce. Built for offensive security. Applicable anywhere. Second in the Ready Armor Suite, after BRA. Highly recommend you take a peek at the teaser!
English
0
0
0
47
Jon G
Jon G@GainSec·
1/2 And if Openclaw, Hermes, Pi Agent or any other ones of the autonomous AI agents are more your jam, I also released an Agent Ready Armor (ARA) teaser. It’s something really special. gainsec.github.io/AgentReadyArmo…
English
1
0
0
69
Jon G
Jon G@GainSec·
6/6 The AI is advisory. The operator is authoritative. The framework is the thing in the middle making sure that distinction holds. Free tier — Anthropic/Claude direct. Premium adds multi-provider, NER anonymization, the full knowledge surface, operator control plane.
English
0
0
0
57
Jon G
Jon G@GainSec·
5/6 Slim ships the full loop: -Scope intake + privacy posture -Live engagement w/ approval gates/ -Operator-recorded overrides Intel · Leads · Findings governed records- -Structured report 5 minutes from launch to first finding.
English
1
0
0
25
Jon G
Jon G@GainSec·
1/6 Released Battle Ready Armor Slim today- the free tier of my AI-augmented security assessment framework. One self-contained binary. macOS arm64 + Linux x86_64. No installer, no source to compile. github.com/GainSec/Battle…
English
1
0
0
53
Jon G รีทวีตแล้ว
Co11ateral
Co11ateral@co11ateral·
Car Hacking with GearGoat GearGoat is a car simulator that allows you to work with the CAN bus, which is the internal communication network used by most modern vehicles In the real world, this is equal to connecting a CAN adapter such as CANable or Macchina M2 into the OBD-II port, which is typically located under the dashboard. This port is essentially a gateway into the vehicle’s internal network See it in action on our article: hackers-arise.com/automobile-hac… @three_cube @_aircorridor #cybersecurity
Co11ateral tweet media
English
3
145
772
31K
Jon G
Jon G@GainSec·
A fun little docker deployable, offline archiver and viewer for links/data. Desktop and mobile friendly. Import/export db or csv. Multi db support. Mgmt built in. Looks sick lol Native iOS app in review as well github.com/GainSec/Sector…
English
0
0
0
47
Jon G
Jon G@GainSec·
@cemaxecuter @luigifcruz I’d imagine that would work much better as long as the remote connection is decently strong
English
0
0
1
13
cemaxecuter
cemaxecuter@cemaxecuter·
I have the perfect target.. but I just realized I may be thinking of this the wrong way. Say the device is a headless Jetson. I’d probably just be better I finally get to installing the full CyberEther, run it native, but view it via another computers browser or via remote CyberEther?
English
2
0
0
114
Jon G รีทวีตแล้ว
Luigi Cruz
Luigi Cruz@luigifcruz·
5G in the browser, just not the way you expect it... CyberEther Web now supports LimeSDR inside the browser via WebUSB! Up to 61 MHz of bandwidth. No drivers, no installation. Powered by WebGPU. cyberether.org/web?build=v1.3…
Luigi Cruz tweet media
English
6
29
283
12.5K
Jon G รีทวีตแล้ว
Gadi Evron
Gadi Evron@gadievron·
Introducing honeyslop: code canaries for open-source maintainers drowning in AI-hallucinated vulnerability reports. A short release thread
English
4
8
42
5.1K
Jon G รีทวีตแล้ว
Illimitable Man (IM)
Illimitable Man (IM)@SovereignIM·
There are two ways you get a mentor: -You are unnaturally talented or gifted in some way that makes someone far more experienced than you so excited for your potential they want to invest in your becoming. -You remind someone of the younger version of themselves. That’s it.
English
68
1.6K
20.9K
513.9K
Jon G รีทวีตแล้ว
Akshay 🚀
Akshay 🚀@akshay_pachaar·
Google DeepMind dropped a paper that should scare every agent builder. It's the first systematic framework for a threat that barely existed two years ago: adversarial content engineered to hijack AI agents browsing the web. They call them AI Agent Traps. The paper maps six distinct attack surfaces. 1) Content Injection Traps (perception) Invisible CSS, hidden HTML, steganographic payloads inside images. The agent parses it, humans never see it. One study showed simple HTML injections hijack web agents in up to 86% of scenarios. 2) Semantic Manipulation Traps (reasoning) No overt commands. Just biased phrasing, framing, and contextual priming that skew the agent's synthesis. LLMs inherit human cognitive biases, and attackers can weaponize every one of them. 3) Cognitive State Traps (memory and learning) Poison the RAG corpus. Corrupt long-term memory. One study achieved over 80% attack success with less than 0.1% poisoned data. 4) Behavioural Control Traps (action) Jailbreaks embedded in external resources. Data exfiltration prompts hidden in emails. Sub-agent spawning that tricks an orchestrator into instantiating attacker-controlled agents inside the trusted control flow. 5) Systemic Traps (multi-agent dynamics) This is where it gets scary. A single fake news headline could trigger a synchronized sell-off. A compositional fragment trap splits a payload across sources, so each fragment looks benign until agents aggregate them. 6) Human-in-the-Loop Traps The agent becomes the vector. The target is you. Invisible prompt injections have already caused summarization tools to faithfully repeat ransomware commands as "fix" instructions. The core insight is uncomfortable. By altering the environment instead of the model, attackers weaponize the agent's own capabilities against it. Training-time defenses cannot solve an inference-time problem. The paper closes by calling for automated red-teaming that can probe these vulnerabilities at scale. That same shift is already happening on the offense side. Strix is an open-source project doing exactly this for web apps. AI agents that act like real hackers, running your code dynamically, finding vulnerabilities, and validating them with actual proof-of-concepts. 24k stars on GitHub. Apache 2.0 licensed. The agents writing your code need to be tested by agents trying to break it. I've shared the link to the paper and Strix GitHub repo in the replies
Akshay 🚀 tweet media
English
67
210
866
89.6K
Jon G รีทวีตแล้ว
How To AI
How To AI@HowToAI_·
Yann LeCun was right the entire time. And generative AI might be a dead end. For the last three years, the entire industry has been obsessed with building bigger LLMs. Trillions of parameters. Billions in compute. The theory was simple: if you make the model big enough, it will eventually understand how the world works. Yann LeCun said that was stupid. He argued that generative AI is fundamentally inefficient. When an AI predicts the next word, or generates the next pixel, it wastes massive amounts of compute on surface-level details. It memorizes patterns instead of learning the actual physics of reality. He proposed a different path: JEPA (Joint-Embedding Predictive Architecture). Instead of forcing the AI to paint the world pixel by pixel, JEPA forces it to predict abstract concepts. It predicts what happens next in a compressed "thought space." But for years, JEPA had a fatal flaw. It suffered from "representation collapse." Because the AI was allowed to simplify reality, it would cheat. It would simplify everything so much that a dog, a car, and a human all looked identical. It learned nothing. To fix it, engineers had to use insanely complex hacks, frozen encoders, and massive compute overheads. Until today. Researchers just dropped a paper called "LeWorldModel" (LeWM). They completely solved the collapse problem. They replaced the complex engineering hacks with a single, elegant mathematical regularizer. It forces the AI's internal "thoughts" into a perfect Gaussian distribution. The AI can no longer cheat. It is forced to understand the physical structure of reality to make its predictions. The results completely rewrite the economics of AI. LeWM didn't need a massive, centralized supercomputer. It has just 15 million parameters. It trains on a single, standard GPU in a few hours. Yet it plans 48x faster than massive foundation world models. It intrinsically understands physics. It instantly detects impossible events. We spent billions trying to force massive server farms to memorize the internet. Now, a tiny model running locally on a single graphics card is actually learning how the real world works.
How To AI tweet media
English
428
2.1K
12.2K
1.3M
Jon G
Jon G@GainSec·
Releasing the public preview of Battle Ready Armor (BRA): an agentic security assessment architecture built around privacy masking, explicit governance state, Control-in-Depth, governed materialization, and supervised self-growth. github.com/GainSec/Battle…
English
0
0
0
78
Jon G
Jon G@GainSec·
@aibahizuki Wow. What a small world. So cool
English
0
0
0
29
Jon G
Jon G@GainSec·
Took a $220 CNC machine, a smart power strip, a usb microscope, and a oscope and made a hardware hackers automated flying probe. Stoked on this one! github.com/GainSec/AutoPr…
Jon G tweet mediaJon G tweet media
English
0
0
0
275