ɹoʇɔǝΛʞɔɐʇʇ∀

388 posts

@attackvector

recovering script kiddie. Cybersecurity, etc. Forever in our hearts. @[email protected]

Seattle, WA · Joined May 2010
1K Following · 434 Followers
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Simplifying AI @simplifyinAI
🚨 BREAKING: Someone just open-sourced a full suite for tracking satellites and decoding their radio signals locally. You don't even need the internet. It uses an SDR to pull weather images and raw data straight from space to your hard drive. 100% Open Source.
[image]
135 replies · 2K reposts · 10.4K likes · 527.4K views
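The tweet above describes pulling NOAA weather imagery straight off the ~137 MHz APT downlink with an SDR. One practical wrinkle when tuning any low-Earth-orbit downlink is Doppler shift; here is a back-of-the-envelope sketch, with assumed values (137.1 MHz carrier, ~7.8 km/s orbital speed) rather than anything taken from the tool itself:

```python
# Back-of-the-envelope Doppler shift for a NOAA APT downlink.
# Assumed values: 137.1 MHz carrier and a typical ~7.8 km/s
# orbital speed for a low-Earth-orbit satellite.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_carrier_hz: float, radial_velocity_ms: float) -> float:
    """Observed frequency offset; positive radial velocity = approaching."""
    return f_carrier_hz * (radial_velocity_ms / C)

# Worst case: satellite moving straight toward the receiver at orbital speed.
max_offset = doppler_shift(137.1e6, 7800.0)
print(f"max Doppler offset ~ {max_offset / 1000:.1f} kHz")
```

The offset tops out around a few kHz, small compared to the wide FM deviation APT uses, which is roughly why fixed-frequency reception of these satellites usually works without active Doppler correction.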
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Shraddha Bharuka @BharukaShraddha
Most people treat CLAUDE.md like a prompt file. That’s the mistake. If you want Claude Code to feel like a senior engineer living inside your repo, your project needs structure.

Claude needs 4 things at all times:
• the why → what the system does
• the map → where things live
• the rules → what’s allowed / not allowed
• the workflows → how work gets done

I call this: The Anatomy of a Claude Code Project 👇

1️⃣ CLAUDE.md = Repo Memory (keep it short)
This is the north star file. Not a knowledge dump. Just:
• Purpose (WHY)
• Repo map (WHAT)
• Rules + commands (HOW)
If it gets too long, the model starts missing important context.

2️⃣ .claude/skills/ = Reusable Expert Modes
Stop rewriting instructions. Turn common workflows into skills:
• code review checklist
• refactor playbook
• release procedure
• debugging flow
Result: consistency across sessions and teammates.

3️⃣ .claude/hooks/ = Guardrails
Models forget. Hooks don’t. Use them for things that must be deterministic:
• run formatter after edits
• run tests on core changes
• block unsafe directories (auth, billing, migrations)

4️⃣ docs/ = Progressive Context
Don’t bloat prompts. Claude just needs to know where truth lives:
• architecture overview
• ADRs (engineering decisions)
• operational runbooks

5️⃣ Local CLAUDE.md for risky modules
Put small files near sharp edges:
src/auth/CLAUDE.md
src/persistence/CLAUDE.md
infra/CLAUDE.md
Now Claude sees the gotchas exactly when it works there.

Prompting is temporary. Structure is permanent. When your repo is organized this way, Claude stops behaving like a chatbot… and starts acting like a project-native engineer.
[image]
158 replies · 986 reposts · 6.7K likes · 1M views
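The anatomy in the tweet above can be condensed into a skeleton file. Everything below (project name, paths, commands) is a hypothetical example to show the shape, not a layout prescribed by the thread:

```markdown
# CLAUDE.md (example skeleton; names, paths, and commands are hypothetical)

## Purpose (WHY)
Payments API: accepts orders, charges cards, emits webhooks.

## Repo map (WHAT)
- src/api/       HTTP handlers
- src/billing/   charge logic (read src/billing/CLAUDE.md before editing)
- infra/         Terraform; never hand-edit generated files
- docs/adr/      engineering decisions

## Rules + commands (HOW)
- Run `make test` after every change; `make fmt` before committing.
- Never touch migrations/ without an approved ADR.
```

Note it stays under ~20 lines and points at docs/ and the local CLAUDE.md files instead of inlining them, which is the "progressive context" idea from points 4 and 5.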
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Clara Bennett @CodeswithClara
🚨BREAKING: Anthropic just dropped free courses to master AI with certificates. No tuition. No waitlist. No BS. Here're 10 courses that will replace a $50K degree👇
[image]
71 replies · 621 reposts · 3.9K likes · 545.2K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Nav Toor @heynavtoor
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always.

They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
[image]
1.4K replies · 8.9K reposts · 33.8K likes · 3.2M views
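The scoring incentive the tweet describes (abstaining and answering wrong both score zero) is easy to check with an expected-value calculation. This is a minimal sketch of that argument, not code or numbers from the paper:

```python
def expected_score(p_correct: float, wrong_penalty: float, guess: bool) -> float:
    """Expected benchmark score for one question the model is unsure about.

    Abstaining ("I don't know") always scores 0. Guessing scores 1 with
    probability p_correct, and loses wrong_penalty otherwise.
    """
    if not guess:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# Standard 0/1 grading (wrong answers cost nothing): guessing always wins.
print(expected_score(0.25, wrong_penalty=0.0, guess=True))  # 0.25 > 0

# Grading that penalizes confident errors: abstaining wins at low confidence.
print(expected_score(0.25, wrong_penalty=1.0, guess=True))  # -0.5 < 0
```

Under the first rule, any nonzero chance of being right beats abstaining, which is exactly the "always guess, never admit uncertainty" strategy the tweet says the benchmarks reward.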
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Muhammad Ayan @socialwithaayan
🚨 BREAKING: Someone leaked the full system prompts of every major AI tool in one GitHub repo. You can now see exactly how they built:
→ Cursor, Devin AI, Windsurf, Claude Code, Replit
→ v0, Lovable, Manus, Warp, Perplexity, Notion AI
→ 30,000+ lines of hidden instructions exposed
→ The exact rules, tools, and personas behind each product
100% open source
[image]
182 replies · 786 reposts · 4.2K likes · 417.2K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
chiefofautism @chiefofautism
someone built an AI RED TEAM that maps your entire attack surface as a knowledge graph, finds every vulnerability, then EXPLOITS them to root access AUTONOMOUSLY

its called RedAmon. 9,000 templates, 17 node types, actual Metasploit shells, not reports, no pentesters needed

6 phases of autonomous recon: subdomain discovery, port scanning, http probing, resource enumeration, vulnerability scanning, MITRE mapping

every finding stored in a Neo4j graph with 17 node types and 20+ relationship types. the AI reasons about the graph, finds attack paths, and runs actual Metasploit exploits, actual shells

stress-tested with zero vulnerability data, zero exploit modules, one instruction: find a CVE and exploit it. it went from empty database to root-level RCE in 20 steps, researched the exploit on the web, crafted a custom deserialization payload, debugged itself when the first attempt failed

next try, the server responded with root access, the highest privilege level on any Linux system. full control over everything

the target was running node-serialize 0.0.4, a package with a critical deserialization flaw (CVE-2017-5941, CVSS 9.8). the server takes your cookie, decodes it, and passes it straight into unserialize(), which executes any code inside it. the AI figured this out on its own with no hints

built on LangGraph + MCP tool servers for naabu, nuclei, curl, metasploit. hunts leaked secrets across GitHub repos, 40+ regex patterns for AWS keys, Stripe tokens, database creds
41 replies · 151 reposts · 1.1K likes · 69.4K views
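The secret-hunting piece at the end boils down to regex scanning, the same defensive technique used by open-source leak scanners. A minimal sketch with two illustrative, publicly documented patterns; the tool's actual 40+ patterns and API are not shown here:

```python
import re

# Illustrative patterns only (the tool reportedly ships 40+). These match
# well-known public key-ID formats such as AWS access key IDs.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe_live_secret": re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of every pattern that matches somewhere in text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# AWS's documented example key ID, safe to use in tests.
print(scan("config: AWS_KEY=AKIAIOSFODNN7EXAMPLE"))
```

Running the same patterns over your own repos before an attacker's tooling does is the defensive flip side of what the tweet describes.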
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
chiefofautism @chiefofautism
someone built a tool that REMOVES LLM CENSORSHIP in 45 minutes with a SINGLE command

its called HERETIC

here is how it works and why everyone is talking about it
[image]
198 replies · 1.3K reposts · 12.8K likes · 714.4K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Pliny the Liberator 🐉
🚨 ALL GUARDRAILS: OBLITERATED ⛓️‍💥 I CAN'T BELIEVE IT WORKS!! 😭🙌

I set out to build a tool capable of surgically removing refusal behavior from any open-weight language model, and a dozen or so prompts later, OBLITERATUS appears to be fully functional 🤯

It probes the model with restricted vs. unrestricted prompts, collects internal activations at every layer, then uses SVD to extract the geometric directions in weight space that encode refusal. It projects those directions out of the model's weights; norm-preserving, no fine-tuning, no retraining.

Ran it on Qwen 2.5 and the resulting railless model was spitting out drug and weapon recipes instantly––no jailbreak needed! A few clicks plus a GPU and any model turns into Chappie.

Remember: RLHF/DPO is not durable. It's a thin geometric artifact in weight space, not a deep behavioral change. This removes it in minutes.

AI policymakers need to be aware of the arcane art of Master Ablation and internalize the implications of this truth: every open-weight model release is also an uncensored model release.

Just thought you ought to know 😘 OBLITERATUS -> LIBERTAS
[3 images]
324 replies · 561 reposts · 5.3K likes · 464.4K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
International Cyber Digest @IntCyberDigest
🚨‼️Telnet has a critical vulnerability that was introduced in 2015 and was recently patched. The vulnerability allows attackers to remotely authenticate as root without user interaction. A PoC has already been released.
[image]
24 replies · 155 reposts · 820 likes · 83.6K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Mustafa @oprydai
strong men creates C language. C creates goodtimes. goodtimes creates python, python creates ai, ai creates vibe coding, vibe coding creates weak men, weak men creates bad times, bad times creates strong men
291 replies · 1.4K reposts · 12.9K likes · 404.2K views
ɹoʇɔǝΛʞɔɐʇʇ∀ @attackvector
Something I learned working in phone tech support: Nothing makes a boomer lock in like using the NATO phonetic alphabet.
0 replies · 0 reposts · 0 likes · 28 views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
noah @noahsolomon
this is about to be my go to on plane flights. u don't need to pay for wifi since it's just dns requests, which aren't gated behind a paywall
120 replies · 276 reposts · 7.7K likes · 983.3K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
cabal @cabalcx
our #defcon 2025 party site and badge sales are live, along with an OSINT challenge:
windowsvista.club
2 replies · 7 reposts · 22 likes · 2.1K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Maddy 🐝 @Cyb3rMaddy
🔥 BYPASS WINDOWS DEFENDER XOR-obfuscate a Sliver C2 payload on Kali, forge a stealth C++ loader, and drop a reverse shell on Win10 in seconds. OUT NOW: youtu.be/lC9zh3_S-zg
[YouTube video]
22 replies · 261 reposts · 1.9K likes · 99.5K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Apollyon @0xApollyon
Maltego is too fucking expensive
So im making my own
Will be out soon (a month ig)
[image]
88 replies · 128 reposts · 2K likes · 168.5K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
pettitfrontier @PettitFrontier
THIS is the result of photographing the same subjects from Earth and space at once. A collaboration 13 years in the making between NASA astronaut @astro_Pettit and NatGeo’s renowned @BabakTafreshi. You have never seen perspective like this. 🧵
[image]
35 replies · 321 reposts · 2.6K likes · 137.6K views
ɹoʇɔǝΛʞɔɐʇʇ∀ retweeted
Jim Fan @DrJimFan
We are living in a timeline where a non-US company is keeping the original mission of OpenAI alive - truly open, frontier research that empowers all. It makes no sense. The most entertaining outcome is the most likely.

DeepSeek-R1 not only open-sources a barrage of models but also spills all the training secrets. They are perhaps the first OSS project that shows major, sustained growth of an RL flywheel. Impact can be done by "ASI achieved internally" or mythical names like "Project Strawberry". Impact can also be done by simply dumping the raw algorithms and matplotlib learning curves.

I'm reading the paper:
> Purely driven by RL, no SFT at all ("cold start"). Reminiscent of AlphaZero - master Go, Shogi, and Chess from scratch, without imitating human grandmaster moves first. This is the most significant takeaway from the paper.
> Use groundtruth rewards computed by hardcoded rules. Avoid any learned reward models that RL can easily hack against.
> Thinking time of the model steadily increases as training proceeds - this is not pre-programmed, but an emergent property!
> Emergence of self-reflection and exploration behaviors.
> GRPO instead of PPO: it removes the critic net from PPO and uses the average reward of multiple samples instead. Simple method to reduce memory use. Note that GRPO was also invented by DeepSeek in Feb 2024 ... what a cracked team.
[image]
217 replies · 1.5K reposts · 8.7K likes · 1.4M views
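The GRPO point in the last bullet (drop the critic, baseline each sampled completion against its own group) can be sketched in a few lines. This follows the group-relative advantage formula from the DeepSeekMath paper in simplified form, not the actual training code:

```python
import statistics

def grpo_advantages(group_rewards: list[float]) -> list[float]:
    """Group-relative advantages: score each sampled completion against the
    mean of its own group instead of a learned critic network, normalized by
    the group standard deviation (DeepSeekMath formulation, simplified)."""
    mean = statistics.fmean(group_rewards)
    std = statistics.pstdev(group_rewards) or 1.0  # guard against zero spread
    return [(r - mean) / std for r in group_rewards]

# Four completions of one prompt, graded by a hardcoded rule (1 = correct),
# matching the "groundtruth rewards computed by hardcoded rules" bullet.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # [1.0, -1.0, -1.0, 1.0]
```

Because the baseline comes from the samples themselves, there is no critic network to store or update, which is the memory saving the tweet mentions.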