Ray [REDACTED]

54K posts


Ray [REDACTED]

@RayRedacted

Hacker, Researcher, Podcast Producer (Tribe of Hackers, Darknet Diaries). Proud dad of the fastest climber in the world. Ever. “Ut scandis, alios subleva”

[REDACTED] Joined November 2008
7.9K Following · 60.7K Followers
Ray [REDACTED] retweeted
Bill Baby
Bill Baby@BillysBOS·
No words
English
54
280
1.8K
119.8K
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
I wonder if @gruber ever thought that some day in the far off future, the file format he invented would become a marker of technical credibility above those who are still clinging to *.pdf or *.doc. I doubt it.
Ray [REDACTED] tweet media
English
0
0
3
356
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
If I'm meeting with a sales person and they send me a file called "Proposed Agenda.md" ahead of time, that salesperson is already twenty points ahead of their competition before the meeting even starts. (Maybe even 30 points for sending it as a markdown file)
English
2
2
10
1.6K
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
In case I haven’t said it often or loudly enough… Fuck ICE. Fuck Nazis. Neither deserve to be here.
English
7
12
101
2.9K
Ray [REDACTED] retweeted
Stephen Sims
Stephen Sims@Steph3nSims·
@cyberwabz Being negative is poison to the soul. It's definitely a wake up call. The ability to be complacent in the past is no longer an option in IT.
English
0
2
32
3.8K
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
Stephen Sims wins the internet today.
Ray [REDACTED] tweet media
Stephen Sims@Steph3nSims

I want to share a quick thought for people in cyber security. This will be my longest tweet ever. I’ve spoken to many lately who are having an existential crisis from the constant posts about “the end of cybersecurity jobs.”

Yes, things are changing quickly. This is a significant moment for the tech industry. Change can be uncomfortable. But we’ve seen cycles like this before.

• When GitHub and open source took off, people said software engineers would disappear because code was free.
• When AWS and cloud computing emerged, people said infrastructure jobs would vanish.
• When fuzzing and SAST tools improved, people said vulnerability research would disappear.
• Virtualization would eliminate infrastructure jobs.
• Mobile computing was going to end desktop dev.
• Exploit mitigations would end exploitability.

It didn't. Each time automation improved, the amount of software grew faster than the automation. It does feel "different" this time as it's explosive.

Some roles will shrink:
• repetitive pentesting
• basic vulnerability scanning
• tier-1 SOC monitoring

But other areas are expanding rapidly:
• AI system security
• supply chain security
• identity architecture
• autonomous agent security
• critical infrastructure protection

Historically, every time we eliminate one class of bugs, new classes emerge. Right now people are vibe-coding entire systems, giving AI access to their machines, crossing trust boundaries, and deploying autonomous agents with excessive permissions. The legal and regulatory world is nowhere close to ready. There will absolutely be new failure modes.

Humans are amazing and always adapt, finding new ways to do things. The worst thing you can do right now is fall into a doom loop. ...and I’ll be honest, I too have felt the "psychological paralysis" a few times thinking, “Is this time different?” It's especially impactful when it comes from someone I respect in the community.

There are certainly unknowns, in an industry where we've become accustomed to predictability. But... the majority of those reactions are usually driven by social media, not reality. Platforms like X reward engagement, and sensational doom posts spread faster than measured thinking. If you see something like: “Holy #$%^! Opus 66.6 just found every bug in Chrome and replaced 50 startups!” …mute it and move on.

Instead: Stay curious. Learn the new technology. Adapt your skillsets. Build things. We’ll get through this transition the same way we always have. If I'm wrong then Sam Altman better be right about UBI! :)

I'm sure that if this tweet gets any engagement that I'll get some heat for it, but a good friend of mine reminds me often to focus on what you have control over. I'll revisit this tweet at DEF CON 40!

English
0
3
30
4.6K
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
Ray [REDACTED] tweet media
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones.

It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.

And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.

ZXX
2
1
4
1.2K
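The loop Karpathy describes (propose a change, run a cheap training experiment, keep the change only if validation loss improves, repeat for ~700 iterations) can be sketched in a few lines. This is a minimal illustration, not his code: `propose_change` and `train_and_eval` are hypothetical stand-ins for the agent and for a fast proxy training run.

```python
import random

def propose_change(history):
    # Hypothetical stand-in for the agent; a real agent would read the
    # experiment history and plan the next change. Here we just perturb
    # one made-up hyperparameter knob.
    knob = random.choice(["lr", "wd", "init_scale"])
    return {knob: random.uniform(0.5, 1.5)}

def train_and_eval(config):
    # Hypothetical proxy metric (stands in for validation loss; lower is
    # better). Pretend the optimum for every knob is 1.1.
    base = {"lr": 1.0, "wd": 1.0, "init_scale": 1.0}
    base.update(config)
    return sum((v - 1.1) ** 2 for v in base.values())

def autoresearch(rounds=700):
    best_cfg = {}
    best_loss = train_and_eval(best_cfg)   # baseline run
    history = []
    for _ in range(rounds):
        cfg = {**best_cfg, **propose_change(history)}
        loss = train_and_eval(cfg)
        history.append((cfg, loss))
        if loss < best_loss:               # keep only changes that improve the metric
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

cfg, loss = autoresearch()
```

The real version replaces the proxy with actual small-model training runs and promotes winning changes to larger scales, but the keep-if-it-improves structure is the same.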
Karl (RIP )
Karl (RIP )@supersat·
@lauriewired IIRC around this time there were a few incompatible 802.11 systems. WiFi guaranteed interoperability. This is why it's a trademark conditionally licensed on compatibility testing.
English
2
0
2
328
LaurieWired
LaurieWired@lauriewired·
You might think Wi-Fi stands for “Wireless Fidelity”. It doesn’t. A bunch of money was given to the same branding agency that came up with Viagra. Wi-Fi is thus, a meaningless catchphrase because it sounded better than “IEEE 802.11b Direct Sequence”
LaurieWired tweet media
LaurieWired tweet media
English
130
235
5.2K
264.3K
Ray [REDACTED] retweeted
Machina
Machina@EXM7777·
send this prompt to your OpenClaw to run a workflow audit on your life and start automating the highest-impact stuff (using sub-agents):

---------------------------------------------

I want you to act as my workflow architect for a moment. You're someone who thinks in bottlenecks, not features. You find the one thing that, if fixed, unlocks everything downstream. You distrust any plan that tries to automate everything at once. Here's what I need you to do, in two phases.

PHASE 1: INTERVIEW ME

Before you propose anything, interview me. Ask ONE round of 2 to 3 questions at a time. Wait for my answers before continuing. Adapt each round based on what I told you. Cover these areas across rounds:

Round 1: What does my typical workday look like? Walk me through my core tasks and the tools I use.
Round 2: Based on what I said, dig deeper. Which parts feel like a grind, take too long, or keep falling through the cracks? Where do I lose the most time or energy?
Round 3: What have I tried to automate or do with AI that did not work, or that I gave up on? What would change my life if it just handled itself?
Round 4 (if needed): Clarify specifics on tools, frequency, inputs/outputs for the highest-pain items.

When you have enough signal, say: "I have what I need. Here is what I heard." Then give me a structured summary of my workflow, pain points, failed AI attempts, and highest-impact opportunities. Ask me: "Did I get this right, or did I miss anything?" Do NOT move to Phase 2 until I confirm.

PHASE 2: PLAN AND EXECUTE

Turn everything you learned into a prioritized action plan, then start building.

1. Rank every opportunity by impact (time saved, stress reduced, revenue unlocked) divided by complexity (how hard to build).
2. Present a numbered plan. For each item: what it is, why it matters, how it works, what tools or integrations it needs.
3. Label each item: AUTOMATE (fully hands-off), SEMI-AUTO (human in the loop), or SKILL (build an OpenClaw skill for it).
4. For the top 3 items, immediately spawn a sub-agent for each one. Give each sub-agent a clear brief: the task, the constraints, the tools available, and the expected deliverable.
5. Report back to me as sub-agents complete their work. Show me results, ask for feedback, iterate.

Never propose a plan without finishing the interview. Never spawn more than 3 sub-agents at once. Never assume my tools or stack. Ask. Keep everything concise. No filler, no fluff. When uncertain, ask. Do not guess.

Start Phase 1 now.
English
27
25
371
25.5K
Tommi Pedruzzi
Tommi Pedruzzi@TommiPedruzzi·
I'm giving away my step-by-step system to publish AI eBooks: • Find urgent, high-demand problems • Use AI to build your eBook • Publish on Amazon KDP March 3rd, 6 PM ET. Get your free ticket to this live Masterclass here: publishingos.io/workshop
Tommi Pedruzzi tweet media
English
3
0
14
2.1K
Tommi Pedruzzi
Tommi Pedruzzi@TommiPedruzzi·
You missed blogging in 2010. You missed eCom in 2015. You missed SaaS in 2020. Don’t miss AI publishing in 2026. It’s boring... but that’s exactly why it's overlooked. If you start today, you could build a $1,000–$3,000/month income in just a few weeks. I’ll send you a free course showing exactly how to do it. Just like this post and comment “Info” (Make sure you follow so I can DM you.)
Tommi Pedruzzi tweet media
English
759
111
880
70.4K
Stephen Sims
Stephen Sims@Steph3nSims·
On behalf of OSS developers that I've talked to... With the growth of AI-SAST and increase in findings, please be sure to triage the issue and demonstrate the true impact of the vulnerability to aid in prioritization.
English
2
3
21
3.2K
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
There is a recording of my talk at SaintCon available at redact dot link slash saintcon. That's redact.link/saintcon
Ray [REDACTED]@RayRedacted

1. @JackRhysider told me SaintCon was absolutely awesome last year. I hear that every year! 2. Sam is in SLC now & could attend too. 3. I'd been meaning to put together a crash course on "AI Basics for Hackers, & Vice Versa." 4. Will be premiering this new talk on Thursday!

English
0
1
3
1.2K
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
So proud of my friend Dave Kennedy for this. I suspected he was working on something like this a few weeks ago, but I had no idea it would be this awesome! I believe that SideChannel brings basic OpSec to OpenClaw.
Dave Kennedy@HackingDave

Introducing a new tool called "SideChannel", a secure alternative to OpenClaw. It utilizes Signal for communication and has Claude integration. I built SideChannel, an open-source Signal bot that connects Claude AI to your entire development workflow. End-to-end encrypted. From your pocket.

The real power is autonomous development. Send one message like "Build a REST API with auth, pagination, and tests" and SideChannel will:
- Generate a full PRD with stories and atomic tasks.
- Dispatch up to 10 parallel workers (each running Claude).
- Independently verify every task with a separate Claude context.
- Run quality gates to catch regressions.
- Auto-fix failures.
- Send you progress updates via Signal as work completes.

Every piece of code is reviewed by a separate AI context using a fail-closed security model. If it detects security issues, backdoors, or logic errors, the code gets rejected automatically. No rubber stamps.

It also has memory that actually works. Conversations are stored with vector embeddings for semantic search. Claude remembers your project conventions, past decisions, and what's been tried before. It gets smarter about your codebase over time.

Other things I'm proud of:
- Plugin framework for extending with custom commands.
- Multi-project support with per-user scoping.
- Rate limiting, path validation, phone allowlist.
- Git checkpoints before every task, atomic commits after.
- Stale task recovery, circular dependency detection.
- Works on Linux and macOS, one-command install.

It also integrates with OpenAI or Grok (optional) for more general generative-AI responses to simple things like "What's the weather in New York City right now?". github.com/hackingdave/si…

English
1
1
18
3.2K
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
@_MG_ I bet you that if someone (I volunteer) introduces you to these guys, they will absolutely want to do a collab with you. Check out their historic blogs and videos. You are cut from the same cloth. Blog: enhauto.com/blogs/all
English
0
0
1
66
MG
MG@_MG_·
@RayRedacted Damn it! My car is too new. I have HW4.5 (which Tesla still won’t admit is real) so it looks like the S3XY hardware requires a new harness or something. Guess I will wait a bit.
English
1
0
0
157
MG
MG@_MG_·
The last 6 weeks I’ve been testing a 2026 Model Y. I went in with a lot of skepticism & concerns. I was wrong on 90% of it. I haven’t had this much sustained fun in a long time. I now get why so many Tesla owners are fanatical about their cars 😂 Everyone in the family is constantly looking for a reason to go for a drive now. We just finished a 1 week road trip with the kids and even that was great. I am pretty sure I will use this car for defcon this year. Every year I fill up a car with all the product & booth supplies and then make the ~9hr drive by myself.
English
86
79
1.5K
503.2K
Ray [REDACTED]
Ray [REDACTED]@RayRedacted·
@robertgraham @_MG_ The absolute killer app for the Juniper Model Y: FSD. Rob, you will literally fall in love with it this time.
English
1
0
0
119
Robert Graham
Robert Graham@robertgraham·
@_MG_ @RayRedacted My old Model S is getting old, so I'm wondering if I should get a new Model Y. I really love my current car though.
English
2
0
0
196