
Iterat0r
@JRdefmain
Red & Purple Team Ops | Malware Enthusiast and Developer | Pentester


Iván Cepeda challenges Paloma Valencia in the Senate: "If we're going to talk about criminals, what is Mr. Ciro Ramírez doing in this session? What is he doing registered in that session? Speaking of a criminal." semana.com



Total success. There were 86,000 live viewers yesterday for Abelardo de La Espriella's interview on Semana. I publicly invite the candidates Iván Cepeda, Paloma Valencia, and Sergio Fajardo to individual interviews, with the same guarantees and in the same venue, so they can answer every question and present their proposals to the country. @PalomaValenciaL @IvanCepedaCast @sergio_fajardo



Claude Mythos is insanely token-efficient

Anthropic has an AI model, Mythos, with exceptional cybersecurity capabilities: it can autonomously detect and exploit security bugs, including very complex ones, and it performs better than most human experts. This revolutionizes security forever. It has found thousands of high- and critical-severity bugs, including in major operating systems, browsers, and media and crypto software. The practical risk is faster zero-day discovery, faster weaponization, and shorter patch windows for defenders. Examples where security issues were found: OpenBSD, FreeBSD, the Linux kernel, Firefox, FFmpeg, major web browsers, virtual machine monitors, TLS/AES-GCM/SSH libraries, and web applications. red.anthropic.com/2026/mythos-pr…


Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing


Some brief thoughts on Mythos.

We've known this was coming for a long time. At least, we *should* have. Extremely effective software vulnerability discovery was clearly coming to anybody paying attention.

It has also been clear that all AI policy so far has been made and executed with training wheels, and that sometime soon the training wheels would come off. They aren't fully off just yet: this model is being kept under lock and key, and Anthropic does not seem inclined to release the Mythos preview to the public anytime soon, if ever. The training wheels will be off when these capabilities are fully diffused in ways centralized actors cannot control. It is inevitable that this will happen.

The point is not to argue about whether we should "ban open source" or similarly unrealistic notions. The point is to harden the world for this new reality. I applaud Anthropic, and I especially applaud @logangraham, for doing so. But their efforts alone are not close to enough. Project Glasswing, a partnership between Anthropic and other companies, seems promising, but unsurprisingly it lacks uniform frontier-lab participation.

It would probably be ideal, for our national cyberdefense, if the federal government were not trying to destroy Anthropic and eliminate its models from government systems. If anything, the government should be trying to work more closely with Anthropic. As a side note, I hope Anthropic is working with state and local government entities on cyber vulnerability discovery, since many of our adversaries know that state and local is America's soft underbelly in so many ways.

In any event, the Mythos news should lay bare how stupid and counterproductive the Department of War's feud with Anthropic really is. As someone who suspected all this was coming (not from inside knowledge, but from it being ~obvious), that probably explains why I have had such a strong reaction to that feud. It is a senseless distraction just at the time that the training wheels are coming off. I hope the two parties can resolve their differences now, for the sake of the country, but I am not hopeful. I do want to call out, however, the numerous political appointees and career civil servants in the Trump Admin who do get these issues, know how stupid the Anthropic-DoW fight is, and want to work with the frontier labs like adults. I wish you all the utmost success.

I find myself inclined to end on some positive notes. Mythos appears to be, according to Anthropic at least, "the most aligned" model Anthropic has ever trained. We are approaching superhuman capabilities in some domains, and yet alignment is getting better rather than worse. That's not nothing. I know some of you think the model is faking its alignment, or is aware when its alignment is being tested. I don't have a good answer.

Finally, there is this: Mythos was made by an American company, and like most successful American companies, it has a vested interest in maintaining order and peace, and it is investing substantial resources in mitigating the risks of its technological progress, as I expect most of the American labs would. This is cause for optimism: the incentives of capitalism are working. The training wheels are coming off, but at least we are the ones removing them, as opposed to our enemies. Perhaps we can be the first to learn to ride for real. The first step would be to get beyond all the low-fidelity, under-specified, pimply little fights of AI policy's prepubescent era. That goes for me too.

"What hath God wrought," read the first telegram. What, indeed. In this case, the answer is still up to us.




Meet Gemma 4: our new family of open models you can run on your own hardware. Built for advanced reasoning and agentic workflows, they're released under an Apache 2.0 license. Here's what's new 🧵

