There is speculation in the media that AI development could be dangerous if misused.
A case in point is Anthropic's new AI model, Claude Mythos. The model is reportedly so capable at cybersecurity and hacking that the company is not releasing it publicly.
Claude Mythos can find vulnerabilities in software, chain exploits together, and potentially take over or disrupt critical systems like financial networks, power grids, and hospitals.
Anthropic itself warned it could enable widespread disruption, devastating hacks, or "weapons we can't even envision" if it got into the wrong hands.
Are they hyping their own work, or is there an actual danger here?
