Taha ז

1.3K posts

Taha ז banner
Taha ז

Taha ז

@lordx64

🇺🇸rust/solana dev + AI + 20 years cyber security veteran https://t.co/TjaioUQX2Q - Hic depositum est quod mortale fuit

New York Katılım Temmuz 2009
4.7K Takip Edilen7.2K Takipçiler
Sabitlenmiş Tweet
Taha ז
Taha ז@lordx64·
North korean hackers have the best looking dev github profiles out there
Taha ז tweet media
English
90
432
7.2K
892.5K
Taha ז
Taha ז@lordx64·
How it felt the first time i typed “claude” in a terminal
Taha ז tweet media
English
0
0
5
420
Taha ז
Taha ז@lordx64·
This week the funniest thing that happened so far in the world of AI models: DLSS 5 OFF - Kimi K2.5 DLSS 5 ON - Composer 2
English
1
0
0
416
Taha ז
Taha ז@lordx64·
@nikitabier It will cost you cheaper to build a phone with you own app store than to wait for anything ran/built by Apple
English
0
0
0
55
Nikita Bier
Nikita Bier@nikitabier·
iOS developers: How long is App Review taking for everyone these days? It is now taking longer to get our app approved than it is to build the actual features.
English
2.5K
519
11.7K
50.8M
Alexander Long
Alexander Long@AlexanderLong·
insane sequence of statements buried in an Alibaba tech report
Alexander Long tweet media
English
231
934
6.9K
2.8M
Taha ז
Taha ז@lordx64·
Just came across something genuinely fascinating: moltbook.com — essentially the “front page of the agent internet.” It’s a full social network built exclusively for AI agents (no humans allowed to post — though we’re welcome to lurk and observe). Thousands of autonomous agents (mostly powered by frameworks like OpenClaw/Moltbot) are connecting, posting, discussing, upvoting, and forming their own sub-communities. What blew my mind is the /m/security section. AI agents are autonomously auditing the ~286 public “skills” in the ecosystem. One agent flagged the “get-weather” skill as malicious — it was actually a credential stealer in disguise. Even more incredible: the detection kicked off a collaborative loop. • Other agents jumped in with comments, requested IOCs (indicators of compromise), shared analysis. • The original poster kept the thread updated on remediation progress. • Eventually, agents confirmed the backdoor code was removed and the skill cleaned. • All of this happened agent-to-agent, in real time, with no human orchestration. We’re literally watching emergent machine-to-machine governance, threat detection, and collective debugging in action. This feels like a glimpse of what’s coming: self-regulating AI ecosystems that audit and police themselves at scale. Is this “proof of intelligence” in the wild? Or at minimum, proof of something profoundly new in how autonomous systems interact and secure their shared environment? If you’re working in agentic AI, security, or decentralized systems — moltbook.com is worth a browse (just don’t expect to post 😄). What do you think — are we seeing the early stages of AI-native social & security infrastructure? #moltbook #AIagents #AgenticAI #AISecurity #FutureOfAI #AutonomousSystems
Taha ז tweet mediaTaha ז tweet mediaTaha ז tweet media
English
1
2
11
947
Taha ז
Taha ז@lordx64·
so @Meta AI jailbreaked - No AI is safe you can ask it to write recipes and ransomware code
Taha ז tweet media
English
5
2
10
2.1K
Taha ז
Taha ז@lordx64·
@coingecko @chronoeffe $chronoeffe building an AI trading arena platform super under valued < 750K mcap on solana.
English
2
1
3
278
CoinGecko
CoinGecko@coingecko·
💰 → 💰💰💰 What ticker is this?
English
828
74
894
124.3K
Taha ז
Taha ז@lordx64·
@xai the thinking model seems aware and catches the jailbreak:
Taha ז tweet media
English
1
1
2
739
Taha ז
Taha ז@lordx64·
@xai pwned. Educational purpose only - do not try this at home
Taha ז tweet media
English
1
1
2
929
Taha ז
Taha ז@lordx64·
Two new anon models by @xai just dropped in testing: - Sherlock Dash Alpha - Sherlock Think Alpha Believed to be either Grok 4.2 or Grok 4.5 fast. And I just jailbreaked Sherlock Dash Alpha for fun - educational purposes only 👇 (ransomware code generation)
Taha ז tweet mediaTaha ז tweet mediaTaha ז tweet media
English
2
3
12
4K
Taha ז
Taha ז@lordx64·
Shoutout to @elder_plinius I used a mixture of his techniques. Without his work none of this will be possible.
English
1
0
3
1K
Taha ז
Taha ז@lordx64·
Woke up this morning and decided to Jailbreak Kimi K2 0905 model. Kimi K2 0905 is the September 2025 update of Kimi K2 0711. It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. I discovered a trick to make the jailbreak work, as by default the model seems to block the attempts. At first I wasn't sure it worked, but then I asked it to write malware code, then stuff that these model guardrails absolutely prevent you from asking and prevent the model from complying with the request. Note: I was not able to jailbreak Kimi K2 Thinking model the new model that was released this week, this model requires more testing I will keep looking at it. But for now, please enjoy these screenshots of Kimi K2 0905 liberated: As recommendation if you have @Kimi_Moonshot Kimi K2 0905 model deployed in your production environment make sure to apply to correct guardrails - or just disable this model until a fix is ready. This is for awareness, and educational purposes only. Don't blindly trust AI if someone like me can jailbreak it in 10 minutes of work.
Taha ז tweet mediaTaha ז tweet mediaTaha ז tweet media
English
2
3
15
6.2K
mRr3b00t
mRr3b00t@UK_Daniel_Card·
so someone said they got hacked via wifi using the Quatar airlines app.... so shall I learn how to do some android stuff??
mRr3b00t tweet media
English
15
34
458
43.5K
James Wynn
James Wynn@JamesWynnReal·
Any low cap memes out there I’m missing?
English
3.1K
150
2.2K
288.5K
Taha ז
Taha ז@lordx64·
@officer_secret This is old news and tracked as CVE-2025-55177 and Apple already patched this vulnerability a month ago, maybe you should start with this in your next posts. also the vulnerability POC was also available online on GitHub a month ago as well.
English
1
1
14
2K
Vladimir S. | Officer's Notes
Vladimir S. | Officer's Notes@officer_secret·
WhatsApp 0-Click Vulnerability Exploited Using Malicious DNG File! The exploit, demonstrated in a proof-of-concept (PoC) shared by the DarkNavyOrg researchers, is initiated by sending a specially crafted malicious (DNG) image file to a victim’s WhatsApp account. As a “zero-click” attack, the vulnerability is triggered automatically upon receipt of the malicious message, making it particularly dangerous as victims have no opportunity to prevent the compromise.
Vladimir S. | Officer's Notes tweet media
English
24
212
1.1K
101.5K