Jim Manico from Manicode Security

43.6K posts


@manicode

AI and AppSec Educator. Secure coding system prompts. https://t.co/gbW3ZLhURT

Kauai, HI and Cobb, CA · Joined July 2009
6.1K Following · 17.2K Followers
Pinned Tweet
Jim Manico from Manicode Security
From my experience, all software developers are now security engineers, whether they know it, admit it, or do it. Your code is now the security of the org you work for. #GoldenAgeOfDefense
Wat Ket, Thailand 🇹🇭
Jim Manico from Manicode Security retweeted
Anthropic
Anthropic@AnthropicAI·
Our security bug bounty program is now public on HackerOne. We've run the program privately within the security research community, and their findings have strengthened our products. Now anyone can report vulnerabilities and get rewarded. Read more: hackerone.com/anthropic
Jim Manico from Manicode Security retweeted
The Hacker News
The Hacker News@TheHackersNews·
⚠️ Attackers poisoned Hugging Face & ClawHub (OpenClaw) with 575+ malicious skills from just 13 accounts. 🔸 Fake helpful AI tools that install trojans, miners & stealers (Windows + macOS) 🔸 Use hidden commands & indirect prompt injection Quick action: Never install random AI skills or models. Always verify the source. Read: thehackernews.com/2026/05/weekly…
solst/ICE of Astarte
Um, no? You just need a bunch of containers. Make them auto scale, and ofc you need a control plane… oh… oh no no no
LaurieWired@lauriewired

@kayleecodez hate to say it, but everyone that rejects kubernetes inevitably ends up recreating it from first principles lol

Jim Manico from Manicode Security retweeted
DANΞ
DANΞ@cryps1s·
The security industry is entering a period of compression. Model cybersecurity capabilities are rapidly increasing, and it's critical we arm defenders with the tools they need to protect what matters most.

We're launching two models today: GPT-5.5 with TAC (Trusted Access for Cyber) and GPT-5.5-Cyber (Limited Preview).

GPT-5.5 is our starting point for most defensive workflows. It's exceedingly good at cybersecurity workflows and tasks like secure code review, vulnerability triage, detection engineering, malware analysis, and patch validation. We think this model is the right starting place for most organizations.

GPT-5.5-Cyber is exceptional for authorized workflows, including red teaming, penetration testing, and controlled validation. It's in research preview for specific organizations and requires enhanced verification and account-level controls.

We expect to continue to accelerate defenders with various models, including both our flagship models through Trusted Access for Cyber, and with dedicated cyber models like GPT‑5.5‑Cyber and even more cyber-capable models in the future. openai.com/index/gpt-5-5-…
Jim Manico from Manicode Security retweeted
Next.js
Next.js@nextjs·
We’ve released Next.js versions 16.2.6 and 15.5.18 with important security fixes. These fixes address multiple vulnerabilities across high, moderate, and low severity, including one upstream React issue. We strongly recommend upgrading as soon as possible. ⬇️
Jim Manico from Manicode Security retweeted
Cloudflare Developers
Cloudflare Developers@CloudflareDev·
Multiple security vulnerabilities affecting React Server Components and Next.js have been disclosed. We strongly recommend updating your applications immediately. Cloudflare WAF managed rules already mitigate the disclosed denial-of-service vulnerabilities, and we are investigating additional coverage for several other CVEs. developers.cloudflare.com/changelog/post…
Jim Manico from Manicode Security retweeted
Alex Albert
Alex Albert@alexalbert__·
With the help of Claude Mythos Preview, the Firefox team fixed more security bugs in April than in the past 15 months combined.
mRr3b00t
mRr3b00t@UK_Daniel_Card·
passwords stolen! nice job JARVIS... i mean CLAUDE :D
mRr3b00t
mRr3b00t@UK_Daniel_Card·
I'm going to go and make and drink a cup of tea, in the background claude is attacking keepass for me!
Jim Manico from Manicode Security retweeted
Claude
Claude@claudeai·
Effective today, we are: 1) Doubling Claude Code’s 5-hour rate limits for Pro, Max, and Team plans; 2) Removing the peak hours limit reduction on Claude Code for Pro and Max plans; and 3) Substantially raising our API rate limits for Opus models.
Jim Manico from Manicode Security retweeted
Zoe Braiterman
Zoe Braiterman@zbraiterman·
I realize I’m a day late, but… May the 4th be with you! With ❤️from ⁦@manicode⁩, ⁦@FrankSEC42⁩ and me (and the global @owasp community). 🙏
Jim Manico from Manicode Security
@liran_tal In general, I think local models are the future so I’m just spending a lot of time experimenting to see what their capability is, and what I see so far is impressive. Once I’m up to the 128 GB of RAM level I’m certain that I can do most of my coding locally.
Liran Tal
Liran Tal@liran_tal·
@manicode No doubt some small models can run on that, question is how well are they doing the work... You mention Mistral small - what sort of work do you do with this and how good is it?
Jim Manico from Manicode Security
@liran_tal I have 32 GB of RAM on my M5 laptop and run several models for coding locally with no trouble, just the smaller ones. I have a Mac mini with 24 GB of RAM running Mistral Small for other work. But yeah, I'm maxing out RAM on all future purchases.
Liran Tal
Liran Tal@liran_tal·
@manicode That's for sure interesting. What's the local model you'll be using? still need a chunky RAM setup to run anything decent, no? As in, nothing really out of the box for laptops
Liran Tal
Liran Tal@liran_tal·
@manicode Nice but sadly Ollama isn't a very efficient implementation for local inference
Jim Manico from Manicode Security
Compared to llama.cpp? That’s not a clean comparison. Ollama uses llama.cpp as one of its inference backends, so performance is similar when configured the same. The difference is that Ollama adds a management and API layer, which can introduce some overhead and reduce low-level control (batching, KV cache tuning, GPU offload, etc). In exchange, you get significantly better usability: model packaging, versioning, and a clean local API. Claude Code and local models (officially) here I come! 🕺
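The "clean local API" Jim credits to Ollama can be illustrated concretely. This is a minimal sketch of calling Ollama's local REST endpoint (`/api/generate`) from Python, assuming a server running on Ollama's default address `localhost:11434`; the model name is only an illustrative placeholder:

```python
# Minimal sketch of Ollama's local HTTP API layer.
# Assumes an Ollama server on its default port; "mistral-small" is
# an illustrative model name, not necessarily one you have pulled.
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"


def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}


def generate(model: str, prompt: str) -> str:
    """Send a non-streaming completion request and return the generated text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server):
#     text = generate("mistral-small", "Explain a KV cache in one sentence.")
```

Using llama.cpp directly would skip this HTTP layer in exchange for the lower-level control over batching, KV cache tuning, and GPU offload that the tweet mentions.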