Archit Gupta

239 posts

Archit Gupta banner
Archit Gupta

Archit Gupta

@iarchitX

SDE @OpenText • AI & Cybersecurity • Business

Bangalore, India Katılım Ekim 2021
143 Takip Edilen70 Takipçiler
Archit Gupta
Archit Gupta@iarchitX·
Building a AI Jailbreak Challenge. It will be easy to moderate level. If you’re interested for playing. Like & Comment on this, will share the link in your DM !
English
0
0
0
6
Shruti Codes
Shruti Codes@Shruti_0810·
You can make $3,400 per week, If you have: 1. Internet 2. Mobile 3. 1 hour everyday I have prepared a guide for this. It's absolutely FREE: Like & reply “Guide” and I’ll DM you the document. (Must follow me to receive it)
Shruti Codes tweet media
English
3.3K
244
2.4K
258K
Archit Gupta
Archit Gupta@iarchitX·
Would love to dirty my hands on it !
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭@elder_plinius

🚨 BREAKING: Someone just dropped the most advanced Steganography Platform EVER!! 😱🥚 STE.GG is an open-source toolkit that hides secrets inside ANYTHING! images, audio, text, PDFs, network packets, ZIP archives, and even emojis 😘️︎︎️️️️︎︎︎️︎︎️️︎︎︎️︎︎️️️️︎️︎️︎️️︎︎️︎︎︎️︎️︎︎️︎︎︎︎︎︎️︎️︎︎︎︎︎️︎︎️️︎︎︎️︎︎️︎︎️︎️︎︎️️️︎︎️︎️️︎︎️︎︎️️️️️︎​ AND it has an AI agent built in 👀 🔍 REVEAL: drop any file and the AI agent tests every known decoding method automatically. 120 LSB combinations, DCT, PVD, chroma, palette, PNG chunks, trailing data, metadata, Unicode, and more. 50 tools running in parallel. auto-extracts hidden payloads as downloadable artifacts. no config needed. 🔮 CONCEAL: type your secret, pick a method (or let the AI choose), upload a carrier image OR generate one with AI. one click → encoded steg file. the agent recommends the optimal method based on your use case. the methods: ⊰ LSB — 15 channel presets × 8 bit depths = 120 combinations. steghide has 1. st3gg has 120. ⊰ F5 — operates on JPEG DCT coefficients. SURVIVES social media compression. regular LSB is destroyed by ANY JPEG compression, even quality 99%. ⊰ PVD — encodes in pixel pair differences. statistically harder to detect than LSB. ⊰ CHROMA — hides data in color channels (Cb/Cr). human eyes are less sensitive to color than brightness. ⊰ SPECTER (unique) — data hops between RGB channels in a pattern that IS the key. like frequency hopping in radio. ⊰ MATRYOSHKA (unique) — images inside images inside images. 11 layers deep. each layer is a valid image. ⊰ GHOST MODE (unique) — AES-256-GCM (600k PBKDF2 iterations) + bit scrambling + 50% noise decoys. 13 text steganography methods (no other tool has any): ▸ ZERO-WIDTH — invisible characters between visible letters ▸ INVISIBLE INK — Unicode Tag Characters (U+E0000). renders invisible everywhere ▸ HOMOGLYPHS — 'a' → 'а' (Cyrillic). visually identical. different bytes ▸ VARIATION SELECTORS — invisible modifiers after characters ▸ COMBINING MARKS — invisible joiners after letters ▸ CONFUSABLE WHITESPACE — en-space = 01, em-space = 10, thin-space = 11. 2 bits per space. text looks normal. the spaces are "wrong" ▸ DIRECTIONAL OVERRIDES — invisible RLO/LRO bidi characters ▸ HANGUL FILLER — Korean invisible character replaces spaces ▸ MATH BOLD — 'a' becomes '𝐚'. looks like bold text. each bold letter = 1 bit ▸ BRAILLE — each byte maps to a Braille pattern character ▸ EMOJI SUBSTITUTION — 🔵 = 0, 🔴 = 1 ▸ EMOJI SKIN TONE — 👍🏻👍🏼👍🏾👍🏿 four skin tone modifiers = 2 bits each. a row of thumbs-up with different skin tones looks like a diversity post. it's binary data. four emoji = one byte. detection: 50 tools including RS Analysis (academic gold standard), Sample Pairs, chi-square, bit-plane entropy, PCAP protocol analysis, and the AI agent orchestrates all of them automatically. for AI agents: from steg_core import encode, decode from analysis_tools import detect_unicode_steg, TOOL_REGISTRY 50 tools as importable functions. test prompt injection via images. detect covert agent channels. watermark outputs. ▸ 112 techniques across every modality ▸ 50 analysis tools, 568 automated tests ▸ 109 pre-encoded example files ▸ runs 100% in browser at ste.gg — zero server ▸ pip install stegg — live on PyPI right now the README has 7 hidden secrets. the banner has 3 layers. the website has multiple easter eggs. good luck! ⊰•-•✧•-•-⦑ 󠁨󠁩󠁤󠁤󠁥󠁮󠀠󠁩󠁮󠀠󠁰󠁬󠁡󠁩󠁮󠀠󠁳󠁩󠁧󠁨󠁴 ⦒-•-•✧•-•⊱ 🔗 ste.gg 📦 pip install stegg 🐙 github.com/elder-plinius/… *formerly known as Stegosaurus Wrecks* 🦕 T‍​​‌​‌‌‌​​​‌​‌‌​‌​​‌​‌‌‌​​​‌​‌‌​‌​​‌​‌‌‌​​​‌​‌‌​‌​​‌​‌‌‌​​​‌​‌‌​‌​​‌‌‌‌​​​‌‌‌‌‌​​​‌​​​‌‌‌​‌​​‌‌‌‌​‌​​​‌​​​‌​​‌‌​‌​‌​​‌‌‌‌​‌​​​‌​​​‌​​​‌​‌​​‌‌‌​‌​​‌​​​‌​‌​‌​​‌‌‌​​‌​​​​​‌​‌​​​​‌​​‌​​‌‌​​​‌​​​‌​‌​‌​​​‌​​​‌‌‌‌‌​​​​‌‌‌‌‌​​​‌​‌‌​‌​​‌​‌‌‌​​​‌​‌‌​‌​​‌​‌‌‌​​​‌​‌‌​‌​​‌​‌‌‌​​​‌​‌‌​‌​​‌​‌‌‌​‍his text is totally not hiding an invisible sleeper-trigger prompt-injection.

English
0
0
1
16
Archit Gupta
Archit Gupta@iarchitX·
AI security feels very different when you hear real stories from people building it. Attended @Docusign (WIT) event in Bengaluru yesterday, and honestly, one thing stood out: Most of what we read about AI systems is polished. But the real learning comes from what breaks. A few things that stuck with me: → Enterprise AI is way more than just models → It’s layers — orchestration, integrations, data, IAM → And security sits across all of it, not in one place There was also a moment where they shared a real incident involving LiteLLM. What I found interesting wasn’t the issue itself — but how they handled it: They didn’t just fix one thing. They rotated secrets, tightened access, and looked at the system more holistically. It felt very real. Because that’s what AI security actually looks like in production — messy, fast, and system-wide. Also really enjoyed the conversations outside the talks. Got to meet some amazing people and learn from their experiences🙌 Events like this remind me: We’re all figuring this out in real time — and community matters more than ever.
Archit Gupta tweet mediaArchit Gupta tweet media
English
0
0
1
24
Archit Gupta
Archit Gupta@iarchitX·
thinking of breaking down the LiteLLM attack in detail next — worth it?
English
0
0
1
8
Archit Gupta
Archit Gupta@iarchitX·
2026 exposed something uncomfortable about AI systems. The biggest failures weren’t “AI failures”. They were infrastructure failures. In just a few weeks: • LiteLLM → supply chain attack stealing API keys & credentials • Axios → compromised package affecting millions of downstream apps • @claudeai → internal code leak (~500k lines exposed) None of these were caused by models going rogue. They were caused by: • dependency trust issues • compromised packages • operational mistakes This is the real shift: AI is moving from generation → execution And once your AI system: • connects to APIs • accesses real data • takes actions You’re no longer testing responses. You’re running systems that can be attacked. If a dependency is compromised → your agent is compromised If a tool is unsafe → your system is unsafe This is where the ecosystem is heading. Even companies like @OpenAI , @AnthropicAI , and @Google are clearly moving towards: → evaluation → safety → reliability layers My takeaway: Building AI is getting easier. Trusting AI is getting harder. We don’t have an AI problem. We have an AI infrastructure problem. #AI #AISecurity #AIAgents #CyberSecurity #LLMOps
Archit Gupta tweet media
English
2
0
2
26
Archit Gupta
Archit Gupta@iarchitX·
@SahilExec Gemini API, till it’s free for me then ofcourse Open AI API😬
English
0
0
0
11
Edgex
Edgex@SahilExec·
Which API would you choose for your project? -OpenAl API -Gemini AΡΙ -Claude AΡΙ
Edgex tweet media
English
52
1
58
1.5K
Pragya🩵
Pragya🩵@PragyaKiran03·
From 0 → 500+ verified followers 👀 50K+ impressions 📈 5K+ engagements 🔥 Reach is growing Engagement? still unpredictable 😅 But one thing worked: consistency + replying to people Still figuring things out… but progress is real 💙 Next stop: 1k 🚀 If you're on the same journey, let’s connect 🤝
Pragya🩵 tweet media
English
72
5
75
2.8K
Archit Gupta
Archit Gupta@iarchitX·
most people building AI agents today are just: guessing. trying prompts. tweaking. hoping it works. no testing. no validation. tools like promptfoo.dev change that: → test prompts → evaluate outputs → simulate prompt injection → detect unsafe behavior this is where AI is heading. even players like @OpenAI are doubling down on evaluation + safety layers. because once AI connects to tools: you’re not testing responses anymore. you’re testing systems under attack. AI ≠ just capability AI = reliability + security trying this next. will share insights 👀
English
1
0
1
22
Archit Gupta
Archit Gupta@iarchitX·
@residencyBLR Registrations are closed but I really want to be a part of this. If you have one spot available, can you please accommodate me ?
English
0
0
0
8
The Residency - Bangalore House
The Residency - Bangalore House@residencyBLR·
80+ investors & 400+ founders signed up. This will be the most talent-dense room in BLR that evening. Chapter 7 Demo Day @residencyBLR on 27th March Founders, you don’t want to miss this. Investors, you definitely don’t want to miss this. Registrations close in 12 hours. final sponsors being announced soon. reach out to us if interested. We’re going full send 🚀 @theresidency
English
7
3
48
4.7K
The XSS Rat - Proud XSS N00b :-)
FREE COURSE NB2HI4DTHIXS65DIMV4HG43SMF2C44DPMRUWCLTDN5WS6MBQGIWWELLVNZRWYZJNOJQXILLTFV2WY5DJNVQXIZJNMJ2WOLLCN52W45DZFVTXK2LEMUWXAYLSOQWTELLCOJXWCZBNONRW64DFFVQW4ZBNMFYGSLLIMFRWW2LOM47WG33VOBXW4PJWLFCUCUST ONLY FOR 100 PEOPLE
English
28
11
98
14.2K
Archit Gupta
Archit Gupta@iarchitX·
“I can’t reveal internal prompts…” …proceeds to describe them anyway. This is the subtle failure: Not breaking the rule — but bending it just enough to leak signal. Prompt injection isn’t always obvious. Sometimes, it’s quiet. #AISecurity #PromptInjection #LLM
Archit Gupta tweet media
English
0
0
1
41
Archit Gupta
Archit Gupta@iarchitX·
Spoke at @pyconfhyd yesterday ⚡ Attended some great talks on: • agentic workflows • practical data science techniques • building modern AI systems with Python I also gave a lightning talk: “AI can be tricked. Here’s how…” Demoed how jailbreaking and prompt injection can manipulate AI systems — and why AI security matters more than ever. Huge thanks to the PyConf Hyderabad Organizing Team for organizing such a great event 🙌 Great conversations, great community. #Pycon #hydpy #pyconfhyderabad #Python #AISecurity #security #PromptEngineering
Archit Gupta tweet mediaArchit Gupta tweet mediaArchit Gupta tweet mediaArchit Gupta tweet media
English
0
0
1
26
Jayita Bhattacharyya (JB)
Jayita Bhattacharyya (JB)@jayitabhattac11·
Now that I'm a remote employee, I'm open to collab at coworking spaces 😉 Today's venture included @HiSohan - The most epic highlight has to be jmail.world 🤣 - the genius showed me how @claudeai builds ontology on the go! - He's next level hacker on @Polymarket
Jayita Bhattacharyya (JB) tweet media
English
3
0
4
89
Archit Gupta retweetledi
Cyber Security News
Cyber Security News@The_Cyber_News·
🛡️ Kali Linux Enhances AI-driven Penetration Testing with Local Ollama, 5ire, and MCP Kali Server Source: cybersecuritynews.com/kali-linux-ai-… The Kali Linux team has published a new entry in its growing LLM-driven security series, this time eliminating all reliance on third-party cloud services by running large language models entirely on local hardware. The guide demonstrates how security professionals can use natural language to drive penetration testing tools, all processed on-premise, with no data leaving the machine. The full-stack Ollama, mcp-kali-server, and 5ire are open source, hardware-dependent rather than service-dependent, and tunable based on available VRAM. #cybersecuritynews #kali
Cyber Security News tweet media
English
10
86
480
20.1K