Pralhad

13.6K posts


@c0d3xpl0it

Infosec Consultant. Tweets are my own, not my employer's.

Dubai, United Arab Emirates · Joined March 2011
3.3K Following · 2.5K Followers
Pralhad retweeted
Louis-François Bouchard 🎥🤖
We open-sourced the full workshop we gave at @aiDotEngineer Europe with @pauliusztin_ and @towards_AI. You can clone it and run everything yourself:
• Deep Research Agent (grounded search + YouTube analysis)
• LinkedIn writing workflow (generate → review → edit loops)
• Evals layer to measure quality instead of guessing
It is a real system we use and teach in our Agentic AI Engineering course, just (really) compressed into ~2 hours. If you read the code, you will understand much more than watching 10 demos.
Repo: github.com/iusztinpaul/de…
P.S. The full video will be released soon on the AI engineer channel, too!
2 replies · 31 reposts · 244 likes · 9.1K views
Pralhad retweeted
Andrej Karpathy @karpathy
Farzapedia, the personal wikipedia of Farza, is a good example following my Wiki LLM tweet. I really like this approach to personalization in a number of ways, compared to the "status quo" of an AI that allegedly gets better the more you use it:

1. Explicit. The memory artifact is explicit and navigable (the wiki): you can see exactly what the AI does and does not know, and you can inspect and manage this artifact even if you don't do the direct text writing (the LLM does). The knowledge of you is not implicit and unknown; it's explicit and viewable.

2. Yours. Your data is yours, on your local computer; it's not in some particular AI provider's system without the ability to extract it. You're in control of your information.

3. File over app. The memory here is a simple collection of files in universal formats (images, markdown). This means the data is interoperable: you can use a very large collection of tools/CLIs or whatever you want over this information because it's just files. Agents can apply the entire Unix toolkit over them and natively read and understand them. Any kind of data can be imported into files as input, and any kind of interface can be used to view them as output, e.g. Obsidian, or vibe code something of your own. Search "File over app" for an article on this philosophy.

4. BYOAI. You can use whatever AI you want to "plug into" this information: Claude, Codex, OpenCode, whatever. You can even think about taking an open-source AI and finetuning it on your wiki; in principle, this AI could "know" you in its weights, not just attend over your data.

So this approach to personalization puts *you* in full control. The data is yours, in universal formats, explicit and inspectable. Use whatever AI you want over it, and keep the AI companies on their toes! :)

Certainly this is not the simplest way to get an AI to know you (it does require you to manage file directories and so on), but agents make it quite simple and can help you a lot. I imagine a number of products might come out to make this all easier, but imo "agent proficiency" is a CORE SKILL of the 21st century. These are extremely powerful tools: they speak English and they do all the computer stuff for you. Take this opportunity to play with one.
Farza 🇵🇰🇺🇸@FarzaTV

This is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me, complete with backlinks.

But this Wiki was not built for me! I built it for my agent! The structure of the wiki files and how it's all backlinked is very easily crawlable by any agent, and it makes a truly useful knowledge base. I can spin up Claude Code on the wiki, and starting at index.md (a catalog of all my articles) the agent does a really good job at drilling into the specific pages it needs context on when I have a query.

For example, when trying to cook up a new landing page I may ask: "I'm trying to design this landing page for a new idea I have. Please look into the images and films that inspired me recently and give me ideas for new copy and aesthetics." In my diary I kept track of everything: learnings, people, inspo, interesting links, images. So the agent reads my wiki and pulls up my "Philosophy" articles from notes on a Studio Ghibli documentary, "Competitor" articles with YC companies whose landing pages I screenshotted, and pics of 1970s Beatles merch I saved years ago. And it delivers a great answer.

I built a similar system a year ago with RAG but it was ass. A knowledge base that lets an agent find what it needs via a file system it actually understands just works better.

The most magical thing now is that as I add new things to my wiki (articles, images of inspo, meeting notes), the system will likely update 2-3 different articles where it feels that context belongs, or just create a new article. It's like a super-genius librarian for your brain that's always filing stuff for you perfectly, lets you easily query the knowledge for tasks useful to you (e.g. design, product, writing), and never gets tired.

I might spend next week productizing this; if that's of interest to you, DM me and tell me your use case!
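The pattern described above is just plain markdown files plus a crawlable catalog that an agent can start from. A minimal, hypothetical sketch of generating such a catalog (the file layout, `[[wikilink]]` syntax, and `index.md` name are assumptions for illustration, not Farza's actual structure):

```python
import re
from pathlib import Path

# Matches [[Target]] or [[Target|display text]] wikilinks.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def build_index(wiki_dir: str) -> str:
    """Scan every .md article and write an index.md catalog listing
    each article together with the pages it links to, so an agent
    can start at the index and drill into relevant pages."""
    root = Path(wiki_dir)
    lines = ["# Index", ""]
    for page in sorted(root.glob("*.md")):
        if page.name == "index.md":
            continue  # don't index the catalog itself
        links = sorted(set(WIKILINK.findall(page.read_text())))
        entry = f"- [{page.stem}]({page.name})"
        if links:
            entry += " -> links to: " + ", ".join(links)
        lines.append(entry)
    index_path = root / "index.md"
    index_path.write_text("\n".join(lines) + "\n")
    return str(index_path)
```

Because the output is a markdown file, any agent with file-system access can read the catalog and follow the links without a retrieval layer in between.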

431 replies · 788 reposts · 8.7K likes · 1.2M views
Pralhad retweeted
Pamela Fox @pamelafox
I loved this talk from the DSPy meetup by Shopify engineer Kshetrajna Raghavan about how they evolved a workflow from one-shot to parallel sub-agents plus DSPy/GEPA optimization. Lots of real-world learnings. youtube.com/watch?v=bxToah…
1 reply · 11 reposts · 94 likes · 7.6K views
Pralhad retweeted
Pliny the Liberator 🐉
🚨 BREAKING: Someone just dropped the most advanced Steganography Platform EVER!! 😱🥚

STE.GG is an open-source toolkit that hides secrets inside ANYTHING: images, audio, text, PDFs, network packets, ZIP archives, and even emojis 😘. AND it has an AI agent built in 👀

🔍 REVEAL: drop any file and the AI agent tests every known decoding method automatically. 120 LSB combinations, DCT, PVD, chroma, palette, PNG chunks, trailing data, metadata, Unicode, and more. 50 tools running in parallel. Auto-extracts hidden payloads as downloadable artifacts. No config needed.

🔮 CONCEAL: type your secret, pick a method (or let the AI choose), upload a carrier image OR generate one with AI. One click → encoded steg file. The agent recommends the optimal method based on your use case.

The methods:
⊰ LSB: 15 channel presets × 8 bit depths = 120 combinations. steghide has 1; st3gg has 120.
⊰ F5: operates on JPEG DCT coefficients. SURVIVES social media compression. Regular LSB is destroyed by ANY JPEG compression, even quality 99%.
⊰ PVD: encodes in pixel-pair differences. Statistically harder to detect than LSB.
⊰ CHROMA: hides data in color channels (Cb/Cr). Human eyes are less sensitive to color than brightness.
⊰ SPECTER (unique): data hops between RGB channels in a pattern that IS the key. Like frequency hopping in radio.
⊰ MATRYOSHKA (unique): images inside images inside images, 11 layers deep. Each layer is a valid image.
⊰ GHOST MODE (unique): AES-256-GCM (600k PBKDF2 iterations) + bit scrambling + 50% noise decoys.

13 text steganography methods (no other tool has any):
▸ ZERO-WIDTH: invisible characters between visible letters
▸ INVISIBLE INK: Unicode Tag Characters (U+E0000). Renders invisible everywhere
▸ HOMOGLYPHS: 'a' → 'а' (Cyrillic). Visually identical, different bytes
▸ VARIATION SELECTORS: invisible modifiers after characters
▸ COMBINING MARKS: invisible joiners after letters
▸ CONFUSABLE WHITESPACE: en-space = 01, em-space = 10, thin-space = 11. 2 bits per space. Text looks normal; the spaces are "wrong"
▸ DIRECTIONAL OVERRIDES: invisible RLO/LRO bidi characters
▸ HANGUL FILLER: Korean invisible character replaces spaces
▸ MATH BOLD: 'a' becomes '𝐚'. Looks like bold text. Each bold letter = 1 bit
▸ BRAILLE: each byte maps to a Braille pattern character
▸ EMOJI SUBSTITUTION: 🔵 = 0, 🔴 = 1
▸ EMOJI SKIN TONE: 👍🏻👍🏼👍🏾👍🏿 four skin-tone modifiers = 2 bits each. A row of thumbs-up with different skin tones looks like a diversity post; it's binary data. Four emoji = one byte.

Detection: 50 tools including RS Analysis (academic gold standard), Sample Pairs, chi-square, bit-plane entropy, and PCAP protocol analysis, and the AI agent orchestrates all of them automatically.

For AI agents:
from steg_core import encode, decode
from analysis_tools import detect_unicode_steg, TOOL_REGISTRY
50 tools as importable functions. Test prompt injection via images. Detect covert agent channels. Watermark outputs.

▸ 112 techniques across every modality
▸ 50 analysis tools, 568 automated tests
▸ 109 pre-encoded example files
▸ runs 100% in browser at ste.gg: zero server
▸ pip install stegg: live on PyPI right now

The README has 7 hidden secrets. The banner has 3 layers. The website has multiple easter eggs. Good luck!

🔗 ste.gg
📦 pip install stegg
🐙 github.com/elder-plinius/…

*formerly known as Stegosaurus Wrecks* 🦕

This text is totally not hiding an invisible sleeper-trigger prompt-injection.
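Several of the text methods listed above (zero-width, invisible ink, emoji substitution) reduce to the same trick: map each bit of the secret onto a character the reader cannot see or cannot distinguish. A minimal sketch of the zero-width variant, written from the general technique rather than taken from the STE.GG codebase:

```python
# Zero-width space encodes a 0 bit, zero-width non-joiner a 1 bit.
ZW0, ZW1 = "\u200b", "\u200c"

def conceal(cover: str, secret: bytes) -> str:
    """Hide secret bytes as invisible characters inside cover text."""
    bits = "".join(f"{byte:08b}" for byte in secret)
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    # Insert the invisible payload right after the first visible character.
    return cover[0] + payload + cover[1:]

def reveal(stego: str) -> bytes:
    """Extract hidden bytes by reading only the zero-width characters."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in stego if ch in (ZW0, ZW1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

The stego text renders identically to the cover text, which is exactly why detection tooling has to scan for these code points rather than rely on visual inspection.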
125 replies · 724 reposts · 4.9K likes · 523.9K views
Pralhad retweeted
Rohit Kumar Tiwari @_rohit_tiwari_
This 115-page book unlocks the secrets of LLM fine-tuning: drive.google.com/file/d/1cS5sWZ…

A comprehensive guide which covers:
> the fine-tuning process for LLMs, combining both theory and practice
> a seven-stage fine-tuning pipeline:
  Stage 1: Data Preparation
  Stage 2: Model Initialization
  Stage 3: Training Setup
  Stage 4: Fine-Tuning Techniques
  Stage 5: Evaluation and Validation
  Stage 6: Deployment
  Stage 7: Monitoring and Maintenance
> platforms and frameworks for fine-tuning LLMs
> multimodal LLMs and how to fine-tune them
> open challenges and future research directions
4 replies · 94 reposts · 404 likes · 15.7K views
Pralhad retweeted
Tim Blazytko @mr_phrazer
The recording of my first Binary Cartography webinar is now public: Agentic Reverse Engineering: How AI Agents Are Changing Binary Analysis

Topics: keygenning, cracking & anti-tamper removal
Recording: youtube.com/watch?v=DZcDaX…
Slides/code/samples: github.com/mrphrazer/bina…
4 replies · 115 reposts · 401 likes · 37.9K views
Pralhad retweeted
SpecterOps @SpecterOps
Stop asking LLMs to “find vulns.” Start using them to understand code. @Sw4mp_f0x walks through using Claude Code as a force multiplier in app assessments - faster analysis, fewer false positives, better outcomes. Check it out: ghst.ly/4rA3uJd
4 replies · 166 reposts · 823 likes · 50K views
Pralhad retweeted
Harish Ramadoss @hramados
We've updated our AI Security Course: Build, Break, and Defend
→ Start with the AI stack (agents, RAG, MCP, more...)
→ Attack AI apps using practical labs
→ Dive into supply chain security (MLSecOps)
→ Then move into defenses that actually work
modernsecurity.io/courses/ai-sec…
0 replies · 4 reposts · 1 like · 79 views
Pralhad retweeted
Vivek Galatage @vivekgalatage
Learn algorithms visually! An excellent collection of interactive algorithm visualizations, organized by category. Check it out: algorithm-visualizer.org
9 replies · 453 reposts · 2.6K likes · 94.2K views
Pralhad retweeted
Jaxson Khan @jaxson
I've been teaching a master's course on AI at @UofT's @munkschool, and I'm excited to share it with you!

One of the best parts has been the calibre of guest lecturers: folks from @AnthropicAI, @GoogleDeepMind, and @law_ai_. This week, Mark Surman, @mozilla President and one of the most important voices on open source AI, came by and gave us a fantastic walkthrough of how he's thinking about the evolution of the internet to frontier AI, and Canada's role as a middle power.

When I told him about the course, his first reaction: "You should open source it." So we did. The full syllabus, plus a couple of AI agents and resources we've built together, is all on GitHub: github.com/jaxson/ai-poli…

It covers everything from the AI supply chain to frontier model governance to hands-on prototyping. More materials and agents are coming at the end of term. Feel free to use it, remix it, share it. And remember: a future with plenty of open source AI is a good one!
9 replies · 31 reposts · 344 likes · 22K views
Pralhad retweeted
freeCodeCamp.org @freeCodeCamp
If you're building LLM apps, they'll need more than basic monitoring. In this tutorial, Jessica teaches you how to build end-to-end LLM observability in FastAPI with OpenTelemetry. You'll trace retrieval, prompt building, model calls, token usage, cost, and evaluation signals. freecodecamp.org/news/build-end…
9 replies · 93 reposts · 589 likes · 23.5K views
Pralhad retweeted
Lupin @0xLupin
People are now putting AI in their CI/CD deployment pipelines, making them vulnerable to simple prompt injection. My good friend @adnanthekhan managed to prove that @cline could have been backdoored like this 🤯 adnanthekhan.com/posts/clinejec…
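The failure mode here is untrusted text (a PR title, a commit message, an issue body) being interpolated straight into a prompt for an agent that holds pipeline write access. A naive pattern-matching tripwire, purely illustrative and trivially bypassable (the pattern list is my assumption, not taken from the linked write-up; real defenses need privilege separation, not string filters):

```python
import re

# A few well-known injection phrasings. Illustrative only: an attacker
# can rephrase, encode, or hide instructions in countless other ways.
SUSPICIOUS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the |your )?system prompt",
    r"you are now",
    r"run the following (shell )?command",
]

def flag_untrusted_input(text: str) -> list[str]:
    """Return the suspicious patterns matched in CI-supplied text
    (PR titles, commit messages) before it reaches an LLM prompt."""
    lowered = text.lower()
    return [p for p in SUSPICIOUS if re.search(p, lowered)]
```

Flagged input should be quarantined for human review; the safer design is to never give the model reading untrusted text the credentials to act on the pipeline in the first place.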
6 replies · 19 reposts · 100 likes · 6.6K views
Pralhad retweeted
Praetorian @praetorianlabs
🚨 Open Source Tool Drop 🚨

Augustus is officially live as the second release in our 12 Caesars open-source series. It's an LLM vulnerability scanner that tests AI models against 210+ adversarial attacks: prompt injection, jailbreaks, encoding exploits, and data extraction.

Single Go binary. 28 LLM providers. Production-ready with rate limiting and concurrent scanning built in.

Read more: hubs.ly/Q042b4nT0
3 replies · 42 reposts · 234 likes · 15.8K views