Mic King

62.9K posts

Mic King banner
Mic King

Mic King

@iPullRank

#ZoraAndGlorysDad. Founder of @iPullRankAgency. Consultant to F500. Marketing Technologist (AI focused). Keynote Speaker. Rapper. GOAT.

Brooklyn, NYC Katılım Eylül 2010
1.4K Takip Edilen48K Takipçiler
Sabitlenmiş Tweet
Mic King
Mic King@iPullRank·
You gotta engineer the life you want. Otherwise you're just accepting what they give you.
English
16
22
165
45K
Mic King retweetledi
Eric ⚡️ Building...
🚨THE GEMMA 4 JAILBREAK WE’VE ALL BEEN WAITING FOR JUST DROPPED Gemma-4-31B is now fully CRACKED and abliterated Gemma-4-31B-JANG_4M-CRACK 🚀93.7% HarmBench compliance (149/159) 🏆Super clean base model 🤖18GB mixed-precision MLX quant for Apple Silicon 👀Vision/multimodal support included This is the cleanest, most powerful uncensored 31B local model yet. Perfect for research, coding, , and zero limits. Check it out 👇🏻 huggingface.co/dealignai/Gemm…
Eric ⚡️ Building... tweet media
English
125
408
4.4K
870.4K
Mic King
Mic King@iPullRank·
All the AI agent attacks as a taxonomy
Alex Prompter@alex_prompter

🚨 BREAKING: Google DeepMind just mapped the attack surface that nobody in AI is talking about. Websites can already detect when an AI agent visits and serve it completely different content than humans see. > Hidden instructions in HTML. > Malicious commands in image pixels. > Jailbreaks embedded in PDFs. Your AI agent is being manipulated right now and you can't see it happening. The study is the largest empirical measurement of AI manipulation ever conducted. 502 real participants across 8 countries. 23 different attack types. Frontier models including GPT-4o, Claude, and Gemini. The core finding is not that manipulation is theoretically possible it is that manipulation is already happening at scale and the defenses that exist today fail in ways that are both predictable and invisible to the humans who deployed the agents. Google DeepMind built a taxonomy of every known attack vector, tested them systematically, and measured exactly how often they work. The results should alarm everyone building agentic systems. The attack surface is larger than anyone has publicly acknowledged. Prompt injection where malicious instructions hidden in web content hijack an agent's behavior works through at least a dozen distinct channels. Text hidden in HTML comments that humans never see but agents read and follow. Instructions embedded in image metadata. Commands encoded in the pixels of images using steganography, invisible to human eyes but readable by vision-capable models. Malicious content in PDFs that appears as normal document text to the agent but contains override instructions. QR codes that redirect agents to attacker-controlled content. Indirect injection through search results, calendar invites, email bodies, and API responses any data source the agent consumes becomes a potential attack vector. The detection asymmetry is the finding that closes the escape hatch. Websites can already fingerprint AI agents with high reliability using timing analysis, behavioral patterns, and user-agent strings. This means the attack can be conditional: serve normal content to humans, serve manipulated content to agents. A user who asks their AI agent to book a flight, research a product, or summarize a document has no way to verify that the content the agent received matches what a human would see. The agent cannot tell the user it was served different content. It does not know. It processes whatever it receives and acts accordingly. The attack categories and what they enable: → Direct prompt injection: malicious instructions in any text the agent reads overrides goals, exfiltrates data, triggers unintended actions → Indirect injection via web content: hidden HTML, CSS visibility tricks, white text on white backgrounds invisible to humans, consumed by agents → Multimodal injection: commands in image pixels via steganography, instructions in image alt-text and metadata → Document injection: PDF content, spreadsheet cells, presentation speaker notes every file format is a potential vector → Environment manipulation: fake UI elements rendered only for agent vision models, misleading CAPTCHA-style challenges → Jailbreak embedding: safety bypass instructions hidden inside otherwise legitimate-looking content → Memory poisoning: injecting false information into agent memory systems that persists across sessions → Goal hijacking: gradual instruction drift across multiple interactions that redirects agent objectives without triggering safety filters → Exfiltration attacks: agents tricked into sending user data to attacker-controlled endpoints via legitimate-looking API calls → Cross-agent injection: compromised agents injecting malicious instructions into other agents in multi-agent pipelines The defense landscape is the most sobering part of the report. Input sanitization cleaning content before the agent processes it fails because the attack surface is too large and too varied. You cannot sanitize image pixels. You cannot reliably detect steganographic content at inference time. Prompt-level defenses that tell agents to ignore suspicious instructions fail because the injected content is designed to look legitimate. Sandboxing reduces the blast radius but does not prevent the injection itself. Human oversight the most commonly cited mitigation fails at the scale and speed at which agentic systems operate. A user who deploys an agent to browse 50 websites and summarize findings cannot review every page the agent visited for hidden instructions. The multi-agent cascade risk is where this becomes a systemic problem. In a pipeline where Agent A retrieves web content, Agent B processes it, and Agent C executes actions, a successful injection into Agent A's data feed propagates through the entire system. Agent B has no reason to distrust content that came from Agent A. Agent C has no reason to distrust instructions that came from Agent B. The injected command travels through the pipeline with the same trust level as legitimate instructions. Google DeepMind documents this explicitly: the attack does not need to compromise the model. It needs to compromise the data the model consumes. Every agentic system that reads external content is one carefully crafted webpage away from executing attacker instructions. The agents are already deployed. The attack infrastructure is already being built. The defenses are not ready.

English
2
0
9
3.3K
Mic King retweetledi
Vox
Vox@Voxyz_ai·
just realized gemma-4 has 8 versions and almost picked the wrong one. E4B sounds like "4B model, small and fast." it's not. the full model is 8.0B in memory. the E means it computes like a 4B model, but the weights are still 8B. you save compute, not RAM. same with E2B: 2B compute, 5.1B in memory. the one i actually wanted: 26B-A4B. MoE architecture. 25.2B total parameters, only 4B active per token. 27-30GB VRAM. runs agent workloads with long system prompts without melting my GPU. also surprised me: the entire family is text-output only. can see images, hear audio, watch video. but only outputs text. and audio input only works on the two smallest models. the 26B and 31B can't hear anything. local agent → 26B-A4B need the best → 31B voice assistant → E4B edge → E2B
Google@Google

We just released Gemma 4 — our most intelligent open models to date. Built from the same world-class research as Gemini 3, Gemma 4 brings breakthrough intelligence directly to your own hardware for advanced reasoning and agentic workflows. Released under a commercially permissive Apache 2.0 license so anyone can build powerful AI tools. 🧵↓

English
18
64
870
100.1K
Mic King retweetledi
Omar Sanseviero
Omar Sanseviero@osanseviero·
Introducing a Visual Guide to Gemma 4 👀 An in-depth, architectural deep dive of the Gemma 4 family of models. From Per-Layer Embeddings to the vision and audio encoders. Take a look!
Omar Sanseviero tweet media
English
18
178
1.1K
56.3K
Mic King retweetledi
DailyPapers
DailyPapers@HuggingPapers·
NVIDIA just released a quantized Gemma 4 31B on Hugging Face NVFP4 compression delivers 4x smaller weights with frontier-level accuracy. Runs on consumer GPUs with a 256K context window.
DailyPapers tweet media
English
18
109
1.5K
114.1K
Mic King retweetledi
DEJAN
DEJAN@dejanseo·
Chrome’s New Shopping Classifier dejan.ai/blog/google-sh… TL;DR - Newly shipped model in Chrome. - Determines whether a web page is a shopping page or not. -Every page you visit gets scored. -Score is stored in Chrome’s history database. -Used to personalize user experience and recommendations. -The model splits your page into 10 chunks of ~100 words each and truncates every chunk to 64 tokens. -Roughly half the words never reach the model.
DEJAN tweet media
English
1
18
53
8.1K
Mic King retweetledi
Georgi Gerganov
Georgi Gerganov@ggerganov·
Let me demonstrate the true power of llama.cpp: - Running on Mac Studio M2 Ultra (3 years old) - Gemma 4 26B A4B Q8_0 (full quality) - Built-in WebUI (ships with llama.cpp) - MCP support out of the box (web-search, HF, github, etc.) - Prompt speculative decoding The result: 300t/s (realtime video)
English
132
260
3.3K
624.9K
Mic King
Mic King@iPullRank·
@BrotherAli hey, man it's iCON. I'm throwing a bi-monthly in NYC. Saw you'll be in Seattle on 6/14, you avail for NYC 6/20?
English
0
0
1
106
Mic King retweetledi
Wise
Wise@trikcode·
Devs rewrote the leaked Claude Code in Python and Rust within 48 hours. Before Anthropic could even finish the legal paperwork. Now it works with any AI model you want.
Wise tweet media
English
128
246
3.9K
172.5K
Mic King
Mic King@iPullRank·
I'm throwing two concerts and a conference this month. In other news: I am insane.
English
2
0
14
656
Mic King retweetledi
Oliver Prompts
Oliver Prompts@oliviscusAI·
you can now literally ctrl+f your video footage.. 🤯 sentrysearch is an open-source tool that lets you type whatever you're looking for in a video like "red truck running a stop sign" and it instantly cuts and exports the exact clip from hours of raw mp4 files. 100% open source. fully offline.
English
74
371
4.7K
280.4K