
Flo's AI dump
@flo_ai_dump
My AI generated image & video dump. I'm not an "AI artist", I use AI generators for fun.



DUNE: Part Three | Standard VS IMAX 1.43:1


Instagram head Adam Mosseri just wrote 1,240 words on how AI will affect Instagram creators and social media. Here are 9 takeaways:

1. By 2026, “authenticity” will be infinitely reproducible: deepfakes and AI media will look real.
2. The internet already shifted power from institutions to individuals; creators gained trust as institutions declined.
3. AI will produce far more content than humans capture, including high-quality “synthetic” media that soon feels real.
4. As synthetic content floods feeds, true authenticity becomes scarce, increasing demand for trusted creators.
5. The success bar moves from “can you create?” to “can you make something only YOU could make?”
6. Because polish is cheap (AI + phone cameras), a raw, imperfect aesthetic becomes a credibility signal (“proof”).
7. People will shift from assuming media is real to default skepticism, focusing more on who posted and why.
8. Platforms will be pressured to label AI content, but detection will get harder; a better approach may be fingerprinting real media at capture (cryptographic signing, sketched below).
9. Instagram should evolve with better creator tools, clearer AI labeling, real-media verification, richer account context/credibility signals, and stronger ranking for originality.
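Point 8's "fingerprinting real media at capture" essentially means hashing the captured bytes and signing the hash with a key held on the device, so anyone can later check the file is unmodified. A minimal Python sketch using the cryptography library; the device key and file path here are hypothetical, and real provenance schemes such as C2PA embed a full signed manifest rather than a bare signature:

# Minimal sketch of "sign real media at capture" (point 8).
# The device key and file path are hypothetical illustrations.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # in practice, held in secure hardware

def sign_capture(path: str) -> bytes:
    """Hash the captured file and sign the digest with the device key."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return device_key.sign(digest)          # distributed alongside the media

def verify_capture(path: str, signature: bytes) -> bool:
    """Check the media still matches what the device originally signed."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        device_key.public_key().verify(signature, digest)
        return True
    except InvalidSignature:
        return False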



You can now run 70B LLMs on a 4GB GPU. AirLLM just made massive models usable on low-memory hardware.

𝗪𝗵𝗮𝘁 𝗷𝘂𝘀𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝗲𝗱
AirLLM released memory-optimized inference for large language models. It runs 70B models on 4GB VRAM. It can even run 405B Llama 3.1 on 8GB VRAM.

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀
AirLLM loads models one layer at a time. Instead of loading everything:
→ Load a layer
→ Run computation
→ Free memory
→ Load the next layer
This keeps GPU memory usage extremely low (a simplified sketch of the loop follows this post).

𝗞𝗲𝘆 𝗱𝗲𝘁𝗮𝗶𝗹𝘀
• No quantization required by default
• Optional 4-bit or 8-bit weight compression
• Same API as Hugging Face Transformers
• Supports CPU and GPU inference
• Works on Linux and macOS Apple Silicon

𝗪𝗵𝗮𝘁 𝘆𝗼𝘂 𝗰𝗮𝗻 𝗱𝗼
• Run Llama, Qwen, Mistral, Mixtral locally
• Test large models without cloud GPUs
• Prototype agents on cheap hardware
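For intuition on the "How it works" loop above, here is a simplified, conceptual sketch of layer-streamed inference in plain PyTorch. This is not AirLLM's actual code: the per-layer files (layer_00.pt, ...) and the layer count are assumptions made up for illustration.

# Conceptual sketch of layer-by-layer inference (NOT AirLLM's real implementation).
# Assumes each transformer layer was saved to its own file, e.g. "layer_00.pt",
# so only one layer's weights sit in GPU memory at any time.
import torch

NUM_LAYERS = 80      # e.g. a 70B Llama-style model
DEVICE = "cuda"

@torch.no_grad()
def forward_streamed(hidden_states: torch.Tensor) -> torch.Tensor:
    for i in range(NUM_LAYERS):
        # Load this layer's weights onto the GPU
        # (weights_only=False because the file holds a pickled nn.Module in this sketch)
        layer = torch.load(f"layer_{i:02d}.pt", map_location=DEVICE, weights_only=False)
        # Run the computation for just this layer
        hidden_states = layer(hidden_states)
        # Free the layer before loading the next one
        del layer
        torch.cuda.empty_cache()
    return hidden_states

The trade-off is disk I/O for every layer on every forward pass, which is why this is slower than fully loaded inference but fits in a fraction of the VRAM.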



LTX-2 is THE open source video AI moment I've been waiting for: the first to generate high quality video AND be FAST!!
- 4090: generate 20s at 720p in 2 minutes
- A4500 (3070): generate 10s at 480p in 3 minutes
1-click install on Pinokio, do it now!

OK this changes everything, just unlocked a new LTX-2 superpower in Wan2GP. Turns out you can input audio to make the video synchronize to the audio clip! To unlock: just check the advanced mode and it will let you upload an audio prompt!








