Armitage ᵗᵒˣᶦᶜ ツ

4.5K posts

@yoimlit

Antarctica Joined May 2016
940 Following 126 Followers
Armitage ᵗᵒˣᶦᶜ ツ retweeted
Nav Toor
Nav Toor@heynavtoor·
You pay Netflix $19.99 a month. Then Disney+ takes another $18.99. HBO Max wants $18.49. Hulu is $18.99. That is $76.46 a month. $917 a year.

And the shows still disappear. Your favorite movie gets pulled. The show you were halfway through gets cancelled. Netflix raised prices on March 26, 2026. HBO Max went up in October 2025. Plex doubled its Lifetime Pass from $120 to $249.99 and put remote streaming behind a paywall. Remember the movies you "bought" on Amazon Prime? Some of them vanished. Amazon is being sued in a class action right now because "purchased" does not actually mean purchased. You do not own anything you stream. You rent permission.

There is a self-hosted Netflix you run on your own hardware. Every movie. Every show. Every song. Every photo. Streaming to every device you own. For $0. It is called Jellyfin. 50,500+ stars on GitHub. Not a stripped-down media player. A full Netflix-grade streaming platform. Beautiful interface. Posters, descriptions, cast, trailers fetched automatically. Looks and feels like the real thing.

Here is what it does:
→ Stream movies, TV, music, audiobooks, photos to any device.
→ Apps for iOS, Android, Apple TV, Fire TV, Roku, Android TV, Samsung, LG, Xbox, Kodi, Chromecast, browser.
→ Hardware transcoding on Intel, NVIDIA, AMD, Raspberry Pi.
→ Live TV and DVR with an antenna.
→ SyncPlay. Watch movies in perfect sync with friends across the country.
→ Multi-user profiles, parental controls, plugin ecosystem.
→ No account. No cloud. No telemetry. No ads. Ever.

Here's the wildest part: Plex used to be the move. Then they doubled the Lifetime Pass. Locked remote streaming behind a paywall. Auto-shared your watch history with strangers. Made you sign into THEIR cloud servers to access YOUR files on YOUR hardware. The community said enough. They forked Emby in 2018 and built Jellyfin.

Hardware transcoding? Free. Plex charges for it.
Remote streaming? Free. Plex charges for it.
Live TV DVR? Free. Plex charges for it.
Mobile offline sync? Free. Plex charges for it.

Plex Pass: $249.99 lifetime. Netflix + Disney+ + HBO Max + Hulu: $917 a year. Jellyfin: $0. Forever.

Runs on a Raspberry Pi. Runs on a 10-year-old laptop. Runs on a $20 mini PC. Runs on your existing NAS.

50,500+ stars. 4,672 forks. 370+ contributors. GPL-2.0 license. Active daily since 2018. Your movies. Your music. Your server. Your rules. 100% Open Source. (Link in the comments)
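A common way to try a setup like this is Jellyfin's official Docker image. A minimal sketch, assuming Docker is installed; the two host paths are placeholders you would swap for your own folders:

```shell
# Minimal Jellyfin trial via the official jellyfin/jellyfin Docker image.
# 8096 is Jellyfin's default HTTP port; host paths below are placeholders.
docker run -d \
  --name jellyfin \
  -p 8096:8096 \
  -v /path/to/config:/config \
  -v /path/to/media:/media:ro \
  jellyfin/jellyfin
# Then open http://localhost:8096 to run the first-time setup wizard.
```

Mounting the media library read-only (`:ro`) is optional but keeps the server from ever modifying your files.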
26
45
373
27.9K
Armitage ᵗᵒˣᶦᶜ ツ retweeted
Rhys
Rhys@_RhysThorne·
This video of the Teotihuacan attack is unbelievable. WHY are people just standing around watching like it’s a show? WHY are people not being evacuated?! WHERE are the police?!
177
486
14K
4M
yeet
yeet@Awk20000·
Hasan calls Asmongold a botter after YourRAGE joked that Asmon view-mogged him.

"Botmongold" –H
"Wait, you think he bots?" –R
"I mean there was some suspicion" –H
171
7
548
89.5K
Armitage ᵗᵒˣᶦᶜ ツ retweeted
How To AI
How To AI@HowToAI_·
Someone just built the app store for Claude Code. A free library of 1000+ ready-to-use agents, skills, commands, MCPs, and hooks that you install with a single command. And it is 100% free to use.
16
62
565
38.4K
TMZ
TMZ@TMZ·
🚨 BREAKING: D4vd had tons of child pornography on his phone, prosecutors claim. tmz.me/hQbJXGx
3.9K
5.9K
80.7K
25.9M
Tom Cotton
Tom Cotton@SenTomCotton·
Marijuana today is much more potent than just ten or twenty years ago, leading to increased psychosis, anti-social behavior, and fatal car crashes. Arkansans don’t want more dangerous drugs obtained more easily. A change to marijuana’s drug classification is a step in the wrong direction.
16.4K
575
5.3K
2.7M
Armitage ᵗᵒˣᶦᶜ ツ retweeted
Graeme
Graeme@gkisokay·
The Local LLM Cheat Sheet for your 32GB RAM device

I was asked to put together a practical lineup of local models that fit comfortably on a 32GB machine. At this tier, you start getting access to real flagship-class local models, plus a growing number of custom quants. But for most people, these are the core models worth knowing first.

Flagship Models
Qwen3.5 27B / GGUF / Q6_K_M: The best overall 32GB flagship. General chat, writing, research, and agent workflows. Great if you want one model that can handle almost everything well.
Qwen3.6-35B-A3B / GGUF / UD-Q4_K_M: Best MoE flagship. Stronger for coding, reasoning, and tool use than most smaller generalists.
Gemma 4 31B / GGUF / Q6_K_M: Dense premium model. Writing, analysis, reasoning, and high-end local chat. Heavier than the MoE options, but excellent when quality matters more than speed.

Models for Fast Flagship Use
Gemma 4 26B A4B / GGUF / Q6_K_M: Great balance of speed and quality for general assistant work, coding, agent tasks, and research. One of the best 32GB picks if you want something that feels high-end without dragging.
DeepSeek-R1 Distill Qwen 32B / GGUF / Q4_K_M: Offline reasoning engine. Best for math, logic, deliberate analysis, and step-by-step problem solving.
Mistral Small 24B / GGUF / Q6_K_M: Tool-calling specialist. Strong for assistants, chat workflows, local business tasks, and function calling. Also fits 24GB machines.

Models for Companion Use
Qwen3.5 9B / GGUF / Q6_K_M: Best sidekick. Fast drafts, search loops, cheap retries, and secondary agent work. Even on a 32GB machine, you still want a smaller model around for support tasks.
Llama 3.1 8B / GGUF / Q6_K_M: Long-context companion. RAG, doc ingestion, codebase chat, and long prompts. The output quality is not the sharpest anymore, but it is still useful when you need simple tasks done fast.

From what my community tells me, the best single models are Qwen3.5 27B or Gemma 4 31B. For two models, the strongest general pairing is Qwen3.5 27B + Qwen3.5 9B. If you are more code-heavy, Qwen3.6-35B-A3B + Llama 3.1 8B.

Let me know what models you are running on 32GB, and which ones have actually been worth the RAM.
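A quick way to sanity-check picks like these is a back-of-the-envelope RAM estimate: parameter count times bytes per weight for the quant, plus some overhead for context. A minimal sketch; the bits-per-weight figures are rough averages I am assuming, not exact GGUF sizes:

```python
# Back-of-the-envelope RAM estimate for a GGUF quant: parameters times
# bytes per weight, plus a couple of GB of overhead for context/KV cache.
# The bits-per-weight numbers are rough averages, not exact GGUF sizes.
BITS_PER_WEIGHT = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K_M": 6.6, "Q8_0": 8.5}

def est_ram_gb(params_b, quant, overhead_gb=2.0):
    """Approximate resident size in GB for a params_b-billion-param model."""
    return params_b * BITS_PER_WEIGHT[quant] / 8 + overhead_gb

# A 27B model at Q6_K_M lands around 24 GB, which fits a 32GB machine;
# a 9B sidekick at the same quant adds under 10 GB more.
print(round(est_ram_gb(27, "Q6_K_M"), 1))
print(round(est_ram_gb(9, "Q6_K_M"), 1))
```

This is only a fit check, not a speed estimate; MoE models like the A3B/A4B entries above run faster than their total parameter count suggests because only a few experts are active per token.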
Graeme@gkisokay

The Local LLM cheat sheet for your 16GB RAM device

I pulled together a lineup of small models that can run comfortably on a Mac Mini or personal laptop while still leaving room for context without melting your machine.

Models for Daily Use
Qwen3.5 9B / GGUF / Q4_K_M: Daily driver. General chat, drafting, research, translation. If you're keeping only one, keep this.
DeepSeek-R1 Distill Qwen 7B / GGUF / Q4_K_M: Reasoning engine. Math, logic, step-by-step problems. Slower, but worth it when you need actual thinking.

Models for Specialty Work
Qwen2.5 Coder 7B / GGUF / Q4_K_M: Code specialist. Completions, refactors, debugging, repo Q&A. Better than a generalist when the task is code.
Llama 3.1 8B / GGUF / Q4_K_M: Long-context worker. RAG, doc chat, codebase Q&A. The output isn't top tier, but the context is strong for its size.
Phi-4 Mini Reasoning / GGUF / Q4_K_M: Compact thinker. Logic, structured answers, math, and short coding bursts. Smaller context is the catch.

Models for Efficiency
Gemma 4 E4B / GGUF / Q4_K_M: Light all-rounder. Writing, chat, light agents, structured output.
Phi-3.5 Mini / GGUF / Q5_K_M: Pocket sidekick. Summaries, extraction, background doc chat. Easy to pair with a bigger model.
Qwen3.5 2B / GGUF / Q4_K_M: Useful for summaries, tagging, rewrites, and lightweight sidekick work.

Micro Models
Qwen3.5 0.8B / GGUF / Q5_K_M: Classification, keyword routing, binary decisions, triage.
Gemma 4 E2B-it / GGUF / Q4_K_M: Lightweight chat, quick Q&A, summaries, tiny agents.

My personal choice for a single model is Qwen3.5 9B. For two models, use Qwen3.5 9B + Qwen2.5 Coder 7B for code, or Qwen3.5 9B + Phi-3.5 Mini for support tasks.

Let me know in the comments your experience with these models, or any I have left out.

85
343
2.1K
284.6K
Armitage ᵗᵒˣᶦᶜ ツ retweeted
Tips Excel
Tips Excel@gudanglifehack·
Your iPhone has a setting called "Advanced Data Protection." It is OFF by default. Apple holds the encryption keys to your iCloud data. 10 settings to change right now:
2
34
164
40.6K
Armitage ᵗᵒˣᶦᶜ ツ retweeted
Avid
Avid@Av1dlive·
CEO of Nvidia: "I'd hire the graduate who's expert in AI over the one who isn't. Every time."

two types of people right now:
type 1: still typing prompts into a chat window. hits enter and hopes.
type 2: built an AI agent. sold it for $100,000. turned a trend into leverage.

type 1 gets replaced. type 2 becomes the solo billion-dollar founder.

this is what the full guide looks like
Rohit@rohit4verse

x.com/i/article/2044…

78
233
2.1K
1.1M
Armitage ᵗᵒˣᶦᶜ ツ retweeted
Nicolas Hulscher, MPH
Nicolas Hulscher, MPH@NicHulscher·
We found 84% of cancer patients taking ivermectin and mebendazole for 6 months declared their cancer was either COMPLETELY GONE, REGRESSED, or STOPPED SPREADING. It’s no surprise the CIA BURIED a 1950s study for over HALF A CENTURY showing anti-parasitics disrupt cancer growth.
Nicolas Hulscher, MPH@NicHulscher

The CIA CLASSIFIED a 1950s study showing anti-parasitics disrupt cancer growth—and kept it BURIED for over HALF A CENTURY. Millions of cancer victims have paid the price as this vital line of research was set back DECADES.

352
9.8K
31.3K
2M
Armitage ᵗᵒˣᶦᶜ ツ retweeted
Raunak Yadush
Raunak Yadush@raunak_yadush·
Windows 11 has been secretly preloading the Edge browser into your RAM this whole time, eating memory before you even open a single app. Here's the fix they don't want you to know about:

Win+R → regedit → HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge → New DWORD → Name: StartupBoostEnabled → Set value to 0 → Restart PC

Instant RAM recovery. Noticeable FPS boost in Valorant and Fortnite.
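The same change can be applied without clicking through regedit by importing a .reg file with the key and value from the steps above (editing HKEY_LOCAL_MACHINE still requires administrator rights):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"StartupBoostEnabled"=dword:00000000
```

Save it as disable-edge-startup-boost.reg, double-click to import, then restart.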
19
147
713
62.7K
Armitage ᵗᵒˣᶦᶜ ツ retweeted
✦ VISUAL AI ✦
✦ VISUAL AI ✦@VisualconAI·
🚨 BREAKING: Someone just killed the Claude Code subscription. It's called free-claude-code. An open-source proxy that converts Anthropic API calls to the NVIDIA NIM format and gives you 40 requests per minute completely free. Setup takes 2 minutes: grab a free NVIDIA API key, point Claude Code at localhost, and you're done. No bill. No rate-limit panic. No single-provider lock-in. Supports Kimi K2, GLM 4.7, MiniMax M2, Devstral, and more. Streams thinking tokens and tool calls in real time. And it has a built-in Telegram bot so you can control Claude Code from your phone. This is not just another OpenRouter. It turns Claude Code into a free agent you control from anywhere. Code on GitHub. It's called free-claude-code.
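For context, "point Claude Code at localhost" usually means overriding the API base URL via an environment variable. A hedged sketch, assuming the proxy listens on port 8080; the actual port and variable names are whatever the free-claude-code README specifies:

```shell
# Assumed setup: free-claude-code proxy already running locally on :8080.
# ANTHROPIC_BASE_URL redirects Claude Code's API traffic to the proxy.
export ANTHROPIC_BASE_URL="http://localhost:8080"
claude
```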
38
369
4.3K
294.6K
Armitage ᵗᵒˣᶦᶜ ツ retweeted
BuBBliK
BuBBliK@k1rallik·
NVIDIA IS LITERALLY GIVING AWAY FREE AI INFERENCE

I literally set it up in 5 minutes and couldn't believe it was free. DeepSeek, MiniMax, Kimi, GLM, Llama - all on NVIDIA's DGX Cloud via a clean OpenAI-compatible API.

Setup in 5 min:
→ build.nvidia.com
→ grab API key
→ base_url = integrate.api.nvidia.com/v1
→ drop it into any OpenAI SDK

We've been using it. Yes, it slows down under heavy load. Yes, the free tier has limits. But for solo devs, indie hackers, and students learning AI engineering? This is the best free playground that exists right now. Stop paying $20/mo to experiment. Use this first.
Dhruv@dhruvtwt_

Why is no one talking about this? @nvidia is offering around 80 AI models via hosted APIs absolutely for free. You get access to MiniMax M2.7, GLM 5.1, Kimi 2.5, DeepSeek 3.2, GPT-OSS-120B, Sarvam-M, etc. This plugs straight into OpenClaude, OpenCode, Zed IDE, Hermes agent, and even Cursor IDE.

Setup:
– Grab API key: build.nvidia.com/models
– base_url = "integrate.api.nvidia.com/v1"
– api_key = "$NVIDIA_API_KEY"
– select model (e.g. minimaxai/minimax-m2.7)

If you're building or experimenting, this is basically free inference. Lock in and start building today anon. Thank me later.

28
129
1.3K
153.3K
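The setup steps above amount to pointing any OpenAI-style client at NVIDIA's endpoint. A minimal stdlib sketch of the request shape, with the base URL and example model id copied from the tweet; the API key is a placeholder, and nothing is actually sent until you call urlopen:

```python
import json
import urllib.request

# OpenAI-compatible chat request aimed at NVIDIA's hosted endpoint.
# Base URL and model id are from the tweet above; the key is a placeholder.
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(api_key, model, prompt):
    """Build a POST request for the /chat/completions route."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        url=BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer " + api_key,
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("$NVIDIA_API_KEY", "minimaxai/minimax-m2.7", "Hello")
print(req.full_url)  # → https://integrate.api.nvidia.com/v1/chat/completions
# With a real key: resp = urllib.request.urlopen(req)
```

Any OpenAI SDK accepts the same two overrides (base_url and api_key), which is why this drops into existing tooling so easily.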
Armitage ᵗᵒˣᶦᶜ ツ retweeted
Dhruv
Dhruv@dhruvtwt_·
Why is no one talking about this? @nvidia is offering around 80 AI models via hosted APIs absolutely for free. You get access to MiniMax M2.7, GLM 5.1, Kimi 2.5, DeepSeek 3.2, GPT-OSS-120B, Sarvam-M, etc. This plugs straight into OpenClaude, OpenCode, Zed IDE, Hermes agent, and even Cursor IDE.

Setup:
– Grab API key: build.nvidia.com/models
– base_url = "integrate.api.nvidia.com/v1"
– api_key = "$NVIDIA_API_KEY"
– select model (e.g. minimaxai/minimax-m2.7)

If you're building or experimenting, this is basically free inference. Lock in and start building today anon. Thank me later.
503
1.7K
17.3K
1.4M
HYPEX
HYPEX@HYPEX·
Epic Games took 778,495+ moderation actions on Fortnite in Jul–Dec 2025:
• 365,277 for cyber harassment
• 287,664 for hate speech
• 54,082 for inappropriate language
• 53,894 for spam
• 101 suicide-related interventions
• 83 grooming actions against predators
• 30 CSAM reports reviewed
• 22 terrorist content actions

For voice chat, reported clips get fed through speech-to-text plus an AI/LLM that auto-sanctions if it catches a violation. Text chat is scanned 24/7 for things like self-harm, threats, and predators going after minors; it is always on in game chats and anything involving minors. When it flags something, a human reviews it. For CSAM they use PhotoDNA to match against known abuse images and report it to NCMEC. The rest is people manually reporting.
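The voice-chat flow described above (speech-to-text, then an AI policy check, then human review of any hit) can be sketched generically. Every name here is an illustrative placeholder, not Epic's actual code or API:

```python
# Generic sketch of the reported-clip flow: transcribe, classify, then
# auto-sanction (with the flag queued for human review) or take no action.
def moderate_clip(clip, transcribe, classify):
    text = transcribe(clip)       # speech-to-text on the reported clip
    if classify(text):            # AI/LLM decides it violates policy
        return "auto-sanction"    # sanction now; a human reviews the flag
    return "no action"

# Toy stand-ins for the two models:
result = moderate_clip(
    b"clip-bytes",
    transcribe=lambda clip: "a threatening line",
    classify=lambda text: "threat" in text,
)
print(result)  # → auto-sanction
```

The key design point the tweet describes is ordering: the automated sanction happens first for speed, and the human review happens after, rather than gating every action on a person.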
206
294
5.5K
805.5K