Mossad Inside

1.6K posts

@mossad_inside

Joined January 2024
310 Following · 21 Followers
ElPuma @PUMAPARDO_
@GordoDan_ @PabloAbrazo_ a matryoshka of fat guys streaming at lower and lower quality until they hit 240p
Mossad Inside retweeted
Maxim Kuznetsov @disarto_max
Workflow builder
[image]
prin 🏹 @prin_vlr
It's unbelievable how disastrous Banco Galicia is. They drained my account again using the NEW debit card they had sent me less than two weeks ago after the last “hack”. Once again there are more than 20 back-to-back purchases I never made, and the latest ones even have future dates??? Their security is a joke @BancoGalicia
[image]
Quoting prin 🏹 @prin_vlr:
GREENS. GREAT.

Tendencias y Tuits Borrados @tendenciaytuits
“Peter Thiel”: Peter Thiel just bought a $12 million mansion in Palermo Chico, Buenos Aires 🇦🇷
Not Miami. Not Monaco. Not London. Buenos Aires. 1,600 m² in the most exclusive and protected area, where the embassies and the old aristocracy live.
The math is brutal: only $700 per square foot. In Manhattan that runs $4k–8k, in Mayfair $3.5k–5k. Thiel paid 15–20% of the real price of a comparable trophy property. A suspicious bargain 🧐
The classic Thiel pattern: in 2011 he took New Zealand citizenship 🗺️ In 2021 he moved to Miami before everyone else 🏠 Now he is landing hard in Argentina, with $36 million in properties in 16 months 💰
His network always follows 18 months later. Partners, companies, capital… everything moves.
Thiel isn't buying a house. He's buying a physical call option on the next big geopolitical arbitrage 😏
When Peter Thiel starts buying mansions in your country… it's no longer tourism. It's a signal. Visionary or opportunistic vulture? ⏳
[images]
Mossad Inside retweeted
Pratik @maybepratikk
kinda obsessed with stone shade.
[image]
Mossad Inside retweeted
Mundo de Bigotes @MundoDeBigotes
ZXX
Mossad Inside retweeted
Graeme @gkisokay
The Local LLM Cheat Sheet for Your 128GB RAM Device
✅16GB ✅32GB ✅64GB ✅128GB 🔜256GB???

At the 128GB tier, you can run serious all-rounders, agent/coding models, math specialists, vision models, and even long-context research models. The big takeaway is that there is no single “best” model here. At 128GB, model choice becomes highly use-case dependent.

General Purpose & All-Rounders

Qwen3.5-397B-A17B / GGUF / UD-IQ1_M
The strongest overall all-rounder on 128GB. Best for general assistant work, the hardest reasoning, and complex agents. This is the headline pick if you want one local model that can cover the broadest range of serious workflows.

Mistral Large 2411 / GGUF / Q4_K_M
Best writing and multilingual all-rounder. Strong for creative writing, business prose, multilingual tasks, function calling, and long-form output. Not the top benchmark pick on the sheet, but excellent when language quality, tone, and writing polish matter.

Coding / Agent / Tool-Use Kings

MiniMax-M2.7 / GGUF / UD-IQ2_XXS
Best agent-loop and tool-use pick. Strong for tool calling, multi-step coding, autonomous tasks, and agentic workflows. This is one of the clearest choices if you are building local agents or want a model optimized around tool use.

GLM-4.5-Air / GGUF / Q6_K
Best coding and SWE-focused option. Strong for repo work, SWE tasks, agent tool loops, code edits, and practical engineering workflows. Pick this when you want a strong balance of coding ability, reasoning, and tool use on 128GB.

Math & Competition Reasoning

Qwen3-235B-A22B-Thinking-2507 / GGUF / UD-Q2_K_XL
Best deep math and structured reasoning model. Strong for competition math, deliberate analysis, structured problem solving, and deep logic. Not the everyday general chat pick, but one of the strongest options when the work is explicitly reasoning-heavy.

gpt-oss-120b / GGUF / Q8_0
Math and efficiency veteran. MXFP4-native weights make Q8_0 effectively full precision at a very efficient size. An older model from August 2025, but still extremely strong for math, olympiad-style tasks, hard reasoning, and efficient high-quality local inference.

Vision & Multimodal

Qwen3-VL-235B-A22B-Thinking / GGUF / UD-IQ3_XXS
Best vision and multimodal specialist. Use it for image understanding, document analysis, OCR-style workflows, UI analysis, charts, diagrams, and multimodal agents. If you want a serious local vision model in the 128GB tier, this is the one to look at.

Long-Context / Research / Fast Thinkers

Llama-4-Scout 17Bx16E / GGUF / Q6_K
Best long-context research specialist. The reason to pick this is the 10M native context window. Use it for massive document sets, full-codebase Q&A, archives, research corpora, and workflows where context length matters more than raw benchmark quality.

Qwen3-Next-80B-A3B-Thinking / GGUF / Q6_K
Best fast thinker. Strong for fast reasoning, interactive math, agent planning, and responsive long-context use. Pick this when you want something that feels quick and useful in live workflows, not necessarily the deepest possible reasoning model.

Let me know which models you are running on 128GB, or which memory tier, hardware profile, or model category you want next.
[image]
Quoting Graeme @gkisokay:
The Local LLM Cheat Sheet for Your 64GB RAM Device

We covered 16GB and 32GB already, and 64GB has been the next big request, which I can understand, because this is where things get very interesting. Here's the practical cheat sheet for a 64GB RAM machine for code, math, reasoning, vision, and more. If there's another memory tier, hardware profile, or model roundup you want next, just let me know.

Flagship Models

Qwen3.6-27B / GGUF / Q8_0
The best overall 64GB flagship. General chat, coding, reasoning, and agent workflows. This is the headline pick if you want one local model that can do almost everything at a very high level.

Qwen3.6-35B-A3B / GGUF / Q6_K
Best fast flagship. Strong for agentic coding, tool use, long-context work, and fast iteration. Similar quality class to the 27B, but better if you care more about responsiveness.

Models for Heavyweight Use

Llama 3.3 70B / GGUF / Q4_K_M
The safe big-model workhorse. Best for long-form writing, broad world knowledge, complex chat, and reliability. Not the sharpest value pick anymore, but still a very strong 70B-class option.

Nemotron Super 49B v1.5 / GGUF / Q6_K
Reasoning specialist. Better suited for math, structured reasoning, analysis, and agent planning than a general-purpose chat model. One of the best picks here if the work is analytical.

Gemma 4 31B / GGUF / Q8_0
Dense premium model. Writing, multilingual work, analysis, and high-end local chat. Heavier than the smaller MoE options, but excellent when output quality matters more than speed.

Kimi-Linear-48B-A3B / GGUF / Q5_K_M
Long-context specialist. Massive docs, whole-codebase Q&A, research, and long-running agents. This is the one to look at if context length is the real priority.

Models for Specialty Use

Qwen3-30B-A3B-Thinking-2507 / GGUF / Q6_K
Thinking specialist. Best for step-by-step math, logic, deliberate analysis, and deep reasoning. A good pick if you specifically want a model that feels tuned for visible thinking.

Qwen3-Coder 30B-A3B / GGUF / Q6_K
Coding specialist. Best for agentic coding, repo edits, tool use, and PR-style work. If you are building code agents, this is one of the clearest specialty picks on the sheet.

Qwen3-VL-32B / GGUF / Q6_K
Vision specialist. Image understanding, document OCR, UI analysis, and multimodal agent workflows. This is the right pick if you need a serious local multimodal model in this RAM tier.

Let me know which models you are running on 64GB, or if you want to see a cheat sheet for 128GB next.

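The RAM-tier picks in both cheat sheets hinge on the same back-of-envelope arithmetic: parameter count × bits-per-weight of the quant type ≈ weight footprint, which then has to fit under the machine's RAM with room left over for KV cache and the OS. A minimal sketch of that estimate in Python, assuming rough community bits-per-weight figures for llama.cpp quant types (the exact values vary by model and quant revision, and the 80% headroom fraction is an illustrative assumption, not a rule):

```python
# Back-of-envelope check for whether a quantized GGUF fits a RAM tier.
# Bits-per-weight values are approximate community figures for llama.cpp
# quant types; KV cache and runtime overhead are folded into a flat
# headroom fraction rather than modeled per-context-length.

APPROX_BPW = {          # quant type -> approximate bits per weight
    "Q8_0": 8.5,
    "Q6_K": 6.6,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.85,
    "IQ2_XXS": 2.1,
    "IQ1_M": 1.75,
}

def gguf_size_gb(params_billions: float, quant: str) -> float:
    """Estimated weight size in GB: (params * bits/weight) / 8 bits per byte."""
    return params_billions * APPROX_BPW[quant] / 8

def fits(params_billions: float, quant: str, ram_gb: int, headroom: float = 0.80) -> bool:
    """True if the weights fit in `headroom` of RAM (rest for KV cache, OS)."""
    return gguf_size_gb(params_billions, quant) <= ram_gb * headroom

# A dense 70B at Q4_K_M: roughly 42 GB of weights, so it fits a 64GB machine
print(round(gguf_size_gb(70, "Q4_K_M"), 1), fits(70, "Q4_K_M", 64))
# The same 70B at Q8_0 (~74 GB) overflows 64GB but fits comfortably at 128GB
print(round(gguf_size_gb(70, "Q8_0"), 1), fits(70, "Q8_0", 64), fits(70, "Q8_0", 128))
```

This is why a 70B-class dense model appears at Q4_K_M on the 64GB sheet, while the 128GB sheet can afford Q8_0 on similarly sized models and extreme low-bit quants (IQ1/IQ2) on the 200B+ mixture-of-experts picks.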
Mossad Inside retweeted
Marcel @marcelkargul
damn, our team is moving so fast. we're designing tens and tens of screens/flows across multiple projects on a daily basis! best part is, the quality is still world-class 😁
[image]
Mossad Inside retweeted
Mundo de Bigotes @MundoDeBigotes
ZXX
Mossad Inside retweeted
Sujon Hossain @sujon_co
Minimal UI. Maximum obsession.
[image]
Mossad Inside retweeted
Adi @AdityaSur11
Clean Onboarding Progress UI
[image]
Mossad Inside retweeted
Udhaya @Udhayya_
My favourite recent work. Did I cook?
[images]
Mossad Inside retweeted
Osha @Osha_Lion
User info dropdown
[image]
Mossad Inside retweeted
Siddharth Vij @siddharthvij_
first-run experience for conversational demo agent
[image]
Mossad Inside retweeted
Thomi @therealthom_
¿?
Mossad Inside retweeted
Gustavo Cardenas @gustav0cardenas
This is the Russians on X explaining to armchair communists why the Soviet Union was bad.
Mossad Inside retweeted
Tendencias en Argentina @porqueTTarg
“Farsante” (“Fraud”): Because the American Dante Guebel gave an interview and couldn't even stand up to his own archive.
Mossad Inside retweeted
Elaya @elayadesigns
I think what I’ve come up with so far is clean.
[image]