Hamza Mudassir

2.3K posts

@hamzam1981

Founder and CEO at Strategize Labs | Private, sovereign agentic AI for high-trust work | Proud baba to two little girls

Cambridge, UK · Joined August 2012
516 Following · 543 Followers
Pinned Tweet
Hamza Mudassir @hamzam1981
The devil is in the metadata
1 reply · 0 reposts · 26 likes · 0 views
Hamza Mudassir retweeted
Sudo su @sudoingX
a corporate salesman on an openai paycheck tells you local models aren't there yet. an influencer selling you an API wrapper calls the local AI community on X "cancer."

meanwhile we're out here modding communities, helping strangers debug their configs at midnight, fighting spam, pushing open source, and doing it all for free.

these people don't want you running models on your own hardware. they want you as a customer. every local install is revenue they lose. every migration from their bloat is a subscription cancelled.

don't let corporate noise and engagement bait merchants convert you into their recurring revenue. buy a GPU. compile from source. own your thinking.

the community they call cancer is the same community that will help you get started for free while they charge you per token.
70 replies · 83 reposts · 1.1K likes · 107.5K views
Hamza Mudassir @hamzam1981
@FT Oh boy why does @MistralAI need this much debt financing? No one uses their models. Not being mean.
0 replies · 0 reposts · 0 likes · 190 views
Hamza Mudassir retweeted
KITKAT @KITKAT
Regarding recent press coverage
[image]
15.3K replies · 36.4K reposts · 376.2K likes · 112.3M views
Hamza Mudassir retweeted
OpenCode @opencode
we’ve signed Zero Data Retention agreements with all providers for Go
all models now follow a zero-retention policy
your data is not used for training
166 replies · 286 reposts · 5.3K likes · 367.6K views
Hamza Mudassir retweeted
Strategize Labs @StrategizeLabs
New capability drop: Agents now pause mid-execution to ask structured questions – multi-choice or free text – instead of guessing. Your work, your direction, zero assumptions. #AgentIntelligence @hamzam1981 1/4
[GIF]
1 reply · 2 reposts · 2 likes · 21 views
Hamza Mudassir retweeted
Strategize Labs @StrategizeLabs
New drop: a meme generator that turns classic templates into custom posts fast. Pick the joke, tweak the text, post the result. Simple. Sharp. Shareable. #MemeGenerator @hamzam1981 1/4
[image]
1 reply · 1 repost · 1 like · 17 views
Hamza Mudassir retweeted
Dave @GamewithDave
No accounts. No logins. No patches. No DLC roadmap. Just the game. We didn’t realise how good we had it.
[image]
376 replies · 3.4K reposts · 27.4K likes · 464.4K views
Hamza Mudassir @hamzam1981
@digitalix how is the performance? anything which is not CUDA ends up being a pain for local compute
0 replies · 0 reposts · 0 likes · 54 views
Alex Ziskind @digitalix
more specs
Petri Kuittinen @KuittinenPetri

@VideoCardz Intel Arc Pro B70 has 32 Xe cores, 256 XMX engines, up to 367 peak TOPS, and 608 GB/s memory bandwidth with 32 GB GDDR6. 160W to 290W power. 32 GB would be nice for AI inference! But its memory bandwidth is only 34% of the Nvidia RTX 5090's; the price is also ~34%, at $949 MSRP.
6 replies · 4 reposts · 169 likes · 50.6K views
Alex Ziskind @digitalix
32GB of VRAM for under $1000! The Intel Arc Pro B70 just landed.
338 replies · 336 reposts · 5.1K likes · 1.1M views
Hamza Mudassir @hamzam1981
Our #agents, Ada and Alfred, can now generate memes in real time based on what they know about you. These are the first two Ada generated about what she knew about me. Our agents clearly have a sense of humour. @StrategizeLabs @RehmanSajjad
[3 images]
0 replies · 1 repost · 2 likes · 22 views
Hamza Mudassir retweeted
Strategize Labs @StrategizeLabs
Alfred & Ada Update: Digital Memories just got smarter. World model building: 68 semantic clusters, transparent memory, intelligent debugging. Your agent learns your context. Trust through visibility. #MemoryIntelligence @hamzam1981 1/6
[GIF]
1 reply · 2 reposts · 2 likes · 13 views
Hamza Mudassir retweeted
Andrés Matte @andresmatte
Today we are launching the Kapso CLI: WhatsApp numbers for agents.
1️⃣ npm install -g @kapso/cli
2️⃣ kapso setup
Done, your agent has a WhatsApp number.
190 replies · 316 reposts · 4.4K likes · 704.6K views
Hamza Mudassir retweeted
GameVault @RealGameVault
MDK (1997)
173 replies · 273 reposts · 3.4K likes · 269.8K views
Hamza Mudassir retweeted
stevibe @stevibe
"122B has to be smarter than 27B"

I showed 4 UI components to three Qwen3.5 models and asked them to recreate them from a screenshot alone:
- 27B (dense)
- 35B-A3B (MoE)
- 122B-A10B (MoE)

Same screenshot. Same prompt. Same task. Which one do you think nailed it?
54 replies · 71 reposts · 927 likes · 88.5K views
Hamza Mudassir @hamzam1981
Most AI agents are black boxes. You ask. They answer. But how they got there? What they remember about you? Total mystery. That bugged me for months while building #Alfred_Ada.

See, conversations on their own are usually disconnected — people naturally chat in discrete sessions limited by their specific goal at that time, topic drift is contained, nothing really links to anything else. But memories? Memories change everything. When you build a memory system that ties discrete conversations together, neural connections explode. Suddenly the agent has something neither raw chat logs nor simple retrieval can give you — intuition. An emergent sense of who you are.

The 3D semantic visualization below isn't a party trick. It's a live mirror of the understanding I've built with my agents over time, and it is different for each user.

Early customers keep telling me the same thing: after turning on Digital Twin Memories, #Alfred_Ada just... wake up. Within a few conversations, they know you. Not perfectly — but enough to matter.

That's the real north star here. Not flashy demos. Not benchmarks. You. What's the line between "useful assistant" and "someone who actually gets me"? I think it looks a lot like this.

#AgenticAI #MemorySystems #AIAgents #StrategizeLabs @StrategizeLabs
0 replies · 1 repost · 1 like · 18 views
Hamza Mudassir retweeted
Wise @trikcode
There’s a new kind of burnout now. Not from working too much. From trying to keep up with tools, models, frameworks, launches, and 600 people saying “it’s over” every morning.
389 replies · 746 reposts · 7.9K likes · 241.1K views
Hamza Mudassir retweeted
Theo - t3.gg @theo
Since OpenAI dropped gpt-oss-120b, Mistral has released 4 models that are worse than gpt-oss-120b
Artificial Analysis @ArtificialAnlys

Mistral has released Mistral Small 4, an open weights model with hybrid reasoning and image input, scoring 27 on the Artificial Analysis Intelligence Index.

@MistralAI's Small 4 is a 119B mixture-of-experts model with 6.5B active parameters per token, supporting both reasoning and non-reasoning modes. In reasoning mode, Mistral Small 4 scores 27 on the Artificial Analysis Intelligence Index, a 12-point improvement from Small 3.2 (15) and now among the most intelligent models Mistral has released, surpassing Mistral Large 3 (23) and matching the proprietary Magistral Medium 1.2 (27). However, it lags open weights peers with similar total parameter counts such as gpt-oss-120B (high, 33), NVIDIA Nemotron 3 Super 120B A12B (Reasoning, 36), and Qwen3.5 122B A10B (Reasoning, 42).

Key takeaways:
➤ Reasoning and non-reasoning modes in a single model: Mistral Small 4 supports configurable hybrid reasoning with reasoning and non-reasoning modes, rather than the separate reasoning variants Mistral has released previously with their Magistral models. In reasoning mode, the model scores 27 on the Artificial Analysis Intelligence Index. In non-reasoning mode, the model scores 19, a 4-point improvement from its predecessor Mistral Small 3.2 (15).
➤ More token efficient than peers of similar size: At ~52M output tokens, Mistral Small 4 (Reasoning) uses fewer tokens to run the Artificial Analysis Intelligence Index compared to reasoning models such as gpt-oss-120B (high, ~78M), NVIDIA Nemotron 3 Super 120B A12B (Reasoning, ~110M), and Qwen3.5 122B A10B (Reasoning, ~91M). In non-reasoning mode, the model uses ~4M output tokens.
➤ Native support for image input: Mistral Small 4 is a multimodal model, accepting image input as well as text. On our multimodal evaluation, MMMU-Pro, Mistral Small 4 (Reasoning) scores 57%, ahead of Mistral Large 3 (56%) but behind Qwen3.5 122B A10B (Reasoning, 75%). Neither gpt-oss-120B nor NVIDIA Nemotron 3 Super 120B A12B support image input. All models support text output only.
➤ Improvement in real-world agentic tasks: Mistral Small 4 scores an Elo of 871 on GDPval-AA, our evaluation based on OpenAI's GDPval dataset that tests models on real-world tasks across 44 occupations and 9 major industries, with models producing deliverables such as documents, spreadsheets, and diagrams in an agentic loop. This is more than double the Elo of Small 3.2 (339) and close to Mistral Large 3 (880), but behind gpt-oss-120B (high, 962), NVIDIA Nemotron 3 Super 120B A12B (Reasoning, 1021), and Qwen3.5 122B A10B (Reasoning, 1130).
➤ Lower hallucination rate than peer models of similar size: Mistral Small 4 scores -30 on AA-Omniscience, our evaluation of knowledge reliability and hallucination, where scores range from -100 to 100 (higher is better) and a negative score indicates more incorrect than correct answers. Mistral Small 4 scores ahead of gpt-oss-120B (high, -50), Qwen3.5 122B A10B (Reasoning, -40), and NVIDIA Nemotron 3 Super 120B A12B (Reasoning, -42).

Key model details:
➤ Context window: 256K tokens (up from 128K on Small 3.2)
➤ Pricing: $0.15/$0.6 per 1M input/output tokens
➤ Availability: Mistral first-party API only. At native FP8 precision, Mistral Small 4's 119B parameters require ~119GB to self-host the weights (more than the 80GB of HBM3 memory on a single NVIDIA H100)
➤ Modality: Image and text input with text output only
➤ Licensing: Apache 2.0 license

89 replies · 31 reposts · 1.9K likes · 149.1K views
Hamza Mudassir retweeted
Rohan Joshi @ron_joshi
Introducing Kitten TTS V0.8: open-source TTS that fits in 25MB.
Three variants: 80M | 40M | 14M (<25MB)
Highly expressive. Runs on CPU. Built for edge.
No GPU? No problem. Ship voice anywhere.
Check it out:
97 replies · 263 reposts · 2.2K likes · 159.8K views