Subtain Malik

1.6K posts


@iamsbmalik

Building Real-Time Scalable ML Solutions

Ede, Netherlands · Joined October 2019

156 Following · 73 Followers
Pinned Tweet
Subtain Malik @iamsbmalik
• Read every day.
• Spend time in nature.
• Ask questions.
• Never stop learning.
• Don't pay attention to what others think of you.
• Do what interests you the most.
• Study hard.
• Teach others what you know.
• It's okay to make mistakes.
• Make mistakes and learn!
0 replies · 0 reposts · 5 likes · 0 views
Subtain Malik retweeted
Google for Developers @googledevs
A new PyTorch-native backend is coming to unlock the power of Google TPUs:
✨ Run existing PyTorch with minimal code changes.
✨ Get a 50-100%+ performance boost with Fused Eager mode.
Read the engineering deep dive here: goo.gle/4vbTQQl
#TorchTPU #PyTorch #MLOps #AI
14 replies · 121 reposts · 785 likes · 53.5K views
Subtain Malik retweeted
Ryo Lu @ryolu_
when software had a soul

there was a moment around 2005 when using a Mac felt like touching something alive. the dock bounced. the genie effect swooped. exposé scattered your windows like cards on a table. none of it was strictly necessary. all of it felt like someone cared – not about metrics, but about the feeling of using a machine.

software back then had texture. it had a philosophy. you could feel the person behind it. someone made a decision to make that icon beautiful, to animate that transition just so, to write that error message with a little warmth. apps had personalities. some were weird. some were over-designed in ways that would make a modern PM flinch. but they were alive.

the web was the same. personal sites were genuinely personal. blogs felt like letters. forums had regulars. you knew who made what. the internet had neighborhoods, and each one felt different. nothing was optimized for scale. things were made by people who loved what they were making.

somewhere along the way, we traded all of that for growth. A/B tests flattened the edges. design systems standardized the personality out. everything got faster, smoother, more consistent – and somehow less interesting. the quirks were removed because they didn't test well. the warmth got cut because it wasn't measurable. we optimized our way into a world of things that work perfectly and feel like nothing.

now every app looks the same. every interface follows the same patterns. every product speaks in the same calm, frictionless voice, siloed in its own little island. the humanity got rounded off.

and then came AI agents, and the speed got inhuman. now you can generate an entire product in an afternoon. ship a feature before lunch. spin up ten variations before anyone's had their coffee. the gap from idea to code is basically zero. which sounds incredible. and it is. but there's a catch: when making things is too easy, the slop comes for free too.

mediocre things don't look obviously bad – they look fine. they work. they ship. they pass review. and now there are infinitely many of them. the internet is filling up with software that functions but means nothing. interfaces that are correct but feel dead. products made by agents, reviewed by no one, shipped into the void.

this is the thing that keeps me up at night. not that AI will replace people who care, but that it will drown them out.

here's what I still believe: the best things are made by people who couldn't help themselves. someone who lost sleep over an icon. who rewrote the same line of copy twelve times. who added an animation nobody asked for because it made the thing feel right. that obsession – that's not inefficiency. that's the whole point.

AI doesn't make that irrelevant. it actually makes it rarer and more valuable. taste is not a markdown skill. caring is not a parameter. the weird, specific, "soul" thing you put into something – that can't be programmed into existence.

the path forward isn't to make more slop faster. it's to finally give people with real vision the tools to make the thing they always imagined but couldn't build alone. the designer who had the idea but couldn't code. the kid who saw something nobody else saw. the person who cared too much about something most people wouldn't notice.

if we get this right, we don't get a faster factory. we get a renaissance. more strange, personal, opinionated software made by teams of people who care and mean it. that's still possible. but only if the people who care get the space and tools to actually express themselves – and don't just hand the wheel to the agent and walk away.
142 replies · 348 reposts · 2.5K likes · 416.8K views
Yasser Bashir @yasserbashir
Try yasser.ai! You can ask my digital clone how I built it, among other things.
2 replies · 0 reposts · 21 likes · 535 views
Subtain Malik retweeted
Haider. @haider1
Google's Jeff Dean says bigger context windows alone are not enough. What matters is staged retrieval: lightweight mechanisms that narrow a trillion tokens down to 10 million, then to the million you actually need. "You don't need a trillion at once, you need the right million."
44 replies · 92 reposts · 982 likes · 140.6K views
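The staged-retrieval idea above can be sketched as a funnel: a cheap first pass narrows a huge corpus to a small candidate set, then a costlier scorer picks the final few. This is a minimal illustrative sketch only; the function names and scoring heuristics are invented for the example and are not from any system Jeff Dean described.

```python
# Hypothetical two-stage retrieval funnel. Stage 1 is deliberately cheap and
# coarse; stage 2 spends more effort per document, but only on the survivors.

def cheap_filter(corpus, query_terms, keep=1000):
    """Stage 1: lightweight lexical match -- fast, coarse narrowing."""
    scored = [(sum(t in doc for t in query_terms), doc) for doc in corpus]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [doc for score, doc in scored[:keep] if score > 0]

def expensive_rank(candidates, query_terms, keep=3):
    """Stage 2: costlier scoring (term frequency here) over the small set."""
    def score(doc):
        words = doc.split()
        return sum(words.count(t) for t in query_terms) / (len(words) or 1)
    return sorted(candidates, key=score, reverse=True)[:keep]

corpus = [
    "the cat sat on the mat",
    "retrieval narrows a huge corpus in stages",
    "staged retrieval keeps only the tokens you need",
    "unrelated document about gardening",
]
stage1 = cheap_filter(corpus, ["retrieval", "staged"])   # trillion -> millions
final = expensive_rank(stage1, ["retrieval", "staged"])  # millions -> the right few
```

In a real system the stages would be things like an inverted index or BM25 followed by a neural reranker; the point is only that each stage shrinks the set the next, more expensive stage has to touch.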
Subtain Malik retweeted
Jianyang Gao @gaoj0017
The TurboQuant paper (ICLR 2026) contains serious issues in how it describes RaBitQ, including incorrect technical claims and misleading theory/experiment comparisons. We flagged these issues to the authors before submission. They acknowledged them, but chose not to fix them. The paper was later accepted and widely promoted by Google, reaching tens of millions of views. We’re speaking up now because once a misleading narrative spreads, it becomes much harder to correct. We’ve written a public comment on openreview (openreview.net/forum?id=tO3AS…). We would greatly appreciate your attention and help in sharing it.
Google Research @GoogleResearch

Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI

98 replies · 975 reposts · 6.5K likes · 1M views
Subtain Malik retweeted
Unsloth AI @UnslothAI
We collaborated with @NVIDIA to teach you about Reinforcement Learning and RL environments. Learn:
• Why RL environments matter + how to build them
• When RL is better than SFT
• GRPO and RL best practices
• How verifiable rewards and RLVR work
Blog: unsloth.ai/blog/rl-enviro…
26 replies · 246 reposts · 1.7K likes · 88.6K views
Logan Kilpatrick @OfficialLoganK
Most people don’t want to go the extra mile, but the extra mile is where all the upside is.
132 replies · 129 reposts · 2.3K likes · 104.9K views
Vivo @vivoplt
As a dev, which is the G.O.A.T. code editor?
160 replies · 8 reposts · 307 likes · 36.1K views
Steve the Beaver @beaversteever
incredible that we built all this RAG and vector database stuff and it turns out that grep from 1973 works better than all that
178 replies · 358 reposts · 8.5K likes · 508.7K views
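The joke above lands because "grep" really is just exhaustive substring scanning with no index, embeddings, or ranking model. As a toy illustration of that baseline, here is a minimal grep-style search over a handful of in-memory documents; the documents and helper name are invented for the example.

```python
# Toy grep: scan every line of every document for a literal substring.
# No vector database, no embeddings -- just a linear scan.

def grep(pattern, docs):
    """Return (doc_index, line) pairs for every line containing pattern."""
    hits = []
    for i, doc in enumerate(docs):
        for line in doc.splitlines():
            if pattern in line:
                hits.append((i, line.strip()))
    return hits

docs = [
    "def connect(host):\n    raise TimeoutError('db unreachable')",
    "README: this service talks to the db over TCP",
    "unrelated notes about lunch",
]
hits = grep("db", docs)  # two of the three documents mention "db"
```

The trade-off, of course, is that a literal scan only finds exact substrings and costs O(total text) per query, which is exactly the gap vector retrieval was meant to close.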
Subtain Malik retweeted
François Chollet @fchollet
I gained a lot of respect for Dario for being principled on the issues of mass surveillance and autonomous killbots. Principled leaders are rare these days
60 replies · 155 reposts · 3.1K likes · 74.5K views
Subtain Malik retweeted
Sundar Pichai @sundarpichai
Introducing Nano Banana 2, our best image model yet 🍌🍌 It uses Gemini’s understanding of the world and is powered by real-time information and images from web search. That means it can better reflect real-world conditions in high-fidelity. Check out "Window Seat," a demo using Nano Banana 2’s world understanding to generate more accurate views from any window in the world, pulling live local weather info with 2K/4K specs. The precision is mind blowing. Rolling out today as the new default in the @Geminiapp, Search (across 141 countries), and Flow + available in preview via @GoogleAIStudio and Vertex AI. Also available in Google @Antigravity.
490 replies · 929 reposts · 10.1K likes · 919.7K views
Subtain Malik retweeted
Ben Dicken @BenjDicken
Numbers every engineer should know:
- p50: median request latency
- p99: 1% of requests are slower / 99% are faster
- p99.9: one in a thousand requests is slower
- MAX: the slowest request

p50 is useful for assessing median, across-the-board performance. p99(.9) is for studying long-tail issues. MAX is for investigating the longest-running / analytics requests. This goes for every component of a distributed system, not just databases.
18 replies · 80 reposts · 1.1K likes · 63.8K views
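The percentiles in the list above can be computed directly from a sample of request latencies. This is a minimal sketch using the nearest-rank method over a synthetic latency distribution; real monitoring systems typically use streaming sketches (t-digest, HDR histograms) rather than sorting every sample.

```python
# Nearest-rank percentiles over a synthetic latency sample (milliseconds):
# 950 fast requests, a modest tail, and one pathological outlier.

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value >= p% of the samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

latencies = [10] * 950 + [50] * 40 + [200] * 9 + [5000]  # 1000 requests

p50 = percentile(latencies, 50)     # median: the typical request
p99 = percentile(latencies, 99)     # threshold for the slowest 1%
p999 = percentile(latencies, 99.9)  # one-in-a-thousand tail
mx = max(latencies)                 # absolute worst case
```

Note how p50 (10 ms) says nothing about the 5-second outlier; that is exactly why the tweet pairs the median with p99(.9) and MAX.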
Subtain Malik retweeted
Subtain Malik @iamsbmalik
@Ayda__gol "€180 energy" – are you on a fixed or a variable energy package?
0 replies · 0 reposts · 0 likes · 52 views
Ayda golahmadi @Ayda__gol
My monthly cost of living in the Netherlands 🇳🇱 (with my partner)
🏠 €1,500 rent (aka "permission to live indoors")
🚆 €400 transport (because friends live in other cities)
🏥 €150 mandatory health insurance (the real subscription)
🥗 €800 groceries (for 2 people, still hurts)
☕️ €400 cafés & restaurants (social life budget)
🧾 €90 municipality taxes (the hidden boss fight)
⚡️ €180 energy (gas/electricity: surprise!)
🚲 €25 bike repairs / stolen bike fund
📦 €120 online orders (tiny dopamine deliveries)
🔄 €100 subscriptions (apps I swear I use)
📱 €25 phone
💪 €0 gym (included, flex)
Total: €3,870/month (combined)
Thomas Sanlis 🥐 @T_Zahil

My monthly cost of living in France 🇫🇷
🏠 1,400€ loan for the apartment
🥗 450€ food, organic only, with meat, fish, etc.
📦 300€ average for various expenses (bars, orders, etc.)
⚡ 160€ electricity + gas
📱 45€ phone + internet
🚌 17€ average for transportation
Total: 2,372€/month

232 replies · 191 reposts · 3.9K likes · 1.1M views
Subtain Malik retweeted
World Labs @theworldlabs
World Labs has raised $1 billion in new funding. We are grateful and excited to partner with our investors, including AMD, Autodesk, Emerson Collective, Fidelity Management & Research Company, NVIDIA, and Sea, among others. worldlabs.ai/blog/funding-2…
128 replies · 271 reposts · 2.7K likes · 6.3M views
Subtain Malik retweeted
Arpit Bhayani @arpit_bhayani
Google just released TimesFM (a Time Series Foundation Model) – a 200M-parameter model that can forecast time-series data it has never seen before, with no additional fine-tuning required.

Time-series forecasting is needed everywhere – retail, finance, healthcare, etc. For the longest time, this was the domain of traditional statistical methods. Then deep learning models came along and did better, but they involved long training and validation cycles before you could even test them on new data.

TimesFM changes this. All you need to do is point it at a new dataset, and it gives you a solid forecast immediately – zero-shot.

The architecture is decoder-only, the same idea as GPT. Instead of words, it works with "patches" – groups of contiguous time-points treated as tokens. The model predicts the next patch from all the ones before it.

The model was pre-trained on 100 billion real-world time-points, mostly from Google Trends and Wikipedia Pageviews – which naturally capture a huge variety of patterns across domains.

On benchmarks, zero-shot TimesFM matches PatchTST and DeepAR models that were explicitly trained on those datasets, and even beats GPT-3.5 on forecasting tasks despite being far smaller.

The model is open on Hugging Face and GitHub if you want to try it.
83 replies · 205 reposts · 3.4K likes · 286.5K views
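The "patch" idea described above – cutting a series into fixed-length groups of contiguous time-points and treating each group as one token – can be sketched in a few lines. The patch length of 32 here is purely illustrative and is not TimesFM's actual configuration.

```python
# Illustrative patching of a time series into token-like chunks.
# A decoder-only model would then predict patch k+1 from patches 0..k,
# the same autoregressive setup GPT uses over word tokens.

def to_patches(series, patch_len=32):
    """Split a series into contiguous fixed-length patches; drop the ragged tail."""
    n = len(series) // patch_len
    return [series[i * patch_len:(i + 1) * patch_len] for i in range(n)]

series = list(range(100))     # 100 time-points stand in for real measurements
patches = to_patches(series)  # 3 full patches of 32 points; 4 points dropped
```

Patching is what lets a transformer handle long series cheaply: attention runs over a few hundred patch tokens instead of tens of thousands of raw time-points.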
Subtain Malik retweeted
Google Labs @GoogleLabs
Today, we’re introducing Pomelli’s latest feature update, ‘Photoshoot’.

With Photoshoot, you can start from a single image of your product and easily create high-quality, customized product shots to elevate your marketing. Available free of charge in the US, Canada, Australia & New Zealand!

Get started with Pomelli today at labs.google/pomelli
1.2K replies · 4.7K reposts · 49.9K likes · 24.1M views