Deep

14 posts

@deependu__

22 | open source @lightningai | Self taught.

Joined February 2026
40 Following · 0 Followers
Deep @deependu__
@marksaroufim The replies to your post🙌 This is probably what success looks like.🔥
0 replies · 0 reposts · 0 likes · 21 views
Mark Saroufim @marksaroufim
After 5 amazing years, I’m leaving the PyTorch team at Meta. I did my best work there and got to work with some of the smartest, most OSS pilled engineers in the industry. More soon on what’s next: still systems, still OSS (but not everything), a smaller team with a lot of GPUs
[tweet media]
101 replies · 28 reposts · 1.3K likes · 71.1K views
Deep reposted
Lightning AI ⚡️ @LightningAI
Day one of the first @PyTorch Conference Europe is here and we're here in Paris at the community expo. Be sure to connect with our optimization expert, Thomas Viehmann, and senior staff research engineer, @chaton_thomas. Tell us what you're building and what's slowing you down.
[tweet media]
0 replies · 3 reposts · 7 likes · 576 views
Deep @deependu__
What an evening! Connaught Place, New Delhi.
[tweet media] [tweet media]
0 replies · 0 reposts · 1 like · 26 views
Deep reposted
Lightning AI ⚡️ @LightningAI
Fun fact: Students from the top 100+ universities across 30 countries are using Lightning's Academic Tier⚡ Get a 24/7 CPU studio that never shuts off, S3 access for large datasets, and spin up more powerful machines when experiments scale. No queues. No usage caps. No infrastructure setup. Just research. Register with your school email to unlock → go.lightning.ai/3L8dlqC
[tweet media]
1 reply · 5 reposts · 16 likes · 1.4K views
Deep reposted
Lightning AI ⚡️ @LightningAI
Tomorrow, Neptune.ai sunsets its hosted service. Neptune set a high bar for modern experiment tracking. We’re carrying that forward with LitLogger on Lightning AI. Migrate your runs, metrics, and artifacts without losing history. Here’s a short guide to help you move over → go.lightning.ai/49g9tvS
0 replies · 2 reposts · 8 likes · 1.3K views
maharshi @maharshii
first test with nvfp4 gemm on rtx blackwell, seems good even with quant overhead but i think it can go faster.
[tweet media]
5 replies · 3 reposts · 102 likes · 7.8K views
Deep @deependu__
@MainzOnX Word limit. I wanted to say, is there something better I can invest my time into?
1 reply · 0 reposts · 0 likes · 21 views
Deep @deependu__
@MainzOnX Hi @MainzOnX, on a serious note: I’m currently working on adding different distributed training strategies to PyTorch Lightning. Next, I want to start learning about PyTorch transforms, contribute to Lightning Thunder and potentially Luminal AI, and level up. Any issues?
1 reply · 0 reposts · 0 likes · 22 views
Adam Mainz @MainzOnX
If you are still thinking about getting started with ML kernels or any GPU optimizations, the time to get started is over. You are too late; the problems are, of course, already solved. Time to move on to quantum computing before everyone floods it.
2 replies · 0 reposts · 1 like · 122 views
Deep @deependu__
Worked on pocket.0x11.sh and added it to my home screen to use it as an app, but redirected payments are blocked due to RBI guidelines. Would love to have payments in the app itself.
0 replies · 0 reposts · 0 likes · 42 views
Deep @deependu__
@PhonePe I’d love to create categories and, while paying, select which category a payment belongs to. A dashboard to visualize those transactions would help better track spending habits. @_sameernigam @rahulchari9
1 reply · 0 reposts · 0 likes · 47 views
Deep @deependu__
@joefioti This is sick! 👏
0 replies · 0 reposts · 0 likes · 6 views
Joe Fioti @joefioti
ML compilers need good visualizers.
[tweet media]
4 replies · 5 reposts · 60 likes · 2.6K views
Deep reposted
William Falcon ⚡️ @williamfalcon
The local Mac mini craze is interesting... why not just run this on a Lightning Studio, which is a persistent, cloud-hosted environment? That also means you can host hundreds of these in parallel by using many different studios at once.
[tweet media]
Andrej Karpathy @karpathy

Bought a new Mac mini to properly tinker with claws over the weekend. The apple store person told me they are selling like hotcakes and everyone is confused :) I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all. Already seeing reports of exposed instances, RCE vulnerabilities, supply chain poisoning, malicious or compromised skills in the registry, it feels like a complete wild west and a security nightmare.

But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.

Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. I also love their approach to configurability - it's not done via config files it's done via skills! For example, /add-telegram instructs your AI agent how to modify the actual code to integrate Telegram. I haven't come across this yet and it slightly blew my mind earlier today as a new, AI-enabled approach to preventing config mess and if-then-else monsters. Basically - the implied new meta is to write the most maximally forkable repo and then have skills that fork it into any desired more exotic configuration. Very cool.

Anyway there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). There are also cloud-hosted alternatives but tbh I don't love these because it feels much harder to tinker with.
In particular, local setup allows easy connection to home automation gadgets on the local network. And I don't know, there is something aesthetically pleasing about there being a physical device 'possessed' by a little ghost of a personal digital house elf. Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack.

4 replies · 5 reposts · 19 likes · 5K views