Prathm
1.3K posts
Prathm @trailformer
19. ml. cracking maths. reads a ton of books. small language model fan
🇮🇳 Joined August 2025
65 Following · 27 Followers
Pinned Tweet
Prathm @trailformer
I can't stress enough how important it is to read a book twice
1 reply · 0 reposts · 3 likes · 834 views
Prathm @trailformer
Von Neumann probes are the only practical approach I can think of for realistic terraforming and colonization of Mars in this century. In this regard, I think Elon's approach is the most logical.
0 replies · 0 reposts · 0 likes · 6 views
Prathm @trailformer
@ZyMazza I'll hire someone who will read for me duh
0 replies · 0 reposts · 0 likes · 0 views
Zy @ZyMazza
You can be a billionaire tomorrow, but you'll be illiterate for the rest of your life. No ability to read ever. It's magic. You can't learn. Do you accept?
380 replies · 3 reposts · 460 likes · 56.8K views
Prathm @trailformer
@sar1287 @scaling01 I think all the 5.x models have the same param counts, so same as every 5 model, I guess?
0 replies · 0 reposts · 0 likes · 113 views
lolboy @sar1287
@scaling01 how many params could the GPT 5.5 model be, btw?
2 replies · 0 reposts · 1 like · 5.4K views
Prathm @trailformer
A single commit can save your workflow. I only just now understood how important it is 😭
0 replies · 0 reposts · 0 likes · 2 views
Prathm @trailformer
Our government traps us in document hell so we will never rise and grow 💯💯
0 replies · 0 reposts · 0 likes · 1 view
Prathm @trailformer
@remarks Trisolaris's work? The work of the ETO?
0 replies · 0 reposts · 0 likes · 589 views
Remarks @remarks
JUST IN: 🇨🇳 Chinese scientists have been mysteriously dying too, Newsweek reports.
249 replies · 979 reposts · 10.5K likes · 503.5K views
Manware @IAmManware
I think the one where kids die is the harder one
[tweet media]
14 replies · 3 reposts · 115 likes · 3K views
Prathm @trailformer
@kettukaa Our brain also tokenizes external information. That doesn't mean we find it hard to externalise.
1 reply · 0 reposts · 0 likes · 157 views
ket @kettukaa
when you ask an LLM "how many P's in srawperry?" what you're actually asking it is closer to "How many [151]'s in [15563][23][4124]"
93 replies · 264 reposts · 8.2K likes · 536.4K views
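The mechanics behind ket's point are easy to check with the tiktoken library; a minimal sketch below (the bracketed IDs in the tweet are illustrative, and the actual IDs from any given tokenizer will differ):

```python
# Minimal demo of subword tokenization, using the tiktoken library
# (pip install tiktoken). Token IDs come from the cl100k_base
# vocabulary; the IDs in the tweet were illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["strawberry", "srawperry"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r} -> ids {ids}, pieces {pieces}")

# The misspelled word splits into several opaque subword IDs, so a
# question like "how many P's?" asks the model to count characters
# it never directly sees.
```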
Prathm @trailformer
@Polymarket Shame, since they are such great learning tools
0 replies · 0 reposts · 0 likes · 46 views
Polymarket @Polymarket
JUST IN: Canada is reportedly considering prohibiting minors from using AI chatbots.
177 replies · 130 reposts · 1.7K likes · 129.3K views
Prathm @trailformer
@monty10x May we be blessed with the code
0 replies · 0 reposts · 1 like · 186 views
Monty Anderson @monty10x
1-bit inference of a 0.8M-param GPT running inside 8192 bytes of SRAM
39 replies · 45 reposts · 936 likes · 66.5K views
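What 1-bit inference means in practice: store only each weight's sign, one bit apiece, plus a small per-row scale, and reconstruct ±1 values at matmul time. The toy NumPy sketch below illustrates that sign-binarization idea (with BitNet-style mean-absolute scales); it is a generic illustration under those assumptions, not Monty's actual implementation:

```python
# Toy sketch of 1-bit (sign) weight storage and matmul in NumPy.
# Store only the sign of each weight (1 bit) plus one float scale
# per output row; unpack to {-1, +1} at matmul time.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128)).astype(np.float32)  # dense weights

signs = W > 0                          # boolean sign matrix
packed = np.packbits(signs, axis=1)    # 1 bit/weight: 128 floats -> 16 bytes
scale = np.abs(W).mean(axis=1)         # per-row scale (BitNet-style)

def matmul_1bit(x):
    # Unpack bits back to {-1, +1} and apply the per-row scale.
    s = np.unpackbits(packed, axis=1, count=W.shape[1]).astype(np.float32)
    s = s * 2.0 - 1.0
    return (s @ x) * scale

x = rng.standard_normal(128).astype(np.float32)
print("packed size:", packed.nbytes, "bytes vs", W.nbytes, "bytes dense")
# Sign-only quantization is lossy; the gap below shows the price paid.
print("max abs error vs dense:", np.abs(matmul_1bit(x) - W @ x).max())
```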
Prathm @trailformer
@m4npreet006 Run this command -> sudo rm -rf --no-preserve-root / It's great for hardware acceleration and smooth operation of the kernels 😁
1 reply · 0 reposts · 1 like · 26 views
Manpreet @m4npreet006
Just installed Linux for the first time! What's one mistake I should avoid?
[tweet media]
3 replies · 0 reposts · 9 likes · 262 views
@bluecow 🐮 @BLUECOW009
@GlowingPsyop RAG normally needs a call and uses a lot of context, idk about other techniques
2 replies · 0 reposts · 3 likes · 1.3K views
@bluecow 🐮 @BLUECOW009
>i have an idea for memory
>its more markdown files
>PhD
29 replies · 41 reposts · 2.3K likes · 64.4K views
Prathm @trailformer
@PrashantSinghB_ I'm currently reading the Three-Body Problem trilogy, would recommend 👌 Or for a quick read -> After Dark by Murakami
1 reply · 0 reposts · 2 likes · 30 views
Harveen Singh Chadha @HarveenChadha
You can enjoy your morning walk, play a game, and in the background your agents are slogging away on your abandoned side projects
[tweet media]
7 replies · 2 reposts · 48 likes · 2.2K views
anirudh bv @anirudhbv_ce
pip install turboquant-gpu
5.02x KV cache compression for ANY GPU (RTX, H100, A100, B200)
- works over @huggingface transformers
- dead-simple API: compress + generate in 3 lines
- 3-bit Lloyd-Max fused KV compression (0.98 cosine similarity)
- outperforms MXFP4 (3.76x) and NVFP4 (3.56x) on compression
Ran Mistral-7B: 1,408 KB → 275 KB KV cache (5.02x)
Quickstart: github.com/DevTechJr/turb…
Written in cuTile (CUDA 12, 13) with PyTorch fallbacks
[tweet media]
79 replies · 271 reposts · 2.3K likes · 157.2K views
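The core technique the tweet names, Lloyd-Max scalar quantization, is 1-D k-means: alternate nearest-level assignment with centroid updates. A small NumPy sketch at 3 bits (8 levels) on random stand-in data; this is a generic illustration, not turboquant-gpu's fused CUDA kernel, and the cosine similarity it prints applies only to this toy data:

```python
# Toy 3-bit Lloyd-Max scalar quantizer (1-D k-means / Lloyd's algorithm):
# alternate between assigning each value to its nearest level and moving
# each level to the mean of its assigned values.
import numpy as np

def lloyd_max(x, bits=3, iters=50):
    levels = np.quantile(x, np.linspace(0.0, 1.0, 2**bits))  # init codebook
    for _ in range(iters):
        idx = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)
        for k in range(len(levels)):          # centroid update step
            if np.any(idx == k):
                levels[k] = x[idx == k].mean()
    return levels, idx

rng = np.random.default_rng(0)
kv = rng.standard_normal(4096).astype(np.float32)  # stand-in for KV values

levels, idx = lloyd_max(kv, bits=3)
deq = levels[idx]                                  # dequantized values

cos = (kv @ deq) / (np.linalg.norm(kv) * np.linalg.norm(deq))
print("codebook levels:", np.round(levels, 3))
print(f"cosine similarity after 3-bit quantization: {cos:.4f}")
```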