Het 👽 @het_bhalani · 2K posts
19 • self-taught AI Engineer • love math • i am a dumb guy, lol :p
India · Joined July 2023
157 Following · 234 Followers
Het 👽 @het_bhalani
It’s balloons day✨ [media]
3 replies · 0 reposts · 7 likes · 36 views
Het 👽 @het_bhalani
This is elon musk [media]
1 reply · 0 reposts · 3 likes · 35 views
Het 👽 @het_bhalani
"If it walks like a duck and quacks like a duck, it's a duck."
0 replies · 0 reposts · 2 likes · 23 views
NILAY1556 @NILAY1556
@het_bhalani Tell me where you are, bro... I'm coming too, there's nothing left here.
1 reply · 0 reposts · 3 likes · 18 views
Het 👽 @het_bhalani
Grass?? Decided to touch sand today! [media]
3 replies · 0 reposts · 8 likes · 56 views
Het 👽 @het_bhalani
@danliu Eyo, add some more slop and ads
0 replies · 0 reposts · 1 like · 22 views
Dan Liu @danliu
nobody is asking the important question about github: what if MICROSOFT built github?? [media]
227 replies · 204 reposts · 5.8K likes · 217.1K views
stevibe @stevibe
How slow does a 128B DENSE model run locally? Qwen3 27B and Gemma 31B are the popular dense models everyone tests. But what happens when you 4x the params? Mistral Medium 3.5 128B, side-by-side on 4x4090 vs 4x5090 vs RTX PRO 6000 vs DGX Spark:
🔴 4x4090: 12.06 tok/s decode, 680 ms TTFT
🟢 4x5090: 19.57 tok/s decode, 572 ms TTFT
🟡 PRO 6000: 18.12 tok/s decode, 538 ms TTFT
🟣 DGX Spark: 2.58 tok/s decode, 2243 ms TTFT
26 replies · 8 reposts · 162 likes · 36K views
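The two figures per machine combine into a rough end-to-end latency estimate: time-to-first-token, plus the remaining tokens at the steady decode rate. A minimal sketch (the 512-token reply length is an assumed example, not from the benchmark):

```python
def generation_time(ttft_ms: float, decode_tok_s: float, n_tokens: int) -> float:
    """Rough wall-clock seconds to generate n_tokens:
    TTFT covers the first token; the rest stream at the decode rate."""
    return ttft_ms / 1000 + (n_tokens - 1) / decode_tok_s

# Figures from the tweet above, for a hypothetical 512-token reply:
for name, ttft, tps in [("4x4090", 680, 12.06), ("4x5090", 572, 19.57),
                        ("PRO 6000", 538, 18.12), ("DGX Spark", 2243, 2.58)]:
    print(f"{name:9s} ~{generation_time(ttft, tps, 512):.0f} s")
```

At these rates the decode term dominates TTFT by an order of magnitude, which is why the DGX Spark's 2.58 tok/s hurts far more than its 2243 ms first token.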
Het 👽 @het_bhalani
@techwith_ram Used Unsloth, LLaMA-Factory, PEFT! The other three I've yet to explore
0 replies · 0 reposts · 0 likes · 55 views
𝗿𝗮𝗺𝗮𝗸𝗿𝘂𝘀𝗵𝗻𝗮— 𝗲/𝗮𝗰𝗰
6 Open-Source Libraries to Fine-Tune LLMs

1. Unsloth
GitHub: github.com/unslothai/unsl…
→ Fastest way to fine-tune LLMs locally
→ Optimized for low VRAM (even laptops)
→ Plug-and-play with Hugging Face models

2. Axolotl
GitHub: github.com/OpenAccess-AI-…
→ Flexible LLM fine-tuning configs
→ Supports LoRA, QLoRA, multi-GPU
→ Great for custom training pipelines

3. TRL (Transformer Reinforcement Learning)
GitHub: github.com/huggingface/trl
→ RLHF, DPO, PPO for LLM alignment
→ Built on Hugging Face ecosystem
→ Essential for post-training optimization

4. DeepSpeed
GitHub: github.com/microsoft/Deep…
→ Train massive models efficiently
→ Memory + speed optimization
→ Industry standard for scaling

5. LLaMA-Factory
GitHub: github.com/hiyouga/LLaMA-…
→ All-in-one fine-tuning UI + CLI
→ Supports multiple models (LLaMA, Qwen, etc.)
→ Beginner-friendly + powerful

6. PEFT
GitHub: github.com/huggingface/pe…
→ Fine-tune with minimal compute
→ LoRA, adapters, prefix tuning
→ Best for cost-efficient training

Save this for future use. [media]
10 replies · 171 reposts · 777 likes · 30.9K views
Het 👽 @het_bhalani
I also want to try Codex & Claude Code, but don’t have money to spend 😕
0 replies · 0 reposts · 5 likes · 41 views
Het 👽 @het_bhalani
Seems like @sama is on vacation
0 replies · 0 reposts · 3 likes · 22 views
Het 👽 @het_bhalani
@soham901x Now u r just finding ways to ghost me anyways
0 replies · 0 reposts · 2 likes · 5 views
Soham @soham901x
@het_bhalani whenever i get placed, will ghost u just to watch u touch more and more sand 😂
1 reply · 0 reposts · 1 like · 12 views
Het 👽 @het_bhalani
@soham901x I know u are going to be placed in a good company and you will refer me
1 reply · 0 reposts · 0 likes · 20 views
Hemant @HemantDotDev
Day 113/150
> Locked in for 3 hrs
> Started building AI Course Builder (day 11)
> clg 9 to 4
#100DaysOfCode
1 reply · 0 reposts · 31 likes · 605 views
Mechanical Knowledge @mechanical_4u
A beautiful piece of engineering in action.
32 replies · 295 reposts · 4.5K likes · 323.8K views
Orihime @Orihime_chan0
Never seen an animation perfectly summarize the sheer comfort of hiding in the corner 😭
61 replies · 8K reposts · 61K likes · 822.5K views
Het 👽 @het_bhalani
I have lost 4 games in a row in chess 🙂
2 replies · 0 reposts · 5 likes · 45 views
Het 👽 @het_bhalani
The more I know, the more I realise how much I don’t know
1 reply · 0 reposts · 3 likes · 34 views