Debjit B.

7.6K posts


@debjit012

Full-stack developer. Passionate about Laravel, React, React-Native, and TailwindCSS.

WB, India · Joined September 2012
975 Following · 401 Followers
Pinned Tweet
Debjit B.@debjit012·
I created this application to track my job applications and build resumes tailored to job postings. The most recent resume took about 3 minutes to create. My app now has these features: job tracking; analysis and summary of job postings; one resume format; exporting a resume to PDF.
1
0
4
697
Debjit B. retweeted
Dr.Sivaranjini@dr_sivaranjani·
I am pleased to share that I will be speaking at TEDx Hyderabad on April 19, 2026. Join me for a full day of insightful ideas, meaningful conversations, and inspiration. Please use my code SPEAKER-SIVARANJANI-GUEST for a special speaker discount when you register. I look forward to your presence at this special event. Register here - tedxhyderabad.com/register #TEDxHyderabad #TEDx #IdeasWorthSpreading #PublicSpeaking #Inspiration #April2026 #Event #FireUp_
5
63
238
2.3K
Debjit B. retweeted
Swagat Nayak@autocarrrot·
Update on my previous post: I ordered a ₹3 lakh RTX 5090 on @AmazonIN and got a 1.56 kg packet of detergent. Amazon promised a refund to kill the social media buzz, but they're just stalling. Now I've uncovered a massive internal FBA fraud ring. Thread: @AmazonHelp @AmitAgarwal
191
1.5K
6.7K
339.7K
Debjit B. retweeted
ANI@ANI·
#WATCH | Murshidabad, West Bengal: On the night of 03.04.2026 at about 00:45 hrs, approximately 25 (twenty-five) country-made explosives (sutli bombs) were recovered from a nylon bag concealed in bushes adjacent to the Juginda–Amindabad Road. During the preliminary enquiry, it appears that the explosives had been hidden by some unknown person with an ulterior motive. In this connection, Domkol P.S. Case No. 317/26 dated 03.04.2026 under Section 9B of the Indian Explosives Act, 1884 has been registered. Investigation is in progress: SDPO Domkal Subham Bajaj
44
742
2.8K
112.2K
Debjit B. retweeted
#YeThikKarkeDikhao@YTKDIndia·
Someone powerful at the FSSAI has filed an FIR against us for exposing the corrupt practice of direct recruitment in the FSSAI. A badge of honour for raising our voice against corruption.
135
3.2K
12.2K
108.5K
Ahmad@TheAhmadOsman·
MASSIVE: we've got a huge GPU giveaway coming up once last month's is finalized. Not one or two GPUs, multiples!!! Plus potentially full machines, everything brand new as well ;)
76
10
539
20.2K
Debjit B.@debjit012·
@0xSero And it's all done in software! Saving 40% of the server RAM is nuts!
0
0
0
98
0xSero@0xSero·
This is possibly the best we can get until another compression breakthrough pops up for large MoEs.
WEIGHTS: 50% REAP + 3-bit quant == 81.75% compression
KV-CACHE: TurboQuant 4
Basically you can run large MoEs with about 18-20% of the VRAM of the BF16. So for DeepSeek (~671 GB) you will be able to run the weights in 127 GB and a 200k KV-cache in about 20-60 GB of VRAM. Small models provide less savings; I'd say you can comfortably prune 20-25% of the experts and quantise to about 4-8 bits. For 1T-param models I think it's possible to prune down to 40% if you accept lots of knowledge loss and want a pure logic agent; IDK how this would look, though.
My current plan:
- Prune GLM-5 50% (done)
- Quantize to EXL3 with 3 bits if no TurboQuant, or else 4 bits
- Train the new models to respond like the original: PEFT from GLM-5 --> GLM-5-358B-REAP --> REAP-GGUF-3BIT --> REAP-EXL-3BIT
Very few samples were enough to recover an 80% REAP from completely broken to semi-coherent.
23
20
335
18.8K
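The headline figures in the post above can be sanity-checked with back-of-envelope arithmetic. The inputs below (671 GB of BF16 weights, 81.75% compression) are taken directly from the tweet; everything else is simple division, not a measured result.

```python
# Back-of-envelope check of the compression figures quoted in the tweet.
bf16_gb = 671.0          # DeepSeek weights in BF16, per the post
compression = 0.8175     # "50% REAP + 3-bit quant == 81.75% compression"

remaining_gb = bf16_gb * (1 - compression)   # weights left after pruning + quant
vram_fraction = remaining_gb / bf16_gb       # fraction of the BF16 footprint

print(f"weights after compression: ~{remaining_gb:.0f} GB")
print(f"fraction of BF16 VRAM: {vram_fraction * 100:.2f}%")
```

The result, roughly 122 GB at 18.25% of the BF16 footprint, lines up with the tweet's "127 GB for the weights" and "18-20% of the VRAM" claims.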
0xSero@0xSero·
Guys, I can’t believe it. I got another $10,000 donation! We are at $25,000 cash and $50k+ in compute credits. This generosity makes my heart melt. This whole journey has made me so grateful, and it has been humbling.
OpenAgents@OpenAgents

Excited to sponsor @0xSero one full RTX Pro. Keep up the good work! Hope you can help us test your hardware on our Psionic framework in the coming weeks 🙏 (0.14086322 BTC = $10000) blockchain.com/explorer/trans…

64
42
1.6K
48.5K
Debjit B.@debjit012·
@sudoingX Yes, for sure 👍 Using 2 RX 580s, and they still work well with Whisper and small 9B fine-tuned models.
0
0
0
63
Sudo su@sudoingX·
GPUs never retire if you take care of them. This RX 580 8GB has been with me since 2017 and is not even close to done. Just took it apart again to repaste; Bangkok heat dries thermal compound fast. If you own GPUs, maintain them: clean the heatsink, repaste every couple of months under heavy load, check the fans. They last. This is something the fancy-desk Mac crowd will never understand.
141
67
1.3K
58K
Debjit B. retweeted
Punyapal Shah@MrPunyapal·
I've been looking for Laravel/PHP work for almost two months now. IDK if it's AI or something else, but the market seems a lot quieter than ever. Previously I used to get clients the same day I started looking for work. Are you feeling the same?
35
6
124
16.3K
Debjit B.@debjit012·
@0xSero The US may claim every OSS project as "MINE MINE MINE MINE MINE". Bcoz it has 🛢️ in it... 😂😂😂
0
0
0
27
0xSero@0xSero·
We need US open source just like we need Chinese open source
0xSero@0xSero

In 72 hours I got over 100k of value:
1. Lambda gave me $5,000 in compute credits
2. Nvidia offered me 8x H100s on the cloud ($20/h); IDK for how long, but assuming 2 weeks that'd be ~$5,000
3. TNG Technology offered me 2 weeks of B200s, which is something like $12,000 in compute
4. A kind person offered me $100k in GCP credits (enough to train a 27B if you do it right)
5. Framework offered to mail me a desktop computer
6. We got $14,000 in donations, which will go to buying 2x RTX Pro 6000s (bringing me up to 384GB VRAM)
7. I got over 6M impressions, which based on my RPM would be $1,500 on top of my usual ~$500 per pay period
8. I have gained ~17,000 followers, more than doubling my follower count
9. 17 subscribers on X + 700 on YouTube
The total value of all this approaches at minimum ~$50,000, and closer to $150,000 if I leverage it all.

What I'll be doing with all this:
Eric is an incredibly driven researcher I have been bouncing ideas off of over the last month. He and I have been tackling the idea of getting massive models to fit on relatively cheap memory. The idea is taking advantage of different forms of memory, in combination with expert saliency scoring, to offload specific expert groupings to different memory tiers. For the MoEs I've tested over my entire AI session history, about 37.5% of the model is responsible for 95% of token routing. So we can offload 62.5% of an LLM onto SSD/NVMe/CPU/cheap VRAM; this should theoretically add minimal latency if we can select the right experts. We can combine this with paged swapping to further accelerate prompt processing. If done right, we are looking at very decent performance for massive unquantised & unpruned LLMs. You can get DeepSeek-v3.2-speciale at full intelligence with decent tokens/s as long as you have enough VRAM to host the core 20-40% of the model and enough RAM or SSD to host the rest.

Add quantisation to the mix and you can basically have decent speeds and intelligence with just 5-10% of the model's size in VRAM (plus some for context). The funds will be used to push this to its limits.

There's also tons of research showing you can quantise a model drastically, then distill from the original BF16 or make a LoRA to mostly align it back to the original. This will be added to the pipeline too.

All this will be built out here: github.com/0xSero/moe-com… You will be able to take any MoE and shove it in here, and compress it down with only 24GB of VRAM and enough RAM/NVMe. It'll be slow as hell, but it will work with little tinkering.

Lastly, I will be looking into either a full training run from scratch, or just post-training on an open AMERICAN base model:
- a research model
- an openclaw/nanoclaw/hermes model
- a browser-use model
To prove that this can be done. I will be bad at all of it, and doubt I will get beyond the best small models from 6 months ago, but I want to prove to everyone who says otherwise that it's no boogeyman impossible task.

By the end of the year:
1. I will have 1 model I trained in some capacity in the top 5 at either pinchbench, browser-use, or research.
2. My GitHub will have a master repo combining all my work into reusable, generalised scripts to help you do the same.
3. The largest public comparative dataset of MoE quantisations, prunes, benchmarks, costs, and hardware requirements.

A lot of this will be led by Eric, who I will tag in the next post. I want to say thank you to everyone who has supported me. I have gotten a lot of comments stating:
1. I'm crazy, stupid, or both
2. I'm wasting my time; no one cares about this
3. This is not a real issue
I believe the amount of interest and support I've received says it all. donate.sybilsolutions.ai

28
21
476
20K
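The expert-offloading idea in the thread above (keep the ~37.5% of experts that handle 95% of routing in VRAM, push the long tail to RAM/NVMe) can be sketched as a greedy partition by saliency score. Everything below is an illustrative toy: the group sizes, scores, and VRAM budget are made-up numbers, not measurements from any real MoE.

```python
# Toy sketch of saliency-based expert tiering: hottest experts go to VRAM,
# the rest are marked for offload to RAM/NVMe.

def split_by_saliency(expert_gb, saliency, vram_budget_gb):
    """Greedily place expert groups in VRAM by descending routing saliency
    until the budget is full; everything else is offloaded."""
    order = sorted(range(len(saliency)), key=lambda i: -saliency[i])
    vram, offload, used = [], [], 0.0
    for i in order:
        if used + expert_gb[i] <= vram_budget_gb:
            vram.append(i)
            used += expert_gb[i]
        else:
            offload.append(i)
    return vram, offload

sizes = [10.0] * 8                                          # eight equal expert groups (toy)
saliency = [0.30, 0.25, 0.20, 0.10, 0.06, 0.04, 0.03, 0.02]  # placeholder routing shares
hot, cold = split_by_saliency(sizes, saliency, vram_budget_gb=30.0)
print(hot, cold)  # [0, 1, 2] [3, 4, 5, 6, 7]
```

Three of eight equal-size groups staying resident is exactly the 37.5% "hot" fraction the thread cites; a real implementation would derive saliency from observed routing statistics rather than hard-coding it.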
Debjit B.@debjit012·
@sudoingX Using 3x AMD RX 580 4GB variants. Using Vulkan to access the GPUs, as ROCm does not support them. Able to run Qwen 3.5 2B Unsloth fine-tunes on this.
0
0
0
64
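As a rough check that a ~2B-parameter model fits on a 4 GB RX 580 as described in the reply above, here is a back-of-envelope VRAM estimate. The bit-width and overhead figures are illustrative assumptions, not measurements of any specific quantized file.

```python
def vram_estimate_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 0.5) -> float:
    """Approximate VRAM needed: weights at the quantized bit-width,
    plus a flat allowance for KV-cache and compute buffers (assumed)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1e9 params and 8 bits/byte cancel
    return weight_gb + overhead_gb

# A ~2B-parameter model quantized to 4 bits per weight (illustrative):
print(f"~{vram_estimate_gb(2.0, 4.0):.1f} GB")  # ~1.5 GB, comfortably under a 4 GB card
```

By the same estimate a 9B model at 4-bit would need ~5 GB and spill past a single 4 GB card, which is consistent with small models being the practical choice on this hardware.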
Sudo su@sudoingX·
I just became a mod of x/LocalLLaMA. If you're running local models on your own hardware and want in, the community is open, pinned and highlighted on my profile. Approving members starting today. Drop your setup below and I'll get you in: 3060, 3090, 4090, 5090, AMD, whatever you're running. All welcome. If you're hitting issues with Hermes Agent, llama.cpp, model selection, or configs, I'm here. Let's make local AI accessible for everyone.
Sudo su@sudoingX

Let me get you started in local AI and bring you to the edge. If you have a GPU or are thinking about diving into the local LLM rabbit hole, the first thing to do before any setup is join x/LocalLLaMA. This is the community that will help you at every step: post your issue and we will direct you, debug with you, and save you hours of work. Once you're in, follow these three:
@TheAhmadOsman: the oracle. This is where you consume the latest edges in infrastructure and AI. If something dropped, you hear it from him first. His content alone will keep you ahead of most.
@0xsero: a one-man army when it comes to model compression, novel quantization research, and new tools and tricks that make your local setup better. You will learn, experiment, and discover things you didn't know existed.
@Teknium: maker of Hermes Agent, the agent I use every day, from @NousResearch. From Teknium you don't just stay at the frontier, you get your hands on the tools before everyone else. This is where things are headed.
If you follow me, follow these three and join the community; you will be ahead of most people in this space. If you run into wrong configs, get stuck debugging hardware, or can't get a model to load, post there so we can help. Get started with local AI now. Don't just understand the stack; own your cognition. Don't pay OpenAI fees on top of giving them your prompts, your research, and your most valuable thinking to be monitored and metered. Buy a GPU and build your own token factory.

327
43
811
61K
0xSero@0xSero·
1. $5,000 in credits from Lambda
2. $13.6k in funding
3. 5M impressions
4. 15k followers
In 48 hours.
49
35
864
26.6K
Debjit B.@debjit012·
@antigravity is unusable. I'm on a Pro plan with just 1 task on pro-low, and look at this: not even an hourly limit. And what's even funnier is that 120b is billed similarly to Sonnet? Who set these limits? The guy who sells a $9 product for $900?
0
0
2
95
Sudo su@sudoingX·
How much VRAM do you have right now?
202
8
147
22.4K