Khera Shanu

1.2K posts

@kherashanu

13 yrs exp - Backend, Data, and AI Engineer | ADHD and Dyslexic | Loves to teach Maths, Physics, and Comp Sci.

Traveling the world! Joined July 2011
160 Following · 1K Followers
Pinned Tweet
Khera Shanu @kherashanu
#proudteacher P.S. It was C + DSA. I do not know C++ 🙂 - 1/2
[image attached]
3 replies · 0 reposts · 16 likes · 0 views
Khera Shanu @kherashanu
@elonmusk If you care so much about open source, why are xAI's LLMs and ecosystem closed source?
0 replies · 0 reposts · 0 likes · 11 views
Khera Shanu @kherashanu
@techwith_ram + Observability that isn't garbage: token-level latency breakdowns, per-layer GPU utilization, KV cache hit rates, prompt injection attempt logging, hallucination root-cause attribution (which layer/token went rogue), etc. …
1 reply · 0 reposts · 4 likes · 364 views
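[Editor's sketch] The per-token observability described above can be outlined in a few lines. Everything here is illustrative: the class and method names are hypothetical, not any real library's API; it only shows the bookkeeping shape of inter-token latency and KV-cache hit-rate tracking for a streamed response.

```python
import time
from collections import defaultdict

class TokenLatencyTracker:
    """Toy per-token observability: inter-token latency + KV-cache hit rate.

    Hypothetical names; a real stack would hook this into the serving loop.
    """

    def __init__(self):
        self.latencies_ms = []          # gap between consecutive tokens
        self.cache = defaultdict(int)   # "hit" / "miss" counters
        self._last = None

    def on_token(self, cache_hit):
        """Call once per emitted token of a streamed response."""
        now = time.perf_counter()
        if self._last is not None:
            self.latencies_ms.append((now - self._last) * 1000.0)
        self._last = now
        self.cache["hit" if cache_hit else "miss"] += 1

    def kv_hit_rate(self):
        total = self.cache["hit"] + self.cache["miss"]
        return self.cache["hit"] / total if total else 0.0

    def p95_ms(self):
        """Nearest-rank p95 of inter-token latency (0.0 if no gaps yet)."""
        if not self.latencies_ms:
            return 0.0
        s = sorted(self.latencies_ms)
        return s[int(0.95 * (len(s) - 1))]
```

Per-layer GPU utilization and hallucination attribution need profiler and model internals and are out of scope for a sketch like this.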
ramakrushna - e/acc
AI / ML Engineer in 2026, please learn:

One ML stack deeply: PyTorch or JAX. Not just .fit(), but GPU memory, kernels, mixed precision, profiling, and why your model OOMs at 3am.
Data: where it comes from, how it lies, how it drifts, how labels break, how leakage sneaks in, and why 80% of model failures are upstream.
Statistics: bias vs variance, confidence intervals, calibration, distribution shift, and why "95% accuracy" is often meaningless.
Loss functions: what you are actually optimizing, how it shapes behavior, and how bad losses silently create bad products.
Evaluation: real-world metrics, not Kaggle ones. Offline vs online. Regression tests for models. When numbers lie.
Training: distributed GPUs, gradient accumulation, checkpointing, reproducibility, and how to not lose a 3-day run to one crash.
LLMs: tokenization, attention, context limits, KV cache, LoRA vs fine-tuning vs RAG, and where hallucinations are born.
Inference: batching, quantization, vLLM, streaming, cold starts, GPU vs CPU, and why serving is harder than training.
Retrieval: embeddings, chunking, hybrid search, reranking, grounding, and why most RAG systems fail quietly.
Pipelines: feature stores, offline vs online data, backfills, late events, schema evolution, and broken joins.
Monitoring: drift, outliers, token spend, latency, hallucination rate, and silent quality decay.
Optimization: distillation, pruning, caching, prompt compression, and how to make models affordable.
Agents: tool calling, memory, retries, failure modes, and why autonomous systems are chaos engines.
Security: prompt injection, data exfiltration, training data leaks, and tool misuse.
Deployment: model versioning, shadow runs, canaries, rollbacks, and killing bad models fast.
Distributed systems: queues, retries, idempotency, backpressure, and partial failures. ML is just distributed systems with gradients.
Documentation: model cards, data contracts, eval reports, and written tradeoffs.

Pick one stack. Build real systems. Break them. Fix them. If I missed something, add it in the comment section.
40 replies · 307 reposts · 2.4K likes · 115.9K views
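[Editor's sketch] One item from the list above, "why '95% accuracy' is often meaningless", is easy to demonstrate concretely. The labels and the degenerate always-negative classifier below are made up for illustration:

```python
# On a dataset where 95% of labels are negative, a model that always
# predicts "negative" scores 95% accuracy while catching zero positives.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

y_true = [1] * 5 + [0] * 95   # 5% positive class
y_pred = [0] * 100            # degenerate "always negative" model

assert accuracy(y_true, y_pred) == 0.95  # looks great on a dashboard
assert recall(y_true, y_pred) == 0.0     # useless on the class that matters
```

This is why the list pairs accuracy with calibration, distribution shift, and real-world metrics: a single headline number hides the failure mode entirely.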
Khera Shanu @kherashanu
@cailynyongyong Looks like authors citing other authors in research papers, maybe; the center dot could be something like an "attention is all you need" type of work.
0 replies · 0 reposts · 0 likes · 48 views
Cailyn Y. @cailynyongyong
guess what this is
[image attached]
222 replies · 7 reposts · 292 likes · 31K views
Michael Morgan @Mmorgan_ML
@krishdotdev Dude, I went back and read a 64-page research paper I wrote in undergrad on the articulatory phonetics of Kansai dialects of Japanese, and I thought, "Damn, how did I used to be this smart? These days I lose my coffee mug in the microwave..."
1 reply · 0 reposts · 28 likes · 3.7K views
Kr$na @krishdotdev
I still think about the version of me who could solve these in one sitting.
[image attached]
422 replies · 3K reposts · 30K likes · 907K views
Aditya @adityadotdev
No docs, no YouTube, no ChatGPT, no StackOverflow… How did they even learn to code back then?
[image attached]
420 replies · 106 reposts · 3.5K likes · 166.7K views
Khera Shanu @kherashanu
Implementing a SOCKS5 proxy to make an EC2 instance act as a proxy server; just got to learn in detail about the `splice` syscall, and I love it! man7.org/linux/man-page…
0 replies · 0 reposts · 0 likes · 64 views
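[Editor's sketch] What makes `splice` attractive in a proxy relay loop is that it moves bytes between file descriptors inside the kernel, skipping the copy into userspace. The helper below is hypothetical glue code, not the actual proxy: `os.splice` exists only on Linux with Python 3.10+, and the syscall requires at least one pipe-backed fd, so the sketch falls back to a plain read/write copy everywhere else.

```python
import os

def copy_bytes(src_fd, dst_fd, n):
    """Move up to n bytes from src_fd to dst_fd.

    Uses zero-copy splice(2) when available (Linux, Python 3.10+); this is
    the cheap path a relay loop wants. Falls back to an ordinary
    read-into-userspace-then-write copy elsewhere, or when splice rejects
    the fd pair (e.g. neither side is a pipe).
    """
    if hasattr(os, "splice"):
        try:
            return os.splice(src_fd, dst_fd, n)
        except OSError:
            pass  # unsupported fd combination; fall through to plain copy
    data = os.read(src_fd, n)
    os.write(dst_fd, data)
    return len(data)
```

A real SOCKS5 relay would call something like this in both directions until EOF; error handling, partial writes, and non-blocking fds are omitted here.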
Khera Shanu @kherashanu
@arpit_bhayani Even the impact on mental health: so many engineers working with me feel more depressed after agentic IDEs, as they miss the joy of coding. I am sure it's not just programmers; anywhere AI takes away the joy of accomplishing your work, it will have a similar impact.
0 replies · 0 reposts · 0 likes · 23 views
Arpit Bhayani @arpit_bhayani
Soon, we will all lose the ability to think, learn, imagine, ideate, analyze, reason, decide, and even solve problems without AI. Cognitive decline is real, and needless to say, even this post was written with assistance from AI.
185 replies · 160 reposts · 2.4K likes · 83.7K views
Knowledge Bank @xKnowledgeBANK
What's for Mathematics?
[image attached]
765 replies · 64 reposts · 777 likes · 123.4K views
Khera Shanu @kherashanu
@yacineMTB Such a big statement without any evidence; you definitely know how the X algo works!
0 replies · 0 reposts · 0 likes · 9 views
kache @yacineMTB
Anthropic is so, so over. Just completely defeated. Really sad, so many great people there. My advice: go to DeepMind before it's too late.
366 replies · 48 reposts · 2.7K likes · 726.2K views
Khera Shanu @kherashanu
@arpit_bhayani haha, and a life that feels too short to do even a fraction of it, so true!
0 replies · 0 reposts · 0 likes · 6 views
Arpit Bhayani @arpit_bhayani
if you are a curious engineer, it is so difficult not to have fomo. every other domain seems interesting, every other book seems worth reading, every other problem seems worth solving, every other project looks like the next big thing, every other framework feels like something you should learn. literally every single thing is a distraction. how do you even deal with that?
173 replies · 169 reposts · 2.7K likes · 130.8K views
Khera Shanu @kherashanu
@unclebobmartin *to anyone - Alan Turing was just 27 when hired by the British government. Mark Zuckerberg was 19 (or 20) when he started FB.
0 replies · 0 reposts · 0 likes · 24 views
Khera Shanu @kherashanu
@unclebobmartin Why can't a 24-year-old be an expert? That's just ageism ... there are real geniuses in the world; no company wants to give away free money!
2 replies · 0 reposts · 0 likes · 1.6K views
Uncle Bob Martin @unclebobmartin
Remember the internet .com bubble? During the bubble, companies hired programmers with reckless abandon in a futile attempt to get ahead of the curve. When the bubble popped, they found they had over-invested in programmers and had to freeze hiring and lay them off. That lasted for a year or so.

Now we are in the AI bubble. That it _is_ a bubble is evident from the "stupid money" being paid as recruiting bonuses to AI "experts". (e.g. Meta agreed to pay a 24-year-old "expert" $250M.) The effect of this bubble is a hiring freeze for programmers. Executives hope/fear that AI will allow them to lay off lots of programmers, so they don't dare hire any right now.

When this bubble pops (and it will, probably within a year or so), these companies will find that they are _under_-invested in programmers and will have to start hiring them with reckless abandon. History repeats itself; but sometimes in the mirror.
75 replies · 215 reposts · 1.7K likes · 164.7K views
Khera Shanu @kherashanu
@mitsuhiko Are people you don't know personally eligible for this offer?
0 replies · 0 reposts · 0 likes · 47 views
Armin Ronacher ⇌ @mitsuhiko
I want to record a video with me and another programmer who is not getting value out of AI. Goal is to have a good discussion about our approaches and learnings. Anyone interested?
30 replies · 10 reposts · 182 likes · 32.6K views
FlipkartSupport @flipkartsupport
@kherashanu Please be assured that we are already in touch with the customer to get this sorted. To ensure that your Flipkart account information is safe, please send us a private message by clicking on the link below. fkrt.it/c5ZBmZNN
1 reply · 0 reposts · 0 likes · 12 views
Khera Shanu @kherashanu
@ilanbigio Can you please give the downloadable link to your PPT?
0 replies · 0 reposts · 0 likes · 10 views
ilan bigio @ilanbigio
you usually don't need fine-tuning. but when you're pushing the boundaries of performance and efficiency, you do

🧠 RFT - train reasoning models on SotA tasks
⚖️ DPO - a/b optimization, tone matching
⚡️ SFT - distillation, formatting, efficiency

when to fine-tune w/ @openai? 👇 check out my full @aiDotEngineer workshop on everything fine-tuning, where i dive into each approach, some anecdotes from fine-tuning with customers at @openai, and hands-on examples. many thanks to @swyx and team! (and @TSautory for awesome RFT resources)
[image attached]
7 replies · 11 reposts · 72 likes · 9.4K views
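[Editor's sketch] Of the three approaches named above, DPO is the easiest to show as a formula: push up the policy's margin on the chosen response over the rejected one, relative to a frozen reference model. The scalar toy below uses a hypothetical function name and sequence-level log-probs instead of the batched token log-probs a real PyTorch implementation would use.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Toy scalar DPO objective: -log(sigmoid(beta * margin)).

    Inputs are sequence log-probs under the policy (pi_*) and a frozen
    reference model (ref_*); beta scales how hard the chosen/rejected
    margin is pushed. Illustrative only, not a training-ready loss.
    """
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With zero margin the loss is log 2; as the policy prefers the chosen response more than the reference does, the loss drops toward zero, which is the a/b-preference shaping the tweet refers to.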
Odin AI @GetOdinAI
We're Hiring. AI is evolving, and so are we. Odin AI is looking for a Full Stack Lead to help build the backbone of our next-gen AI-powered systems. This isn't just another development role; it's a chance to be part of a team that's redefining Agentic AI for enterprises.

Who we're looking for:
✔️ A problem-solver who thrives in fast-paced environments
✔️ An expert in front-end and back-end technologies
✔️ Someone ready to lead and make an impact

📩 Ready to build with us? Send your resume to: hr@getodin.ai
Know someone who'd be a great fit? Tag them below! Let's shape the future together.
[image attached]
1 reply · 1 repost · 3 likes · 109 views