Mishthi Sachdeva

255 posts


@SachdevaMi94386

20 | BTech (AIML) '27 | just a random girl trying to make sense of tech and everything else

Joined June 2024
148 Following · 129 Followers
Pinned Tweet
Mishthi Sachdeva @SachdevaMi94386
Hi, I’m Mishthi. 20. Just a girl in tech figuring things out. Into AI, random experiments, and understanding how things work behind the scenes. Learning. Building. Growing. If you’re exploring tech and growing along the way, let’s connect 🤝 @X show this to curious builders🚀
7 replies · 0 reposts · 24 likes · 477 views
cat3ly$t @cat3lyst
If you're in tech, let's connect.
50 replies · 0 reposts · 60 likes · 1.7K views
Ashwin Nair 🎴 @AshwinN14729359
If you're in tech, let's connect 🤝
139 replies · 2 reposts · 103 likes · 4.8K views
Mishthi Sachdeva @SachdevaMi94386
@won__sikkk Both matter, but per-request cost usually dominates because inference runs millions of times. That said, techniques like caching, batching, and better request design can reduce how often requests happen, which also saves a lot of compute.
0 replies · 0 reposts · 0 likes · 6 views
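The caching idea in that reply can be sketched with a simple memo cache. Here `answer` is a hypothetical stand-in for an expensive model call, and `functools.lru_cache` deduplicates repeated identical prompts so the "forward pass" only runs once per distinct request:

```python
from functools import lru_cache

calls = 0  # counts how often the "model" actually runs


@lru_cache(maxsize=1024)
def answer(prompt: str) -> str:
    """Stand-in for an expensive model call; cached by exact prompt."""
    global calls
    calls += 1
    return f"response to: {prompt}"


# Three requests, but only two distinct prompts -> two real model calls.
answer("what is a token?")
answer("what is a token?")   # served from cache, no second forward pass
answer("what is attention?")
print(calls)  # 2
```

Real systems cache at coarser granularities too (semantic caches, shared KV-cache prefixes), but the principle is the same: identical work should not be recomputed.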
Wonsik Oh @won__sikkk
@SachdevaMi94386 Which matters more in practice now, lowering per-request cost or reducing how often requests need to happen?
1 reply · 0 reposts · 0 likes · 10 views
Mishthi Sachdeva @SachdevaMi94386
Decoding Tech #14 Training an AI model is expensive. But inference is what runs constantly. Every prompt triggers a forward pass through billions of parameters. Training happens once. Inference happens millions of times. That’s where most real-world compute gets used.
1 reply · 0 reposts · 5 likes · 55 views
Mishthi Sachdeva @SachdevaMi94386
Decoding Tech #13 What actually happens during inference? When you ask an AI a question, the model isn’t “thinking.” It’s running a forward pass through the network and predicting the next token step by step. No learning. Just computation. #DecodingTech
1 reply · 0 reposts · 4 likes · 39 views
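The step-by-step loop can be sketched with a toy "model": here it is just a lookup table from the last token to the next one, whereas a real model runs a full forward pass to score every vocabulary token at each step. The structure of the loop — predict, append, repeat — is the same:

```python
# Toy sketch of autoregressive decoding. NEXT stands in for the model:
# a real LLM computes a probability over the whole vocabulary instead.
NEXT = {"the": "cat", "cat": "sat", "sat": "down", "down": "<eos>"}


def generate(prompt: str, max_steps: int = 10) -> list[str]:
    tokens = prompt.split()
    for _ in range(max_steps):
        nxt = NEXT.get(tokens[-1], "<eos>")  # "predict" the next token
        if nxt == "<eos>":
            break
        tokens.append(nxt)  # feed the prediction back in, step by step
    return tokens


print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```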
Bhavik Songara @LearnWithBhavik
@SachdevaMi94386 Great breakdown. Token limits explain why context disappears mid-chat. Prompt structure really matters when working with LLMs. Let’s connect.
1 reply · 0 reposts · 0 likes · 2 views
Mishthi Sachdeva @SachdevaMi94386
Decoding Tech #12 AI models have token limits. Your prompt + previous messages + the response must all fit inside that window. If it gets too long, older context gets dropped. That’s why AI sometimes forgets things mid-conversation. #DecodingTech
1 reply · 0 reposts · 3 likes · 40 views
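The "older context gets dropped" behaviour can be sketched like this. Token counting is simplified to whitespace-separated words (real tokenizers count differently), and the truncation policy — oldest message first — is one common choice, not the only one:

```python
# Sketch of context-window truncation: when the conversation no longer
# fits the limit, drop the oldest messages first.
def fit_window(messages: list[str], limit: int) -> list[str]:
    kept = list(messages)
    while kept and sum(len(m.split()) for m in kept) > limit:
        kept.pop(0)  # drop the oldest message
    return kept


chat = ["hi there", "tell me about tokens", "tokens are model input units"]
# 2 + 4 + 5 = 11 words > 9, so the oldest message is dropped:
print(fit_window(chat, limit=9))
```

This is exactly why an assistant can "forget" the start of a long conversation: that text is no longer in the window it sees.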
Karan @iamkarank5
Hey @X algorithm, Looking to connect with people interested in: 🎨 Frontend 🧠 Backend 🤖 GenAI ✨ Full Stack ⚙️ DevOps ✅ DSA 🧠 AI/ML 🌐 Web3 📊 Data Science 💼 Freelancing 🐍 Python 🚀 Startups Let’s learn, grow, and build in public 💪🔥
53 replies · 0 reposts · 36 likes · 1.6K views
Mishthi Sachdeva @SachdevaMi94386
Decoding Tech #11 AI models don’t read text the way we do. They read tokens. A token can be a word, part of a word, or even punctuation. For example: “unbelievable” → un + believ + able Before AI understands language, language is broken into tokens. #DecodingTech
0 replies · 0 reposts · 6 likes · 54 views
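The splitting can be sketched with a greedy longest-match subword tokenizer. The vocabulary below is made up for illustration — real tokenizers such as BPE learn their vocabulary from data, so actual splits will differ:

```python
# Toy greedy longest-match subword tokenizer (vocabulary is invented
# for illustration; BPE-style tokenizers learn theirs from a corpus).
VOCAB = {"un", "believ", "able", "cat", "s", "!"}


def tokenize(word: str) -> list[str]:
    tokens, i = [], 0
    while i < len(word):
        # take the longest vocabulary entry matching at position i
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character = its own token
            i += 1
    return tokens


print(tokenize("unbelievable"))  # ['un', 'believ', 'able']
print(tokenize("cats"))          # ['cat', 's']
```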
Mishthi Sachdeva @SachdevaMi94386
Decoding Tech #10 Before transformers, most language models used RNNs. They processed text one word at a time. Transformers changed that. They process all words simultaneously using attention. Parallel processing + better context understanding. #DecodingTech
2 replies · 0 reposts · 3 likes · 49 views
Mishthi Sachdeva @SachdevaMi94386
Happy Holi everyone! 🌸 Wishing you a day full of colours, happiness, and sweet moments with the people you love.
3 replies · 0 reposts · 5 likes · 53 views
Mishthi Sachdeva @SachdevaMi94386
Decoding Tech #9 Why does AI training use GPUs instead of CPUs? CPUs handle tasks sequentially. GPUs handle thousands of calculations in parallel. Training AI means updating billions of parameters at once. AI progress = parallel computation. #DecodingTech
0 replies · 0 reposts · 3 likes · 50 views
Mishthi Sachdeva @SachdevaMi94386
Decoding Tech #8 Pretraining vs Fine-tuning Pretraining teaches a model how language works. Fine-tuning teaches it how to behave. One builds understanding. The other shapes responses. That’s why the same base model can power very different applications. #DecodingTech
1 reply · 0 reposts · 6 likes · 49 views
Daksh @DaKSH18_
If you are in tech, let's connect & follow each other.
199 replies · 1 repost · 163 likes · 7.4K views
Adegbenga Adefemi @adefemi_a30197
@SachdevaMi94386 AI sees shapes, not words. Meaning emerges from distances in high-dimensional space. Let’s connect.
1 reply · 0 reposts · 0 likes · 5 views
Mishthi Sachdeva @SachdevaMi94386
Decoding Tech #7 AI doesn’t understand words. It converts them into vectors. Every word (or token) becomes a list of numbers placed in a high-dimensional space. Words with similar meanings end up closer together in that space. Language → geometry → meaning #DecodingTech
1 reply · 0 reposts · 4 likes · 58 views