Search Results: "#decodingtech"

20 results
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #13 What actually happens during inference? When you ask an AI a question, the model isn’t “thinking.” It’s running a forward pass through the network and predicting the next token step by step. No learning. Just computation. #DecodingTech
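The step-by-step prediction loop described above can be sketched in Python. The `toy_model` lookup table is a purely hypothetical stand-in for a real network's forward pass; the point is the shape of the loop: one forward pass per token, no weight updates.

```python
# Toy autoregressive decoding loop: one "forward pass" per generated token.
# toy_model is a hypothetical stand-in for a real network; inference only
# reads the model, it never updates it (no learning happens here).
def toy_model(context):
    table = {
        ("the",): "cat",
        ("the", "cat"): "sat",
        ("the", "cat", "sat"): "<eos>",
    }
    return table.get(tuple(context), "<eos>")

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = toy_model(tokens)   # forward pass: predict the next token
        if nxt == "<eos>":        # stop token ends generation
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat']
```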
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #12 AI models have token limits. Your prompt + previous messages + the response must all fit inside that window. If it gets too long, older context gets dropped. That’s why AI sometimes forgets things mid-conversation. #DecodingTech
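A minimal sketch of that "older context gets dropped" behaviour, using whitespace-separated words as stand-in tokens (real tokenizers count differently):

```python
# Sketch of a context window: drop the oldest messages until everything
# fits the token budget. Words stand in for tokens here.
def fit_context(messages, budget):
    kept = list(messages)
    while kept and sum(len(m.split()) for m in kept) > budget:
        kept.pop(0)  # oldest context is the first to go
    return kept

history = ["system prompt here", "user question one",
           "assistant reply", "user question two"]
print(fit_context(history, budget=6))  # the two oldest messages are dropped
```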
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #11 AI models don’t read text the way we do. They read tokens. A token can be a word, part of a word, or even punctuation. For example: “unbelievable” → un + believ + able Before AI understands language, language is broken into tokens. #DecodingTech
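The "unbelievable" example can be reproduced with a greedy longest-match splitter, a rough sketch of how BPE-style tokenizers work (the tiny vocabulary below is hypothetical):

```python
# Greedy longest-match subword tokenizer (a rough sketch of BPE-style
# splitting; real tokenizers learn their vocabulary from data).
def tokenize(word, vocab):
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])         # unknown character: keep as-is
            i += 1
    return tokens

print(tokenize("unbelievable", {"un", "believ", "able"}))  # ['un', 'believ', 'able']
```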
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #10 Before transformers, most language models used RNNs. They processed text one word at a time. Transformers changed that. They process all words simultaneously using attention. Parallel processing + better context understanding. #DecodingTech
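The sequential-vs-parallel contrast can be caricatured in a few lines. Both functions below are toys (the arithmetic is made up); the point is the data dependency: the RNN-style loop cannot run its steps at the same time, while the transformer-style pass computes every position from the whole input at once.

```python
# Sequential (RNN-style): each step needs the previous hidden state,
# so positions cannot be processed simultaneously.
def sequential_pass(xs):
    h, hidden = 0.0, []
    for x in xs:              # h_t depends on h_{t-1}: inherently serial
        h = 0.5 * h + x
        hidden.append(h)
    return hidden

# Parallel (transformer-style, very loosely): every position is computed
# from the whole input at once, with no dependence on a previous step.
def parallel_pass(xs):
    mean = sum(xs) / len(xs)  # stand-in for "look at all positions"
    return [x + mean for x in xs]

print(sequential_pass([1.0, 1.0]))  # [1.0, 1.5]
print(parallel_pass([1.0, 1.0]))    # [2.0, 2.0]
```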
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #9 Why does AI training use GPUs instead of CPUs? CPUs handle tasks sequentially. GPUs handle thousands of calculations in parallel. Training AI means updating billions of parameters at once. AI progress = parallel computation. #DecodingTech
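The shape of a parameter update, with made-up toy numbers: every weight gets its own small, independent adjustment, which is exactly the kind of elementwise work GPUs run in parallel across billions of values.

```python
# One training-step update: each parameter moves a little against its
# gradient. The updates are independent of each other, so on a GPU they
# all happen at once. (Toy numbers; a real model has billions.)
weights = [0.20, -0.10, 0.40]
grads = [0.05, 0.02, -0.01]
lr = 0.1
weights = [w - lr * g for w, g in zip(weights, grads)]  # elementwise, parallelizable
print(weights)
```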
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #8 Pretraining vs Fine-tuning Pretraining teaches a model how language works. Fine-tuning teaches it how to behave. One builds understanding. The other shapes responses. That’s why the same base model can power very different applications. #DecodingTech
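A crude illustration of the two phases, using a word-count table as the "model": pretraining sees broad data, and fine-tuning nudges the same parameters toward task data. Purely illustrative, not a real training recipe.

```python
from collections import Counter

# Phase 1: learn general statistics of language from a broad corpus.
def pretrain(corpus):
    return Counter(w for line in corpus for w in line.split())

# Phase 2: nudge the SAME parameters toward a desired behaviour.
def fine_tune(base, task_examples, boost=5):
    tuned = base.copy()
    for line in task_examples:
        for w in line.split():
            tuned[w] += boost  # hypothetical heavier weighting of task data
    return tuned

base = pretrain(["the cat sat", "the dog ran"])
tuned = fine_tune(base, ["cat"])
print(base["cat"], tuned["cat"])  # 1 6
```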
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #7 AI doesn’t understand words. It converts them into vectors. Every word (or token) becomes a list of numbers placed in a high-dimensional space. Words with similar meanings end up closer together in that space. Language → geometry → meaning #DecodingTech
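"Closer together in that space" is usually measured with cosine similarity. The 3-d vectors below are hand-picked hypothetical values (real embeddings have hundreds or thousands of dimensions and are learned, not chosen):

```python
import math

# Toy 3-d "embeddings": similar meanings sit closer together,
# measured here with cosine similarity.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(cosine(emb["king"], emb["queen"]))  # close to 1: similar
print(cosine(emb["king"], emb["apple"]))  # much smaller: dissimilar
```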
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #6 How does an AI actually learn from its mistakes? Through backpropagation. After making a prediction, the model calculates how wrong it was. Then it sends that error backward through the network, adjusting weights slightly at each layer. #DecodingTech
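The smallest possible version of that loop, for a single weight and a squared-error loss: predict, measure the error, push it backward as a gradient, adjust slightly, repeat.

```python
# Smallest possible backprop sketch: one weight, one input.
w = 0.0
x, target = 2.0, 6.0   # we want w * x == 6, so w should approach 3
lr = 0.1
for _ in range(50):
    pred = w * x           # forward pass
    error = pred - target  # how wrong was the prediction?
    grad = error * x       # error sent "backward" through the multiply
    w -= lr * grad         # small weight adjustment
print(w)  # converges to ~3.0
```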
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #5 When an AI gives different answers to the same question, it’s not confused. Most models sample from probability distributions. Same input. Slightly different random sampling. Different output. AI is probabilistic, not deterministic. #DecodingTech
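Sampling from a probability distribution in a few lines; the probabilities below are made up for illustration. Same input distribution, different random draws, possibly different outputs:

```python
import random

# Sample the next token from a probability distribution. The same
# distribution can yield different tokens on different draws.
probs = {"cat": 0.6, "dog": 0.3, "hat": 0.1}  # hypothetical model output

def sample(probs, seed=None):
    rng = random.Random(seed)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights)[0]

print(sample(probs, seed=1))
print(sample(probs, seed=2))  # may differ from the line above
```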
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #4 A neural network doesn’t “think.” Each neuron just: • Multiplies inputs by weights • Adds them • Applies an activation function Stack millions of these tiny math operations together, and you get intelligence. AI is layered linear algebra. #DecodingTech
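The three bullet points above, as one function (using a sigmoid as the activation; real networks use various activations):

```python
import math

# One neuron: multiply inputs by weights, add them (plus a bias),
# apply an activation function.
def neuron(inputs, weights, bias=0.0):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

print(neuron([1.0, 2.0], [0.5, -0.25]))  # sigmoid(0) = 0.5
```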
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #3 When you adjust “temperature” in an AI model, you’re not changing intelligence. You’re changing randomness. Low temperature → safer, predictable answers. High temperature → more creative, more risky outputs. Same model. Different behaviour. #DecodingTech
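Temperature is just a divisor applied to the logits before softmax. With the made-up logits below, a low temperature sharpens the distribution (safer, predictable) and a high one flattens it (more random):

```python
import math

# Softmax with temperature: divide logits by t before normalizing.
def softmax_t(logits, t):
    scaled = [l / t for l in logits]
    m = max(scaled)                        # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                   # hypothetical model scores
print(softmax_t(logits, 0.5))  # sharp: top option dominates
print(softmax_t(logits, 2.0))  # flat: more randomness in sampling
```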
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #2 When people say a model has 7B or 70B parameters… Those aren’t facts stored inside it. They’re adjustable weights. Each parameter slightly influences how the model reacts to input. More parameters = more capacity, not more knowledge. #DecodingTech
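Where those parameter counts come from can be sketched by counting the weights of a dense layer. The 768 → 3072 → 768 shape below is a hypothetical example (a common transformer MLP-block shape), not any specific model:

```python
# Parameters are just the weights (and biases) of each layer. A dense
# layer from n inputs to m outputs holds n*m weights + m biases.
def layer_params(n_in, n_out):
    return n_in * n_out + n_out

# Hypothetical two-layer block: 768 -> 3072 -> 768
total = layer_params(768, 3072) + layer_params(3072, 768)
print(total)  # 4722432 adjustable numbers -- none of them a stored "fact"
```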
Mishthi Sachdeva@SachdevaMi94386·
Decoding Tech #1 Why are modern AI models so good at language? They don’t read word by word. Transformers use "attention": they look at all words at once and assign importance to each one. That shift made context understanding dramatically better. #DecodingTech
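A minimal sketch of scaled dot-product attention, the mechanism named above: every position scores every other position at once, then takes an importance-weighted average of the values. This is the bare mechanism, not a full transformer layer (no learned projection matrices here).

```python
import math

# Minimal scaled dot-product attention over toy 2-d vectors.
def attention(queries, keys, values):
    d = len(queries[0])
    out = []
    for q in queries:                        # every position sees all keys
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                      # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]  # "importance" of each position
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

out = attention([[1.0, 0.0], [0.0, 1.0]],   # queries
                [[1.0, 0.0], [0.0, 1.0]],   # keys
                [[1.0, 0.0], [0.0, 1.0]])   # values
print(out)  # each position leans toward its matching key
```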
Computer History Museum@ComputerHistory·
Introducing Decoding Tech, our new podcast 🎧 Explore the past, present & future of tech with leading experts & pioneers. Now, CHM Live’s best talks are available in audio-only format.  Listen on Spotify, Apple Podcasts & more! #CHM #TechPodcast #DecodingTech