muler
@codeccpp
331 posts
Joined May 2025
29 Following · 6 Followers
muler retweeted
santi
santi@santtiagom_·
how to scale an app from zero to millions of users
santi tweet media
English
18
99
1.5K
78.8K
muler retweeted
Mehmet Songur
Mehmet Songur@mehmetsongur_·
Don Sheehy's Data Structures with Python notes teach both algorithmic reasoning and clean code writing at the same time. PDF 👇
Mehmet Songur tweet media
Turkish
1
9
54
2.6K
muler retweeted
Razia Aliani
Razia Aliani@RaziaAliani·
Google just dropped 145 pages documenting how researchers use Gemini to tackle scientific problems. Save & Retweet (to help your network)

A few things that stood out to me (in simple terms):

- In one case, the AI was used as an adversarial reviewer and caught a serious flaw in a cryptography proof that had passed human review. That's a very different use than "summarise this PDF."
- The model links tools from very different fields (for example, using theorems from geometry/measure theory to make progress on algorithms questions). This is where its wide reading really matters.
- They don't let the model run wild. Humans still choose the problems, check every proof, and decide what's actually new. The model is there to suggest ideas, spot gaps, and do the heavy algebra.
- Agentic loops, not just chat. In some projects, they plug Gemini into a loop where it:
  -- proposes a mathematical expression,
  -- writes code to test it,
  -- reads the error messages, and
  -- fixes itself.
  (humans only step in when something promising appears)

We are moving past the era of simple chat prompts and into a more sophisticated era of research.

⮑ If your institution is interested in hosting an AI session or a workshop, request your training here: forms.gle/dbRtc7j2W4zZyL…
Razia Aliani tweet media
English
10
110
389
26.1K
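The propose → test → read errors → fix loop described in the tweet above can be sketched in a few lines of Python. This is a minimal, self-contained toy, not Gemini's actual API: `ask_model` is a hypothetical stand-in that here always returns one fixed candidate, and the task (the sum-of-squares identity) is invented for illustration.

```python
# Toy sketch of a propose-test-fix agentic loop.
# `ask_model` is a hypothetical stand-in for a real LLM API call.

def ask_model(prompt):
    """Hypothetical LLM call; returns a candidate Python expression in n."""
    # A real implementation would send `prompt` to an LLM here.
    return "sum(k * k for k in range(1, n + 1))"

def agentic_loop(task, check, max_rounds=5):
    """Propose an expression, run it, feed errors back, repeat."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = ask_model(task + feedback)
        try:
            fn = eval("lambda n: " + candidate)  # compile the proposal
            if all(check(n, fn(n)) for n in range(1, 10)):
                return candidate  # promising: hand off to a human
            feedback = "\nPrevious attempt gave wrong values."
        except Exception as exc:  # read the error message, try again
            feedback = "\nPrevious attempt raised: " + str(exc)
    return None

# Toy task: find an expression for 1^2 + ... + n^2, checked
# against the known closed form n(n+1)(2n+1)/6.
result = agentic_loop(
    "Propose a Python expression in n for 1^2 + ... + n^2.",
    check=lambda n, v: v == n * (n + 1) * (2 * n + 1) // 6,
)
print(result)
```

The key design point is that the loop only ever *verifies* candidates mechanically; as the tweet notes, a human still judges whether a surviving candidate is actually new.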
muler retweeted
Spor Rehberi
Spor Rehberi@sagIikIiyasam·
Ab workout at home
Turkish
0
205
1.4K
79K
muler retweeted
Shraddha Bharuka
Shraddha Bharuka@BharukaShraddha·
Millions of people use ChatGPT, Claude, and Gemini every day. But almost nobody understands what actually happens between hitting Enter and seeing words appear on the screen. So I'm condensing the entire pipeline into one clean visual 👇

Here's the breakdown:

→ Tokenizer
Your input isn't processed as words. It's split into tokens. "gravity" → ["grav", "ity"]. Each token → a numeric ID. That's why LLMs struggle with letters — they never truly "see" them.

→ Embedding Layer
Every token becomes a high-dimensional vector (e.g. 4096 dims). This is where meaning begins: similar words → closer in vector space.

→ Transformer Blocks (×N)
The real intelligence lives here.
• Self-attention (Q, K, V) → tokens "look" at each other
• FFN → processes each position
• Repeated dozens of times
This is how context is built.

→ KV Cache
The most important optimization. Instead of recomputing everything, the model stores past attention keys and values (K, V).
⚡ Faster generation
⚠️ But memory grows with sequence length → this is the real bottleneck.

→ Sampling Strategy
The model doesn't pick words. It outputs probabilities. How you sample = how it behaves:
• Greedy → predictable
• Top-K → limited randomness
• Top-P → balanced
• Temperature → creativity control
Same model. Completely different outputs.

→ Speculative Decoding
A hidden speed hack.
• Small model predicts ahead
• Large model verifies in parallel
✅ If correct → multiple tokens generated instantly
This is how responses feel fast.

→ Detokenizer + Streaming
Token IDs are converted back to text. That "typing" effect you see? It's not UI animation. It's literally token-by-token generation.

⚠️ Two things most people miss:
→ Prefill phase = compute-heavy
→ Decode phase = memory-heavy
Different problems. Different optimizations. This is why AI inference is still expensive.

I've studied these systems deeply… and honestly, the more you learn, the crazier it gets. It's not just AI. It's engineering at insane scale.

Which part of this pipeline surprised you the most? 👇
Shraddha Bharuka tweet media
English
14
171
669
22.5K
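The sampling strategies listed in the pipeline tweet above differ only in how they turn the model's probability vector into a single token choice. A minimal plain-Python sketch, using made-up toy logits rather than a real model:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; temperature < 1 sharpens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def greedy(logits):
    """Greedy decoding: always pick the highest-scoring token."""
    return max(range(len(logits)), key=lambda i: logits[i])

def top_k(logits, k=2):
    """Sample only among the k highest-scoring tokens."""
    best = sorted(range(len(logits)), key=lambda i: -logits[i])[:k]
    probs = softmax([logits[i] for i in best])
    return random.choices(best, weights=probs)[0]

def top_p(logits, p=0.9):
    """Nucleus sampling: smallest token set whose probability mass >= p."""
    probs = softmax(logits)
    order = sorted(range(len(logits)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= p:
            break
    return random.choices(kept, weights=[probs[i] for i in kept])[0]

# Toy next-token distribution after "the sky is"
vocab = ["blue", "clear", "falling", "green"]
logits = [3.0, 2.0, 0.5, 0.1]
print(vocab[greedy(logits)])      # always "blue"
print(vocab[top_k(logits, k=2)])  # "blue" or "clear", at random
```

Same logits, different strategies, different behavior: greedy is deterministic, while top-k and top-p trade predictability for variety, and temperature reshapes the distribution before any of them run.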
muler retweeted
Kişisel Savunma
Kişisel Savunma@korunmakilavuzu·
10-Minute Exercises for a Flat Stomach
Turkish
1
100
625
58.8K
muler retweeted
Tom Yeh
Tom Yeh@ProfTomYeh·
Softmax vs Sigmoid ✍️ Interact 👉 byhand.ai/Khlg9b

= Softmax =

Softmax is how deep networks turn raw scores into a probability distribution — the final layer of every classifier, and the core of every attention head in a transformer.

To see what it does, picture five boba tea shops on the same block, all competing for your dollar. Five candidates: a, b, c, d, e — different chains, different brewing styles, different pearls. A boba reviewer hands you a chewiness score for each — higher means perfectly chewy "QQ" pearls with the right bite (ask a Taiwanese friend to find out what QQ means). Negative scores are real: mushy bobas, overcooked pearls, a batch left sitting too long.

How do you turn five chewiness scores into an allocation that adds to a whole dollar? You could spend everything at the chewiest shop, but that ignores how good the runners-up are. Softmax is the smooth alternative.

Read the diagram left to right. First, exponentiate each score x to get e^x — this does two things: it turns negative chewiness into small positives, and it stretches the gaps between scores exponentially. Then sum all five into a single total Z. Finally, divide each e^x by Z to get a probability. The five probabilities add up to one, so you can read them as percentages of your dollar. The chewiest shop gets the biggest slice — but never the whole dollar. That's the point of softmax: it ranks confidently while still leaving room for the others.

= Sigmoid =

Sigmoid squashes any real number into a probability between 0 and 1 — the classic activation for binary classification, and still the gating function inside LSTMs and GRUs.

Same boba block as the previous Softmax example, narrowed to just two contenders — a hot new shop `a` with chewiness score x, and your usual go-to `b` whose score is pinned at zero (the neutral baseline you've come to expect). Sigmoid is just softmax with two players, one of them pinned to zero.

Read the diagram left to right. First, exponentiate each score — for the usual shop `b` whose score is zero, this is just e^0 = 1 (the constant baseline). Then sum the two into a total Z. Finally, divide each e^x by Z to get a probability. The two probabilities add up to one — the new shop wins more of your dollar when its pearls get chewier, and your usual keeps the rest. That's the point of sigmoid: it turns a single chewiness score into a clean 0-to-1 chance you'll try the new place over your usual.

---

AI Math, Algorithms, Architectures by hand ✍️ Subscribe to my 60K+ reader newsletter 👉 byhand.ai
English
9
168
1.2K
71.1K
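The boba arithmetic in the tweet above is easy to check numerically. A minimal Python sketch (the chewiness scores are made up for illustration), including the claim that sigmoid is exactly the two-player softmax with one score pinned at zero:

```python
import math

def softmax(scores):
    """e^x for each score, divided by the total Z."""
    exps = [math.exp(x) for x in scores]
    z = sum(exps)
    return [e / z for e in exps]

def sigmoid(x):
    """Softmax with two players, one pinned at score 0: e^x / (e^x + e^0)."""
    return math.exp(x) / (math.exp(x) + 1.0)

# Five shops with chewiness scores (negatives are the mushy batches)
scores = [2.0, 1.0, 0.5, -0.5, -1.0]
dollar = softmax(scores)
print([round(p, 3) for p in dollar])  # each shop's share of your dollar
print(round(sum(dollar), 3))          # 1.0 — always adds to a whole dollar

# Sigmoid equals the two-player softmax with baseline 0
x = 1.3
print(round(sigmoid(x), 3))  # chance you try the new shop over your usual
assert abs(sigmoid(x) - softmax([x, 0.0])[0]) < 1e-12
```

Note that the chewiest shop (score 2.0) gets the biggest slice but never the whole dollar, which is exactly the "ranks confidently while leaving room for the others" behavior the tweet describes.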
muler retweeted
Bedroom Fitness
Bedroom Fitness@BedroomFitnes·
Just 3 days and people will think you did liposuction
English
78
565
2.8K
479.5K
muler retweeted
Nainsi Dwivedi
Nainsi Dwivedi@NainsiDwiv50980·
Learn AI for free directly from top companies.

1 - Anthropic: anthropic.skilljar.com
2 - Google: grow.google/ai
3 - Meta: ai.meta.com/resources/
4 - NVIDIA: developer.nvidia.com/cuda
5 - Microsoft: learn.microsoft.com/en-us/training/
6 - OpenAI: academy.openai.com
7 - IBM: skillsbuild.org
8 - AWS: skillbuilder.aws
9 - DeepLearning.AI: deeplearning.ai
10 - Hugging Face: huggingface.co/learn

👇 Comment "Learning" if you find this helpful. Repost so others can benefit. Must bookmark for future reference.
Nainsi Dwivedi tweet media
English
36
343
1.7K
74.4K
muler retweeted
Ai With Piyas
Ai With Piyas@piyascode9·
🚨 SHOCKING: An ex-Anthropic researcher just leaked the exact internal prompting framework the team uses. Most people treat Claude like a basic chatbot and leave 60–70% of its reasoning power on the table. These 10 prompts are how the pros actually use it — tested internally for maximum clarity, honesty, and depth. Copy-paste ready. Zero fluff. Save this thread. Your Claude game is about to change forever. (Pro tip: use them in order for compound results)
Ai With Piyas tweet media
English
63
95
455
78.2K
muler retweeted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
Someone compiled the actual CS curriculum from MIT, Stanford, Berkeley, and Carnegie Mellon into one free GitHub repo, and it has 67,300 stars.

It's called awesome-courses and it links directly to lecture videos, assignments, and exams from courses universities charge $80,000 a year to teach. No certificate at the end. Just the knowledge the certificate is supposed to represent.

Organized by subject: Systems, AI, Theory, Security, Compilers, Databases, Programming Languages, Distributed Systems. Every entry tells you the school, the professor, and exactly what materials are available before you click a single link.

Coursera charges $49/month for recorded versions of courses that are already free. This repo is the original. 67.3K stars. MIT License. 100% open source.

github.com/prakhar1989/aw…
Ihtesham Ali tweet media
English
20
268
1.3K
59.2K
muler retweeted
Kawsar
Kawsar@Kawsar_Ai·
During a job interview, if they ask: “Do you have any questions for us?” USE THE GOLDEN RESPONSE:
English
118
799
4.5K
2.1M
Allie Beth Stuckey
Allie Beth Stuckey@conservmillen·
@iamBrianBJ Every justice system for all of time has had imperfect people, yet God still demands the death penalty for murder. I’m going to trust that He knows justice better than we do
English
32
6
384
9.7K
Allie Beth Stuckey
Allie Beth Stuckey@conservmillen·
If, after a fair trial, we applied the death penalty quickly, consistently, and publicly to murderers, the “temperature” would go down real quick. “Because the sentence against an evil deed is not executed speedily, the heart of the children of man is fully set to do evil.” Ecclesiastes 8:11
English
155
942
6.7K
261.7K
muler
muler@codeccpp·
@conservmillen Got any verses from the scriptures to back that up?
English
0
0
0
644
muler
muler@codeccpp·
@MikeWingerii Got any verses from Scripture to back that up?
English
0
0
2
164
muler
muler@codeccpp·
@SlowToWrite True. And you saw a video of Trump depicting the Obamas as apes, and you go, “Trump didn’t mean to imply racist stereotypes.”
English
0
0
0
25
Samuel Sey
Samuel Sey@SlowToWrite·
In the Garden of Eden, Satan said: “Did God really say?” Sometimes on social media, he says: “Did what you saw with your own eyes really happen? How can you trust Trump or the mainstream narrative?”
English
15
8
124
4.1K
muler
muler@codeccpp·
@SlowToWrite You need it as well. You need to stop lying.
English
0
0
1
92
Samuel Sey
Samuel Sey@SlowToWrite·
The Right needs the gospel just as much as the Left.
English
64
151
1.7K
27.7K
muler retweeted
Evan Luthra
Evan Luthra@EvanLuthra·
🚨 Salesforce just replaced 4,000 customer support reps with AI agents. Their CEO said it out loud: went from 9,000 heads to 5,000.

Anthropic's agents team just dropped a free 30-min video teaching you how to build these exact agents. From the engineers who built the stack.

Meanwhile beginners are charging $5k, $7k, $12k/month building them for clients.

CANCEL ALL YOUR WEEKEND PLANS AND BUILD AI AGENTS:
English
55
103
585
108.8K