kalyan • as/acc

4.2K posts

@midx34

hungry for 💰 | novel at research | views are my own

Joined February 2024
388 Following · 197 Followers
Pinned Tweet
kalyan • as/acc@midx34·
Serious question: A human is born with innate IQ, which either grows or depreciates with the kind of data being fed into the brain as the person grows. Why did the entire AI research ecosystem latch onto data and depend so heavily on it to build intelligence? Why not the raw IQ first?
4 replies · 2 reposts · 15 likes · 1.4K views
Julia Li@jjuulliiaallii·
I will fly you out to SF, all expenses paid. You'll get:
> 3 days in SF
> fully covered flights, food, and housing
> a chance to build with @photon_hq
The Photon residency is for TOP growth, engineers, and designers who reimagine agent-human interaction. Reply "link" for the link to join us in SF.
921 replies · 36 reposts · 1K likes · 109.4K views
kalyan • as/acc@midx34·
I am taking a break. There are many things to sort out so idk when I'll be back. Bye till then.
0 replies · 0 reposts · 2 likes · 210 views
vixhaℓ@TheVixhal·
Today, the warden of the girls' hostel suddenly came into my girlfriend's room… and the worst part was that I was in my girlfriend's room.

Now imagine the scene:
- Warden banging on the door.
- My girlfriend panicking.
- Me standing there like, "Bro, this is how my college journey ends."

The warden starts interrogating me:
- What are you doing here?
- Where is your ID card?
- Are you really her cousin?

Each question felt like a mini death penalty. I knew one wrong answer and my entire semester GPA would be replaced by an FIR number.

And that's basically what the Naive Bayes classifier does: it looks at multiple pieces of evidence (features), naively assumes they're independent, and classifies you into a category based on probability.

Naive Bayes is a simple yet powerful classification algorithm based on Bayes' theorem that makes a "naive" assumption: all features are independent of each other (even when they're not).

Formula:
P(C | F) = (P(F | C) × P(C)) / P(F)

Where:
- C: the class (category) we want to predict
- F: the evidence (features) we observe
- P(C): prior probability of the class
- P(C | F): posterior probability (what we want to find)
- P(F | C): likelihood of seeing these features given the class

Since the features are assumed independent, the likelihood factorizes:
P(F | C) = P(F1 | C) × P(F2 | C) × ... × P(Fn | C)
where Fn is the nth feature.

Let's take an example. You receive an email: is it SPAM or HAM (not spam)?

Email content: "Congratulations! You won free money. Click here now!"

Historical data: 100 emails total (60 spam, 40 ham).

Word frequencies, as (spam count, ham count):
- free: (40, 2)
- money: (35, 5)
- click: (30, 3)
- congratulations: (25, 8)

For example, money: (35, 5) means that out of 60 spam emails, the word 'money' appears in 35 of them, while out of 40 ham (non-spam) emails, it appears in only 5.
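The factorized formula above fits in a few lines of Python. This is a minimal sketch, not library code; the function name `naive_bayes_score` is my own:

```python
from math import prod

def naive_bayes_score(prior, feature_likelihoods):
    """Unnormalized posterior: P(C) times the product of P(Fi | C),
    under the naive assumption that the features are independent."""
    return prior * prod(feature_likelihoods)

# Score the spam class using the word likelihoods from the example above.
spam_score = naive_bayes_score(0.6, [40/60, 35/60, 30/60, 25/60])
print(round(spam_score, 4))  # ≈ 0.0486
```

Because P(F) is the same for every class, comparing these unnormalized scores is enough to pick a winner.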
Let's solve it step by step.

Step 1: Calculate prior probabilities
- P(Spam) = 60/100 = 0.6
- P(Ham) = 40/100 = 0.4

Step 2: Calculate likelihoods for SPAM
- P(free | Spam) = 40/60 ≈ 0.667
- P(money | Spam) = 35/60 ≈ 0.583
- P(click | Spam) = 30/60 = 0.5
- P(congratulations | Spam) = 25/60 ≈ 0.417

Combined likelihood:
P(Words | Spam) = 0.667 × 0.583 × 0.5 × 0.417 ≈ 0.081

Step 3: Calculate likelihoods for HAM
- P(free | Ham) = 2/40 = 0.05
- P(money | Ham) = 5/40 = 0.125
- P(click | Ham) = 3/40 = 0.075
- P(congratulations | Ham) = 8/40 = 0.2

Combined likelihood:
P(Words | Ham) = 0.05 × 0.125 × 0.075 × 0.2 ≈ 0.0000938

Step 4: Calculate posterior probabilities (the common denominator P(Words) cancels, so we only need the numerators)
- P(Spam | Words) ∝ P(Words | Spam) × P(Spam) = 0.081 × 0.6 ≈ 0.0486
- P(Ham | Words) ∝ P(Words | Ham) × P(Ham) = 0.0000938 × 0.4 ≈ 0.0000375

Step 5: Compare and classify
P(Spam | Words) > P(Ham | Words), so the email is classified as SPAM.

Prediction confidence:
Spam probability = P(Spam | Words) / (P(Spam | Words) + P(Ham | Words)) = 0.0486 / (0.0486 + 0.0000375) ≈ 0.9992

Converted to a percentage: 0.9992 × 100 = 99.92%.

Final answer: this email is SPAM with 99.92% confidence. Congratulations, you've just learned the Naive Bayes classifier!

Real-world applications of Naive Bayes:
1. Spam detection: Gmail and other email services use Naive Bayes as part of their spam filtering.
2. Sentiment analysis: classifying movie reviews, tweets, or product reviews as positive, negative, or neutral based on word patterns.
3. Document classification: categorizing news articles (sports, politics, tech), research papers, or support tickets into predefined categories.
4. Medical diagnosis: predicting diseases from symptoms (features).
5. Real-time prediction: because it's so fast, it's used in systems requiring instant classification: content moderation, fraud detection, recommendation filtering.
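The five steps above can be reproduced end to end in plain Python. This is a self-contained sketch of the worked example; the variable names are my own:

```python
from math import prod

# Word frequencies from the example, as (spam_count, ham_count) per word.
word_counts = {
    "free": (40, 2),
    "money": (35, 5),
    "click": (30, 3),
    "congratulations": (25, 8),
}
n_spam, n_ham = 60, 40  # 100 historical emails: 60 spam, 40 ham

# Step 1: prior probabilities
p_spam = n_spam / (n_spam + n_ham)  # 0.6
p_ham = n_ham / (n_spam + n_ham)    # 0.4

# Steps 2-3: combined likelihoods under the naive independence assumption
like_spam = prod(s / n_spam for s, _ in word_counts.values())
like_ham = prod(h / n_ham for _, h in word_counts.values())

# Step 4: unnormalized posteriors (the shared P(Words) denominator cancels)
post_spam = like_spam * p_spam
post_ham = like_ham * p_ham

# Step 5: compare, classify, and normalize into a confidence
label = "SPAM" if post_spam > post_ham else "HAM"
confidence = post_spam / (post_spam + post_ham)
print(label, f"{confidence:.2%}")  # SPAM 99.92%
```

In practice you would also apply Laplace (add-one) smoothing, so a word never seen in one class doesn't zero out the entire product.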
40 replies · 16 reposts · 464 likes · 46.3K views
`@ick_real·
EMPLOYED PEOPLE ONLY!!! What’s the hardest part of ur job????
10K replies · 649 reposts · 14.5K likes · 6.9M views
maharshi@maharshii·
i may have legit cooked, will let y'all know soon.
6 replies · 0 reposts · 102 likes · 6K views
kalyan • as/acc@midx34·
@iBuild it's very good, but I guess I'm comfortable with Cursor. I like the way Antigravity presents itself.
0 replies · 0 reposts · 1 like · 47 views
void.@iBuild·
this works impressively nice, gotta say goodbye to cursor atp.
131 replies · 17 reposts · 782 likes · 45.4K views
AshutoshShrivastava@ai_for_success·
Thank you Google DeepMind team for Gemini 3 swag. 💙 🖤
34 replies · 12 reposts · 465 likes · 30.1K views
Vaibhav Sisinty@VaibhavSisinty·
Next Reel: The best unknown Indian AI companies. 🇮🇳 (Last one did 2M+ views) Drop the hidden gems below. 👇
22 replies · 0 reposts · 58 likes · 10.5K views
kalyan • as/acc@midx34·
@lochan_twt i seriously don't understand how the investors are so dumb. many people get funded for lame ideas, but when I tried with an actual idea, i couldn't find those retards. the world is too big.
0 replies · 0 reposts · 0 likes · 11 views
Saumya Saxena@saxenasaheb·
People who don’t drink and smoke, what do you do at parties?
863 replies · 7 reposts · 628 likes · 142.7K views
purusha - n/eti@purusa0x6c·
I am learning dsa from harkirat and web3 from striker.
38 replies · 4 reposts · 321 likes · 26.1K views
kalyan • as/acc@midx34·
@karpathy sir, idk how to ask but I request you to respond to this.
kalyan • as/acc@midx34

@purusa0x6c Agreed, but you've listed some examples of what happens naturally. In today's case, we're trying to have control over the intelligence we're building, right? Why can't we manipulate it? Why don't we try to build a concept of innate IQ and play with it?

2 replies · 0 reposts · 2 likes · 486 views