SheCodes

565 posts

@learnBigO

1% better each day, all-time learner. Let's grow together!

Joined August 2022
81 Following · 66 Followers
SheCodes@learnBigO·
@wojakcodes Same, even if it's a decent amount, I would still choose being an adult over being a kid.
0 replies · 0 reposts · 0 likes · 17 views
Wojak Codes@wojakcodes·
Being an adult ain't so bad when you make lots of money. I'd pick this over obeying broke and stupid teachers at school just 'cos they're older than me lol.
18 replies · 25 reposts · 488 likes · 7.4K views
SheCodes@learnBigO·
@blinkitcares I returned my Blinkit order and was told the refund would arrive in 3–5 days. It has now been 22 days and I haven't received it. Please process my refund urgently or escalate this issue. If the refund has been completed, please send a copy of the confirmation. I have also sent the OrderID in a DM.
0 replies · 0 reposts · 0 likes · 18 views
SheCodes@learnBigO·
@blinkitcares Still waiting for my refund. The transaction is still not reflecting in my bank statement, and I checked with the bank as well; they say no transaction was made with the RRN you provided. I'm seeing others' tweets too, and it looks like you just scam us this way!!
0 replies · 0 reposts · 0 likes · 30 views
SheCodes@learnBigO·
@blinkitcares Services are really bad, and chat support is even worse. I am still waiting for my refund and couldn't find any contact to ask about this...
2 replies · 0 reposts · 2 likes · 42 views
Piyush@astraphiliaa·
@learnBigO For real, I have an interview and don't feel like studying.
1 reply · 0 reposts · 1 like · 16 views
SheCodes@learnBigO·
I am struggling to find the motivation to study and get better at what I am doing rn.😮‍💨 #tech #jobswitch
1 reply · 0 reposts · 1 like · 30 views
SheCodes@learnBigO·
PA is here!!
0 replies · 0 reposts · 2 likes · 31 views
SheCodes retweeted
Math Files@Math_files·
Bayes' theorem is probably the single most important thing any rational person can learn. Many of the debates and disagreements we shout about happen because we don't understand Bayes' theorem or how human rationality often works.

Bayes' theorem is named after the 18th-century mathematician Thomas Bayes, and it is essentially a formula that asks: when you are presented with all of the evidence for something, how much should you believe it?

Bayes' theorem teaches us that our beliefs are not fixed; they are probabilities. Our beliefs change as we weigh new evidence against our assumptions, or our priors. In other words, we all carry certain ideas about how the world works, and new evidence can challenge them.

For example, somebody might believe that smoking is safe, that stress causes mouth ulcers, or that human activity is unrelated to climate change. These are their priors, their starting points. They can be formed by culture, bias, or incomplete information.

Now imagine a new study comes along that challenges one of your priors. A single study might not carry enough weight to overturn your existing beliefs. But as studies accumulate, the scales may eventually tip, and your prior will become less and less plausible.

Bayes' theorem argues that being rational is not about black and white. It's not even about true or false. It's about what is most reasonable based on the best available evidence. But for this to work, we need as much high-quality data as possible. Without evidence, without belief-forming data, we are left only with our priors and biases. And those aren't all that rational.
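The updating process the tweet describes can be sketched in a few lines of Python. The prior (0.9) and the likelihoods (how probable a contrary study is if the claim is true vs. false) are made-up numbers for illustration only; the point is just to show a strong prior tipping as evidence accumulates.

```python
# Bayesian updating: a strong prior eroded by accumulating evidence.
# All numbers here are illustrative assumptions, not real data.

def update(prior, p_evidence_given_true, p_evidence_given_false):
    """One application of Bayes' theorem:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

# Start with 90% belief in a claim (e.g. "smoking is safe").
belief = 0.9

# Each contrary study is 4x more likely if the claim is false (0.8)
# than if it is true (0.2). Watch the posterior fall study by study.
for study in range(5):
    belief = update(belief, 0.2, 0.8)
    print(f"after study {study + 1}: belief = {belief:.3f}")
```

No single study flips the belief, but five of them take it from 0.9 to under 0.01, which is exactly the "scales tipping" the tweet describes.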
2.2K replies · 8.4K reposts · 37K likes · 27.2M views
SheCodes@learnBigO·
Isn't it too easy to get engagement through ragebait?
1 reply · 0 reposts · 0 likes · 30 views
Abhijit@abhijitwt·
Tier-3 students are under the delusion that college doesn't matter. Meanwhile, companies: already rejected them.
124 replies · 62 reposts · 1.9K likes · 216.5K views
SheCodes retweeted
Akshay 🚀@akshay_pachaar·
DevOps vs. MLOps vs. LLMOps:

Many teams are trying to apply DevOps practices to LLM apps. But DevOps, MLOps, and LLMOps solve fundamentally different problems. Here's why this matters: 88% of ML initiatives struggled to reach production using traditional DevOps practices. And LLMs introduce challenges that even MLOps wasn't designed for. Let's break it down:

→ DevOps is software-centric. You write code, test it, and deploy it. The feedback loop is straightforward: does the code work or not? The primary artifact is code. Testing is deterministic. The tooling is mature after 15+ years.

→ MLOps is (model + data)-centric. Here, you're dealing with data drift, model decay, and continuous retraining. The code might be fine, but the model's performance can degrade over time because the world changes. A fraud detection model might work perfectly at launch, then fail within weeks as fraudsters adapt. The primary artifact expands to code + data + models, and you need to version all three. This is why MLflow, DVC, and feature stores became essential.

→ LLMOps is foundation-model-centric. Here, you're typically not training models from scratch. Instead, you select a foundation model and optimize through three parallel paths:
- Prompt engineering
- Context/RAG setup
- Fine-tuning

Unlike DevOps and MLOps, these paths run in parallel, not sequentially. But here's what really separates LLMOps: the monitoring is completely different. In MLOps, you track data drift, model decay, and accuracy. In LLMOps, you're watching for:
- Hallucination detection
- Bias and toxicity
- Token usage and cost
- Human feedback loops

This is because LLM outputs are non-deterministic. You can't just check whether the output is "correct." You need to ensure it's safe, grounded, and cost-effective. 63% of production AI systems experience dangerous hallucinations within their first 90 days.

The cost model also flips. MLOps costs are training-heavy (GPU hours during development). LLMOps costs are inference-heavy (every request consumes tokens). This is why prompt efficiency, caching, and model routing matter so much in LLMOps.

The evaluation loop in LLMOps feeds back into all three optimization paths simultaneously. Failed evals might mean you need better prompts, richer context, OR fine-tuning. So it's not a linear pipeline anymore.

One more thing: prompt versioning and RAG pipelines are now first-class citizens in LLMOps, just like data versioning became essential in MLOps. And the ops layer you choose should match the system you're building.

👉 Over to you: what does your LLM monitoring stack look like right now?
_____
Find me → @akshay_pachaar
For more insights and tutorials on AI and Machine Learning!
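The LLMOps monitoring the thread describes (per-request token cost plus a groundedness signal, rather than a single pass/fail test) can be sketched as below. Every name, threshold, and the word-overlap "grounding" heuristic is an illustrative assumption of this sketch, not a real library API; production systems use far stronger checks.

```python
# Minimal sketch of per-request LLM monitoring: track token cost and
# flag answers that share no words with the retrieved context.
# All names, pricing, and the grounding heuristic are assumptions.

from dataclasses import dataclass, field

@dataclass
class RequestMetrics:
    prompt_tokens: int
    completion_tokens: int
    grounded: bool  # does the answer overlap the retrieved context at all?

@dataclass
class LLMMonitor:
    cost_per_1k_tokens: float = 0.002  # assumed flat rate, not real pricing
    history: list = field(default_factory=list)

    def record(self, answer: str, context: str,
               prompt_tokens: int, completion_tokens: int) -> RequestMetrics:
        # Crude grounding check: any shared lowercase word counts.
        overlap = set(answer.lower().split()) & set(context.lower().split())
        metrics = RequestMetrics(prompt_tokens, completion_tokens,
                                 grounded=bool(overlap))
        self.history.append(metrics)
        return metrics

    def total_cost(self) -> float:
        # Inference-heavy cost model: every request's tokens are billed.
        tokens = sum(m.prompt_tokens + m.completion_tokens
                     for m in self.history)
        return tokens / 1000 * self.cost_per_1k_tokens

    def hallucination_rate(self) -> float:
        # Fraction of answers with no overlap with their context.
        return sum(not m.grounded for m in self.history) / len(self.history)
```

The design point mirrors the thread: cost accrues per request (inference-heavy), and quality is a continuous rate you watch over time, not a deterministic test you run once before deploy.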
23 replies · 145 reposts · 639 likes · 32.6K views
SheCodes@learnBigO·
Is life worth taking risks?
0 replies · 0 reposts · 0 likes · 9 views
SheCodes@learnBigO·
UP parents and their obsession with govt. jobs🫤🤦‍♂️
0 replies · 0 reposts · 0 likes · 8 views