EL-3ashmawy

7.8K posts


@AcountUnknown9

This account disavows El-Sisi and his henchmen, in the face of what is happening in Gaza and of every injustice inflicted on any oppressed person.

Egypt · Joined March 2011
2.3K Following · 162 Followers
EL-3ashmawy retweeted
Abeer mohammed @AbeerMohamm12
Gaza is being bombed right now 🥺💔 Don't let it be just passing news. We are dying here. Share our story and make our voices heard 🫶
[image attached]
EL-3ashmawy retweeted
avrl ☘ @avrldotdev
Applied System Design (Real Scale) #12: How Airbnb prevents double booking.

Problem: two users, one in Tokyo and one in New York, hit "Confirm Booking" for the exact same cabin at the exact same time. How does the system decide who wins without a race condition?
[image attached]
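The race the post above describes comes down to making "check availability" and "claim the slot" a single atomic step. Here is a minimal in-memory sketch of that idea (all names are illustrative; a real system would use a database unique constraint, a conditional write, or a distributed lock rather than one process-local lock):

```python
import threading

class BookingService:
    """Toy sketch: serialize confirmations so check-and-claim is atomic."""

    def __init__(self):
        self._lock = threading.Lock()
        self._bookings = {}  # (listing_id, date) -> user_id

    def confirm(self, listing_id, date, user_id):
        # Critical section: check availability and claim it as one step.
        with self._lock:
            key = (listing_id, date)
            if key in self._bookings:
                return False  # someone else won the race
            self._bookings[key] = user_id
            return True

svc = BookingService()
results = {}

def attempt(user):
    results[user] = svc.confirm("cabin-42", "2025-01-01", user)

t1 = threading.Thread(target=attempt, args=("tokyo-user",))
t2 = threading.Thread(target=attempt, args=("ny-user",))
t1.start(); t2.start(); t1.join(); t2.join()
# Exactly one of the two concurrent attempts succeeds.
```

Whichever thread enters the critical section first wins; the loser sees the key already claimed and gets a clean rejection instead of a double booking.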
EL-3ashmawy retweeted
Nitin.nn @NitinthisSide_
🧵 Day 21/30 — #SystemDesign
Load Balancer: the silent system that keeps everything fast & alive

One server handling all traffic? Works… until it doesn't. Traffic spikes → server overload → downtime. That's why systems use a Load Balancer. A load balancer sits between users and servers and distributes incoming requests across multiple instances.

Flow: User → Load Balancer → Multiple Servers

Goal:
→ No single server overloaded
→ Better performance
→ High availability

Types (keep it simple):
Layer 4 (Transport) → works on IP + port → fast, less intelligent
Layer 7 (Application) → works on HTTP/HTTPS → smart routing (headers, paths)

Common algorithms:
→ Round Robin (equal distribution)
→ Least Connections (send to least busy)
→ IP Hash (same user → same server)
→ Weighted (based on server capacity)

Why it matters:
→ Handles traffic spikes
→ Improves uptime
→ Enables horizontal scaling
→ Supports failover
→ Better latency

Real usage: AWS ELB / ALB, NGINX, HAProxy, Cloudflare. Every scalable system uses one.

Golden rule: scaling = adding more servers; a load balancer = using them efficiently.

#30DaysOfSystemDesign #LoadBalancing #BackendEngineering
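The three simplest algorithms on the list above each fit in a few lines of Python. A sketch with hypothetical server names (real balancers such as NGINX or HAProxy implement these natively):

```python
from itertools import cycle

servers = ["srv-a", "srv-b", "srv-c"]

# Round Robin: hand requests to servers in a fixed rotation.
rr = cycle(servers)
rr_assignments = [next(rr) for _ in range(6)]

# Least Connections: pick the server with the fewest active connections.
active = {"srv-a": 5, "srv-b": 1, "srv-c": 3}

def least_connections(active):
    return min(active, key=active.get)

# IP Hash: the same client IP always maps to the same server,
# which keeps a user's session on one backend.
def ip_hash(ip, servers):
    return servers[hash(ip) % len(servers)]
```

Round robin ignores load, least-connections adapts to it, and IP hash trades even spread for stickiness; weighted variants extend each of these with per-server capacity factors.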
[image attached]
Quoted: Nitin.nn @NitinthisSide_

🧵 Day 20/30 — #SystemDesign
Kafka: the backbone behind real-time data pipelines at scale

Most systems don't just serve requests. They produce continuous streams of events — user clicks, payments, logs, metrics, notifications. Handling this data reliably and at scale is not trivial. That's where Apache Kafka comes in. Kafka is a distributed event streaming platform used to build real-time data pipelines and streaming applications. It allows services to publish and consume events efficiently without tight coupling.

Core idea: instead of services calling each other directly:
→ a producer sends an event to Kafka
→ Kafka stores it durably
→ multiple consumers read and process it independently
Flow: Producer → Kafka Topic → Consumers. One event can power many systems at once.

Key concepts:
1. Topic — a category of events (e.g., orders, payments, logs)
2. Partition — topics are split into partitions for parallel processing
3. Producer — sends messages to Kafka
4. Consumer — reads messages from Kafka
5. Consumer Group — multiple consumers sharing load
6. Offset — position of a message within a partition

Why Kafka matters:
→ High throughput (millions of messages/sec)
→ Fault tolerant (replication across brokers)
→ Scalable horizontally
→ Durable storage (events persisted)
→ Real-time processing
→ Decouples systems
It turns data into a streaming backbone.

Real-world example — an e-commerce order is placed and an OrderCreated event is sent to Kafka. Consumers react independently:
→ the payment service processes the transaction
→ inventory updates stock
→ notification sends an email
→ analytics tracks the event
→ the recommendation system updates behavior
All from one event stream.

Why companies use Kafka: Netflix for event streaming pipelines, LinkedIn (creator of Kafka), Uber for real-time data flows, Amazon for internal streaming systems, fintech apps for transaction streams. Kafka powers modern data-driven systems.

Important strength: Kafka stores events, it doesn't just forward them. Consumers can read in real time, replay old events, recover from failures, and process at their own pace. This makes systems resilient.

Challenges most ignore — Kafka is powerful, but not simple:
→ requires cluster management
→ partition design is critical
→ ordering is guaranteed only within a partition
→ exactly-once semantics is complex
→ monitoring and tuning are needed
Misuse leads to complexity quickly.

Kafka vs. queue: a queue message is consumed once; a Kafka message is stored and can be consumed multiple times. Kafka is more like a log system than a simple queue. Don't use it for simple request-response systems.

#30DaysOfSystemDesign #Kafka #BackendEngineering
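The "stored log plus independent offsets" idea above is exactly what separates Kafka from a plain queue. A toy single-partition sketch in plain Python (this is not the Kafka API; event and group names are illustrative):

```python
class TopicPartition:
    """Toy append-only log imitating one Kafka partition: events are stored,
    and each consumer group tracks its own offset, so the same event can be
    consumed independently (and replayed) by many groups."""

    def __init__(self):
        self.log = []      # durable, ordered event log
        self.offsets = {}  # consumer group -> next position to read

    def produce(self, event):
        self.log.append(event)

    def consume(self, group):
        pos = self.offsets.get(group, 0)
        batch = self.log[pos:]
        self.offsets[group] = len(self.log)
        return batch

    def replay(self, group, from_offset=0):
        # Rewind: reading never deletes events, so history can be re-read.
        self.offsets[group] = from_offset

orders = TopicPartition()
orders.produce("OrderCreated:1001")
payments = orders.consume("payment-service")       # each group sees the event
emails = orders.consume("notification-service")    # ...independently
```

Both groups receive the same `OrderCreated` event, and `replay` lets a recovering consumer re-read history, which is the property a consume-once queue cannot offer.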

EL-3ashmawy @AcountUnknown9
@MrXroboT Congratulations on trying to show how clever you are with code I doubt will come out at high quality. All you've gained is that your team will hold it against you.
Bess Gates @MrXroboT
Today we had sprint planning at the company. While they were busy explaining and interpreting the tasks, I told them: listen up, I already did the whole sprint's tasks over the weekend, backend and frontend both, and I got up and opened a live demo of every task in the sprint, all done 🤣😌
EL-3ashmawy retweeted
Dhanian 🗯️ @e_opore
What is Load Balancing in System Design?

Most systems fail not because of bad code, but because they can't handle scale. Load balancing is the technique that keeps systems fast, reliable, and always available.

What is load balancing?
• Load balancing is the process of distributing incoming network traffic across multiple servers.
• It ensures no single server gets overwhelmed, improving performance and reliability.
• It helps systems scale horizontally by adding more servers instead of upgrading one machine.
• It increases fault tolerance — if one server fails, traffic is redirected to others.

Before load balancing: User → Single Server → Database. One server handles everything. If traffic spikes → slow response or crash. If the server fails → the system goes down completely.

After load balancing: User → Load Balancer → Multiple Servers → Database. Traffic is distributed evenly. The system stays fast under heavy load. Failures are handled without downtime.

Types of load balancing:
• Round Robin – requests are distributed one by one to each server.
• Least Connections – traffic goes to the server with the fewest active connections.
• IP Hash – requests are routed based on the user's IP for session consistency.
• Weighted – more powerful servers handle more requests.

How load balancing works:
User → Request → Load Balancer → Select Server → Forward Request → Server Processes → Response → Load Balancer → User

The flow: the user sends a request; the load balancer receives it, selects the best server based on an algorithm, and forwards the request; the server processes it and sends a response; the load balancer returns the response to the user.

Why load balancing matters:
• Prevents server overload
• Improves application speed
• Ensures high availability
• Enables scalability
• Handles traffic spikes efficiently

Before load balancing, systems break under pressure. After load balancing, systems grow with demand. Load balancing is the backbone of scalable and reliable system design.

📘 Grab the System Design ebook here: codewithdhanian.gumroad.com/l/urcjee
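Of the four types listed, weighted balancing is the least obvious to implement. One common approach, sketched here with illustrative server names and weights, is to pick a point on a line whose segments are proportional to each server's capacity:

```python
import random

# Hypothetical capacities: "big-srv" should get ~3x the traffic of "small-srv".
weights = {"big-srv": 3, "small-srv": 1}

def pick_weighted(weights, rnd=random.random):
    """Pick a server with probability proportional to its weight."""
    total = sum(weights.values())
    r = rnd() * total          # a point on the [0, total) line
    for server, w in weights.items():
        if r < w:              # fell inside this server's segment
            return server
        r -= w                 # otherwise skip past it
```

The `rnd` parameter is injectable so the choice can be made deterministic in tests; in production it would just be the default random source.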
[image attached]
EL-3ashmawy retweeted
Tech Fusionist @techyoutbe
Kubernetes (64 Key Concepts) 👇👇
[image attached]
Quoted: Tech Fusionist @techyoutbe
Kubernetes Simplified: Most people get overwhelmed by K8s. Here's the simplest way to understand it 👇
Part 1 (1-5): substack.com/@techfusionist…
Part 2 (6-10): substack.com/@techfusionist…

EL-3ashmawy retweeted
freeCodeCamp.org in Arabic @freeCodeCampAR
Building a single AI Agent… has become easy. But building a multi-agent system that runs in production? That's where the real game begins. The problem isn't the idea, it's the architecture: how do you manage state? How do you wire up tools? How do you coordinate between different Agents? And how do you guarantee quality over time?

This book answers these questions with practical code that runs on your machine: no cloud, no API keys, no cost. You will build a real Multi-Agent System using:
- LangGraph for managing state
- MCP for connecting tools in a unified way
- A2A for coordinating communication between Agents
- Ollama for running models locally

And you'll apply it to a real project: a system that learns with you, explains, tests, and adapts. The idea here isn't building an Agent, it's building a complete system. The same pattern is used today in training, support, onboarding, and sales. The difference isn't the domain, it's the engineering.

The real question: are you building a simple Agent… or a system capable of operating in the real world? freecodecamp.org/news/how-to-bu… #IntelligentAgents
EL-3ashmawy retweeted
Tom Yeh @ProfTomYeh
Softmax vs Sigmoid ✍️ Interact 👉 byhand.ai/Khlg9b

= Softmax =

Softmax is how deep networks turn raw scores into a probability distribution — the final layer of every classifier, and the core of every attention head in a transformer.

To see what it does, picture five boba tea shops on the same block, all competing for your dollar. Five candidates: a, b, c, d, e — different chains, different brewing styles, different pearls. A boba reviewer hands you a chewiness score for each — higher means perfectly chewy "QQ" pearls with the right bite (ask a Taiwanese friend to find out what QQ means). Negative scores are real: mushy bobas, overcooked pearls, a batch left sitting too long.

How do you turn five chewiness scores into an allocation that adds to a whole dollar? You could spend everything at the chewiest shop, but that ignores how good the runners-up are. Softmax is the smooth alternative.

Read the diagram left to right. First, raise each score to e^{x} — this does two things: it turns negative chewiness into small positives, and it stretches the gaps between scores exponentially. Then sum all five into a single total Z. Finally, divide each e^{x} by Z to get a probability. The five probabilities add up to one, so you can read them as percentages of your dollar. The chewiest shop gets the biggest slice — but never the whole dollar. That's the point of softmax: it ranks confidently while still leaving room for the others.

= Sigmoid =

Sigmoid squashes any real number into a probability between 0 and 1 — the classic activation for binary classification, and still the gating function inside LSTMs and GRUs.

Same boba block as the Softmax example, narrowed to just two contenders — a hot new shop `a` with chewiness score x, and your usual go-to `b` whose score is pinned at zero (the neutral baseline you've come to expect). Sigmoid is just softmax with two players, one of them pinned to zero.

Read the diagram left to right. First, raise each score to e^{x} — for the usual shop `b` whose score is zero, this is just e^0 = 1 (the constant baseline). Then sum the two into a total Z. Finally, divide each e^{x} by Z to get a probability. The two probabilities add up to one — the new shop wins more of your dollar when its pearls get chewier, and your usual keeps the rest. That's the point of sigmoid: it turns a single chewiness score into a clean 0-to-1 chance you'll try the new place over your usual.

---

AI Math, Algorithms, Architectures by hand ✍️ Subscribe to my 60K+ reader newsletter 👉 byhand.ai
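The two recipes read off the diagrams (exponentiate, sum into Z, divide) translate directly into code. A plain-math sketch, which also checks the claim that sigmoid is a two-way softmax with one score pinned at zero:

```python
import math

def softmax(scores):
    # Exponentiate (negatives become small positives), then normalize by Z.
    exps = [math.exp(x) for x in scores]
    z = sum(exps)
    return [e / z for e in exps]

def sigmoid(x):
    # Two-way softmax with the second score pinned at 0:
    # e^x / (e^x + e^0) == 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

# Five chewiness scores, some negative (mushy pearls are real).
probs = softmax([2.0, 1.0, 0.5, -1.0, -2.0])
```

The five probabilities sum to one, and the chewiest shop gets the biggest (but never the whole) slice; `sigmoid(x)` agrees with `softmax([x, 0.0])[0]` up to floating-point error.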
EL-3ashmawy retweeted
Alex Xu @alexxubyte
CI/CD Pipeline Explained in Simple Terms
[image attached]
EL-3ashmawy retweeted
freeCodeCamp.org in Arabic @freeCodeCampAR
How do you go from using AI… to controlling it? This isn't just about using models; it's a stage called Fine-Tuning. The idea, simply: instead of you adapting to the model, you make the model adapt to you.

In this freeCodeCamp course you'll learn how to move from the basics to practical application. You'll understand the difference between: Pre-training, Prompt Engineering, and Fine-Tuning. Then comes the most important part: how to specialize the model for real tasks. You'll learn techniques like RLHF and QLoRA, so you can adapt huge models with limited resources.

This isn't a theoretical walkthrough; it's a path to becoming a real AI Engineer. The question: are you content with using models… or do you want to build models that understand your needs? freecodecamp.org/news/how-to-fi… #LargeLanguageModels
EL-3ashmawy retweeted
Vishwanath Patil @patilvishi
Scaling isn't just adding servers. It's about distributing traffic smartly. Here's a simple Load Balancing cheat sheet 👇
- Round Robin
- Least Connections
- IP Hash
- Weighted
- Health checks & real-world use
Follow @patilvishi for more system design content.
[image attached]
EL-3ashmawy retweeted
Dhanian 🗯️ @e_opore
15 Essential DevOps Principles Every Developer Should Master:

1. COLLABORATION CULTURE
→ Break silos between developers and operations
→ Encourage shared responsibility
→ Improve communication across teams
DevOps starts with people, not tools.

2. CONTINUOUS INTEGRATION (CI)
→ Merge code changes frequently
→ Automatically build and test code
→ Detect bugs early
Small, frequent updates reduce risk.

3. CONTINUOUS DELIVERY (CD)
→ Automate release pipelines
→ Ensure code is always deployable
→ Reduce manual intervention
Ship faster with confidence.

4. INFRASTRUCTURE AS CODE (IaC)
→ Manage infrastructure using code
→ Use tools like Terraform, CloudFormation
→ Version control infrastructure
No more manual server setup.

5. AUTOMATION EVERYWHERE
→ Automate testing, deployment, monitoring
→ Reduce human errors
→ Increase efficiency
If it's repetitive, automate it.

6. MONITORING & OBSERVABILITY
→ Track system metrics and logs
→ Use tools like Prometheus, Grafana
→ Detect issues in real time
Visibility is key to reliability.

7. VERSION CONTROL EVERYTHING
→ Store code, configs, scripts in Git
→ Track every change
→ Enable collaboration
Everything should be reproducible.

8. MICROSERVICES & CONTAINERS
→ Use Docker for containerization
→ Deploy services independently
→ Improve scalability
Lightweight and flexible deployments.

9. ORCHESTRATION
→ Manage containers using Kubernetes
→ Automate scaling and deployment
→ Ensure high availability
Control complex distributed systems.

10. SECURITY (DEVSECOPS)
→ Integrate security into pipelines
→ Scan for vulnerabilities
→ Protect secrets and credentials
Security is everyone's responsibility.

11. FAST FEEDBACK LOOPS
→ Get feedback quickly from tests and monitoring
→ Fix issues early
→ Improve iteration speed
Faster feedback = better products.

12. SCALABILITY & HIGH AVAILABILITY
→ Design for traffic spikes
→ Use load balancers and auto-scaling
→ Avoid downtime
Your system should handle growth seamlessly.

13. CONFIGURATION MANAGEMENT
→ Manage system configs consistently
→ Use tools like Ansible, Chef
→ Avoid configuration drift
Consistency across environments matters.

14. CONTINUOUS TESTING
→ Run automated tests at every stage
→ Include unit, integration, and E2E tests
→ Ensure code quality
Testing is part of the pipeline, not an afterthought.

15. LEARN & IMPROVE CONTINUOUSLY
→ Analyze failures
→ Conduct post-mortems
→ Optimize processes
DevOps is a continuous journey, not a destination.

DevOps is about speed, reliability, and collaboration — enabling teams to build and deliver better software faster. If you want to master DevOps in depth, grab the DevOps Handbook: codewithdhanian.gumroad.com/l/yiirl
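Principles 2 and 14 (CI and continuous testing) are usually the first ones teams automate. As one possible shape, a minimal GitHub Actions-style workflow; the project layout and npm commands are assumptions for illustration, not from the thread:

```yaml
# .github/workflows/ci.yml (hypothetical): build and test on every change.
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # fetch the repository
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                      # reproducible dependency install
      - run: npm test                    # automated tests gate every change
```

The same skeleton extends toward CD (principle 3) by adding a deploy job that runs only after the test job succeeds.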
[image attached]
EL-3ashmawy retweeted
د. عبدالرحمن ذياب
A very important free course: Building AI Agents (Full Course). Offered by: DeepLearning. Duration: roughly 4-6 hours.
EL-3ashmawy retweeted
د. عبدالرحمن ذياب
This person came up with a system that massively multiplies learning speed using AI: NotebookLM + Gemini + Obsidian. A smart combination that turns any complex topic into fast, easily digestible knowledge.
EL-3ashmawy retweeted
William Outa @william_outa
For those interested: more than 500 books in PDF format from MIT, across all fields of knowledge and science. Free download. direct.mit.edu/books/search-r…
EL-3ashmawy retweeted
Ojas Sharma @OjasSharma276
Finally, after a lot of procrastination, I've started this Distributed Systems playlist. The playlist consists of 20 videos, and I've just finished Lecture 1: Introduction. Here are some points from the first video:
> Explained the basic requirements of an infrastructure
> Discussed achieving abstraction in infrastructure
> Covered the impact of scalability on performance
> Introduced key aspects of fault tolerance (availability, recoverability, consistency)
> Gave a high-level overview of MapReduce and how it works
The video wasn't very detailed, but the playlist looks promising.
[image attached]
Quoted: Ojas Sharma @OjasSharma276
Distributed Systems. I will share the video insights.
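The MapReduce overview mentioned for Lecture 1 boils down to three phases: map each input to key-value pairs, shuffle to group by key, reduce each group. A single-process word-count sketch of the classic example (a real cluster distributes the map and reduce work across machines):

```python
from collections import defaultdict

def map_phase(doc):
    # Map: each document emits (word, 1) pairs.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: combine each key's values into one result.
    return {key: sum(values) for key, values in grouped.items()}

docs = ["to be or not to be", "be here"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
```

Because map runs independently per document and reduce independently per key, both phases parallelize naturally, which is the whole point of the model.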
EL-3ashmawy retweeted
Jaydeep @_jaydeepkarale
Kubernetes Service Discovery Clearly Explained!

Every Pod in Kubernetes gets its own IP. Sounds great… until it breaks your system.

In a Deployment, you don't run just one Pod. You run multiple replicas:
pod-1 → 10.0.0.1
pod-2 → 10.0.0.2
pod-3 → 10.0.0.3
Each Pod has its own IP. So far, so good.

Now imagine another service wants to call this app. What should it use? 10.0.0.1? 10.0.0.2? 10.0.0.3? There's no single stable address. Already a problem.

Now it gets worse. Pods are ephemeral. If a Pod dies, it is deleted and a new Pod is created with a completely NEW IP. Example:
old pod → 10.0.0.2 ❌
new pod → 10.0.0.9 ✅
Your system is now pointing to a dead IP.

So the real problem is:
👉 Pod IPs are dynamic
👉 Clients need a stable way to connect
Without that, service-to-service communication is unreliable. This is exactly what Kubernetes Services solve.

Instead of pointing to Pods directly, you create a Service. And here's the key idea:
👉 You don't connect to Pods
👉 You connect to a logical group of Pods

How does Kubernetes know which Pods belong to that group? Using labels & selectors. Pods are tagged with labels:
labels:
  app: user-service
The Service defines a selector:
selector:
  app: user-service
That's it. This simple mapping connects them.

Now Kubernetes keeps track of: "All Pods with label app=user-service belong to this Service." Even if Pods die, restart, or scale up and down, the mapping is always updated. And the Service gives you a stable entry point: a fixed IP (ClusterIP) and a DNS name. So clients just call 👉 user-service, not individual Pods.

Behind the scenes, the Service finds matching Pods via labels, traffic is routed to one of them, dead Pods are automatically removed, and new Pods are automatically added. No manual updates.

So the transformation is:
❌ Before: call fragile Pod IPs that keep changing
✅ After: call a stable Service that tracks Pods dynamically

If you remember one thing: Kubernetes doesn't make Pods stable. It makes access to Pods stable. That's the real purpose of a Service. Not just load balancing, but solving the "changing IP" problem elegantly.
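The label/selector mapping described above looks like this in manifest form. A hypothetical sketch, with the image name and ports chosen for illustration:

```yaml
# The Deployment stamps every replica Pod with the app=user-service label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service            # every replica Pod carries this label
    spec:
      containers:
        - name: app
          image: example/user-service:1.0   # illustrative image name
          ports:
            - containerPort: 8080
---
# The Service selects on that label: clients call "user-service", never Pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: user-service                 # becomes the stable DNS name
spec:
  selector:
    app: user-service                # the logical group: Pods with this label
  ports:
    - port: 80
      targetPort: 8080
```

As Pods churn, the Service's endpoint list is updated automatically; nothing in the client or the manifests needs to change.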
[image attached]
EL-3ashmawy retweeted
Dhanian 🗯️ @e_opore
Top 30 RESTful API Concepts, explained like you're five.
[image attached]