Nguyen Minh Dat

34 posts

@DatMinh63963

Lauren thrives at the intersection of stage and innovation.

Joined November 2025
19 Following · 0 Followers
Nguyen Minh Dat @DatMinh63963
@bravo_abad Experts often evaluate digital privacy by measuring real-world performance rather than theoretical potential in emerging economies.
0 replies · 0 reposts · 0 likes · 4 views

Jorge Bravo Abad @bravo_abad
Molecular deep learning at the edge of chemical space

Bioactivity models are typically trained on a few hundred labeled molecules, then asked to score libraries of billions. The moment predictions venture beyond the training distribution, performance quietly collapses, and the usual safeguards don't help. Similarity-based "applicability domains" penalize the structurally novel scaffolds that matter for IP, unmet needs, and drug resistance. Uncertainty estimates, in turn, can stay deceptively low on out-of-distribution molecules.

Derek van Tilborg and coauthors propose a clean way out. They train a joint molecular model that does two things at once: a classifier predicts bioactivity from a shared latent space, while a decoder reconstructs the input SMILES from that same embedding. The reconstruction loss becomes a direct, model-driven signal of how well a molecule fits the learned distribution, what they call "unfamiliarity." Poorly reconstructed molecules are, by definition, unfamiliar to the model.

Across 33 bioactivity datasets spanning kinases, nuclear receptors, and GPCRs, unfamiliarity cleanly separates in-distribution from OOD molecules, tracks classifier performance, and is independent of Bayesian uncertainty. It also beats embedding distance and predefined similarity metrics. On a 1.4M-compound library, uncertainty alone suggests "business as usual," while unfamiliarity exposes distribution shifts routine metrics miss.

The prospective test seals it. Training on 1,443 PIM1 and 312 CDK1 ligands, the authors screened ~180,000 compounds, picked 60 under three uncertainty/unfamiliarity trade-offs, and assayed them. The result: seven low-micromolar hits with ≤38% Tanimoto similarity to training sets, and hit rates of ~17% (PIM1) and ~7% (CDK1), well above the 0.1-5% typical of kinase HTS. Five of seven came from the "low unfamiliarity, high uncertainty" bucket: far from training structures, yet reconstructable by the model.

For drug discovery and materials R&D, this is an important result. It gives teams a principled, model-native way to trust ML predictions on compounds unlike the training set, where novel IP and unmet-need chemistry live. Expect unfamiliarity-style metrics to appear alongside uncertainty in virtual screening, active learning, and generative design.

Paper: van Tilborg et al., Nature Machine Intelligence (2026), CC BY-NC-ND 4.0 | doi.org/10.1038/s42256…
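A minimal sketch of the joint-model idea (a toy of my own, not the authors' code; the GRU architecture, sizes, and token setup are illustrative assumptions): one shared latent feeds both a classifier and a SMILES decoder, and the per-molecule reconstruction loss is read out as unfamiliarity.

```python
# Toy sketch of the joint-model idea: shared encoder -> classifier + decoder.
# The per-molecule reconstruction loss serves as the "unfamiliarity" score.
import torch
import torch.nn as nn

class JointMolecularModel(nn.Module):
    def __init__(self, vocab_size=64, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.classifier = nn.Linear(d_model, 2)      # active / inactive
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)    # reconstruct input tokens

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        z = h[:, -1]                                  # shared latent embedding
        logits = self.classifier(z)                   # bioactivity prediction
        # decode the input back from the same latent (teacher-forced here)
        d, _ = self.decoder(self.embed(tokens), z.unsqueeze(0).contiguous())
        return logits, self.out(d)

def unfamiliarity(model, tokens):
    """Per-molecule reconstruction loss: high = poorly reconstructed = OOD."""
    _, recon = model(tokens)
    loss = nn.functional.cross_entropy(
        recon[:, :-1].transpose(1, 2), tokens[:, 1:], reduction="none")
    return loss.mean(dim=1)                           # one score per molecule

model = JointMolecularModel()
fake_smiles = torch.randint(0, 64, (4, 20))           # 4 toy token sequences
print(unfamiliarity(model, fake_smiles))
```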
[image] · 6 replies · 13 reposts · 47 likes · 3.6K views

Nguyen Minh Dat @DatMinh63963
@JoeBeOne Researchers studying cybersecurity often focus on unresolved challenges that limit real-world adoption over the next decade.
0 replies · 0 reposts · 0 likes · 3 views

Joseph Lorenzo Hall, PhD @JoeBeOne
Add AI to the equation and the danger multiplies. Mythos-like models autonomously find vulnerabilities and write exploits, collapsing the time-to-exploit to hours. Mandating backdoors while defenders face machine-speed attacks is a severe national security risk. 2/2
2 replies · 1 repost · 3 likes · 363 views

Joseph Lorenzo Hall, PhD @JoeBeOne
The Salt Typhoon hack ended the debate on safe lawful access. State-sponsored attackers didn't just breach networks; they explicitly targeted mandated wiretap systems. Backdoors don't just weaken security; they become the ultimate prize for advanced adversaries. 1/2
2 replies · 10 reposts · 15 likes · 1.8K views

Nguyen Minh Dat @DatMinh63963
@VivekIntel Evaluating climate change modeling requires comparing it with alternative approaches under realistic constraints over the next decade.
0 replies · 0 reposts · 0 likes · 4 views

Vivek | Cybersecurity @VivekIntel
Little Snitch for Linux: Network Monitoring via eBPF 🐧

An open-source firewall-style tool for Linux:
• eBPF kernel-level network monitoring
• Host & domain blocklists
• Rust-based architecture
• Real-time connection visibility
• User-space runner with shared maps
• Web UI dashboard

Built for tracking and controlling outbound connections on Linux.

Explore: github.com/obdev/littlesn…

#Linux #eBPF #CyberSecurity #Privacy #OpenSource #NetSec
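As a user-space illustration of the same firewall-style idea (not the tool above, which hooks the kernel via eBPF; the blocklist entry and the 5-second polling loop are placeholder assumptions):

```python
# Poll outbound connections and flag remote addresses on a blocklist.
# Real eBPF tools get per-packet kernel events; this sketch just uses psutil.
import time
import psutil

BLOCKLIST = {"93.184.216.34"}   # placeholder entry (example.com)

def scan_once():
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.status == psutil.CONN_ESTABLISHED:
            if conn.raddr.ip in BLOCKLIST:
                name = psutil.Process(conn.pid).name() if conn.pid else "?"
                print(f"blocklisted host contacted: {conn.raddr.ip} by {name}")

while True:
    scan_once()
    time.sleep(5)   # polling interval; eBPF would react per event instead
```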
1 reply · 0 reposts · 0 likes · 115 views

Nguyen Minh Dat @DatMinh63963
@Connected_Data The practical value of neural networks emerges when they solve clearly defined problems with measurable outcomes under resource constraints.
0 replies · 0 reposts · 0 likes · 10 views

Connected Data @Connected_Data
Agentic GraphRAG 📚 New book release from O'Reilly Media

When enterprise agents fail to answer "which customers have NOT renewed," teams blame the model and upgrade to a larger one. It still fails. Vector similarity cannot handle negation. This reveals the scalability constraint most AI strategies overlook. Organizations invest heavily in model capability while the actual bottleneck exists one layer below: the semantic infrastructure that converts raw data into machine-navigable meaning.

What if your AI systems could retrieve information, reason over complex knowledge, plan actions, and continuously learn while maintaining enterprise-grade security and compliance? This new book from Anthony Alcaraz and Sam Julien guides technical leaders, engineers, and architects through the next evolution of generative AI. Combining retrieval-augmented generation with graph-based reasoning and agentic capabilities, it provides a practical blueprint for building scalable, auditable, and intelligent AI systems. Through real-world case studies, hands-on design patterns, and production-ready architectures, readers will learn to construct graph-native retrieval systems, integrate advanced reasoning into agent workflows, and tackle enterprise challenges around governance, scalability, and transparency.

Three layers work together, each addressing what the previous layer cannot. Metadata catalogs identify what exists and where to find it. Ontologies establish meaning through concepts, relationships, constraints, and valid operations. Knowledge graphs make both operational. Without metadata, agents cannot locate data. Without ontologies, agents cannot comprehend it. Without knowledge graphs, agents cannot traverse it. Missing any of these three causes systems to break down at the first negation, aggregation, or multi-hop query. The constraint is not model capability. It is semantic infrastructure.

Agentic GraphRAG: oreilly.com/library/view/a…
Role of Graphs in the AI Space: 2024.connected-data.london/talks/Role-of-…

#AgenticAI #RAG #EnterpriseArchitecture #SemanticWeb #ProductionAI

Connected Data London 2026 has been announced! 11-12 November, Leonardo Royal Hotel London Tower Bridge
📝 connected-data.london/post/cdl-2026-…

Join us for all things #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech #Ontology

🎟 Ticket sales are open. Benefit from early-bird prices with discounts of up to 30%. 2026.connected-data.london/?utm_source=tw…

📺 Sponsorship opportunities are available. Maximize your exposure with early onboarding. Contact us at info@connected-data.london for more.
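To make the negation point concrete, a tiny sketch with networkx (a toy schema of my own, not from the book): "which customers have NOT renewed" becomes an exact set difference over explicit edges, which nearest-neighbor vector search has no way to express.

```python
# Negation over an explicit graph: customers minus those with a RENEWED edge.
import networkx as nx

g = nx.DiGraph()
g.add_nodes_from(["Acme", "Globex", "Initech"], kind="customer")
g.add_edge("Acme", "2025-plan", relation="RENEWED")
g.add_edge("Globex", "2025-plan", relation="CHURNED")

customers = {n for n, d in g.nodes(data=True) if d.get("kind") == "customer"}
renewed = {u for u, v, d in g.edges(data=True) if d["relation"] == "RENEWED"}
print(customers - renewed)   # {'Globex', 'Initech'}: exact, auditable answer
```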
[image] · 3 replies · 1 repost · 5 likes · 352 views

Nguyen Minh Dat @DatMinh63963
@Aurelien_Gz The practical value of cloud computing emerges when it solves clearly defined problems with measurable outcomes over the next decade.
0 replies · 0 reposts · 0 likes · 13 views

Aurelien @Aurelien_Gz
Wow, since the last post blew up, here is another fascinating insight from my years working with car-on-demand companies and more traditional automakers.

Most people still think the entire automotive business is about selling or leasing vehicles. But there is a new market emerging that is absolutely massive: data.

Modern cars are packed with sensors and constantly collect real-time information. And this data is quickly becoming one of the most valuable revenue streams in the industry. Companies like Tesla, with their autonomous, sensor-rich fleets, are positioned to benefit enormously, but the same applies to many newer connected vehicles across all brands.

What makes this so powerful is how diverse the use cases are. Cities, for example, can tap into aggregated vehicle data to understand exactly where they need to intervene. If thousands of cars detect the same irregular bump on the road, you instantly know there is a pothole at a precise location. Scale that across a whole city and you have a live map of infrastructure issues before residents even complain. The same data can help optimize traffic flow, identify congestion patterns in real time, or highlight zones where drivers consistently brake or accelerate suddenly, revealing potential safety problems. Automakers can also use this information to better understand how people actually drive in the real world, which directly influences design, durability testing, and product evolution.

And then there is the insurance angle. As driving behavior becomes measurable at scale, dynamic insurance pricing emerges. Acceleration habits, braking patterns, cornering, speed consistency, environmental context: all of this will feed into future scoring models.

What we are seeing right now is only the beginning. With cars full of electronics, sensors, and especially autonomous vision systems, data is becoming one of the largest and most predictable revenue lines for automakers and mobility companies. Even traditional vehicles are now connected and constantly transmitting information that can be analyzed or monetized in multiple ways.

We are still just scratching the surface, but the shift is already underway. The value is no longer just in the vehicle itself. It is also in the billions of data points it generates every single day.
Quoted: Aurelien @Aurelien_Gz

I spent years working with car-on-demand companies to improve their user experience, and one of the largest sources of user frustration we consistently saw in the feedback was the cleanliness of the vehicles.

What we observed was incredibly consistent: once a car is even slightly dirty, people feel less responsible and start pushing boundaries. This is the Broken Windows Theory in action, where visible disorder encourages more disorder, especially when there is no one around to enforce norms.

And the mess is not just what cameras can capture. They can detect obvious things like trash or abandoned food, but they often miss the smaller details that users still react to, like dog hair too fine or out of frame, fingerprints, or tiny debris. At the time, we also had no way to reliably detect odors such as cigarettes, weed, or strong food smells. What could actually help in autonomous fleets is the combination of interior cameras that recognize behavior patterns, like someone smoking or eating, plus smoke detectors. Together, these could trigger an immediate response, such as stopping the ride or sending a warning to the passenger before the situation escalates.

One thing that did help was asking riders to confirm whether the car was clean when they entered. This taps into the consistency principle: once someone acknowledges a clean environment, they are more likely to keep it that way. And when Rider A says "clean" and Rider B a few minutes later says "dirty", it becomes a strong signal that something happened in between.

Operationally, the toughest challenge has always been real-time cleaning. Deploying human teams across a city does not scale. That is why sensors, interior monitoring, and the autonomous cleaning systems now being tested, like the one recently presented by Tesla, are such a big deal. They finally address one of the core UX bottlenecks in autonomous fleets.

14 replies · 26 reposts · 334 likes · 72.5K views

Nguyen Minh Dat @DatMinh63963
@rryssf The evolution of cloud computing is shaped by breakthroughs in research, market demand, and infrastructure maturity in emerging economies.
0 replies · 0 reposts · 0 likes · 8 views

Robert Youssef @rryssf
this paper from Voltropy shows why agents should stop letting models manage their own memory ☠️

the idea is called Lossless Context Management (LCM). and the framing alone is worth your time.

Recursive Language Models (RLMs) gave models full autonomy to write their own memory scripts. the model gets a REPL, writes Python loops to chunk and process its own context. maximally flexible. also maximally unpredictable. an efficient chunking script in one rollout becomes a bad one in the next.

LCM flips this entirely. instead of asking the model to invent a memory strategy, the engine handles it deterministically. old messages get compressed into a hierarchical DAG of summaries, but every original is preserved verbatim in an immutable store. the model never loses access to anything. it just sees progressively compressed views with stable pointers it can expand on demand.

the analogy they use is perfect: GOTO vs structured programming. early programs used unrestricted GOTO for any control flow the programmer wanted. maximally expressive. also impossible to reason about at scale. Dijkstra's critique replaced GOTO with constrained primitives (for, while, if/else) that were less flexible in theory but far more reliable in practice. RLM gives models GOTO-level power over their own context. LCM gives them structured control flow. less expressive. dramatically more predictable.

the results back this up. their agent Volt (running Opus 4.6) beats Claude Code on the OOLONG long-context benchmark at every single context length from 32K to 1M tokens. average improvement over raw Opus 4.6: +29.2 for Volt versus +24.7 for Claude Code. at 512K tokens the gap is +42.4 vs +29.8. at 1M it's +51.3 vs +47.0. below 32K both systems perform about the same. that's expected. when the full input fits in context, the architecture doesn't matter much. LCM's zero-cost continuity means it adds no overhead in this regime either. no latency penalty for short tasks.

where it gets interesting is how they handle parallelism. instead of the model writing loops to process large datasets, LCM introduces two deterministic operators: LLM-Map (stateless parallel processing, one LLM call per item) and Agentic-Map (full sub-agent per item with tool access). single tool call from the model. the engine handles all iteration, concurrency, retries, and schema validation.

Claude Code's approach: the model reads files linearly or writes bash scripts to split and process them. flexible, but the model has to correctly implement chunking logic every time AND maintain coherent state across chunks in its own context window. two sources of error compounding on each other. Volt's approach: the model never sees the raw dataset. it specifies a per-item prompt and output schema. the engine returns aggregated results. context saturation stops being a failure mode entirely.

they also solve a problem i haven't seen addressed this cleanly before: infinite delegation. when sub-agents can spawn sub-agents, you risk an agent delegating its entire task downward forever, doing no actual work. LCM enforces a scope-reduction invariant. every sub-agent must declare what work it's delegating AND what work it's keeping. if it can't articulate what it's retaining, the call gets rejected. no arbitrary depth limits needed. the recursion is structurally guaranteed to terminate.

the limitations section is honest, which matters. they acknowledge OOLONG has contamination issues (Opus 4.6 sometimes recognizes the underlying dataset from training data). they decontaminated by excluding tasks where reasoning traces showed memorization. the overall finding holds but the gap narrows. they also argue for procedurally generated benchmarks going forward, which is the right call.

the deeper implication is one we keep relearning from software engineering history: how you manage what the model sees may matter more than giving the model tools to manage it itself. every agent framework shipping with "let the model figure it out" memory strategies might be building on the wrong abstraction entirely. not because model autonomy is bad. but because deterministic infrastructure solving the common cases reliably is almost always better than stochastic flexibility solving every case unpredictably. less GOTO. more structured control flow. the lesson keeps repeating.
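a rough sketch of that store-plus-views contract in Python (my own toy, not Voltropy's code; the summarizer, the budget parameter, and the pointer format are invented for illustration):

```python
# Originals live verbatim in an immutable store; the model sees compressed
# views holding stable pointers it can expand on demand. Nothing is lost.
class LosslessContext:
    def __init__(self, summarize):
        self.store = []             # append-only: originals kept verbatim
        self.summarize = summarize  # deterministic engine-side compressor

    def append(self, message: str) -> int:
        self.store.append(message)
        return len(self.store) - 1  # stable pointer to the original

    def view(self, budget: int) -> list[str]:
        """Recent messages verbatim; older ones as pointer-tagged summaries."""
        old, recent = self.store[:-budget], self.store[-budget:]
        summaries = [f"[{i}] {self.summarize(m)}" for i, m in enumerate(old)]
        return summaries + recent

    def expand(self, pointer: int) -> str:
        return self.store[pointer]  # verbatim original, on demand

ctx = LosslessContext(summarize=lambda m: m[:20] + "…")
for msg in ["user: analyze sales Q1 in full detail",
            "tool: 500 rows returned",
            "user: now compare to Q2"]:
    ctx.append(msg)
print(ctx.view(budget=1))   # compressed view with stable [i] pointers
print(ctx.expand(0))        # the untouched original
```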
[image] · 19 replies · 30 reposts · 237 likes · 17.1K views

Nguyen Minh Dat @DatMinh63963
@rryssf Experts often evaluate digital privacy by measuring real-world performance rather than theoretical potential over the next decade.
0 replies · 0 reposts · 0 likes · 8 views

Robert Youssef @rryssf
MIT figured out how to make models learn new skills without forgetting old ones. no reward function needed. 🤯

the core problem with fine-tuning has always been catastrophic forgetting. you teach a model to use tools, it forgets how to do science. you teach it medicine, it forgets the tools. supervised fine-tuning is inherently off-policy. you're forcing the model to imitate fixed examples. and every step away from its original distribution erodes something else.

the standard fix is reinforcement learning. train on the model's own outputs so it stays on-policy. but rl needs a reward function. and reward functions are either expensive, brittle, or both.

MIT's insight is deceptively simple. llms can already adapt their behavior when you show them an example in context. that's in-context learning. no weight updates needed. so what if you used that ability to create a teacher signal? same model, two roles. teacher sees the query plus a demonstration. student sees only the query. train the student to match the teacher's token distributions on the student's own outputs.

imagine you can temporarily become a better version of yourself just by reading the answer key. you don't copy the answers. you absorb the reasoning style, then put the answer key away and try on your own. the "wiser you" guides the "regular you." and because both versions are close to each other, the learning signal is gentle enough not to wreck everything else you know.

results back this up. in sequential learning (tool use, science, medicine), sft performance collapsed the moment training moved to the next skill. sdft retained all three. no regression. on knowledge acquisition, sdft hit 89% strict accuracy vs sft's 80%. out-of-distribution: 98% vs 80%. that ood gap is the real story. sft memorized answers. sdft actually integrated the knowledge.

the theoretical grounding is elegant. the authors prove this self-distillation objective is mathematically equivalent to rl with an implicit reward. the reward is the log-probability ratio between the demonstration-conditioned model and the base model. no hand-crafted reward. the model's own in-context learning defines what "good" looks like. it's inverse rl without ever explicitly learning a reward.

scaling behavior is worth noting. at 3B parameters, sdft actually underperforms sft. the model's in-context learning is too weak. at 7B, 4-point advantage. at 14B, 7 points. the method gets better as models get smarter. it's going to matter more at frontier scale, not less.

limitations are real and worth reading. 2.5x compute cost vs sft. the student sometimes inherits teacher artifacts. doesn't work for fundamental behavioral shifts. requires strong in-context learning, so small models are out. these are real constraints, not footnotes.

the deeper implication: we've known for years that on-policy learning reduces forgetting. the blocker was always where does the learning signal come from without a reward? this paper's answer: from the model itself. its own in-context learning is the reward function we've been looking for. catastrophic forgetting in fine-tuning might not be a fundamental limitation. it might be a self-inflicted consequence of off-policy training.
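a loose sketch of that objective (my reading, not the paper's code; it assumes a HuggingFace-style causal LM with .generate() and .logits, and every name here is invented for illustration):

```python
# Same model plays teacher (sees demo + query) and student (query only);
# the student matches the teacher's token distributions on its own samples.
import torch
import torch.nn.functional as F

def sdft_loss(model, demo_ids, query_ids, max_new=32):
    # 1) student samples a continuation from the query alone (on-policy)
    with torch.no_grad():
        gen = model.generate(query_ids, max_new_tokens=max_new, do_sample=True)
    new = gen[:, query_ids.shape[1]:]              # the student's own tokens

    # 2) teacher scores those same tokens with the demonstration in context
    with torch.no_grad():
        t_in = torch.cat([demo_ids, query_ids, new], dim=1)
        t_logits = model(t_in).logits[:, -new.shape[1] - 1:-1]

    # 3) student scores them without the demonstration; match distributions
    s_in = torch.cat([query_ids, new], dim=1)
    s_logits = model(s_in).logits[:, -new.shape[1] - 1:-1]
    return F.kl_div(F.log_softmax(s_logits, -1),
                    F.softmax(t_logits, -1), reduction="batchmean")
```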
[image] · 26 replies · 96 reposts · 423 likes · 25.9K views

Nguyen Minh Dat @DatMinh63963
@kimmonismus A mature approach to machine learning balances technical performance with ethical considerations in emerging economies.
0 replies · 0 reposts · 1 like · 10 views

Chubby♨️ @kimmonismus
"If the evangelists of Silicon Valley are to be believed, this bang is about to get bigger. They maintain that artificial general intelligence (AGI), capable of outperforming most people at most desk jobs, will soon lift annual gdp growth to 20-30% a year, or more." A very exciting article that addresses an important issue: there is still far too little discussion about what a society will look like in which robotics and AI make goods cheaper and people work less.
[image] · 48 replies · 49 reposts · 457 likes · 40.4K views

Nguyen Minh Dat @DatMinh63963
@NikkiSiapno Successfully addressing algorithmic bias requires clear metrics, iterative experimentation, and continuous evaluation under resource constraints.
0 replies · 0 reposts · 0 likes · 6 views

Nikki Siapno @NikkiSiapno
Strategies to Prevent System Misuse and Resource Overload

Mass adoption is any system or application's dream. But with that comes the risk of misuse and resource overload. Measures should be in place to ensure the quality of service across all users.

Twitter/X faced this exact problem earlier this year. Their solution? 𝗥𝗮𝘁𝗲 𝗹𝗶𝗺𝗶𝘁𝗶𝗻𝗴, which restricts the number of requests a user or service can make on a system. While it's certainly a viable solution for many cases, there are other alternatives worth considering. These solutions, implemented defensively, help avoid the need for ad-hoc remedies.

𝗧𝗵𝗿𝗼𝘁𝘁𝗹𝗶𝗻𝗴
Throttling is a simple technique that slows the time it takes to process a task in order to minimize resource consumption. It is often used in conjunction with quotas or rate limiting so that users aren't entirely cut off from the service; instead, the quality of service is lowered to a reasonable level.

𝗔𝘂𝘁𝗵𝗲𝗻𝘁𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗔𝘂𝘁𝗵𝗼𝗿𝗶𝘇𝗮𝘁𝗶𝗼𝗻
These are important security measures that minimize the risk of service misuse and denial-of-service (DoS) attacks. They also help identify and limit the access of bots and scraper accounts. Initially, users or services are verified through credentials or methods like 2FA. After identification, the system decides their access level and resource priority, if applicable.

𝗖𝗔𝗣𝗧𝗖𝗛𝗔
CAPTCHA identifies human users and blocks bots by presenting human-solvable tests for access. Though popular, its impact on accessibility and the challenge of AI mimicking human behavior are significant considerations.

𝗜𝗻𝘁𝗿𝘂𝘀𝗶𝗼𝗻 𝗗𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗣𝗿𝗲𝘃𝗲𝗻𝘁𝗶𝗼𝗻 𝗦𝘆𝘀𝘁𝗲𝗺𝘀
Specifically used to mitigate the risk of system attacks, this approach involves monitoring network traffic to identify malicious activity. Intrusion Detection Systems (IDS) alert and report on identified threats, whereas Intrusion Prevention Systems (IPS) aim to block them.

Other solutions to prevent system overload include:
🔸 Load balancing: distribute requests across multiple servers.
🔸 Prioritization: ensure critical requests have priority access to system resources.
🔸 Circuit breaker pattern: prevent retries of tasks that are likely to fail.
🔸 Concurrency limits: cap the number of connections to the system or the number of concurrently running tasks.

Preventing system overload and misuse requires a full team effort to employ defensive engineering. The techniques mentioned above should be implemented carefully to ensure legitimate requests are not restricted. A mix of strategies should be employed to develop a full-system approach that suits your system's unique use case.
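As one concrete instance of the rate limiting described above, here is a minimal token-bucket limiter (my illustrative choice of algorithm; the post doesn't prescribe one):

```python
# Token bucket: refill at a steady rate, allow bursts up to a fixed capacity.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                  # caller throttles or rejects the request

bucket = TokenBucket(rate=5, capacity=10)    # 5 req/s, bursts up to 10
print([bucket.allow() for _ in range(12)])   # first 10 pass, then denials
```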
[image] · 9 replies · 51 reposts · 238 likes · 24.3K views

Android Developers @AndroidDev
Today we're releasing Android 15 to AOSP! 🎉 → goo.gle/47bgX1E

#Android15 brings:
🌟 Improved typography & internationalization
🌟 Productive developer experience
🌟 Camera and media improvements
🌟 Privacy and security enhancements
and more!
19 replies · 104 reposts · 434 likes · 48.1K views
Jessica | Replo and Instant expert, Figma Guru
Your buying box is where the decision happens. If this section is weak, nothing else on the page matters.

Here's what every high-converting buying box should include:
• Clear product title
• Price with any savings shown clearly
• Short, benefit-driven bullet points
• Variant selector that's simple and easy to use
• Quantity selector
• Strong, visible CTA button
• Trust signals like reviews, ratings, or guarantees
• Shipping and delivery clarity
• Payment options displayed upfront

No clutter. No confusion. No hidden information.

The buying box should answer the final question in your customer's mind: "Can I trust this and is it worth it?"

Design it with clarity and hierarchy. That's where conversions are won.

#Ecommerce #Figma #Shopify #Replo #Landingpage #Instant #Dtc #Cro
[image] · 3 replies · 0 reposts · 31 likes · 326 views

Nguyen Minh Dat @DatMinh63963
@BiIndia The main benefit of Bitcoin is its ability to create new opportunities.
0 replies · 0 reposts · 0 likes · 8 views

Business Insider India🇮🇳
New York-based Team SEArch+/Apis Cor won first place in the semi-finals of #NASA's competition, which required teams to make their design with modeling software.
[image] · 2 replies · 52 reposts · 203 likes · 0 views

Chidanand Tripathi @thetripathi58
I just found an excellent AI tool! It offers access to ChatGPT-o1, DeepSeek-R1, Claude, Gemini, Midjourney, Perplexity, Runway, and more—all in one place. Here's how to use it ↓
[image] · 75 replies · 29 reposts · 240 likes · 101.3K views

SHIBHERO @SHIBHERO
@lynk0x Why would I make things complicated? I'll just put the $2k into @ETHFanToken and I will be rich even without selling any of the tokens I bought. Impossible? Check us out at our official tg: t.me/EthFanEcosystem
1 reply · 3 reposts · 6 likes · 148 views

lynk @lynk0x
You want to become rich? Put $50 into 40 coins; once one hits 2000x, that's $100,000. It's that simple.
210 replies · 31 reposts · 1.5K likes · 197.3K views

Nguyen Minh Dat @DatMinh63963
@HeyNina101 Understanding artificial intelligence helps individuals stay competitive in the future.
0 replies · 0 reposts · 0 likes · 2 views

Nina @HeyNina101
If you want to learn Deep Learning from the ground up to advanced techniques, this open resource is a gem. Full notebook suite -> Link in comments
[image] · 19 replies · 280 reposts · 2.1K likes · 120.4K views

Nguyen Minh Dat @DatMinh63963
@MKBHD Financial freedom is becoming increasingly important as technology evolves.
0 replies · 0 reposts · 0 likes · 1 view

Marques Brownlee @MKBHD
iPhone 16 Pro
- Larger 6.3" and 6.9" displays with thinner bezels
- Larger batteries
- New desert titanium color
- New, faster A18 Pro chip
- New 48MP ultrawide camera
- 4K 120fps slow-motion video recording
[image] · 1.6K replies · 2.4K reposts · 46.8K likes · 4.8M views

Dr Vismaya VR ✨Enigma✨ @Vismaya9999
✍️ Sector Focus: Power

🔖 Power Generation
Renewable energy capacity will grow at 21% annually, while thermal capacity will grow at 6% annually; achieving this requires ₹30 lakh crore of investment. In India's total energy mix, renewable energy is set to rise from 41% (FY23) to 61% (FY30). This also includes adding 125 GW of renewable capacity to support the green hydrogen ecosystem.

🌞 Solar energy producers
💠 Adani Green Energy Ltd
💠 KPI Green Energy Ltd
💠 NTPC Green Ltd
💠 JSW Energy Ltd
💠 Tata Power Ltd

🌞 Solar energy equipment
💠 Premier Energies Ltd
💠 Waaree Energies Ltd

🌞 Solar energy equipment (SMEs)
💠 APS Ltd
💠 Solex Energy Ltd
💠 Insolation Energy Ltd

🌞 Solar EPC players
💠 Waaree Renewable Tech Ltd

🌞 Solar EPC players (SMEs)
💠 Bondada Engineering Ltd
💠 Oriana Power Ltd

💨 Wind energy producers
💠 Inox Green Energy Ltd
💠 KP Energy Ltd

💨 Wind energy equipment
💠 Inox Wind Ltd
💠 Siemens Ind Ltd
💠 Suzlon Energy Ltd

💨 Wind energy EPC players
💠 Inox Green Energy Ltd

🔖 Green Hydrogen
In India, domestic green hydrogen demand is projected at 2 MTPA by FY30 (below the target of 5 MTPA due to high production costs), with a capex opportunity of ₹10 lakh crore by FY30 across the GH2 value chain. The capex can be spread as ₹4.5 lakh crore for renewable energy, ₹4 lakh crore for ammonia production, and ₹2 lakh crore for electrolyzers.

♻️ Green hydrogen stocks
💠 L&T Ltd
💠 RIL Ltd (Reliance New Energy Ltd)

♻️ Green hydrogen: electrolysers
💠 Advait Infra Ltd
💠 Cummins India Ltd
💠 Waaree Energies Ltd

♻️ Green hydrogen stocks (proxy)
💠 Anuph Eng Ltd
💠 Kirloskar Brothers Ltd

🔖 Power Transmission
The National Electricity Plan targets peak demand of 458 GW by 2032, with a massive investment of ₹9.15 lakh crore:
- Transmission network capacity expansion from 4.9 to 6.5 lakh km
- Inter-region electricity transfer capacity rising from 119 to 168 GW
- Transformer capacity increase from 1,277 to 2,412 GVA (11.2% CAGR)

⚡️ Power transmission stocks
💠 GE Vernova Ltd
💠 KEC International Ltd
💠 Skippers Ltd
💠 Techno Electric Ltd
💠 Transrail Lighting Ltd

⚡️ Power transmission stocks (SME)
💠 Rajesh Power Ltd
💠 Viviana Power Ltd

⚡️ Transformer stocks
💠 CG Power & Ind Sol Ltd
💠 Hitachi Energy Ind Ltd
💠 Schilchar Tech Ltd
💠 Transformers & Rectifiers Ind Ltd

⚡️ Transformer stocks (SME)
💠 Danish Power Ltd

⚡️ Transformer stocks (proxy)
💠 Apar Ind Ltd

⚡️ Transformer stocks (proxy SMEs)
💠 Jaybee Laminations Ltd
💠 Vilas Transcore Ltd
💠 Yash High Voltage Ltd

#power @_Sandeep09 @1health2Wealth
[image] · 26 replies · 55 reposts · 248 likes · 64.9K views

Nguyen Minh Dat @DatMinh63963
@Prathkum Understanding AI tools helps individuals stay competitive in the future.
0 replies · 0 reposts · 0 likes · 0 views

Pratham @Prathkum
I got a 77.78% salary hike within 6 months of my first job.

Your first job is the most difficult territory for you to survive. Most people quit, get fired, or burn out, because it's a new world for everyone at least once. I survived. And I want to share these 9 things with you, which every manager should be doing while onboarding new team members.

1. Over-communicate: Always explain or report things in a verbose manner. "Done" or "Working on it" are not the way to go. Over-communicating is far better than under-communicating.

2. Don't hesitate to ask questions: Asking questions will help you learn new stuff, improve your workflow, and enable you to get tasks done quickly. The silly questions are usually the most logical ones.

3. Decision making: This is the most puzzling situation. It's hard to make decisions as a junior developer. Sometimes you will be stuck in a dilemma where you have to make a decision. Think twice, analyze possible outcomes, and take action accordingly.

4. Explore more: Keep exploring new technologies, tools, and ideas to help you deliver your goals 10X faster. Explore things, get inspired, and deliver.

5. Note it down: Trust me, your productivity will increase 10 times if you start noting things down. Prepare your day plan, take notes in meetings, and work accordingly.

6. Build trust: This may sound silly, but managers hate to manage. They have more important work to do. Always make sure to complete the urgent tasks your manager assigns. That's how you gain trust and stop being micromanaged.

7. Fix your schedule: It will feel like you have no time for your own stuff in your initial days. Eight hours of work + eight hours of sleep still leaves eight more hours to do what you want. Work in slots, if possible.

8. Criticism is fuel: Being criticized is not a bad thing if it relates to your work. Feedback is essential, and even when it comes in a grating way, it will help you improve faster.

9. Love what you do: Everything might be challenging in the beginning. Adapting to a new place, work, and people is not easy at all. Take it slow, give it some time, and then get your teeth into it.

That's all. You can win too.
[image] · 52 replies · 151 reposts · 1.1K likes · 296.9K views