Álvaro Tejeda

1.1K posts

@ATL_85

https://t.co/VxowRGiEdz - Finance, Data Science, AI, Recommender systems, Gaming, iGaming..

Joined March 2012
908 Following · 264 Followers
Álvaro Tejeda@ATL_85·
@yuriymatso How was your performance during the period Oct-25 to Dec-25? That is where I find most of my models taking the biggest DD.
0 replies · 0 reposts · 1 like · 190 views
Yuriy Matso@yuriymatso·
Developed a new NQ strategy with the following approach:
1. Very basic buy and sell signals using only 2 MA indicators
2. Trained 3 machine learning models on 2017-2024 data using 80+ features
3. 2025-2026 is new data that the ML models traded on

The 3 ML models vote on each signal. The strategy goes either long or short only if all 3 models agree. This strategy is superior in returns to all my prior rule-based NQ strategies:
1. Total profit is $1M+
2. Profit factor is 2.4
3. Max DD on new data is $20K
4. Win rate is 64%

What you need to develop these kinds of systems:
- Claude Code / Codex
- Continuous NQ data from 2017 to present
- Basic understanding of machine learning
- Basic understanding of the Python ecosystem (e.g., PyTorch)
- API access to a brokerage to test and automate
34 replies · 16 reposts · 163 likes · 19.9K views
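The unanimous-vote gating described above (trade only when all 3 models agree on direction) can be sketched in a few lines. This is an illustrative reconstruction, not the author's code; the signal values (+1 long, -1 short, 0 flat) are an assumed convention:

```python
# Sketch of the tweet's voting rule: three ML models each emit a direction
# (+1 long, -1 short); the strategy takes a position only on unanimity.
def vote(signals: list[int]) -> int:
    """Return +1 (go long) or -1 (go short) only if all models agree, else 0."""
    if all(s == 1 for s in signals):
        return 1
    if all(s == -1 for s in signals):
        return -1
    return 0  # disagreement -> stay flat

print(vote([1, 1, 1]))     # -> 1  (unanimous long)
print(vote([1, -1, 1]))    # -> 0  (disagreement, no trade)
print(vote([-1, -1, -1]))  # -> -1 (unanimous short)
```

Requiring unanimity trades fewer signals but filters out low-confidence setups, which is one plausible reason for the reported drawdown reduction.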
MapleStax Trades@MapleStax·
I’ve made over $100,000 in just 3 months using my custom CBC strategy. Now I turned it into a FREE TradingView indicator that tells you exactly when to buy and sell—no guesswork. I’ve been testing it for months, and it’s been printing. 📈 Like + Comment “Trade” - I’ll DM it to you. (Must be following to DM) $SPY $SPX
833 replies · 100 reposts · 1.2K likes · 160.5K views
Sean trades@SRxTrades·
I only trade A+ setups, which is why I make bank from trading. I've created a simple checklist so you can find the best breakouts. Follow this guide to the T and watch your win rate double... Like + reply "checklist" and I'll send it to you.
2.3K replies · 183 reposts · 3.2K likes · 400.2K views
Álvaro Tejeda retweeted
vLLM@vllm_project·
🚀 DeepSeek-OCR, the new frontier of OCR from @deepseek_ai exploring optical context compression for LLMs, is running blazingly fast on vLLM ⚡ (~2500 tokens/s on an A100-40G), powered by vllm==0.8.5 for day-0 model support.
🧠 Compresses visual contexts up to 20× while keeping 97% OCR accuracy at <10×.
📄 Outperforms GOT-OCR2.0 & MinerU2.0 on OmniDocBench using fewer vision tokens.
🤝 The vLLM team is working with DeepSeek to bring official DeepSeek-OCR support into the next vLLM release, making multimodal inference even faster and easier to scale.
🔗 github.com/deepseek-ai/De…
#vLLM #DeepSeek #OCR #LLM #VisionAI #DeepLearning
53 replies · 364 reposts · 2.6K likes · 1.5M views
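The compression claim above can be made concrete with simple arithmetic: at a compression ratio r, content that would cost N text tokens is carried by roughly N/r vision tokens. A minimal sketch using only the tweet's headline figures (the 10,000-token page is a made-up example, not a measurement):

```python
import math

def vision_token_budget(text_tokens: int, compression_ratio: float) -> int:
    """Approximate vision tokens needed to represent `text_tokens` worth of
    document content at a given optical compression ratio."""
    return math.ceil(text_tokens / compression_ratio)

# Per the tweet: ~97% OCR accuracy is retained below 10x compression,
# and up to 20x is possible with more loss.
print(vision_token_budget(10_000, 10))  # -> 1000 vision tokens
print(vision_token_budget(10_000, 20))  # -> 500 vision tokens
```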
Álvaro Tejeda@ATL_85·
The ridiculously bad organization of @coolturalfest in Almería is epic: a two-hour queue just to get in. Never again! #nuncamas
1 reply · 0 reposts · 6 likes · 196 views
Álvaro Tejeda retweeted
Sumanth@Sumanth_077·
Stanford CS336: Large Language Models from Scratch! This comprehensive course on LLMs covers the full process of building one from scratch, including data collection, pretraining, transformer architecture, training, evaluation, and deployment.
9 replies · 108 reposts · 665 likes · 50.1K views
Álvaro Tejeda retweeted
Dorsa@dorsa_rohani·
New fastest shortest-path algorithm in 41 years! Tsinghua researchers broke the 1984 "sorting barrier" (the O(m + n log n) bound that Dijkstra-style algorithms hit with Fredman and Tarjan's Fibonacci heaps), achieving O(m log^(2/3) n) time. This means faster route planning, less traffic, cheaper deliveries, and more efficient networks - and a CS curriculum revamp =)
337 replies · 3.2K reposts · 29.4K likes · 2.4M views
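For reference, the classical baseline the new result improves on is Dijkstra's algorithm with a binary heap, which runs in O((m + n) log n). A minimal sketch (the example graph is made up for illustration):

```python
import heapq

def dijkstra(graph: dict, source) -> dict:
    """Classic Dijkstra single-source shortest paths with a binary heap.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # found a shorter path to v; relax the edge
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # -> {'a': 0, 'b': 1, 'c': 3}
```

The "sorting barrier" refers to the fact that heap-based approaches effectively sort vertices by distance; the new algorithm avoids paying that full sorting cost.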
Álvaro Tejeda retweeted
Mayank Pratap Singh@Mayank_022·
I trained a 100-million-parameter DeepSeek V3 LLM from scratch. Here's what you need to know.

Previously I trained the traditional GPT-2 architecture, which has become obsolete with recent LLM advancements. Most recent models, like Llama, Mistral, DeepSeek, and GPT-4, use the latest architectures.

✦ Model configuration of my SLM (DeepSeek V3)
- Parameters: 109,032,032
- Embedding dimension: 512
- Layers: 8
- Heads: 8
- Experts (MoE): 8
- Experts per token: 2

✦ DeepSeek brings major architectural changes:
- Multi Head Latent Attention
- Mixture of Experts
- RMS Norm
- Multi Token Prediction

✦ Dataset challenge
- TinyStories is great for learning SLMs; I trained GPT-2 on it previously with good results.
- But I needed a more challenging dataset.
- If I used TinyStories again on DeepSeek, how would I know whether MHLA, MoE, or MTP works better than the old architecture?
- The old architecture can handle it, so the new DeepSeek model would too, without exercising the latest advancements.
- That's why I moved to the FineWeb-Edu dataset. Thanks @YuvrajS9886 for suggesting this dataset.

✦ Training journey
- Rented an A100 PCIe GPU and trained the model.
- Did test runs. During the final run, the model was 65% trained but stopped due to a glitch after 4 hours.
- Fixed all edge cases and ran training again with increased config parameters.
- Final training: 7 hours, 20,000 epochs.

Total GPU cost: $17
- $9.53 for the main 7-hour run
- $7.42 for experiments and demos

✦ Reflection
An amazing long project that taught me the latest architectural advancements. I'll reimplement and revisit it after a few weeks because there's too much complexity, mostly in the Multi Head Latent Attention part. I need to make the concepts stronger.

Code: github.com/Mayankpratapsi…
Final trained model: huggingface.co/Mayank022/Deep…
Dataset: huggingface.co/datasets/Huggi…

Resources
Huge shoutout to @raj_dandekar again for creating one of the most detailed video series about DeepSeek; this was my primary resource for the implementation.
Playlist: youtube.com/watch?v=QWNxQI…

Blogs by @MaartenGr: excellent visual blogs for understanding MoE in detail. Thanks Maarten for your amazing contributions to the community through your books and blogs. newsletter.maartengrootendorst.com/p/a-visual-gui…
Blog on MoE: huggingface.co/blog/moe
Implementation of MoE from scratch by @aviTwit3: huggingface.co/blog/AviSoori1… One of the most detailed blogs on implementing Mixture of Experts. Thanks Avinash for this blog; it helped me understand Mixture of Experts much better.

If you're in the ML & LLM space, I'd love to connect and discuss the field in general, so give a follow for that.
15 replies · 113 reposts · 743 likes · 47.9K views
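The "Experts (MoE): 8, Experts per token: 2" configuration above corresponds to top-2 routing: a gating network scores all experts and each token is sent only to the 2 highest-scoring ones. A simplified pure-Python sketch of the routing step (the gate scores below are made-up numbers; real models learn them):

```python
import math

def top2_route(gate_logits: list[float]) -> list[tuple[int, float]]:
    """Top-2 MoE routing, simplified: softmax over expert gate logits,
    keep the 2 largest, renormalize their weights to sum to 1."""
    m = max(gate_logits)
    exp = [math.exp(x - m) for x in gate_logits]  # numerically stable softmax
    probs = [e / sum(exp) for e in exp]
    top2 = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:2]
    total = sum(probs[i] for i in top2)
    return [(i, probs[i] / total) for i in top2]  # (expert index, mixture weight)

# 8 experts; this token is routed to the 2 with the highest gate scores.
gates = [0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3]
print(top2_route(gates))  # experts 1 and 4, weights summing to 1
```

Because only 2 of 8 expert MLPs run per token, compute per token is far lower than the total parameter count suggests, which is the main appeal of MoE at this scale.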
Álvaro Tejeda retweeted
Deedy@deedydas·
DeepSeek just dropped the single best end-to-end paper on large model training. It covers:
- Software: MLA, training in FP8, DeepEP, LogFMT
- Hardware: Multi-Rail Fat Tree, Ethernet RoCE switches
- Mix: IBGDA, 3FS filesystem
DeepSeek's engineering depth is insane. Must read.
44 replies · 669 reposts · 4.4K likes · 327.7K views
Álvaro Tejeda retweeted
elvis@omarsar0·
AI Agents vs. Agentic AI
An interesting paper summarizing the distinctions between AI Agents and Agentic AI. It also covers the key ideas, solutions, and future directions. Here are my notes:
227 replies · 1.1K reposts · 5.6K likes · 752.6K views
Álvaro Tejeda retweeted
Tom Yeh@ProfTomYeh·
Autoencoder by hand ✍️ in Excel~ I designed this exercise to show how an encoder-decoder network converts input to a code and reconstructs the input from the code. It is annotated with equations, PyTorch, and graphs. 👇 Join the 'AI Math' community. Download the xlsx.
9 replies · 266 reposts · 1.5K likes · 101.5K views
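The encode-to-code, decode-from-code idea from the exercise can be shown numerically with the smallest possible autoencoder: one scalar weight each for the encoder and decoder, trained by plain gradient descent to reconstruct the input. This is an illustrative toy (data and learning rate are made up), not the spreadsheet's actual setup:

```python
# 1-unit linear autoencoder: x -> h = w_enc*x (encode) -> xhat = w_dec*h (decode).
# Gradient descent on (xhat - x)^2 drives w_enc * w_dec toward 1, i.e. perfect
# reconstruction through the code.
data = [0.5, -1.0, 2.0, 0.3]
w_enc, w_dec, lr = 0.1, 0.1, 0.05

for _ in range(2000):
    for x in data:
        h = w_enc * x           # encode: input -> code
        xhat = w_dec * h        # decode: code -> reconstruction
        err = xhat - x          # reconstruction error
        # gradients of the squared error w.r.t. each weight
        w_dec -= lr * 2 * err * h
        w_enc -= lr * 2 * err * w_dec * x

print(round(w_enc * w_dec, 3))  # converges toward 1.0
```

Real autoencoders add a bottleneck (code smaller than input) and nonlinearities, but the training loop is the same idea repeated over vectors and matrices.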
Álvaro Tejeda@ATL_85·
Awesome event by @AiBirras! The AI future in Granada is brighter than ever! Listening to experts such as @draxus is always a pleasure!
0 replies · 0 reposts · 1 like · 18 views
Álvaro Tejeda retweeted
Máster DATCOM UGR@DatcoMugr·
I'm pleased to introduce my good colleague and friend @ATL_85, who will give a talk on professional careers in Data Science and Engineering, with advice to help our students progress in the world of business intelligence.
0 replies · 2 reposts · 3 likes · 239 views
Álvaro Tejeda retweeted
Markus J. Buehler@ProfBuehlerMIT·
We trained a graph-native AI, then let it reason for days, forming a dynamic relational world model on its own - no pre-programming. Emergent hubs, small-world properties, modularity, & scale-free structures arose naturally. The model then exploited compositional reasoning & uncovered uncoded properties from deep synthesis: Materials with memory, microbial repair, self-evolving systems. Video shows it unfolding, made with @grok @xai.
116 replies · 337 reposts · 2.4K likes · 358.8K views
Álvaro Tejeda retweeted
John Rush@johnrushx·
I've tried all (24) AI coding agents & IDEs 😵‍💫 [Cursor, Softgen, Windsurf, Wrapifai, Copilot, Lovable, Bolt, v0, Replit, MarsX, Claude, AmazonQ, Pear, Devin, Github Spark, IDX, Webdraw, Tempo, Cline, Continue, Databutton, Base44, Qodo, Aider] The Vibe Coding giga-thread:
468 replies · 1.5K reposts · 12.7K likes · 2.3M views
Álvaro Tejeda retweeted
Operador Nuclear@OperadorNuclear·
Let's save the Almaraz nuclear power plant. Demonstration on Saturday, January 18 at 10:00, departing from the Almaraz town hall. @SiAlmaraz
41 replies · 1.4K reposts · 3.4K likes · 71.7K views