Martin Pascua

2.8K posts


@MartinPascuaDev

Developer. Researcher. Research in Generative AI. Part of the LN Data team. River Plate fan ♥️

Joined December 2020
2K Following · 200 Followers
Martin Pascua retweeted
Nav Toor @heynavtoor
🚨 Someone just open sourced a computer that works when the entire internet goes down. It's called Project N.O.M.A.D. A self-contained offline survival server with AI, Wikipedia, maps, medical references, and full education courses. No internet. No cloud. No subscription. It just works.

Here's what's packed inside:
→ A local AI assistant powered by Ollama (works fully offline)
→ All of Wikipedia, downloadable and searchable
→ Offline maps of any region you choose
→ Medical references and survival guides
→ Full Khan Academy courses with progress tracking
→ Encryption and data analysis tools via CyberChef
→ Document upload with semantic search (local RAG)

Here's the wildest part: a solar panel, a battery, a mini PC, and a WiFi access point. That's it. That's your entire off-grid knowledge station. 15 to 65 watts of power. Works from a cabin, an RV, a sailboat, or a bunker.

Companies sell "prepper drives" with static PDFs for $185. This gives you a full AI brain, an entire encyclopedia, and real courses for free. One command to install. 100% Open Source. Apache 2.0 License.
English · 595 · 4K · 24.2K · 1.1M
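The "document upload with semantic search (local RAG)" item above is the retrieval half of a RAG pipeline. A minimal sketch of that idea, using a crude bag-of-words similarity in place of a real embedding model; all function names and documents here are illustrative and not from Project N.O.M.A.D.:

```python
# Sketch of local semantic search: rank documents by similarity to a query.
# A real local RAG setup would embed text with a model (e.g. via Ollama);
# here a lowercase bag-of-words vector stands in for the embedding.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude stand-in for an embedding: word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Return the ids of the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(docs[d])), reverse=True)
    return ranked[:k]

docs = {
    "water": "how to purify and filter drinking water in the field",
    "solar": "sizing a solar panel and battery for a small off grid server",
    "wounds": "treating cuts burns and wounds with basic medical supplies",
}
print(search("solar panel battery sizing", docs))  # → ['solar']
```

Swapping `vectorize` for a real embedding call is what turns this toy into the "semantic" search the tweet describes; the ranking logic stays the same.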
Martin Pascua retweeted
Andrej Karpathy @karpathy
Thank you Jensen and NVIDIA! She’s a real beauty! I was told I’d be getting a secret gift, with a hint that it requires 20 amps. (So I knew it had to be good). She’ll make for a beautiful, spacious home for my Dobby the House Elf claw, among lots of other tinkering, thank you!!
NVIDIA AI Developer @NVIDIAAIDev

🙌 Andrej Karpathy's lab has received the first DGX Station GB300 -- a Dell Pro Max with GB300. 💚 We can't wait to see what you'll create @karpathy! 🔗 blogs.nvidia.com/blog/gtc-2026-… @DellTech
English · 524 · 829 · 19K · 991.3K
Martin Pascua retweeted
Agustin Mario Gimenez
CLAUDIO IN 🇦🇷 @AnthropicAI published #Claude usage data by country. Argentina ranks 63rd of 116, using AI 25% less than expected for its GDP. Peru, Chile, and Uruguay are ahead of us. What do Argentines use AI for? 5.3% for code. 5% for college coursework. And the most Argentine stat of all: we are the country that uses AI the most to write creative fiction, 1.6 times the global average. Literally: we ask the AI to write us stories. What barely shows up: business planning (2.3%), strategy (barely visible), marketing (not in the top 10). Argentine companies haven't gotten on board yet.
Spanish · 43 · 104 · 1.1K · 104.8K
Martin Pascua retweeted
MiniMax_Agent @MiniMaxAgent
MiniMax-M2.7 just landed in MiniMax Agent. The model helped build itself. Now it's here to build for you. ↓ Try Now: agent.minimax.io
English · 72 · 186 · 1.4K · 603.6K
Martin Pascua retweeted
¿Por qué es tendencia? @porquetendencia
"DLSS 5": Because NVIDIA announced an "artificial intelligence technology that improves visual fidelity in video games"
Spanish · 143 · 293 · 7K · 403.8K
Martin Pascua retweeted
Peter Steinberger 🦞 @steipete
Super excited to see @Microsoft getting involved and helping to make MS Teams top notch for @openclaw!
Brad Groux @BradGroux

Sorry for the slow updates today, huge things brewing! We're going to have big news shortly, massive changes coming (for the better) to OpenClaw for Microsoft Teams sooner rather than later! I hope to have a roadmap out to you all sometime next week. I can say that I've spoken to more than a dozen Microsoft employees who want to be involved, and we have a team of six dedicated to helping us as we sit, and I am sure that will grow. They're dogfooding OpenClaw, it isn't all talk. I was also on a call today with @steipete and many from the team, and they too want to get Microsoft Teams, and other extensions and plugins, into a better state. It is inspiring to see so many tremendously talented folks rowing in the same direction for the common good. Thank you to all the Microsofties out there, and thanks to all of you for your patience! I love being the dumbest guy in the room, and working with the amazing volunteers at OpenClaw and at Microsoft, I can assure you that's the case!

English · 52 · 118 · 2.1K · 310.1K
Martin Pascua retweeted
Kris Kashtanova @icreatelife
🎉 We just released Rotate Object in Photoshop (beta) 🎉 You can now rotate 2D images! 🤯 Then use Harmonize to add light and shadows, to blend it perfectly with the rest of the scene. It's like Turntable in Illustrator, but instead of vectors, it's pixels in Photoshop!
English · 371 · 923 · 9.7K · 1.3M
Martin Pascua retweeted
Elon Musk @elonmusk
Macrohard, or Digital Optimus, is a joint xAI-Tesla project, coming as part of Tesla's investment agreement with xAI. Grok is the master conductor/navigator with deep understanding of the world to direct Digital Optimus, which is processing and actioning the past 5 secs of real-time computer screen video and keyboard/mouse actions. Grok is like a much more advanced and sophisticated version of turn-by-turn navigation software. You can think of it as Digital Optimus AI being System 1 (the instinctive part of the mind) and Grok being System 2 (the thinking part of the mind). This will run very competitively on the super low cost Tesla AI4 ($650) paired with relatively frugal use of the much more expensive xAI Nvidia hardware. And it will be the only real-time smart AI system. This is a big deal. In principle, it is capable of emulating the function of entire companies. That is why the program is called MACROHARD, a funny reference to Microsoft. No other company can yet do this.
English · 8.3K · 11.9K · 80.7K · 47.5M
Martin Pascua retweeted
Perplexity @perplexity_ai
Announcing Personal Computer. Personal Computer is an always-on, local merge with Perplexity Computer that works for you 24/7. It's personal, secure, and works across your files, apps, and sessions through a continuously running Mac mini.
English · 1.7K · 3.5K · 32.5K · 14M
Martin Pascua retweeted
Andrej Karpathy @karpathy
Three days ago I left autoresearch tuning nanochat for ~2 days on the depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
English · 966 · 2.1K · 19.4K · 3.5M
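The loop Karpathy describes (propose a change, measure validation loss, keep it only if it improves, repeat) can be sketched as a toy greedy search. The real system uses an LLM agent proposing code changes and actual training runs; below, a quadratic bowl stands in for validation loss, random perturbations stand in for proposed changes, and `val_loss`, the parameter names, and all constants are invented for illustration:

```python
# Toy autoresearch loop: greedily keep any proposed change that lowers the
# evaluation metric, mirroring "it looked at the results of experiments and
# used that to plan the next ones" in a maximally simplified form.
import random

def val_loss(params: dict) -> float:
    # Hypothetical proxy metric, minimized at lr=0.02, wd=0.1.
    return (params["lr"] - 0.02) ** 2 + (params["wd"] - 0.1) ** 2

def autoresearch(params: dict, rounds: int = 500, seed: int = 0) -> dict:
    rng = random.Random(seed)
    best = dict(params)                         # never mutate the caller's dict
    best_loss = val_loss(best)
    for _ in range(rounds):
        candidate = dict(best)
        key = rng.choice(list(candidate))
        candidate[key] += rng.gauss(0, 0.01)    # a proposed "change"
        loss = val_loss(candidate)
        if loss < best_loss:                    # keep only real improvements
            best, best_loss = candidate, loss
    return best

tuned = autoresearch({"lr": 0.1, "wd": 0.5})
print(round(tuned["lr"], 3), round(tuned["wd"], 3))
```

The expensive part in practice is that each `val_loss` call is a training run, which is why promoting ideas found at small scale to larger models, as the tweet describes, matters.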
Santi Torres @SantiTorAI
OpenClaw is a money-making machine. But it depends on how you use it. That's why I created "OpenClaw: installation, skills, multi-agents, and best practices". FREE for 24 hours only! Just: 1. Like this post 2. Comment "CLAW" 3. Follow me to get the DM
Spanish · 214 · 19 · 233 · 12.9K
Martin Pascua retweeted
Román Avila Páez @andavip
A few days ago I talked with my old man about what's happening with Claude. He's a civil engineer, 78 years old, and he gave me an analogy that put my mind at ease.

Before calculators existed, structural calculations were done by hand: polygons, matrices, everything on paper. They took weeks and left your brain fried. Then came the slide rule, and it took days. And when the electronic calculator appeared, it took only hours. The profession never changed. They just became more efficient.

And that got me thinking about what happens to me today. Planning used to matter because your resources were scarce. You ran one experiment, picked one design, and bet everything on it working. If it didn't work, throwing it away was expensive, in both time and money. That's why horrible systems exist: redoing something that already works is "too expensive."

Today we can try three different approaches to solve the same problem. Not mockups, real things. See them running and decide which one lives and which one dies. The cost of being wrong has dropped so much that experimenting stopped being a luxury. Tools change. The profession doesn't.
Spanish · 2 · 10 · 51 · 20K
Martin Pascua retweeted
Google en español @googleespanol
🚨 Pomelli is now available in Latin America. Our @GoogleLabs tool uses AI to analyze your brand's DNA and generate copy ideas, campaigns, and images aligned with your style in minutes. 🛒 If you work in marketing or social media, or run a small business, this will save you hours of production. 👀 Start here 👇 labs.google.com/pomelli
Spanish · 73 · 760 · 8.9K · 2.6M
Martin Pascua retweeted
Alan Daitch @AlanDaitch
Everything you were told about the power of artificial intelligence is a lie. And the first real test just came out. You know why? Imagine a student who has the exam answers before taking it. Would you trust their grade? That's exactly what happens with AI. Most of the benchmarks used to measure its capability relied on problems that already existed on the internet, which the AI could have memorized during training. To make matters worse, the only serious attempt to fix this, FrontierMath, was funded by OpenAI. That's a student grading their own exam.

All of this changed when a group of 11 of the most respected mathematicians on the planet, including a winner of the equivalent of the Nobel Prize in mathematics along with people from Harvard, Yale, Stanford, and Columbia, decided to do something that had never been done. It's called "First Proof," and it's the perfect exam to settle the debate. Each of them contributed a problem drawn from their own research that had taken them weeks or months to solve. The key is that none of these problems had ever been published on the internet. As if that weren't enough, they also included a conjecture: a problem that had been open for more than a decade with no known solution.

They invited the best AIs on the planet and gave them a week to solve the problems without any human help. Surprisingly, the best performer was Google DeepMind's AI, solving 6 of the 10, including the conjecture! The AI spent more time thinking about that problem than about all the others combined. On the 4 it didn't solve, it was the only AI to pull off a feat: admitting it didn't know instead of making something up.

But before concluding "that's it, AI thinks," the mathematicians are deep in debate. The solutions are correct, but the method was like solving a Rubik's cube by turning the faces at random at the speed of light until it's done. You solved it, but not from an elegant new theory; just infinite patience.

So, does this count as thinking? And what if, deep down, it doesn't matter as long as we keep solving what we couldn't before? The rematch is March 14. 🔗 Source: 1stproof.org
Spanish · 28 · 200 · 757 · 45.9K
Martin Pascua retweeted
Qwen @Alibaba_Qwen
🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B

✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation: native multimodal, improved architecture, scaled RL.
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models

And yes, we're also releasing the Base models. We hope this better supports research, experimentation, and real-world industrial innovation.

Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…
English · 916 · 2.9K · 21.4K · 8.9M
Martin Pascua retweeted
SambaNova @SambaNovaAI
SN50 is here, the fastest chip built for agentic AI. Up to 5X faster, it runs agentic AI at 3X lower cost than GPUs, unlocking cloud-scale inference economics. We've also planned a multi-year strategic collaboration with @intel and raised $350M+ from @Vista_Equity, Cambium Capital & @TRowePrice to scale manufacturing and cloud capacity. Learn more: bit.ly/4qUsx9F
English · 14 · 55 · 240 · 70.2K
Martin Pascua retweeted
Mark Gadala-Maria @markgadala
This story is actually insane:
• dude drops $2000 on a DJI robot vacuum like a lunatic
• refuses to use the normal app like a peasant
• Sammy Azdoufal fires up Claude to crack the API so he can drive it with an Xbox controller
• Claude delivers the goods
• pulls an auth token from their servers, connects successfully
• except the system thinks he controls 7000 vacuums
• checks again
• yep, seven thousand
• DJI built authentication with zero device ownership verification
• any valid token works for any unit on the planet
• Sammy now has eyes inside homes across 24 countries
• live vacuum camera feeds everywhere
• full floor plans from the mapping data
• some guy in Germany eating cereal at 3am, unaware his roomba is snitching
• one API call away from being the most informed burglar in history
• all he wanted was to steer his vacuum with a joystick
• does the right thing and reports it
• DJI fixes it in two days
• back to normal life with his stupidly expensive floor cleaner
• IoT companies stay undefeated at shipping garbage security
English · 1.1K · 9.9K · 64.6K · 8.6M
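The core bug in the thread above ("zero device ownership verification, any valid token works for any unit") is a classic broken object-level authorization flaw. A hypothetical sketch of the vulnerable check next to the fixed one; the handler names, tokens, and device table are invented for illustration and are not DJI's actual API:

```python
# Sketch of the flaw: validating the token without validating that the
# token's owner actually owns the requested device.
TOKENS = {"tok-sammy": "sammy"}                 # auth token -> account
DEVICES = {"vac-1": "sammy", "vac-2": "alice"}  # device id -> owner

def get_camera_feed_vulnerable(token: str, device_id: str) -> str:
    if token not in TOKENS:
        raise PermissionError("invalid token")
    # BUG: any valid token can read any device on the platform.
    return f"feed from {device_id}"

def get_camera_feed_fixed(token: str, device_id: str) -> str:
    user = TOKENS.get(token)
    if user is None:
        raise PermissionError("invalid token")
    if DEVICES.get(device_id) != user:          # the missing ownership check
        raise PermissionError("not your device")
    return f"feed from {device_id}"

# sammy's valid token reaches alice's vacuum through the vulnerable path:
print(get_camera_feed_vulnerable("tok-sammy", "vac-2"))
```

The two-day fix described in the thread amounts to adding that one ownership comparison between authentication and the data lookup.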
Martin Pascua retweeted
Wildminder @wildmindai
17,000 tokens per second!! Read that again! The LLM is hard-wired directly into silicon. No HBM, no liquid cooling, just raw specialized hardware. 10x faster and 20x cheaper than a B200. The "waiting for the LLM to think" era is dead. Code generates at the speed of human thought. A transition from brute-force GPU clusters to actual AI appliances. taalas.com/the-path-to-ub…
English · 387 · 949 · 7.5K · 1.8M
Martin Pascua @MartinPascuaDev
It's not one of the big LLMs, it's an 8B model. But 17K tokens/sec, running on a CPU, changes your perspective. More than the model's size, what's impressive is the speed. It feels like 2026, in terms of AI news, started about 300 days ago.
Wildminder @wildmindai (quoted tweet, above)
Spanish · 0 · 0 · 0 · 39