Unai Martínez

12.2K posts

@newuni

Software Engineer | Co-founder at Ideable Solutions

Joined April 2008
666 Following · 339 Followers
Unai Martínez retweeted
Carlos Santana (@DotCSV):
EPIC! It's confirmed that AI has solved the first math problem from the FrontierMath: Open Problems benchmark, which is made up of problems that remained unsolved after attempts by the mathematical community. The first of many more to come!
Epoch AI (@EpochAIResearch):

AI has solved one of the problems in FrontierMath: Open Problems, our benchmark of real research problems that mathematicians have tried and failed to solve. See thread for more.

Unai Martínez retweeted
Kiko Beats (@Kikobeats):
I'm glad to introduce `optimo` ✨ A simple CLI for optimizing media for the web optimo.microlink.io
Unai Martínez retweeted
Claude (@claudeai):
1 million context window: Now generally available for Claude Opus 4.6 and Claude Sonnet 4.6.
Unai Martínez retweeted
Antonio Ortiz (@antonello):
On the most common way of being wrong about artificial intelligence that I run into today: it's that of the person who tried ChatGPT in 2023, saw how often it was wrong, understood where its training came from, and held on to that snapshot. It's a kind of "educated misinformation" that prevents understanding everything it can do right now. The stochastic parrot metaphor confuses the training mechanism with the nature of the resulting system. In fact, I suspect the debate will shift quickly: from "AI is too dumb, you can't trust it, it makes tons of mistakes" to "the problem is precisely that AI is too smart." On Error500 today, on the death of the stochastic parrot: error500.net/p/el-loro-esto…
Unai Martínez retweeted
Logan Kilpatrick (@OfficialLoganK):
Say hello to Gemini Embedding 2, our new SOTA multimodal model that lets you bring text, images, video, audio, and docs into the same embedding space! 👀
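Model aside, "the same embedding space" means items from any modality map to vectors you can compare directly, typically with cosine similarity. A minimal, self-contained sketch of that comparison, using made-up 4-dimensional vectors as stand-ins for real model output (real embeddings have hundreds or thousands of dimensions):

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: in a shared multimodal space, the text "a cat"
# and a photo of a cat should land near each other. All values are invented.
text_vec  = [0.9, 0.1, 0.3, 0.0]   # embedding of the text "a cat"
image_vec = [0.8, 0.2, 0.4, 0.1]   # embedding of a cat photo
audio_vec = [0.0, 0.9, 0.0, 0.8]   # embedding of unrelated audio

print(cosine_similarity(text_vec, image_vec))  # high: related content
print(cosine_similarity(text_vec, audio_vec))  # low: unrelated content
```

The point of a single space is exactly this: once everything is a vector, cross-modal search is just nearest-neighbor lookup by similarity.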
Unai Martínez retweeted
Bernardo Quintero (@bquintero):
A small anecdote/reflection on the increasingly present future of engineers, programmers, and the rest of us who make a living typing :) This happened to me last week, and it's what's coming... linkedin.com/pulse/lo-human…
Unai Martínez retweeted
Mario Zechner (@badlogicgames):
recommended viewing.
Gergely Orosz (@GergelyOrosz):

9 interesting observations from my conversation with Mitchell Hashimoto (@mitchellh, creator of Ghostty, founder of HashiCorp):

1. Vagrant was created because dev environment setup was an unbillable time sink at a consultancy. At the Ruby on Rails shop where Mitchell worked, jumping onto another client's project could waste half a day. This inspired building Vagrant.
2. Terraform won, despite being 7th to market. It won through relentless conference presence, community building, and a better developer experience, not timing.
3. HashiCorp had no real business for four years, and its first commercial product was a full-on failure. The initial product, Atlas, required customers to adopt the entire HashiCorp stack, which was a hard sell. HashiCorp pivoted to selling individual services like Vault, and that approach proved to be a winner.
4. VMware almost bought HashiCorp for ~$100M, and Terraform would not have happened if it had. VMware took the offer to its board, where the acquisition was rejected by a single vote. Mitchell said Terraform probably never would have existed if the VMware purchase had gone through.
5. Mitchell's new rule for building software: always have an agent running in the background doing something. He kicks off tasks before leaving the house (research, edge-case analysis, library comparisons) so work progresses while he drives or is away.
6. Open source is moving from "default trust" to "default deny," and Mitchell thinks that's how it should be, because AI makes it trivial to create plausible-looking but incorrect, low-quality contributions. As he put it: "open source has always been a system of trust. Before, we've had default trust. Now it's just default deny."
7. Git and GitHub may not survive the agentic era in their current form. Agents cause so much churn that merge queues become untenable, branches proliferate, and repos balloon. Mitchell compares the needed shift to Gmail's revolution for email: "We're at the Gmail moment for version control... never delete, archive everything."
8. The best engineers Mitchell ever hired had boring, invisible backgrounds: no GitHub contributions, no public profiles, companies you've never heard of. "Every moment you spend on social media is taking away from something else... the best engineers are the ones that context-switch the least."
9. Mitchell's advice for AI-skeptical engineers: start by delegating your research, not your code. As he puts it: "There's a lot of people like, 'I don't want it to write code for me.' But just delegate some of the research part." He uses agents for library comparisons, edge-case analysis, and deep research, not just code generation. Mitchell: "You don't need to pick up on the 'it must replace you as a person' kind of propaganda."

Watch the full episode here: youtu.be/WjckELpzLOU Other platforms and transcript: newsletter.pragmaticengineer.com/p/mitchell-has…

Unai Martínez retweeted
Jerónimo López (@jerolba):
OpenClaw could be a good experiment: watching how a codebase written mostly by AI evolves, and how maintainable it turns out to be over time. What Zhao describes is one of the things that worries me most about ceding control to AI.
Chayenne Zhao (@GenAI_is_real):

I've been using Claude Code heavily lately, and while doing so, I've been casually watching the OpenClaw codebase evolve. What I've witnessed mirrors a pattern I've seen play out with every agent framework before it, and it's worth talking about.

OpenClaw is a remarkable project. It went from zero to one of the most-starred repos on GitHub in under a week. And now, with AI agents actively contributing to its own development, the codebase is doing something extraordinary: it's expanding at a pace no human team could match, or meaningfully oversee.

A month ago, the repo sat around 400k lines of code. Now it's pushing 1 million. Daily commits are holding steady above 500. There's even a lean fork, nanobot, that replicates the core functionality in roughly 4,000 lines, advertising itself as "99% smaller." That contrast alone tells you something important about what's happening to the original.

From a software engineering standpoint, this is not a sign of health. Velocity without comprehensibility is just entropy with good PR. What we're witnessing is a codebase that has crossed a threshold: it is no longer humanly maintainable. No engineer can meaningfully review these commits. No architect can hold the system model in their head. Technical debt isn't accumulating; it's compounding, at AI speed, every single day.

This raises a question I can't stop thinking about: does there exist any project in the world that can grow sustainably, maintaining architectural clarity while continuously expanding functionality, with zero meaningful human involvement? Not "AI assists humans," but genuine autonomous stewardship of a living codebase? If that's possible, then what kinds of projects still can't be fully AI-maintained today? Is it complexity? Ambiguity in requirements? The need for taste and restraint? And the deepest question: will we eventually reach a point where every software project can be fully maintained by AI, including the AI systems doing the maintaining?

My instinct is this: AI is extraordinarily good at local optimization. Write this function. Fix this bug. Add this feature. But "keeping a system simple" is not a local problem. It requires global aesthetic judgment: the ability to say "we could add this, but we shouldn't." That kind of restraint might be the last genuinely human contribution to software engineering.

Or maybe I'm wrong. Maybe future AI systems will develop something like taste. Maybe they'll learn that the most important code is often the code you don't write. I genuinely don't know. But watching a codebase grow from 400k to 1M lines in a single month, driven almost entirely by agents, makes me feel like we're all about to find out, whether we're ready or not.

Unai Martínez retweeted
Addy Osmani (@addyosmani):
Introducing the Google Workspace CLI: github.com/googleworkspac… - built for humans and agents. Google Drive, Gmail, Calendar, and every Workspace API. 40+ agent skills included.