Martín Obiols
@olemoudi
963 posts
I write about Personal Cyberdefense and OpSec. I protect @BBVA from cyber threats and digital fraud. Personal opinions only.

Madrid · Joined April 2009
763 Following · 1.7K Followers

Pinned Tweet
Martín Obiols @olemoudi ·
I'm very happy to share that my first book is now officially published: «Ciberdefensa Personal para Celebridades», a guide to help highly exposed people with no technical background face the most advanced digital threats.
[image]
3 replies · 5 reposts · 7 likes · 1.6K views
Martín Obiols retweeted
Kpaxs @Kpaxs ·
Here's a controversial take: most of the authority that exists in any organization was never formally granted to anyone. It was assumed, exercised, and then retroactively legitimized by the fact that it worked.
Kpaxs @Kpaxs
I call it the "Refrigerator Principle": most organizational dysfunction exists because everyone assumes someone else has the authority to fix it, and the fastest path forward is often just pretending you have that authority and asking forgiveness rather than permission.

102 replies · 610 reposts · 6.9K likes · 406.5K views
Martín Obiols retweeted
Kuba Gretzky @mrgretzky ·
It's the escalator-and-stairs problem. You know the stairs are healthier for you, but the escalator can get you to your goal faster and with less effort. Most people will pick the escalator every single time. We're in for a wild ride, as it is extremely hard to motivate yourself to get good at anything that the AI is already doing much better, especially when you're just starting out.
1 reply · 1 repost · 7 likes · 619 views
Martín Obiols retweeted
Cesar Cerrudo @cesarcer ·
Counterarguments drawn from Eliot's "Artificially Intelligent"

David Eliot's book provides a rigorous, historically grounded framework that directly challenges several of the article's core premises. Here are the major counterarguments:

1. "New jobs will appear" is a prefabricated comfort, not an argument

Eliot calls this out explicitly in Chapter 23. He labels the claim that "innovation inevitably creates new jobs" a "prefabricated argument" that "does not hold up when put under the microscope." He identifies three specific failures in this reasoning that the article never addresses:

Speed: AI developed quietly for years and has arrived with enormous momentum. Unlike the gradual introduction of the power loom or the ATM, AI can hit countless segments of the economy "fast and hard." Eliot warns that even if new jobs eventually materialize, we should expect "a prolonged period between the layoffs and the emergence of new jobs" that will be "both economically and mentally scarring for those who get caught in the middle." The article's author concedes speed is different but then moves on without grappling with this implication.

Scale: This is where Eliot's argument is sharpest. He points out that previous automating technologies were narrow: an automatic cow-milking system doesn't disrupt other industries. But AI is a general-purpose technology, like the steam engine or the internet. It cuts across industries simultaneously. So the classic safety valve, "jobs lost in one sector are created in another," breaks down, because AI is creating efficiencies in the other sectors at the same time. The article's historical examples (ATMs, barcode scanners, spreadsheets) were all single-industry technologies; Eliot argues this makes them fundamentally misleading as analogies.

Skills mismatch: Even granting that new jobs will emerge, Eliot asks the question the article ignores entirely: will displaced workers be able to do those new jobs? AI disproportionately threatens "skilled workers", people who invested years and thousands of dollars acquiring specialized knowledge. A 45-year-old with two decades of expertise in a now-automated field cannot simply pivot to an AI engineering role. The new jobs require "fundamentally different skills to those which multiple generations have trained for." The article's breezy advice ("you need curiosity, a willingness to experiment") brushes past the reality that retraining is expensive, time-consuming, and psychologically devastating for people mid-career with families to support.

2. "Employed" is not the same as "living well": the dignity problem

The article counts jobs and declares victory. Eliot insists we look beyond the numbers. He draws a devastating parallel to the Industrial Revolution: "Children worked in factories and starved in the streets. But they were employed." Workers had jobs, but those jobs were grueling, degrading, 10-16 hour days for poverty wages. The quantity of employment recovered; the quality of life collapsed for a generation or more. Eliot warns that displaced skilled workers forced into low-skilled jobs "will not be able to provide a similar standard of living or social status," leading to "adverse mental effects and resentment of the system that betrayed these workers." The article's cheerful ATM-and-cashier statistics tell you nothing about whether the new jobs paid as well, offered the same security, or provided meaning. Eliot's framework insists we ask those questions.

3. The Luddites weren't wrong; they were misunderstood

The article uses the Luddites as a cautionary tale of foolish resistance: "do you want to be remembered the way they are?" Eliot offers a radically different reading. He argues the Luddites "were not anti-technology": many embraced the new machines and were eager to work alongside them. What they feared was that their employers would use the technology not to improve products but to "cheapen them" while gaining "more control over their workers." The Luddites' real fight was over power: who gets to decide how technology is implemented and who captures the gains. Eliot writes that factories "could have functioned, and continued to make profits, without completely replacing their workers," but owners chose maximum extraction over shared benefit. The article mocks the Luddites without engaging with the substance of their complaint, which, as Eliot argues, is remarkably relevant today: the question is not whether AI will create value, but who decides how that value is distributed.

4. The "garden" metaphor hides the question of who owns the garden

The article says "the economy is not a pie. It's a garden. And technology is rain." Eliot would agree that technology grows the garden, but would immediately ask: who owns the garden? Chapters 19 and 20 of Eliot's book lay out how AI development requires massive surveillance infrastructure and oceans of data. The companies that control this data (Google, Apple, Meta, Amazon, Microsoft) gain "immense economic and social power." They "get to decide what types of AI are made, and for whom. They decide what types of problems we try to solve, and how." Eliot's deepest fear is that "many countries are ceding too much power over how our futures will be shaped to companies whose motives are not to make a better society for all — but instead to accumulate more money and power." The article's framework is entirely silent on this. It assumes that a bigger pie automatically means broadly shared prosperity. Eliot argues the opposite: without democratic control over how AI is built and deployed, the benefits will concentrate among those who already have power, just as they did during the Industrial Revolution.

5. The "it's just a tool" framing is dangerously naive

The article's central dismissal of the "this time is different" objection rests on: "it is still a tool." Eliot dedicates much of his book to demolishing this exact framing. He argues that "no technology is apolitical": every technology embeds the choices, values, and ideologies of its creators. AI is not a neutral tool like a hammer. It is a system trained on biased data, deployed within existing power structures, and shaped by corporate incentives. Chapter 22 demonstrates this concretely through predictive policing (which codifies and amplifies racial bias), Amazon's hiring AI (which discriminated against women because it learned from biased historical data), and the "black box problem" (where deep learning systems make consequential decisions that cannot be reverse-engineered or audited). Calling AI "just a tool" obscures all of this. A hammer doesn't perpetuate systemic racism; a deep learning system trained on policing data can and does.

6. The article ignores surveillance as a precondition of AI

The article treats AI as if it springs into existence from clever engineering. Eliot reveals the infrastructure underneath: AI runs on data, data is produced by surveillance, and surveillance requires "digital enclosures", controlled environments where your every action becomes fuel for machine learning. Google Search, the Apple ecosystem, Facebook, and the concept of the Metaverse are all digital enclosures designed to extract data from users. This means the "unseen" side of AI isn't just new jobs and opportunities; it is also an unprecedented expansion of the surveillance apparatus, one that most people are "blissfully unaware" of. The article's Bastiat framework conveniently applies the "unseen" concept only to positive outcomes. Eliot shows there are deeply negative "unseen" effects too: erosion of privacy, concentration of informational power, and the creation of systems that can monitor, classify, and control populations in ways that would make the East German Stasi envious.

*This was created with the help of AI
0 replies · 1 repost · 15 likes · 991 views
Martín Obiols @olemoudi ·
To raise awareness of the privacy impact of the "anonymous" analytics collection carried out by the big platforms... I've created Sagan. Get a personalized report. More info at: metadatos.ciberdefensavip.es
1 reply · 4 reposts · 5 likes · 1.2K views
Martín Obiols retweeted
Lucid™ @cammakingminds ·
I am the greatest engineer the world has ever seen.
[image]
0 replies · 4 reposts · 18 likes · 492 views
Martín Obiols retweeted
Runa Sandvik @runasand ·
Huge win for Hannah Natanson and the Washington Post today: the judge ruled that the government cannot search the devices they seized from her. washingtonpost.com/national-secur…
[image]
7 replies · 102 reposts · 249 likes · 15.9K views
Martín Obiols retweeted
Justin Elze @HackingLZ ·
Twitter seems to greatly overestimate how much regular people care about sitting in front of a computer doing tech stuff these days. Younger generations are happy to use apps on their phones; this isn't the 90s/00s anymore, and regular people aren't locked to a PC. If you make writing software as easy as generating a website, that's awesome, but realize normal people aren't making websites either. Kids have fewer computer skills than most of you had when you were younger.
19 replies · 27 reposts · 583 likes · 38.3K views
Martín Obiols @olemoudi ·
Been trying CC myself for a few days now. This take feels accurate.
Andrej Karpathy @karpathy

A few random notes from claude coding quite a bit last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issues. So certainly it's a speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that my ability to write code manually is slowly starting to atrophy. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
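The "Leverage" workflow in that note is concrete enough to sketch. Below is a minimal, hypothetical Python example of handing an agent a declarative success criterion instead of step-by-step instructions: an obviously-correct naive reference implementation plus a randomized test that any optimized rewrite must keep passing. The names naive_top_k and fast_top_k are illustrative, not from Karpathy's post.

# Sketch of "naive first, then optimize while preserving correctness".
# naive_top_k is the trusted reference; fast_top_k is what the agent
# iterates on until the randomized test below passes.
import heapq
import random

def naive_top_k(xs, k):
    # Obviously correct: sort everything descending, take the first k.
    return sorted(xs, reverse=True)[:k]

def fast_top_k(xs, k):
    # Candidate optimization (heap-based, O(n log k) instead of O(n log n)).
    return heapq.nlargest(k, xs)

def test_fast_matches_naive():
    # Declarative success criterion: let the agent loop until this passes.
    for _ in range(1000):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        k = random.randint(0, len(xs))
        assert fast_top_k(xs, k) == naive_top_k(xs, k)

if __name__ == "__main__":
    test_fast_matches_naive()
    print("fast_top_k preserves reference behavior")

The point of the shape is that the agent's goal is the test, not the instructions: it can loop on fast_top_k as long as it likes, and correctness is pinned by the naive version.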

0 replies · 0 reposts · 1 like · 182 views
Martín Obiols retweeted
Malte Ubl @cramforce ·
Major refactorings used to be expensive to do and very risky. Now they are only very risky.
55 replies · 138 reposts · 2.6K likes · 106.3K views
Martín Obiols @olemoudi ·
@olgarusu Nobody sends things that require urgent action via email or SMS/WA. Imagine alerting the fire department to a fire through that channel.
0 replies · 0 reposts · 0 likes · 39 views
Martín Obiols @olemoudi ·
This is exactly the gist. Without proper context about the user, online ads don't perform well for whoever pays for them.
Sajid Ali Anjum @sajidalianjum

@tomwarren I am not sure ads alone can cover their losses. We will see. The main issue with ads is that performing well in advertising often pushes companies toward aggressive data collection, something Google is often criticized for.

0 replies · 0 reposts · 0 likes · 129 views
Martín Obiols retweeted
mickey friedman @mickeyxfriedman ·
she was beautiful, like code that compiles on the first try but also you just knew that there was something deeply wrong with her, like code that compiles on the first try
134 replies · 2.4K reposts · 29.1K likes · 514.8K views
Martín Obiols @olemoudi ·
Great introductory essay on how money works these days. Kids should receive content like this at school. via @javutin phrack.org/issues/71/17_m…
0 replies · 1 repost · 5 likes · 297 views
Martín Obiols @olemoudi ·
@herqles_es The outrage grows because the data is free and easy to look up. But information about our location at every moment is already public (though not free) thanks to the plague of analytics networks we carry on our phones. nytimes.com/interactive/20… @Raggiomoral
1 reply · 0 reposts · 0 likes · 40 views
ʜᴇʀQʟᴇs @herqles_es ·
🇪🇸 | A young man explains the risk that the Spanish government's "scam beacon" poses to drivers' safety.
70 replies · 843 reposts · 3.1K likes · 1.2M views
Martín Obiols retweeted
Zack Voell @zackvoell ·
“Yes I feel fully recharged after a restful holiday”
[image]
209 replies · 14.2K reposts · 124.7K likes · 2.6M views