
Dan Lorenc
13.2K posts

Dan Lorenc
@lorenc_dan
OSS Supply Chain Security. Founder/CEO/Primary Ariba Admin at https://t.co/sGmuUU9JbG Sigstore: https://t.co/dWKlyYu6kv

I am the main developer fixing security issues in FFmpeg. I have fixed over 2700 Google OSS-Fuzz issues. I have fixed most of the BIGSLEEP issues. And I disagree with the comments @ffmpeg (Kieran) has made about Google. Of all companies, Google has been the most helpful & nice

AI bug hunting as Microsoft EEE. Embrace: commit to open source. Extend: use replaceable FOSS components in your workflow. Extinguish: release AI hounds to file so many bug reports they cannot innovate before you outfox them. Oh hi, @Google.

Recently, there was a clash between the popular @FFmpeg project, a low-level multimedia library found everywhere… and Google. A Google AI agent found a bug in FFmpeg. FFmpeg is a far-ranging library, supporting niche multimedia files, often through reverse-engineering. It is entirely the result of volunteers and a marvellous piece of technology.

For people who have never been on the receiving end of ‘security researchers’, it is difficult to understand why there is pushback against them. Think about the commons. In Quebec, these are pieces of land where farmers send their cows during the summer. They are collectively owned, like FFmpeg. Everyone who uses the commons is responsible for caring for it. If you are not using it, you are supposed to stay away.

Now, imagine a rich corporation comes in and sends its well-paid agents into the commons to find issues with it. Maybe a broken barrier or a dangerous hole. So far so good… But instead of fixing the issues, the corporation says “you have a month to fix the issue or else I will report you to the government”. How much love would the big corporation get in this context?

Why do the security researchers insist on disclosing the issue without having contributed to fixing it? So that they can get credit for it. That's their entire scheme: find issues, irrespective of whether they affect the use case of their employer... after all, all issues, no matter how small, can be potentially significant at some point... and then brag about it without doing the hard work of trying to fix it. Let me be clear that not everyone working in security behaves this way. Many are good actors. But there are enough 'security researchers' behaving as parasites that it has become a recognizable pattern.

« But Daniel, who should be fixing the bugs then? » If you are paying for commercial support, then get in touch with the folks you are paying. If you are not paying, then it is on you. It says so in the licenses. It is part of the moral code of open source. It is part of the legal framework.

Let me be clear. You do not get to bite back at Linus Torvalds if a bug in the Linux kernel crashes your server. What you do is identify the issue, narrow it down and propose a fix. If you cannot do it, then you pay someone to do it. Or you just do not use Linux.

This. Perfectly explained. Reporting issues in an open source project without providing fixes, and then threatening to disclose the issue if it is not fixed within a short timeline, is a d**k move. You cannot demand anything if you are not paying for it.


Great question! Here's an example of it in action:

> where is this demand that you must do something
> You can leave your users vulnerable to being hacked for months, if you so desire.

Coercion takes many forms; it does not need to be explicit.

Google literally runs a program to pay people to fix bugs in critical OSS projects. FFmpeg is explicitly in scope. Anyone can just send a fix and fill out a form and get paid. github.com/google/bughunt… This is all so dumb.

