ippsec
Probably one of my favorite @NetworkChuck videos - youtube.com/watch?v=dbMXi9… Loved the take on how he hates AI but also loves it. Definitely in the same boat; it scares me how capable it has become in such a short time.

The other thing that really scares me is that the frontier labs will likely always be a black box, and specifically how they use the data they collect. AFAIK the Terms of Service for the paid API and the subscription are wildly different, and I don't see much talk about that. I believe the API gives the user a lot more ownership over the data, whereas with a subscription it is retained longer and there are far fewer legal protections.

I hear numbers like my $200 subscription can cost them anywhere from $2,000 to $10,000/month. That's a lot of money to lose, and I know the loss is offset by many things, like the majority of users not making full use of their subscription -- but I can't imagine AI always being this cheap. So a fear is that I will become dependent on a service I'll be priced out of in the future.

Additionally, many platforms (ex: Reddit/Twitter) put things in place to stop AIs from freely harvesting data, but I don't think those kinds of stops really block them when users are installing tools on their own devices. For example, the "anti-bot captcha" isn't really doing much when the user has an extension that gives the frontier lab the data behind that block anyway. Is this data sent to them? I really don't know, but it seems the threat landscape has rapidly changed when it comes to data collection.

I don't hate AI; it is wildly fun and does make me feel like a "10x engineer". I just hope it's a service that always remains available, and places don't start closing the doors once they have everything they need. As odd as it sounds, and I can't believe I'm saying this, I hope GRC can aid us here.
It would be nice if AIs obeyed when sites told them to go away, but in my experience the AI recognizes the site doesn't want it, acknowledges the refusal could be prompt injection, and so trusts the user over the service. Obviously, the user could do some type of prompt injection so the AI doesn't see the refusal, and local models can always ignore it -- but at least it would help sites stop the unintentional leakage due to ignorance. I imagine it's easier to kick users off a platform when they use prompt injection to bypass guardrails than when nothing is stopping them at all. I really hope I'm just ignorant here and someone can post why I'm wrong.
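For context, the usual way a site "tells crawlers to go away" is robots.txt; a minimal sketch, using the crawler tokens the labs themselves document (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Gemini training). Note this is purely honor-system -- which is exactly the problem above: nothing enforces it when the data flows through a logged-in user's browser extension instead of a crawler.

```
# robots.txt -- ask AI crawlers to stay out (advisory only, not enforcement)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```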
@realdancro @0xTib3rius @0xacb Speed isn't always better - scanning ports too quickly will cause a DoS on many layer-3 switches and printers. Printers are fragile, and those switches aren't designed to handle that many blocks. I'm sure most offsec pros have a horror story of the junior/CTFer that ran rustscan.
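The pacing idea above can be sketched in a few lines -- this is a hypothetical illustration, not how rustscan or nmap are implemented, and the host/port values are placeholders. The key point is the explicit delay between probes so a fragile target never sees a burst of connections:

```python
import socket
import time

def polite_scan(host, ports, delay=0.5, timeout=1.0):
    """TCP connect scan paced with a per-probe delay, so fragile
    devices (printers, low-end L3 switches) aren't flooded."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
        time.sleep(delay)  # pacing: one probe at a time, never a blast
    return open_ports

# hypothetical usage: slow sweep of the first 1024 ports
# polite_scan("192.0.2.10", range(1, 1025), delay=0.5)
```

Tools like nmap expose the same tradeoff through timing templates (slower templates probe less aggressively); the engagement-safe choice on fragile networks is always the slower one.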