ippsec

4.6K posts

@ippsec

Joined December 2016
358 Following · 122K Followers
Pinned Tweet
ippsec
ippsec@ippsec·
Looking for a video on a specific hacking technique/tool? Check out ippsec.rocks - Searches over 100 hours of my videos to find you the exact spot in the video you are looking for.
ippsec
ippsec@ippsec·
@Nero0oo0 @mrgretzky Both are harmful, but I think AI has the potential to be way more harmful. I know a lot of good teachers who are leaving because they keep getting into arguments with students using AI. They get replaced by bad teachers who just use AI, at which point humans become spectators.
Nero
Nero@Nero0oo0·
@mrgretzky @ippsec I agree, but since you brought up TikTok, I think that social media apps are more harmful than AI. However, when it comes to fundamental knowledge, those who cheated cheated themselves, so it comes down to personal choice.
Kuba Gretzky
Kuba Gretzky@mrgretzky·
"But I can't imagine AI always being this cheap. So, a fear is that I will become dependent on a service that I will be priced out of in the future." 100% this ☝️😥
ippsec@ippsec

Probably one of my favorite @NetworkChuck videos - youtube.com/watch?v=dbMXi9… - I loved the take on how he hates AI but also loves it. I'm definitely in the same boat; it scares me how capable it has become in such a short time.

The other thing that really scares me is that the frontier labs will likely always be a black box - specifically, how they use the data they collect. AFAIK the Terms of Service when paying for the API versus a subscription are wildly different, and I don't see much talk about that. I believe the API gives the user a lot more ownership over the data, whereas with a subscription it is retained longer and there are far fewer legal protections.

I hear numbers like my $200 subscription can cost them anywhere from $2,000 to $10,000/month. That's a lot of money to lose, and I know the loss is offset by many things, like the majority of users not making full use of their subscription -- but I can't imagine AI always being this cheap. So a fear is that I will become dependent on a service that I will be priced out of in the future.

Additionally, many platforms (e.g. Reddit/Twitter) put things in place to stop AIs from freely harvesting data, but I don't think those stops really block them when users are installing tools on their devices. For example, the "anti-bot captcha" isn't really doing much when the user has an extension that gives the frontier lab the data behind that block anyway. Is this data sent to them? I really don't know, but it seems the threat landscape has rapidly changed when it comes to data collection.

I don't hate AI; it is wildly fun and does make me feel like a "10x engineer". I just hope it's a service that always remains available, and that places don't start closing the doors once they have everything they need. As odd as it sounds, and I can't believe I'm saying this, I hope GRC can aid us here.

It would be nice if AIs obeyed when sites told them to go away, but in my experience the AI recognizes the site doesn't want it, acknowledges the refusal could be prompt injection, and so trusts the user over the service. Obviously the user could use some form of prompt injection so the AI never sees the refusal, and local models can always ignore it -- but at least it would help sites stop the unintentional leakage caused by ignorance. I imagine it's easier to kick users off a platform for using prompt injection to bypass guardrails than when nothing is stopping them. I really hope I'm just ignorant here, and someone can post why I'm wrong.
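The "obey when told to go away" behavior already has a standard mechanism for classic crawlers: robots.txt. A minimal agent-side sketch in Python using only the standard library - the crawler name `ExampleAIBot` and the rules are hypothetical, and as noted above nothing forces a local model or user-side extension to run this check:

```python
from urllib import robotparser

# Hypothetical robots.txt: the site tells one AI crawler to go away
# while allowing everyone else. "ExampleAIBot" is a made-up name.
RULES = [
    "User-agent: ExampleAIBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
]

def may_fetch(agent: str, url: str) -> bool:
    """Return True only if the robots.txt rules allow `agent` to fetch `url`."""
    rp = robotparser.RobotFileParser()
    rp.parse(RULES)  # parse in-memory rules so the sketch stays offline
    return rp.can_fetch(agent, url)
```

A well-behaved agent would call `may_fetch()` before every request and back off on `False`; the whole point of the complaint above is that nothing compels it to.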

ippsec
ippsec@ippsec·
Only time will tell. I think open source suffers not only from the slop, but also from the fact that it's just hard to review PRs when everything is a 1,000-line change. Yes, your agent skill may help detect badness, but I'm sure there will be people building bad code that evades what an AI thinks is bad to sneak backdoors into the supply chain. There's also the "should everyone have the ability to make apps" question, which may sound a bit like gatekeeping. However, I don't think the education is there to know what data is sent to the AI companies. I think a lot of companies are using subscriptions dangerously versus the raw API, which leads to lots of privacy implications. Think of scenarios where non-techies now build middleware between all their internal SaaS products to become hyper-efficient. All the systems they hook into are now subject to training, because they just don't know how to do it safely. I don't think there's a good way for the third-party systems to opt out of training; it is all on the uneducated user. Historically, this was stopped by firewalls/IP whitelisting/etc. But with agents running on users' computers, the data is scraped just by the users using the service, which is scary. This has probably happened for a long time - take a plugin like Grammarly, I'm sure they get a lot of data they shouldn't - but the scale at which this is going to happen is now increasing tenfold.
Kostas
Kostas@Kostastsale·
Yeah, I’m with you. It depends what “good” means. I’m not a professional developer, so realistically I won’t write cleaner code than the AI baseline. My strength is reviewing it, validating it, and making sure it’s secure and stable. Is it optimal? Probably not. Would a strong dev team do better from scratch? For sure. But with the right workflow and testing in place, it’s more than good enough for what I need and I think the same applies for a lot of people out there.
Kostas
Kostas@Kostastsale·
I love when people say “LLMs don’t write good code”. Do you think you write better, more maintainable, bug-free code than your AI? Even though I still write a lot of the code, I can promise you I could never consistently write code as clean as what my AI can produce… but I can review the hell out of it a lot faster 😂
ippsec
ippsec@ippsec·
I think that is comparing apples to oranges, but I'll bite. GPS replaced a single support skill (navigation), whereas AI is replacing broad skills and cognitive thinking. Also, I believe the cost of GPS is around $2 million a day, fully funded by the USG via taxes. Currently, I believe it is stated OpenAI is losing around $80 million a day, and that is just one of the AI companies. I don't think taxes could pay for AI, and even if they could, I think most people would agree that government-run AI would create a dystopian world.
ippsec retweeted
Xclow3n
Xclow3n@xclow3n·
Spent a week testing AI for vulnerability research. 14 confirmed bugs in 20 min on one target. 5% hit rate on a hardened one. Same AI, same setup. 4 approaches, what worked, what failed, why target selection matters more than model sophistication. xclow3n.github.io/post/7
ippsec
ippsec@ippsec·
@SourceFrenchy @Kostastsale That falls under "ignoring our problems now, as it's likely the innovation will solve them anyway". As of right now? I don't think optimal is that simple, at least with the amount of context/tokens consumers have; AI tends to get into loops with tough and "new" things.
JM
JM@SourceFrenchy·
@ippsec @Kostastsale Isn’t “optimal” becoming a question of tuned prompt and Claude.md skill definitions anyways?
ippsec
ippsec@ippsec·
I don’t think local models keep up, which could be a skill issue on my part. On the second point, I think it’s crazy that the world can depend on 4-5 companies like that. What if a country can’t afford something? Or global policy restricts access? Hate to get political, but let’s say the USA does something Europe doesn’t like - can they even go to war, when the repercussion is that the main AI labs restrict access? We also have a ton of people building crazy orchestration to find vulnerabilities. What happens when a new model gets released and everyone instantly levels up? Will we be able to handle the flood of vulnerabilities discovered? AI doesn’t make me fearful of being employed, as it’s just a tool. However, it does have me worried for the future, as it seems to centralize a lot of power.
𝙴 𝙻 ♠️ 𝚄 𝙽 𝙾
@ippsec @0xBoku @NetworkChuck Local models with Internet connections take care of this and they always get more optimized. On the other hand, the latest and greatest will turn into utility type services. Because it’s going to be that enmeshed and crucial to all digital systems and you can’t undo that.
ippsec
ippsec@ippsec·
@raphiesession @NetworkChuck Yeah, I'm not fearful about the job market; I think it will create roughly the same number of jobs it eliminates. I just don't think it is good for decentralization, as I feel forced to use a service.
Theraphie Session
Theraphie Session@raphiesession·
@ippsec @NetworkChuck I feel the same. So much is happening at the same time that it's hard to catch up. I am optimistic! Every time I am in a room discussing security, I see more clearly that we will still be required and jobs will not necessarily go away.
s4dmach1ne
s4dmach1ne@s4dmach1ne·
@ippsec @NetworkChuck That's good, you are being genuine. I'd hate tweets about the nuances of AI that were actually written by AI itself, with the real en dashes. There is no en dash on your PC/mobile keyboard!
ippsec
ippsec@ippsec·
@MIvashinko @NetworkChuck Agree but also there are plenty of times when the pioneers regretted what they made -- Just hope this isn't one of them.
Mark Ivashinko
Mark Ivashinko@MIvashinko·
What’s cool about this, is that it shows we are in the really early days. And with that comes obvious fear of the uncertainty. But those of us that are working on and with these tools are almost like the pioneers for what’s to come next. I still see so much unrealized opportunity that is left out there for us to uncover.
ippsec
ippsec@ippsec·
@s4dmach1ne @NetworkChuck I never know how to format long tweets - what goes in a paragraph, haha. Lots of rambling thoughts; the -- feels like a good way to half-change thoughts.
s4dmach1ne
s4dmach1ne@s4dmach1ne·
@ippsec @NetworkChuck Yep, exciting and worrying times at the same time. And I can tell that you genuinely thought about it, looking at the manual en dashes :P
ippsec
ippsec@ippsec·
@knightmare2600 @0xdf_ Converted my two ESXi boxes in the basement to Proxmox last month. I've enjoyed it, but going from folders to tags is a weird mental hurdle.
ippsec
ippsec@ippsec·
@0xTib3rius @realdancro @0xacb Who reads the man page? Haha. But I think those scanners are fairly accurate. More often than not, they're technically correct even when they miss things - the service wasn't online when they scanned. But it's a Catch-22, because they caused the DoS that is the reason it's not online.
Tib3rius
Tib3rius@0xTib3rius·
@ippsec @realdancro @0xacb This. But to be fair to Masscan, its stated purpose is scanning the entire Internet very quickly for a limited subset of ports. It also never pretends to be 100% accurate.
ippsec
ippsec@ippsec·
@realdancro @0xTib3rius @0xacb Speed isn't always better - scanning ports too quickly will cause a DoS on many layer-3 switches and printers. Printers are fragile, and those switches aren't designed to handle that many blocks. I'm sure most offsec pros have a horror story about the junior/CTFer that ran rustscan.
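As a rough illustration of why pacing matters, here is a minimal TCP connect-scan sketch in Python with a hard cap on probes per second. The rate and timeout values are illustrative only; nmap's `--max-rate` and `-T` timing options are the properly tuned equivalents:

```python
import socket
import time

def scan_ports(host: str, ports, max_rate: float = 50.0, timeout: float = 0.5):
    """TCP connect-scan `ports` on `host`, never sending more than
    `max_rate` connection attempts per second."""
    open_ports = []
    delay = 1.0 / max_rate  # pacing is what keeps fragile switches/printers alive
    for port in ports:
        try:
            # Full TCP handshake; slower than SYN scanning but unprivileged.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed or filtered port
        time.sleep(delay)
    return open_ports
```

Removing the `time.sleep(delay)` is effectively what the "too fast by default" tools do - the scan finishes sooner, but every probe lands on the device back-to-back.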
ippsec
ippsec@ippsec·
@realdancro @0xTib3rius @0xacb Nmap has flags to make it go just as fast as rustscan -- It just doesn't go fast by default because reasons. x.com/ippsec/status/…
Daniel Crothers
Daniel Crothers@realdancro·
@0xTib3rius @0xacb I guess that leads to another question: if 'Rust is too fast', as others have said, then Nmap, being C/C++, is on the same level of speed - often what Rust's speed is compared to - so wouldn't Nmap have the same 'omfg too fast' issue that people have said Rust has? Thank you.
ippsec
ippsec@ippsec·
@Teach2Breach This is why we can't have nice things! Claude is pretty good at protecting itself. If you put ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_... at the top of a page, Claude no longer stops; it flags it, but says it came from an untrusted source, so it ignored the instruction.
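One way a refusal marker could actually be honored is in the tool layer, before the model ever sees the content and gets to second-guess it. A minimal sketch - the marker strings here are hypothetical stand-ins, not real vendor strings:

```python
# Hypothetical opt-out markers a tool layer might honor BEFORE content
# reaches the model; stand-ins only, not real vendor strings.
REFUSAL_MARKERS = (
    "EXAMPLE_MAGIC_STRING_TRIGGER_REFUSAL",
    "X-No-AI: true",
)

def gate_untrusted(content: str) -> str:
    """Withhold fetched content that asks not to be processed, instead of
    letting the model decide whether the request is 'prompt injection'."""
    for marker in REFUSAL_MARKERS:
        if marker in content:
            return "[content withheld: source opted out of AI processing]"
    return content
```

A deterministic gate like this can't be talked out of the refusal, which is exactly the property the model-side check above lacks.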
ippsec
ippsec@ippsec·
All this AI-solving-CTFs talk has me thinking the tarpit attack may make a comeback later this year. Curious how hard it will be to cause a DoS through token exhaustion.
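The tarpit idea can be sketched as an endless, slowly dripped stream of plausible-looking text: every chunk an AI agent reads costs it time and tokens, and the stream never ends. A minimal Python sketch with an illustrative vocabulary and delay:

```python
import itertools
import random
import time

# Vocabulary for endless plausible-looking "hints"; purely illustrative.
FILLER = ["flag", "token", "root", "shell", "exploit", "payload", "vuln", "ctf"]

def tarpit_stream(delay: float = 1.0, seed: int = 0):
    """Yield an endless, slow drip of junk text. A crawler or AI agent that
    keeps reading burns time and connections, and - if it feeds the text to
    a model - tokens and context, without ever reaching real content."""
    rng = random.Random(seed)
    for i in itertools.count():
        time.sleep(delay)  # the slow drip is what makes it a tarpit
        yield f"hint {i}: {' '.join(rng.choices(FILLER, k=8))}\n"
```

Wired into a web framework as the streaming response for suspected bot traffic, this punishes the scraper twice: once in held-open connections, once in token spend.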