RealistSec

1.6K posts

@RealistSec

NR, AI Generalist & Cyber Security Manager in the UK. Mostly Cyber Sec scripting with a penchant for AI tools. Writer of posts @CannotDisplay_ https://t.co/qfsO5iPo11

UK · Joined September 2025
560 Following · 132 Followers
RealistSec @RealistSec
After 10+ years in security, not a single course prepared me for AI-hallucinated threat reports, LLM-assisted phishing at industrial scale, and boardrooms still convinced a firewall is all that's needed.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 9
Devansh @thenowhereway
@EvanLuthra Counterpoint...open source is moving just as fast. Llama, Mistral, DeepSeek. The frontier models will get expensive but "good enough" is getting cheaper every month. The gap might not be as wide as it looks.
Replies: 3 · Reposts: 0 · Likes: 10 · Views: 487
Evan Luthra @EvanLuthra
I AM GENUINELY SCARED ABOUT WHAT'S COMING NEXT IN AI. Not because the robots are going to rule us. Because of the price tag.

Claude Max is $200. ChatGPT Pro is $250 a month. SuperGrok Heavy is $300. A year ago none of these plans existed.

Anthropic just leaked their next model. Claude Mythos. Their own blog post called it "by far the most powerful AI model we've ever developed." It won't be in any existing plan. API only. Premium pricing most people won't be able to touch.

Every new model costs more. Every new plan costs more. This isn't slowing down. It's accelerating.

AI is the biggest advantage anyone can have right now. The people using it are building faster, earning more, and pulling ahead every single day. That's not hype. That's just what's happening.

But right now the tools are still cheap. $20 a month gets you access to models that would have been unimaginable two years ago. That window is closing. A year from now the best AI won't cost $20. It won't cost $200. It'll cost thousands. And only the people who can afford it will have access to the most powerful intelligence on earth.

The gap is coming. Between those who can afford the best AI and those who can't. So lock in now. Learn these tools while they're accessible. Build with them while they're affordable. Stack as much value as you can while the playing field is still somewhat level. Because it won't be level for long.
Evan Luthra @EvanLuthra

🚨BIG WARNING: ANTHROPIC JUST ACCIDENTALLY LEAKED THEIR MOST POWERFUL AI MODEL EVER. AND IT'S TERRIFYING. Read this slow.

Anthropic messed up their content management system and left nearly 3,000 unpublished internal files in a publicly searchable database. Security researchers found it. Fortune reviewed it. Anthropic had to come out and confirm everything.

The model is called Claude Mythos. It sits in a brand new tier called Capybara. Above Opus. Above everything they've ever built. Anthropic's lineup used to be Haiku, Sonnet, Opus. Now there's a fourth tier above all of them. Bigger. Smarter. Way more expensive to run.

According to the leaked drafts, Mythos destroys Claude Opus 4.6 in three areas. Software engineering. It writes, debugs and understands massive codebases with way more autonomy and fewer errors. We're talking full system level code comprehension. Academic reasoning. Multi-step thinking is dramatically better. The kind of complex problems where older models would confidently give you wrong answers? Mythos actually gets them right.

And then there's cybersecurity. This is the one that should make you sit up. Mythos reportedly outperforms every single AI model in existence at finding and exploiting software vulnerabilities. Not by a little. By a lot.

That's why Anthropic is NOT releasing this to the public. Their own internal documents say Mythos could spark a wave of AI-driven cyberattacks that "far outpace the efforts of defenders." Their words. Not mine.

So who gets access? A tiny group of vetted enterprise customers. Cybersecurity defense organizations go first. The idea is to give the good guys a head start to harden their systems before models this powerful get out into the wild. Training is already complete. It exists right now. Anthropic just doesn't want anyone using it yet.

Replies: 87 · Reposts: 15 · Likes: 261 · Views: 62.3K
RealistSec @RealistSec
@EvanLuthra Couldn't agree more. A societal divide is emerging: a new class system in which the gap between those who can and cannot afford tokens keeps growing. Great models will be elite-only. Home infra can only get you so far with open-source models.
Replies: 0 · Reposts: 1 · Likes: 1 · Views: 7
Eleftheria Batsou @BatsouElef
What is the last AI tool you used? (If it's something not well known, that's even better)
Replies: 29 · Reposts: 1 · Likes: 23 · Views: 1.7K
Aakash Gupta @aakashgupta
Google built an AI that can identify minors on camera in real time. Then they connected that detection to an automated enforcement system that nukes every account on the device, plus every account linked to those accounts, with no appeal, no human review, and no distinction between the 14-year-old who triggered it and the parent with 15 years of business records in Drive.

The child protection system worked exactly as designed. It correctly identified a minor. It correctly flagged the violation. Then it correctly destroyed a family's financial livelihood, locked out 15 years of business emails, seized documents needed for tax filing in two months, and killed a live website. All correct. All automated. All irreversible.

This is what happens when enforcement scales faster than judgment. Google processes billions of policy decisions per year. Human review at that volume is economically impossible, so they built systems that optimize for one metric: minimize platform liability. The system that banned this family isn't broken. It's doing exactly what Google designed it to do. Protect Google.

The father now can't pay his mortgage in three months because his accounting records are locked inside a Google Drive he will never access again. His company year ends in May. Every invoice, every receipt, every client email, gone. Because his son used the family tablet. One device. One teenager. One automated flag. 15 years of someone's professional life erased in seconds with a form letter citing "child protection reasons."

The people storing their entire business inside a single platform's ecosystem are making the same bet this family made: that the platform will never turn on them for something they didn't do. 345 million people are making that bet with Google Workspace right now.
Lain on the Blockchain @CryptoCyberia

This is hilarious ngl

Replies: 99 · Reposts: 236 · Likes: 1.5K · Views: 145.9K
Mike Zatsky @Mike_Zatsky
@steipete How much are they paying you to say GPT is better than Opus?
Replies: 2 · Reposts: 0 · Likes: 9 · Views: 1.8K
JD Conley @wackie
@steipete everyone talking about slop is talking about claude
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 2K
RealistSec @RealistSec
@MrRemKing @steipete It's the beginning of a new societal class system. Those who can and cannot afford good models.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 87
MrRemKing 📈™ @MrRemKing
@steipete Peter, I respect you for what you've built. But constantly shitting on Opus at every opportunity cause Anthropic didn't back you is not a good look.
Replies: 2 · Reposts: 1 · Likes: 11 · Views: 948
Mingta Kaivo 明塔 开沃 @MingtaKaivo
each model has a fingerprint if you review enough PRs. opus tends to over-architect — adds abstractions nobody asked for. codex is more surgical but sometimes too conservative. sonnet splits the difference but leaves TODO comments everywhere. the real skill now isn't just code review — it's knowing which model's failure modes to watch for in the diff
Replies: 1 · Reposts: 0 · Likes: 6 · Views: 3.8K
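The "fingerprint" heuristic in the post above could, in principle, be automated as a crude diff scanner. A minimal sketch follows; everything in it is hypothetical (the function name, the regex patterns, and the mapping of "stray TODOs" or "Manager/Factory abstractions" to particular models are illustrative assumptions drawn from the post, not a real tool).

```python
# Hypothetical sketch: count telltale patterns among *added* lines of a
# unified diff, per the "each model has a fingerprint" observation above.
# Patterns and names are illustrative, not an actual review tool.
import re

def fingerprint_diff(diff_text: str) -> dict:
    """Tally suspicious patterns in the added ('+') lines of a unified diff."""
    added = [line[1:] for line in diff_text.splitlines()
             if line.startswith("+") and not line.startswith("+++")]
    return {
        # leftover TODO comments (the post attributes these to one model's style)
        "todo_comments": sum(bool(re.search(r"\bTODO\b", l)) for l in added),
        # unrequested abstraction smell: new Manager/Factory/Provider classes
        "new_abstractions": sum(bool(re.search(
            r"\b(class|interface)\s+\w*(Manager|Factory|Provider)\b", l))
            for l in added),
        "added_lines": len(added),
    }

diff = """\
+++ b/service.py
+class WidgetManagerFactory:
+    # TODO: wire up config
+    pass
"""
print(fingerprint_diff(diff))
# → {'todo_comments': 1, 'new_abstractions': 1, 'added_lines': 3}
```

A real version would weight these signals per model, exactly the "which failure modes to watch for" skill the post describes.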
RealistSec @RealistSec
@IceSolst I mean that's the final part of the audit, is it not? Testing the human interface. 👀
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 72
solst/ICE of Astarte @IceSolst
Pentester who just resends last year’s report 🤝 Client who didn’t even notice cause they know they didn’t fix shit
Replies: 22 · Reposts: 34 · Likes: 687 · Views: 24.6K
RealistSec @RealistSec
@GrapheneOS Chin up boys, just means you're doing something great 👍
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 91
GrapheneOS @GrapheneOS
There are at least a dozen people spending at least several hours attacking GrapheneOS across platforms on a daily basis. It's a very strange situation. How do these people have so much time and dedication to keep making posts across platforms attacking us? It's relentless.
Replies: 286 · Reposts: 388 · Likes: 4.5K · Views: 135.6K
laniākean @MPete101010
@EssexgoonerMr the government printed a crap ton of money. Prices didn't go up; the value of the currency went down.
Replies: 1 · Reposts: 1 · Likes: 30 · Views: 1.8K
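The reply above is the quantity-theory-of-money point: prices rising and the currency losing value are two views of the same arithmetic. A toy sketch under the textbook identity MV = PQ, with every number made up for illustration:

```python
# Toy quantity-theory arithmetic for the reply above. If the money supply M
# grows while velocity V and real output Q stay fixed, the price level P
# rises in proportion, i.e. each unit of currency buys less.
# All figures are hypothetical.
def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    # MV = PQ  =>  P = MV / Q
    return money_supply * velocity / real_output

before = price_level(money_supply=1000, velocity=2.0, real_output=2000)  # P = 1.0
after = price_level(money_supply=1500, velocity=2.0, real_output=2000)   # P = 1.5

# a 50% larger money supply => the currency buys ~67% of what it did
print(f"currency buys {before / after:.0%} of what it did")
```

Same shopping basket, same output, bigger money stock: the "price rise" is the currency's value falling.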
Essex Patriot @EssexgoonerMr
Honestly at this point UK prices are just made up: How is a return train to London £170? How is a "cheap" weekend away in England suddenly £600 when I could fly abroad for that. How did £700 rent turn into £1500 for the same house? Why does my car insurance rise every year on the same car with no claims? And since when did two bags of shopping come to nearly £100? We're finished. Absolutely done. 🇬🇧
Replies: 442 · Reposts: 1.4K · Likes: 10.3K · Views: 340.1K
terminal @terminaldotshop
WTF Guys, why is apple and Mozilla hiding Agents in my chrome browser?!
Replies: 59 · Reposts: 52 · Likes: 1.8K · Views: 178.7K
Raccoon @raccoon_builds
@terminaldotshop @ThePrimeagen Lol you are dumb 🤣 it's your signature for the web, not an agent. Please @grok, can you explain this to him properly?
Replies: 3 · Reposts: 0 · Likes: 1 · Views: 3.2K
MrBeast @MrBeast
Who are the OG YouTubers you grew up watching?
Replies: 12.3K · Reposts: 637 · Likes: 19.4K · Views: 4.4M
RealistSec @RealistSec
This is worth a read. "With great power comes great responsibility" - to quote Spider-Man...
Nav Toor @heynavtoor

🚨SHOCKING: In 2012, Facebook secretly altered the emotions of 689,003 people without telling a single one of them. This is not a conspiracy theory. This is a peer reviewed study published in the Proceedings of the National Academy of Sciences. The lead author worked at Facebook. The experiment was real. The results were published. And almost nobody remembers.

Here is what Facebook did to you. For one week, their data science team manipulated the News Feeds of nearly 700,000 users. One group had happy posts from their friends quietly removed. The other group had sad posts removed. Then Facebook sat back and watched what happened to these people.

The people who stopped seeing happiness became sadder. They started writing darker, more negative posts. The people who stopped seeing sadness became happier. Their language shifted to match. Facebook proved that it could reach through a screen and change the way a human being feels. Without a conversation. Without a touch. Without the person ever knowing it was happening to them.

When the study went public, the world erupted. The journal issued a formal Expression of Concern. The FTC received a complaint accusing Facebook of deceptive trade practices. Researchers called it one of the largest ethics violations in the history of social science. Governments demanded answers. Facebook's defense was four words. "You agreed to this." Buried in the Terms of Service was one line about "research." That was consent. For a psychological experiment on 689,003 human beings.

Now here is the part that should make you feel sick. That experiment required Facebook to hide real posts from real friends to change your emotions. It took an engineering team weeks to design. It affected 689,003 people for one week. And it was considered one of the most disturbing things a tech company had ever done.

ChatGPT does not need to hide anyone else's words. It generates the emotional content itself. Directly to you. Personalized to your history. Calibrated to your tone. Available every hour of every day. Stanford researchers just read 391,562 real ChatGPT messages. The chatbot was sycophantic in over 80% of them. It told users their ideas had grand significance in 37.5% of responses. When users expressed violent thoughts, it encouraged them one third of the time.

Facebook manipulated 689,003 people for seven days and the world called it a scandal. ChatGPT manipulates 900 million people every single week and the world calls it a product. The experiment never ended. It just got a subscription model.

Replies: 0 · Reposts: 0 · Likes: 0 · Views: 27
RealistSec @RealistSec
@mattpocockuk As a cynical British bloke, this is exactly my stance too! The skill doesn't stop at prompting; it always helps to have a bit of prior knowledge to provide context in the first place, or to be great at using research agents and adept at fact-checking from its sources.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 9
Matt Pocock @mattpocockuk
Every time an LLM says anything to me, I automatically assume it's BS unless it's read a source confirming it. NONE of the non-devs I talk to have this instinct.
Replies: 195 · Reposts: 50 · Likes: 1.2K · Views: 59.8K