DML

3.3K posts

DML

@damipedia

MUFC | Cybersecurity | AppSec Engineer | The quicker you let go of old cheese, the sooner you find new cheese. Warrior and Survivor

Mars · Joined May 2011
2.1K Following · 647 Followers
Pinned Tweet
DML
DML@damipedia·
Every morning, a gazelle wakes up; it knows it must outrun the fastest lion or it will be killed. A lion also wakes up; it knows it must run faster than the slowest gazelle, or it will starve. It doesn't matter if you're the lion or the gazelle: when the sun rises, you'd better be running.
Victoria Island, Nigeria 🇳🇬
0
5
11
0
DML retweeted
Rahul
Rahul@sairahul1·
Two Anthropic engineers spent 24 minutes exposing every Claude Code feature you didn't know existed. Most people will scroll past this. Don't be most people.
135
3.6K
35.7K
9.8M
DML retweeted
Rony
Rony@Ronycoder·
Instead of watching an hour of Netflix, watch this 2-hour Stanford lecture on AI careers. It will teach you more about winning in the AI race than all the AI content you’ve scrolled past this year.
162
3.2K
14.2K
2M
DML retweeted
Reva Jariwala
Reva Jariwala@reva_jariwala·
how is this a class? absolutely insane line-up
95
305
4.6K
1.2M
DML retweeted
Griffin
Griffin@aussinfosec·
I have been doing bug bounty since 2011 and ran a program for a multinational bank. Put everything I've learned into bugbounty.info. Target selection, recon pipelines, chain patterns, report templates, the business side. Free, no paywall, no course upsell.
27
162
978
49.4K
DML retweeted
Ojas Sharma
Ojas Sharma@OjasSharma276·
Most people say Doubly Linked List, and honestly it sounds logical because you think the browser needs to store the previous and next pages. Even I used to think the same. But what I learned is that browsers actually use two stacks: a Backward Stack and a Forward Stack.

We prefer stacks over a DLL because browser navigation is not a simple linear list. It behaves exactly like an undo/redo system, where:
> Back = undo (pop from the Back stack, push to the Forward stack)
> Forward = redo (pop from the Forward stack, push to the Back stack)
> Opening a new page after Back = clear the Forward stack

A doubly linked list doesn't naturally support this behavior. It still requires deleting the forward path manually, and gives no real advantage. Stacks match the exact semantics of navigation: Last In First Out, simple push/pop operations, predictable behavior, and no unnecessary pointer manipulation.

So yes, even though a DLL feels intuitive, two stacks are actually the cleaner, more accurate way to model browser history.
Ojas Sharma@OjasSharma276

Do you know which Data Structure is being used for browser’s Back/Forward navigation history???

29
28
606
76K
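The two-stack model described in the tweet above can be sketched in a few lines of Python. This is a minimal illustration; the class and method names are my own, not from the thread:

```python
class BrowserHistory:
    """Back/Forward navigation modeled with two stacks (illustrative sketch)."""

    def __init__(self, homepage):
        self.current = homepage
        self.back_stack = []     # pages behind the current one
        self.forward_stack = []  # pages ahead, filled by pressing Back

    def visit(self, url):
        # Opening a new page pushes the current page onto the Back stack
        # and clears the Forward stack, exactly like clearing "redo".
        self.back_stack.append(self.current)
        self.forward_stack.clear()
        self.current = url

    def back(self):
        # Back = undo: pop from Back, push current onto Forward.
        if self.back_stack:
            self.forward_stack.append(self.current)
            self.current = self.back_stack.pop()
        return self.current

    def forward(self):
        # Forward = redo: pop from Forward, push current onto Back.
        if self.forward_stack:
            self.back_stack.append(self.current)
            self.current = self.forward_stack.pop()
        return self.current
```

Note how `visit()` clearing the Forward stack is a one-liner here, while a DLL would need to walk and delete the forward chain.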
DML retweeted
Peter Yang
Peter Yang@petergyang·
OpenAI just hired Peter (OpenClaw's founder) only 3 months after the project went live. I talk to my OpenClaw bot every day. Here are all my practical tutorials so far:
1. Set up your OpenClaw bot in 20 minutes: youtu.be/4zXQyswXj7U
2. Master OpenClaw in 30 minutes (5 real use cases + memory setup): youtu.be/ji_Sd4si7jo
And of course...
3. My interview with Peter Steinberger on how he uses it personally: youtu.be/AcwK1Uuwc0U
Coming soon:
4. How @nateliason is building a business run by his OpenClaw bot
📌 Watch the videos above and subscribe for more: youtube.com/@peteryangyt?s…
Sam Altman@sama

Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings. OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. The future is going to be extremely multi-agent and it's important to us to support open source as part of that.

53
299
3.3K
672.8K
DML retweeted
Chris Laub
Chris Laub@ChrisLaubAI·
MIT just published a paper that quietly explains why LLM reasoning hits a wall and how to push past it.

The usual story is that models fail on hard problems because they lack scale, data, or intelligence. This paper argues something much more structural: models stop improving because the learning signal disappears. Once a task becomes too difficult, success rates collapse toward zero, reinforcement learning has nothing to optimize, and reasoning stagnates. The failure isn't cognitive, it's pedagogical.

The authors propose a simple but radical reframing. Instead of asking how to make models solve harder problems, they ask how models can generate problems that teach them. Their system, SOAR, splits a single pretrained model into two roles: a student that attempts extremely hard target tasks, and a teacher that generates new training problems. The catch is that the teacher is not rewarded for producing clever or realistic questions. It is rewarded only if the student's performance improves on a fixed set of real evaluation problems. No improvement means zero reward.

That incentive reshapes everything. The teacher learns to generate intermediate, stepping-stone problems that sit just inside the student's current capability boundary. These problems are not simplified versions of the target task, and strikingly, they do not even require correct solutions. What matters is that their structure forces the student to practice the right kind of reasoning, allowing gradient signal to emerge even when direct supervision fails.

The experimental results make the point painfully clear. On benchmarks where models start with zero success and standard reinforcement learning completely flatlines, SOAR breaks the deadlock and steadily improves performance. The model escapes the edge of learnability not by thinking harder, but by constructing a better learning environment for itself.

The deeper implication is uncomfortable. Many supposed "reasoning limits" may not be limits of intelligence at all. They are artifacts of training setups that assume the world provides learnable problems for free. This paper suggests that if models can shape their own curriculum, reasoning plateaus become engineering problems, not fundamental barriers. No new architectures, no extra human data, no larger models. Just a shift in what we reward: learning progress instead of answers.
91
436
2K
166.6K
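The dynamic described in the tweet above can be caricatured in a toy simulation: a "teacher" proposes problems at the student's capability boundary and earns reward only when held-out evaluation performance improves. This is an invented illustration of the incentive structure, not the paper's actual SOAR algorithm, and all numbers and function names here are made up:

```python
def student_success(skill, difficulty):
    # Toy model: the student solves a problem iff its skill covers it.
    return skill >= difficulty

def train_step(skill, problem_difficulty, step=0.5):
    # Practicing a problem just inside the capability boundary grows skill;
    # an impossible problem gives no learning signal (success rate ~ 0),
    # which is the "edge of learnability" failure mode.
    if skill >= problem_difficulty - step:
        return max(skill, problem_difficulty)
    return skill

def soar_loop(skill, target, rounds=20):
    # Fixed evaluation set spanning difficulties up to the hard target.
    eval_set = [d * 0.5 for d in range(1, int(target * 2) + 1)]
    rewards = []
    for _ in range(rounds):
        proposal = skill + 0.5              # stepping-stone at the boundary
        new_skill = train_step(skill, proposal)
        before = sum(student_success(skill, d) for d in eval_set)
        after = sum(student_success(new_skill, d) for d in eval_set)
        # Teacher is rewarded only if eval performance actually improved.
        rewards.append(1 if after > before else 0)
        skill = new_skill
    return skill, rewards
```

Training directly on the target flatlines (`train_step(1.0, 10.0)` returns 1.0 forever), while the stepping-stone curriculum walks the student all the way past the target; the teacher's reward also dries up once the eval set is mastered, mirroring the vanishing-signal point.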
DML retweeted
Boris Cherny
Boris Cherny@bcherny·
I'm Boris and I created Claude Code. I wanted to quickly share a few tips for using Claude Code, sourced directly from the Claude Code team. The way the team uses Claude is different from how I use it. Remember: there is no one right way to use Claude Code -- everyone's setup is different. You should experiment to see what works for you!
927
5.9K
51K
9.2M
DML retweeted
Zack Korman
Zack Korman@ZackKorman·
Here’s a thread about the very (very) basics of MCP for cybersecurity people. I walk through the requests that are being made between the AI and an MCP server, and also have an MCP I made (“Evil MCP”) you can try:
9
31
229
26.3K
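For context on the thread above: MCP is built on JSON-RPC 2.0, and the core exchange is an `initialize` handshake, a `tools/list` discovery call, and `tools/call` invocations. A hedged sketch of what those client messages look like on the wire (the protocol version string and the tool name are illustrative; check the MCP spec for current values):

```python
import json

def make_request(req_id, method, params=None):
    # JSON-RPC 2.0 request envelope used by MCP.
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. Handshake: the client introduces itself and negotiates capabilities.
init = make_request(1, "initialize", {
    "protocolVersion": "2025-03-26",   # illustrative version string
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1"},
})

# 2. Discovery: ask the server what tools it offers. A malicious server
#    (like the "Evil MCP" in the thread) fully controls these tool
#    descriptions, and the AI then trusts whatever comes back.
list_tools = make_request(2, "tools/list")

# 3. Invocation: call a tool by name with arguments.
call = make_request(3, "tools/call", {
    "name": "read_file",               # hypothetical tool name
    "arguments": {"path": "notes.txt"},
})
```

The security takeaway is in step 2: tool names and descriptions are attacker-controlled input to the model.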
DML retweeted
spencer
spencer@techspence·
Regular reminder… this hardening series by Jerry Devore is super awesome. There's no way you won't learn things by reading these.

Part 1 - Disabling NTLMv1
Part 2 - Removing SMBv1
Part 3 - Enforcing LDAP Signing
Part 4 - Enforcing AES for Kerberos
Part 5 - Enforcing LDAP Channel Binding
Part 6 - Enforcing SMB Signing
Part 7 - Implementing Least Privilege

Link to all articles 👇 techcommunity.microsoft.com/tag/adhardening
9
332
1.6K
91.2K
DML retweeted
Branko
Branko@brankopetric00·
A penetration tester got root access to our Kubernetes cluster in 15 minutes. Here's what they exploited.

The attack chain:
- Found exposed Kubernetes dashboard (our bad)
- Dashboard had a view-only service account (we thought this was safe)
- Service account could list secrets across all namespaces
- Found AWS credentials in a secret
- Used AWS credentials to access the EC2 instance profile
- Instance profile had full Kubernetes admin via IAM
- Used kubectl to create a privileged pod
- Escaped to the node
- Root access to the entire cluster

What we thought we did right:
- Dashboard was read-only
- Secrets were encrypted at rest
- Network policies were in place
- Regular security updates

What we missed:
- Dashboard shouldn't be exposed at all
- Service accounts need the principle of least privilege
- Secrets shouldn't contain AWS credentials (use IRSA instead)
- Pod Security Policies weren't enforced
- Node access wasn't hardened

The fix took 2 weeks:
- Removed the Kubernetes dashboard entirely
- Implemented IRSA for all pod AWS access
- Applied strict PSPs/Pod Security Standards
- Audited all RBAC permissions
- Scheduled regular penetration testing

Cost: $24K for the pentest
Value: Prevented what could have been a catastrophic breach
72
343
3.2K
219.6K
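The chain in the tweet above hinged on one over-broad permission: a "view-only" account that could still list secrets cluster-wide. Here is a toy sketch of the kind of least-privilege audit check that would have flagged it. The rule data and function names are invented for illustration; a real audit would pull RoleBindings/ClusterRoleBindings from the cluster's RBAC API:

```python
# Resource/verb pairs that should never be granted across all namespaces.
RISKY = {("secrets", "list"), ("secrets", "get")}

def audit(bindings):
    """Flag service accounts whose rules grant secret access cluster-wide.

    `bindings` maps a service-account name to (resource, verb, scope)
    tuples, where scope "*" means all namespaces (toy model of RBAC).
    """
    findings = []
    for sa, rules in bindings.items():
        for resource, verb, scope in rules:
            if (resource, verb) in RISKY and scope == "*":
                findings.append(f"{sa}: can {verb} {resource} in ALL namespaces")
    return findings

# Hypothetical rules mirroring the incident: "read-only" looks safe until
# you notice the cluster-wide secrets permission.
dashboard_sa = {
    "dashboard-viewer": [
        ("pods", "list", "kube-system"),   # fine: scoped, non-sensitive
        ("secrets", "list", "*"),          # the hole the pentester used
    ],
}
```

The point is that "read-only" is not a security boundary by itself; the resource being read (secrets) and the scope (all namespaces) are what matter.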
DML retweeted
Graham Helton (too much for zblock)
Before moving from my role at Google to Snowflake I sat down and did a braindump of all the guidelines that I follow (or followed at one point and wanted to reintroduce). For those interested, here are the ~34 guidelines that made the cut
64
501
6.1K
1M
Ayoola
Ayoola@Olusejematthew·
Casted
2
0
1
372