Aleks

29 posts

Aleks

@seqradev

Formal methods advocate. Seqra. Engineering OpenTaint — the open source taint analysis engine for the AI era.

Joined November 2025

41 Following · 3 Followers

Pinned Tweet
Aleks@seqradev·
🚨Seqra just open-sourced a security analyzer that your agents will love to work with. It's called OpenTaint: the open-source taint analysis engine for the AI era. github.com/seqra/opentaint Here is why you need it:
Aleks@seqradev·
@LiveOverflow We are building this kind of tool to find deeply hidden vulnerabilities in applications, using taint analysis as a powerful, configurable search engine, so that an LLM can express the vulnerability pattern it found as a config for the engine.
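The "vulnerability pattern as a config" idea can be sketched in a few lines of Python. Everything below (the rule schema, the regexes, the naive line-based matcher) is hypothetical and far simpler than a real taint engine; it only illustrates how an LLM-emitted rule could drive a deterministic search:

```python
import re

# Hypothetical rule an LLM might emit. The schema is illustrative,
# not OpenTaint's actual config format.
RULE = {
    "id": "cmd-injection-via-request-arg",
    "sources": [r"request\.args\.get\("],  # untrusted input enters here
    "sinks": [r"os\.system\("],            # dangerous operation
}

def scan(code: str, rule: dict) -> list[tuple[int, str]]:
    """Flag lines where a variable assigned from a source later reaches
    a sink. A deliberately naive, intraprocedural sketch."""
    tainted: set[str] = set()
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        # An assignment from a source taints the left-hand variable.
        m = re.match(r"\s*(\w+)\s*=\s*(.+)", line)
        if m and any(re.search(src, m.group(2)) for src in rule["sources"]):
            tainted.add(m.group(1))
        # A sink call on a tainted variable is a finding.
        for sink in rule["sinks"]:
            if re.search(sink, line) and any(
                re.search(rf"\b{v}\b", line) for v in tainted
            ):
                findings.append((lineno, rule["id"]))
    return findings

SNIPPET = """\
cmd = request.args.get("cmd")
os.system(cmd)
"""
print(scan(SNIPPET, RULE))  # → [(2, 'cmd-injection-via-request-arg')]
```

The point of the split: the LLM does the creative part (describing the pattern), while the engine does the exhaustive, repeatable part (searching the codebase).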
LiveOverflow 🔴@LiveOverflow·
I have had several discussions now about the cost and ROI of AI. And many people like to throw agents at the problem, which are incredibly expensive. My opinion is that at some point this is not sustainable. Traditional businesses operate on 10-20% margins. If you compare the cost of an agent to the cost of labour, you get massive margins. But at some point the economy will baseline, right? And then it's not justifiable to spend 10-100x on agents burning tokens if you can have optimised "deterministic" LLM workflows. Thoughts?
Jinjing Liang@JinjingLiang·
And if you're looking for open source (and free) alternatives for security scanning:
• Semgrep (@semgrep) – Fast SAST with custom rules github.com/semgrep/semgrep
• Trivy (@AquaTrivy) – Containers, SBOMs, IaC & more github.com/aquasecurity/t…
• Gitleaks (@gitleaks) – Super fast secrets detection github.com/gitleaks/gitle…
• Checkov – IaC security (Terraform, K8s, etc.) github.com/bridgecrewio/c…
• Grype (@GrypeProject) – Lightweight container vuln scanner github.com/anchore/grype
All integrate easily into GitHub / GitLab CI.
Cursor@cursor_ai·
Cursor Security Review is now available for Teams and Enterprise plans. Run two types of always-on agents: 1. Security Reviewer checks every PR for vulnerabilities and leaves comments. 2. Vulnerability Scanner runs scheduled scans of your codebase and posts findings in Slack.
Aleks@seqradev·
opentaint scan
GIF
Aleks@seqradev·
Opus 4.7 is quite good at writing rules for OpenTaint. Right now it's writing tests for XSS taint rules, checking if XSS is real at runtime using Playwright, and tuning rules to reduce false positives.
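The runtime confirmation step can be sketched without a browser. The hypothetical snippet below renders an XSS payload through a vulnerable and a fixed template and checks whether the payload survives unescaped; in the setup described above, a headless browser (Playwright) would load the page and watch for actual script execution instead:

```python
from html import escape

PAYLOAD = "<script>alert(1)</script>"

def render_unsafe(comment: str) -> str:
    # Vulnerable: user input is interpolated into HTML verbatim.
    return f"<div class='comment'>{comment}</div>"

def render_safe(comment: str) -> str:
    # Fixed: input is HTML-escaped before interpolation.
    return f"<div class='comment'>{escape(comment)}</div>"

def xss_fires(html: str) -> bool:
    # Stand-in for the browser check: does the payload reach the
    # page unescaped, where a browser would execute it?
    return PAYLOAD in html

print(xss_fires(render_unsafe(PAYLOAD)))  # True  -> finding confirmed
print(xss_fires(render_safe(PAYLOAD)))    # False -> likely false positive
```

A rule whose findings never "fire" at runtime is a candidate for tightening, which is exactly the feedback loop being described.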
Aleks@seqradev·
This is a taint rule.
[image: a taint rule]
Aleks@seqradev·
We're building an open-source taint analyzer for the AI era to make lean application security a reality. Discovering a vulnerability is only half the problem. Doing it at scale, without waste, is the other half. And for that, we need advanced formal methods more than ever.
Anthropic@AnthropicAI

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

ThePrimeagen@ThePrimeagen·
Hey, you got a cool project that you are building? Link it I want to yap about cool projects
Aleks@seqradev·
I doubt LLM scanners scale well to large codebases. Not to mention the cost and unpredictability of the results. Use LLMs to write rules for SAST, to find new kinds of vulnerabilities and to do so precisely. Use SAST tools that allow for this.
AISecHub@AISecHub

llm-sast-scanner - github.com/SunWeb3Sec/llm… A general-purpose Static Application Security Testing (SAST) skill for LLM-based code vulnerability analysis. Designed to be loaded by AI coding agents (Claude Code, OpenAI Codex, etc.) to perform structured source-to-sink taint analysis across 34 vulnerability classes.

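Structured source-to-sink taint analysis can be sketched on Python's own AST. The source/sink names below are hypothetical, and real engines are flow-, path- and context-sensitive and interprocedural; this is only the core idea:

```python
import ast

# Hypothetical source/sink names, for illustration only.
SOURCES = {"input"}          # calls that introduce untrusted data
SINKS = {"eval", "system"}   # calls that must never receive it

def find_flows(code: str) -> list[tuple[str, int]]:
    """Report (sink_name, lineno) when a variable assigned from a
    source call is later passed to a sink call. Flow-insensitive sketch."""
    tree = ast.parse(code)
    tainted: set[str] = set()
    flows = []

    def call_name(call: ast.Call) -> str:
        f = call.func
        return f.id if isinstance(f, ast.Name) else getattr(f, "attr", "")

    for node in ast.walk(tree):
        # x = input()  ->  x becomes tainted
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            if call_name(node.value) in SOURCES:
                for t in node.targets:
                    if isinstance(t, ast.Name):
                        tainted.add(t.id)
        # os.system(x) with tainted x  ->  a finding
        if isinstance(node, ast.Call) and call_name(node) in SINKS:
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id in tainted:
                    flows.append((call_name(node), node.lineno))
    return flows

SNIPPET = """\
import os
user = input()
os.system(user)
"""
print(find_flows(SNIPPET))  # → [('system', 3)]
```

The interesting question is whether this logic lives inside the LLM on every run, or inside a deterministic engine that the LLM merely configures.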
Aleks@seqradev·
Another problem is the context limit. Can you feed a whole production-size codebase to an LLM at once? How much will it cost to do that even once a week? Think about it.
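A back-of-the-envelope estimate, with made-up but plausible numbers (token density and per-token price vary a lot by model and codebase):

```python
# All constants here are assumptions, not quoted prices.
LOC = 1_000_000              # a production-size codebase
TOKENS_PER_LOC = 10          # rough token-density assumption
PRICE_PER_M_INPUT = 3.00     # hypothetical $ per 1M input tokens

tokens = LOC * TOKENS_PER_LOC
cost_per_scan = tokens / 1_000_000 * PRICE_PER_M_INPUT
print(f"{tokens:,} tokens, ${cost_per_scan:.2f} per full pass")
print(f"${cost_per_scan * 52:.2f} per year at one scan a week")
```

Even before cost, 10M tokens is far beyond typical context windows, so a "single pass" would have to be chunked, which is where cross-file flows get lost.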
Aleks@seqradev·
But can we really rely on them? Every time your agentic system sends a prompt to a model, it may get a different response. Every time. It doesn't matter how smart the model is. This is inherent.
Aleks@seqradev·
OpenTaint is our first step toward trustworthy agentic application security. AI agents are remarkably good in the security reviewer role. AI pentesting and reverse engineering tools can now find flaws that have been hidden for years. And this is fascinating.