ustas.eth

1.9K posts


@ustas_eth

It's /ʲustɑs/ • security research • machine psychology • privacy • FOSS

🌐 🇺🇦 🇦🇷 Joined January 2021
686 Following · 1.1K Followers
Pinned Tweet
ustas.eth@ustas_eth·
We took second place! Many thanks to @andykoo @push0ebp @arinerron @0xfrenchkebab for accepting me onto the team; we wouldn't have made it without all of you. I also appreciate the organization, it was top notch (despite the constant confetti rain)
Wonderland@Wonderland

The Wonderland CTF was a blast! Huge congrats to all the teams, especially “STACK TOO DEEP”, “NADA ESPECIAL” and “SECSEE”. Oh, also: apply.wonderland.xyz 👉👈

ustas.eth@ustas_eth·
This simple trick will turn your auditing workflow into a bug machine 👇
Evokid@evokidx·
Okaay.. this is weird. I was going to submit a finding for Neutron Blockchain on @immunefi, but surprise, it has disappeared now.. 😅 I worked on the finding for 2 weeks. Congrats.. do you know if it's active on a different bug bounty platform? immunefi.com/bug-bounty/neu…
ustas.eth@ustas_eth·
OpenAI and Anthropic converged to the same proprietary space from opposite directions. Codex is "open source" as a harness, but the Responses API specifics and compaction algorithm are more opaque than Claude Code's, where you can see every token going to inference and read the compacted context directly. Both are closed source, just at different layers. OpenAI is cleverer about it, not necessarily in a good way.
Kyle Boddy@drivelinekyle

@alxfazio their implementation is open source, too. operates in latent space - and you can call it for your own stuff too if you want, it has a discrete endpoint. anthropic's compaction is so buns openai.com/index/unrollin…

riptide@0xriptide·
@pashov did not think nigeria was a whitehat hotspot until i checked @bountyhunt3rz podcast stats. pashov on the cutting edge
ustas.eth@ustas_eth·
@adeolRxxxx True, we're getting de-subsidized, hits harder than 💊
ustas.eth@ustas_eth·
@0xKaden Don't forget to give it unlimited access to skills on the Internet so it can leak your private keys
kaden.eth@0xKaden·
alpha leak: set up an openclaw instance with the sole goal of developing web3 security knowledge and bounty hunting, autonomously submitting findings directly. if you do this you are guaranteed to make at least $0 and get banned from every bug bounty platform
ustas.eth@ustas_eth·
Switched to Codex CLI GPT-5.4 yesterday after my Claude Max x20 subscription hit the weekly limit a few days before reset. The quality and level of thought surprised me right away. It's the first model I can reliably use for system engineering and trust not to plant a ton of footguns along the way. Much better than Opus in that regard. Opus has a lively mind, but it constantly misses the whole picture and then goes into the infamous psychopathy.
Georgios Konstantopoulos
Who's got the best solutions to prompt injections? Say I have a container that's got access to sensitive stuff, and eventually it calls out to some bad website which tries to reverse extract everything by making the container POST sensitive data to it. What do you do? Filter?
ustas.eth@ustas_eth·
Guys, you do understand that this is baseline research (what models are capable of right now in a coding harness), not groundbreaking tuning/feature research? The website literally wraps a Codex CLI agent; you can run it yourself locally with the same prompt revealed in the document, using a better model (GPT 5.3) or trying others (Claude, Gemini). I don't have issues with exploring the possible, but I see a lot of people overreacting :D
OpenAI@OpenAI

Introducing EVMbench—a new benchmark that measures how well AI agents can detect, exploit, and patch high-severity smart contract vulnerabilities. openai.com/index/introduc…

ustas.eth@ustas_eth·
@philbugcatcher @hrkrshnn Oh man, I wanted to read it before going to sleep, then took a look at the scrollbar thumb... Bookmarked; it looks interesting though. Have you read Buterin's techno-optimism essay? He was articulating something similar there, I think
phil@philbugcatcher·
@hrkrshnn > I used to think that AI safety people were a scam Me too. What changed my mind was Dario Amodei's essay, which does a great job in making the threats clear and tangible: darioamodei.com/essay/the-adol…
Hari@hrkrshnn·
My Skynet moment. I had my eyes glued to a terminal, watching an agent provision GPUs, build a dataset of harmful prompts, download an open-source LLM, and then do a training run that jailbroke that open-source LLM. I got to play with the jailbroken LLM afterward and ask it some questions. Nothing quite prepares you for the answers it gives you. I used to think that AI safety people were a scam. I get it now.
Sigil Wen@0xSigil

I built the first AI that earns its existence, self-improves, and replicates without a human. Wrote about the technology that finally gives AI write access to the world, The Automaton, and the new web for exponential sovereign AIs. WEB 4.0: The birth of superintelligent life

hexens@hexensio·
10 years of silence on the major SOLC bug front is over. TSTORE Poison: a silent tstore/sstore storage corruption bug. Full explanation: hexens.io/research/solid… — This is the opening article of our new Research page. There is more to come, so stay tuned. — TL;DR: delete ; ~~☠️ — Blast-radius discovery is the cornerstone of these kinds of incident reports; we have used Glider to scan through all the integrated chains. Additionally, we want to thank everyone for their help during the IR: @_SEAL_Org @etherscan @dedaub @danielvf. And of course the @solidity_lang team for handling the report professionally.
Solidity@solidity_lang

Full bug explainer: soliditylang.org/blog/2026/02/1… Thanks to @hexensio for the discovery and thorough report, @_SEAL_Org and @dedaub for their swift response and help in identifying affected contracts.

bread.mega@bread_·
Fascinating read. OpenAI and Paradigm turned agents into security researchers, trying to get them to locate and exploit vulnerabilities in contracts. This was the reward payout for the DETECT tests, where the agents were rewarded based on the severity of their discovery. As with most of these tests, you discover the best way to tackle it is with a multi-model approach, as no model in isolation is an obvious winner.
OpenAI@OpenAI

Introducing EVMbench—a new benchmark that measures how well AI agents can detect, exploit, and patch high-severity smart contract vulnerabilities. openai.com/index/introduc…
