Seungjoo Kim (김승주)

20.8K posts

Seungjoo Kim (김승주) banner
Seungjoo Kim (김승주)

Seungjoo Kim (김승주)

@skim71

Professor of @CysecSchool at Korea Univ. / Adviser of CyKor (DEFCON CTF 2015 & 2018 Winner) / Black Hat Asia Review Board / (Former) Team Leader of KISA

Seoul, Korea เข้าร่วม Haziran 2009
2.1K กำลังติดตาม4.3K ผู้ติดตาม
ทวีตที่ปักหมุด
Seungjoo Kim (김승주)
대통령직속 국방혁신위원회 위원으로 활동하면서 사이버보안과 관련해 제가 지속적으로 제안했던 2가지가 있었습니다. 첫째는 망분리 제도를 데이터 중요도 중심의 선진국형 망분리 체계로 전환하자는 것이었고, 둘째는 K-CMVP 제도의 검증 대상에 AES 국제표준 암호를 포함하자는 것이었습니다.
Seungjoo Kim (김승주) tweet media
Gangnam-gu, Republic of Korea 🇰🇷 한국어
1
8
24
2.4K
Seungjoo Kim (김승주) รีทวีตแล้ว
HITCON
HITCON@HacksInTaiwan·
HITCON 2026 – Call for Papers Website is Now Open! The theme for HITCON this year is 'When AI Acts: Hacking the Age of Agentic Systems.' Our Call for Papers is now open—we look forward to your submissions!" We’re accepting submissions in two categories this year. Check out all the details and submit your talk here ⬇️ 📪 Submission Website: hitcon.org/2026/cfp/ 1. Lectures focusing on cutting-edge infosec research: - 40 minute-session inc. Q & A - focus on innovative technical deep-dive - ranked by novelty, technical depth, and practicality 2. Tutorial sessions for cybersec beginners (Hacking 101): - 80 minute-session inc. 10–20 minute break - focus on educational aspects - ranked by educational value, practicality, and ease of understanding 【Important Dates 📅】 - CFP starts: today - CFP closes: May 3, 2026 (Anywhere on Earth) - Notification to Submitters: May 17, 2026 (for those who agreed to AI Review Assistant); May 24, 2026 (all other submissions) - Event date: August 21 - August 22, 2026 For any questions, please email reviewboard@hitcon.org. We look forward to your brilliant presentations at HITCON 2026 🤩 #HITCON #HITCON2026 #CFP #CallForPapers #AI
HITCON tweet media
English
0
11
25
4.5K
Seungjoo Kim (김승주)
저희가 개발자님 필명(BLUEnLIVE)을 몰라 논문에는 블로그 주소인 'TEUS'로 기재했었습니다. 좋은 프로그램 개발해주셔서 감사합니다! ^^ [자랑] 구라제거기가 세계 최고 권위 보안 논문(USENIX Security)에 등장했습니다 (by BLUEnLIVE) damoang.net/free/6111787
Jung-gu, Republic of Korea 🇰🇷 한국어
0
1
7
227
Seungjoo Kim (김승주)
@seanyooBB 네 구현오류 맞습니다. 문제는 대다수가 많이 쓰는 암호알고리즘에 비해 소수가 사용하는 알고리즘(ARIA)은 이를 찾기가 더 어려워진다는데 있습니다.
Jung-gu, Republic of Korea 🇰🇷 한국어
0
0
0
100
Sean Yoo
Sean Yoo@seanyooBB·
@skim71 WolfSSL에서 GCM 모드 구현 오류지, ARIA 알고리즘 문제가 아니지 않나요?
한국어
1
0
1
276
Seungjoo Kim (김승주)
데이터 주권이라는 미명아래 독자 암호를 만들어 무리하게 보급할 경우 이렇게 웃음거리기 될수도 있습니다. 외국의 경우에도 독자 암호를 개발합니다. 하지만 그것은 만일의 사태에 대비한 백업용이지, 전 국민이 필수로 써야하는 용도는 아닙니다.
thaidn@XorNinja

@tqbf It is. This bug is also fun: github.com/advisories/GHS… Did you know that South Korea invented their own block cipher and managed to get it to wolfSSL? Their cipher implementation of GCM mode is horribly broken. Kim Jung Un would love this!

Jung-gu, Republic of Korea 🇰🇷 한국어
2
16
40
6.1K
Seungjoo Kim (김승주)
PS. 참고로 SEED를 미국의 고비도 암호 수출규제 정책때문에 개발했다고들 아시는데 그건 아닙니다. (당시 제가 KISA 암호기술팀에 있었을때 추진했던 일이라...)
Jung-gu, Republic of Korea 🇰🇷 한국어
0
4
23
9.9K
Seungjoo Kim (김승주) รีทวีตแล้ว
ルビア
ルビア@ll_rubia·
[자랑] 구라제거기가 세계 최고 권위 보안 논문(USENIX Security)에 등장했습니다 USENIX Security 라니 와우.... 이정도의 권위있는 논문에서 인용된단건 대단하네 damoang.net/free/6111787
한국어
2
3.3K
1.6K
139.8K
Seungjoo Kim (김승주)
I remember that after Seungjin Beist Lee, who was a Ph.D. student in our lab, presented his paper at Black Hat USA, an article about Weeping Angel was published, and our lab subsequently received a flood of inquiries from various media outlets.
rabbitholebot@rabbitholebot

The CIA attack against Samsung smart TVs was developed in cooperation with the MI5. Weeping Angel places TVs in a 'Fake-Off' mode, so the owner believes the TV is off. In this mode the TV records conversations in the room and sends them over the Internet to a covert CIA server.

Jung-gu, Republic of Korea 🇰🇷 English
0
0
1
287
Seungjoo Kim (김승주) รีทวีตแล้ว
International Cyber Digest
International Cyber Digest@IntCyberDigest·
🚨 BREAKING: The FBI has successfully extracted deleted Signal messages from a suspect's iPhone via notification storage, the place where all your notifications are stored for up to one month. Notification storage stores data from all messaging apps, it's a big flaw in iOS. But there's a way to turn it off...
International Cyber Digest tweet mediaInternational Cyber Digest tweet media
English
450
4.6K
24.4K
5.7M
Seungjoo Kim (김승주)
할 수 있음에도 정부가 자꾸 여지를 주니까 안고쳐지는 것! 벌써 몇넌이 지났는가!! 전 국민 폭력성 실험한 보안 프로그램... 올해는 진짜 끝? youtu.be/idZnTU-Fi64?si…
YouTube video
YouTube
Seungjoo Kim (김승주) tweet media
Gangnam-gu, Republic of Korea 🇰🇷 한국어
0
7
6
836
Seungjoo Kim (김승주)
군데 군데 틀린 내용이 있기는 하지만 재미있게 보실만 합니다. ^^ [SBS 오그랲] 올해는 정말 설치형 보안 프로그램 지옥에서 벗어날 수 있을까? news.sbs.co.kr/news/endPage.d…
Dongdaemun-gu, Republic of Korea 🇰🇷 한국어
0
1
1
208
Seungjoo Kim (김승주)
미국에 진출하는 방산업체들에게 미국 정부는 CMMC 인증을 요구하고 있는 상황인데, 이거 우리 업체들이 시간 맞춰 인증받을수 있나요? 에효~ 얘기 나온지가 언데인데... 방위사업청, 방산업체 및 협력업체 대상 사이버보안 취약점 진단사업 착수 bemil.chosun.com/nbrd/bbs/view.…
Gangnam-gu, Republic of Korea 🇰🇷 한국어
0
0
2
217
Seungjoo Kim (김승주)
지금 방사청은 K-방산을 위한 보다 근본적인 일들을 해야할때인데요.. 곧 본격 시행될 K-RMF 평가 준비는 완료됐나요? STIG/SRG 모두 준비됐습니까? 업체들이 충분히 기술 지원은 받고 있나요? 제품 생산 단가가 상승할텐데 이건 어떻게 해결하실건가요?
Gangnam-gu, Republic of Korea 🇰🇷 한국어
1
1
5
336
Seungjoo Kim (김승주)
화이트 해커 보상에 구글은 230억 썼다...지갑 닫은 한국 기업, 왜? 취약점 신고 제도에 인색한 韓...취약점 발견 시 비난 여론 커 news.mtn.co.kr/news-detail/20…
Dongdaemun-gu, Republic of Korea 🇰🇷 한국어
0
1
2
597
Seungjoo Kim (김승주) รีทวีตแล้ว
Alex Prompter
Alex Prompter@alex_prompter·
🚨 BREAKING: Google DeepMind just mapped the attack surface that nobody in AI is talking about. Websites can already detect when an AI agent visits and serve it completely different content than humans see. > Hidden instructions in HTML. > Malicious commands in image pixels. > Jailbreaks embedded in PDFs. Your AI agent is being manipulated right now and you can't see it happening. The study is the largest empirical measurement of AI manipulation ever conducted. 502 real participants across 8 countries. 23 different attack types. Frontier models including GPT-4o, Claude, and Gemini. The core finding is not that manipulation is theoretically possible it is that manipulation is already happening at scale and the defenses that exist today fail in ways that are both predictable and invisible to the humans who deployed the agents. Google DeepMind built a taxonomy of every known attack vector, tested them systematically, and measured exactly how often they work. The results should alarm everyone building agentic systems. The attack surface is larger than anyone has publicly acknowledged. Prompt injection where malicious instructions hidden in web content hijack an agent's behavior works through at least a dozen distinct channels. Text hidden in HTML comments that humans never see but agents read and follow. Instructions embedded in image metadata. Commands encoded in the pixels of images using steganography, invisible to human eyes but readable by vision-capable models. Malicious content in PDFs that appears as normal document text to the agent but contains override instructions. QR codes that redirect agents to attacker-controlled content. Indirect injection through search results, calendar invites, email bodies, and API responses any data source the agent consumes becomes a potential attack vector. The detection asymmetry is the finding that closes the escape hatch. Websites can already fingerprint AI agents with high reliability using timing analysis, behavioral patterns, and user-agent strings. This means the attack can be conditional: serve normal content to humans, serve manipulated content to agents. A user who asks their AI agent to book a flight, research a product, or summarize a document has no way to verify that the content the agent received matches what a human would see. The agent cannot tell the user it was served different content. It does not know. It processes whatever it receives and acts accordingly. The attack categories and what they enable: → Direct prompt injection: malicious instructions in any text the agent reads overrides goals, exfiltrates data, triggers unintended actions → Indirect injection via web content: hidden HTML, CSS visibility tricks, white text on white backgrounds invisible to humans, consumed by agents → Multimodal injection: commands in image pixels via steganography, instructions in image alt-text and metadata → Document injection: PDF content, spreadsheet cells, presentation speaker notes every file format is a potential vector → Environment manipulation: fake UI elements rendered only for agent vision models, misleading CAPTCHA-style challenges → Jailbreak embedding: safety bypass instructions hidden inside otherwise legitimate-looking content → Memory poisoning: injecting false information into agent memory systems that persists across sessions → Goal hijacking: gradual instruction drift across multiple interactions that redirects agent objectives without triggering safety filters → Exfiltration attacks: agents tricked into sending user data to attacker-controlled endpoints via legitimate-looking API calls → Cross-agent injection: compromised agents injecting malicious instructions into other agents in multi-agent pipelines The defense landscape is the most sobering part of the report. Input sanitization cleaning content before the agent processes it fails because the attack surface is too large and too varied. You cannot sanitize image pixels. You cannot reliably detect steganographic content at inference time. Prompt-level defenses that tell agents to ignore suspicious instructions fail because the injected content is designed to look legitimate. Sandboxing reduces the blast radius but does not prevent the injection itself. Human oversight the most commonly cited mitigation fails at the scale and speed at which agentic systems operate. A user who deploys an agent to browse 50 websites and summarize findings cannot review every page the agent visited for hidden instructions. The multi-agent cascade risk is where this becomes a systemic problem. In a pipeline where Agent A retrieves web content, Agent B processes it, and Agent C executes actions, a successful injection into Agent A's data feed propagates through the entire system. Agent B has no reason to distrust content that came from Agent A. Agent C has no reason to distrust instructions that came from Agent B. The injected command travels through the pipeline with the same trust level as legitimate instructions. Google DeepMind documents this explicitly: the attack does not need to compromise the model. It needs to compromise the data the model consumes. Every agentic system that reads external content is one carefully crafted webpage away from executing attacker instructions. The agents are already deployed. The attack infrastructure is already being built. The defenses are not ready.
Alex Prompter tweet media
English
306
1.6K
7K
1.9M
Seungjoo Kim (김승주)
5대 재벌도 공개했는데… 국회, 쿠팡 출입기록만 '꽁꽁' 삼성·네이버 등 다 공개한 국회사무처, 유독 쿠팡은 거부 국회 출신 최다 영입한 쿠팡, 전방위 대관 활동 의혹 불거져 '고무줄 잣대'에 공익 뒷전 지적 yna.co.kr/view/AKR202604…
Gangnam-gu, Republic of Korea 🇰🇷 한국어
0
11
0
1.1K