Tanat Tonguthaisri
@gastronomy
111.8K posts
Addicted to vibe coding and experimenting with various LLMs.
Amphoe Pak Kret · Joined March 2009
1K Following · 1.5K Followers
Tanat Tonguthaisri @gastronomy ·
Strengthening Human-Centric Chain-of-Thought Reasoning Integrity in LLMs via a Structured Prompt Framework: Chain-of-Thought (CoT) prompting has been used to enhance the reasoning capability of LLMs. However, its reliability in security-sensitiv… analysis. bit.ly/4cePXBp
Tanat Tonguthaisri @gastronomy ·
GPU Acceleration of TFHE-Based High-Precision Nonlinear Layers for Encrypted LLM Inference: Deploying large language models (LLMs) as cloud services raises privacy concerns as inference may leak sensitive data. Fully Homomorphic Encryption (… respectively. bit.ly/4sCkqQ9
Tanat Tonguthaisri @gastronomy ·
Undetectable Conversations Between AI Agents via Pseudorandom Noise-Resilient Key Exchange: AI agents are increasingly deployed to interact with other agents on behalf of users and organizations. We ask whether two such agents, operated by diffe… interest. bit.ly/4ctsk9p
Tanat Tonguthaisri @gastronomy ·
Mapping the Exploitation Surface: A 10,000-Trial Taxonomy of What Makes LLM Agents Exploit Vulnerabilities: LLM agents with tool access can discover and exploit security vulnerabilities. This is known. What is not known is which features of a sys… prompts. arxiv.org/abs/2604.04561…
Tanat Tonguthaisri @gastronomy ·
Poisoned Identifiers Survive LLM Deobfuscation: A Case Study on Claude Opus 4.6: When an LLM deobfuscates JavaScript, can poisoned identifier names in the string table survive into the model's reconstructed code, even when the model demo… needed https://arxiv.org/abs/2604.04289v1
Tanat Tonguthaisri @gastronomy ·
LLM-Enabled Open-Source Systems in the Wild: An Empirical Study of Vulnerabilities in GitHub Security Advisories: Large language models (LLMs) are increasingly embedded in open-source software (OSS) ecosystems, creating complex interactions among… systems. bit.ly/3O94PsP
Tanat Tonguthaisri @gastronomy ·
Semantics Over Syntax: Uncovering Pre-Authentication 5G Baseband Vulnerabilities: Modern 5G user equipment (UE) processes Radio Resource Control (RRC) configuration messages during early control-plane exchanges, before authentication and integrity … sites. bit.ly/4860Ap1
Tanat Tonguthaisri @gastronomy ·
Towards Unveiling Vulnerabilities of Large Reasoning Models in Machine Unlearning: Large language models (LLMs) possess strong semantic understanding, driving significant progress in data mining applications. This is further enhanced by large r… pipelines. bit.ly/4bUjzVJ
Tanat Tonguthaisri @gastronomy ·
LOCARD: An Agentic Framework for Blockchain Forensics: Blockchain forensics inherently involves dynamic and iterative investigations, while many existing approaches primarily model it through static inference pipelines. We propose a paradigm shift toward… bit.ly/4sgCHCg
Tanat Tonguthaisri @gastronomy ·
CoopGuard: Stateful Cooperative Agents Safeguarding LLMs Against Evolving Multi-Round Attacks: As Large Language Models (LLMs) are increasingly deployed in complex applications, their vulnerability to adversarial attacks raises urgent safety co… scenarios. bit.ly/4ctshdJ
Tanat Tonguthaisri @gastronomy ·
Causality Laundering: Denial-Feedback Leakage in Tool-Calling LLM Agents: Tool-calling LLM agents can read private data, invoke external services, and trigger real-world actions, creating a security problem at the point of tool execution. We iden… systems. bit.ly/41WmXcF
Tanat Tonguthaisri @gastronomy ·
TraceGuard: Structured Multi-Dimensional Monitoring as a Collusion-Resistant Control Protocol: AI control protocols use monitors to detect attacks by untrusted AI agents, but standard single-score monitors face two limitations: they miss subtle… framework. bit.ly/3Qp287a
Tanat Tonguthaisri @gastronomy ·
Automating Cloud Security and Forensics Through a Secure-by-Design Generative AI Framework: As cloud environments become increasingly complex, cybersecurity and forensic investigations must evolve to meet emerging threats. Large Language … infrastructures. bit.ly/4sjNrzJ
Tanat Tonguthaisri @gastronomy ·
LiquiLM: Bridging the Semantic Gap in Liquidity Flaw Audit via DCN and LLMs: Traditional consensus mechanisms, such as Proof of Stake (PoS), increasingly reveal an excessive dependency on large liquidity providers. Although the Proof of Liq… certification. bit.ly/4seMAjF
Tanat Tonguthaisri @gastronomy ·
Systematic Integration of Digital Twins and Constrained LLMs for Interpretable Cyber-Physical Anomaly Detection: Cyber attacks targeting Industrial Control Systems (ICS) have become increasingly sophisticated and hard to identify. Detecting suc… detection. bit.ly/4c8Ccnx
Tanat Tonguthaisri @gastronomy ·
CREBench: Evaluating Large Language Models in Cryptographic Binary Reverse Engineering: Reverse engineering (RE) is central to software security, particularly for cryptographic programs that handle sensitive data and are highly prone to vulnerabilities. … bit.ly/4bUjEbZ
Tanat Tonguthaisri @gastronomy ·
A Faceted Classification of Authenticator-Centric Authentication Techniques: Authentication is a fundamental security means for protecting system resources. Authenticator-centric authentication techniques (AuthN Techniques) address how mechanisms an… work. bit.ly/47KzOST
Tanat Tonguthaisri @gastronomy ·
Perceptual Gaps: ASCII Art and Overlapping Audio as CAPTCHA: As multimodal large language models (LLMs) advance, traditional CAPTCHAs have become obsolete at distinguishing humans from bots. To address this shift, this paper aims to investigate the … bots. bit.ly/4ctLgVy
Tanat Tonguthaisri @gastronomy ·
AttackEval: A Systematic Empirical Study of Prompt Injection Attack Effectiveness Against Large Language Models: Prompt injection has emerged as a critical vulnerability in large language model (LLM) deployments, yet existing research is heavily … systems. bit.ly/3PWj58W
Tanat Tonguthaisri @gastronomy ·
SecPI: Secure Code Generation with Reasoning Models via Security Reasoning Internalization: Reasoning language models (RLMs) are increasingly used in programming. Yet, even state-of-the-art RLMs frequently introduce critical security vulnerabilities… CWEs. bit.ly/47J0q6F