
CATCH MURI

@catch_muri
The Joint MURI-AUSMURI in Cybersecurity Assurance for Teams of Computers and Humans (CATCH)

Humbled and excited to receive a MURI on the general topic of robust learning. PIs are from the US (UW, UCSD, CMU, PSU) and Australia (University of Melbourne, Macquarie University, University of Newcastle). cto.mil/wp-content/upl…

Congrats to @jhasomesh, who recently received the prestigious “Test of Time” award at the 33rd @USENIXSecurity Symposium! “This paper was highly influential,” said @kamalikac, professor of @ucsd_cse and director of the FAIR (Fundamental AI Research) team at @AIatMeta. cs.wisc.edu/2024/09/10/uw-…

It is always a source of pride when your students succeed. Recently, Matt Fredrikson (middle name "not on Twitter" :-)) got tenure at @CSDatCMU. Matt was an integral part of my group at @WisconsinCS and did some amazing work. He is also a co-founder of grayswan.ai; the company focuses on the security and safety of AI. Congrats, Matt! Proud advisor moment.


📰 Excited to be organizing a workshop on Interpretability at @NeurIPSConf'24, called 'Interpretable AI: Past, Present and Future'. Submit to our workshop for all things inherently interpretable! Submission deadline: 30 Aug 🔗 interpretable-ai-workshop.github.io Follow this account for updates!

Congratulations to @CyLab faculty members Matt Fredrikson of @S3DatCMU and Bryan Parno of @SCSatCMU, @CMU_ECE on winning “Test of Time” Awards at last week’s @USENIXSecurity Symposium in Philadelphia! Learn more about their research: cylab.cmu.edu/news/2024/08/2… #usesec24

*Must read* for anyone interested in ML security, by Nicholas Carlini. Attacks are the only way we know whether a purportedly secure system actually is secure. Moreover, I consider personal attacks like the ones he describes unacceptable in my research communities. nicholas.carlini.com/writing/2024/w…



Delighted to receive a Best Paper Award 🏆 for my latest work — FairProof: Confidential and Certifiable Fairness for Neural Networks (arxiv.org/abs/2402.12572) — at the Privacy-ILR Workshop @iclr_conf! (my 1st :) It will also be presented at @icmlconf. Slides: bit.ly/fairproof_slid…





(Second Call for Papers) Submit your work on the security of GenAI systems and applications. *Security Architectures for Generative AI (SAGAI'24)* is a new workshop at IEEE S&P this year. Full CFP: sites.google.com/view/sagai2024… Submission deadline: February 5, 2024




Our paper presents the first provable watermarking scheme for large language models (LLMs) with public detectability or verifiability: we use a private key for watermarking and a public key for watermark detection. Work with Jaiden, Sanjam, Mingyuan, Mohammad, and Saeed. Comments are most welcome; we will incorporate them in the next version. eprint.iacr.org/2023/1661.pdf
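A minimal sketch of the public/private key split the tweet describes, not the construction from the paper (eprint.iacr.org/2023/1661): it only illustrates that embedding a watermark requires a secret key while anyone holding the public key can run detection. The function names (embed_watermark, detect_watermark) and the use of Ed25519 signatures from the Python cryptography package are illustrative assumptions.

# Toy sketch (not the scheme from the paper): publicly detectable
# watermarking via an asymmetric key pair. Embedding needs the private
# key; detection needs only the public key.
# Assumes: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def embed_watermark(private_key: Ed25519PrivateKey, text: str) -> bytes:
    # Hypothetical embedding step: here we simply sign the generated text.
    # A real LLM watermark would instead bias token sampling during
    # generation using key-derived randomness.
    return private_key.sign(text.encode("utf-8"))

def detect_watermark(public_key: Ed25519PublicKey, text: str, tag: bytes) -> bool:
    # Anyone with the public key can run detection; no secret is needed.
    try:
        public_key.verify(tag, text.encode("utf-8"))
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    sk = Ed25519PrivateKey.generate()   # held by the model provider
    pk = sk.public_key()                # published, so detection is public
    output = "example LLM output"
    tag = embed_watermark(sk, output)
    print(detect_watermark(pk, output, tag))             # True
    print(detect_watermark(pk, "tampered output", tag))  # False

The point this sketch captures is the asymmetry from the tweet: the watermarking key stays with the model owner, while detection can be delegated to anyone with the public key.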


