

FaRo (@faroit) · 10K posts
Audio-AI researcher at @audioshakeai (Before: @inria, @FraunhoferIIS / @uniFAU). All in 17.68% of grey
This month at #ISMIR2023, AudioShake’s Research team presented a new benchmark for automatic lyric transcription systems, one that accounts for the nuances of music. You can read more about their new paper on the AudioShake blog: audioshake.ai/post/new-bench…

Models such as Stable Diffusion are trained on copyrighted, trademarked, private, and sensitive images. Yet our new paper shows that diffusion models memorize images from their training data and emit them at generation time. Paper: arxiv.org/abs/2301.13188 👇[1/9]

