
Defne Circi
@DCirci
AI Resident @LilaSciences | PhD Candidate @DukeU

📢 New Preprint from @raghavlite on Multimodal Contrastive Learning: Breaking the Batch Barrier (B3) 📢

TL;DR: Smart batch mining based on community detection achieves state of the art on the MMEB benchmark.

Preprint: arxiv.org/pdf/2505.11293
Code: github.com/raghavlite/B3
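The B3 method itself is described in the preprint and code linked above; as a rough sketch of the general idea only (not the authors' implementation), community-based batch mining for contrastive learning can be thought of as: build a similarity graph over training examples, partition it into communities, and fill each batch from within a single community so that in-batch negatives are semantically close, i.e. hard. The toy below uses connected components of a thresholded cosine-similarity graph as a stand-in for a real community-detection algorithm; all names and thresholds are illustrative assumptions.

```python
import random

import numpy as np


def mine_batches(embeddings, batch_size=4, sim_threshold=0.5, seed=0):
    """Toy batch mining via graph communities (NOT the B3 implementation).

    1. Build a similarity graph: connect examples whose cosine
       similarity exceeds `sim_threshold`.
    2. Find communities. Here this is just connected components via
       union-find; B3 uses proper community detection (see the paper).
    3. Chunk each community into batches, so in-batch negatives are
       drawn from semantically close examples.
    """
    n = len(embeddings)
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T  # pairwise cosine similarities

    # Union-find over the thresholded similarity graph.
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= sim_threshold:
                parent[find(i)] = find(j)

    # Group example indices by community.
    communities = {}
    for i in range(n):
        communities.setdefault(find(i), []).append(i)

    # Shuffle within each community, then chunk into batches.
    shuffler = random.Random(seed)
    batches = []
    for members in communities.values():
        shuffler.shuffle(members)
        for k in range(0, len(members), batch_size):
            batches.append(members[k : k + batch_size])
    return batches
```

With two well-separated clusters of embeddings, every batch this produces is drawn from a single cluster, which is the property that makes in-batch negatives hard.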
🚀 Today we are announcing the third LLM Hackathon for Applications in Materials and Chemistry (Sept 11-12).

Last year, hundreds of researchers from across the world participated, and 34 teams submitted open examples of what could be achieved with last year's AI. This year, we expect even more amazing applications built with higher-powered models, agentic frameworks, and new scientific foundation models. Your imagination is the only limit.

Designing next-gen therapeutics and medical treatments and capturing and storing energy require solving problems in materials and chemistry faster than ever. This is your opportunity not only to help build the applications that will transform the future, but to join a community that will uplift you and provide a path to gaining experience in AI for science.

How can you participate?
- We will have in-person sites around the world (likely >10 sites).
- We will also have a full-featured virtual presence: if you have an internet connection, you can participate.
- Join our Slack for access to a community of over 500 LLM practitioners.

Interested in sponsoring this event? DM me for details.
Registration details in the comments.

Are LLMs as linguistically productive and systematic as humans in morphologically rich languages? No 🤨 Our new NAACL 2025 paper (arxiv.org/abs/2410.12656) reveals a significant performance gap between LLMs and humans in linguistic creativity and morphological generalization.