
(1) Learning transducers from data has been an open problem for decades. In a new paper with @lecslab, we present a highly effective approach that learns FSTs by imitating the hidden-state geometry of an RNN.
Alexis Palmer

@lexicutioner
Computational linguist, CU Boulder Ling. Low-resource & endangered languages, lang documentation, computational discourse and semantics. Musician. she/her

The call for papers is out for the 5th edition of the Workshop on Multilingual Representation Learning, which will take place in Suzhou, China, co-located with EMNLP 2025! See details below!
Final opportunity: Multiple PhD students to start in Fall 2025. Combine neural and symbolic/interpretable models of language, vision, and action, and work with world-class advisors at @Saar_Uni, MPI Informatics, @mpi_sws, @CISPA, @DFKI. Details: neuroexplicit.org/jobs/

Breaking news: Google is winning on every AI front. This is not just about Gemini 2.5 but about a reality that OpenAI and Anthropic fans have ignored for too long. Here's a non-exhaustive list:
- Gemini 2.5 Pro is the best model in the world according to benchmarks, vibe checks, high-taste testers, and firsthand testimonies. It's also fast and cheap compared to similar models (Google offers it for free on the Gemini app!)
- Gemini 2.5 Flash (to be announced soon) is much faster and much cheaper, so it sits squarely on the cost-performance Pareto frontier for cost-efficient models.
- Gemma 3 is a highly competitive open-source model, as good as or better than Llama 4 and the DeepSeek models.
- That's just LLMs. Google is also world-class in image (Imagen 3), video (Veo 2), voice (Chirp 3), and music (Lyria), and they're integrating them all in Vertex AI.
- Deep Research with Gemini 2.5 Pro is *twice as good* as OpenAI's Deep Research, according to human testers. Other agents? Yes: Project Astra (assistant) and Project Mariner (computer interaction).
- They just launched Agent2Agent, compatible with and complementary to Anthropic's MCP, which they will build in-house as well.
- And they keep publishing papers in top journals (Nature) and going to the top conferences (ICLR, NeurIPS), whereas others jealously keep their most important work to themselves.
- That's just the AI stuff, but Google is also a consumer software company with seven products that each have 2+ billion monthly users: Search, YouTube, Gmail, Android, Chrome, Maps, and Play Store.
- A hyperscaler (Google Cloud).
- A hardware company (TPUs, Ironwood).
- And a phone company (Pixel).
How can OpenAI or Anthropic or even Meta fight such a beast? Let's wait for their responses. I'll be here to cover any newsworthy release, even if I've already made my bet on who's most likely to win. (Read the full post in the link below.)