KCL NLP
@kclnlp

114 posts

@KingsCollegeLon Natural Language Processing group

London, UK · Joined June 2024
52 Following · 133 Followers
KCL NLP @kclnlp ·
🧬 Causal Fine-Tuning under Latent Confounder Shift (led by Jialin Yu) — causal fine-tuning stays robust when latent confounders shift between training and deployment distributions. (lnkd.in/ebwAwUsn)
KCL NLP @kclnlp ·
🌊 Linear Ensembles Wash Away LLM Watermarks (led by Runcong Zhao) — Averaging outputs from 3–5 LLMs cancels watermark signals, exposing a fundamental fragility in current watermarking schemes.
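The washing-out effect can be illustrated with a toy "green-list" watermark, the common scheme where a model adds a small logit bias to a secret subset of the vocabulary. This is a minimal sketch under assumed parameters (vocabulary size, bias strength), not the paper's actual experiment: averaging logits from models with independent green lists shrinks any one model's green/non-green gap by roughly the ensemble size.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, delta = 1000, 5, 2.0  # vocab size, ensemble size, watermark logit bias

base = rng.normal(size=V)  # shared underlying logits for one decoding step
# Each model watermarks with its own independent green list (half the vocab).
greens = [rng.choice(V, size=V // 2, replace=False) for _ in range(K)]

logits = []
for g in greens:
    biased = base.copy()
    biased[g] += delta  # boost this model's green tokens
    logits.append(biased)

avg = np.mean(logits, axis=0)  # linear ensemble: average the K logit vectors

# Watermark signal = gap between green and non-green logits for model 0.
mask = np.zeros(V, dtype=bool)
mask[greens[0]] = True
solo_gap = logits[0][mask].mean() - logits[0][~mask].mean()  # ~delta
ens_gap = avg[mask].mean() - avg[~mask].mean()               # ~delta / K
print(f"green-list gap: single model {solo_gap:.2f}, ensemble {ens_gap:.2f}")
```

With 5 models the detectable gap drops to about a fifth of its original size, which is why a detector calibrated on a single model's watermark loses its signal.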
KCL NLP reposted
Hanqi Yan @yan_hanqi ·
🎉 @kclnlp led by @yulanhe is recruiting a 2-year postdoc in NLP/LLM! 🤖
🔬 Faithful reasoning and meta-reasoning with cognitive considerations, building reliable and impactful systems for high-stakes educational assessment. 🎓
🤝 In collaboration with AQA.
📅 Application deadline: 15 May 2026
🔗 Job post: lnkd.in/eHcka7ae
KCL NLP @kclnlp ·
xMemory replaces chunk-level indexing with semantic component indexing, together with a top-down retrieval strategy that improves both answer quality and token efficiency.
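The two-level idea — index by semantic components rather than raw text chunks, then retrieve top-down from coarse to fine — can be sketched as a toy lookup. The schema, entries, and function names below are illustrative assumptions, not xMemory's actual API:

```python
# Toy component-level memory: each entry is indexed by (topic, component)
# instead of being an opaque text chunk.
memory = [
    ("trip", "destination", "The user flew to Tokyo in March."),
    ("trip", "hotel", "They stayed near Shinjuku station."),
    ("work", "deadline", "The quarterly report is due on Friday."),
]

def top_down_retrieve(topic, component):
    """Level 1: narrow to the coarse topic; level 2: pick the component."""
    in_topic = [m for m in memory if m[0] == topic]
    return [text for _, slot, text in in_topic if slot == component]

print(top_down_retrieve("trip", "hotel"))
# → ['They stayed near Shinjuku station.']
```

Because retrieval returns only the matching component rather than a whole chunk, irrelevant text never enters the prompt, which is one way component indexing can improve token efficiency.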
KCL NLP @kclnlp ·
Our work on agent memory, xMemory (Beyond RAG for Agent Memory: Retrieval by Decoupling and Aggregation), has been discussed by projects such as Claude Memory, PageIndex, and OpenClaw. Glad to see this direction resonating. @HZhanghao @yulanhe @LinGui_KCL @dair_ai @VentureBeat
KCL NLP reposted
Hanqi Yan @yan_hanqi ·
🚀 20 Jan, 14:00-18:00. Join our tutorial at #AAAI26: "Structured Representation Learning: Interpretability, Robustness, and Transferability for LLMs"
Learn how structured representations can make LLMs more interpretable, robust, and transferable across tasks.
📍 AAAI 2026, Singapore
🔗 srl4llm.github.io
#AI #MachineLearning #LLMs #NLProc
KCL NLP reposted
Hanqi Yan @yan_hanqi ·
🎓 Fully-funded PhD position (starting October 2026) at King's College London!
🤖 Building robust & fair AI for early detection of treatment resistance in schizophrenia
👥 Co-supervised by Prof. James MacCabe (Psychology) & myself (Informatics)
Apply now 👇 showcase.drive-health.org.uk/project/genai-…
KCL NLP reposted
Zhenyi Shen @zhenyishen22 ·
Sparse-attention (SA) models should resemble full attention (FA) when acting as a proxy — but in practice, SA training often produces surprisingly low sparsity in its attention maps, making them potentially suboptimal.
Introducing SSA (Sparse Sparse Attention), a new pre-training method that explicitly encourages a sparser attention distribution while preserving model quality.
TL;DR — SSA delivers:
🔹 FA-mode PPL on par with FA models, while achieving SOTA performance in SA mode
🔹 State-of-the-art commonsense reasoning
🔹 Strong length extrapolation — stable performance up to 32k tokens
🔹 Smoothly adapts to different sparsity budgets, with performance improving as more tokens become visible
📄 Paper: arxiv.org/abs/2511.20102
Zhenyi Shen tweet media
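What "SA as a proxy for FA" means can be sketched with generic top-k sparse attention for a single query: the sparse output approximates the full-attention output well only when the attention distribution is genuinely peaked. This is a minimal illustration of the mismatch SSA targets, not SSA's actual pre-training objective:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def full_attention(q, K, V):
    # Standard scaled dot-product attention over all keys.
    return softmax(K @ q / np.sqrt(q.size)) @ V

def topk_sparse_attention(q, K, V, k):
    # Attend only to the k best-matching keys; the rest are masked out.
    scores = K @ q / np.sqrt(q.size)
    keep = np.argsort(scores)[-k:]
    return softmax(scores[keep]) @ V[keep]

rng = np.random.default_rng(0)
q = rng.normal(size=16)
K = rng.normal(size=(128, 16))
V = rng.normal(size=(128, 16))

full = full_attention(q, K, V)
sparse = topk_sparse_attention(q, K, V, k=16)
# Small only if most attention mass already sits on the top-16 keys --
# SSA trains the model so this approximation error stays small.
approx_error = np.linalg.norm(full - sparse)
```

If FA training leaves attention maps diffuse, `approx_error` stays large at every useful sparsity budget; encouraging sparsity during pre-training is what closes the gap.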
KCL NLP @kclnlp ·
🧠 Join us online for talks & a panel discussion on Latent Reasoning in Large Language Models
🎙️ Speakers:
Zeyuan Yang (UMass Amherst) — Machine Mental Imagery 🏆 Best Paper, ICCV KnowledgeMR
Heming Xia (PolyU) — TokenSkip: Controllable CoT Compression in LLMs
@LinGui_KCL @yulanhe
KCL NLP tweet media
KCL NLP @kclnlp ·
It's EMNLP time! 🚀 Members of KCLNLP are heading to Suzhou with 11 exciting works! The full schedule is in the image; come to our poster/oral sessions and let's talk NLP 🧵👇 @kclinformatics #NLP #AI #KCLNLP
KCL NLP tweet media