NTU SPML Lab

73 posts


@ntu_spml

Speech Processing and Machine Learning Lab at National Taiwan University

Joined May 2021
8 Following · 315 Followers
NTU SPML Lab retweeted
Hung-yi Lee (李宏毅) @HungyiLee2
Join us for the Dynamic-SUPERB call-for-tasks event. Submit your innovative task to challenge speech foundation models that can understand task instructions. Let's push the boundaries of what speech foundation models can do! github.com/dynamic-superb…
NTU SPML Lab retweeted
Hung-yi Lee (李宏毅) @HungyiLee2
Join the Webinar Series for Advancements in Audio, Speech and Language Technology. Next up: "End-to-End Automatic Speech Recognition" by Dr. Jinyu Li from Microsoft on May 10 @ 1:00 pm EDT (May 11 @ 1:00 am Taiwan time) Register now: ieee.webex.com/weblink/regist…
NTU SPML Lab retweeted
Hung-yi Lee (李宏毅) @HungyiLee2
Fine-tuning the LLaMA-2-Chat model may degrade its original capabilities (arxiv.org/abs/2401.03129). But here's a lifeline: Chat Vector (arxiv.org/abs/2310.04799) preserves a chat model's original capabilities (it also works on Mistral). Recommended for everyone fine-tuning their LLMs.
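The chat-vector idea referenced above can be sketched in a few lines. This is a toy illustration with small weight arrays, not the paper's implementation: real use applies the same arithmetic to full model state dicts, and all variable names here are illustrative assumptions.

```python
import numpy as np

# Toy "models" as dicts of weight arrays (real models: full state dicts).
base = {"w": np.array([1.0, 2.0])}   # base pretrained model
chat = {"w": np.array([1.5, 2.5])}   # its chat-aligned counterpart
tuned = {"w": np.array([0.8, 2.2])}  # base model fine-tuned on new data

# The chat vector is the element-wise weight difference: chat minus base.
chat_vector = {k: chat[k] - base[k] for k in base}

# Adding the chat vector to the fine-tuned model is intended to restore
# chat ability without re-running chat alignment on the new data.
merged = {k: tuned[k] + chat_vector[k] for k in tuned}
print(merged["w"])  # [1.3 2.7]
```

The appeal is that the "chat ability" is treated as a direction in weight space that can be transplanted with simple addition, so fine-tuning and alignment can be done independently.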
NTU SPML Lab retweeted
Cheng Han Chiang (姜成翰)
🚀Thrilled to share our new paper: Merging Facts, Crafting Fallacies ✅+✅+✅→❌ arxiv.org/abs/2402.05629 🤔 Does combining factual claims form a factual paragraph? 🙅🏻 LLMs can generate nonfactual paragraphs composed of factual claims! 💯 Existing metrics can't handle this!
NTU SPML Lab retweeted
Hung-yi Lee (李宏毅) @HungyiLee2
Thrilled to see the team continuously enhancing the materials based on my online lectures! 🚀 Despite never having met them in person, their dedication truly impresses me. Check out the amazing work at github.com/datawhalechina…
NTU SPML Lab retweeted
Cheng Han Chiang (姜成翰)
📢New Paper Alert🎉 Excited to share our EACL'24 paper 🤔 Do LLMs generate redundant reasoning? 📚 We create questions that can be answered w/o calculations ➡️ LLMs tend to answer with unnecessary reasoning and calculations arxiv.org/abs/2401.11467 #eacl2024 #NLProc #LLM
NTU SPML Lab retweeted
Hung-yi Lee (李宏毅) @HungyiLee2
Excited to speak at #ASRU2023 tomorrow (December 20) at 11:30 AM (GMT+8) on "The Journey of Advancements in Speech Foundation Models"! We'll explore the evolution of speech foundation models. Below, please find the slides: drive.google.com/file/d/1ZWfnOE…
NTU SPML Lab retweeted
Hung-yi Lee (李宏毅) @HungyiLee2
Join us for an enlightening afternoon with distinguished speech researchers, Dr. Andreas Stolcke and Prof. Torbjørn Svendsen. Their talks will take place at Barry Lam Hall (博理館) (reurl.cc/krNxl9), R101 (Auditorium), NTU, on December 21st, starting at 2:20PM. #ASRU2023
NTU SPML Lab retweeted
Cheng Han Chiang (姜成翰)
🎉🌱In the early stages of my research journey, I'm humbled and honored to receive the Google PhD Fellowship🏆 So much more to learn, discover, and explore on this exciting path🚀 🙏 Infinite thanks to my advisor, @HungyiLee2, for his guiding light. This couldn't have happened without him.
Google AI @GoogleAI

In 2009, Google created the PhD Fellowship Program to recognize and support outstanding graduate students pursuing exceptional research in computer science and related fields. Today, we congratulate the recipients of the 2023 Google PhD Fellowship! goo.gle/3PYfLXl

NTU SPML Lab retweeted
Hsuan Su @jacksukk
🚀 Introducing the Prompt Benchmark Challenge (PBC) 🚀 Curious about which prompts maximize LLM performance? Join us on the quest to uncover the ultimate prompts for Large Language Models! Explore more at 👉 llm.ee.ntu.edu.tw/prompt-benchma… #PBC #LLM #Prompt
NTU SPML Lab retweeted
Hung-yi Lee (李宏毅) @HungyiLee2
Join us for ASRU's satellite event - the Workshop on Speech Foundation Models & Performance Benchmarks (SPARKS), on Dec 16th, 2023, in Taiwan. 📌 Paper Submission: Oct 19th 🔗 Webpage: sites.google.com/g.ntu.edu.tw/s… Tip: When registering for ASRU, tick the SPARKS option. #ASRU
NTU SPML Lab retweeted
WAVLab | @CarnegieMellon
📢 Registration for the SPARKS workshop at #ASRU2023 is now OPEN! Dive deep into speech foundation models and benchmarking. Get ready for discussions on next-gen speech tech! 🎙️ 📄 Paper Submission: 10/19 🗓️ Workshop: 12/16 Details 👉 sites.google.com/g.ntu.edu.tw/s…
NTU SPML Lab retweeted
Cheng Han Chiang (姜成翰)
I will be at #ACL2023NLP next week ✈️ to share our three papers on diverse topics. Looking forward to meeting old friends and making new ones. ✨ Stop by our poster if you want to chat! 😁
NTU SPML Lab retweeted
Hung-yi Lee (李宏毅) @HungyiLee2
Attending #ICASSP2023 in Rhodes, Greece? Don't miss the workshop on "Self-supervision in Audio, Speech & Beyond". Dive deep into the advancements in self-supervised learning. Catch me delivering the workshop keynote @ Jupiter Ballroom, 8:40 a.m. GMT+3. sites.google.com/view/icassp-sa…
NTU SPML Lab retweeted
Roger Tseng @RogerTseng
Another interesting discovery I wish to highlight is that Approach 2, when trained without any textual data, still shows signs of learning the different branching directions (a language-specific syntactic property) of English and Korean from speech directly.
NTU SPML Lab retweeted
Roger Tseng @RogerTseng
We compare them by using the segmentation determined by the unsupervised ASR model to obtain segment representations for Approach 2. With these settings, both approaches show similar performance, suggesting that good segmentation alone is sufficient for parsing speech.