Yen-Ting Lin
Yen-Ting Lin is a Ph.D. candidate at National Taiwan University, advised by Professor Yun-Nung (Vivian) Chen. His research focuses on large language models (LLMs), and he is currently leading the Taiwan-LLM project, which develops language models optimized for Traditional Chinese, addressing the unique linguistic needs of Taiwan.
He has gained industry research experience through internships at Meta GenAI (summer 2024), NVIDIA Research (spring 2024), Amazon Alexa AI (summers of 2021, 2022, and 2023), IBM Research (summer 2020), and MediaTek Research (summer 2019).
Yen-Ting is actively seeking industry research scientist or engineer positions starting in 2025.
Taiwan
Company / Organisation – National Taiwan University
Session
[ONLINE presentation] This talk introduces TAIWAN-LLM, a pioneering large language model specifically designed for Traditional Chinese as used in Taiwan. We'll discuss how TAIWAN-LLM addresses the underrepresentation of Traditional Chinese in existing language models, bridging a linguistic and cultural divide. The presentation will cover our approach to developing a culturally aligned model, including the use of a comprehensive Taiwanese corpus, instruction fine-tuning, and the incorporation of real user feedback. We'll share evaluation results demonstrating TAIWAN-LLM's superior performance in understanding and generating Traditional Chinese text compared to existing models.