Best Paper Awards
Best Long Paper Awards
Why do language models perform worse for morphologically complex languages?
Catherine Arnett and Benjamin Bergen
Towards Understanding Multi-Task Learning (Generalization) of LLMs via Detecting and Exploring Task-Specific Neurons
Yongqi Leng and Deyi Xiong
Best Short Paper Awards
Beyond Surprisal: A Dual Metric Framework for Lexical Skill Acquisition in LLMs
Nazanin Shafiabadi and Guillaume Wisniewski
Detecting Conversational Mental Manipulation with Intent-Aware Prompting
Jiayuan Ma, Hongbin Na, Zimu Wang, Yining Hua, Yue Liu, Wei Wang and Ling Chen
Best Social Impact Paper Awards
CHIFRAUD: A Long-term Web Text Dataset for Chinese Fraud Detection
Min Tang, Lixin Zou, Zhe Jin, ShuJie Cui, Shiuan Ni Liang and Weiqing Wang
PIRsuader: A Persuasive Chatbot for Mitigating Psychological Insulin Resistance in Type-2 Diabetic Patients
Sujatha Das Gollapalli and See-Kiong Ng
Best Low-Resource Language Paper Awards
BERT-based Classical Arabic Poetry Authorship Attribution
Lama Alqurashi, Serge Sharoff, Janet Watson and Jacob Blakesley
Best Dataset Paper Awards
NYT-Connections: A Deceptively Simple Text Classification Task that Stumps System-1 Thinkers
Angel Yahir Loredo Lopez, Tyler McDonald and Ali Emami
VeritasQA: A Truthfulness Benchmark Aimed at Multilingual Transferability
Javier Aula-Blasco, Júlia Falcão, Susana Sotelo, Silvia Paniagua, Aitor Gonzalez-Agirre and Marta Villegas
Outstanding Paper Awards
Language Models Encode the Value of Numbers Linearly
Fangwei Zhu, Damai Dai and Zhifang Sui
MLLM-I2W: Harnessing Multimodal Large Language Model for Zero-Shot Composed Image Retrieval
Tong Bao, Che Liu, Derong Xu, Zhi Zheng and Tong Xu
Do Large Language Models Mirror Cognitive Language Processing?
Yuqi Ren, Renren Jin, Tongxuan Zhang and Deyi Xiong
Enhancing Emotional Support Conversations: A Framework for Dynamic Knowledge Filtering and Persona Extraction
Jiawang Hao and Fang Kong
VideoQA-TA: Temporal-Aware Multi-Modal Video Question Answering
Zhixuan Wu, Bo Cheng, Jiale Han, Jiabao Ma, Shuhao Zhang, Yuli Chen and Changbo Li
Best Demonstration Paper Awards
MuRAR: A Simple and Effective Multimodal Retrieval and Answer Refinement Framework for Multimodal Question Answering
Zhengyuan Zhu, Daniel Lee, Hong Zhang, Sai Sree Harsha, Loic Feujio, Akash Maharaj and Yunyao Li