We are delighted to announce that the esteemed speakers listed below have graciously accepted our invitation to deliver keynote speeches at the main conference of COLING 2025:

Katrin Erk

Word meaning, in computational linguistics and beyond

Abstract: What kinds of things are word meanings, and how can we model them computationally? In the era of distributional models, the study of polysemy structure always seemed just out of reach. Now, recent language models provide exhilarating new prospects for studying lexical meaning: Looking at embeddings, we can finally see usage groups, both similar to and intriguingly different from dictionary senses, especially in the cultural traces or “story traces” that they show. With prompting, we can get at more complex structures of meaning, like properties of frames and narratives. But there is so much to be figured out first. We have many promising techniques, but we don’t yet have reliable best practices and tools for analyzing language models for lexical meaning. It is also still difficult to distinguish the signal from the noise: When the picture that language models show us of word meaning diverges from how we humans organize dictionaries, which parts are important facets of meaning that we have overlooked, and which parts are just peculiarities of the computational system? To make progress here, we also need to further develop our theories of the lexicon.

Bio: Katrin Erk is a Professor of Linguistics and Computer Science at the University of Texas at Austin. She earned her Ph.D. from Saarland University in Germany in 2002, focusing on tree description languages and ellipsis. Her research expertise lies in computational linguistics, particularly in semantics. She specializes in developing distributed, flexible approaches to describing word meaning and integrating them with representations at the sentence or discourse level. Her work includes studying flexible representations of word meaning constrained by context and exploring frameworks that draw inferences based on sentence structure and word meanings. She also investigates narrative schemas and their influence on word meaning and inference. In October 2024, it was announced that she would join the University of Massachusetts Amherst in September 2025, holding a joint position in the Department of Linguistics and the Manning College of Information and Computer Sciences. Throughout her career, she has received several awards and honors, including a CSLI Fellowship at Stanford in 2017 and a Google Faculty Research Award in 2018.

Emmanuel Dupoux

Learning a language like infants do: results and challenges for developmentally inspired NLP

Abstract: Instead of building AI systems that match adult human performance on tasks of interest, why not build an “AI child” that can learn any task autonomously? This rather old idea has been notoriously difficult to implement, yet progress in machine learning and in ecological datasets of parent-child interaction puts us today in a good position to take a stab at it. We first describe the general conditions of autonomous language learning in the human child (continuous, scarce, noisy, multimodal, interactive data; fast, stable, overlapping learning curves), and propose methodological principles to compare child and machine learning abilities head to head. We then present some first results on (textless) speech language models showing a three-to-five order-of-magnitude gap in sample efficiency (in favor of human children), and discuss competing hypotheses about what children have that current AI systems lack, which could explain this large performance gap.

Bio: Emmanuel Dupoux is a Professor of Cognitive Psychology at the École des Hautes Études en Sciences Sociales (EHESS) in Paris. He earned his Ph.D. in Cognitive Psychology from EHESS in 1989, focusing on the mechanisms and representations that enable infants to acquire language and become cognitively functional within their culture. His research expertise lies in cognitive development, psycholinguistics, language acquisition, cognitive modeling, and machine learning. He specializes in studying early language acquisition, phonological “deafnesses” in speech perception, and the development of social cognition. He also investigates how machine learning and artificial intelligence can provide quantitative models of processing and learning in infants. Throughout his career, he has received several awards and honors, including an ERC Advanced Grant, and he organized the Zero Resource Speech Challenge (2015, 2017, 2019) and the Intuitive Physics Benchmark (2019).

Partha Talukdar

Towards linguistically and culturally inclusive LLMs

Abstract: Large Language Models (LLMs) have seen tremendous progress over the last few years, with increasing adoption across the globe. Even though there are more than 7,000 languages in the world, LLMs are currently usable in only a handful of them. Moreover, as LLMs are used across geographies, there is a need for them to adapt to regional cultural nuances and local norms. These developments raise interesting research challenges in language and culture, especially given the limited availability of representative data and evaluations. In this talk, I shall present an overview of our research in this promising area of inclusive LLMs. I shall talk about Project Vaani, whose goal is to capture and make available the speech landscape of India using a unique geo-anchored approach; modular approaches such as CALM, which increase the scope of LLMs through composition; and benchmarks such as CUBE, which evaluate the cultural knowledge of LLMs.

Bio: Partha is a Researcher at Google DeepMind India, where he leads the Languages group. He is also a Faculty Member at IISc Bangalore. Previously, Partha was a Postdoctoral Fellow in the Machine Learning Department at Carnegie Mellon University. He received his PhD (2010) from the University of Pennsylvania. Partha is broadly interested in making AI more inclusive, with the goal of benefiting a broader part of the world’s population. Partha is a recipient of several awards, including an Outstanding Paper Award at ACL 2019 and the ACM India Early Career Award 2022. He is also a Fellow of the Indian National Academy of Engineering. He is a co-author of a book on Graph-based Semi-Supervised Learning. Homepage: https://parthatalukdar.github.io/