Keynotes
We are excited to share that the following speakers have kindly accepted our invitation to give talks at INLG2024.
Keynote 1: Yulan He
Time: 25th Sep. 13:30-14:15
Title: Enhancing LLM Reasoning through Reflection and Refinement
Abstract: The reasoning capabilities of Large Language Models (LLMs) can be greatly improved through reflection and refinement techniques. This talk delves into several scenarios illustrating these advancements. First, we explore how LLM self-refinement, guided by multi-perspective solution seeking and self-consistency checks, produces superior results in question answering (QA) compared to traditional self-reflection approaches. Second, in the domain of student answer scoring, we decompose the scoring process into a series of binary questions that evaluate the presence of key answer elements. This method constructs a thought tree, enabling the generation of rationales and facilitating explainable scoring. Third, we address the challenge of causal event extraction. Contrary to many current findings, we observe that LLM evaluators do not align well with human evaluation results. To address this, we propose training a smaller language model on human evaluation data to serve as an evaluator and reward model, allowing us to develop a robust causal event extraction system. Lastly, in the context of mystery murder games, we demonstrate that equipping LLMs with multiple sensory inputs allows for better situational evaluation and more effective navigation in the search for suspects. The talk will conclude with an exploration of future research directions for further enhancing the reasoning capabilities of LLMs.
Short Bio: Yulan He is a Professor in Natural Language Processing at the Department of Informatics at King’s College London, UK, where she directs the NLP group. Yulan obtained her PhD degree from the University of Cambridge. She currently holds a prestigious 5-year UKRI Turing AI Fellowship. Yulan’s research interests lie in the integration of machine learning and natural language processing for text understanding. Recently, she has focused on addressing the limitations of Large Language Models (LLMs), aiming to enhance their reasoning capabilities, robustness, and explainability. She has published over 250 papers on topics such as machine reading comprehension, model interpretability and trustworthy AI, and NLP for health, finance, and education. She has received several prizes and awards for her research, including a SWSA Ten-Year Award, a CIKM Test-of-Time Award, and an AI 2020 Most Influential Scholar Honourable Mention. She served as the General Chair for AACL-IJCNLP 2022 and as a Program Co-Chair for conferences including ECIR 2024, CCL 2024, and EMNLP 2020. Her research has received support from the EPSRC, the Royal Academy of Engineering, EU-H2020, Innovate UK, the British Council, and industrial funding.
Keynote 2: Mark Riedl
Time: 26th Sep. 10:00-10:45
Title: The Quest for Automated Story Generation
Abstract: The grand challenge of automated story generation has been a persistent theme in the study of artificial intelligence since the beginning of the field. Yet after half a century, it is not yet solved. One of the reasons automated story generation has been elusive is because it touches upon many other challenging, unsolved problems in AI: planning with language, modeling and reasoning about communicative intent, language understanding, sociocultural and commonsense knowledge, and theory of mind to name a few. In this talk I will selectively walk through the history of the pursuit of automated story generation. As the field of AI itself has gone through major phases, from symbolic to learning to neural systems, so has story generation research. I will conclude the talk by speculating on where automated story generation might continue to progress, and how the pursuit of story generation may inform the larger field of artificial intelligence.
Short Bio: Dr. Mark Riedl is a Professor in the Georgia Tech School of Interactive Computing and Associate Director of the Georgia Tech Machine Learning Center. His research focuses on human-centered artificial intelligence—the development of artificial intelligence and machine learning technologies that understand and interact with human users in more natural ways. His recent work has focused on story understanding and generation, computational creativity, explainable AI, and teaching virtual agents to behave safely. His research is supported by the NSF, DARPA, ONR, the U.S. Army, U.S. Health and Human Services, Disney, Google, Meta, and Amazon. He is the recipient of a DARPA Young Faculty Award and an NSF CAREER Award.
Keynote 3: Kees van Deemter
Time: 26th Sep. 13:30-14:15
Title: Why are we in this game? Dimensions of explanatory value in NLG models
Abstract: NLG models are commonly evaluated in terms of their performance, for instance by means of computational metrics or judgements by human participants. In this talk, I will ask what additional criteria there might be in terms of which NLG models should be evaluated. In doing so, I will propose a set of criteria that emerge when we view NLG (and NLP more broadly) as a scientific enterprise whose aim is to explain what people say and how they say it; these criteria include generality, parsimony, and support from linguistic or other theories. To illustrate my proposal, I will focus on a research topic that has a long history in NLG, namely Referring Expressions Generation (REG), comparing some recent REG models in terms of the aforementioned criteria. I will conclude by asking whether the same criteria apply to application-oriented NLG as well, and what it might mean for institutional policies if the NLG community took my proposal on board.
K. van Deemter (2023) Dimensions of Explanatory Value in NLP Models. Computational Linguistics 49 (3).
Short Bio: Kees van Deemter has worked in Natural Language Processing since 1984 and on Natural Language Generation since about 1994. He likes to work with scholars in neighbouring disciplines, such as linguists, logicians, and psycholinguists. He is the author of "Not Exactly: in Praise of Vagueness" (Oxford University Press, 2010) and "Computational Models of Referring: a Study in Cognitive Science" (MIT Press, 2016). Having led Utrecht University's NLP group from 2018 until March 2024, he is currently an Emeritus Professor at Utrecht University.
Keynote 4: Koichiro Yoshino
Time: 27th Sep. 13:30-14:15
Title: Embodied Language Generation for Autonomous Robot
Abstract: With the improved performance provided by large language models, language generation systems are being used in a wide variety of applications, including robotics. In the field of robotics, language generation systems are used not only for response generation but also for robots' action planning. Embodiment is essential for utilizing language models in such robotic tasks. When performing language generation from the vast amount of real-world information available to the robot, it is important to choose which information to use and to set up a point of view from which to make that choice. This talk will focus on these points while introducing our efforts related to embodied language generation systems for robots.
Short Bio: Koichiro Yoshino is an Associate Professor at Tokyo Institute of Technology, a Team Leader at the Institute of Physical and Chemical Research (RIKEN), and an Affiliate Professor at the Nara Institute of Science and Technology (NAIST). He received his bachelor's degree in arts from Keio University in 2009, his master's degree in informatics from Kyoto University in 2011, and his Ph.D. in informatics from Kyoto University in 2014. He worked at Kyoto University as a postdoc and at NAIST as an assistant professor. Since 2024, he has held a cross-appointment as an Associate Professor at the School of Computing, Tokyo Institute of Technology, and a Team Leader at the Guardian Robot Project, RIKEN. From 2019 to 2020, he was a visiting researcher at Heinrich-Heine-Universität Düsseldorf, Germany. He works on spoken and natural language processing, especially robot dialogue systems. Dr. Yoshino has received several honors, including the best paper awards at IWSDS2020 and IWSDS2024, and the best paper award at the 1st NLP4ConvAI workshop. He is a member of IEEE-SLTC, a member of the DSTC Steering Committee, an action editor of ARR, a board member of SIGdial, and a board member of ANLP. He is a senior member of IPSJ and a member of JSAI and RSJ.