Active and Passive LLM Personalization: When to personalize and what role do humans play in the personalization process?
Co-located with EMNLP 2025 in Suzhou, China, on November 9, 2025
Call for Papers
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of NLP tasks. However, the extent to which these models can and should adapt to individual users' needs remains an open question. This workshop therefore focuses on personalizing LLMs to meet individual users' needs and preferences. From the user's perspective, personalization can be either passive, where the LLM learns by observing user behavior, or active, where the user directly guides the personalization process. How to personalize effectively, when to personalize, and which personalization paradigm (active vs. passive) to apply are all open questions that are important to address. Moreover, active and passive personalization each pose distinct challenges and open questions in their own right.
Successfully answering these questions inherently requires an interdisciplinary approach, combining expertise from fields such as NLP, human-computer interaction, linguistics, cognitive science, behavioral science, psychology, and ethics. The workshop will therefore promote the interdisciplinary research necessary for effective, user-first personalization, driving towards solving the challenges of both passive and active personalization.
🏆 Best Student Paper Award: We are excited to award the best student paper with an iPad!
Topics
We invite submissions related, but not limited, to the following topics in the space of active and passive LLM personalization:
- Algorithmic approaches (user persona/profile creation, in-context learning, RAG, parameter tuning, user representation learning, user-specific reasoning, etc.)
- Evaluation methods for offline and online use cases
- Potential risks of personalized LLMs (creating an echo chamber, limiting user exploration)
- User studies or surveys to understand when, where, and how active vs. passive personalization is effective
- Dataset creation and curation (collecting and curating a dataset from real users, creating and validating a synthetic dataset, identifying salient samples for personalization, etc.)
Schedule
All times listed are China Standard Time (CST). The workshop will take place on November 9, 2025, in room A108. We will provide hybrid/virtual options as soon as they are available.
- 9:00 - 9:45 AM — Invited Talk: Diyi Yang
- 9:45 - 10:30 AM — Invited Talk: Yang Deng
- 10:30 - 11:00 AM — Coffee Break
- 11:00 - 11:15 AM — Oral Presentation 1
- 11:15 AM - 12:30 PM — Lightning Videos & Poster Session
- 12:30 - 2:00 PM — Lunch Break
- 2:00 - 2:45 PM — Invited Talk: Vukosi Marivate
- 2:45 - 3:30 PM — Oral Presentations
- 3:30 - 4:00 PM — Coffee Break
- 4:00 - 4:45 PM — Invited Talk: Rik Koncel-Kedziorski
- 4:45 - 5:30 PM — Invited Talk: Hannah Rose Kirk
Accepted Papers
PediaMind-R1: A Temperament-Aware Language Model for Personalized Early Childhood Care Reasoning via Cognitive Modeling and Preference Alignment
Zihe Zhang, Can Zhang, Yanheng Xu, Xin Hu, Jichao Leng
PromptTailor: Multi-turn Intent-Aligned Prompt Synthesis for Lightweight LLMs
Yizhou Xu, Janet Davis
Enhancing Rating Prediction with Off-the-Shelf LLMs Using In-Context User Reviews
Koki Ryu, Hitomi Yanaka
Modeling Layered Consciousness with Multi-Agent Large Language Models
Sang Hun Kim, Jongmin Lee, Dongkyu Park, So Young Lee, Yosep Chong
Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models
Shuzhou Yuan, Ercong Nie, Mario Tawfelis, Helmut Schmid, Hinrich Schütze, Michael Färber
When Should Agents Ask? Decision-Theoretic Adaptive Communication for LLM Agents
Yijiang River Dong, Tiancheng Hu, Zheng Hui, Caiqi Zhang, Ivan Vulić, Andreea Bobu, Nigel Collier
One-Topic-Doesn't-Fit-All: Transcreating Reading Comprehension Test for Personalized Learning
Jieun Han, Daniel Lee, Haneul Yoo, Jinsung Yoon, Junyeong Park, Suin Kim, So-Yeon Ahn, Alice Oh
BluePrint: A Social Media User Dataset for LLM Persona Evaluation and Training
Aurélien Bück-Kaeffer, Je Qin Chooi, Dan Zhao, Maximilian Puelma Touzel, Kellin Pelrine, Jean-François Godbout, Reihaneh Rabbany, Zachary Yang
Augmenting Dialog with Think-Aloud Utterances for Modeling Individual Personality Traits by LLM
Seiya Ishikura, Hiroaki Yamada, Tatsuya Hiraoka, Hiroaki Yamada, Takenobu Tokunaga
Is Passive Expertise-Based Personalization Enough? A Case Study in AI-Assisted Test-Taking
Li Siyan, Jason Zhang, Akash Maharaj, Yuanming Shi, Yunyao Li
Is Active Persona Inference Necessary for Aligning Small Models to Personal Preferences?
Zilu Tang, Afra Feyza Akyürek, Ekin Akyürek, Derry Wijaya
Minority-Aware Satisfaction Estimation in Dialogue Systems via Preference-Adaptive Reinforcement Learning
Yahui Fu, Zi Haur Pang, Tatsuya Kawahara
Collaborative User Prompt for Personalized Generative Recommendation
Jerome Ramos, Bin Wu, Aldo Lipani
Analyzing Trade-offs Between Faithfulness and Correctness in LLM Personalization
Tiasa Singha Roy, Vishakh Padmakumar
Personality Editing for Language Models through Relevant Knowledge Editing
Seojin Hwang, Yumin Kim, Byeongjeong Kim, Donghoon Shin, Hwanhee Lee
Important Dates
- ARR submission deadline: May 19, 2025
- ARR commitment deadline: September 4, 2025
- Direct submission deadline: August 1, 2025
- Notification of acceptance: September 22, 2025
- Camera-ready deadline: October 6, 2025
- Workshop date: November 9, 2025, co-located with EMNLP 2025
Guidelines
We follow the standard *ACL template, and welcome short (up to 4 pages) and long (up to 8 pages) papers. References and appendices are not included in the page limits.
We welcome direct submissions via OpenReview, as well as submissions made through ARR. For submissions made through ARR, we will calibrate scores so that they are suitable for a workshop submission (for example, if the paper was initially intended as a full conference paper). Please include both the original submission and all ARR reviews when uploading ARR submissions.
Submit directly via OpenReview: https://openreview.net/group?id=EMNLP/2025/Workshop/PALS
See Important Dates for deadlines. All accepted papers are non-archival and can be published on preprint servers, such as ArXiv.
Dual Submissions
We allow submissions that are under review at other venues, but please check the dual-submission policies of those venues, as they may differ from ours.
Anonymity Period
There is no anonymity period.
Reciprocal Reviewing
We expect at least one author of each submission to serve as a reviewer. Reviewers should expect a load of no more than four papers.
Invited Speakers
Vukosi Marivate
Associate Prof @ University of Pretoria
Yang Deng
Assistant Prof @ Singapore Management University
Diyi Yang
Assistant Prof @ Stanford
Rik Koncel-Kedziorski
Apple
Hannah Rose Kirk
UK AI Safety Institute
Organizers
Katherine Metcalf
Apple
Maartje ter Hoeve
Apple
Andrew Silva
Apple
Clemencia Siro
PhD Student @ University of Amsterdam
Lucie Charlotte Magister
PhD Student @ University of Cambridge
Advisory Board
Natalie Schluter
Apple
Barry-John Theobald
Apple