PALS Workshop@EMNLP

TAILORING AI: EXPLORING ACTIVE AND PASSIVE LLM PERSONALIZATION (PALS)
Active and Passive LLM Personalization: When to personalize and what role do humans play in the personalization process?
EMNLP 2025 in Suzhou, China on November 9

Call for Papers

Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of NLP tasks. However, the extent to which these models can and should adapt to individual users' needs remains an open question. This workshop therefore focuses on personalizing LLMs to meet individual users' needs and preferences. From the user's perspective, personalization can be either passive, where the LLM learns by observing user behavior, or active, where the user directly guides the personalization. How to personalize effectively, when to personalize, and which paradigm to apply (active vs. passive) are open questions that are important to address. Moreover, active and passive personalization each come with challenges and open questions of their own.

Successfully answering these questions requires an interdisciplinary approach, combining expertise from fields such as NLP, human-computer interaction, linguistics, cognitive science, behavioral science, psychology, and ethics. The workshop will therefore promote the interdisciplinary research needed for effective, user-first personalization and for tackling the challenges of both passive and active personalization.
🏆 Best Student Paper Award: We are excited to award the best student paper with an iPad!

Topics

We invite submissions related to, but not limited to, the following topics in the space of active and passive LLM personalization:

Schedule

All times listed are China Standard Time (CST). The workshop will take place on November 9, 2025, in room A108. We will provide hybrid/virtual options as soon as they are available.

Accepted Papers

PediaMind-R1: A Temperament-Aware Language Model for Personalized Early Childhood Care Reasoning via Cognitive Modeling and Preference Alignment
Zihe Zhang, Can Zhang, Yanheng Xu, Xin Hu, Jichao Leng
PromptTailor: Multi-turn Intent-Aligned Prompt Synthesis for Lightweight LLMs
Yizhou Xu, Janet Davis
Enhancing Rating Prediction with Off-the-Shelf LLMs Using In-Context User Reviews
Koki Ryu, Hitomi Yanaka
Modeling Layered Consciousness with Multi-Agent Large Language Models
Sang Hun Kim, Jongmin Lee, Dongkyu Park, So Young Lee, Yosep Chong
Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models
Shuzhou Yuan, Ercong Nie, Mario Tawfelis, Helmut Schmid, Hinrich Schütze, Michael Färber
When Should Agents Ask? Decision-Theoretic Adaptive Communication for LLM Agents
Yijiang River Dong, Tiancheng Hu, Zheng Hui, Caiqi Zhang, Ivan Vulić, Andreea Bobu, Nigel Collier
One-Topic-Doesn't-Fit-All: Transcreating Reading Comprehension Test for Personalized Learning
Jieun Han, Daniel Lee, Haneul Yoo, Jinsung Yoon, Junyeong Park, Suin Kim, So-Yeon Ahn, Alice Oh
BluePrint: A Social Media User Dataset for LLM Persona Evaluation and Training
Aurélien Bück-Kaeffer, Je Qin Chooi, Dan Zhao, Maximilian Puelma Touzel, Kellin Pelrine, Jean-François Godbout, Reihaneh Rabbany, Zachary Yang
Augmenting Dialog with Think-Aloud Utterances for Modeling Individual Personality Traits by LLM
Seiya Ishikura, Hiroaki Yamada, Tatsuya Hiraoka, Takenobu Tokunaga
Is Passive Expertise-Based Personalization Enough? A Case Study in AI-Assisted Test-Taking
Li Siyan, Jason Zhang, Akash Maharaj, Yuanming Shi, Yunyao Li
Is Active Persona Inference Necessary for Aligning Small Models to Personal Preferences?
Zilu Tang, Afra Feyza Akyürek, Ekin Akyürek, Derry Wijaya
Minority-Aware Satisfaction Estimation in Dialogue Systems via Preference-Adaptive Reinforcement Learning
Yahui Fu, Zi Haur Pang, Tatsuya Kawahara
Collaborative User Prompt for Personalized Generative Recommendation
Jerome Ramos, Bin Wu, Aldo Lipani
Analyzing Trade-offs Between Faithfulness and Correctness in LLM Personalization
Tiasa Singha Roy, Vishakh Padmakumar
Personality Editing for Language Models through Relevant Knowledge Editing
Seojin Hwang, Yumin Kim, Byeongjeong Kim, Donghoon Shin, Hwanhee Lee

Important Dates

Guidelines

We follow the standard *ACL template and welcome short papers (up to 4 pages) and long papers (up to 8 pages). References and appendices are not included in the page limits.
We welcome direct submissions via OpenReview, as well as submissions made through ARR. For submissions made through ARR, we will calibrate review scores so that they are appropriate for a workshop (for example, if the paper was initially intended as a full conference submission). Please include both the original submission and all ARR reviews when uploading ARR submissions.

Submit directly via OpenReview: https://openreview.net/group?id=EMNLP/2025/Workshop/PALS

See Important Dates for deadlines. All accepted papers are non-archival and may be published on preprint servers such as arXiv.

Dual Submissions

We allow submissions that are under review at other venues, but please check the dual submission policies of those venues, as they may differ.

Anonymity Period

There is no anonymity period.

Reciprocal Reviewing

We expect at least one author of each submission to serve as a reviewer. Reviewers should expect a load of no more than four papers.

Invited Speakers

Vukosi Marivate
Associate Prof @ University of Pretoria

Yang Deng
Assistant Prof @ Singapore Management University

Diyi Yang
Assistant Prof @ Stanford

Rik Koncel-Kedziorski
Apple

Hannah Rose Kirk
UK AI Safety Institute

Organizers

Katherine Metcalf
Apple

Maartje ter Hoeve
Apple

Andrew Silva
Apple

Clemencia Siro
PhD Student @ University of Amsterdam

Lucie Charlotte Magister
PhD Student @ University of Cambridge

Advisory Board

Natalie Schluter
Apple

Barry-John Theobald
Apple