EmoLoop
This is the repository for the systems trained and evaluated in the paper *Infusing Emotions into Task-oriented Dialogue Systems: Understanding, Management, and Generation*, presented at SIGDIAL 2024.
All experiments are conducted in the ConvLab-3 environment. EmoLoop is now implemented as part of the ConvLab-3 toolkit: you can find it in the branch `emo_loop` of the official ConvLab-3 GitHub repository here.
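For reference, here is a minimal setup sketch, assuming the standard ConvLab-3 editable install (check the repository README for the authoritative steps):

```bash
# Clone ConvLab-3 and switch to the EmoLoop branch.
git clone https://github.com/ConvLab/ConvLab-3.git
cd ConvLab-3
git checkout emo_loop

# Editable install, following the usual ConvLab-3 setup (assumption).
pip install -e .
```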
Relevant Modules
- DST: `convlab/dst/emodst`
- NLG: `convlab/nlg/scbart`
- System Policy: `convlab/policy/vtrace_DPT`
- User Policy: `convlab/policy/emoUS_v2`
- EmoLLAMA: `convlab/e2e/emotod`
Please refer to the `README.md` (`README_EmoLoop.md` for `vtrace_DPT`) in each module folder for instructions on training and testing that module, including the system policy.
Model Checkpoints
Model checkpoints of modular systems can be found on Zenodo at https://zenodo.org/records/14810836.
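For illustration, a checkpoint archive can be fetched directly from the Zenodo record; the filename below is a placeholder, since the actual archive names are listed on the record page:

```bash
# Download a checkpoint archive from the Zenodo record (placeholder filename).
wget "https://zenodo.org/records/14810836/files/<checkpoint-archive>.zip?download=1" -O checkpoint.zip

# Unpack into a local checkpoints directory.
unzip checkpoint.zip -d checkpoints/
```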
For the end-to-end model, please refer to the instructions in `convlab/e2e/emotod` and the code here to reproduce the model.
Interactive Evaluation with User Simulator
To run the interactive evaluation with a user simulator and the modular system, use `evaluate.py` in the `ConvLab3/convlab/policy` folder. Specifically:

```bash
python convlab/policy/evaluate.py --model_name DDPT --model_path path_to_model_checkpoint
```
Input arguments:

- `model_name`: `DDPT`, specifying the DDPT policy architecture.
- `model_path`: path to the trained DDPT model checkpoint. This should be the path to the checkpoint including the model name prefix (e.g. `some_path/best_vtrace`, ignoring suffixes like `.pol.mdl`, `.val.mdl`, and `.optimizer`). See `ConvLab3/convlab/policy/vtrace_DPT/README_EmoLoop.md` for further explanation.
- `config_path`: a configuration JSON specifying the pipeline. This should be the same one used for training: `ConvLab3/convlab/policy/vtrace_DPT/configs/emoloop_pipeline_config.json`.

You can also pass other arguments defined in the script, for instance `--num_dialogues` to set the number of interaction dialogues and `--verbose` to print utterances. A full example follows below.
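Putting these arguments together, a full invocation might look like the following; the checkpoint path is a placeholder as above, and the dialogue count is arbitrary:

```bash
python convlab/policy/evaluate.py \
    --model_name DDPT \
    --model_path some_path/best_vtrace \
    --config_path convlab/policy/vtrace_DPT/configs/emoloop_pipeline_config.json \
    --num_dialogues 100 \
    --verbose
```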
To run the interactive evaluation with a user simulator and the end-to-end system, use `run_interaction.py` in the `ConvLab3/convlab/e2e/emotod` folder.
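For example, a minimal invocation, assuming the script runs with its default arguments (see the module README for the actual options):

```bash
python convlab/e2e/emotod/run_interaction.py
```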
Citation
```bibtex
@inproceedings{feng-etal-2024-infusing,
    title = "Infusing Emotions into Task-oriented Dialogue Systems: Understanding, Management, and Generation",
    author = "Feng, Shutong and
      Lin, Hsien-chin and
      Geishauser, Christian and
      Lubis, Nurul and
      van Niekerk, Carel and
      Heck, Michael and
      Ruppik, Benjamin Matthias and
      Vukovic, Renato and
      Gasic, Milica",
    editor = "Kawahara, Tatsuya and
      Demberg, Vera and
      Ultes, Stefan and
      Inoue, Koji and
      Mehri, Shikib and
      Howcroft, David and
      Komatani, Kazunori",
    booktitle = "Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2024",
    address = "Kyoto, Japan",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.sigdial-1.60/",
    doi = "10.18653/v1/2024.sigdial-1.60",
    pages = "699--717",
    abstract = "Emotions are indispensable in human communication, but are often overlooked in task-oriented dialogue (ToD) modelling, where the task success is the primary focus. While existing works have explored user emotions or similar concepts in some ToD tasks, none has so far included emotion modelling into a fully-fledged ToD system nor conducted interaction with human or simulated users. In this work, we incorporate emotion into the complete ToD processing loop, involving understanding, management, and generation. To this end, we extend the EmoWOZ dataset (Feng et al., 2022) with system affective behaviour labels. Through interactive experimentation involving both simulated and human users, we demonstrate that our proposed framework significantly enhances the user's emotional experience as well as the task success."
}
```