model_type # Default: "roberta", Type of the model (Supported: "roberta", "bert", "electra")
model_name # Default: "roberta-base", Name of the model (Use -h to print a list of names)
model_path # Path to a model checkpoint
dataset_name # Default: "multiwoz21", Name of the dataset the model was trained on and/or is being applied to
local_files_only # Default: False, Set to True to load local files only. Useful for offline systems
nlu_usr_config # Path to an NLU config file. Only needed for internal evaluation
nlu_sys_config # Path to an NLU config file. Only needed for internal evaluation
nlu_usr_path # Path to an NLU model file. Only needed for internal evaluation
nlu_sys_path # Path to an NLU model file. Only needed for internal evaluation
no_eval # Default: True, Set to False if internal evaluation should be conducted
no_history # Default: False, Set to True if dialogue history should be omitted during inference
```
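For reference, these parameters map directly onto the tracker's constructor in ConvLab-3. The following is a minimal sketch only; the import path and class name are assumptions (check convlab/dst/trippy for the actual module layout), and the checkpoint path is a placeholder.
```python
# Minimal sketch: constructing the TripPy tracker with the parameters
# documented above. The import path and class name are assumptions; see
# convlab/dst/trippy for the actual definitions.
from convlab.dst.trippy import TRIPPY  # assumed export

dst = TRIPPY(
    model_type='roberta',
    model_name='roberta-base',
    model_path='/path/to/checkpoint',  # placeholder: a trained TripPy checkpoint
    dataset_name='multiwoz21',
    local_files_only=False,
    no_eval=True,       # skip internal evaluation
    no_history=False,   # keep dialogue history during inference
)
```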
# Training
TripPy can be trained on the supported datasets listed above using the original code in the official [TripPy repository](https://gitlab.cs.uni-duesseldorf.de/general/dsml/trippy-public). Clone the repository and run the appropriate DO.* script to train a TripPy DST. After training, set model_path to the preferred checkpoint to use TripPy in ConvLab-3, for example as in the sketch below.
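The example below is a sketch of that last step: it points model_path at a trained checkpoint and queries the tracker once. It assumes the tracker follows ConvLab's generic DST interface (init_session / update) and its default state dictionary; verify the exact call pattern against the tracker implementation.
```python
# Sketch only: using a trained TripPy checkpoint as the DST in ConvLab-3.
# Import path, class name, and the update() call pattern are assumptions;
# verify them against convlab/dst/trippy.
from convlab.dst.trippy import TRIPPY

dst = TRIPPY(
    model_type='roberta',
    model_path='/path/to/trained/checkpoint',  # placeholder: checkpoint from a DO.* run
    dataset_name='multiwoz21',
)

dst.init_session()
# Mirror how a pipeline agent would feed the tracker (assumption): record the
# user turn in the history, then update the dialogue state from the utterance.
dst.state['history'].append(['usr', 'I need a cheap restaurant in the centre.'])
state = dst.update('I need a cheap restaurant in the centre.')
print(state)  # updated dialogue state returned by the DST interface
```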
# Training and evaluation with PPO policy
Switch to the directory:
```
cd ../../policy/ppo
```
Edit trippy_config.json and trippy_config_eval.json as needed, e.g., set the paths to your model checkpoints.
For training, run
```
python train.py --path trippy_config.json
```
For evaluation, run
```
python train.py --path trippy_config_eval.json
```
# Paper
[TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking](https://aclanthology.org/2020.sigdial-1.4/)