From 535c9d7d7095303fbe9e62231c295447804aa20e Mon Sep 17 00:00:00 2001
From: zqwerty <zhuq96@hotmail.com>
Date: Tue, 29 Nov 2022 18:14:44 +0800
Subject: [PATCH] update README, unfinished

---
 README.md | 304 +++++++++++++++---------------------------------------
 1 file changed, 81 insertions(+), 223 deletions(-)

diff --git a/README.md b/README.md
index 38ff5f21..150ada1a 100755
--- a/README.md
+++ b/README.md
@@ -1,282 +1,140 @@
-# ConvLab-2
-[![Build Status](https://travis-ci.com/thu-coai/ConvLab-2.svg?branch=master)](https://travis-ci.com/thu-coai/ConvLab-2)
+# ConvLab-3
 
-**ConvLab-2** is an open-source toolkit that enables researchers to build task-oriented dialogue systems with state-of-the-art models, perform an end-to-end evaluation, and diagnose the weakness of systems. As the successor of [ConvLab](https://github.com/ConvLab/ConvLab), ConvLab-2 inherits ConvLab's framework but integrates more powerful dialogue models and supports more datasets. Besides, we have developed an analysis tool and an interactive tool to assist researchers in diagnosing dialogue systems. [[paper]](https://arxiv.org/abs/2002.04793)
+![PyPI](https://img.shields.io/pypi/v/convlab) ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/convlab) ![GitHub](https://img.shields.io/github/license/ConvLab/ConvLab-3)
+
+**ConvLab-3** is a flexible dialog system platform based on a **unified data format** for task-oriented dialog (TOD) datasets. The unified format serves as the adapter between TOD datasets and models: datasets are first transformed to the unified format and then loaded by models. In this way, the cost of adapting $M$ models to $N$ datasets is reduced from $M\times N$ to $M+N$. While retaining all features of [ConvLab-2](https://github.com/thu-coai/ConvLab-2), ConvLab-3 greatly enlarges the set of supported datasets and models thanks to the unified format, and enhances the utility of the reinforcement learning (RL) toolkit for the dialog policy module. For typical usage, see our [paper](). Datasets and trained models are also available on the [Hugging Face Hub](https://huggingface.co/ConvLab).
 
 - [Installation](#installation)
 - [Tutorials](#tutorials)
-- [Documents](#documents)
+- [Unified Datasets](#Unified-Datasets)
 - [Models](#models)
-- [Supported Datasets](#Supported-Datasets)
-- [End-to-end Performance on MultiWOZ](#End-to-end-Performance-on-MultiWOZ)
-- [Module Performance on MultiWOZ](#Module-Performance-on-MultiWOZ)
-- [Issues](#issues)
-- [Contributions](#contributions)
+- [Code Structure](#Code-Structure)
+- [Contributing](#contributing)
+- [Team](#Team)
 - [Citing](#citing)
 - [License](#license)
 
 ## Updates
 
-2021.9.13:
+- **2022.11.30**: ConvLab-3 release.
 
-- Add [MultiWOZ 2.3](https://github.com/lexmen318/MultiWOZ-coref) dataset in `data` dir. The dataset adds co-reference annotations in addition to corrections of dialogue acts and dialogue states. [[paper]](https://arxiv.org/abs/2010.05594)
+## Installation
 
-2021.6.18:
+You can install ConvLab-3 in one of the following ways, according to your needs. Higher versions of `torch` and `transformers` may also work.
 
-- Add [LAUG](https://github.com/thu-coai/LAUG), an open-source toolkit for Language understanding AUGmentation. It is an automatic method to approximate the natural perturbations to existing data. Augmented data could be used to conduct black-box robustness testing or enhancing training. [[paper]](https://arxiv.org/abs/2012.15262)
-- Add [SC-GPT](https://github.com/pengbaolin/SC-GPT) for NLG. [[paper]](https://arxiv.org/abs/2002.12328)
+### Git clone and pip install in development mode (Recommended)
 
-## Installation
+For the latest and most configurable version, we recommend installing ConvLab-3 in development mode.
 
-Require python >= 3.6.
+Clone the repository (latest commit only):
 
-Clone this repository:
 ```bash
-git clone https://github.com/thu-coai/ConvLab-2.git
+git clone --depth 1 https://github.com/ConvLab/ConvLab-3.git
 ```
 
-Install ConvLab-2 via pip:
+Install ConvLab-3 via pip:
 
 ```bash
-cd ConvLab-2
+cd ConvLab-3
 pip install -e .
 ```
 
-## Tutorials
-
-- [Getting Started](https://github.com/thu-coai/ConvLab-2/blob/master/tutorials/Getting_Started.ipynb) (Have a try on [Colab](https://colab.research.google.com/github/thu-coai/ConvLab-2/blob/master/tutorials/Getting_Started.ipynb)!)
-- [Add New Model](https://github.com/thu-coai/ConvLab-2/blob/master/tutorials/Add_New_Model.md)
-- [Train RL Policies](https://github.com/thu-coai/ConvLab-2/blob/master/tutorials/Train_RL_Policies)
-- [Interactive Tool](https://github.com/thu-coai/ConvLab-2/blob/master/deploy) [[demo video]](https://youtu.be/00VWzbcx26E)
-
-## Documents
-Our documents are on https://thu-coai.github.io/ConvLab-2_docs/convlab.html.
-
-## Models
-
-We provide following models:
-
-- NLU: SVMNLU, MILU, BERTNLU
-- DST: rule, TRADE, SUMBT
-- Policy: rule, Imitation, REINFORCE, PPO, GDPL, MDRG, HDSA, LaRL
-- Simulator policy: Agenda, VHUS
-- NLG: Template, SCLSTM
-- End2End: Sequicity, DAMD, RNN_rollout
-
-For  more details about these models, You can refer to `README.md` under `convlab/$module/$model/$dataset` dir such as `convlab/nlu/jointBERT/multiwoz/README.md`.
-
-## Supported Datasets
-
-- [Multiwoz 2.1](https://github.com/budzianowski/multiwoz)
-  - We add user dialogue act (*inform*, *request*, *bye*, *greet*, *thank*), remove 5 sessions that have incomplete dialogue act annotation and place it under `data/multiwoz` dir.
-  - Train/val/test size: 8434/999/1000. Split as original data.
-  - LICENSE: Attribution 4.0 International, url: http://creativecommons.org/licenses/by/4.0/
-- [CrossWOZ](https://github.com/thu-coai/CrossWOZ)
-  - We offers a rule-based user simulator and a complete set of models for building a pipeline system on the CrossWOZ dataset. We correct few state annotation and place it under `data/crosswoz` dir.
-  - Train/val/test size: 5012/500/500. Split as original data.
-  - LICENSE: Attribution 4.0 International, url: http://creativecommons.org/licenses/by/4.0/
-- [Camrest](https://www.repository.cam.ac.uk/handle/1810/260970)
-  - We add system dialogue act (*inform*, *request*, *nooffer*) and place it under `data/camrest` dir.
-  - Train/val/test size: 406/135/135. Split as original data.
-  - LICENSE: Attribution 4.0 International, url: http://creativecommons.org/licenses/by/4.0/
-- [Dealornot](https://github.com/facebookresearch/end-to-end-negotiator/tree/master/src/data/negotiate)
-  - Placed under `data/dealornot` dir.
-  - Train/val/test size: 5048/234/526. Split as original data.
-  - LICENSE: Attribution-NonCommercial 4.0 International, url: https://creativecommons.org/licenses/by-nc/4.0/
-
-## End-to-end Performance on MultiWOZ
-
-*Notice*: The results are for commits before [`bdc9dba`](https://github.com/thu-coai/ConvLab-2/commit/bdc9dba72c957d97788e533f9458ed03a4b0137b) (inclusive). We will update the results after improving user policy.
-
-We perform end-to-end evaluation (1000 dialogues) on MultiWOZ using the user simulator below (a full example on `tests/test_end2end.py`) :
-
-```python
-# BERT nlu trained on sys utterance
-user_nlu = BERTNLU(mode='sys', config_file='multiwoz_sys_context.json', model_file='https://convlab.blob.core.windows.net/convlab-2/bert_multiwoz_sys_context.zip')
-user_dst = None
-user_policy = RulePolicy(character='usr')
-user_nlg = TemplateNLG(is_user=True)
-user_agent = PipelineAgent(user_nlu, user_dst, user_policy, user_nlg, name='user')
-
-analyzer = Analyzer(user_agent=user_agent, dataset='multiwoz')
-
-set_seed(20200202)
-analyzer.comprehensive_analyze(sys_agent=sys_agent, model_name='sys_agent', total_dialog=1000)
-```
-
-Main metrics (refer to `convlab/evaluator/multiwoz_eval.py` for more details):
-
-- Complete: whether complete the goal. Judged by the Agenda policy instead of external evaluator.
-- Success: whether all user requests have been informed and the booked entities satisfy the constraints.
-- Book: how many the booked entities satisfy the user constraints.
-- Inform Precision/Recall/F1: how many user requests have been informed.
-- Turn(succ/all): average turn number for successful/all dialogues.
-
-Performance (the first row is the default config for each module. Empty entries are set to default config.):
-
-| NLU         | DST       | Policy         | NLG         | Complete rate | Success rate | Book rate | Inform P/R/F1 | Turn(succ/all) |
-| ----------- | --------- | -------------- | ----------- | ------------- | ------------ | --------- | --------- | -------------- |
-| **BERTNLU** | RuleDST   | RulePolicy     | TemplateNLG |   90.5       |     81.3    |   91.1 | 79.7/92.6/83.5 | 11.6/12.3      |
-| **MILU**    | RuleDST | RulePolicy | TemplateNLG |    93.3       |   81.8      |   93.0    | 80.4/94.7/84.8 | 11.3/12.1      |
-| BERTNLU | RuleDST | RulePolicy | **SCLSTM**  |   48.5    | 40.2 | 56.9   | 62.3/62.5/58.7 |  11.9/27.1         |
-| BERTNLU     | RuleDST | **MLEPolicy**  | TemplateNLG |     42.7          |    35.9      |  17.6   | 62.8/69.8/62.9  |  12.1/24.1    |
-| BERTNLU | RuleDST | **PGPolicy**   | TemplateNLG |     37.4         |    31.7     |   17.4  |  57.4/63.7/56.9  |   11.0/25.3    |
-| BERTNLU | RuleDST | **PPOPolicy**  | TemplateNLG |     75.5         |    71.7    |   86.6    | 69.4/85.8/74.1  |  13.1/17.8   |
-| BERTNLU | RuleDST | **GDPLPolicy** | TemplateNLG |     49.4         |     38.4    |  20.1     |  64.5/73.8/65.6 |  11.5/21.3    |
-| None        | **TRADE** | RulePolicy | TemplateNLG |    32.4      |    20.1     |    34.7      |  46.9/48.5/44.0 |  11.4/23.9      |
-| None        | **SUMBT** | RulePolicy | TemplateNLG |   34.5       |   29.4     |   62.4    |  54.1/50.3/48.3  |   11.0/28.1     |
-| BERTNLU | RuleDST | **MDRG**       | None        | 21.6 | 17.8 | 31.2 | 39.9/36.3/34.8 | 15.6/30.5|
-| BERTNLU | RuleDST | **LaRL**       | None        | 34.8 | 27.0 | 29.6 | 49.1/53.6/47.8 |13.2/24.4|
-| None | **SUMBT** | **LaRL** | None |  32.9 | 23.7  |  25.9 | 48.6/52.0/46.7 | 12.5/24.3|
-| None | None | **DAMD***      | None | 39.5| 34.3 | 51.4 | 60.4/59.8/56.3 | 15.8/29.8 |
-
-*: end-to-end models used as sys_agent directly.
-
-## Module Performance on MultiWOZ
-
-### NLU
-
-By running `convlab/nlu/evaluate.py MultiWOZ $model all`:
-
-|         | Precision | Recall | F1    |
-| ------- | --------- | ------ | ----- |
-| BERTNLU | 82.48     | 85.59  | 84.01 |
-| MILU    | 80.29     | 83.63  | 81.92 |
-| SVMNLU  | 74.96     | 50.74  | 60.52 |
-
-### DST 
-
-By running `convlab/dst/evaluate.py MultiWOZ $model`:
+### Pip install from PyPI
 
-|             |  Joint accuracy  | Slot accuracy | Joint F1  |
-| --------    |   -------------   | -------------  | --------|
-|  MDBT       |   0.06           |      0.89       | 0.43    |
-|  SUMBT      |    0.30         |       0.96       | 0.83    |
-|   TRADE     |    0.40         |       0.96       | 0.84    |
+To use ConvLab-3 as an off-the-shelf tool, you can install it via:
 
-### Policy
-
-*Notice*: The results are for commits before [`bdc9dba`](https://github.com/thu-coai/ConvLab-2/commit/bdc9dba72c957d97788e533f9458ed03a4b0137b) (inclusive). We will update the results after improving user policy.
-
-By running `convlab/policy/evalutate.py --model_name $model`
-
-|           | Task Success Rate |
-| --------- | ----------------- |
-| MLE       | 0.56              |
-| PG        | 0.54              |
-| PPO       | 0.89              |
-| GDPL      | 0.58              |
-
-### NLG
-
-By running `convlab/nlg/evaluate.py MultiWOZ $model sys`
-
-|          | corpus BLEU-4 |
-| -------- | ------------- |
-| Template | 0.3309        |
-| SCLSTM   | 0.4884        |
-
-## Translation-train SUMBT for cross-lingual DST
-
-### Train
-
-With Convlab-2, you can train SUMBT on a machine-translated dataset like this:
-
-```python
-# train.py
-import os
-from sys import argv
-
-if __name__ == "__main__":
-    if len(argv) != 2:
-        print('usage: python3 train.py [dataset]')
-        exit(1)
-    assert argv[1] in ['multiwoz', 'crosswoz']
-
-    from convlab.dst.sumbt.multiwoz_zh.sumbt import SUMBT_PATH
-    if argv[1] == 'multiwoz':
-        from convlab.dst.sumbt.multiwoz_zh.sumbt import SUMBTTracker as SUMBT
-    elif argv[1] == 'crosswoz':
-        from convlab.dst.sumbt.crosswoz_en.sumbt import SUMBTTracker as SUMBT
-
-    sumbt = SUMBT()
-    sumbt.train(True)
+```bash
+pip install convlab
 ```
 
-### Evaluate
+### Using Docker
 
-Execute `evaluate.py` (under `convlab/dst/`) with following command:
+We also provide a [Dockerfile](https://github.com/ConvLab/ConvLab-3/blob/master/Dockerfile) for building a Docker image. It installs the dependencies from `requirements.txt` and then installs ConvLab-3 in development mode.
 
 ```bash
-python3 evaluate.py [CrossWOZ-en|MultiWOZ-zh] [val|test|human_val]
-```
+# create image
+docker build -t convlab .
 
-evaluation of our pre-trained models are: (joint acc.)
+# run container
+docker run -dit convlab
 
-| type  | CrossWOZ-en | MultiWOZ-zh |
-| ----- | ----------- | ----------- |
-| val   | 12.4%       | 48.5%       |
-| test  | 12.4%       | 46.0%       |
-| human_val | 10.6%       | 47.4%       |
-
-`human_val` option will make the model evaluate on the validation set translated by human. 
+# open bash in container
+docker exec -it CONTAINER_ID bash
+```
 
-Note: You may want to download pre-traiend BERT models and translation-train SUMBT models provided by us.
+## Tutorials
 
-Without modifying any code, you could:
+- [Getting Started](https://github.com/thu-coai/ConvLab-2/blob/master/tutorials/Getting_Started.ipynb) (Have a try on [Colab](https://colab.research.google.com/github/thu-coai/ConvLab-2/blob/master/tutorials/Getting_Started.ipynb)!) 
+- [Introduction to unified data format](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets)
+- [Utility functions for unified datasets](https://github.com/ConvLab/ConvLab-3/blob/master/convlab/util/unified_datasets_util.py)
+- How to add a new dataset
+- [How to add a new model](https://github.com/thu-coai/ConvLab-2/blob/master/tutorials/Add_New_Model.md)
+- [How to use RL toolkit](https://github.com/thu-coai/ConvLab-2/blob/master/tutorials/Train_RL_Policies)
+- [Interactive tool](https://github.com/thu-coai/ConvLab-2/blob/master/deploy) [[demo video]](https://youtu.be/00VWzbcx26E)
+
+## Unified Datasets
+
+Datasets currently available in the unified data format (DA-U/DA-S stand for user/system dialog acts):
+
+| Dataset       | Dialogs | Goal               | DA-U               | DA-S               | State              | API result         | Database           |
+| ------------- | ------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
+| Camrest       | 676     | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |                    | :white_check_mark: |
+| WOZ 2.0       | 1200    |                    | :white_check_mark: |                    | :white_check_mark: |                    |                    |
+| KVRET         | 3030    |                    | :white_check_mark: |                    | :white_check_mark: | :white_check_mark: |                    |
+| DailyDialog   | 13118   |                    | :white_check_mark: |                    |                    |                    |                    |
+| Taskmaster-1  | 13175   |                    | :white_check_mark: | :white_check_mark: | :white_check_mark: |                    |                    |
+| Taskmaster-2  | 17303   |                    | :white_check_mark: | :white_check_mark: | :white_check_mark: |                    |                    |
+| MultiWOZ 2.1  | 10438   | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |                    | :white_check_mark: |
+| Schema-Guided | 22825   |                    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |                    |
+| MetaLWOZ      | 40203   | :white_check_mark: |                    |                    |                    |                    |                    |
+| CrossWOZ (zh) | 6012    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Taskmaster-3  | 23757   |                    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |                    |
+
+Unified datasets are available under the `data/unified_datasets` directory as well as on the [Hugging Face Hub](https://huggingface.co/ConvLab). We will continue adding the datasets listed in [this issue](https://github.com/ConvLab/ConvLab-3/issues/11). If you want to add a listed or custom dataset to ConvLab-3, please create an issue for discussion first and then create a pull request. We will list you among the [contributors](#Team) and highly appreciate your contribution!
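+
+As a quick example, the sketch below loads a dataset in the unified format using the utility functions linked in [Tutorials](#tutorials). It assumes a `load_dataset` helper lives in `convlab/util/unified_datasets_util.py` and returns a dict mapping split names to lists of dialogs; check that file for the exact interface.
+
+```python
+from convlab.util.unified_datasets_util import load_dataset
+
+# download (or load a cached copy of) a unified-format dataset by name
+dataset = load_dataset('multiwoz21')
+
+# assumption: each split is a list of dialogs, each holding a list of turns
+print(dataset.keys())
+print(dataset['test'][0]['turns'][0]['utterance'])
+```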
 
-- download pre-trained BERT models from:
+## Models
 
-  - [bert-base-uncased](https://huggingface.co/bert-base-uncased)  for CrossWOZ-en
-  - [chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext)  for MultiWOZ-zh
+We list the models newly integrated in ConvLab-3 that support the unified data format and achieve strong performance. You can follow the links for more details about these models. Other models can be used in the same way as in ConvLab-2.
 
-  extract it to `./pre-trained-models`.
+| Task                           | Models                                                       | Input           | Output           |
+| ------------------------------ | ------------------------------------------------------------ | --------------- | ---------------- |
+| Response Generation            | [T5](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/base_models/t5) | Context         | Response         |
+| Goal-to-Dialog                 | [T5](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/base_models/t5) | Goal            | Dialog           |
+| Natural Language Understanding | [T5](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/base_models/t5), [BERTNLU](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/nlu/jointBERT), [MILU](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/nlu/milu) | Context         | DA-U             |
+| Dialog State Tracking          | [T5](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/base_models/t5), SUMBT, SetSUMBT, TripPy | Context         | State            |
+| RL Policy                      | DDPT, PPO, PG                                                | State, DA-U, DB | DA-S             |
+| Natural Language Generation    | [T5](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/base_models/t5), SC-GPT | DA-S            | Response         |
+| End-to-End                     | SOLOIST                                                      | Context, DB     | State, Response  |
+| User simulator                 | TUS, GenTUS                                                  | Goal, DA-S      | DA-U, (Response) |
 
-- for translation-train SUMBT model:
+Trained models are available on [Hugging Face Hub](https://huggingface.co/ConvLab).
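+
+Because ConvLab-3 retains the ConvLab-2 interfaces, individual modules can still be combined into a complete dialog system via `PipelineAgent`. The following is a minimal sketch assuming the ConvLab-2 module paths carry over under the renamed `convlab` package and that the default constructors load the released models; see each model's README for the exact usage.
+
+```python
+# minimal sketch: a ConvLab-2-style pipeline under the `convlab` package
+from convlab.nlu.jointBERT.multiwoz import BERTNLU
+from convlab.dst.rule.multiwoz import RuleDST
+from convlab.policy.rule.multiwoz import RulePolicy
+from convlab.nlg.template.multiwoz import TemplateNLG
+from convlab.dialog_agent import PipelineAgent
+
+sys_nlu = BERTNLU()                   # NLU: utterance -> user dialog acts
+sys_dst = RuleDST()                   # DST: dialog acts -> belief state
+sys_policy = RulePolicy()             # policy: state -> system dialog acts
+sys_nlg = TemplateNLG(is_user=False)  # NLG: system dialog acts -> response
+sys_agent = PipelineAgent(sys_nlu, sys_dst, sys_policy, sys_nlg, name='sys')
+
+print(sys_agent.response('I want a cheap hotel in the north of town.'))
+```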
 
-  - [trained on CrossWOZ-en](https://convlab.blob.core.windows.net/convlab-2/crosswoz_en-pytorch_model.bin.zip)
-  - [trained on MultiWOZ-zh](https://convlab.blob.core.windows.net/convlab-2/multiwoz_zh-pytorch_model.bin.zip)
-  - Say the data set is CrossWOZ (English), (after extraction) just save the pre-trained model under `./convlab/dst/sumbt/crosswoz_en/pre-trained` and name it with `pytorch_model.bin`. 
+## Code Structure
 
-## Issues
 
-You are welcome to create an issue if you want to request a feature, report a bug or ask a general question.
 
-## Contributions
+## Contributing
 
-We welcome contributions from community.
+We welcome contributions from the community. Please see the issues to find out what we need.
 
-- If you want to make a big change, we recommend first creating an issue with your design.
-- Small contributions can be directly made by a pull request.
-- If you like make contributions to our library, see issues to find what we need.
+- If you want to add a new dataset, model, or other feature, please describe the dataset/model/feature in an issue before creating a pull request.
+- Small changes like bug fixes can be made directly via a pull request.
 
 ## Team
 
-**ConvLab-3** is maintained and developed by Tsinghua University Conversational AI group (THU-coai), the [Dialogue Systems and Machine Learning Group](https://www.cs.hhu.de/en/research-groups/dialog-systems-and-machine-learning.html) at Heinrich Heine University, Düsseldorf, Germany and Microsoft Research (MSR).
+**ConvLab-3** is maintained and developed by [Tsinghua University Conversational AI](http://coai.cs.tsinghua.edu.cn/) group (THU-COAI), the [Dialogue Systems and Machine Learning Group](https://www.cs.hhu.de/en/research-groups/dialog-systems-and-machine-learning.html) at Heinrich Heine University, Düsseldorf, Germany and Microsoft Research (MSR).
 
-We would like to thank:
+We would like to thank all contributors of ConvLab:
 
-Yan Fang, Zhuoer Feng, Jianfeng Gao, Qihan Guo, Kaili Huang, Minlie Huang, Sungjin Lee, Bing Li, Jinchao Li, Xiang Li, Xiujun Li, Jiexi Liu, Lingxiao Luo, Wenchang Ma, Mehrad Moradshahi, Baolin Peng, Runze Liang, Ryuichi Takanobu, Hongru Wang, Jiaxin Wen, Yaoqin Zhang, Zheng Zhang, Qi Zhu, Xiaoyan Zhu, Carel van Niekerk, Christian Geishauser, Hsien-chin Lin, Nurul Lubis, Xiaochen Zhu, Michael Heck, Shutong Feng, Milica Gašić.
+Yan Fang, Zhuoer Feng, Jianfeng Gao, Qihan Guo, Kaili Huang, Minlie Huang, Sungjin Lee, Bing Li, Jinchao Li, Xiang Li, Xiujun Li, Jiexi Liu, Lingxiao Luo, Wenchang Ma, Mehrad Moradshahi, Baolin Peng, Runze Liang, Ryuichi Takanobu, Dazhen Wan, Hongru Wang, Jiaxin Wen, Yaoqin Zhang, Zheng Zhang, Qi Zhu, Xiaoyan Zhu, Carel van Niekerk, Christian Geishauser, Hsien-chin Lin, Nurul Lubis, Xiaochen Zhu, Michael Heck, Shutong Feng, Milica Gašić.
 
 
 ## Citing
 
-If you use ConvLab-2 in your research, please cite:
+If you use ConvLab-3 in your research, please cite:
 
 ```
-@inproceedings{zhu2020convlab,
-    title={ConvLab-2: An Open-Source Toolkit for Building, Evaluating, and Diagnosing Dialogue Systems},
-    author={Qi Zhu and Zheng Zhang and Yan Fang and Xiang Li and Ryuichi Takanobu and Jinchao Li and Baolin Peng and Jianfeng Gao and Xiaoyan Zhu and Minlie Huang},
-    year={2020},
-    booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
-}
-
-@inproceedings{liu2021robustness,
-    title={Robustness Testing of Language Understanding in Task-Oriented Dialog},
-    author={Liu, Jiexi and Takanobu, Ryuichi and Wen, Jiaxin and Wan, Dazhen and Li, Hongguang and Nie, Weiran and Li, Cheng and Peng, Wei and Huang, Minlie},
-    year={2021},
-    booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
-}
+
 ```
 
 ## License
-- 
GitLab