general / dsml / emoUS-public / Commits / 5818e14e

Commit 5818e14e, authored 3 years ago by zz-jacob

fix bugs

parent 06b82d2c

Showing 2 changed files with 9 additions and 3 deletions:

convlab2/nlg/scgpt/evaluate.py (+1, -1)
convlab2/nlg/scgpt/main.py (+8, -2)

convlab2/nlg/scgpt/evaluate.py (+1, -1)

@@ -246,7 +246,7 @@ class GentScorer(object):
     ## 2. Compute slot error rate
     ## 3. Detailed illustraction of how differet split
     ##    of data affect performance
-    def __init__(self, detectfile):
+    def __init__(self):
         self.bleuscorer = BLEUScorer()

     def scoreERR(self, parallel_pairs):
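Note on this change: `GentScorer.__init__` drops the unused `detectfile` parameter, so any call site still passing a path now raises a TypeError. A minimal usage sketch under that assumption (only `GentScorer`, `BLEUScorer`, and `scoreERR` come from the diff; the commented call and its pairs are illustrative):

    # Sketch: constructing the scorer after this commit.
    from convlab2.nlg.scgpt.evaluate import GentScorer

    scorer = GentScorer()   # pre-commit code called GentScorer(detectfile)
    # scoreERR now takes parallel pairs, presumably (generated, reference):
    # err = scorer.scoreERR([(generated_sent, golden_sent), ...])
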
convlab2/nlg/scgpt/main.py (+8, -2)

@@ -221,6 +221,8 @@ def test(model, nlg_data, ontology, model_path):
     test_data = nlg_data['test']
     dialog_acts = [act2str(item['dialogue_acts']) for item in test_data]
     golden_responses = [item['utterance'] for item in test_data]
+    # dialog_acts = dialog_acts[:10]
+    # golden_responses = golden_responses[:10]
     outputs = inference_sents(model, dialog_acts)
     if dist.get_rank() == 0:
         output_file = './test_output.txt'

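The two `[:10]` lines added here are commented-out truncation switches for quick smoke tests, and the surrounding `if dist.get_rank() == 0:` guard ensures only one process writes `./test_output.txt` during distributed evaluation. A hedged sketch of that rank-0 pattern with `torch.distributed` (the function name and payload are illustrative, not from the repo):

    # Sketch: emit results from rank 0 only, mirroring the guard in the diff.
    # Assumes the default process group was initialized (e.g. via torchrun).
    import torch.distributed as dist

    def save_outputs(outputs, path='./test_output.txt'):
        if dist.get_rank() == 0:     # single writer avoids clobbered files
            with open(path, 'w') as f:
                f.write('\n'.join(outputs))
        dist.barrier()               # keep the other ranks in step
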
@@ -241,13 +243,15 @@ def test(model, nlg_data, ontology, model_path):
         domain = ontology['domains'][domain_name]
         for slot_name in domain['slots']:
             slot = domain['slots'][slot_name]
+            if 'possible_values' not in slot:
+                continue
             possible_vals = slot['possible_values']
             if len(possible_vals) > 0:
                 for val in possible_vals:
                     val2ds_dict[val] = f'{domain_name}-{slot_name}'
     ## missing values
     score_list = []
-    for item in nlg_data:
+    for item in test_data:
         da = item['dialogue_acts']
         utterance = item['utterance']
         missing_count = 0

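Two fixes land in this hunk: slots without a `possible_values` key are now skipped (previously `slot['possible_values']` raised a KeyError on them), and the scoring loop iterates `test_data` rather than the whole `nlg_data` dict. A sketch of the repaired lookup construction, assuming the `domains -> slots -> possible_values` ontology shape the diff implies:

    # Sketch: build the value -> "domain-slot" lookup with the new guard.
    # The ontology layout is inferred from the diff, not from the docs.
    def build_val2ds(ontology):
        val2ds_dict = {}
        for domain_name, domain in ontology['domains'].items():
            for slot_name, slot in domain['slots'].items():
                if 'possible_values' not in slot:  # the fix: tolerate absent key
                    continue
                for val in slot['possible_values']:
                    val2ds_dict[val] = f'{domain_name}-{slot_name}'
        return val2ds_dict
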
@@ -263,11 +267,13 @@ def test(model, nlg_data, ontology, model_path):
             if value.strip().lower() not in utterance.lower():
                 missing_count += 1
             all_count += 1
+        if all_count == 0:
+            continue
         ## redundant values
         for val in val2ds_dict:
             if f'{val.strip().lower()}' in f'{utterance.strip().lower()}' and val.strip().lower() not in all_values:
                 redundant_count += 1
-        item_score = float(redundant_count + all_count) / all_count
+        item_score = float(redundant_count + redundant_count) / all_count
         score_list.append(item_score)
     ERR_Score = np.mean(score_list)
     print(f'BLEU: {BLEU_Score}\nERR_Score: {ERR_Score}')
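The added `all_count == 0` guard skips items whose dialogue acts contribute no countable values, which previously caused a ZeroDivisionError at the `item_score` line; the numerator also changes from `redundant_count + all_count` to `redundant_count + redundant_count`. A hedged sketch of the per-item scoring flow with both changes applied (`values_of` is a hypothetical helper standing in for the collapsed act-walking code, and the real loop additionally excludes values already present in the act via `all_values`):

    import numpy as np

    # Sketch: per-item slot error scoring following the fixed control flow.
    # values_of(da) is a hypothetical stand-in for the inline act traversal.
    def err_score(test_data, val2ds_dict, values_of):
        score_list = []
        for item in test_data:
            utterance = item['utterance']
            missing_count = redundant_count = all_count = 0
            for value in values_of(item['dialogue_acts']):
                if value.strip().lower() not in utterance.lower():
                    missing_count += 1
                all_count += 1
            if all_count == 0:       # the fix: skip items that would divide by zero
                continue
            for val in val2ds_dict:  # the repo's check also filters by all_values
                if val.strip().lower() in utterance.strip().lower():
                    redundant_count += 1
            score_list.append(float(redundant_count + redundant_count) / all_count)
        return np.mean(score_list) if score_list else 0.0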