% Marc Feger / Argument Relevance Presentation
% Commit e3747a2f, authored 4 years ago by Jan Lukas Steimann: Add first version for dataset chapter
% Parent: d702592f --- 1 changed file: slides/dataset.tex (+90, -2)
\section{Dataset}
\subsection{Corpus}
\begin{frame}
\frametitle{Corpus}
\begin{itemize}
\item For our study, we used the Webis-ArgRank2017 dataset from Wachsmuth et al.~\cite{wachsmuth:2017a}\footnote[1]{H. Wachsmuth, B. Stein, and Y. Ajjour: ``PageRank'' for Argument Relevance}
\item In this dataset, Wachsmuth et al. constructed a ground-truth argument graph as well as a benchmark for argument ranking derived from this graph
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Corpus}
\begin{itemize}
\item The data were originally collected from the Argument Web and stored in an argument graph
\item At that time, the Argument Web was the largest existing argument database with structured argument corpora
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Corpus}
\begin{itemize}
\item In the resulting argument graph $G = (A, E)$:
\begin{itemize}
\item Each node $a_i \in A$ represents an argument consisting of a conclusion $c_i$ and a non-empty set of premises $P_i$ $\Rightarrow$ $a_i = \langle c_i, P_i \rangle$
\item An edge $(a_j, a_i)$ exists if the conclusion of $a_j$ is used as a premise of $a_i$
\item Consequently, $P_i = \{c_1, \ldots, c_k\}$, $k \geq 1$
\end{itemize}
\end{itemize}
\begin{figure}
\includegraphics[width=0.4\linewidth]{bilder/DatasetLocalView2.png}
\caption{Argument Graph from the Argument Web}
\end{figure}
\end{frame}
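The node and edge definition above can be sketched in code. The following is a hypothetical illustration (not part of the dataset's tooling): an argument is a pair $\langle c_i, P_i \rangle$, and an edge $(a_j, a_i)$ exists whenever the conclusion of $a_j$ appears among the premises of $a_i$.

```python
# Illustrative sketch of the Webis-ArgRank2017 graph structure.
# An argument a_i = <c_i, P_i>: a conclusion plus a non-empty premise set.
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    conclusion: str
    premises: frozenset  # non-empty set of premise texts

def build_edges(arguments):
    """Return edges (j, i) where a_j's conclusion is used as a premise of a_i."""
    edges = []
    for j, a_j in enumerate(arguments):
        for i, a_i in enumerate(arguments):
            if i != j and a_j.conclusion in a_i.premises:
                edges.append((j, i))
    return edges

args = [
    Argument("we should ban smoking", frozenset({"smoking harms health"})),
    Argument("smoking harms health", frozenset({"tobacco contains carcinogens"})),
]
print(build_edges(args))  # [(1, 0)]: argument 1's conclusion supports argument 0
```

Here argument 1's conclusion serves as a premise of argument 0, so $P_0$ consists of a conclusion of another argument, exactly as in the definition $P_i = \{c_1, \ldots, c_k\}$.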
\subsection{Benchmark Argument Ranking}
\begin{frame}
\frametitle{Benchmark Argument Ranking}
\begin{itemize}
\item To create the benchmark dataset, Wachsmuth et al. kept only those arguments from the graph that fulfill their requirements
\begin{itemize}
\item If a conclusion was part of more than one argument, it was kept
\item Furthermore, Wachsmuth et al. removed all nodes that do not contain a real claim
\item Additionally, an argument:
\begin{itemize}
\item has to be a valid counter-argument
\item must be based on reasonable premises
\item must allow a logical inference to be drawn
\end{itemize}
\end{itemize}
\end{itemize}
\end{frame}
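The selection criteria above can be approximated as a filter. This is a hypothetical sketch: the "conclusion in more than one argument" rule is mechanical, while the manual quality checks (valid counter-argument, reasonable premises, logical inference) are stood in for by a caller-supplied predicate.

```python
# Illustrative sketch of the benchmark selection, not the authors' code.
from collections import Counter

def select_benchmark(arguments, is_valid):
    """Keep arguments whose conclusion participates in more than one
    argument and which pass the (manual) validity checks."""
    # Count how many arguments each conclusion text participates in,
    # either as the conclusion itself or as one of the premises.
    counts = Counter()
    for a in arguments:
        for text in {a["conclusion"], *a["premises"]}:
            counts[text] += 1
    return [a for a in arguments
            if counts[a["conclusion"]] > 1 and is_valid(a)]

example = [
    {"conclusion": "A", "premises": ["B"]},
    {"conclusion": "B", "premises": ["C"]},
    {"conclusion": "D", "premises": ["E"]},
]
# "B" appears as a conclusion and as a premise, so only that argument survives.
print(select_benchmark(example, lambda a: True))
```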
\begin{frame}
\frametitle{Benchmark Argument Ranking}
\begin{itemize}
\item
The resulting benchmark dataset consists of 32 conclusions that appear in 110 arguments
\item These 110 arguments were ranked by seven experts from computational linguistics and information retrieval
\item Each argument was ranked by how much each of its premises contributes to the acceptance or rejection of the conclusion
\end{itemize}
\end{frame}
\subsection{Evaluation Method}
\begin{frame}
\frametitle{Evaluation Method}
\begin{itemize}
\item To evaluate the agreement between the experts and to ensure comparability, Kendall's $\tau$ was used
\item Kendall's $\tau$ is a correlation coefficient that indicates the agreement between two quantities with respect to a property
\begin{itemize}
\item In this case, this means the agreement between two experts with respect to an argument
\item $-1$ signifies complete disagreement and $+1$ complete agreement
\end{itemize}
\item The mean over all experts for the evaluation of the benchmark is $0.36$
\end{itemize}
\end{frame}
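Kendall's $\tau$ for a pair of expert rankings can be computed directly from concordant and discordant pairs. This is a minimal sketch of the tau-a variant, assuming untied ranks; production code would typically use a library implementation such as `scipy.stats.kendalltau`, which also handles ties.

```python
# Minimal tau-a sketch (assumes no tied ranks), for illustration only.
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """(concordant - discordant) / total pairs over two rankings."""
    assert len(rank_a) == len(rank_b)
    concordant = discordant = 0
    for i, j in combinations(range(len(rank_a)), 2):
        # A pair is concordant if both rankings order items i and j the same way.
        if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0: complete agreement
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0: complete disagreement
```

Averaging such pairwise $\tau$ values across all expert pairs and arguments yields a mean agreement figure of the kind reported above (0.36).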