\item To rank the arguments, we measured the semantic similarity between the premises and the conclusions
\item Each word of an argument is embedded in a vector space, and the argument is represented by the average of its word vectors
\item The similarity of a premise $p$ and a conclusion $c$ is given by the cosine of the angle between their vectors, $\cos(c, p)$
\item In the course of this experiment, we used three different embeddings
\begin{itemize}
\item BERT\footnote{J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding”}
\item ELMo\footnote{M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep contextualized word representations”}
\item GloVe\footnote{J. Pennington, R. Socher, and C. Manning, “GloVe: Global vectors for word representation”}
\end{itemize}
\end{itemize}
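The averaging-and-cosine computation above can be written out explicitly. As a sketch (the notation $w_i^{(c)}$, $w_j^{(p)}$ for the word vectors of the conclusion and premise is ours, not from the slides):

```latex
\[
  c = \frac{1}{n} \sum_{i=1}^{n} w_i^{(c)}, \qquad
  p = \frac{1}{m} \sum_{j=1}^{m} w_j^{(p)}, \qquad
  \cos(c, p) = \frac{c \cdot p}{\lVert c \rVert \, \lVert p \rVert}
\]
```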
\end{frame}
\begin{frame}
\frametitle{Sentiment}
\begin{itemize}
\item Another approach to ranking the arguments is to measure how positive the tone of the premises is
\item For this, we used a sentiment neural network based on FastText\footnote{A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov, “Bag of tricks for efficient text classification”}, which was