\item To rank the arguments, we measured the semantic similarity between each argument's premises and its conclusion
\item Each argument was embedded word-wise and the word vectors were averaged
\item The resulting similarity was calculated as $\cos(c, p)$, where $c$ and $p$ are the averaged conclusion and premise vectors
\item For this experiment, we used three different embeddings
\begin{itemize}
\item BERT \cite{devlin2018bert}
\item ELMo \cite{Peters:2018ELMo}
\item GloVe \cite{pennington2014glove}
\end{itemize}
\end{itemize}
\end{frame}
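The ranking step on the slide above (embed each sentence word-wise, average the word vectors, then compare conclusion and premise with cosine similarity) can be sketched as follows. The toy word vectors here are hypothetical stand-ins; in the actual pipeline each word's vector would come from the chosen pretrained model (BERT, ELMo, or GloVe).

```python
from math import sqrt

# Hypothetical toy word vectors standing in for pretrained embeddings
# (BERT / ELMo / GloVe); real vectors have hundreds of dimensions.
toy_vectors = {
    "taxes":   [1.0, 0.0, 0.0],
    "fund":    [0.0, 1.0, 0.0],
    "schools": [0.0, 0.0, 1.0],
    "need":    [1.0, 1.0, 0.0],
}

def embed(sentence, vectors):
    """Embed a sentence word-wise, then average the word vectors."""
    words = sentence.lower().split()
    dims = len(next(iter(vectors.values())))
    acc = [0.0] * dims
    for w in words:
        for i, x in enumerate(vectors[w]):
            acc[i] += x
    return [x / len(words) for x in acc]

def cos(c, p):
    """Cosine similarity between conclusion and premise embeddings."""
    dot = sum(a * b for a, b in zip(c, p))
    return dot / (sqrt(sum(a * a for a in c)) * sqrt(sum(b * b for b in p)))

# Rank score for one (premise, conclusion) pair:
premise = embed("taxes fund schools", toy_vectors)
conclusion = embed("schools need taxes", toy_vectors)
similarity = cos(conclusion, premise)
```

With averaged embeddings, arguments whose conclusion reuses the semantic content of the premises score close to 1, which is what the ranking exploits.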
\begin{frame}
\frametitle{Sentiment}
\begin{columns}
\column{0.6\textwidth}
\begin{itemize}
\item As another approach, we measured the positivity of each argument
\item Therefore, we used a sentiment neural network based on FastText \cite{Armand:2017FastText}
\item The network was trained to indicate the sentiment of IMDb film ratings