\item For ranking the arguments, we measured the semantic similarity
between the premises and conclusions
\item Each argument was embedded word by word and averaged into a single vector
\item The resulting similarity was calculated as the cosine $\cos(c, p)$ between the conclusion and premise vectors
\item For this experiment, we used three different embeddings
\begin{itemize}
\item BERT \cite{devlin2018bert}
\item ELMo \cite{Peters:2018ELMo}
\item GloVe \cite{pennington2014glove}
\end{itemize}
\end{itemize}
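The ranking step above can be sketched in a few lines of Python. The word vectors and sentences here are made-up placeholders standing in for a pretrained embedding such as GloVe; only the averaging and the cosine computation reflect the described method:

```python
import numpy as np

# Hypothetical 4-dimensional word vectors; in the experiment these would
# come from BERT, ELMo, or GloVe.
word_vectors = {
    "taxes":   np.array([0.9, 0.1, 0.3, 0.2]),
    "should":  np.array([0.2, 0.8, 0.1, 0.4]),
    "rise":    np.array([0.7, 0.2, 0.5, 0.1]),
    "fund":    np.array([0.6, 0.3, 0.4, 0.3]),
    "schools": np.array([0.8, 0.2, 0.4, 0.2]),
}

def embed(sentence):
    """Embed a sentence word-wise and average into a single vector."""
    vectors = [word_vectors[w] for w in sentence.lower().split()]
    return np.mean(vectors, axis=0)

def cosine(c, p):
    """Cosine similarity cos(c, p) between conclusion and premise vectors."""
    return float(np.dot(c, p) / (np.linalg.norm(c) * np.linalg.norm(p)))

# Rank by similarity between a conclusion and a premise (toy sentences).
conclusion = embed("taxes should rise")
premise = embed("rise fund schools")
print(cosine(conclusion, premise))
```

Arguments whose premises and conclusion yield a higher $\cos(c, p)$ would be ranked as more coherent under this measure.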
\end{frame}
\begin{frame}
\frametitle{Sentiment}
\begin{columns}
\column{0.6\textwidth}
\begin{itemize}
\item As another approach, we measured the positivity of each argument
\item For this, we used a sentiment neural network based on FastText \cite{Armand:2017FastText}
\item The network was trained to indicate the sentiment of IMDb film ratings