Commit e28daa3f authored by Jan Lukas Steimann's avatar Jan Lukas Steimann

Add baseline methods

\subsection{Baseline Methods}
For the ranking task, we compare against the following baseline methods \cite{wachsmuth:2017a}.
\begin{itemize}
\item To rank the arguments, we measure the semantic similarity
between premise and conclusion
\item Each word of the argument is embedded in a vector space, and the
average of the word vectors is taken as the representation of the argument
\item The similarity of a premise and a conclusion is then calculated from the
angle between their vectors
\item In the course of this experiment, we used three different embeddings:
\begin{itemize}
\item BERT\footnote{J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers
for language understanding,” NAACL-HLT 2019.}
\item ELMo\footnote{M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep
contextualized word representations,” NAACL-HLT 2018.}
\item GloVe\footnote{J. Pennington, R. Socher, and C. Manning, “GloVe: Global vectors for word representation,” EMNLP 2014.}
\end{itemize}
\item Another approach to ranking the arguments is to measure how positive the tone
of the premises is
\item For this, we use a sentiment neural network based on fastText\footnote{A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov, “Bag of tricks for efficient text classification,” EACL 2017.}, which was
trained on film reviews from IMDb
\end{itemize}
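The embedding baseline can be sketched in a few lines. This is a minimal illustration, not the actual experimental setup: the toy two-dimensional word vectors below stand in for real BERT, ELMo, or GloVe embeddings, and the example words are invented.

```python
import numpy as np

# Toy 2-d word vectors; in the experiment these would come from
# pretrained BERT, ELMo, or GloVe embeddings (assumption for illustration).
EMB = {
    "taxes":   np.array([0.9, 0.1]),
    "fund":    np.array([0.8, 0.3]),
    "schools": np.array([0.7, 0.2]),
    "cats":    np.array([0.0, 1.0]),
}

def embed(text, emb=EMB):
    """Average the vectors of all known words in the text."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def similarity(premise, conclusion):
    """Cosine of the angle between the averaged premise and conclusion vectors."""
    u, v = embed(premise), embed(conclusion)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0
```

Arguments are then ranked by this score: the higher the cosine between the averaged premise vector and the averaged conclusion vector, the higher the argument is ranked.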
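The sentiment baseline can likewise be sketched. This is only a stand-in for the fastText-based classifier: one-hot "embeddings" are averaged over the premise and fed into a logistic-regression layer, trained here on a four-example toy corpus instead of the IMDb reviews (all data below are illustrative assumptions).

```python
import numpy as np

# Toy labeled corpus standing in for the IMDb film reviews (assumption);
# 1 = positive tone, 0 = negative tone.
corpus = [
    ("great movie wonderful acting", 1),
    ("awful plot terrible film", 0),
    ("wonderful great film", 1),
    ("terrible awful acting", 0),
]
vocab = sorted({w for text, _ in corpus for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

# One-hot word "embeddings" for illustration; fastText instead learns
# dense embeddings jointly with the classifier.
E = np.eye(len(vocab))

def features(text):
    ids = [index[w] for w in text.split() if w in index]
    return E[ids].mean(axis=0) if ids else np.zeros(len(vocab))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train the linear layer with plain SGD.
w, b = np.zeros(len(vocab)), 0.0
for _ in range(500):
    for text, label in corpus:
        x = features(text)
        grad = sigmoid(w @ x + b) - label
        w -= 0.5 * grad * x
        b -= 0.5 * grad

def positivity(premise):
    """Score in (0, 1): how positive the tone of the premise is."""
    return float(sigmoid(w @ features(premise) + b))
```

Arguments are then ranked by the positivity score of their premises.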