diff --git a/recommender.tex b/recommender.tex
index e330ff7a083200c8225a9c05e6dc958778d0cc9b..090edf55dc6c715515c21c9d757a336c8eb1be1c 100644
--- a/recommender.tex
+++ b/recommender.tex
@@ -80,7 +80,25 @@ At the beginning, the matrices $\mathcal{P}, \mathcal{Q}$ are filled with \texti
-This approach is also called \textit{Funk-SVD} or \textit{SVD} in combination with section \ref{subsec:rmf} and \ref{subsec:bmf} \citep{Rendle19}.  The algorithm shown above can also be extended. Thus procedures like in section \ref{subsec:amf} can be solved. The second method from section \ref{subsec:amf} is then also called \textit{SVD++}. A coherent \textit{SGD} approach was given by \citet{Kor11}.
+This approach is also called \textit{Funk-SVD} or, in combination with sections \ref{subsec:rmf} and \ref{subsec:bmf}, \textit{SVD} \citep{Rendle19}. The algorithm shown above can also be extended, so that models such as those in section \ref{subsec:amf} can be solved as well. The second method from section \ref{subsec:amf} is then also called \textit{SVD++}, for which a corresponding \textit{SGD} approach was given by \citet{Kor11}.
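+
+For illustration, the \textit{SVD++} prediction rule in its standard form (cf. \citet{Kor11}) extends the user factor with \textit{implicit feedback}:
+\begin{equation*}
+\hat{r}_{ui} = \mu + b_u + b_i + q_i^T \Bigl( p_u + |N(u)|^{-\frac{1}{2}} \sum_{j \in N(u)} y_j \Bigr),
+\end{equation*}
+where $N(u)$ denotes the set of items with implicit feedback from user $u$, the $y_j$ are additional item factors, and $\mu, b_u, b_i$ are the bias terms (cf. section \ref{subsec:bmf}).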
 
-\subsubsection{Alternating Least Square}
+\subsubsection{Alternating Least Squares}
-The second method often used is \textit{alternating least square (ALS)}. In contrast to \textit{SGD}, the vectors $q_i, p_u$ are adjusted in \textit{two steps}. Since \textit{SGD} $q_i$ and $p_u$ are both unknown, this is a \textit{non-convex problem}. The idea of \textit{ALS} is to capture one of the two vectors and work with one unknown variable each. Thus the problem becomes \textit{quadratic} and can be solved optimally. For this purpose the matrix $\mathcal{P}$ is filled with \textit{random numbers} at the beginning. These should be as small as possible and can be generated by a \textit{gaussian-distribution}. Then $\mathcal{P}$ is recorded and all $q_i \in \mathcal{Q}$ are recalculated according to the \textit{least-square problem}. This step is then repeated in reverse order. \textit{ALS} terminated if a \textit{termination condition} such as the \textit{convergence} of the error is satisfied for both steps \citep{Zh08}.
+The second frequently used method is \textit{alternating least squares (ALS)}. In contrast to \textit{SGD}, the vectors $q_i, p_u$ are adjusted in \textit{two alternating steps}. As in \textit{SGD}, $q_i$ and $p_u$ are both unknown, so this is a \textit{non-convex problem}. The idea of \textit{ALS} is to fix one of the two vectors and to solve for the remaining unknown in each step. Thus the problem becomes \textit{quadratic} and can be solved optimally. For this purpose the matrix $\mathcal{P}$ is initially filled with \textit{random numbers}, which should be small and can be drawn from a \textit{Gaussian distribution}. Then $\mathcal{P}$ is held fixed and all $q_i \in \mathcal{Q}$ are recomputed according to the \textit{least-squares problem}. This step is then repeated with the roles reversed. \textit{ALS} terminates if a \textit{termination condition} such as the \textit{convergence} of the error is satisfied for both steps \citep{Zh08}.
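+
+For illustration, the first step admits a closed-form solution: with $\mathcal{P}$ fixed, each $q_i$ minimizes a regularized \textit{least-squares problem}, which yields
+\begin{equation*}
+q_i = \Bigl( \sum_{u \in U_i} p_u p_u^T + \lambda I \Bigr)^{-1} \sum_{u \in U_i} r_{ui} \, p_u,
+\end{equation*}
+where $U_i$ denotes the set of users who rated item $i$, $r_{ui}$ their known ratings, and $\lambda$ the regularization weight (cf. section \ref{subsec:rmf}); the second step solves the analogous system for each $p_u$ with $\mathcal{Q}$ fixed.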
 
 \subsubsection{Bayesian Learning}
-The third approach is known as \textit{bayesian learning}. With this approach the so-called \textit{gibbs-sampler} is often used. The aim is to determine the \textit{common distribution} of the vectors in $\mathcal{P}, \mathcal{Q}$. For this purpose the \textit{gibbs-sampler} is given an initialization of \textit{hyperparameters} to generate the \textit{initial distribution}. The \textit{common distribution} of the vectors $q_i \in \mathcal{Q}, p_u \in \mathcal{P}$ is approximated by the \textit{conditional probabilities}. The basic principle is to select a variable in a \textit{reciprocal way} and to generate a value dependent on the values of the other variable according to its \textit{conditional distribution}, with the other values remaining unchanged in each \textit{epoch}. 
+The third approach is known as \textit{Bayesian learning}, for which the so-called \textit{Gibbs sampler} is often used. The aim is to determine the \textit{joint distribution} of the vectors in $\mathcal{P}, \mathcal{Q}$. For this purpose the \textit{Gibbs sampler} is initialized with \textit{hyperparameters} that generate the \textit{initial distribution}. The \textit{joint distribution} of the vectors $q_i \in \mathcal{Q}, p_u \in \mathcal{P}$ is then approximated through the \textit{conditional probabilities}. The basic principle is to select the variables in \textit{alternation} and to draw a new value for each from its \textit{conditional distribution} given the current values of the other variables, which remain unchanged during that draw of the \textit{epoch}.
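+
+As a sketch of this principle, one \textit{epoch} of the \textit{Gibbs sampler} over variables $x_1, \dots, x_n$ draws each variable in turn from its \textit{conditional distribution}, with all other values held fixed:
+\begin{equation*}
+x_j^{(t+1)} \sim p\bigl( x_j \mid x_1^{(t+1)}, \dots, x_{j-1}^{(t+1)}, x_{j+1}^{(t)}, \dots, x_n^{(t)} \bigr), \qquad j = 1, \dots, n.
+\end{equation*}
+Applied to matrix factorization, the $x_j$ correspond to the vectors $p_u \in \mathcal{P}$, $q_i \in \mathcal{Q}$ and the \textit{hyperparameters}, between which the sampler alternates in every epoch.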
diff --git a/submission.pdf b/submission.pdf
index 07e416a844f6afd944bec38723c494246c7a9756..92cfba14b1cc037cc68daccd7e89d8fdfd0b31a0 100644
Binary files a/submission.pdf and b/submission.pdf differ