diff --git a/content-based-collaborative-filtering-comparison.tex b/content-based-collaborative-filtering-comparison.tex
index b2d0157ffefc9bf2e7f31d0d1fa56c65643b7a61..e2de83e3b2fc1987f97b332db645a716508aa181 100644
--- a/content-based-collaborative-filtering-comparison.tex
+++ b/content-based-collaborative-filtering-comparison.tex
@@ -1,11 +1,11 @@
 \begin{figure}[!ht]
   \centering
-  \begin{subfigure}[b]{0.25\linewidth}
+  \begin{subfigure}[b]{0.23\linewidth}
     \includegraphics[width=\linewidth]{Bilder/ContendBasedFlow.jpg}
     \caption{\textit{Content-Based}.}
     \label{fig:cb}
   \end{subfigure}
-  \begin{subfigure}[b]{0.25\linewidth}
+  \begin{subfigure}[b]{0.23\linewidth}
     \includegraphics[width=\linewidth]{Bilder/CollaborativeFlow.jpg}
     \caption{\textit{Collaborative-Filtering}.}
     \label{fig:cf}
diff --git a/overview.tex b/overview.tex
new file mode 100644
index 0000000000000000000000000000000000000000..94c6eebe40e332494c520fd22a3caa18ab0eb44b
--- /dev/null
+++ b/overview.tex
@@ -0,0 +1,6 @@
+\begin{figure}[!ht]
+  \centering
+    \includegraphics[scale=0.4]{Bilder/CFCBDiagramm.jpg}
+  \caption{Overview of the field of \textit{recommender systems} and the dependencies between the individual approaches.}
+  \label{fig:overview}
+\end{figure}
diff --git a/recommender.tex b/recommender.tex
index 5e9fc81f4289570b53d63ae0290cc3e73e94ac19..2273182b13d3eb7c03219eaa678c91d0470a95d0 100644
--- a/recommender.tex
+++ b/recommender.tex
@@ -8,10 +8,10 @@ Each of the \textit{users} in $\mathcal{U}$ gives \textit{ratings} from a set $\
 In the following, the two main approaches of \textit{collaborative-filtering} and \textit{content-based} \textit{recommender systems} will be discussed. In addition, it is explained how \textit{matrix factorization} can be integrated into the two ways of thinking.
 
 \subsection{Content-Based}
-\textit{Content-based} \textit{recommender systems} work directly with \textit{feature vectors}. Such a \textit{feature vector} can, for example, represent a \textit{user profile}. In this case, this \textit{profile} contains information about the \textit{user's preferences}, such as \textit{genres}, \textit{authors}, \textit{etc}.  This is done by trying to create a \textit{model} of the \textit{user}, which best represents his preferences. The different \textit{learning algorithms} from the field of \textit{machine learning} are used to learn or create the \textit{models}. The most prominent \textit{algorithms} are: \textit{tf-idf}, \textit{bayesian learning}, \textit{Rocchio's algorithm} and \textit{neural networks} \citep{Lops11, Ferrari19, DeKa11}. Altogether the built and learned \textit{feature vectors} are compared with each other. Based on their closeness, similar \textit{features} can be used to generate \textit{missing ratings}. Figure \ref{fig:cb} shows a sketch of the general operation of \textit{content-based recommenders}.
+\textit{Content-based (CB)} \textit{recommender systems} work directly with \textit{feature vectors}. Such a \textit{feature vector} can, for example, represent a \textit{user profile}. In this case, the \textit{profile} contains information about the \textit{user's preferences}, such as \textit{genres}, \textit{authors}, \textit{etc}. The goal is to create a \textit{model} of the \textit{user} that best represents these preferences. \textit{Learning algorithms} from the field of \textit{machine learning} are used to learn or create the \textit{models}; the most prominent are \textit{tf-idf}, \textit{bayesian learning}, \textit{Rocchio's algorithm} and \textit{neural networks} \citep{Lops11, Ferrari19, DeKa11}. The built and learned \textit{feature vectors} are then compared with each other, and based on their closeness, similar \textit{features} can be used to generate \textit{missing ratings}. Figure \ref{fig:cb} shows a sketch of the general operation of \textit{content-based recommenders}.
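As a minimal sketch of the content-based idea, the following compares tf-idf feature vectors of items by cosine similarity. The toy corpus and item names are illustrative assumptions, not data from the paper:

```python
import math

# Hypothetical toy items, each described by a bag of feature terms
# (e.g. genres, topics). Names and data are made up for illustration.
items = {
    "book_a": ["fantasy", "dragons", "magic"],
    "book_b": ["fantasy", "magic", "quest"],
    "book_c": ["history", "rome", "war"],
}

def tf_idf_vectors(docs):
    """Build a tf-idf feature vector (sparse dict) for every item."""
    n = len(docs)
    df = {}  # document frequency of each term
    for terms in docs.values():
        for t in set(terms):
            df[t] = df.get(t, 0) + 1
    vecs = {}
    for name, terms in docs.items():
        vec = {}
        for t in set(terms):
            tf = terms.count(t) / len(terms)
            idf = math.log(n / df[t])
            vec[t] = tf * idf
        vecs[name] = vec
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = tf_idf_vectors(items)
# Items sharing features score high; unrelated items score zero.
sim_ab = cosine(vecs["book_a"], vecs["book_b"])
sim_ac = cosine(vecs["book_a"], vecs["book_c"])
```

In a full content-based recommender, a user profile vector would be built from the items the user liked and compared against unseen items in the same way.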
 
 \subsection{Collaborative-Filtering}
-Unlike the \textit{content-based recommender}, the \textit{collaborative-filtering recommender} not only considers individual \textit{users} and \textit{feature vectors}, but rather a \textit{like-minded neighborhood} of each \textit{user}.
+Unlike the \textit{content-based recommender}, the \textit{collaborative-filtering recommender (CF)} considers not only individual \textit{users} and \textit{feature vectors}, but rather a \textit{like-minded neighborhood} of each \textit{user}.
 Missing \textit{user ratings} can be extracted from this \textit{neighborhood} and combined into a whole. It is assumed that a \textit{missing rating} of a considered \textit{user} $u$ for an unknown \textit{item} $i$ will be similar to the \textit{rating} of a \textit{user} $v$ as soon as $u$ and $v$ have rated some \textit{items} similarly. The similarity of the \textit{users} is determined by the \textit{community ratings}. This type of \textit{recommender system} is also known by the term \textit{neighborhood-based recommender} \citep{DeKa11}. The main focus of \textit{neighborhood-based methods} is on the application of iterative methods such as \textit{k-nearest-neighbors} or \textit{k-means}.
 A \textit{neighborhood-based recommender} can be viewed from two angles: The first and best known problem is the so-called \textit{user-based prediction}. Here, the \textit{missing ratings} of a considered \textit{user} $u$ are to be determined from the \textit{neighborhood} $\mathcal{N}_i(u)$.
 $\mathcal{N}_i(u)$ denotes the subset of the \textit{neighborhood} of all \textit{users} who have a similar manner of evaluation to $u$ and have rated the \textit{item} $i$. The second problem is that of \textit{item-based prediction}. Analogously, the similarity of the \textit{items} is determined by the \textit{ratings} they have received.
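A user-based prediction of this kind can be sketched as follows: similarity between users is measured by Pearson correlation over co-rated items, and a missing rating is the similarity-weighted average over the $k$ most similar neighbors who rated the item. The rating data is a made-up toy example:

```python
import math

# Hypothetical toy rating data: user -> {item: rating}.
ratings = {
    "u1": {"i1": 5, "i2": 4, "i3": 1},
    "u2": {"i1": 4, "i2": 5, "i3": 2, "i4": 5},
    "u3": {"i1": 1, "i2": 2, "i3": 5, "i4": 1},
}

def pearson(a, b):
    """Pearson correlation of users a and b over their co-rated items."""
    common = sorted(set(ratings[a]) & set(ratings[b]))
    if len(common) < 2:
        return 0.0
    ra = [ratings[a][i] for i in common]
    rb = [ratings[b][i] for i in common]
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    da = math.sqrt(sum((x - ma) ** 2 for x in ra))
    db = math.sqrt(sum((y - mb) ** 2 for y in rb))
    return num / (da * db) if da and db else 0.0

def predict(u, i, k=2):
    """Predict the missing rating of user u for item i from the
    k nearest positively correlated neighbors that have rated i."""
    neigh = [(pearson(u, v), v) for v in ratings
             if v != u and i in ratings[v]]
    neigh = [(s, v) for s, v in sorted(neigh, reverse=True)[:k] if s > 0]
    num = sum(s * ratings[v][i] for s, v in neigh)
    den = sum(abs(s) for s, _ in neigh)
    return num / den if den else 0.0

pred = predict("u1", "i4")
```

Item-based prediction works analogously with the roles of users and items exchanged, i.e. the correlation is computed between rating columns instead of rating rows.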
@@ -80,4 +80,10 @@ This approach is also called \textit{Funk-SVD} or \textit{SVD} in combination wi
 The second method often used is \textit{alternating least square (ALS)}. In contrast to \textit{SGD}, the vectors $q_i, p_u$ are adjusted in \textit{two steps}. Since $q_i$ and $p_u$ are both unknown, this is a \textit{non-convex problem}. The idea of \textit{ALS} is to fix one of the two vectors and work with only one unknown variable at a time. Thus the problem becomes \textit{quadratic} and can be solved optimally. For this purpose the matrix $\mathcal{P}$ is filled with \textit{random numbers} at the beginning. These should be as small as possible and can be generated by a \textit{gaussian-distribution}. Then $\mathcal{P}$ is held fixed and all $q_i \in \mathcal{Q}$ are recalculated according to the \textit{least-square problem}. This step is then repeated in reverse order. \textit{ALS} terminates when a \textit{termination condition}, such as the \textit{convergence} of the error, is satisfied for both steps \citep{Zh08}.
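The two alternating steps can be sketched as below. The rating matrix, latent dimension and regularization constant are assumed toy values; each step solves a small regularized least-squares system per item or per user while the other factor matrix is held fixed:

```python
import numpy as np

# Hypothetical small rating matrix R (0 = missing entry).
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)
mask = R > 0           # observed entries
k, lam = 2, 0.1        # latent dimension and regularization (assumed)

rng = np.random.default_rng(0)
# P holds the user vectors p_u, Q the item vectors q_i; both start
# with small Gaussian random numbers.
P = 0.1 * rng.standard_normal((R.shape[0], k))
Q = 0.1 * rng.standard_normal((R.shape[1], k))

for _ in range(20):
    # Step 1: hold P fixed, recompute every q_i by regularized least squares.
    for i in range(R.shape[1]):
        users = mask[:, i]
        A = P[users].T @ P[users] + lam * np.eye(k)
        b = P[users].T @ R[users, i]
        Q[i] = np.linalg.solve(A, b)
    # Step 2: hold Q fixed, recompute every p_u the same way.
    for u in range(R.shape[0]):
        obs = mask[u]
        A = Q[obs].T @ Q[obs] + lam * np.eye(k)
        b = Q[obs].T @ R[u, obs]
        P[u] = np.linalg.solve(A, b)

# Error over the observed entries shrinks as the alternation converges.
rmse = np.sqrt(np.mean((R[mask] - (P @ Q.T)[mask]) ** 2))
```

A fixed iteration count stands in here for the termination condition; in practice one would stop once the change in the error falls below a threshold.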
 
 \subsubsection{Bayesian Learning}
-The third approach is known as \textit{bayesian learning}. With this approach the so-called \textit{gibbs-sampler} is often used. The aim is to determine the \textit{common distribution} of the vectors in $\mathcal{P}, \mathcal{Q}$. For this purpose the \textit{gibbs-sampler} is given an initialization of \textit{hyperparameters} to generate the \textit{initial distribution}. The \textit{common distribution} of the vectors $q_i \in \mathcal{Q}, p_u \in \mathcal{P}$ is approximated by the \textit{conditional probabilities}. The basic principle is to select a variable in a \textit{reciprocal way} and to generate a value dependent on the values of the other variable according to its \textit{conditional distribution}, with the other values remaining unchanged in each \textit{epoch}. A detailed representation of the \textit{gibbs-sampler} was written by \citet{Rus08}.
\ No newline at end of file
+The third approach is known as \textit{bayesian learning}. With this approach the so-called \textit{gibbs-sampler} is often used. The aim is to determine the \textit{joint distribution} of the vectors in $\mathcal{P}, \mathcal{Q}$. For this purpose the \textit{gibbs-sampler} is given an initialization of \textit{hyperparameters} to generate the \textit{initial distribution}. The \textit{joint distribution} of the vectors $q_i \in \mathcal{Q}, p_u \in \mathcal{P}$ is approximated via \textit{conditional probabilities}. The basic principle is to select the variables alternately and to draw a value for each from its \textit{conditional distribution} given the current values of the other variables, which remain unchanged within each \textit{epoch}. 
+The approaches shown in sections 2.4.1 to 2.4.4 in combination with this learning approach are also known as \textit{bayesian probabilistic matrix-factorization (BPMF)}. A detailed treatment of \textit{BPMF} and the \textit{gibbs-sampler} is given by \citet{Rus08}.
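The alternating conditional sampling behind the Gibbs sampler can be illustrated on a deliberately simple target, a bivariate Gaussian with correlation $\rho$, for which both conditionals are known in closed form. The target distribution is an assumption chosen for clarity; in BPMF the same scheme is applied to the vectors in $\mathcal{P}$ and $\mathcal{Q}$:

```python
import math
import random

# Target (assumed for illustration): standard bivariate Gaussian with
# correlation rho, so x|y ~ N(rho*y, 1-rho^2) and y|x ~ N(rho*x, 1-rho^2).
rho = 0.8
random.seed(42)

x, y = 0.0, 0.0        # arbitrary initialization
samples = []
for _ in range(20000):
    # draw x from its conditional distribution, y held fixed
    x = random.gauss(rho * y, math.sqrt(1 - rho ** 2))
    # draw y from its conditional distribution, x held fixed
    y = random.gauss(rho * x, math.sqrt(1 - rho ** 2))
    samples.append((x, y))

# Discard a burn-in phase, then estimate the correlation from the chain.
burn = samples[5000:]
est_rho = sum(a * b for a, b in burn) / len(burn)
```

The estimated correlation approaches the true $\rho$ as the chain mixes, which is the sense in which the sampled values approximate the joint distribution.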
+
+\subsection{Short Summary of Recommender Systems}
+As the previous sections show, the field of \textit{recommender systems} is versatile. Nevertheless, the individual approaches from the \textit{CB} and \textit{CF} areas can be assigned to distinct subject areas: \textit{CF} works mainly with \textit{graph-theoretical approaches}, while \textit{CB} uses methods from \textit{machine learning}. Of course there are \textit{overlaps} between the approaches, and these are mostly found in \textit{matrix-factorization}. In addition to \textit{classical matrix-factorization}, which is limited to \textit{simple matrix-decomposition}, approaches such as \textit{SVD++} and \textit{BPMF} work with methods from both \textit{CB} and \textit{CF}: \textit{SVD++} uses \textit{graph-based information}, while \textit{BPMF} uses classical approaches from \textit{machine learning}. Nevertheless, \textit{matrix-factorization} forms a separate part of the research field of \textit{recommender systems}, one that is strongly influenced by the \textit{CB} and \textit{CF} ways of thinking. Figure \ref{fig:overview} finally shows a detailed overview of the different \textit{recommender systems} and their dependencies.
+
+\input{overview}
\ No newline at end of file
diff --git a/submission.pdf b/submission.pdf
index b2003f7f3eca57a7d0edb3dd90eeb2ed5fe9c69e..b5b86adaad92ee70cd34cdbcf821c5286ea52809 100644
Binary files a/submission.pdf and b/submission.pdf differ
diff --git a/submission.tex b/submission.tex
index 5fd0b2aa88c8da1ce7a595bc6fee7692cced1fef..f9673706be07186b9095a36ebfb813cd48e4c6e5 100644
--- a/submission.tex
+++ b/submission.tex
@@ -64,7 +64,10 @@ A Study on Recommender Systems}
 \input{frontpage}
 \newpage
 \tableofcontents
+\thispagestyle{empty}
 \newpage
+\setcounter{page}{1}
+
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 % Hier beginnt der Inhalt!                                       %
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%