Commit a23768e3 authored by Marc Feger

Add more text for experiments

parent 381a6a4e
@@ -46,9 +46,9 @@ As it turned out that the \textit{bayesian approach} gave more promising results
\subsection{Observations}
The first observation that emerges from \textit{figure} \ref{fig:bayes_sampling_steps} is that an \textit{increase} in \textit{sampling steps} with a \textit{fixed dimensional embedding} results in an \textit{improvement} in \textit{RMSE} for all models. Complementary to this, \textit{figure} \ref{fig:bayes_dimensional_embeddings} shows that an \textit{increase} in the \textit{dimensional embedding} at \textit{512 sampling steps} also leads to an \textit{improvement} in the \textit{RMSE} for all models. Thus, both the \textit{number of sampling steps} and the size of the \textit{dimensional embedding} influence the \textit{RMSE} of \textit{matrix-factorization models} when they are trained using the \textit{bayesian approach}.
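The role of the \textit{sampling steps} can be made explicit. In the \textit{bayesian approach}, a rating is predicted by the posterior mean, approximated by averaging over the drawn samples. As a minimal sketch (the notation is assumed here rather than taken from the original: $S$ denotes the number of sampling steps, and $p_u^{(s)}, q_i^{(s)}$ the user and item embeddings drawn in step $s$):
\begin{equation}
\hat{r}_{ui} \approx \frac{1}{S} \sum_{s=1}^{S} \left\langle p_u^{(s)}, q_i^{(s)} \right\rangle
\end{equation}
A larger $S$ yields a better Monte-Carlo approximation of the posterior mean, which is consistent with the observed \textit{RMSE} improvement across all models.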
As a second finding, the \textit{RMSE values} of the created models can be taken from \textit{figure} \ref{fig:bayes_dimensional_embeddings}. Several points stand out. Firstly, the \textit{individual inclusion} of \textit{implicit knowledge} such as \textit{time} or \textit{user behaviour} leads to a significant \textit{improvement} in the \textit{RMSE}. For example, models like \textit{bayesian timeSVD (0.7587)} and \textit{bayesian SVD++ (0.7563)}, each of which uses a single source of implicit knowledge, beat the \textit{simple bayesian MF} with its \textit{RMSE} of \textit{0.7633}. Secondly, the \textit{combination} of \textit{implicit data} further improves the \textit{RMSE}: \textit{bayesian timeSVD++} achieves an \textit{RMSE} of \textit{0.7523}. Finally, \textit{bayesian timeSVD++ flipped} reaches an \textit{RMSE} of \textit{0.7485} by adding \textit{more implicit data}.
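To illustrate how such \textit{implicit knowledge} enters a model, consider the well-known prediction rule of \textit{SVD++}, sketched here in its standard form (the notation is assumed: $N(u)$ denotes the set of items rated by user $u$, and $y_j$ the implicit item factors):
\begin{equation}
\hat{r}_{ui} = \mu + b_u + b_i + q_i^{\top} \Big( p_u + |N(u)|^{-\frac{1}{2}} \sum_{j \in N(u)} y_j \Big)
\end{equation}
The second term inside the parentheses encodes the \textit{user behaviour}, namely which items a user rated at all, independently of the rating values; the \textit{timeSVD} family instead makes biases and factors time-dependent.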
This leads to the third and most significant observation of the experiment. Firstly, the \textit{simple bayesian MF} with an \textit{RMSE} of \textit{0.7633} already beats the best reported method \textit{MRMA} with an \textit{RMSE} of \textit{0.7634}. Furthermore, \textit{MRMA} could be surpassed with \textit{bayesian timeSVD++ flipped} by \textit{0.0149} with respect to the \textit{RMSE}. Such a result is astonishing, as it took \textit{one year} during the \textit{Netflix-Prize} to reduce the leading \textit{RMSE} from \textit{0.8712 (progress award 2007)} to \textit{0.8616 (progress award 2008)}. Additionally, this result is remarkable as it \textit{challenges} the \textit{last 5 years} of research on the \textit{MovieLens10M-dataset}. Based on the results obtained, the \textit{authors} see the first problem with the \textit{results} achieved on the \textit{MovieLens10M-dataset} as being that they were \textit{compared against} too \textit{weak baselines}.
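For reference, the stated margin follows directly from the reported scores:
\begin{equation}
0.7634 - 0.7485 = 0.0149
\end{equation}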
\subsubsection{Stronger Baselines}
\subsubsection{Reproducibility}
\subsubsection{Inadequate Validations}