\section{Introduction} \textit{Recommender systems} have become an increasingly pervasive, yet often unnoticed, part of our everyday lives; more and more areas of life are thus subject to constant optimisation. Companies such as \textit{Netflix}, \textit{Amazon} and \textit{YouTube} adapt their product recommendations to the individual preferences of their customers, employing various \textit{collaborative-filtering} and \textit{content-based} \textit{recommender systems}. Since \citet{JuKa90} first presented \textit{recommender systems} as a kind of intelligent bookcase, much effort has gone into the development and study of such systems. Not only has industry explored the most diverse application areas; a whole new branch of research has also opened up for science. In their work ``\textit{On the Difficulty of Evaluating Baselines: A Study on Recommender Systems}'' \citet{Rendle19} show that current research on the \textit{MovieLens10M dataset} is heading in the wrong direction. Beyond general problems, they identify in particular flawed working practices and misunderstood \textit{baselines}, which they beat with a number of simple methods such as \textit{matrix factorization}. They were able to beat the existing \textit{baselines} by not taking them for granted: instead, they questioned them and transferred well-evaluated and well-understood properties of the \textit{baselines} from the \textit{Netflix Prize} to them. As a result, they were able to beat not only the \textit{baselines} reported for the \textit{MovieLens10M dataset} but also the newer methods from the last five years of research. It can therefore be assumed that the current and former results obtained on the \textit{MovieLens10M dataset} were not sufficient to be considered true \textit{baselines}. Thus they show the \textit{community} a critical error that can be found not only in the evaluation of \textit{recommender systems} but also in other scientific areas.
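To give the non-experienced reader an impression of how simple the methods in question are, the following is a minimal sketch of matrix factorization trained by stochastic gradient descent on observed ratings. All names, the toy rating matrix, and the hyperparameters (\texttt{k}, \texttt{lr}, \texttt{reg}) are illustrative assumptions for this sketch and do not reproduce the tuned setup of \citet{Rendle19}.

```python
import numpy as np

def matrix_factorization(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Approximate R by P @ Q.T using SGD on the observed entries.

    Zero entries of R are treated as unobserved. Hyperparameters are
    illustrative, not the setup of Rendle et al.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    observed = [(u, i) for u in range(n_users)
                for i in range(n_items) if R[u, i] > 0]
    for _ in range(steps):
        for u, i in observed:
            err = R[u, i] - P[u] @ Q[i]
            # L2-regularized gradient step for both factor vectors
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy user-item rating matrix (0 = unobserved)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

P, Q = matrix_factorization(R)
R_hat = P @ Q.T  # predicted ratings, including the unobserved cells
```

The learned product \texttt{R\_hat} reconstructs the observed ratings closely while also filling in predictions for the unobserved cells, which is the core of the collaborative-filtering idea.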
The first problem the authors point out is that scientific papers whose focus is on better understanding and improving existing \textit{baselines} do not receive recognition because they do not seem innovative enough. In contrast to industry, which offers substantial prizes for researching and improving such \textit{baselines}, this motivation is lacking in the scientific field. From the authors' point of view, scientific work on the \textit{MovieLens10M dataset} is misdirected, because \textit{one-off evaluations} lead to \textit{one-hit wonders}, which are then used as a starting point for further work. As a second point of criticism, \citet{Rendle19} point out that the need for further basic research on the \textit{MovieLens10M dataset} is not yet exhausted. This submission takes a critical look at the topic presented by \citet{Rendle19}. In addition, basic terms and the results obtained are presented in a way that is comprehensible to the non-experienced reader. For this purpose, the submission is divided into three subject areas. First, the non-experienced reader is introduced to the topic of \textit{recommender systems} in the section ``\textit{A Study on Recommender Systems}''. Subsequently, building on the first section, the work is presented in detail in the section ``\textit{On the Difficulty of Evaluating Baselines}''. Finally, the results are evaluated in a critical discourse.