Recommender Systems

In this course, you will learn the essentials of recommender systems.

About This Course

In this course, participants learn the essentials of recommender systems. We start with an introduction to recommender systems and their applications. We then discuss the various approaches to building a recommender system and the points to watch out for along the way. This is followed by an in-depth discussion of collaborative filtering, one of the most popular ways of building recommender systems nowadays. We then discuss model-based collaborative filtering, which relies on machine learning techniques to build recommender systems. Next, we zoom in on matrix factorization to discover latent features in rating matrices. We then elaborate on content filtering and knowledge-based filtering. We also discuss deep learning neural networks for building recommender systems. The course concludes by reviewing recommender system attacks, software, and open challenges.
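To give a taste of what the collaborative filtering chapters cover, the sketch below predicts a missing rating with user-user collaborative filtering and cosine similarity. This is an illustrative sketch only, not course material; the toy rating matrix and the helper names (`cosine_sim`, `predict`) are our own.

```python
# A tiny user-user collaborative filtering sketch (illustrative only).
# Rows are users, columns are items; 0 means "not yet rated".
import numpy as np

R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    # Cosine similarity between two rating vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def predict(user, item):
    # Weighted average of the other users' ratings for `item`,
    # weighted by each user's similarity to `user`.
    sims, ratings = [], []
    for other in range(R.shape[0]):
        if other != user and R[other, item] > 0:
            sims.append(cosine_sim(R[user], R[other]))
            ratings.append(R[other, item])
    return float(np.dot(sims, ratings) / np.sum(sims))

print(round(predict(0, 2), 2))  # prints 1.73: user 0 resembles user 1, who disliked item 2
```

The course itself goes much further (mean-centered predictions, adjusted cosine, item-item variants), but the weighted-neighbor idea above is the common core.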

The course provides a sound mix of both theoretical and technical insights, as well as practical implementation details. These are illustrated by several real-life case studies and examples. Throughout the course, the instructors also extensively report upon their research and industry experience.

Examples are given in Python, both as Jupyter notebooks and as plain Python code, which participants can easily copy, paste, and run in their own Python environments.

The course features more than 3.5 hours of video lectures, more than 50 multiple choice questions, and various references to background literature. A certificate signed by the instructors is provided upon successful completion.

See Singular Value Decomposition to get a free teaser of the course contents.
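As a flavor of that teaser topic: a truncated SVD keeps only the strongest latent factors of a rating matrix, giving a low-rank approximation in which similar users and items cluster together. The snippet below is a minimal NumPy sketch on a toy matrix of our own, not the course's notebook.

```python
# Minimal truncated-SVD sketch (illustrative only): approximate a small
# rating matrix with its two strongest latent factors.
import numpy as np

R = np.array([
    [5, 4, 1, 1],
    [4, 5, 1, 2],
    [1, 1, 5, 4],
    [2, 1, 4, 5],
], dtype=float)

# Full SVD: R = U * diag(s) * Vt, with singular values s sorted descending.
U, s, Vt = np.linalg.svd(R, full_matrices=False)

k = 2  # keep the two strongest latent factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(R_hat, 1))  # rank-2 reconstruction of the rating matrix
```

The reconstruction `R_hat` smooths the original ratings through the latent factors; in a real recommender the smoothed entries for unrated items serve as rating predictions.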

We can also come and teach this course on-site in classroom format. If interested, please mail us at:


The enrollment fee for this course is EUR 250 (VAT excl.) per participant. Payments are securely handled by PayPal, but wire transfer is also possible if you provide us with your payment details (including your VAT number!). See our about page to learn more about our mission statement.

After enrollment, participants get 1 year of unlimited access to all course material (videos, Python scripts, quizzes and certificate).


Before enrolling in this course, you should have a basic understanding of descriptive statistics (e.g., mean, median, standard deviation, histograms, scatter plots) and inference (e.g., confidence intervals, hypothesis testing). You should also have followed and completed our Machine Learning Essentials course.

Course Outline

  • Chapter 0: Introduction
    • Instructor Team
    • Our Publications
    • Course Outline
    • Python
    • Python tutorial
    • Disclaimer
  • Chapter 1: Introduction to Recommender Systems
    • Definition
    • Business Value
    • Examples
    • Impact
    • Items and users
    • Personalized versus non-personalized recommendation
    • Recommendation System Challenges
    • Quiz
  • Chapter 2: Building a Recommender System
    • User Interest
      • Explicit user interest
      • Implicit user interest
    • Rating matrix
      • Scalability problem
      • Sparsity problem
      • Rating bias problem
      • Long tail problem
    • Recommender system workings
      • Collaborative filtering
      • Content filtering
      • Knowledge based filtering
      • Hybrid recommender systems
    • Goal of Recommender System
    • Evaluating recommender systems
    • Evaluating Recommender Systems: precision (RMSE, MAD, Pearson correlation)
    • Evaluating Recommender Systems: conversion (confusion matrix, F-measure, precision recall curve, ROC curve)
    • Evaluating Recommender Systems: ranking (Mean Average Precision @k, Spearman’s rank order correlation, Kendall’s tau, Goodman-Kruskal gamma)
    • Evaluating Recommender Systems: Other criteria (diversity, user coverage, item coverage, serendipity, cold start, profit)
    • Quiz
  • Chapter 3: Collaborative Filtering
    • Definition
    • User-User Collaborative Filtering: Basic Intuition
    • User-User Collaborative Filtering: Similarity Measures (Pearson’s correlation coefficient, (adjusted) cosine measure, Jaccard index)
    • User-User Collaborative Filtering: Prediction
    • User-User Collaborative Filtering in Python (Jupyter notebook + plain Python code)
    • Item-Item Collaborative Filtering
    • Item-Item Collaborative Filtering in Python (Jupyter notebook + plain Python code)
    • K-nearest neighbor based Filtering: Extensions
      • Spreading activation
      • Slope One predictors
      • Recursive Collaborative Filtering
    • Slope One predictors in Python (Jupyter notebook + plain Python code)
    • K-nearest neighbor based Filtering: Advantages
    • K-nearest neighbor based Filtering: Disadvantages
    • K-nearest neighbor based Filtering: Scientific Perspective
    • Quiz
  • Chapter 4: Model-based collaborative filtering
    • Clustering
    • Regression
    • Regression in Python (Jupyter notebook + plain Python code)
    • Association rules
    • Bayesian networks
    • Network Community Mining
    • Evaluation
    • Quiz
  • Chapter 5: Matrix Factorisation
    • Motivation
    • UV decomposition
    • Non-negative UV matrix decomposition
    • Non-negative UV matrix decomposition in Python (Jupyter notebook + plain Python code)
    • Singular Value Decomposition (SVD)
    • Singular Value Decomposition (SVD) in Python (Jupyter notebook + plain Python code)
    • Tensor Decomposition (short)
    • Closing thoughts
    • Quiz
  • Chapter 6: Content Filtering
    • Content Filtering: Basic Idea
    • Item and User profiles
    • Content Filtering in Python (Jupyter notebook + plain Python code)
    • Content Filtering: Advantages
    • Content Filtering: Disadvantages
    • Quiz
  • Chapter 7: Knowledge Based Filtering
    • Knowledge Based Filtering
    • Hybrid Filtering
    • Quiz
  • Chapter 8: Deep Learning neural networks for recommendation
    • Neural networks
    • Deep learning neural networks
    • Deep neural networks for recommendation
    • Autoencoders: AutoRec
    • AutoRec in Python (Jupyter notebook + plain Python code)
    • Item2Vec
    • Item2Vec in Python (Jupyter notebook + plain Python code)
    • Other Deep Learning Neural Networks for Recommendation (short)
    • Deep Learning: Evaluation
    • Deep Learning: Conclusion
    • Quiz
  • Chapter 9: Closing Thoughts
    • Recommender Systems: Attacks (nuke versus push attack)
      • Random attack
      • Average attack
      • Bandwagon attack
      • Segment attack models
    • Recommender Systems: Attack Detection
    • Recommender System Software
    • Challenges in Recommendation Systems
    • Quiz
Prof. dr. Bart Baesens

Bart was born in Bruges (West Flanders, Belgium) on February 27th, 1975. He speaks West-Flemish (which he is very proud of!), Dutch, French, a bit of German, some English and can order a beer in Chinese. He is married to Katrien Denys and has 3 kids (Ann-Sophie, Victor and Hannelore), and 2 cats (Felix and Simba). Besides enjoying time with his family, he is also a diehard Club Brugge soccer fan. Bart is a foodie and amateur cook. He loves drinking a good glass of wine (his favorites are white Viognier or red Cabernet Sauvignon) either in his wine cellar or when overlooking the authentic red English phone booth in his garden. His favorite pub is “In den Rozenkrans” in Kessel-Lo (close to Leuven), where you will often find him having a Gueuze Girardin 1882 or Tripel Karmeliet with a spaghetti of the house. Bart loves traveling and his favorite cities are San Francisco, Sydney and Barcelona. He is fascinated by World War I and reads many books on the topic. He is not a big fan of being called professor Baesens (or even worse, professor Baessens), shopping (especially for clothes or shoes), pastis (or other anise-flavored drinks), vacuum cleaning (he can’t bear the sound), students chewing gum during their oral exam of Credit Risk Modeling (or who had garlic for breakfast), long meetings (> 30 minutes), phone calls (asynchronous e-mail communication is a lot more efficient!), admin (e.g., forms and surveys) or French fries (Belgian fries are a lot better!). He is often praised for his sense of humor, although he is usually more modest about this.

Bart is also a professor of Big Data and Analytics at KU Leuven (Belgium) and a lecturer at the University of Southampton (United Kingdom). He has done extensive research on Big Data & Analytics, Credit Risk Modeling, Fraud Detection and Marketing Analytics. He has written more than 250 scientific papers, some of which have been published in well-known international journals (e.g., MIS Quarterly, Machine Learning, Management Science, MIT Sloan Management Review and IEEE Transactions on Knowledge and Data Engineering) and presented at top international conferences (e.g., ICIS, KDD, CAISE). He has received various best paper and best speaker awards. Bart is the author of 8 books: Credit Risk Management: Basic Concepts (Oxford University Press, 2009), Analytics in a Big Data World (Wiley, 2014), Beginning Java Programming (Wiley, 2015), Fraud Analytics using Descriptive, Predictive and Social Network Techniques (Wiley, 2015), Credit Risk Analytics (Wiley, 2016), Profit Driven Business Analytics (Wiley, 2017), Web Scraping for Data Science with Python (Apress, 2018), and Principles of Database Management (Cambridge University Press, 2018). He has sold more than 25,000 copies of these books worldwide, some of which have been translated into Chinese, Russian and Korean. Overviews of his research and of the courses he teaches are available online. He also regularly tutors, advises and provides consulting support to international firms regarding their big data, analytics and credit risk management strategy.

Dr. ir. Michael Reusens

Michael was born in Leuven (Belgium) on October 7th, 1991. Leuven being the best city in the world, Michael grew up, studied, and still lives there. He loves beer and good food, best paired with family and friends. His favorite pub is Thomas Stapleton, a cozy Irish pub in the heart of Leuven. Michael loves being creative and telling stories, which he does weekly while playing improv theatre with his group, The Beluga’s. To shed some of the calories from all that good beer and food, he also plays baseball with the Leuven Twins (Go Twins!).

Michael graduated as Master of Engineering: Computer Science from KU Leuven in June 2014. In April 2018, he successfully defended his Ph.D. thesis at the Department of Decision Sciences and Information Management. During his research, he collaborated with the Flemish public employment services on developing state-of-the-art job recommendation systems and on researching how data mining algorithms and methodologies can be adapted to better match employers with potential employees. His work has been published in well-known journals such as Decision Support Systems and Knowledge-Based Systems. Michael has since left academia, but is still regularly invited as a guest lecturer on the topic of recommender systems.

Michael now works as a data scientist at Statistics Flanders, the network of Flemish government agencies that develop, produce and publish official statistics.