4.55 out of 5

Building Recommender Systems with Machine Learning and AI

How to create recommendation systems with deep learning, collaborative filtering, and machine learning.
Understand and apply user-based and item-based collaborative filtering to recommend items to users
Create recommendations using deep learning at massive scale
Build recommender systems with neural networks and Restricted Boltzmann Machines (RBMs)
Make session-based recommendations with recurrent neural networks and Gated Recurrent Units (GRUs)
Build a framework for testing and evaluating recommendation algorithms with Python
Apply the right measurements of a recommender system's success
Build recommender systems with matrix factorization methods such as SVD and SVD++
Apply real-world learnings from Netflix and YouTube to your own recommendation projects
Combine many recommendation algorithms together in hybrid and ensemble approaches
Use Apache Spark to compute recommendations at large scale on a cluster
Use K-Nearest-Neighbors to recommend items to users
Solve the "cold start" problem with content-based recommendations
Understand solutions to common issues with large-scale recommender systems

New! Updated for Tensorflow 2, Amazon Personalize, and more.

Learn how to build recommender systems from one of Amazon's pioneers in the field. Frank Kane spent over nine years at Amazon, where he managed and led the development of many of Amazon's personalized product recommendation technologies.

You've seen automated recommendations everywhere: on Netflix's home page, on YouTube, and on Amazon, as these machine learning algorithms learn about your unique interests and surface the best products or content for you as an individual. These technologies have become central to the largest, most prestigious tech employers out there, and by understanding how they work, you'll become very valuable to them.

We'll cover tried-and-true recommendation algorithms based on neighborhood-based collaborative filtering, and work our way up to more modern techniques including matrix factorization and even deep learning with artificial neural networks. Along the way, you'll learn from Frank's extensive industry experience to understand the real-world challenges you'll encounter when applying these algorithms at large scale and with real-world data.

Recommender systems are complex; don't enroll in this course expecting a learn-to-code format. There's no recipe to follow for building a recommender system; you need to understand the different algorithms and how to choose when to apply each one for a given situation. We assume you already know how to code.

However, this course is very hands-on; you'll develop your own framework for evaluating and combining many different recommendation algorithms together, and you'll even build your own neural networks using Tensorflow to make recommendations from real-world movie ratings from real people. We'll cover:

  • Building a recommendation engine
  • Evaluating recommender systems
  • Content-based filtering using item attributes
  • Neighborhood-based collaborative filtering with user-based, item-based, and KNN CF
  • Model-based methods including matrix factorization and SVD
  • Applying deep learning, AI, and artificial neural networks to recommendations
  • Session-based recommendations with recurrent neural networks
  • Scaling to massive data sets with Apache Spark machine learning, Amazon DSSTNE deep learning, and AWS SageMaker with Factorization Machines
  • Real-world challenges and solutions with recommender systems
  • Case studies from YouTube and Netflix
  • Building hybrid, ensemble recommenders

This comprehensive course takes you all the way from the early days of collaborative filtering to bleeding-edge applications of deep neural networks and modern machine learning techniques for recommending the best items to every individual user.

The coding exercises in this course use the Python programming language. We include an intro to Python if you're new to it, but you'll need some prior programming experience to use this course successfully. We also include a short introduction to deep learning if you are new to the field of artificial intelligence, but you'll need to be able to understand new computer algorithms.

High-quality, hand-edited English closed captions are included to help you follow along.

I hope to see you in the course soon!

Getting Started

1
WeCours 101: Getting the Most From This Course
2
Note: Alternate dataset download location
3
[Activity] Install Anaconda, course materials, and create movie recommendations!

After a brief introduction to the course, we'll dive right in and install what you need: Anaconda (your Python development environment), the course materials, and the MovieLens data set of 100,000 real movie ratings from real people. We'll then run a quick example to generate movie recommendations using the SVD algorithm, to make sure it all works!

4
Course Roadmap

We'll just lay out the structure of the course so you know what to expect later on (and when you'll start writing some code of your own!) Also, we'll provide advice on how to navigate this course depending on your prior experience.

5
What Is a Recommender System?

The phrase "recommender system" is a more general-sounding term than it really is. Let's briefly clarify what a recommender system is - and more importantly, what it is not.

6
Types of Recommenders

There are many different flavors of recommender systems, and you encounter them every day. Let's review some of the applications of recommender systems in the real world.

7
Understanding You through Implicit and Explicit Ratings

How do recommender systems learn about your individual tastes and preferences? We'll explain how both explicit ratings and implicit ratings work, and the strengths and weaknesses of both.

8
Top-N Recommender Architecture

Most real-world recommender systems are "Top-N" systems that produce a list of top results for each individual. There are a couple of main architectural approaches to building them, which we'll review here.

9
[Quiz] Review the basics of recommender systems.

We'll review what we've covered in this section with a quick 4-question quiz, and discuss the answers.

Introduction to Python [Optional]

1
[Activity] The Basics of Python

After installing Jupyter Notebook, we'll cover the basics of what's different about Python, including its use of white-space. We'll dissect a simple function to get a feel of what Python code looks like.

2
Data Structures in Python

We'll look at using lists, tuples, and dictionaries in Python.

3
Functions in Python

We'll see how to define a function in Python, and how Python lets you pass functions to other functions. We'll also look at a simple example of a Lambda function.
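As a minimal sketch of these ideas (the function names here are just for illustration, not from the course materials), this is what passing a function to another function, and a lambda, look like in Python:

```python
# A function that takes another function as one of its arguments.
def apply_twice(f, value):
    """Apply the function f to value, then apply it again to the result."""
    return f(f(value))

# A lambda is a small, anonymous, single-expression function.
square = lambda x: x * x

print(apply_twice(square, 3))  # 3 -> 9 -> 81
```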

4
[Exercise] Booleans, loops, and a hands-on challenge

We'll look at how Boolean expressions work in Python as well as loops. Then, we'll give you a challenge to write a simple Python function on your own!

Evaluating Recommender Systems

1
Train/Test and Cross Validation

Learn about different testing methodologies for evaluating recommender systems offline, including train/test, K-Fold Cross Validation, and Leave-One-Out cross-validation.
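To make leave-one-out splitting concrete, here is one possible sketch in plain Python (the function and variable names are hypothetical, not the course framework's):

```python
import random

def leave_one_out(ratings_by_user, seed=42):
    """Hold out one randomly chosen rating per user for testing.

    ratings_by_user: dict mapping user -> list of (item, rating) tuples.
    Returns (train, test) lists of (user, item, rating) triples, where
    test contains exactly one held-out rating for each user.
    """
    rng = random.Random(seed)
    train, test = [], []
    for user, ratings in ratings_by_user.items():
        held_out = rng.randrange(len(ratings))
        for i, (item, rating) in enumerate(ratings):
            if i == held_out:
                test.append((user, item, rating))
            else:
                train.append((user, item, rating))
    return train, test

data = {"alice": [("A", 5.0), ("B", 3.0)], "bob": [("A", 4.0), ("C", 2.0)]}
train, test = leave_one_out(data)
print(len(train), len(test))  # 2 2
```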

2
Accuracy Metrics (RMSE, MAE)

Learn about Root Mean Squared Error, Mean Absolute Error, and why we use these measures of recommendation prediction accuracy.
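These two metrics are simple enough to sketch in a few lines of plain Python (the function names are ours, not the course's):

```python
import math

def mae(predicted, actual):
    """Mean Absolute Error: the average magnitude of the prediction errors."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def rmse(predicted, actual):
    """Root Mean Squared Error: like MAE, but penalizes large errors more."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

predicted = [3.5, 4.0, 2.0]
actual = [4.0, 4.0, 1.0]
print(mae(predicted, actual))             # 0.5
print(round(rmse(predicted, actual), 4))  # 0.6455
```

Note how the single error of 1.0 pulls RMSE above MAE: squaring weights the outlier more heavily.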

3
Top-N Hit Rate - Many Ways

Learn about several ways to measure the accuracy of top-N recommenders, including hit rate, cumulative hit rate, average reciprocal hit rank, rating hit rate, and more.
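Two of these metrics are easy to sketch against a held-out item per user (a hedged illustration; names and data are hypothetical):

```python
def hit_rate(top_n, held_out):
    """Fraction of users whose held-out item appears in their top-N list."""
    hits = sum(1 for user, item in held_out.items() if item in top_n[user])
    return hits / len(held_out)

def avg_reciprocal_hit_rank(top_n, held_out):
    """Like hit rate, but a hit near the top of the list counts for more."""
    total = 0.0
    for user, item in held_out.items():
        if item in top_n[user]:
            total += 1.0 / (top_n[user].index(item) + 1)
    return total / len(held_out)

top_n = {"alice": ["A", "B", "C"], "bob": ["D", "E", "F"]}
held_out = {"alice": "B", "bob": "X"}
print(hit_rate(top_n, held_out))                 # 0.5 (alice hit, bob missed)
print(avg_reciprocal_hit_rank(top_n, held_out))  # 0.25 (hit at rank 2 = 1/2)
```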

4
Coverage, Diversity, and Novelty

Learn how to measure the coverage of your recommender system, how diverse its results are, and how novel its results are.

5
Churn, Responsiveness, and A/B Tests

Measure how often your recommendations change (churn), how quickly they respond to new data (responsiveness), and why no metric matters more than the results of real, online A/B tests. We'll also talk about perceived quality, where you explicitly ask your users to rate your recommendations.

6
[Quiz] Review ways to measure your recommender.

In this short quiz, we'll review what we've learned about different ways to measure the qualities and accuracy of your recommender system.

7
[Activity] Walkthrough of RecommenderMetrics.py

Let's walk through this course's Python module for implementing the metrics we've discussed in this section on real recommender systems.

8
[Activity] Walkthrough of TestMetrics.py

We'll walk through our sample code to apply our RecommenderMetrics module to a real SVD recommender using real MovieLens rating data, and measure its performance in many different ways.

9
[Activity] Measure the Performance of SVD Recommendations

After running TestMetrics.py, we'll look at the results for our SVD recommender, and discuss how to interpret them.

A Recommender Engine Framework

1
Our Recommender Engine Architecture

Let's review the architecture of our recommender engine framework, which will let us easily implement, test, and compare different algorithms throughout the rest of this course.

2
[Activity] Recommender Engine Walkthrough, Part 1

In part one of the code walkthrough of our recommender engine, we'll see how it's used, and dive into the Evaluator class.

3
[Activity] Recommender Engine Walkthrough, Part 2

In part two of the walkthrough, we'll dive into the EvaluationData class, and kick off a test with the SVD recommender.

4
[Activity] Review the Results of our Algorithm Evaluation.

Wrapping up our review of our recommender system architecture, we'll look at the results of using our framework to evaluate the SVD algorithm, and interpret them.

Content-Based Filtering

1
Content-Based Recommendations, and the Cosine Similarity Metric

We'll talk about how content-based recommendations work, and introduce the cosine similarity metric. Cosine scores will be used throughout the course, and understanding their mathematical basis is important.
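As a small taste of the math, here's a sketch of cosine similarity over hypothetical one-hot genre vectors (not the course's code):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two attribute vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

# Hypothetical genre vectors: [action, comedy, sci-fi]
movie1 = [1, 0, 1]
movie2 = [1, 1, 0]
print(round(cosine_similarity(movie1, movie2), 9))  # 0.5
print(round(cosine_similarity(movie1, movie1), 9))  # 1.0
```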

2
K-Nearest-Neighbors and Content Recs

We'll cover how to factor time into our content-based recs, and how the concept of KNN will allow us to make rating predictions based purely on similarity scores derived from genres and release dates.

3
[Activity] Producing and Evaluating Content-Based Movie Recommendations

We'll look at some code for producing movie recommendations based on their genres and years, and evaluate the results using the MovieLens data set.

4
A Note on Using Implicit Ratings.

A common point of confusion is how to use implicit ratings, such as purchase or click data, with the algorithms we're talking about. It's pretty simple, but let's cover it here.

5
[Activity] Bleeding Edge Alert! Mise en Scene Recommendations

In our first "bleeding edge alert," we'll examine the use of Mise en Scene data for providing additional content-based information to our recommendations. And, we'll turn the idea into code, and evaluate the results.

6
[Exercise] Dive Deeper into Content-Based Recommendations

In two different hands-on exercises, dive into which content attributes provide the best recommendations - and try augmenting our content-based recommendations using popularity data.

Neighborhood-Based Collaborative Filtering

1
Measuring Similarity, and Sparsity

Similarity between users or items is at the heart of all neighborhood-based approaches; we'll discuss how similarity measures fit into our architecture, and the effect data sparsity has on it.

2
Similarity Metrics

We'll cover different ways of measuring similarity, including cosine, adjusted cosine, Pearson, Spearman, Jaccard, and more - and how to know when to use each one.

3
User-based Collaborative Filtering

We'll illustrate how user-based collaborative filtering works, where we recommend stuff that people similar to you liked.
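The core idea can be sketched in a few lines of plain Python: score each item the target user hasn't seen by the similarity-weighted ratings of other users. This is a simplified illustration (no candidate filtering or normalization), and the names are ours:

```python
import math

def cosine(u, v):
    """Cosine similarity between two users' rating dicts, over shared items."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def user_based_recs(target, ratings, n=2):
    """Rank unseen items by the similarity-weighted ratings of other users."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:n]

ratings = {
    "alice": {"A": 5.0, "B": 4.0},
    "bob":   {"A": 5.0, "B": 4.0, "C": 5.0},
    "carol": {"D": 2.0},
}
print(user_based_recs("alice", ratings))  # ['C', 'D'] - bob is most like alice
```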

4
[Activity] User-based Collaborative Filtering, Hands-On

Let's write some code to apply user-based collaborative filtering to the MovieLens data set, run it, and evaluate the results.

5
Item-based Collaborative Filtering

We'll talk about the advantages of flipping user-based collaborative filtering on its head, to give us item-based collaborative filtering - and how it works.

6
[Activity] Item-based Collaborative Filtering, Hands-On

Let's write, run, and evaluate some code to apply item-based collaborative filtering to generate recommendations from the MovieLens data set, and compare it to user-based CF.

7
[Exercise] Tuning Collaborative Filtering Algorithms

In this exercise, you're challenged to improve upon the user-based and item-based collaborative filtering algorithms we presented, by tweaking the way candidate generation works.

8
[Activity] Evaluating Collaborative Filtering Systems Offline

Since collaborative filtering does not make rating predictions, evaluating it offline is challenging - but we can test it with hit rate metrics, and leave-one-out cross validation. Which we'll do, in this activity.

9
[Exercise] Measure the Hit Rate of Item-Based Collaborative Filtering

In the previous activity, we measured the hit rate of a user-based collaborative filtering system. Your challenge is to do the same for an item-based system.

10
KNN Recommenders

Learn how the ideas of neighborhood-based collaborative filtering can be applied to frameworks based on rating predictions, with K-Nearest-Neighbor recommenders.
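The rating-prediction step of an item-based KNN recommender can be sketched as a similarity-weighted average over the k most similar items the user has rated (a hedged illustration with hypothetical names and data):

```python
def knn_predict(user_ratings, similarities, k=2):
    """Predict a rating for a target item as the similarity-weighted average
    of the user's k rated items that are most similar to it.

    user_ratings: dict item -> the user's rating of that item.
    similarities: dict item -> similarity of that item to the target item.
    """
    neighbors = sorted(user_ratings,
                       key=lambda i: similarities.get(i, 0.0),
                       reverse=True)[:k]
    num = sum(similarities.get(i, 0.0) * user_ratings[i] for i in neighbors)
    den = sum(similarities.get(i, 0.0) for i in neighbors)
    return num / den if den else 0.0

sims = {"A": 0.9, "B": 0.3, "C": 0.1}      # similarity to the target item
rated = {"A": 5.0, "B": 3.0, "C": 1.0}     # what the user rated
print(round(knn_predict(rated, sims, k=2), 4))  # 4.5
```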

11
[Activity] Running User and Item-Based KNN on MovieLens

Let's use SurpriseLib to quickly run user-based and item-based KNN on our MovieLens data, and evaluate the results.

12
[Exercise] Experiment with different KNN parameters.

Try different similarity measures to see if you can improve on the results of KNN - and we'll talk about why this is so challenging.

13
Bleeding Edge Alert! Translation-Based Recommendations

In our next "bleeding edge alert," we'll discuss Translation-Based Recommendations - an idea unveiled in the 2017 RecSys conference for recommending sequences of events, based on vectors in item similarity space.

Matrix Factorization Methods

1
Principal Component Analysis (PCA)

Let's learn how PCA allows us to reduce higher-dimensional data into lower dimensions, which is the first step toward understanding SVD.

2
Singular Value Decomposition

We'll extend PCA to the problem of making movie recommendations, and learn how SVD is just a specific implementation of PCA.
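As a taste of what SVD does, here's a tiny numpy sketch (not the course's code) that factors a toy ratings matrix and rebuilds it from only its top two latent factors:

```python
import numpy as np

# A tiny, dense user x item ratings matrix, purely for illustration.
R = np.array([
    [5.0, 4.0, 1.0],
    [4.0, 5.0, 1.0],
    [1.0, 1.0, 5.0],
])

# Factor R into U (user factors), s (singular values), Vt (item factors).
U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Keep only the top-k latent factors; R_hat is a low-rank approximation of R
# that captures the dominant taste dimensions in the data.
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(R_hat, 1))
```

In a real recommender the matrix is mostly missing values, which is why techniques like Funk SVD learn the factors by minimizing error on only the observed ratings rather than computing an exact decomposition.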

3
[Activity] Running SVD and SVD++ on MovieLens

Let's run SVD and SVD++ on our MovieLens movie ratings data set, and evaluate the results. They're really good!

4
Improving on SVD

We'll talk about some variants and extensions to SVD that have emerged, and the importance of hyperparameter tuning on SVD, as well as how to tune parameters in SurpriseLib using the GridSearchCV class.

5
[Exercise] Tune the hyperparameters on SVD

Have a go at modifying our SVD bake-off code to find the optimal values of the various hyperparameters for SVD, and see if it makes a difference in the results.

6
Bleeding Edge Alert! Sparse Linear Methods (SLIM)

We'll cover some exciting research from the University of Minnesota based on matrix factorization.

Introduction to Deep Learning [Optional]

1
Deep Learning Introduction

A quick introduction on what to expect from this section, and who can skip it.

2
Deep Learning Pre-Requisites

We'll cover the concepts of Gradient Descent, Reverse Mode AutoDiff, and Softmax, which you'll need to build deep neural networks.

3
History of Artificial Neural Networks

We'll cover the evolution of neural networks from their origin in the 1940's, all the way up to the architecture of modern deep neural networks.

4
[Activity] Playing with Tensorflow

We'll use the Tensorflow Playground to get a hands-on feel of how deep neural networks operate, and the effects of different topologies.

5
Training Neural Networks

We'll cover the mechanics of different activation functions and optimization functions for neural networks, including ReLU, Adam, RMSProp, and Gradient Descent.

6
Tuning Neural Networks

We'll talk about how to prevent overfitting using techniques such as dropout layers, and how to tune your topology for the best results.

7
Activation Functions: More Depth
8
Introduction to Tensorflow

We'll walk through an example of using Tensorflow's low-level API to distribute the processing of neural networks using Python.

9
Important Tensorflow setup note!
10
[Activity] Handwriting Recognition with Tensorflow, part 1

In this hands-on activity, we'll implement handwriting recognition on real data using Tensorflow's low-level API. Part 1 of 3.

11
[Activity] Handwriting Recognition with Tensorflow, part 2

In this hands-on activity, we'll implement handwriting recognition on real data using Tensorflow's low-level API. Part 2 of 3.

12
Introduction to Keras

Keras is a higher-level API that makes developing deep neural networks with Tensorflow a lot easier. We'll explain how it works and how to use it.

13
[Activity] Handwriting Recognition with Keras

We'll tackle the same handwriting recognition problem as before, but this time using Keras with much simpler code, and better results.

14
Classifier Patterns with Keras

There are different patterns to use in Keras for multi-class or binary classification problems; we'll talk about how to tackle each.

15
[Exercise] Predict Political Parties of Politicians with Keras

As an exercise challenge, develop your own neural network using Keras to predict the political parties of politicians, based just on their votes on 16 different issues.

16
Intro to Convolutional Neural Networks (CNN's)

We'll talk about how your brain's visual cortex recognizes images seen by your eyes, and how the same approach inspires artificial convolutional neural networks.

17
CNN Architectures

The topology of CNN's can get complicated, and there are several variations of them you can choose from for certain problems, including LeNet, GoogLeNet, and ResNet.

18
[Activity] Handwriting Recognition with Convolutional Neural Networks (CNNs)

We'll tackle handwriting recognition again, this time using Keras and CNN's for our best results yet. Can you improve upon them?

19
Intro to Recurrent Neural Networks (RNN's)

Recurrent Neural Networks are appropriate for sequences of information, such as time series data, natural language, or music. We'll dive into how they work and some variations of them.

20
Training Recurrent Neural Networks

Training RNN's involves back-propagating through time, which makes them extra-challenging to work with.

21
[Activity] Sentiment Analysis of Movie Reviews using RNN's and Keras

We'll wrap up our intro to deep learning by applying RNN's to the problem of sentiment analysis, which can be modeled as a sequence-to-vector learning problem.

22
Tuning Neural Networks
23
Neural Network Regularization Techniques

Deep Learning for Recommender Systems

1
Intro to Deep Learning for Recommenders

We'll introduce the idea of using neural networks to produce recommendations, and explore whether this concept is overkill or not.

2
Restricted Boltzmann Machines (RBM's)

We'll cover a very simple neural network called the Restricted Boltzmann Machine, and show how it can be used to produce recommendations given sparse rating data.

3
[Activity] Recommendations with RBM's, part 1

We'll walk through our implementation of Restricted Boltzmann Machines integrated into our recommender framework. Part 1 of 2.

4
[Activity] Recommendations with RBM's, part 2

We'll walk through our implementation of Restricted Boltzmann Machines integrated into our recommender framework. Part 2 of 2.

5
[Activity] Evaluating the RBM Recommender

We'll run our RBM recommender, and study its results.

6
[Exercise] Tuning Restricted Boltzmann Machines

You're challenged to tune the RBM using GridSearchCV to see if you can improve its results.

7
Exercise Results: Tuning a RBM Recommender

We'll review my results from the previous exercise, so you can compare them against your own.

8
Auto-Encoders for Recommendations: Deep Learning for Recs

We'll learn how to apply modern deep neural networks to recommender systems, and the challenges sparse data creates.

9
[Activity] Recommendations with Deep Neural Networks

We'll walk through our code for producing recommendations with deep learning, and evaluate the results.

10
Clickstream Recommendations with RNN's

We'll introduce "GRU4Rec," a technique that applies recurrent neural networks to the problem of clickstream recommendations.

11
[Exercise] Get GRU4Rec Working on your Desktop

As a more challenging exercise that mimics what you might do in the real world, try and port some older research code into a modern Python and Tensorflow environment, and get it running.

12
Exercise Results: GRU4Rec in Action

We'll review my results from the previous exercise.

13
Tensorflow Recommenders (TFRS): Intro, and Building a Retrieval Stage
14
Tensorflow Recommenders (TFRS): Building a Ranking Stage
15
TFRS: Incorporating Side Features and Deep Retrieval
16
TFRS: Multi-Task Recommenders, Deep & Cross Networks, ScaNN, and Serving
17
Bleeding Edge Alert! Deep Factorization Machines

We'll explore DeepFM, which combines the strengths of Factorization Machines and of Deep Neural Networks to produce a hybrid solution that out-performs either technique.

18
More Emerging Tech to Watch

We'll cover a few more "bleeding edge" topics, including Word2Vec, 3D CNN's for session-based recommendations, and feature extraction with CNN's.

Scaling it Up

1
WARNING: Don't install Java 16!
2
[Activity] Introduction and Installation of Apache Spark

We'll introduce Apache Spark as our first means of "scaling it up," and get it installed on your system if you want to experiment with it.

3
Apache Spark Architecture

We'll explain just enough about how Spark works to let you understand how it distributes its work across a cluster, and the main objects our sample code will use: RDD's and DataFrames.

4
[Activity] Movie Recommendations with Spark, Matrix Factorization, and ALS

We'll start by using Spark's MLLib to generate recommendations with ALS for our ml-100k data set.

5
[Activity] Recommendations from 20 million ratings with Spark

We'll scale things up, and use all of the cores on our local PC to process 20 million ratings and produce top-N recommendations with Apache Spark.

6
Amazon DSSTNE

Amazon open-sourced its recommender engine called DSSTNE, which makes it easy to apply deep neural networks to massive, sparse data sets and produce great recommendations at large scale.

7
DSSTNE in Action

Watch as we use Amazon DSSTNE on an EC2 Ubuntu instance to produce movie recommendations using a deep neural network.

8
Scaling Up DSSTNE

Let's explore how Amazon scaled DSSTNE up, paired with Apache Spark, to process their massive data and produce recommendations for millions of customers.

9
AWS SageMaker and Factorization Machines

Amazon's SageMaker service offers some machine learning algorithms that can be used for recommendations, including factorization machines.

10
SageMaker in Action: Factorization Machines on one million ratings, in the cloud

Watch as I use SageMaker from a cloud-hosted Notebook to pre-process the MovieLens 1-million-rating data set, train and save a Factorization Machine model, and deploy the model for making real-time predictions for movie recommendations.

11
Other Systems of Note (Amazon Personalize, RichRelevance, Recombee, and more)

A huge number of commercial SaaS offerings have emerged to offer easy-to-use recommender systems out of the box, and there are many open-source offerings that allow you to develop recommender systems at scale at as low a level as you want. We'll cover some of the more popular ones, and enumerate the rest.

12
Recommender System Architecture

The specifics of how you deploy a recommender system into production will depend on the environment you're working within, but we'll cover some high-level architectures to consider and some of the technologies you might employ.

Real-World Challenges of Recommender Systems

1
The Cold Start Problem (and solutions)

How do you make recommendations for a brand-new user, or with brand-new items? We'll cover some solutions to this "cold start problem."

2
[Exercise] Implement Random Exploration

One solution to the cold-start problem is random exploration of new items, using underutilized slots in recommendation results. Try implementing this within our framework.
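One possible sketch of this idea, before you try it yourself (the function name and data are hypothetical, and the course framework will differ):

```python
import random

def with_exploration(top_n, catalog, slot=-1, seed=None):
    """Replace one slot of a top-N list with a random item the list doesn't
    already contain, so new or unpopular items get a chance to be seen."""
    rng = random.Random(seed)
    unseen = [c for c in catalog if c not in top_n]
    if not unseen:
        return list(top_n)
    result = list(top_n)
    result[slot] = rng.choice(unseen)  # sacrifice the last (least valuable) slot
    return result

recs = ["A", "B", "C"]
catalog = ["A", "B", "C", "X", "Y"]
print(with_exploration(recs, catalog, seed=1))
```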

3
Exercise Solution: Random Exploration

We'll walk through my implementation of random exploration, and look at the results.

4
Stoplists

Avoiding PR disasters requires you to keep certain items out of your recommender system entirely; that's what stoplists are for.

5
[Exercise] Implement a Stoplist

Try your hand at implementing a stoplist to keep movies with potentially offensive titles out of your results entirely.

6
Exercise Solution: Implement a Stoplist

We'll walk through my implementation of a stoplist, and see what it does.

7
Filter Bubbles, Trust, and Outliers

We'll cover three topics: how to prevent "filter bubbles" that shield users from new ideas, how to ensure users trust your recommendation through transparency, and the effect of outliers on your recommendations.

8
[Exercise] Identify and Eliminate Outlier Users

Filter out users that are more than 3-sigma from the mean number of ratings.
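One possible shape of the solution, sketched with the standard library (names and data are ours, and the course framework's version will differ):

```python
import statistics

def remove_outlier_users(rating_counts, sigmas=3.0):
    """Drop users whose number of ratings is more than `sigmas` standard
    deviations away from the mean rating count."""
    counts = list(rating_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    return {u: c for u, c in rating_counts.items()
            if abs(c - mean) <= sigmas * stdev}

# 50 ordinary users plus one suspiciously prolific rater.
counts = {f"user{i}": 20 for i in range(50)}
counts["bot"] = 5000
filtered = remove_outlier_users(counts)
print("bot" in filtered)  # False
```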

9
Exercise Solution: Outlier Removal

Review my implementation of outlier detection and removal, and see what effect it has on our results.

10
Fraud, The Perils of Clickstream, and International Concerns

We'll cover ways to prevent malicious users from gaming your system, a cautionary tale about relying too heavily on clickstream implicit data, and concerns specific to international deployment of your recommender system.

11
Temporal Effects, and Value-Aware Recommendations

Time can play a role in your recommendations - how long ago was a rating made, and during what season? We'll explore these temporal effects, and also talk about "value aware recommendations" and how to factor profit into your results.

Case Studies

1
Case Study: YouTube, Part 1

We'll explore the challenges unique to recommendations at YouTube, and their high level approach using deep learning.

2
Case Study: YouTube, Part 2

Dive deeper into YouTube's architecture for applying deep learning to both candidate generation and candidate ranking.

3
Case Study: Netflix, Part 1

At Netflix, "everything is a recommendation." Learn about their approach, and heavy use of hybrid algorithms.

4
Case Study: Netflix, Part 2

Dive deeper into how Netflix produces context-aware recommendations.

Hybrid Approaches

1
Hybrid Recommenders and Exercise

We'll cover the very simple concept of hybrid recommenders, and challenge you to build a HybridAlgorithm in our recommender framework that can combine any list of algorithms together into one.

2
Exercise Solution: Hybrid Recommenders

Explore my hybrid algorithm implementation, and check out its results.

Wrapping Up

1
More to Explore

Pointers to external books, conferences, and papers to keep you up to date beyond this course.

2
Bonus Lecture: More courses to explore!
4.6 out of 5
1,929 ratings

Rating details

5 stars: 1,022
4 stars: 581
3 stars: 134
2 stars: 27
1 star: 27

Includes

11 hours of on-demand video
Full lifetime access
Access on mobile and TV
Certificate of completion
