Machine Learning Course
I. Description
This course introduces artificial intelligence, with a particular focus on machine learning. Learners will gain a solid understanding of core machine learning concepts, including key algorithms for supervised and unsupervised learning, as well as widely used techniques in the field. The course also guides learners through setting up, implementing, and evaluating models, helping them understand the full process of building a machine learning system from start to finish.
II. What You'll Learn
- Grasp the core concepts and mechanics of foundational machine learning algorithms.
- Develop complete machine learning models using TensorFlow, scikit-learn, and Keras, covering algorithms for regression (like Linear and Lasso Regression) and classification (such as SVM, Naive Bayes, and Decision Trees).
- Master techniques for fine-tuning, evaluating, and optimizing machine learning models to enhance their performance and accuracy.
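As a small taste of the end-to-end workflow described above, the sketch below fits a Lasso regression model with scikit-learn and evaluates it on held-out data. The synthetic dataset and all parameter values are purely illustrative, not part of the course materials.

```python
# Minimal sketch: train and evaluate a Lasso regression model.
# The toy dataset here is illustrative only.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Generate a toy regression dataset (100 samples, 5 features)
X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=0)

# Hold out a test set to estimate generalization performance
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit an L1-regularized (Lasso) linear model
model = Lasso(alpha=1.0)
model.fit(X_train, y_train)

# Evaluate on the held-out data
print(f"R^2 on test set: {r2_score(y_test, model.predict(X_test)):.3f}")
```

The same fit/predict/score pattern carries over to the classification models covered later in the course.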
III. Prerequisites
- Have a strong foundation in linear algebra, probability, statistics, and basic calculus, enabling you to understand the mathematical concepts behind machine learning.
- Be comfortable with Python programming, with practical experience in writing and debugging code for machine learning tasks.
- Have hands-on experience working with popular machine learning libraries, including TensorFlow, Keras, and scikit-learn, to implement and experiment with various algorithms.
IV. Lecture Schedule
| Lecture | Title | Description | Status | Resources |
|---|---|---|---|---|
| 01 | Introduction to AI and Machine Learning | (-) Overview of AI and Machine Learning (-) Types of Machine Learning (-) Real-world applications of Machine Learning | | |
| 02 | Regression Analysis | (-) Linear Regression (-) Multiple Linear Regression (-) Polynomial Regression | | |
| 03 | Classification Algorithms | (-) Logistic Regression (-) Performance metrics: Precision, Recall, F1-score, AUC-ROC | | |
| 04 | Model Evaluation & Regularization | (-) Train-test split & cross-validation (-) Bias-variance trade-off (-) Regularization techniques: L1, L2 | | |
| 05 | Naive Bayes Classifier | (-) Bayes' theorem & Naive Bayes assumptions (-) Gaussian, Multinomial, and Bernoulli Naive Bayes | | |
| 06 | Support Vector Machine | (-) SVM intuition & mathematics (-) Kernel trick: linear & non-linear SVM | | |
| 07 | Decision Tree | (-) Decision Tree structure (-) Entropy, Gini impurity | | |
| 08 | Ensemble Methods | (-) Random Forest algorithm (-) Ensemble learning techniques: Bagging vs. Boosting vs. Stacking | | |
| 09 | Dimensionality Reduction Techniques | (-) Principal Component Analysis (-) Singular Value Decomposition | | |
| 10 | Unsupervised Learning & Clustering | (-) K-Means Clustering (-) Gaussian Mixture Models | | |
| 11 | Feature Engineering & Data Preprocessing | (-) Handling missing data (-) Feature scaling & encoding (-) Feature selection & extraction | | |
| 12 | Hyperparameter Tuning & Model Optimization | (-) Grid Search, Random Search, Bayesian optimization (-) Learning rate schedulers | | |
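To illustrate how the later lectures build on the earlier ones, the sketch below combines an SVM classifier (Lecture 06) with grid search over cross-validation folds (Lectures 04 and 12). The dataset, parameter grid, and fold count are illustrative choices, not prescribed by the course.

```python
# Hedged sketch: tune an SVM's hyperparameters with grid search
# and k-fold cross-validation, then report held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Candidate hyperparameters: regularization strength C and kernel type
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

# 5-fold cross-validation over every parameter combination
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print(f"Test accuracy: {search.score(X_test, y_test):.3f}")
```

Random Search and Bayesian optimization follow the same pattern but sample the parameter space instead of enumerating it exhaustively.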
V. Acknowledgments
I would like to express my heartfelt gratitude to two authors: Andrew Ng, the creator of the CS229 Machine Learning course, and Aurélien Géron, the author of the book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. Their work has been a tremendous source of inspiration and motivation for me to create these lecture notes.