Deep Learning Course


I. Description

This course provides a comprehensive journey into Deep Learning, covering everything from fundamental neural network concepts to cutting-edge architectures such as CNNs, LSTMs, Transformers, and GANs. You will gain hands-on experience in building, training, and deploying AI models using TensorFlow/PyTorch.
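
To give a feel for the hands-on side, here is a minimal PyTorch sketch of defining a small feedforward network and running one training step; the layer sizes and random data are placeholders chosen purely for illustration, not course material.

```python
import torch
import torch.nn as nn

# A small feedforward classifier; the sizes below are illustrative placeholders.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a random batch (a stand-in for a real dataset).
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))

optimizer.zero_grad()
loss = loss_fn(model(x), y)   # forward pass
loss.backward()               # backpropagation
optimizer.step()              # parameter update
print(loss.item())
```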

II. What You'll Learn

  • Neural networks, backpropagation and optimization techniques
  • Convolutional Neural Networks for image processing
  • Recurrent Neural Networks and Long Short-Term Memory for sequential data
  • Transformers & Attention mechanisms
  • Generative Adversarial Networks for AI-generated content
  • Hyperparameter tuning and optimization methods

III. Prerequisites

To take this course, you should have a basic understanding of Python programming, along with fundamental concepts in linear algebra, probability, statistics, and machine learning. Prior experience with TensorFlow/PyTorch is helpful but not required.

IV. Lecture Schedule

Lecture 01: Introduction to Deep Learning
  • Key concepts in Deep Learning
  • Real-world applications of Deep Learning
  • Overview of neural networks and their structure

Lecture 02: Neural Networks and Backpropagation
  • Neural network structure
  • Activation functions
  • Feedforward process
  • Backpropagation & Gradient Descent
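
To make the feedforward and backpropagation steps concrete, here is a small NumPy sketch of a one-hidden-layer network trained with gradient descent on a toy regression task; the architecture and data are illustrative assumptions, not course code.

```python
import numpy as np

# Toy data: learn y = x1 + x2 with a one-hidden-layer network (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = X.sum(axis=1, keepdims=True)

W1, b1 = rng.normal(size=(2, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)
lr = 0.1

for step in range(200):
    # Feedforward pass with a ReLU activation.
    h = np.maximum(X @ W1 + b1, 0.0)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # Backpropagation: the chain rule applied layer by layer.
    grad_pred = 2 * (pred - y) / len(X)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_h[h <= 0] = 0.0          # ReLU derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(f"final loss: {loss:.4f}")
```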

Lecture 03: Loss Functions and Optimizers
  • Cross-Entropy, Huber Loss, Mean Squared Error
  • Stochastic Gradient Descent, Momentum, Nesterov Accelerated Gradient
  • AdaGrad, RMSProp, Adam, AdamW, LARS
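
As a rough illustration of how several of these optimizers appear in code, here is a PyTorch sketch that fits the same tiny placeholder model with each of them; the model, data, and learning rates are assumptions for demonstration only (LARS is not included, as it is not part of core PyTorch).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(16, 4)
y = torch.randn(16, 1)

# A few of the lecture's optimizers, all available in torch.optim.
optimizers = {
    "SGD + momentum": lambda p: torch.optim.SGD(p, lr=0.05, momentum=0.9),
    "SGD + Nesterov": lambda p: torch.optim.SGD(p, lr=0.05, momentum=0.9, nesterov=True),
    "RMSProp":        lambda p: torch.optim.RMSprop(p, lr=0.01),
    "Adam":           lambda p: torch.optim.Adam(p, lr=0.01),
    "AdamW":          lambda p: torch.optim.AdamW(p, lr=0.01, weight_decay=0.01),
}

for name, make_opt in optimizers.items():
    model = nn.Linear(4, 1)                  # tiny placeholder model
    opt = make_opt(model.parameters())
    loss_fn = nn.MSELoss()                   # could also be nn.HuberLoss() or nn.CrossEntropyLoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"{name:>14}: final loss = {loss.item():.4f}")
```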

Lecture 04: Regularization and Batch Normalization
  • Overfitting vs. Underfitting
  • L1 (Lasso) & L2 (Ridge) regularization, Dropout, Early stopping
  • Batch Normalization
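
For a sense of how these techniques show up in practice, here is a hedged PyTorch sketch of a small classifier that combines Dropout, Batch Normalization, and L2 regularization via weight decay; the layer sizes and data are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# Dropout and BatchNorm are layers inside the model; L2 regularization
# is applied through the optimizer's weight_decay argument.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),   # normalizes activations across the batch
    nn.ReLU(),
    nn.Dropout(p=0.5),    # randomly zeroes activations during training
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

model.train()   # Dropout/BatchNorm behave differently in train vs. eval mode
x = torch.randn(32, 20)
y = torch.randint(0, 2, (32,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()

model.eval()    # switch off Dropout and use running BatchNorm statistics
```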

Lecture 05: Advanced Neural Networks
  • Convolutional Neural Networks
  • Popular CNN architectures
  • Residual connections
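
A minimal sketch, assuming PyTorch, of a convolutional residual block of the kind this lecture discusses; the channel counts and input size are illustrative.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip (residual) connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # add the input back in: the residual connection

block = ResidualBlock(16)
print(block(torch.randn(1, 16, 32, 32)).shape)   # torch.Size([1, 16, 32, 32])
```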

Lecture 06: Recurrent Neural Networks & Long Short-Term Memory
  • How RNNs process sequential data
  • LSTMs for handling long-range dependencies
  • Gated Recurrent Unit (GRU)
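
As an assumed illustration (not course code), here is a short PyTorch sketch that runs a batch of sequences through an LSTM and reads out the final hidden state; the sequence length, feature size, and classification head are placeholders.

```python
import torch
import torch.nn as nn

# A batch of 8 sequences, each 20 steps long with 10 features per step.
x = torch.randn(8, 20, 10)

lstm = nn.LSTM(input_size=10, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)   # e.g. a 2-class sequence classifier

outputs, (h_n, c_n) = lstm(x)   # outputs: every time step; h_n/c_n: final states
logits = head(h_n[-1])          # use the last layer's final hidden state
print(outputs.shape, logits.shape)   # torch.Size([8, 20, 32]) torch.Size([8, 2])
```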

Lecture 07: Transformers & Attention Mechanism
  • Key concepts
  • Self-Attention & Multi-Head Attention
  • Transformer architecture
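
To ground the attention mechanism, here is a minimal sketch of scaled dot-product self-attention in PyTorch; a full Transformer wraps this core in multi-head projections, residual connections, and feedforward layers. The tensor shapes are illustrative.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # similarity of each query to each key
    weights = torch.softmax(scores, dim=-1)             # attention weights sum to 1 per query
    return weights @ v

# Self-attention: queries, keys, and values all come from the same sequence.
x = torch.randn(2, 5, 16)        # (batch, sequence length, embedding dim)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)                 # torch.Size([2, 5, 16])
```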

Lecture 08: Generative Models
  • Introduction to generative models
  • GAN architecture
  • Popular GAN variants
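
A heavily simplified sketch of the GAN setup, assuming PyTorch: a generator maps noise to samples and a discriminator scores real vs. fake, each trained against the other. The networks, toy data, and hyperparameters below are placeholders; real GAN training involves proper data loading and much more careful tuning.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))       # noise -> fake sample
discriminator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # sample -> real/fake logit

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 2) + 3.0        # toy "real" data: a shifted Gaussian

for step in range(100):
    noise = torch.randn(32, 16)
    fake = generator(noise)

    # Discriminator step: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```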

Lecture 09: Hyperparameter Tuning
  • Choosing the best hyperparameters
  • Hyperparameter optimization methods
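
As one concrete optimization method, here is a hedged sketch of random search over a small hyperparameter space; `train_and_evaluate` and the search space are hypothetical stand-ins for whatever training and validation routine you are tuning.

```python
import random

search_space = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [16, 32, 64, 128],
    "dropout": [0.0, 0.25, 0.5],
}

def train_and_evaluate(config):
    # Hypothetical placeholder: train a model with `config` and
    # return its validation score. Replace with your own routine.
    return random.random()

best_config, best_score = None, float("-inf")
for trial in range(20):
    # Sample one value per hyperparameter and keep the best-scoring trial.
    config = {name: random.choice(values) for name, values in search_space.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print("best config:", best_config, "score:", round(best_score, 3))
```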

V. Acknowledgements

This course is inspired by the collective knowledge and contributions of the deep learning community. I would like to extend my gratitude to the researchers, educators, and institutions whose work has shaped modern AI, including Stanford CS230 - Deep Learning (Andrew Ng) and key research papers on neural networks, Transformers, and generative models.