Deep Neural Networks
Analyze the key computations underlying deep learning, then use them to build and train deep neural networks for computer vision tasks.
Learning Objectives
- Describe the successive block structure of a deep neural network
- Build a deep L-layer neural network
- Analyze matrix and vector dimensions to check neural network implementations
- Use a cache to pass information from forward to backpropagation
- Explain the role of hyperparameters in deep learning
- Build a 2-layer neural network
Deep Neural Network
Deep L-layer Neural Network
A deep neural network stacks many hidden layers; a shallow neural network has only one or two. A network with L layers is called an L-layer neural network.
Forward Propagation in a Deep Network
The layer-by-layer formulas are the same as in the shallow case: Z^[l] = W^[l] A^[l-1] + b^[l] and A^[l] = g^[l](Z^[l]), with A^[0] = X. The forward pass simply repeats these two steps for l = 1, ..., L.
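The repeated two-step computation above can be sketched in NumPy. This is a minimal illustration, not the course's assignment code: the `parameters` dictionary layout (`"W1"`, `"b1"`, ...) and the choice of ReLU hidden activations with a sigmoid output are assumptions for the example.

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def forward_propagation(X, parameters, L):
    # A[0] = X; each layer computes Z[l] = W[l] A[l-1] + b[l], then A[l] = g(Z[l]).
    A = X
    caches = []
    for l in range(1, L + 1):
        W = parameters["W" + str(l)]
        b = parameters["b" + str(l)]
        Z = W @ A + b                      # linear step
        A_prev = A
        A = relu(Z) if l < L else sigmoid(Z)  # ReLU hidden layers, sigmoid output
        caches.append((A_prev, W, b, Z))   # cached for backpropagation
    return A, caches
```

The cache stored at each layer is exactly what the backward pass will need later, which is why it is built up during the forward loop.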
Getting your Matrix Dimensions Right
The weight matrix W^[l] has shape (n^[l], n^[l-1]): its rows count the neurons in the current layer and its columns the neurons in the previous layer. The bias b^[l] has shape (n^[l], 1) and broadcasts across the m examples, so Z^[l] and A^[l] both have shape (n^[l], m).
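A quick way to apply this rule in practice is to assert the expected shapes against a list of layer sizes. The `initialize_parameters` helper and the `layer_dims` convention (`[n_x, n_1, ..., n_L]`) here are illustrative assumptions, not code from the course:

```python
import numpy as np

def initialize_parameters(layer_dims, seed=0):
    # layer_dims = [n_x, n_1, ..., n_L]; small random weights, zero biases.
    rng = np.random.default_rng(seed)
    parameters = {}
    for l in range(1, len(layer_dims)):
        parameters["W" + str(l)] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

def check_dimensions(parameters, layer_dims):
    # W[l] must be (n[l], n[l-1]); b[l] must be (n[l], 1).
    for l in range(1, len(layer_dims)):
        W = parameters["W" + str(l)]
        b = parameters["b" + str(l)]
        assert W.shape == (layer_dims[l], layer_dims[l - 1]), f"W{l} has shape {W.shape}"
        assert b.shape == (layer_dims[l], 1), f"b{l} has shape {b.shape}"
```

Running a check like this after initialization catches most shape bugs before they surface as cryptic broadcasting errors deep in training.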
Why Deep Representations?
The earlier layers of a deep network detect low-level (basic) features like edges, while later layers compose them into high-level (complex) features like faces or whole objects.
A result from circuit theory offers another intuition: some functions, such as the XOR (parity) of many inputs, can be computed by a deep network using relatively few hidden units per layer, whereas a shallow network computing the same function needs exponentially many hidden units.
Building Blocks of Deep Neural Networks
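The basic building block pairs a forward step with its matching backward step, with a cache passing information between them (see the learning objective above). A minimal sketch of one layer's block, assuming ReLU/sigmoid activations and a cache of (A_prev, W, b, Z); the function names are illustrative, not the assignment's exact API:

```python
import numpy as np

def linear_activation_forward(A_prev, W, b, activation="relu"):
    # Forward block: linear step, then activation; cache what backprop needs.
    Z = W @ A_prev + b
    A = np.maximum(0, Z) if activation == "relu" else 1 / (1 + np.exp(-Z))
    cache = (A_prev, W, b, Z)
    return A, cache

def linear_activation_backward(dA, cache, activation="relu"):
    # Backward block: use the cache to compute dA_prev, dW, db from dA.
    A_prev, W, b, Z = cache
    m = A_prev.shape[1]
    if activation == "relu":
        dZ = dA * (Z > 0)                 # ReLU derivative is 0/1
    else:
        s = 1 / (1 + np.exp(-Z))
        dZ = dA * s * (1 - s)             # sigmoid derivative
    dW = (dZ @ A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ
    return dA_prev, dW, db
```

Stacking L of these blocks, forward left-to-right and backward right-to-left, is the whole training loop's core: each forward block writes a cache, and the corresponding backward block reads it.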
All the information provided is based on the Deep Learning Specialization | Coursera from DeepLearning.AI