
AI24

Neural Networks and Deep Learning (11): Deep Neural Networks, Quiz. Q1: What is stored in the "cache" during forward propagation for later use in backward propagation? ($W^{[l]}$ / $Z^{[l]}$ / $b^{[l]}$ / $A^{[l]}$) Answer: 2, $Z^{[l]}$. Yes, this value is useful in the calculation of $dW^{[l]}$ in the backward propagation. Q2: We use the "cache" in implementing forward and backward propagation to pass useful values to the next layer in the forward propagation. Tr.. 2024. 12. 4.
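As a minimal sketch of the cache idea from the quiz above (the helper names `linear_forward` and `linear_backward` are my own, not the course's exact API): forward propagation stores $Z^{[l]}$ together with $A^{[l-1]}$ in a cache so that backward propagation can compute $dW^{[l]}$ without recomputing them.

```python
import numpy as np

def linear_forward(A_prev, W, b):
    # Z[l] = W[l] A[l-1] + b[l]; cache what backprop will need later.
    Z = W @ A_prev + b
    cache = (A_prev, W, Z)
    return Z, cache

def linear_backward(dZ, cache):
    # Reuse the cached values: dW[l] = (1/m) dZ[l] A[l-1]^T.
    A_prev, W, Z = cache
    m = A_prev.shape[1]
    dW = (dZ @ A_prev.T) / m
    db = dZ.sum(axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ  # gradient handed back to the previous layer
    return dA_prev, dW, db
```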
Neural Networks and Deep Learning (10): Deep Neural Networks. Deep Neural Network; Forward and Backward Propagation; Optional Reading: Feedforward Neural Networks in Depth (Deep Learning Specialization / Deep Learning Resources, DeepLearning.AI). Parameters vs Hyperparameters: parameters are the weights and biases, the values a deep neural network optimizes to get close to the answer. Parameters are somethin.. 2024. 11. 28.
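To make the parameters-vs-hyperparameters distinction in this excerpt concrete, here is a hedged sketch (the variable names and sizes are placeholders of my own): the parameters $W$ and $b$ are what gradient descent updates, while the hyperparameters control how that learning runs.

```python
import numpy as np

# Hyperparameters: chosen by us before training, not learned.
learning_rate = 0.01
num_iterations = 1000
layer_dims = [4, 3, 1]  # number of units per layer

# Parameters: learned by the network through gradient descent.
W1 = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
b1 = np.zeros((layer_dims[1], 1))

# One update step: a hyperparameter (learning_rate) moves the parameters.
# W1 = W1 - learning_rate * dW1
```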
Neural Networks and Deep Learning (9): Deep Neural Networks. Analyze the key computations underlying deep learning, then use them to build and train deep neural networks for computer vision tasks. Learning Objectives: describe the successive block structure of a deep neural network; build a deep L-layer neural network; analyze matrix and vector dimensions to check neural network implementations; use a cache to pass information from forward .. 2024. 11. 27.
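One objective listed above is dimension analysis; a small sketch of that check, assuming the course's convention that layer $l$ has $n^{[l]}$ units: $W^{[l]}$ must have shape $(n^{[l]}, n^{[l-1]})$ and $b^{[l]}$ shape $(n^{[l]}, 1)$.

```python
import numpy as np

layer_dims = [5, 4, 3, 1]  # n[0]..n[3] for an example L-layer network

params = {}
for l in range(1, len(layer_dims)):
    params[f"W{l}"] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
    params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    # Dimension check: catches most wiring bugs before training starts.
    assert params[f"W{l}"].shape == (layer_dims[l], layer_dims[l - 1])
    assert params[f"b{l}"].shape == (layer_dims[l], 1)
```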
Neural Networks and Deep Learning (8): Shallow Neural Networks, Quiz. Q1: Which of the following is true? (Check all that apply.) $a^{[2]}$ denotes the activation vector of the 2nd layer. $a^{[2](12)}$ denotes the activation vector of the 2nd layer for the 12th training example. $a^{[2](12)}$ denotes the activation vector of the 12th layer on the 2nd training example. $a^{[2]}_4$ is the activation output of the 2nd layer for the 4th training e.. 2024. 11. 26.
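The notation in this quiz maps directly onto array indexing; a small illustrative sketch (the array `A2` is my own stand-in for the layer-2 activations, stored as shape (units, examples)): column $i-1$ is $a^{[2](i)}$ and row $k-1$ is $a^{[2]}_k$.

```python
import numpy as np

# A2 holds the layer-2 activations for all m training examples at once:
# shape (number of units in layer 2, m).
A2 = np.random.rand(4, 20)

a_2_12 = A2[:, 11]  # a^[2](12): layer-2 activation vector, 12th example
a_2_4 = A2[3, :]    # a^[2]_4: output of the 4th unit of layer 2, all examples
```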
Neural Networks and Deep Learning (7): Shallow Neural Networks. Shallow Neural Network; Activation Functions. So far we've been using sigmoid as our activation function, but we can use other activation functions such as the hyperbolic tangent, aka tanh. Tanh falls within the interval between -1 and 1 and, when plotted, looks like a shifted sigmoid function. Using tanh can center your data to have a mean of 0 rather than 0.5 when using a sigm.. 2024. 11. 25.
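A quick numerical sketch of the centering claim above (random standard-normal inputs, a setup of my own choosing): tanh outputs straddle 0, while sigmoid outputs cluster around 0.5.

```python
import numpy as np

z = np.random.randn(100_000)

sigmoid = 1 / (1 + np.exp(-z))  # range (0, 1), mean near 0.5
tanh = np.tanh(z)               # range (-1, 1), mean near 0

print(f"sigmoid mean: {sigmoid.mean():.3f}")  # ~0.5
print(f"tanh mean:    {tanh.mean():.3f}")     # ~0.0
```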
Neural Networks and Deep Learning (6): Shallow Neural Networks. Build a neural network with one hidden layer, using forward propagation and backpropagation. Learning Objectives: describe hidden units and hidden layers; use units with a non-linear activation function, such as tanh; implement forward and backward propagation; apply random initialization to your neural network; increase fluency in Deep Learning notations and Neural Network Repre.. 2024. 11. 23.
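The random-initialization objective above can be sketched in a few lines (the layer sizes are arbitrary placeholders): weights are drawn randomly and scaled small to break symmetry, while biases can safely start at zero.

```python
import numpy as np

n_x, n_h, n_y = 2, 4, 1  # input, hidden, output sizes (placeholders)

# Random init breaks symmetry: all-zero weights would make every hidden
# unit compute the same function and receive the same gradient.
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
```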