

Artificial Intelligence

Neural Networks and Deep Learning (10) Deep Neural Networks
Deep Neural Network: Forward and Backward Propagation. Optional Reading: Feedforward Neural Networks in Depth (Deep Learning Specialization / Deep Learning Resources, DeepLearning.AI).
Parameters vs Hyperparameters: parameters are the weights and biases, the values a deep neural network optimizes to get close to the correct answer. Parameters are something we…
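A minimal sketch of the distinction this excerpt draws, assuming a toy one-layer logistic model: the parameters (W, b) are updated by gradient descent, while the hyperparameters (here a hypothetical learning rate and iteration count) are chosen by us and never learned.

```python
import numpy as np

# Hyperparameters: chosen by us before training, never learned.
learning_rate = 0.01
num_iterations = 100

# Parameters: weights and biases, optimized during training.
rng = np.random.default_rng(0)
W = rng.standard_normal((1, 3)) * 0.01
b = np.zeros((1, 1))

X = rng.standard_normal((3, 5))                      # 5 toy examples, 3 features
Y = (X.sum(axis=0, keepdims=True) > 0).astype(float)  # toy labels

for _ in range(num_iterations):
    Z = W @ X + b                        # forward pass
    A = 1 / (1 + np.exp(-Z))             # sigmoid activation
    dZ = A - Y                           # gradient of the logistic loss w.r.t. Z
    dW = dZ @ X.T / X.shape[1]
    db = dZ.mean(axis=1, keepdims=True)
    W -= learning_rate * dW              # only the parameters change here
    b -= learning_rate * db
```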
Neural Networks and Deep Learning (9) Deep Neural Networks
Analyze the key computations underlying deep learning, then use them to build and train deep neural networks for computer vision tasks.
Learning Objectives
- Describe the successive block structure of a deep neural network
- Build a deep L-layer neural network
- Analyze matrix and vector dimensions to check neural network implementations
- Use a cache to pass information from forward to b…
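One objective above is checking matrix and vector dimensions; a minimal sketch, assuming the course's usual convention that layer $l$ has weights $W^{[l]}$ of shape $(n^{[l]}, n^{[l-1]})$ and biases $b^{[l]}$ of shape $(n^{[l]}, 1)$, with hypothetical layer sizes:

```python
import numpy as np

# Hypothetical layer sizes: 4 inputs, two hidden layers, one output unit.
layer_dims = [4, 5, 3, 1]

rng = np.random.default_rng(0)
params = {}
for l in range(1, len(layer_dims)):
    params[f"W{l}"] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
    params[f"b{l}"] = np.zeros((layer_dims[l], 1))

# Dimension check: W[l] must be (n[l], n[l-1]) and b[l] must be (n[l], 1).
for l in range(1, len(layer_dims)):
    assert params[f"W{l}"].shape == (layer_dims[l], layer_dims[l - 1])
    assert params[f"b{l}"].shape == (layer_dims[l], 1)
```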
Neural Networks and Deep Learning (8) Shallow Neural Networks
Quiz
Q1. Which of the following is true? (Check all that apply.)
- $a^{[2]}$ denotes the activation vector of the 2nd layer.
- $a^{[2](12)}$ denotes the activation vector of the 2nd layer for the 12th training example.
- $a^{[2](12)}$ denotes the activation vector of the 12th layer on the 2nd training example.
- $a^{[2]}_4$ is the activation output of the 2nd layer for the 4th training examp…
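The indexing convention this quiz is testing, reconstructed from the choices above (square brackets for layers, parentheses for training examples, subscripts for units):

```latex
a^{[l]}    \text{: activation vector of layer } l \\
a^{[l](i)} \text{: activation vector of layer } l \text{ for training example } i \\
a^{[l]}_j  \text{: activation of unit } j \text{ in layer } l
```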
Neural Networks and Deep Learning (7) Shallow Neural Networks
Shallow Neural Network: Activation Functions
So far we have been using sigmoid as our activation function, but we can use other activation functions, such as the hyperbolic tangent, aka tanh. Tanh's output falls within the interval between -1 and 1 and, when plotted, looks like a shifted sigmoid function. Using tanh can center your data to have a mean of 0, rather than 0.5 when using a sigmoid…
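A minimal numpy sketch of the claim in this excerpt: tanh is exactly a shifted, rescaled sigmoid ($\tanh(z) = 2\sigma(2z) - 1$), so its outputs center around 0 where sigmoid's center around 0.5.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.linspace(-4, 4, 9)

# tanh is a shifted, rescaled sigmoid: tanh(z) = 2 * sigmoid(2z) - 1.
assert np.allclose(np.tanh(z), 2 * sigmoid(2 * z) - 1)

# tanh outputs center around 0; sigmoid outputs center around 0.5.
print(np.tanh(z).mean())   # ~0.0 for inputs symmetric about zero
print(sigmoid(z).mean())   # ~0.5 for inputs symmetric about zero
```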
Neural Networks and Deep Learning (6) Shallow Neural Networks
Build a neural network with one hidden layer, using forward propagation and backpropagation.
Learning Objectives
- Describe hidden units and hidden layers
- Use units with a non-linear activation function, such as tanh
- Implement forward and backward propagation
- Apply random initialization to your neural network
- Increase fluency in Deep Learning notations and Neural Network Represent…
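One objective above is random initialization; a minimal sketch, with hypothetical layer sizes, of why the weights of a one-hidden-layer network are drawn randomly and scaled small while the biases can start at zero:

```python
import numpy as np

n_x, n_h, n_y = 2, 4, 1   # hypothetical input, hidden, and output sizes
rng = np.random.default_rng(0)

# Weights must be random to break symmetry: if every hidden unit started
# with identical weights, they would all compute (and learn) the same thing.
W1 = rng.standard_normal((n_h, n_x)) * 0.01   # small scale keeps tanh unsaturated
b1 = np.zeros((n_h, 1))                        # biases can safely start at zero
W2 = rng.standard_normal((n_y, n_h)) * 0.01
b2 = np.zeros((n_y, 1))
```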
Neural Networks and Deep Learning (5) Neural Networks Basics
Quiz
Q1. In logistic regression, given $\mathbf{x}$ and parameters $w \in \mathbb{R}^{n_x}$, $b \in \mathbb{R}$, which of the following best expresses what we want $\hat y$ to tell us?
Choices
1. $\sigma(W \mathbf{x} + b)$
2. $P(y = \hat y \mid \mathbf{x})$
3. $\sigma(W \mathbf{x})$
4. $P(y = 1 \mid \mathbf{x})$
Hint: remember that we are interested in the probability that $y = 1$.
Answer: 4. Yes, we want the output $\hat y$ to tell us…
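A minimal sketch of what the answer means in code: the model computes $\hat y = \sigma(w^T \mathbf{x} + b)$, and we read that number as an estimate of $P(y = 1 \mid \mathbf{x})$. The function and variable names here are hypothetical.

```python
import numpy as np

def predict_proba(w, b, x):
    """Logistic regression output: an estimate of P(y = 1 | x)."""
    z = np.dot(w, x) + b
    return 1 / (1 + np.exp(-z))   # sigmoid squashes z into (0, 1)

w = np.array([0.5, -0.25])
b = 0.1
x = np.array([2.0, 1.0])
print(predict_proba(w, b, x))     # ~0.70: a probability, not a raw score
```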
Neural Networks and Deep Learning (4) Neural Networks Basics
Python and Vectorization: Vectorization
Vectorization is a way to apply a function in parallel across all examples simultaneously. In Python, vectorization avoids explicit for-loops and reduces computation time.
True or false: vectorization cannot be done without a GPU. Answer: False.
More Vectorization Examples: we can apply any function to matrices or vectors. In our example, we a…
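A minimal sketch of the point, assuming numpy on an ordinary CPU (no GPU involved): the same dot product computed with an explicit for-loop and with one vectorized call.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000_000)
b = rng.standard_normal(1_000_000)

# Explicit for-loop: one multiply-add per Python iteration.
start = time.time()
total = 0.0
for i in range(len(a)):
    total += a[i] * b[i]
loop_time = time.time() - start

# Vectorized: the whole dot product in a single call, on a plain CPU.
start = time.time()
total_vec = np.dot(a, b)
vec_time = time.time() - start

print(loop_time, vec_time)        # the vectorized version is much faster
assert np.isclose(total, total_vec)
```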
Neural Networks and Deep Learning (3) Neural Networks Basics
Logistic Regression as a Neural Network: Computation Graph
A computation graph organizes a forward pass (propagation) that computes the function of a neural network, followed by a backward pass (propagation) that calculates its gradients. This is like looking at a math formula and calculating it step by step to get the answer.
One step of ________ propagation on…
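A minimal sketch of a computation graph, using the hypothetical function $J = 3(a + bc)$: the forward pass computes $J$ one intermediate step at a time, and the backward pass applies the chain rule through the same steps in reverse.

```python
# Forward pass: compute J = 3 * (a + b * c) one step at a time.
a, b, c = 5.0, 3.0, 2.0
u = b * c        # u = 6
v = a + u        # v = 11
J = 3 * v        # J = 33

# Backward pass: chain rule, step by step in reverse.
dJ_dv = 3.0              # dJ/dv
dJ_du = dJ_dv * 1.0      # dv/du = 1
dJ_da = dJ_dv * 1.0      # dv/da = 1
dJ_db = dJ_du * c        # du/db = c
dJ_dc = dJ_du * b        # du/dc = b
print(dJ_da, dJ_db, dJ_dc)   # 3.0, 6.0, 9.0
```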
