

Artificial Intelligence

2nd K-AI Manufacturing Competition (3) Model Evaluation and Results
Loss Function and Training Progress. Loss Function: The model was trained using the binary cross-entropy loss function, which is well suited to binary classification tasks. It measures the divergence between the predicted probabilities and the true labels, guiding the model's optimization. Binary cross-entropy ensures that the model penalizes incorrect predictions prop..
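The excerpt above mentions binary cross-entropy; as a minimal NumPy sketch of the loss it describes (the function name and sample values are illustrative, not taken from the post):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy between true labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.7])
print(binary_cross_entropy(y_true, y_pred))  # small positive loss for good predictions
```

Confident wrong predictions (e.g. predicting 0.9 for a true label of 0) drive the loss sharply up, which is the penalization behavior the excerpt refers to.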
2nd K-AI Manufacturing Competition (2) Analysis Model Development
AI Analysis Model Selection: GRU. Recurrent Neural Networks (RNNs) and LSTMs: Recurrent Neural Networks (RNNs) are highly effective for sequential data processing due to their ability to capture temporal dependencies. However, traditional RNNs struggle to retain long-term dependencies, often suffering from the vanishing gradient problem. To address this, Long Shor..
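For the GRU the excerpt selects, a single cell step can be sketched in NumPy using the standard gate equations (weight names and dimensions here are my own illustration, not the post's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state, new hidden state."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate: how much to refresh
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate: how much past to forget
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate hidden state
    return (1 - z) * h + z * h_tilde           # interpolate old and candidate state

n_in, n_h = 3, 2
rng = np.random.default_rng(0)
params = [rng.standard_normal(s) * 0.1 for s in [(n_h, n_in), (n_h, n_h)] * 3]
h = gru_cell(rng.standard_normal(n_in), np.zeros(n_h), *params)
print(h.shape)  # (2,)
```

The gating lets gradients flow through the `(1 - z) * h` path, which is why GRUs mitigate the vanishing-gradient problem the excerpt raises.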
2nd K-AI Manufacturing Competition (1) Manufacturing Data Definition and Processing
Overview of Manufacturing Data Collection: The dataset analyzed in this study was collected from the melting and mixing process during powdered cream production. The data was obtained via PLCs and a Database Management System (DBMS) with a collection cycle of 6-second intervals. The data collection period spans approximately two months, from March 4..
2nd K-AI Manufacturing Competition (0) Overview
KAMP, an AI manufacturing platform managed by the Ministry of SMEs and Startups in the Republic of Korea, held a competition. The goal was to define and solve a problem based on an anonymized random dataset, and we received a melting-tank dataset from the food manufacturing industry. Analysis Background: Overview of the Process and Equipment. The dataset analyzed in this study originates fro..
Backpropagation
Backpropagation is one of the algorithms used to train artificial neural networks. A key goal of training is to reduce the difference (error) between the predicted values and the true values; backpropagation propagates this error back to every weight and updates it, ultimately reducing the error. How are a node's variables, such as weights and biases, updated? And by how much should each variable be updated individually? The chain rule answers both questions. Chain rule: given functions $f$ and $g$, if both are differentiable and $F = f(g(x)) = f \circ g$ is their composite, then $F$ is differentiable, and $F'(x)=f..
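The chain rule the excerpt invokes can be checked numerically; a small sketch with an illustrative composite $F(x) = f(g(x))$ where $f(u) = u^2$ and $g(x) = 3x + 1$ (my choice of functions, not from the post):

```python
# Chain rule: F'(x) = f'(g(x)) * g'(x) = 2 * (3x + 1) * 3
def F(x):
    return (3 * x + 1) ** 2

def F_prime(x):
    return 2 * (3 * x + 1) * 3  # outer derivative times inner derivative

# Compare the analytic derivative against a central finite difference
x, h = 2.0, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)
print(F_prime(x), abs(numeric - F_prime(x)) < 1e-4)  # 42.0 True
```

Backpropagation applies this same decomposition layer by layer: each node multiplies the incoming gradient by its own local derivative, so every weight and bias receives its individual share of the error.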
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization, and Optimization (1) Practical Aspects of Deep Learning
Regularizing Your Neural Network. Clarification about the upcoming Regularization video: please note that in the next video (Regularization) at 5:45, the Frobenius norm formula should be the following: $||w^{[l]}||^2 = \sum_{i=1}^{n^{[l]}} \sum_{j=1}^{n^{[l-1]}} (w_{i,j}^{[l]})^2$. The limit of summation of $i$ should be from 1 to $n^{[l]}$; the limit of summation of $j$ should be from 1..
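The corrected Frobenius-norm formula in the excerpt is just the sum of the squared entries of the layer's weight matrix; a quick NumPy sketch with a toy matrix (values are illustrative):

```python
import numpy as np

# For W of shape (n^[l], n^[l-1]): ||W||_F^2 = sum over i, j of W[i, j]^2
W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
frob_sq = np.sum(W ** 2)  # 1 + 4 + 9 + 16
print(frob_sq)  # 30.0
```

The summation limits in the excerpt simply say that $i$ runs over the layer's output units and $j$ over its input units, i.e. over every entry of the matrix.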
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization, and Optimization (0) About this Course
In the second course of the Deep Learning Specialization, you will open the deep learning black box to systematically understand the processes that drive performance and generate good results. By the end, you will learn best practices for training and developing test sets and analyzing bias/variance for building deep learning applications; be able to use standard neural network techn..
Neural Networks and Deep Learning (11) Deep Neural Networks
Quiz. Q1: What is stored in the 'cache' during forward propagation for later use in backward propagation? (1) $W^{[l]}$ (2) $Z^{[l]}$ (3) $b^{[l]}$ (4) $A^{[l]}$. Answer: 2. Yes, this value is useful in the calculation of $dW^{[l]}$ in backward propagation. Q2: We use the "cache" in implementing forward and backward propagation to pass useful values to the next layer in forward propagation. True/F..
