MLB2024 | Neural Networks and Deep Learning by Coursera
Cross Entropy Loss function
The loss is L(ŷ, y) = -(y log ŷ + (1 - y) log(1 - ŷ)), which splits into two cases:
If y = 1: L = -log ŷ, so making L small means making ŷ large; since ŷ comes from a sigmoid, ŷ ≈ 1.
If y = 0: L = -log(1 - ŷ), so making L small means making 1 - ŷ large, i.e. making ŷ small; with a sigmoid output, ŷ ≈ 0.
∴ Given the ground-truth label, the loss function is constructed so that minimizing it drives the parameters to produce predictions matching the ground truth.
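The two cases above can be checked numerically. The sketch below (the function name is my own) implements the single-example cross-entropy loss and shows that the loss shrinks as ŷ moves toward the label:

```python
import math

def cross_entropy_loss(y_hat, y):
    """Single-example loss: L(y_hat, y) = -(y*log(y_hat) + (1-y)*log(1-y_hat))."""
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# y = 1: loss decreases as y_hat -> 1
print(cross_entropy_loss(0.99, 1))  # small
print(cross_entropy_loss(0.01, 1))  # large

# y = 0: loss decreases as y_hat -> 0
print(cross_entropy_loss(0.01, 0))  # small
print(cross_entropy_loss(0.99, 0))  # large
```

Note that only one of the two terms is active for a given label, which is exactly why the single formula covers both cases.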
Loss function: measures the error on a single training example.
Cost function: measures how well the parameters fit the whole training set; it is the average of the loss over all training examples.
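The loss/cost distinction can be made concrete. A minimal sketch (function names are my own) where the cost J is the mean of the per-example losses:

```python
import math

def cross_entropy_loss(y_hat, y):
    # Per-example loss from the section above.
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

def cost(y_hats, ys):
    """Cost J = (1/m) * sum of per-example losses over m training examples."""
    m = len(ys)
    return sum(cross_entropy_loss(y_hat, y) for y_hat, y in zip(y_hats, ys)) / m

# Two examples: confident-and-correct predictions give a low cost.
print(cost([0.9, 0.1], [1, 0]))
```

Because the cost is an average over examples, its gradient with respect to the parameters is likewise the average of the per-example gradients, which is what gradient descent uses.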