  • early stopping
    ML/DeepLearning 2025. 4. 7. 15:55

    Source: gpt

     

    Alright, let me give you a simple, intuitive explanation of **early stopping**, a technique you'll see all the time in deep learning!

    ---

    ### 🛑 What is Early Stopping?

    > **A technique that stops training early once the model stops improving.**

    ---

    ### 🤔 Why use it?
    - **Prevents overfitting**:  
      train for too long and the model fits the training data too closely,  
      so validation performance can actually **get worse**.

    - **Saves training time**:  
      once the best performance has been reached, **further epochs add nothing** → no wasted compute.

    ---

    ### 💡 How does it work?
    The usual logic looks like this:

    1. After every epoch, check the **validation loss (or accuracy)**.
    2. If the metric hasn't improved for `patience` epochs in a row, **stop training**!

    Example:

    ```python
    # Sketch: train(), validate(), and save_model() stand in for your own
    # training, validation, and checkpointing routines.
    patience = 3
    best_loss = float('inf')
    counter = 0

    for epoch in range(num_epochs):
        train()  # one epoch of training
        val_loss = validate()

        if val_loss < best_loss:
            best_loss = val_loss
            counter = 0  # reset the counter whenever performance improves
            save_model()  # keep the best checkpoint so far
        else:
            counter += 1
            if counter >= patience:
                print("Early stopping!")
                break
    ```

    ---

    ### 🔧 Typical settings:
    | Parameter | Description |
    |-----------|-------------|
    | `patience` | how many non-improving epochs to tolerate before stopping |
    | `monitor` | the metric to watch (e.g., val loss, val accuracy) |
    | `mode` | `min` (when monitoring a loss) or `max` (when monitoring accuracy) |
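    The `patience` / `monitor` / `mode` settings above can be folded into one small reusable helper. A minimal sketch (the `EarlyStopping` class name and its `step` API are my own invention for illustration, not a specific library's interface):

    ```python
    class EarlyStopping:
        """Stop training when the monitored metric stops improving."""

        def __init__(self, patience=3, mode="min", min_delta=0.0):
            assert mode in ("min", "max")
            self.patience = patience
            self.mode = mode
            self.min_delta = min_delta
            self.best = float("inf") if mode == "min" else float("-inf")
            self.counter = 0
            self.should_stop = False

        def step(self, value):
            """Report this epoch's metric; returns True when training should stop."""
            improved = (
                value < self.best - self.min_delta
                if self.mode == "min"
                else value > self.best + self.min_delta
            )
            if improved:
                self.best = value
                self.counter = 0
            else:
                self.counter += 1
                if self.counter >= self.patience:
                    self.should_stop = True
            return self.should_stop
    ```

    With `mode="min"` you would feed it the validation loss each epoch; with `mode="max"`, the validation accuracy.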

    ---

    ### 📈 Visually:
    At first the validation loss keeps falling,  
    but at some point it stops improving and starts climbing back up → early stopping halts training right there!
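    To put some (made-up) numbers on that picture: with `patience = 2`, the counter logic from earlier stops training two epochs after the loss bottoms out:

    ```python
    # Synthetic validation-loss curve: falls, bottoms out at epoch 3, then rises.
    val_losses = [1.00, 0.70, 0.55, 0.50, 0.52, 0.56, 0.61]

    patience = 2
    best, counter, stop_epoch = float("inf"), 0, None
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, counter = loss, 0
        else:
            counter += 1
            if counter >= patience:
                stop_epoch = epoch  # stopping fires two epochs past the minimum
                break

    print(best, stop_epoch)  # best loss 0.50 (epoch 3); stopped at epoch 5
    ```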

    ---

    If you want, I can also walk you through applying this in PyTorch code. Want to give it a try?

     

     
