AdaBoost
Adaptive boosting: an ensemble method that trains weak learners sequentially, reweighting the data so each new learner focuses on the examples with the largest errors.
When to use:
- You have weak base learners (e.g., decision stumps)
- You want a classic, well-understood ensemble method
- You want something simpler than gradient boosting
Strengths: Simple, less prone to overfitting, interpretable.
Weaknesses: Sensitive to noise and outliers; slower than modern boosting methods.
Model Parameters
N Estimators (default: 50) Number of weak learners.
Learning Rate (default: 1.0) Shrinks the contribution of each learner.
Loss
- linear: Linear loss (default)
- square: Square loss
- exponential: Exponential loss
Random State (default: 42) Seed for reproducibility.
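The parameters above match scikit-learn's `AdaBoostRegressor`, whose `loss` option takes exactly the `linear`, `square`, and `exponential` values listed. A minimal sketch, assuming scikit-learn is the backing implementation (the dataset here is synthetic and illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor

# Synthetic regression data, purely for illustration
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=42)

model = AdaBoostRegressor(
    n_estimators=50,    # number of weak learners
    learning_rate=1.0,  # shrinks each learner's contribution
    loss="linear",      # alternatives: "square", "exponential"
    random_state=42,    # seed for reproducibility
)
model.fit(X, y)
print(f"Training R^2: {model.score(X, y):.3f}")
```

Lowering `learning_rate` typically requires raising `n_estimators` to reach the same accuracy; the two parameters trade off against each other.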