AdaBoost
Train AdaBoost to predict categorical outcomes
Adaptive boosting that focuses on misclassified examples.
When to use:
- You have weak base learners (e.g., decision stumps)
- You want a classic, well-understood ensemble method
- You prefer something simpler than gradient boosting
Strengths: Simple, less prone to overfitting, interpretable
Weaknesses: Sensitive to noisy data and outliers, slower than modern boosting libraries
Model Parameters
N Estimators (default: 50) Maximum number of weak learners; boosting may stop earlier if a perfect fit is reached.
Learning Rate (default: 1.0) Shrinks the contribution of each classifier; lower values usually need more estimators.
Algorithm
- SAMME.R: Real boosting using predicted class probabilities (default; typically converges faster)
- SAMME: Discrete boosting using hard class predictions (note: recent scikit-learn releases removed SAMME.R in favor of SAMME)
Random State (default: 42) Seed for reproducibility.
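As a minimal sketch of these parameters in use, assuming scikit-learn's `AdaBoostClassifier` and a synthetic dataset (the dataset shape and train/test split below are illustrative choices, not part of this tool's defaults):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem (illustrative only)
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Defaults from the parameter list above; the base learner defaults
# to a decision stump. The `algorithm` argument is omitted because
# recent scikit-learn versions deprecate it.
clf = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=42)
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```

Lowering the learning rate while raising the number of estimators is a common way to trade training time for a smoother fit.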