Gradient Boosting
Train a Gradient Boosting model to predict categorical outcomes
Classic gradient boosting algorithm from scikit-learn.
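As a minimal sketch of how this looks in practice with scikit-learn's GradientBoostingClassifier (the synthetic dataset, split, and random seeds are illustrative assumptions, not part of this guide):

```python
# Minimal sketch: fit a GradientBoostingClassifier on synthetic data.
# Dataset size and random_state values are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = GradientBoostingClassifier(random_state=42)  # library defaults
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```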
When to use:
- You want something similar to XGBoost but simpler
- You don't need cutting-edge performance
- You want more straightforward hyperparameters
Strengths: Good accuracy, interpretable feature importance, built into scikit-learn.
Weaknesses: Slower than XGBoost/LightGBM, fewer features.
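Since interpretable feature importance is listed as a strength, here is a short sketch of reading it off a fitted model; the synthetic data and the top-5 cutoff are assumptions for illustration:

```python
# Sketch: inspect impurity-based feature importances after fitting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# feature_importances_ holds one value per feature, summing to 1.0.
ranked = np.argsort(clf.feature_importances_)[::-1]  # most important first
for idx in ranked[:5]:  # top-5 cutoff is arbitrary
    print(f"feature {idx}: {clf.feature_importances_[idx]:.3f}")
```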
Model Parameters
Similar to XGBoost but with fewer options. Key parameters:
N Estimators (n_estimators), Max Depth (max_depth), Learning Rate (learning_rate), Subsample (subsample), Min Samples Split (min_samples_split), Min Samples Leaf (min_samples_leaf)
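These map directly onto GradientBoostingClassifier's constructor arguments. A hedged sketch; the values below are common starting points (assumptions), not tuned recommendations from this guide:

```python
# Sketch: the key parameters as GradientBoostingClassifier arguments.
# Values are common starting points (assumptions), not recommendations.
from sklearn.ensemble import GradientBoostingClassifier

clf = GradientBoostingClassifier(
    n_estimators=200,      # number of boosting stages (trees)
    max_depth=3,           # depth of each individual tree
    learning_rate=0.1,     # shrinks each tree's contribution
    subsample=0.8,         # fraction of rows per tree; <1.0 adds randomness
    min_samples_split=2,   # min samples needed to split an internal node
    min_samples_leaf=1,    # min samples required at each leaf
    random_state=42,
)
```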