Gradient Boosting
Sequential tree boosting with high accuracy on regression tasks
Gradient Boosting Regressor builds an additive model by fitting each new tree to the residuals of the previous ensemble, minimizing a differentiable loss function.
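The residual-fitting loop described above can be sketched from scratch. This is a minimal illustration with squared-error loss (where the negative gradient is simply the residual), not the platform's actual implementation; the toy data and the use of scikit-learn's DecisionTreeRegressor as the base learner are assumptions for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data (assumed for illustration): y = x^2 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(0, 0.1, size=200)

# Manual boosting loop: each new tree is fit to the residuals of the
# ensemble built so far (the negative gradient of squared-error loss).
learning_rate = 0.1
prediction = np.full_like(y, y.mean())  # stage 0: constant prediction
trees = []
for _ in range(100):
    residuals = y - prediction
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)  # shrunken contribution
    trees.append(tree)

print(np.mean((y - prediction) ** 2))  # training MSE shrinks stage by stage
```

Each iteration reduces the training loss; the learning rate scales how much of each tree's correction is kept.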
When to use:
- High-accuracy regression where training time is acceptable
- Complex nonlinear feature interactions
- Feature importance alongside strong predictions
Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value

Model Settings (set during training, used at inference)
N Estimators (default: 100) Number of boosting stages.
Learning Rate (default: 0.1) Shrinks each tree's contribution. Lower rates paired with more estimators often generalize better.
Max Depth (default: 3) Depth of individual trees. Shallow trees (3–5) are standard.
Subsample (default: 1.0) Fraction of training samples per tree. Values < 1.0 introduce stochastic boosting.
Loss (default: squared_error) Loss function to minimize: squared_error for MSE, absolute_error for MAE, huber for outlier-robust regression.
Inference Settings
No dedicated inference-time settings.
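Since there are no inference-time settings, prediction only requires rows with the same feature columns used in training. A minimal sketch, again assuming a scikit-learn estimator; the input rows here are placeholders.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Train on 4 features (assumed setup for illustration)
X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Inference: input must match the training feature columns;
# output is one continuous value per input row.
new_rows = np.zeros((2, 4))
preds = model.predict(new_rows)
print(preds.shape)  # one prediction per row
```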