Decision Tree
Single interpretable tree-based regressor
A Decision Tree Regressor partitions the feature space into regions and predicts the mean target value within each region. It is highly interpretable but prone to overfitting without depth constraints.
When to use:
- When interpretability is required
- Quick baseline before ensemble methods
- Step-function-like target distributions
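The behavior described above can be sketched with scikit-learn (an assumed backend; this doc does not name the underlying library). A shallow tree recovers a step-function target exactly, which is the case where this model shines as a quick, interpretable baseline:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic step-function target: constant below the threshold, constant above.
# A tree can fit this with a single split, so a shallow depth suffices.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.where(X[:, 0] < 5, 1.0, 3.0)

# Shallow depth keeps the tree interpretable and guards against overfitting
model = DecisionTreeRegressor(max_depth=3, random_state=0)
model.fit(X, y)

preds = model.predict([[2.0], [8.0]])
```

Because the target is a pure step at 5, the fitted tree predicts the two plateau values exactly for inputs well away from the threshold.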
Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value
Model Settings (set during training, used at inference)
Max Depth (default: null — unlimited) Maximum tree depth. Constrain to 3–10 to prevent overfitting.
Min Samples Split (default: 2) Minimum samples to split an internal node.
Min Samples Leaf (default: 1) Minimum samples in a leaf node.
Criterion (default: squared_error) Split quality measure. squared_error minimizes MSE; friedman_mse uses a variance-improvement heuristic.
Max Features (default: null — all) Number of features considered per split.
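The settings above correspond one-to-one to scikit-learn's `DecisionTreeRegressor` constructor parameters (a representative mapping, assuming that backend; `null` in the settings maps to Python's `None`):

```python
from sklearn.tree import DecisionTreeRegressor

# Each keyword mirrors a Model Setting from this page.
model = DecisionTreeRegressor(
    max_depth=5,                # null/None means grow until leaves are pure
    min_samples_split=2,        # minimum samples to split an internal node
    min_samples_leaf=1,         # minimum samples required in a leaf
    criterion="squared_error",  # or "friedman_mse"
    max_features=None,          # None means consider all features per split
)
```

Constraining `max_depth` (e.g. to the 3–10 range recommended above) is usually the single most effective overfitting control.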
Inference Settings
No dedicated inference-time settings. Each input row follows the learned decision path.
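The "learned decision path" is directly inspectable. Assuming a scikit-learn backend, `decision_path` returns the sequence of tree nodes a given input row traverses from root to leaf, which is what makes per-prediction explanations possible:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Tiny illustrative training set (hypothetical data, not from this doc)
X = np.array([[1.0], [2.0], [8.0], [9.0]])
y = np.array([1.0, 1.0, 3.0, 3.0])
model = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)

# decision_path yields a sparse indicator matrix of visited nodes;
# .indices lists the root-to-leaf node ids for this single row.
path = model.decision_path([[8.5]])
node_ids = path.indices

pred = model.predict([[8.5]])[0]
```

No extra inference-time configuration is involved: the traversal is fully determined by the thresholds fixed during training.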