Documentation (English)

XGBoost

Industry-standard gradient boosting that builds trees sequentially, with each new tree correcting the errors of the previous ones.

When to use:

  • Kaggle competitions and production systems
  • Need top performance
  • Have sufficient training data
  • Can tune hyperparameters

Strengths: excellent accuracy, handles missing values natively, built-in regularization, parallel tree construction.

Weaknesses: many hyperparameters, can overfit, requires tuning.

Model Parameters

N Estimators (default: 100) Number of boosting rounds. More rounds usually improve fit, but training is slower and overfitting risk grows.

Max Depth (default: 6) Maximum tree depth. Controls model complexity.

  • 3-5: Shallow, prevents overfitting
  • 6-10: Good default
  • 10+: Deep, captures complex patterns but may overfit

Learning Rate (default: 0.3) Step size shrinkage. Lower = more conservative = needs more trees.

  • 0.01-0.1: Conservative, needs many trees (>500)
  • 0.1-0.3: Balanced
  • 0.3+: Aggressive, fewer trees needed

Subsample (default: 1.0) Fraction of samples for each tree. <1.0 prevents overfitting.

  • 0.5-0.8: Aggressive regularization
  • 0.8-1.0: Standard

Colsample Bytree (default: 1.0) Fraction of features for each tree.

  • 0.5-0.8: Good for many features
  • 0.8-1.0: Standard

Reg Alpha (default: 0) L1 regularization on weights. Higher = more conservative.

Reg Lambda (default: 1) L2 regularization on weights. Higher = more conservative.

Random State (default: 42) Seed for reproducibility.
