Documentation (English)

AdaBoost

Adaptive boosting ensemble that focuses on misclassified examples

AdaBoost sequentially trains weak classifiers (typically shallow trees), with each iteration re-weighting examples that were previously misclassified. This produces a strong classifier from simple base learners.
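The re-weighting loop described above can be sketched in pure Python with depth-1 stumps on a toy 1-D dataset. This is a minimal illustration, not the tool's implementation: the dataset, function names, and exhaustive threshold search are made up for the example, and the weight formula ½·ln((1−err)/err) is the classic binary AdaBoost choice.

```python
import math

# Toy 1-D dataset: an interval pattern no single stump can fit.
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [-1, -1, +1, +1, -1, -1]

def stump_predict(threshold, polarity, x):
    # Depth-1 base learner: predict `polarity` left of the threshold.
    return polarity if x < threshold else -polarity

def train_adaboost(X, y, n_estimators=10, learning_rate=1.0):
    n = len(X)
    w = [1.0 / n] * n                 # uniform example weights
    ensemble = []                     # (alpha, threshold, polarity)
    for _ in range(n_estimators):
        # Exhaustively pick the stump with the lowest weighted error.
        best = None
        for t in sorted(set(X)):
            for pol in (+1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if stump_predict(t, pol, xi) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        if err >= 0.5:
            break                     # no better than chance: stop
        err = max(err, 1e-12)         # guard log(0) on a perfect stump
        # Classic binary AdaBoost weight, shrunk by the learning rate.
        alpha = learning_rate * 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Re-weight: misclassified examples gain mass, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * stump_predict(t, pol, xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
        if err <= 1e-12:
            break                     # perfect fit: early stop
    return ensemble

def predict(ensemble, x):
    # Final prediction is the sign of the weighted vote.
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return +1 if score >= 0 else -1
```

On this toy set three boosting rounds suffice: the first stump misclassifies the positive interval, and the next two rounds correct it via re-weighting.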

When to use:

  • Binary classification where focusing on hard examples pays off; note that AdaBoost is sensitive to label noise and outliers, since misclassified examples are up-weighted every round
  • When a simple, explainable ensemble is preferred over deep gradient boosting
  • Datasets of moderate size where training speed matters

Input: Tabular data with the feature columns defined during training
Output: Predicted class label and class probabilities

Model Settings (set during training, used at inference)

N Estimators (default: 50) Maximum number of estimators (boosting rounds). Training stops early if perfect fit is achieved.

Learning Rate (default: 1.0) Shrinks each estimator's contribution. Lower values require more estimators but often improve generalization.

Algorithm (default: SAMME.R) Boosting algorithm. SAMME.R uses probability estimates; SAMME uses discrete class labels.

Base Estimator Max Depth (default: 1) Maximum depth of the decision tree base learner. Depth 1 (decision stumps) is the classic choice.
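How Learning Rate and the Algorithm setting interact can be seen in the per-round estimator weight. Under the discrete SAMME scheme (the multiclass generalization of AdaBoost; the function name here is illustrative), each round's classifier contributes its log-odds of weighted accuracy, plus a multiclass correction, scaled by the learning rate:

```python
import math

def estimator_weight(weighted_error, learning_rate=1.0, n_classes=2):
    """SAMME weight for one boosting round: log-odds of the round's
    weighted accuracy plus a multiclass correction, scaled by the
    learning rate. Errors above chance yield a non-positive weight."""
    return learning_rate * (
        math.log((1 - weighted_error) / weighted_error)
        + math.log(n_classes - 1)
    )
```

For a binary round with 25 % weighted error the weight is ln 3 ≈ 1.10; halving the learning rate halves every round's weight, which is why lower rates need more boosting rounds to build up the same ensemble margin.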

Inference Settings

No dedicated inference-time settings. The weighted combination of all estimators produces the final prediction.

