Documentation (English)

LightGBM

Fast leaf-wise gradient boosting for regression

LightGBM Regressor uses leaf-wise growth and histogram binning for fast, memory-efficient training. It handles large datasets and high-cardinality categoricals natively.
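The histogram-binning idea can be sketched in plain Python: continuous feature values are mapped to a small number of integer bins, so split finding scans at most `max_bin` candidates instead of every distinct raw value. This is an illustrative sketch, not LightGBM's actual implementation; `bin_feature` is a hypothetical helper and LightGBM's real default is `max_bin=255`.

```python
# Illustrative sketch of histogram binning (not LightGBM's actual code):
# each continuous value becomes an integer bin index, so split search
# touches at most max_bin candidates per feature.

def bin_feature(values, max_bin=255):
    """Map each value to an equal-width bin index in [0, max_bin - 1]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / max_bin or 1.0  # guard constant features (width 0)
    return [min(int((v - lo) / width), max_bin - 1) for v in values]

feature = [0.0, 2.0, 4.0, 8.0, 10.0]
bins = bin_feature(feature, max_bin=5)  # every value reduced to one of 5 bins
```

Because split gains are then computed from per-bin histograms of gradients, training cost scales with the number of bins rather than the number of distinct feature values.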

When to use:

  • Large datasets where training speed is critical
  • High-cardinality categorical features
  • Competitive accuracy with low training cost

Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value

Model Settings (set during training, used at inference)

N Estimators (default: 100) Number of boosting rounds.

Learning Rate (default: 0.1) Shrinkage per step.
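The shrinkage idea can be shown with a toy boosting loop: each round's "tree" contribution is scaled by the learning rate before being added to the running prediction, so a smaller rate needs more rounds to reach the target. This is a pure-Python sketch (the "tree" here is just the mean residual, not a real LightGBM tree; `boost_constant` is a hypothetical helper).

```python
# Toy gradient-boosting loop: each round fits the mean residual and adds
# it to the prediction scaled by learning_rate (shrinkage per step).
def boost_constant(y, n_estimators=100, learning_rate=0.1):
    pred = [0.0] * len(y)
    for _ in range(n_estimators):
        residual_mean = sum(t - p for t, p in zip(y, pred)) / len(y)
        step = learning_rate * residual_mean  # shrink each round's contribution
        pred = [p + step for p in pred]
    return pred

# With lr=0.1 the residual shrinks by factor 0.9 per round, so the
# prediction approaches the target as the number of rounds grows.
preds = boost_constant([10.0, 10.0, 10.0], n_estimators=50, learning_rate=0.1)
```

Lowering the learning rate while keeping rounds fixed underfits; in practice the two are tuned together.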

Num Leaves (default: 31) Maximum leaves per tree. Key LightGBM parameter — more leaves fit more complex functions.

Max Depth (default: -1 — unlimited) Tree depth limit.
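Num Leaves and Max Depth interact: a binary tree of depth d has at most 2^d leaves, so a Num Leaves setting above that bound is unreachable once a depth limit is set. A small arithmetic sketch of this tuning check (helper names here are hypothetical; the rule of thumb is that num_leaves should stay below 2^max_depth):

```python
def max_reachable_leaves(max_depth):
    """Upper bound on leaves for a binary tree of the given depth (-1 = unlimited)."""
    return float("inf") if max_depth < 0 else 2 ** max_depth

def effective_num_leaves(num_leaves, max_depth):
    """Leaf count actually attainable under the depth constraint."""
    return min(num_leaves, max_reachable_leaves(max_depth))

# With the defaults (num_leaves=31, max_depth=-1) the leaf count is the
# binding constraint; with max_depth=4 only 2**4 = 16 leaves are reachable.
```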

Min Child Samples (default: 20) Minimum data per leaf. Higher values regularize the model.

Subsample (default: 1.0) Row sampling fraction.

Objective (default: regression) Loss function. regression for MSE; regression_l1 for MAE; huber for robust regression.
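The three objectives correspond to different per-sample losses. A pure-Python sketch of each (illustrative only, not LightGBM's internal implementation; `delta` is the Huber transition point, an assumed default here):

```python
def mse_loss(y, p):               # objective="regression" (MSE)
    return (y - p) ** 2

def mae_loss(y, p):               # objective="regression_l1" (MAE)
    return abs(y - p)

def huber_loss(y, p, delta=1.0):  # objective="huber"
    # Quadratic near zero, linear in the tails: robust to outliers.
    r = abs(y - p)
    return 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)

# For a large residual, Huber grows linearly like MAE, so a single
# outlier pulls the fit far less than under squared error.
```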

Inference Settings

No dedicated inference-time settings.
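Putting it together, the Model Settings above map one-to-one onto the constructor of LightGBM's scikit-learn wrapper, `LGBMRegressor`. A sketch with the documented defaults, assuming `lightgbm` is installed; `X` (tabular features) and `y` (continuous target) are placeholders for your training data:

```python
# Sketch: the Model Settings above as LGBMRegressor constructor arguments.
from lightgbm import LGBMRegressor

model = LGBMRegressor(
    n_estimators=100,        # N Estimators: boosting rounds
    learning_rate=0.1,       # Learning Rate: shrinkage per step
    num_leaves=31,           # Num Leaves: max leaves per tree
    max_depth=-1,            # Max Depth: -1 = unlimited
    min_child_samples=20,    # Min Child Samples: min data per leaf
    subsample=1.0,           # Subsample: row sampling fraction
    objective="regression",  # MSE; "regression_l1" for MAE, "huber" for Huber
)
# model.fit(X, y)
# preds = model.predict(X_new)  # continuous predicted values
```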

