Multi-Layer Perceptron
Neural network regressor for complex nonlinear relationships
The MLP Regressor is a feed-forward neural network that learns nonlinear feature transformations through one or more hidden layers. It typically requires more data and more hyperparameter tuning than tree-based models.
When to use:
- Complex nonlinear relationships not captured by tree-based models
- Sufficient data to support neural network training
- When smooth continuous predictions are important
Input: Tabular data with the feature columns defined during training
Output: Continuous predicted value
Model Settings (set during training, used at inference)
Hidden Layer Sizes (default: (100,)) Network architecture as a tuple; each entry is the number of neurons in one hidden layer.
Activation (default: relu) Activation function applied in the hidden layers.
Solver (default: adam) Optimization algorithm used to fit the weights.
Alpha (default: 0.0001) L2 regularization strength; larger values apply a stronger penalty on the weights.
Learning Rate Init (default: 0.001) Initial learning rate for the optimizer.
Max Iter (default: 200) Maximum number of training iterations before the solver stops, whether or not it has converged.
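These settings and defaults line up with scikit-learn's MLPRegressor, so a minimal training sketch under that assumption looks like the following (the dataset and pipeline are illustrative, not part of this tool):

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the tabular training data
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)

model = make_pipeline(
    StandardScaler(),  # MLPs are sensitive to feature scale
    MLPRegressor(
        hidden_layer_sizes=(100,),   # one hidden layer of 100 neurons
        activation="relu",
        solver="adam",
        alpha=1e-4,                  # L2 regularization strength
        learning_rate_init=1e-3,
        max_iter=200,
        random_state=0,
    ),
)
model.fit(X, y)
preds = model.predict(X[:5])  # continuous predicted values
```

Scaling the inputs before training is not one of the listed settings, but it usually matters more for an MLP than any single hyperparameter above.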
Inference Settings
No dedicated inference-time settings; the trained weights are applied as-is at prediction time.
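Since inference only applies the stored weights, the usual pattern is to persist the fitted model and reload it for prediction. A sketch, again assuming a scikit-learn backend (joblib and the tiny synthetic dataset are illustrative):

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=200)

model = MLPRegressor(hidden_layer_sizes=(100,), max_iter=200,
                     random_state=0).fit(X, y)

# Persist the trained weights, then reload for inference
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "mlp.joblib")
    joblib.dump(model, path)
    loaded = joblib.load(path)

preds = loaded.predict(X[:3])
# The reloaded model reproduces the original predictions exactly
assert np.allclose(preds, model.predict(X[:3]))
```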