Dimensionality Reduction
Project new data into the lower-dimensional space learned during training
Dimensionality reduction inference transforms new data points into the compressed representation learned during training. Provide the same feature columns used during training; the model returns the projected coordinates for each row.
Note: t-SNE, LLE, MDS, and Spectral Embedding are training-only and do not support inference on new data.
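The train/inference split above can be sketched with scikit-learn, which uses the same convention: a fitted reducer exposes `transform()` for projecting new rows, while training-only methods such as t-SNE do not. This is an illustrative sketch, not this platform's API; the array shapes and random data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))  # 100 rows, 5 feature columns

# Training: learn the projection onto 2 principal components.
pca = PCA(n_components=2).fit(X_train)

# Inference: new rows with the same 5 feature columns are
# projected into the learned 2-D space.
X_new = rng.normal(size=(3, 5))
coords = pca.transform(X_new)
print(coords.shape)  # (3, 2) — one 2-D coordinate per input row

# Training-only methods have no transform(): t-SNE can only
# embed the data it was fitted on.
print(hasattr(TSNE(), "transform"))  # False
```

The same `fit`-then-`transform` pattern applies to every model listed below that supports inference.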
Available Models
- PCA – Linear projection onto principal components
- UMAP – Nonlinear manifold projection preserving local and global structure
- Truncated SVD – SVD-based reduction, works well on sparse data
- Factor Analysis – Probabilistic linear model for latent factors
- ICA (Independent Component Analysis) – Extraction of statistically independent source signals
- NMF (Non-negative Matrix Factorization) – Non-negative factorization for parts-based representations
- LDA (Linear Discriminant Analysis) – Supervised linear projection maximizing class separability
- Isomap – Geodesic-distance manifold embedding
- Kernel PCA – PCA with nonlinear kernel transformations
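As a concrete example of one entry above, Truncated SVD is well suited to sparse inputs (e.g. TF-IDF matrices) because it never densifies the data. This sketch uses scikit-learn's `TruncatedSVD` on a random sparse matrix; the matrix size, density, and component count are illustrative assumptions.

```python
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

# A sparse 200 x 50 matrix with ~5% nonzero entries,
# standing in for something like TF-IDF features.
X = sparse_random(200, 50, density=0.05, random_state=0)

# Fit on training data, then project (here, the same rows)
# down to 10 components without ever converting X to dense.
svd = TruncatedSVD(n_components=10, random_state=0).fit(X)
reduced = svd.transform(X)
print(reduced.shape)  # (200, 10)
```

Unlike PCA, Truncated SVD does not center the data, which is exactly what keeps sparse inputs sparse.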