
OpenAI Text Embeddings 3 Large

High-performance text embeddings for semantic search, RAG, and clustering

OpenAI's text-embedding-3-large model produces 3072-dimensional dense embeddings. Optimized for long-context text (up to 8192 tokens). API-based — no GPU required.

When to use:

  • Semantic search over large document collections
  • RAG retrieval pipelines (query and document embeddings)
  • Text clustering and similarity grouping
  • Anomaly detection in text data

Input: Text string
Output: 3072-dimensional embedding vector
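A minimal sketch of requesting an embedding over the REST API, using only Python's standard library. The endpoint URL and response shape follow OpenAI's public embeddings API; the helper names (`build_request`, `embed`) and the use of the `OPENAI_API_KEY` environment variable are illustrative assumptions, not part of this documentation:

```python
import json
import os
import urllib.request

# Public OpenAI embeddings endpoint.
API_URL = "https://api.openai.com/v1/embeddings"

def build_request(text: str, model: str = "text-embedding-3-large") -> bytes:
    """Serialize the JSON request body for the embeddings endpoint."""
    return json.dumps({"model": model, "input": text}).encode("utf-8")

def embed(text: str) -> list[float]:
    """POST the text and return its 3072-dimensional embedding.

    Requires a valid API key in the OPENAI_API_KEY environment variable
    (illustrative convention; no GPU or local model is needed).
    """
    req = urllib.request.Request(
        API_URL,
        data=build_request(text),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # The API returns {"data": [{"embedding": [...]}, ...], ...}.
        return json.load(resp)["data"][0]["embedding"]
```

In a RAG pipeline, the same `embed` call would be applied to both documents (at index time) and queries (at search time), so that both live in the same vector space.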

Inference Settings

Normalize Output (default: true)
Normalize the embedding vector to unit length (L2 norm = 1).

  • true: Use for cosine similarity — enables efficient dot-product similarity
  • false: Keep raw magnitudes — needed if downstream models expect unnormalized vectors
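The reason normalization enables dot-product similarity: for unit-length vectors, the dot product equals cosine similarity, so similarity search can skip the per-comparison norm computation. A small self-contained sketch (helper names are illustrative):

```python
import math

def l2_normalize(v: list[float]) -> list[float]:
    """Scale a vector to unit length (L2 norm = 1)."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a: list[float], b: list[float]) -> float:
    """Plain dot product."""
    return sum(x * y for x, y in zip(a, b))

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product divided by both norms."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
```

With Normalize Output set to true, ranking results by raw dot product gives exactly the same ordering as ranking by cosine similarity, which is why it is the right choice for semantic search and clustering.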

