Tutorial

How to Build an AI Classification API in Minutes: An Email Triage Pipeline

Julia
March 20, 2026


Building natural language processing (NLP) models used to require extensive coding, complex tokenization, and infrastructure setup. Not anymore. In this tutorial, we'll build a complete text classification pipeline to automatically categorize and route customer support emails using real open-source data. No coding required.

#Step 1: Loading Support Ticket Data

To train our text classification model, we first need historical email data organized in a clean, tabular format. This allows the Transformer model to clearly distinguish between the input text and the target categories.

Here is how to bring your data into the pipeline:

  1. Upload the file: Navigate to the File Manager in your dashboard and upload your dataset.
  2. Set up the Flow: Open the Flow canvas.
  3. Connect the data: Add a Select Data node to the canvas, click on it, and select the uploaded file.

Tip: Unstructured Data? Let the AI handle it. If your support ticket export is a messy text dump and isn't neatly organized into a table, don't waste time formatting it manually. Just open the built-in chat interface and type: "Convert my dataset into a tabular format with columns for Subject, Body, Sender, and Ticket Type". The AI will automatically clean and structure the data for you!
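If you would rather pre-structure the export yourself, here is a minimal pure-Python sketch of the same idea. The `---` ticket separator and the `Key: value` header lines are assumptions about your export format; adapt the parsing to whatever your helpdesk actually emits.

```python
import re

def parse_ticket_dump(raw: str) -> list[dict]:
    """Turn a raw ticket export into tabular rows.

    Assumes tickets are separated by '---' lines and each ticket carries
    'Subject:', 'Sender:' and 'Ticket Type:' header lines, with the
    remaining text as the body (an assumed format, not a platform spec).
    """
    rows = []
    for chunk in raw.split("\n---\n"):
        headers = dict(re.findall(
            r"^(Subject|Sender|Ticket Type):\s*(.*)$", chunk, re.MULTILINE))
        # Every line that is not a recognized header becomes part of the body.
        body_lines = [
            line for line in chunk.splitlines()
            if not re.match(r"^(Subject|Sender|Ticket Type):", line)
        ]
        rows.append({
            "Subject": headers.get("Subject", ""),
            "Body": "\n".join(body_lines).strip(),
            "Sender": headers.get("Sender", ""),
            "Ticket Type": headers.get("Ticket Type", ""),
        })
    return rows
```

The result is a list of dicts with exactly the four columns the tip above asks the AI to produce, ready to save as CSV and upload via the File Manager.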

#Step 2: Configuring and Training the AI Model

Now that our support ticket data is loaded, it’s time to add the Transformer model. We will use a classic, highly efficient NLP model called BERT.

Here is how to set up the training node on the canvas:

  1. Add a Train AI Model node to the flow and connect it to the Select Data node.
  2. Click on the node and configure the model type:
    • Category: Select Natural Language Processing.
    • Subcategory: Select Text Classification.
  • Model: Choose BERT Base (behind the scenes, this selects the bert-base-uncased checkpoint, which lowercases text before tokenization and is well suited to everyday email text).
  3. Click Save & Continue.

#Selecting Inputs and Labels

Next, the model needs to know which columns to read and which column to predict. In the Input tab:

  • Text Columns: Select the columns containing the email content (e.g., Subject and Body).
  • Label Column: Select the column containing the category to predict (e.g., Ticket Type).
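BERT-style models take a single text sequence per example, so when you select more than one text column the platform presumably joins them into one string before tokenization. A sketch of that idea (the newline separator and helper name are assumptions, not the platform's documented behavior):

```python
def build_model_input(row: dict, text_columns: list[str]) -> str:
    """Join the selected text columns into one input string.

    Multi-column inputs (Subject + Body) must be concatenated before a
    BERT-style model can read them; the newline separator here is an
    assumption about how the platform does this internally.
    """
    return "\n".join(str(row[col]) for col in text_columns if row.get(col))

row = {"Subject": "Password reset", "Body": "I forgot my password.",
       "Ticket Type": "Account"}
text = build_model_input(row, ["Subject", "Body"])
label = row["Ticket Type"]  # the Label Column value the model learns to predict
```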

#Setting the Training Parameters

  • Learning Rate (2e-5): This controls how aggressively the model updates its knowledge. For BERT, a small learning rate prevents it from "forgetting" its foundational language skills.
  • Batch Size (16 or 32): The number of emails the model processes at once before updating. 16 is a safe, stable choice.
  • Max Sequence Length (256): This sets the maximum number of words/tokens the model will read per email. 256 is usually plenty to capture the context of a support ticket without slowing down training.
  • Epochs (3): The number of times the model will read through the entire dataset. For fine-tuning BERT, 3 or 4 epochs is the sweet spot to learn the categories without overfitting.

Once you've entered these, click Save & Close.
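For reference, the four settings above map onto standard fine-tuning hyperparameters. The key names below follow common Hugging Face `transformers` conventions; the platform's internal names may differ, so treat this mapping as an assumption:

```python
# Hyperparameters matching the values entered in the UI. Key names follow
# common Hugging Face `transformers` conventions (an assumption about the
# platform's internals, not its documented configuration).
training_config = {
    "learning_rate": 2e-5,              # small, so fine-tuning doesn't erase BERT's pretraining
    "per_device_train_batch_size": 16,  # 16 is the safe, stable choice; 32 if memory allows
    "max_seq_length": 256,              # tokens read per email; longer text is truncated
    "num_train_epochs": 3,              # 3-4 full passes usually suffice without overfitting
}
```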

#Run the Pipeline!

To see the results once the model finishes learning:

  1. Connect an Output Preview node to the output of the Train AI Model node.
  2. Hit the Run Flow button at the top of your screen!

Depending on the size of your dataset, the platform will take a few minutes to process the text and fine-tune the classification model.

#Step 3: Evaluating Model Performance

Once the training process is complete, the Output Preview node will display the final results. This step is necessary to understand how accurately the model learned to categorize the text.

#Reviewing the Metrics

The platform automatically calculates standard evaluation metrics to measure the AI's performance. The two most important numbers to check are:

  • Accuracy: The overall percentage of emails the model classified correctly.
  • F1 Score: A metric that balances precision and recall. This number is particularly useful for datasets where some categories appear much less frequently than others (for example, if there are hundreds of "Password Reset" tickets but only a few "Security Breach" tickets).
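The platform computes both numbers for you, but for intuition, here is how they fall out of the predictions. This from-scratch sketch uses made-up ticket labels; the macro-averaged F1 shown here is one common variant, and the platform may use a different averaging scheme:

```python
def accuracy(y_true: list[str], y_pred: list[str]) -> float:
    """Fraction of emails whose predicted category matches the true one."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true: list[str], y_pred: list[str]) -> float:
    """Per-category F1 (harmonic mean of precision and recall), averaged
    so that rare categories count as much as frequent ones."""
    scores = []
    for cat in set(y_true):
        tp = sum(t == cat and p == cat for t, p in zip(y_true, y_pred))
        fp = sum(t != cat and p == cat for t, p in zip(y_true, y_pred))
        fn = sum(t == cat and p != cat for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Illustrative labels only, echoing the rare-category example above.
y_true = ["Password Reset", "Password Reset", "Security Breach",
          "Password Reset", "Security Breach"]
y_pred = ["Password Reset", "Security Breach", "Security Breach",
          "Password Reset", "Password Reset"]
print(accuracy(y_true, y_pred))  # 0.6
```

Note how the two misclassified "Security Breach" tickets drag the macro F1 below the raw accuracy: that is exactly why F1 is the number to watch on imbalanced ticket data.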

#Sample Data and Output Schema

Below the performance metrics, the preview node generates sample data and the final Output Schema. This is the exact structured format the model will use when returning its predictions in the real world. The schema is ready to be copied and pasted directly into the next step, setting up the inference API, to test the model on unseen text.


Figure 1: The Sample Data and Output Schema

#Step 4: Building the Inference and Routing Pipeline

After the model finishes training, the final phase is setting up the deployment pipeline to process new, incoming emails and route them to the appropriate departments.

#Loading the Model and Inputs

Start by creating a new flow. Add a Select Artifact node to the canvas. This node is responsible for loading the exact files generated during the training phase:

  • Model Weights: The fine-tuned intelligence of the model.
  • id2label: The mapping that translates raw model outputs into readable category names.
  • preprocessing_config: The tokenizer configuration required to process new text exactly as it was done during training.
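To see why `id2label` matters, here is what it does conceptually. The mapping and raw scores below are illustrative placeholders, not the actual training artifacts:

```python
# The model outputs one raw score (logit) per class index; id2label
# translates the winning index back into a readable category name.
# Both the mapping and the scores here are made up for illustration.
id2label = {0: "Technical Support", 1: "Billing", 2: "Security Breach"}

logits = [0.2, 3.1, -1.4]  # per-class scores for one email (made up)
predicted_index = max(range(len(logits)), key=logits.__getitem__)
predicted_category = id2label[predicted_index]
print(predicted_category)  # Billing
```

Without this artifact the pipeline could only return an opaque class index, which is why the Select Artifact node must load it alongside the model weights.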

Next, add an API Input node. In the Schema tab, paste the output schema copied from the previous evaluation step. In the Test Data tab, insert the sample data to use for the initial test run.

#Running the Inference

Connect both the Select Artifact node and the API Input node to a new Use AI Model node. Configure this node with the exact same settings used during training: select the Natural Language Processing category, the Text Classification subcategory, and the BERT Base model.

#Configuring the Routing Logic

To ensure the parsed email reaches the correct department, a routing mechanism must be added. While a Conditional node works well for simple two-category splits, a Switch node is necessary here to handle multiple support categories.

Configure the Switch node with the following parameters:

  • Field Path: Point this to the predicted category in the JSON output (for example, json.output.0.predictions.category).
  • Operator: Select Equal.
  • Value: Enter the specific category name (e.g., Technical Support).
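The Switch node's logic amounts to reading one field out of the prediction JSON and comparing it for equality. A plain-Python sketch of that behavior, mirroring the example field path above (minus its `json.` prefix); the routing table addresses are placeholders:

```python
def get_field(payload, path: str):
    """Walk a dotted field path such as 'output.0.predictions.category'
    through nested dicts and lists, treating numeric parts as list indices."""
    value = payload
    for part in path.split("."):
        value = value[int(part)] if isinstance(value, list) else value[part]
    return value

# Placeholder routing table: predicted category -> department inbox.
routes = {
    "Technical Support": "tech@example.com",
    "Billing": "billing@example.com",
}

# Shape mirrors the field path from the Switch node configuration.
payload = {"output": [{"predictions": {"category": "Technical Support"}}]}
category = get_field(payload, "output.0.predictions.category")
destination = routes.get(category, "triage@example.com")  # default branch
```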

#Setting Up Notifications and Testing

For each specific category branch coming out of the Switch node, attach a Mail Notification node. Enter the destination email address for that specific department, along with a subject line and any necessary notes.

Finally, click Run Flow. The platform will perform inference on the provided test data, identify the correct category, follow the Switch node logic, and automatically send an email alert to the respective department.


Figure 2: Inference workflow for email classification

#Step 5: Deploying the API

The final step is transitioning the tested inference flow into a live, accessible API. This allows external systems to send text and receive the predicted category in real time.

#Creating the Deployment

To set up the live endpoint:

  1. Navigate to the Deployment & APIs section in the dashboard and click Create Deployment.
  2. Fill in the basic information, such as deployment name.
  3. In the deployment configuration settings, select the appropriate deployment type.
  4. Generate an API key to ensure secure access to the endpoint.
  5. Click Create Deployment to finalize the setup.

#Integrating the Endpoint

Once the deployment is active, the system automatically generates the live API endpoint. It provides ready-to-use code snippets in multiple formats, including cURL, Python, and JavaScript. These snippets can be copied and pasted directly into existing software or internal email parsers, making integration fast and straightforward. The standalone AI classification service is now complete and ready to process real-world data.
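A generated Python snippet typically looks something like the following sketch. The URL, header name, and payload shape are placeholders (copy the real values from your deployment page); only the standard library is used, so it drops into any internal email parser:

```python
import json
import urllib.request

API_URL = "https://platform.example.com/api/v1/deployments/email-triage"  # placeholder
API_KEY = "YOUR_API_KEY"  # generated in the deployment settings

def build_request(subject: str, body: str) -> urllib.request.Request:
    """Assemble an authenticated classification request.

    The payload shape and the Bearer 'Authorization' header are
    assumptions; check the generated snippet on the deployment page
    for the real format.
    """
    payload = json.dumps({"Subject": subject, "Body": body}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},
        method="POST",
    )

req = build_request("Login fails", "I cannot log in since the update.")
# response = urllib.request.urlopen(req)  # uncomment once real values are in place
# print(json.load(response))             # the predicted category, per the Output Schema
```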


Figure 3: Deployment setup
