This tutorial walks through building an AI-powered support ticket classifier using Node.js, LangChain, and OpenAI. The system takes unstructured support tickets and extracts structured metadata: category, urgency, sentiment, customer info, and suggested actions. Form processing is one of the more practical AI applications. Instead of manually reading tickets, routing them, and extracting data, you let the model do it.

What you’re building

A web app with a form where users paste support tickets. The AI analyzes each ticket and returns structured JSON:
  • Category (Billing, Technical, Feature Request, Bug Report, Account, or General Inquiry)
  • Urgency (Critical, High, Medium, or Low)
  • Sentiment (Positive, Neutral, or Negative)
  • Product or service mentioned, if any
  • Customer name and email, if provided
  • Reference numbers like ticket IDs, project IDs, invoice numbers
  • A one-sentence summary
  • Suggested actions for the support team
The frontend displays results as formatted HTML with color-coded badges, plus a toggle to see the raw JSON.

Prerequisites

You’ll need Node.js 22+, npm, an OpenAI API key from platform.openai.com, the Upsun CLI (docs.upsun.com/administration/cli), and Git.

Project setup

Create the project:
mkdir ticket-classifier
cd ticket-classifier
npm init -y
Install dependencies:
npm install express dotenv cors @langchain/core @langchain/openai
npm install -D typescript @types/node @types/express @types/cors tsx @biomejs/biome
That’s express for the web server, dotenv for env files, cors for cross-origin requests, the LangChain packages for working with OpenAI, TypeScript for type safety, tsx to run TypeScript directly, and Biome for linting and formatting.
Configure TypeScript (tsconfig.json):
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}
Update package.json:
{
  "type": "module",
  "scripts": {
    "dev": "tsx watch src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js",
    "lint": "biome check .",
    "lint:fix": "biome check --write .",
    "format": "biome format --write ."
  }
}

Building the classifier

1. The classification logic

This is where the AI does its work. We send the ticket to OpenAI with a detailed system prompt that explains the classification schema. Create src/classifier.ts:
import { ChatOpenAI } from "@langchain/openai";

const OPENAI_MODEL = process.env.OPENAI_MODEL || "gpt-4o-mini";

export interface ClassificationResult {
  category: string;
  urgency: string;
  sentiment: string;
  product: string | null;
  customerName: string | null;
  customerEmail: string | null;
  referenceNumbers: string[];
  summary: string;
  suggestedActions: string[];
}

const SYSTEM_PROMPT = `You are a support ticket classifier for Upsun, a Platform-as-a-Service (PaaS) hosting provider. Analyze support tickets and extract structured metadata.

Upsun products and services include:
- Web application hosting (PHP, Python, Node.js, Go, Java, Ruby, etc.)
- Database services (MySQL, PostgreSQL, MariaDB, MongoDB, Redis, Elasticsearch)
- Environment management (staging, production, preview environments)
- Git-based deployments
- CLI tools
- API access
- Domains and SSL certificates
- Resource scaling (CPU, RAM, disk)
- Backups and disaster recovery

Classify each ticket into exactly ONE category:
- Billing: Payment issues, invoices, pricing questions, plan changes, refunds
- Technical: Deployment failures, errors, performance issues, configuration help
- Feature Request: Suggestions for new features or improvements
- Bug Report: Reports of unexpected behavior or defects
- Account: Login issues, access management, team permissions, profile changes
- General Inquiry: Questions that don't fit other categories

Determine urgency level:
- Critical: Production down, security breach, data loss risk
- High: Major functionality impaired, blocking deployment
- Medium: Issue affecting workflow but workaround exists
- Low: General questions, minor issues, future planning

Determine sentiment:
- Positive: Happy, grateful, complimentary
- Neutral: Matter-of-fact, informational
- Negative: Frustrated, angry, disappointed

Extract:
- Product/service mentioned (if any)
- Customer name (if mentioned)
- Customer email (if mentioned)
- Reference numbers (ticket IDs, order numbers, project IDs, environment names)
- Brief summary (1 sentence)
- Suggested actions for support team (2-3 actionable items)

Respond ONLY with valid JSON matching this exact structure:
{
  "category": "string",
  "urgency": "string",
  "sentiment": "string",
  "product": "string or null",
  "customerName": "string or null",
  "customerEmail": "string or null",
  "referenceNumbers": ["array of strings"],
  "summary": "string",
  "suggestedActions": ["array of strings"]
}`;

export async function classifyTicket(ticketText: string): Promise<ClassificationResult> {
  const model = new ChatOpenAI({
    modelName: OPENAI_MODEL,
    temperature: 0,
  });

  const response = await model.invoke([
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: ticketText },
  ]);

  const content = response.content as string;

  // Extract JSON from response (handle potential markdown code blocks)
  let jsonStr = content;
  const jsonMatch = content.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (jsonMatch) {
    jsonStr = jsonMatch[1].trim();
  }

  const result = JSON.parse(jsonStr) as ClassificationResult;

  // Validate required fields
  if (!result.category || !result.urgency || !result.sentiment) {
    throw new Error("Invalid classification response: missing required fields");
  }

  return result;
}
Worth noting:
  • Temperature is set to 0. For classification tasks you want deterministic output: the same ticket should always get the same category.
  • The prompt lists all valid values explicitly. Without that, you might get “Tech Support” instead of “Technical” or “Urgent” instead of “High.”
  • We handle markdown code blocks. Sometimes the model wraps JSON in triple backticks; the regex strips those out.
  • Validation happens after parsing. If the model returns malformed JSON or misses required fields, we throw an error rather than returning garbage.
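The fence-stripping step is easy to unit-test if you pull it out into a helper. A sketch (extractJson is an illustrative name; in the tutorial the logic stays inline in classifyTicket):

```typescript
// Strip an optional markdown code fence from a model response,
// returning just the JSON payload ready for JSON.parse.
function extractJson(content: string): string {
  const match = content.match(/```(?:json)?\s*([\s\S]*?)```/);
  return (match ? match[1] : content).trim();
}
```

Both the fenced and the bare case reduce to the same string, so the downstream JSON.parse call doesn't need to care which form the model chose.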

2. Input validation

Support tickets can be any length, but we need sensible limits. Create src/validation.ts:
import type { NextFunction, Request, Response } from "express";

const MAX_TICKET_LENGTH = 10000;
const MIN_TICKET_LENGTH = 10;

export function validateTicketInput(req: Request, res: Response, next: NextFunction): void {
  const { ticket } = req.body;

  if (!ticket || typeof ticket !== "string") {
    res.status(400).json({ error: "Missing or invalid 'ticket' field" });
    return;
  }

  const trimmed = ticket.trim();

  if (trimmed.length < MIN_TICKET_LENGTH) {
    res.status(400).json({ error: `Ticket must be at least ${MIN_TICKET_LENGTH} characters` });
    return;
  }

  if (trimmed.length > MAX_TICKET_LENGTH) {
    res
      .status(400)
      .json({ error: `Ticket exceeds maximum length of ${MAX_TICKET_LENGTH} characters` });
    return;
  }

  req.body.ticket = trimmed;
  next();
}
Ten thousand characters is generous. Most support tickets run under 2,000. The minimum of 10 catches empty or near-empty submissions.

3. Rate limiting

Prevent abuse with a simple in-memory rate limiter. Create src/rate-limiter.ts:
import type { NextFunction, Request, Response } from "express";

const WINDOW_MS = 60 * 1000; // 1 minute
const MAX_REQUESTS = 20;

interface RateLimitEntry {
  count: number;
  resetAt: number;
}

const ipRequests = new Map<string, RateLimitEntry>();

// Cleanup old entries periodically
setInterval(() => {
  const now = Date.now();
  for (const [ip, entry] of ipRequests.entries()) {
    if (entry.resetAt < now) {
      ipRequests.delete(ip);
    }
  }
}, WINDOW_MS);

export function rateLimiter(req: Request, res: Response, next: NextFunction): void {
  const ip = req.ip || req.socket.remoteAddress || "unknown";
  const now = Date.now();

  let entry = ipRequests.get(ip);

  if (!entry || entry.resetAt < now) {
    entry = { count: 0, resetAt: now + WINDOW_MS };
    ipRequests.set(ip, entry);
  }

  entry.count++;

  if (entry.count > MAX_REQUESTS) {
    const retryAfter = Math.ceil((entry.resetAt - now) / 1000);
    res.setHeader("Retry-After", retryAfter.toString());
    res.status(429).json({
      error: "Too many requests. Please try again later.",
      retryAfter,
    });
    return;
  }

  next();
}
Twenty requests per minute per IP. The cleanup interval prevents memory from growing unbounded.
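The fixed-window logic becomes testable if you factor it into a pure function of the current state and clock. A refactoring sketch (the tutorial keeps this inline in the middleware; `hit` is a hypothetical name):

```typescript
interface WindowState {
  count: number;
  resetAt: number;
}

// Pure fixed-window counter: given the previous state and the current time,
// return the new state, whether this request is allowed, and the Retry-After value.
function hit(
  state: WindowState | undefined,
  now: number,
  windowMs: number,
  max: number,
): { state: WindowState; allowed: boolean; retryAfter: number } {
  const s = !state || state.resetAt < now ? { count: 0, resetAt: now + windowMs } : state;
  s.count++;
  return {
    state: s,
    allowed: s.count <= max,
    retryAfter: Math.ceil((s.resetAt - now) / 1000),
  };
}
```

With this shape you can simulate a burst of requests and the window rollover without fake timers or a fake Express request.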

4. Express server

Create src/index.ts:
import "dotenv/config";
import { dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
import cors from "cors";
import express from "express";
import { classifyTicket } from "./classifier.js";
import { rateLimiter } from "./rate-limiter.js";
import { validateTicketInput } from "./validation.js";

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const PORT = Number.parseInt(process.env.PORT || "3000", 10);

if (!OPENAI_API_KEY) {
  console.error("ERROR: OPENAI_API_KEY environment variable is required");
  process.exit(1);
}

const app = express();
app.use(cors());
app.use(express.json({ limit: "50kb" }));
app.use(express.urlencoded({ extended: true, limit: "50kb" }));

app.use(express.static(resolve(__dirname, "../public")));

app.post("/api/classify", rateLimiter, validateTicketInput, async (req, res) => {
  const { ticket } = req.body;
  const startTime = performance.now();
  console.log(`[classify] Request: "${ticket.slice(0, 80)}..."`);

  try {
    const result = await classifyTicket(ticket);
    const elapsed = (performance.now() - startTime).toFixed(0);
    console.log(`[classify] Done in ${elapsed}ms | Category: ${result.category}`);
    res.json(result);
  } catch (err) {
    console.error(`[classify] Error after ${(performance.now() - startTime).toFixed(0)}ms:`, err);
    res.status(500).json({ error: "Classification failed. Please try again." });
  }
});

app.get("/health", (_req, res) => {
  res.json({ status: "ok" });
});

app.listen(PORT, () => {
  console.log(`[server] Support ticket classifier running on port ${PORT}`);
});
One POST endpoint that accepts a ticket, classifies it, and returns JSON. We log timing for monitoring.

5. Frontend

Create public/index.html. The full file is in the repository. It has:
  • Header with title and description
  • Example tickets section with six pre-written Upsun-themed tickets
  • Form with textarea and submit button
  • Results section showing classification with color-coded badges
  • JSON toggle to view raw response
The example tickets cover billing (plan upgrade request), technical (deployment failure), feature request (autoscaling), bug report (CLI crash), account (team permissions), and general inquiry (platform comparison). Check the repo for the complete HTML/CSS/JS.
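The color-coded badges boil down to a small urgency-to-class mapping in the frontend script. A sketch of that piece (the class names here are assumptions; the actual HTML/CSS/JS is in the repo):

```typescript
// Map the classifier's urgency levels to badge CSS classes.
// Unknown values fall back to a neutral badge rather than breaking the UI.
const URGENCY_BADGES: Record<string, string> = {
  Critical: "badge-critical",
  High: "badge-high",
  Medium: "badge-medium",
  Low: "badge-low",
};

function badgeClass(urgency: string): string {
  return URGENCY_BADGES[urgency] ?? "badge-unknown";
}
```

The fallback matters: even with temperature 0 and an explicit prompt, the model occasionally returns a value outside the enum, and the UI should degrade gracefully.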

Local development

Create .env:
OPENAI_API_KEY=sk-your-actual-key-here
OPENAI_MODEL=gpt-4o-mini
PORT=3000
Create .env.example for documentation:
OPENAI_API_KEY=
OPENAI_MODEL=gpt-4o-mini
PORT=3000
Run the dev server:
npm run dev
Open http://localhost:3000. Click an example ticket, hit “Classify ticket,” and watch the results appear.

Deploying to Upsun

Create .upsun/config.yaml:
applications:
  ticket-classifier:
    type: "nodejs:22"

    variables:
      env:
        OPENAI_MODEL: "gpt-4o-mini"

    build:
      flavor: none

    dependencies:
      nodejs:
        pnpm: "9.15.4"

    hooks:
      build: |
        set -e
        echo "Installing dependencies with pnpm..."
        pnpm install --frozen-lockfile

        echo "Compiling TypeScript..."
        pnpm build

        echo "Build complete!"

      deploy: |
        echo "Deploy hook: Nothing to do"

    web:
      commands:
        start: "node dist/index.js"
      locations:
        /:
          passthru: true
          allow: false
          scripts: false
          rules:
            \.(css|js|gif|jpe?g|png|svg|ico|woff2?|ttf|eot|html)$:
              allow: true

    mounts:
      "/.npm":
        source: "storage"
        source_path: "npm_cache"
      "/.pnpm-store":
        source: "storage"
        source_path: "pnpm_store"

    relationships: {}

routes:
  "https://{default}/":
    type: upstream
    upstream: "ticket-classifier:http"
  "https://www.{default}/":
    type: redirect
    to: "https://{default}/"
Initialize Git:
git init
git add .
git commit -m "Initial commit: Support ticket classifier"
Create Upsun project:
upsun login
upsun project:create
Follow the prompts for organization, name, region, and plan. Set the OpenAI API key:
upsun variable:create \
  --level project \
  --name env:OPENAI_API_KEY \
  --value "sk-your-actual-key-here" \
  --sensitive true \
  --visible-build false \
  --visible-runtime true
Deploy:
upsun push
Get the URL:
upsun url

Testing

Try the example tickets. Each should classify correctly:
Example                 Expected Category    Expected Urgency
Billing issue           Billing              High
Deployment failure      Technical            Critical
Autoscaling request     Feature Request      Low
CLI crash               Bug Report           Medium
Team access             Account              Low
Platform comparison     General Inquiry      Low
Monitor logs:
upsun logs --tail
You’ll see request logs with timing and categories.

Extending the classifier

Add more categories

Edit the system prompt in src/classifier.ts:
// Add to the categories list
- Sales: Pricing inquiries, demo requests, enterprise questions
- Security: Vulnerability reports, compliance questions, audit requests
Update the ClassificationResult interface if needed.
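One way to keep the prompt, the interface, and validation from drifting apart is to define the category list once and derive the rest. A sketch (the tutorial code uses plain strings; this is an optional refactoring):

```typescript
// Single source of truth for categories, including the two new ones.
const CATEGORIES = [
  "Billing",
  "Technical",
  "Feature Request",
  "Bug Report",
  "Account",
  "General Inquiry",
  "Sales",
  "Security",
] as const;

type Category = (typeof CATEGORIES)[number];

// Type guard: narrows a raw model string to the Category union.
function isCategory(value: string): value is Category {
  return (CATEGORIES as readonly string[]).includes(value);
}
```

You can then interpolate `CATEGORIES` into the system prompt and call `isCategory` during validation, so adding a category is a one-line change.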

Extract more fields

Add fields to the prompt and interface:
interface ClassificationResult {
  // ... existing fields
  language: string;           // Detected language
  escalationRequired: boolean; // Needs manager attention
  estimatedResolutionTime: string; // "< 1 hour", "1-4 hours", etc.
}

Connect to a ticketing system

Instead of just displaying results, send them somewhere:
app.post("/api/classify", rateLimiter, validateTicketInput, async (req, res) => {
  const { ticket } = req.body;
  const result = await classifyTicket(ticket);

  // Send to Zendesk, Freshdesk, Linear, etc. (URL shown is illustrative)
  await fetch("https://api.zendesk.com/tickets", {
    method: "POST",
    headers: { "Authorization": `Bearer ${process.env.ZENDESK_TOKEN}` },
    body: JSON.stringify({
      subject: result.summary,
      priority: mapUrgency(result.urgency),
      tags: [result.category.toLowerCase()],
    }),
  });

  res.json(result);
});
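The snippet above calls a mapUrgency helper you'd define yourself. A minimal sketch, mapping the classifier's urgency levels onto Zendesk-style priority values (urgent/high/normal/low; check your ticketing system's accepted values):

```typescript
// Translate the classifier's urgency enum into a ticketing-system priority.
function mapUrgency(urgency: string): string {
  const map: Record<string, string> = {
    Critical: "urgent",
    High: "high",
    Medium: "normal",
    Low: "low",
  };
  return map[urgency] ?? "normal";
}
```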

Add batch processing

Process multiple tickets at once:
app.post("/api/classify/batch", async (req, res) => {
  const { tickets } = req.body; // Array of strings

  if (!Array.isArray(tickets)) {
    res.status(400).json({ error: "'tickets' must be an array of strings" });
    return;
  }

  const results = await Promise.all(
    tickets.map((ticket) => classifyTicket(ticket))
  );

  res.json(results);
});
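Note that Promise.all rejects the whole batch if a single classification fails. If you'd rather get per-ticket results, Promise.allSettled is a drop-in alternative. A sketch (classifyBatch is a hypothetical helper, shown with the classifier injected so it's testable):

```typescript
// Classify a batch of tickets, reporting success or failure per ticket
// instead of failing the whole request on the first error.
async function classifyBatch(
  tickets: string[],
  classify: (t: string) => Promise<unknown>,
): Promise<Array<{ index: number; ok: boolean; result?: unknown; error?: string }>> {
  const settled = await Promise.allSettled(tickets.map((t) => classify(t)));
  return settled.map((s, index) =>
    s.status === "fulfilled"
      ? { index, ok: true, result: s.value }
      : { index, ok: false, error: String(s.reason) },
  );
}
```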

Store results

Add PostgreSQL for persistence:
# .upsun/config.yaml — declare the service, then reference it from the app
services:
  database:
    type: "postgresql:16"

# inside applications.ticket-classifier, replace relationships: {} with:
relationships:
  database:
import pg from "pg";

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
});

app.post("/api/classify", async (req, res) => {
  const { ticket } = req.body;
  const result = await classifyTicket(ticket);

  await pool.query(
    `INSERT INTO classifications (ticket_text, category, urgency, sentiment, created_at)
     VALUES ($1, $2, $3, $4, NOW())`,
    [ticket, result.category, result.urgency, result.sentiment]
  );

  res.json(result);
});
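The INSERT assumes a classifications table already exists. A minimal schema that matches the query might look like this (column types are assumptions inferred from the query above):

```sql
CREATE TABLE IF NOT EXISTS classifications (
  id          SERIAL PRIMARY KEY,
  ticket_text TEXT NOT NULL,
  category    TEXT NOT NULL,
  urgency     TEXT NOT NULL,
  sentiment   TEXT NOT NULL,
  created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```

You could run this once from a deploy hook or a migration tool rather than at app startup.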

Cost considerations

Each classification uses roughly 500-1,500 tokens depending on ticket length. With gpt-4o-mini:
Daily Volume      Monthly Cost (approx.)
100 tickets       ~$3
1,000 tickets     ~$30
10,000 tickets    ~$300
For high volume, consider caching identical tickets, using embeddings to find similar past tickets, batching requests, or fine-tuning a smaller model.
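The "cache identical tickets" idea can be as simple as keying a map on a hash of the normalized ticket text, so repeat submissions skip the OpenAI call entirely. A sketch (cacheKey and classifyCached are illustrative names; the cache here is in-memory and unbounded, so a real deployment would want eviction):

```typescript
import { createHash } from "node:crypto";

const cache = new Map<string, unknown>();

// Normalize whitespace and case so trivially different copies of the
// same ticket hit the same cache entry.
function cacheKey(ticket: string): string {
  return createHash("sha256").update(ticket.trim().toLowerCase()).digest("hex");
}

async function classifyCached(
  ticket: string,
  classify: (t: string) => Promise<unknown>,
): Promise<unknown> {
  const key = cacheKey(ticket);
  if (cache.has(key)) return cache.get(key);
  const result = await classify(ticket);
  cache.set(key, result);
  return result;
}
```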

Troubleshooting

“OPENAI_API_KEY environment variable is required” error

Check if the variable exists:
upsun variable:list
If missing, create it (see deployment section).

Classification returns unexpected categories

The model might be using its own judgment. Make the prompt more explicit:
const SYSTEM_PROMPT = `...
IMPORTANT: You MUST use EXACTLY one of these categories:
- Billing
- Technical
- Feature Request
- Bug Report
- Account
- General Inquiry

Do not invent new categories or use variations.
...`;

JSON parsing fails

Sometimes the model adds extra text. Make the prompt stricter:
const SYSTEM_PROMPT = `...
CRITICAL: Your response must contain ONLY valid JSON. No explanations, no markdown, no extra text.
Start with { and end with }
...`;
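If a stricter prompt still isn't enough, a pragmatic last-resort fallback is to slice out the outermost braces before parsing. An illustrative helper (not part of the tutorial code; note it only handles a single top-level object):

```typescript
// Recover a JSON object from a response that has stray text around it,
// e.g. 'Sure! {"a": 1} Hope that helps.'
function salvageJson(content: string): unknown {
  const start = content.indexOf("{");
  const end = content.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model response");
  }
  return JSON.parse(content.slice(start, end + 1));
}
```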

High latency

Classification should take 1-3 seconds. If slower:
  1. Check if you’re hitting rate limits
  2. Try a different OpenAI region
  3. Consider caching common ticket patterns

Rate limiting too strict

Adjust in src/rate-limiter.ts:
const MAX_REQUESTS = 50; // More generous

What’s next

You’ve got a ticket classifier that extracts structured data from unstructured text and deploys to Upsun. The same pattern works for other form processing: job applications, feedback forms, bug reports, customer inquiries. Anywhere you have unstructured text that needs structure, this approach works.

Resources

For questions, check the Upsun community forum or open an issue in this repo.
Last modified on March 6, 2026