This tutorial covers building an AI news digest where a Node.js frontend takes user requests and a Python worker does the slow work of fetching RSS feeds and calling OpenAI. The two apps communicate through Redis. You’ll see how Upsun workers handle background processing, how apps share services, and how multi-app deployment works.

What you’re building

Users pick some topics, click a button, and get a summary of recent news. Behind the scenes: the frontend drops a job into Redis, a Python worker grabs it, fetches a bunch of RSS feeds, sends the articles to OpenAI, and stores the result. The frontend polls until it’s done.

Why split it up like this? The web app never blocks. Fetching RSS feeds is slow. Calling OpenAI is slower. If you did that inline, users would stare at a spinner for 20 seconds. With a worker, the request returns immediately and processing happens in the background.

Workers are not the same as cron jobs. Cron runs on a schedule: every hour, daily at 8am. Workers run all the time, grabbing tasks as they show up. Pick workers when users want results fast. Pick cron when nobody is waiting.

Architecture

┌─────────────────┐     ┌─────────────┐     ┌─────────────────┐
│   Frontend      │────▶│    Redis    │◀────│   Worker        │
│   (Node.js)     │     │   (Queue)   │     │   (Python)      │
│                 │     └─────────────┘     │                 │
│  - Serves UI    │                         │  - Polls queue  │
│  - REST API     │                         │  - Fetches RSS  │
│  - Queues jobs  │                         │  - AI summary   │
└─────────────────┘                         └─────────────────┘
        │                                           │
        └───────────── Both deployed ───────────────┘
                      as one project
The frontend and worker live in the same Upsun project but run as separate containers. They share Redis through relationships. Only the frontend gets HTTP routes.

Prerequisites

You need Node.js 22+, Python 3.12+, Docker (for running Redis locally), an OpenAI API key from platform.openai.com, the Upsun CLI (docs.upsun.com/administration/cli), and Git.

Project structure

Upsun multi-app projects need each app in its own directory:
04-news-digest/
├── frontend/               # Node.js app
│   ├── src/
│   │   └── index.ts
│   ├── public/
│   │   └── index.html
│   ├── package.json
│   └── tsconfig.json
├── worker/                 # Python worker
│   ├── main.py
│   └── requirements.txt
├── .upsun/
│   └── config.yaml         # Multi-app configuration
└── README.md

Building the frontend

1. Initialize the project

mkdir -p 04-news-digest/frontend
cd 04-news-digest/frontend
npm init -y
Install dependencies:
npm install express cors dotenv ioredis uuid
npm install -D typescript @types/node @types/express @types/cors @types/uuid tsx
express runs the web server, ioredis talks to Redis, uuid generates job IDs. TypeScript and tsx are for development.

2. Configure TypeScript

Create tsconfig.json:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "declaration": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}
Update package.json:
{
  "type": "module",
  "scripts": {
    "dev": "tsx watch src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}

3. Create the Express server

Create src/index.ts:
import "dotenv/config";
import { dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
import cors from "cors";
import express from "express";
import { Redis } from "ioredis";
import { v4 as uuidv4 } from "uuid";

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const PORT = Number.parseInt(process.env.PORT || "3000", 10);

// Parse Redis connection from Upsun relationship or use local
function getRedisConfig(): { host: string; port: number } {
  const relationships = process.env.PLATFORM_RELATIONSHIPS;
  if (relationships) {
    const parsed = JSON.parse(Buffer.from(relationships, "base64").toString());
    if (parsed.redis?.[0]) {
      return {
        host: parsed.redis[0].host,
        port: parsed.redis[0].port,
      };
    }
  }
  return { host: "localhost", port: 6379 };
}

const redisConfig = getRedisConfig();
const redis = new Redis(redisConfig);

console.log(`[redis] Connecting to ${redisConfig.host}:${redisConfig.port}`);

const app = express();
app.use(cors());
app.use(express.json({ limit: "50kb" }));

// Serve static frontend
app.use(express.static(resolve(__dirname, "../public")));

// Request a new digest
app.post("/api/digest", async (req, res) => {
  const { topics } = req.body;
  const jobId = uuidv4();

  const job = {
    id: jobId,
    topics: topics || ["technology", "business", "science"],
    status: "pending",
    createdAt: new Date().toISOString(),
  };

  // Queue the job for the worker
  await redis.lpush("digest:queue", JSON.stringify(job));
  await redis.set(`digest:job:${jobId}`, JSON.stringify(job), "EX", 3600);

  console.log(`[api] Queued digest job ${jobId} with topics: ${job.topics.join(", ")}`);

  res.json({ jobId, status: "pending" });
});

// Get digest status/result
app.get("/api/digest/:jobId", async (req, res) => {
  const { jobId } = req.params;
  const jobData = await redis.get(`digest:job:${jobId}`);

  if (!jobData) {
    return res.status(404).json({ error: "Job not found" });
  }

  const job = JSON.parse(jobData);
  res.json(job);
});

// Get latest completed digest
app.get("/api/digest", async (req, res) => {
  const latestId = await redis.get("digest:latest");
  if (!latestId) {
    return res.json({ status: "none", message: "No digests available yet" });
  }

  const jobData = await redis.get(`digest:job:${latestId}`);
  if (!jobData) {
    return res.json({ status: "none", message: "No digests available yet" });
  }

  res.json(JSON.parse(jobData));
});

app.get("/health", (_req, res) => {
  res.json({ status: "ok" });
});

app.listen(PORT, () => {
  console.log(`[server] News digest frontend running on port ${PORT}`);
});
The getRedisConfig() function handles the Upsun-specific bit. On Upsun, service credentials arrive in PLATFORM_RELATIONSHIPS, base64 encoded. We decode it to get the Redis host and port. Locally, it falls back to localhost:6379. This pattern gets old if you have many services, but it works.

Three endpoints: POST /api/digest queues a new job, GET /api/digest/:jobId checks status and returns the result, and GET /api/digest returns the most recent completed digest.
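The worker repeats this same decode in Python. If you want to see the shape of the payload without deploying, here’s a standalone sketch that simulates one (the redis.internal hostname and the exact field set are illustrative; real payloads carry more fields per endpoint, such as scheme and credentials):

```python
import base64
import json
import os

# A simulated PLATFORM_RELATIONSHIPS blob. The shape is illustrative;
# real payloads include extra fields for each endpoint.
payload = {"redis": [{"host": "redis.internal", "port": 6379}]}
os.environ["PLATFORM_RELATIONSHIPS"] = base64.b64encode(
    json.dumps(payload).encode()
).decode()

# The same decode both apps perform: base64 -> JSON -> first endpoint.
parsed = json.loads(base64.b64decode(os.environ["PLATFORM_RELATIONSHIPS"]))
endpoint = parsed["redis"][0]
print(f"{endpoint['host']}:{endpoint['port']}")  # redis.internal:6379
```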

4. Create the frontend UI

Create public/index.html. The full file is in the repository. It has topic selection buttons, a generate button, polling logic, and markdown rendering. The design is based on the chat interface from the LangChain chatbot tutorial (dark theme, lime accents, Space Grotesk), adapted for the digest workflow instead of a streaming chat. The polling is basic:
// Submit job
fetch("/api/digest", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ topics: selectedTopics })
})
.then(res => res.json())
.then(data => {
  currentJobId = data.jobId;
  // Start polling
  pollInterval = setInterval(pollJobStatus, 2000);
});

// Poll for completion
function pollJobStatus() {
  fetch("/api/digest/" + currentJobId)
    .then(res => res.json())
    .then(job => {
      if (job.status === "completed") {
        clearInterval(pollInterval);
        showDigest(job);
      }
    });
}
Every 2 seconds, check if the job finished. When it does, stop polling and show the result. You could use WebSockets or Server-Sent Events for real-time updates, but polling is fine for a demo and much simpler to debug.

Building the worker

1. Initialize the Python project

cd ../worker
Create requirements.txt:
redis>=5.0.0
openai>=1.0.0
feedparser>=6.0.0
httpx>=0.27.0
python-dotenv>=1.0.0
redis for the queue, openai for summaries, feedparser for RSS parsing, httpx for HTTP requests (I like it better than requests), python-dotenv for local env files.

2. Create the worker script

Create main.py:
#!/usr/bin/env python3
"""
News Digest Worker

Continuously polls Redis for digest requests, fetches news from RSS feeds,
and uses OpenAI to generate AI-powered summaries.
"""

import base64
import json
import os
import time
from datetime import datetime, timezone

import feedparser
import httpx
import redis
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

# RSS feeds for different topics
RSS_FEEDS = {
    "technology": [
        "https://feeds.arstechnica.com/arstechnica/technology-lab",
        "https://www.theverge.com/rss/index.xml",
    ],
    "business": [
        "https://feeds.bloomberg.com/markets/news.rss",
        "https://www.ft.com/?format=rss",
    ],
    "science": [
        "https://www.sciencedaily.com/rss/all.xml",
        "https://www.nature.com/nature.rss",
    ],
    "world": [
        "https://feeds.bbci.co.uk/news/world/rss.xml",
        "https://rss.nytimes.com/services/xml/rss/nyt/World.xml",
    ],
}


def get_redis_config() -> dict:
    """Get Redis connection config from Upsun relationship or environment."""
    relationships = os.environ.get("PLATFORM_RELATIONSHIPS")
    if relationships:
        parsed = json.loads(base64.b64decode(relationships).decode())
        if parsed.get("redis"):
            return {
                "host": parsed["redis"][0]["host"],
                "port": parsed["redis"][0]["port"],
            }
    return {"host": os.environ.get("REDIS_HOST", "localhost"), "port": 6379}


def fetch_feed(url: str, timeout: float = 10.0) -> list[dict]:
    """Fetch and parse an RSS feed, returning list of articles."""
    try:
        with httpx.Client(timeout=timeout, follow_redirects=True) as client:
            response = client.get(url, headers={"User-Agent": "NewsDigestBot/1.0"})
            response.raise_for_status()

        feed = feedparser.parse(response.text)
        articles = []

        for entry in feed.entries[:5]:  # Limit to 5 per feed
            articles.append({
                "title": entry.get("title", "Untitled"),
                "link": entry.get("link", ""),
                "summary": entry.get("summary", entry.get("description", ""))[:500],
                "published": entry.get("published", ""),
            })

        return articles
    except Exception as e:
        print(f"[worker] Failed to fetch {url}: {e}")
        return []


def fetch_news_for_topics(topics: list[str]) -> dict[str, list[dict]]:
    """Fetch news articles for the given topics."""
    news_by_topic = {}

    for topic in topics:
        feeds = RSS_FEEDS.get(topic, [])
        articles = []

        for feed_url in feeds:
            articles.extend(fetch_feed(feed_url))

        news_by_topic[topic] = articles[:10]  # Limit to 10 articles per topic
        print(f"[worker] Fetched {len(news_by_topic[topic])} articles for {topic}")

    return news_by_topic


def generate_digest(news_by_topic: dict[str, list[dict]], openai_client: OpenAI) -> str:
    """Use OpenAI to generate a digest summary from the collected news."""
    # Build the prompt with news content
    news_content = ""
    for topic, articles in news_by_topic.items():
        news_content += f"\n## {topic.upper()}\n"
        for article in articles:
            news_content += f"- **{article['title']}**\n  {article['summary'][:200]}...\n"

    prompt = f"""You are a news editor creating a daily digest. Based on the following news articles,
create a concise, engaging summary organized by topic. Highlight the most important stories
and provide brief analysis where relevant.

Format your response in Markdown with clear headers for each topic section.
Include 2-3 key takeaways at the end.

NEWS ARTICLES:
{news_content}

Create a professional news digest:"""

    try:
        response = openai_client.chat.completions.create(
            model=os.environ.get("OPENAI_MODEL", "gpt-4o-mini"),
            messages=[
                {"role": "system", "content": "You are a professional news editor."},
                {"role": "user", "content": prompt},
            ],
            max_tokens=2000,
            temperature=0.7,
        )
        return response.choices[0].message.content or "Failed to generate digest."
    except Exception as e:
        print(f"[worker] OpenAI error: {e}")
        return f"Error generating digest: {e}"


def process_job(job: dict, redis_client: redis.Redis, openai_client: OpenAI) -> None:
    """Process a single digest job."""
    job_id = job["id"]
    topics = job.get("topics", ["technology", "business", "science"])

    print(f"[worker] Processing job {job_id} with topics: {topics}")

    # Update job status to processing
    job["status"] = "processing"
    job["startedAt"] = datetime.now(timezone.utc).isoformat()
    redis_client.set(f"digest:job:{job_id}", json.dumps(job), ex=3600)

    # Fetch news
    news_by_topic = fetch_news_for_topics(topics)

    # Generate digest with AI
    digest_content = generate_digest(news_by_topic, openai_client)

    # Update job with result
    job["status"] = "completed"
    job["completedAt"] = datetime.now(timezone.utc).isoformat()
    job["digest"] = digest_content
    job["articleCount"] = sum(len(articles) for articles in news_by_topic.values())

    redis_client.set(f"digest:job:{job_id}", json.dumps(job), ex=3600)
    redis_client.set("digest:latest", job_id, ex=3600)

    print(f"[worker] Completed job {job_id} with {job['articleCount']} articles")


def main():
    """Main worker loop."""
    openai_key = os.environ.get("OPENAI_API_KEY")
    if not openai_key:
        print("[worker] ERROR: OPENAI_API_KEY environment variable is required")
        return

    redis_config = get_redis_config()
    print(f"[worker] Connecting to Redis at {redis_config['host']}:{redis_config['port']}")

    redis_client = redis.Redis(**redis_config, decode_responses=True)
    openai_client = OpenAI(api_key=openai_key)

    print("[worker] News digest worker started, waiting for jobs...")

    while True:
        try:
            # Block waiting for jobs (timeout after 30 seconds to check for shutdown)
            result = redis_client.brpop("digest:queue", timeout=30)

            if result:
                _, job_data = result
                job = json.loads(job_data)
                process_job(job, redis_client, openai_client)

        except redis.ConnectionError as e:
            print(f"[worker] Redis connection error: {e}")
            time.sleep(5)
        except KeyboardInterrupt:
            print("[worker] Shutting down...")
            break
        except Exception as e:
            print(f"[worker] Error: {e}")
            time.sleep(1)


if __name__ == "__main__":
    main()
The worker uses brpop (blocking right pop) to wait for jobs. This is better than polling in a loop because Redis wakes the worker only when something arrives. The 30-second timeout lets us check for shutdown signals.

The job flow: pop from digest:queue, update status to “processing”, fetch RSS feeds, call OpenAI, store the result and mark it “completed”, then save the job ID as the latest.
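If the LPUSH-then-BRPOP pairing looks backwards, note that LPUSH adds at the head of the list and BRPOP removes from the tail, so together they behave as a FIFO queue. A quick sketch with a plain deque (not real Redis, just illustrating the ordering):

```python
from collections import deque

# Simulate the queue: LPUSH appends at the head, BRPOP pops from the tail.
queue: deque[str] = deque()
for job_id in ("job-1", "job-2", "job-3"):
    queue.appendleft(job_id)  # LPUSH digest:queue <job>

processed = [queue.pop() for _ in range(len(queue))]  # BRPOP digest:queue
print(processed)  # ['job-1', 'job-2', 'job-3'] -- first in, first out
```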

Local development

Start Redis

docker run -d -p 6379:6379 redis:7

Set up the frontend

cd frontend
npm install
Create .env:
PORT=3000
Run the dev server:
npm run dev

Set up the worker

cd worker
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Create .env:
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini
Run the worker:
python main.py

Test the app

Open http://localhost:3000, pick some topics, click “Generate Digest”. Both terminals should light up: the frontend logging the queued job, the worker logging article fetches and completion. If nothing happens, check that Redis is running and both apps can connect.

Deploying to Upsun

Multi-app configuration

The .upsun/config.yaml file defines both apps and how they share Redis:
applications:
  # Node.js frontend application
  frontend:
    source:
      root: frontend
    type: "nodejs:22"

    build:
      flavor: none

    hooks:
      build: |
        set -e
        npm install
        npm run build

    web:
      commands:
        start: "node dist/index.js"
      locations:
        /:
          passthru: true
          allow: false
          scripts: false
          rules:
            \.(css|js|gif|jpe?g|png|svg|ico|woff2?|ttf|eot)$:
              allow: true

    relationships:
      redis: "redis:redis"

    mounts:
      "/.npm":
        source: storage
        source_path: npm_cache

  # Python worker application
  worker:
    source:
      root: worker
    type: "python:3.12"

    build:
      flavor: none

    hooks:
      build: |
        set -e
        pip install -r requirements.txt

    # Workers don't serve HTTP, they run as background processes
    workers:
      digest:
        commands:
          start: "python main.py"

    relationships:
      redis: "redis:redis"

    mounts:
      "/.cache":
        source: storage
        source_path: pip_cache

services:
  redis:
    type: "redis:8.0"

routes:
  "https://{default}/":
    type: upstream
    upstream: "frontend:http"
  "https://www.{default}/":
    type: redirect
    to: "https://{default}/"
Each app has its own source.root directory. The frontend app uses web: because it serves HTTP. The worker app uses workers: because it runs in the background, no HTTP. Both have relationships.redis pointing to the same Redis service; that’s how they talk to each other. Routes only point to frontend:http since workers don’t get public URLs.

The one thing that confused me at first: the worker application contains a workers: block. So you have a worker app that defines workers. The naming is a bit circular, but it makes sense once you see it.

Initialize Git

cd 04-news-digest
git init
git add .
git commit -m "Initial commit: News digest with workers"

Create Upsun project

upsun login
upsun project:create
Follow prompts for organization, project name, region, and plan.

Set the OpenAI API key

upsun variable:create \
  --level project \
  --name env:OPENAI_API_KEY \
  --value "sk-your-actual-key-here" \
  --sensitive true \
  --visible-build false \
  --visible-runtime true
The --sensitive true flag encrypts the value, so it won’t show up in logs or the UI.

Deploy

upsun push
Watch the build logs. Both apps build separately: npm install and tsc for the frontend, pip install for the worker. When done, Upsun starts both containers. The frontend takes traffic, the worker waits for jobs.

Access your app

upsun url
Opens in your browser.

Testing

Try the full flow: open the app, select topics, click “Generate Digest”, watch it go from “pending” to “processing” to “completed”, read the summary. Check logs from both apps:
upsun logs --app frontend --tail
upsun logs --app worker --tail
Worker logs show the job moving through:
[worker] Processing job abc-123 with topics: ['technology', 'science']
[worker] Fetched 8 articles for technology
[worker] Fetched 6 articles for science
[worker] Completed job abc-123 with 14 articles

Customization

Add more topics

Edit RSS_FEEDS in worker/main.py:
RSS_FEEDS = {
    # ... existing topics ...
    "sports": [
        "https://www.espn.com/espn/rss/news",
        "https://rss.nytimes.com/services/xml/rss/nyt/Sports.xml",
    ],
    "entertainment": [
        "https://variety.com/feed/",
        "https://www.hollywoodreporter.com/feed/",
    ],
}
Then add matching buttons in the frontend. Some feeds are flaky, so test them locally first.

Change the AI model

upsun variable:create --level project --name env:OPENAI_MODEL --value "gpt-4o"
gpt-4o-mini is cheap and fast. gpt-4o is smarter but costs more.

Adjust digest format

Edit the prompt in generate_digest():
prompt = """You are a news editor creating an executive briefing.
Summarize in 3-5 bullet points per topic.
Focus on business impact and action items.
Keep it under 500 words total."""

Scale the worker

If one worker can’t keep up, add more. In .upsun/config.yaml:
workers:
  digest:
    commands:
      start: "python main.py"
    size: M
    count: 2
Two workers compete for jobs from the same queue. Jobs get processed in parallel. Be careful with rate limits on RSS feeds and OpenAI if you scale too much.

Add scheduled digests

Want an automatic digest every morning? Add a cron alongside the worker:
workers:
  digest:
    commands:
      start: "python main.py"

crons:
  morning_digest:
    spec: "0 8 * * *"
    commands:
      start: "python schedule_digest.py"
Create schedule_digest.py to push a job with all topics.
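Here’s a minimal sketch of what schedule_digest.py could look like, assuming the same queue name and job fields the frontend uses in src/index.ts (the helper names build_job and queue_digest are made up for the sketch):

```python
#!/usr/bin/env python3
"""Queue a digest job covering every topic, for the morning_digest cron."""

import json
import uuid
from datetime import datetime, timezone

ALL_TOPICS = ["technology", "business", "science", "world"]


def build_job(topics: list[str]) -> dict:
    """Mirror the job object the frontend creates in POST /api/digest."""
    return {
        "id": str(uuid.uuid4()),
        "topics": topics,
        "status": "pending",
        "createdAt": datetime.now(timezone.utc).isoformat(),
    }


def queue_digest(client, topics: list[str]) -> str:
    """Push a job onto digest:queue and record it, like the frontend does."""
    job = build_job(topics)
    client.lpush("digest:queue", json.dumps(job))
    client.set(f"digest:job:{job['id']}", json.dumps(job), ex=3600)
    return job["id"]
```

In the real script you’d connect with redis.Redis(**get_redis_config(), decode_responses=True), reusing the helper from the worker’s main.py, then call queue_digest(client, ALL_TOPICS).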

Workers vs cron jobs

Workers make sense when users are waiting. They also work well when jobs arrive unpredictably or take variable time. Cron makes sense for scheduled tasks where nobody is watching. Daily reports, hourly syncs, that kind of thing. This example uses workers because someone clicks a button and wants results. A cron job would only generate digests at predetermined times.

Troubleshooting

Worker not picking up jobs

Check if it’s actually running:
upsun ssh --app worker
ps aux | grep python
If it’s running but not processing, check Redis connectivity. Note that a bare redis.Redis() defaults to localhost, which doesn’t exist on Upsun, so decode the relationship first:
upsun ssh --app worker
python -c "import base64, json, os, redis; cfg = json.loads(base64.b64decode(os.environ['PLATFORM_RELATIONSHIPS']))['redis'][0]; print(redis.Redis(host=cfg['host'], port=cfg['port']).ping())"

“OPENAI_API_KEY is required” error

upsun variable:list
If missing, create it (see deployment section).

Digest takes too long

Some RSS feeds are slow or broken. Check logs:
upsun logs --app worker | grep "Failed to fetch"
Remove problematic feeds or bump the timeout in fetch_feed().
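If the feeds are just slow rather than broken, another option is fetching them concurrently instead of one at a time. A sketch with a thread pool; fetch_feed here is a stand-in for the real one in main.py:

```python
from concurrent.futures import ThreadPoolExecutor


def fetch_feed(url: str) -> list[dict]:
    # Stand-in for the real fetch_feed() in main.py.
    return [{"title": f"article from {url}", "link": url}]


def fetch_feeds_concurrently(urls: list[str], max_workers: int = 4) -> list[dict]:
    """Fetch several RSS feeds in parallel; results come back in URL order."""
    articles: list[dict] = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for result in pool.map(fetch_feed, urls):
            articles.extend(result)
    return articles
```

In main.py you’d call this from fetch_news_for_topics with a topic’s feed URLs; keep max_workers modest to stay polite to the feed servers.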

Jobs stuck in “processing”

The worker probably crashed. Check logs:
upsun logs --app worker
Jobs expire after 1 hour anyway. Or clear manually:
upsun redis:cli
DEL digest:job:stuck-job-id

Build fails for worker

Usually a Python version issue. If a dependency needs an older version, set type: "python:3.11" in the config.

Wrapping up

You now have a Node.js frontend for users and a Python worker for background processing, with Redis in the middle. The frontend never blocks on slow operations. Same pattern works for image processing, PDF generation, sending emails, data imports, any AI inference. Keep the web app fast, move the slow stuff to workers.

Resources

For questions, check the Upsun community forum or open an issue in this repo.
Last modified on March 10, 2026