I don’t know about you, but one task I have always dreaded on every critical project is creating the load-test performance scenarios. Whether with Apache JMeter or another tool, I’ve spent countless hours creating and debugging them: handling cookies, customer login, and so on. When I needed to run another test on a new website, I started exploring the idea of using Claude Code to generate the tests. It turns out you can automate this process by connecting two Model Context Protocol (MCP) servers, Chrome MCP and Context7, to generate comprehensive test scenarios that match actual user navigation patterns.

How MCPs automate scenario generation

Model Context Protocol (MCP) servers connect AI assistants like Claude Code to external data sources and tools. By combining two MCPs, you can automate load test creation:
  1. Chrome MCP - Analyzes your live application by browsing pages, extracting navigation patterns, and understanding user flows
  2. Context7 MCP - Retrieves current Locust documentation to generate syntactically correct test code
Here’s how the workflow connects: Chrome MCP explores the live application, Context7 supplies current Locust documentation, and Claude Code combines both to produce the test file.
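A rough sketch of how the pieces fit together:

```
                 ┌──────────────┐
Live site ─────▶ │  Chrome MCP  │──┐
                 └──────────────┘  │   ┌─────────────┐
                                   ├─▶ │ Claude Code │──▶ locustfile.py
                 ┌──────────────┐  │   └─────────────┘
Locust docs ───▶ │ Context7 MCP │──┘
                 └──────────────┘
```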

Setting up your environment

Before generating tests, configure both MCP servers in your Claude Code environment.

Install Chrome MCP

The Chrome MCP lets Claude Code control a Chrome browser to explore your application:
claude mcp add -t stdio chrome-devtools npx chrome-devtools-mcp@latest --scope user
The --scope user option makes the MCP available globally, across all your projects.

Configure Context7 MCP

Context7 provides access to technical documentation:
claude mcp add --transport http context7 https://mcp.context7.com/mcp --header "CONTEXT7_API_KEY: YOUR_API_KEY"  --scope user
Note: The API key is optional and only required for heavy Context7 usage.
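Under the hood, `claude mcp add` records each server in Claude Code’s settings. As an illustration only (the exact file location and schema may vary between Claude Code versions), the resulting entries look roughly like this:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "type": "stdio",
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    },
    "context7": {
      "type": "http",
      "url": "https://mcp.context7.com/mcp",
      "headers": { "CONTEXT7_API_KEY": "YOUR_API_KEY" }
    }
  }
}
```

You can confirm both servers are registered with `claude mcp list`.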

Install the Locust tool

Set up Locust in a virtual environment to keep dependencies isolated:
# Create a virtual environment
python3 -m venv locust-env

# Activate it (macOS/Linux)
source locust-env/bin/activate

# Or on Windows
# locust-env\Scripts\activate

# Install Locust
pip install locust

# Verify installation
locust --version
Keep this environment activated when running your load tests. You can deactivate it later with deactivate.

Generating load tests for an e-commerce site

Let’s generate Locust tests for an e-commerce platform. This example demonstrates how Claude Code uses both MCPs to create realistic user scenarios. We’ll use the Magento Association website as an example.

Upsun resources for preview environments

Before running load tests, configure your test environment properly. Upsun now lets you customize resources for preview environments, which makes it a good fit for realistic load testing without impacting production: you can temporarily provision a preview environment with the same resource levels as production, just for the duration of your tests.

This means no more running tests against production at night, or hoping staging accurately reflects production performance. Spin up a preview environment with production-equivalent CPU, memory, and storage, run your comprehensive load tests, analyze the results, and then scale it back down, all without risking your live site.

Prompt Claude Code

Start by describing what you need:
Analyze the e-commerce site at https://example-shop.com/ and generate Locust 
load test scenarios. Create three user types:

1. Browser - goes to the home page, views products, and searches, but doesn't buy (70% of users)
2. Campaign - goes directly to a product page (5% of users)
3. Shopper - adds items to the cart and reviews the cart (25% of users)

Use Chrome MCP to explore the site navigation and Context7 to get current 
Locust documentation. Generate a complete locustfile.py with realistic timing.

Once done, give me the instructions to run the test.

What Claude Code does

Claude Code executes this workflow:
  1. Analyzes the site structure using Chrome MCP:
    • Navigates through product categories
    • Identifies search functionality
    • Maps the checkout process
    • Notes authentication requirements
  2. Retrieves Locust documentation via Context7:
    • Gets syntax for HttpUser classes
    • Checks task decorator usage
    • Reviews wait_time configurations
    • Verifies weight attribute usage
  3. Generates the test file combining both sources of information

Example generated output

The generated test creates three distinct user types with different behaviors and weights. You can review the generated locustfile.py on GitHub.

Running your generated tests

Once you have the locustfile, start testing:
# Install Locust if needed
pip install locust

# Run with web interface
locust -f locustfile.py --host=https://example-shop.com

# Run headless for CI/CD
locust -f locustfile.py \
  --host=https://example-shop.com \
  --users=100 \
  --spawn-rate=10 \
  --run-time=5m \
  --headless
Access the web interface at http://localhost:8089 to monitor test execution, view statistics, and adjust user counts.

What’s next?

Generating the test scenarios is just the beginning; Claude Code can continue assisting throughout your entire load testing workflow. Once your tests are running, you can ask Claude Code to analyze the Locust output, identify performance bottlenecks from the statistics, and interpret failure patterns. Simply share the results with Claude Code and request an analysis.

Beyond diagnostics, Claude Code can suggest specific code optimizations based on your test results. If certain endpoints show high response times or failure rates, Claude Code can examine your application code, propose caching strategies, recommend database query optimizations, or suggest architectural improvements. This closes the loop from test generation to actionable performance improvements, all within a single AI-powered workflow.
Ready to automate your load testing? Create a free Upsun account to deploy and run your generated Locust tests in isolated environments with full observability.
Last modified on April 14, 2026