Your e-commerce site calls an ERP (Enterprise Resource Planning system) to record orders. The ERP is slow today. Users are waiting 8 seconds for checkout to complete. Some of them give up. Others see 502 errors because your PHP-FPM workers are all stuck waiting on that ERP.
You scale up resources. It helps a little. The 502s come back when traffic increases.
The problem isn’t your infrastructure. It’s that your users are waiting for something they don’t need to wait for.
Keep your responses fast
Here’s the pattern we see in support tickets constantly: an application makes an external API call during a user request, and that call is slow. The user waits. The connection stays open. Resources get tied up.
With PHP-FPM, this becomes visible fast. Worker pools are small by design. If each worker is stuck waiting on an external API for 8 seconds, you run out of workers quickly. New requests have nowhere to go. 502 errors start appearing.
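The arithmetic here is worth making explicit. As a back-of-envelope sketch (the pool size of 10 is a hypothetical `pm.max_children` value, not a recommendation):

```python
# Back-of-envelope: how fast a PHP-FPM pool saturates when every
# worker blocks on a slow upstream call.
workers = 10       # hypothetical pm.max_children value
call_seconds = 8   # time each worker spends blocked on the ERP

# Each worker can finish at most one request per ERP call, so the
# pool's throughput ceiling is workers / call_seconds.
max_rps = workers / call_seconds
print(max_rps)  # 1.25 requests per second before new requests pile up
```

At 1.25 requests per second, anything beyond light traffic queues up behind the blocked workers, and the queue overflows into 502s.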
Other stacks handle concurrent connections better. Node.js can juggle thousands of open connections without breaking a sweat. But limits exist everywhere. Your router has a maximum connection count. Linux has file descriptor limits. Push hard enough and you’ll hit something.
The real issue isn’t which limit you hit. It’s that your users are waiting 8 seconds for a checkout page to load. That’s a terrible experience regardless of whether your infrastructure can technically handle it.
And it costs you money. E-commerce sites loading in 1 second convert at 2.5x the rate of sites taking 5 seconds, according to Portent’s research. Amazon found that 100ms of latency cost them 1% in sales. Walmart saw a 2% conversion increase for every second shaved off load time.
Do users actually need to wait?
When your application calls an external API during a request, ask: does the user need that response before they can continue?
For order processing, the answer is usually no.
Think about Amazon. You click “Place order” and immediately see confirmation. But your order hasn’t actually been processed yet. It’s in a queue. A worker will pick it up, validate payment, check inventory, and eventually send a confirmation email. That email might arrive minutes later.
The user doesn’t need to watch your application talk to the ERP in real time. They need to know their order was received. Those are different things.
Offload to a queue
The fix is straightforward. Instead of calling the external API during the request, push the work to a queue and return immediately:
Before:
- User places order
- Application calls ERP (8 seconds)
- Application returns confirmation
After:
- User places order
- Application pushes message to queue (milliseconds)
- Application returns “Order received”
- Worker picks up message
- Worker calls ERP (8 seconds, but nobody is waiting)
- Worker sends confirmation email
Your checkout now takes milliseconds. The ERP integration still works; it just isn't blocking the user anymore.
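The after-flow above can be sketched with Python's standard library: a thread-safe in-memory queue stands in for RabbitMQ or Redis, a background thread plays the worker, and `call_erp` is a stand-in for the slow integration (shortened so the sketch runs quickly).

```python
import queue
import threading
import time

order_queue: "queue.Queue[dict]" = queue.Queue()

def call_erp(order: dict) -> None:
    """Stand-in for the slow ERP call (imagine ~8 seconds here)."""
    time.sleep(0.1)  # shortened for the demo

def worker() -> None:
    """Background consumer: drains the queue, one order at a time."""
    while True:
        order = order_queue.get()
        call_erp(order)          # slow, but nobody is waiting on it
        # ...send the confirmation email here...
        order_queue.task_done()

def place_order(order: dict) -> str:
    """The request handler: enqueue and return immediately."""
    order_queue.put(order)       # milliseconds
    return "Order received"

threading.Thread(target=worker, daemon=True).start()
print(place_order({"id": 123, "total": 49.90}))  # prints "Order received" instantly
order_queue.join()  # demo only: wait for the background worker to finish
```

In production the queue lives in a broker so it survives restarts and can be consumed by a separate worker container, but the shape of the pattern is the same: the request handler only enqueues.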
RabbitMQ is a popular choice and available as a managed service on Upsun. Redis works too, with its Streams or pub/sub features.
Workers vs. crons
Once you have a queue, something needs to process it. Two options:
Workers run continuously, watching the queue. When a message arrives, they process it immediately. On Upsun, workers get their own containers with dedicated resources. More responsive, but they cost more since they’re always running.
Crons run on a schedule and share resources with your main application. A cron might check the queue every 5 minutes and process what’s there. Cheaper, but you get latency (up to 5 minutes before work starts) and potential resource contention if the processing is heavy.
The choice depends on what you’re processing. Order confirmations? Customers expect quick emails, so use workers. Nightly data sync? A cron is fine.
On Upsun, you can also scale worker resources dynamically with the CLI. Some customers use crons to boost worker resources during peak hours and scale them back at night.
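As a rough sketch of what the worker and cron options look like in an Upsun config (the service, relationship, worker, and cron names here are illustrative, and the exact schema and supported versions should be checked against the Upsun docs):

```yaml
# .upsun/config.yaml (illustrative fragment)
services:
  queue:
    type: rabbitmq:3.13   # version is an example; use a supported one

applications:
  app:
    # ... type, source, web, etc. ...
    relationships:
      queue: "queue:rabbitmq"
    workers:
      order-consumer:
        commands:
          # long-running process that consumes the queue continuously
          start: php worker.php
    crons:
      nightly-sync:
        spec: "0 2 * * *"   # every night at 02:00
        commands:
          start: php sync.php
```

The worker gets its own container and reacts to messages immediately; the cron shares the application container and only runs on its schedule.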
When to use this pattern
This applies when:
- Your application calls external APIs during user requests
- Those calls are slow or unreliable
- Users don’t actually need the result before continuing
Common cases:
- E-commerce → ERP: Order data syncs, but not during checkout
- Notifications: Emails, SMS, push notifications should never block requests
- Third-party integrations: Analytics, CRM updates, shipping rate calculations
- Report generation: Users can download reports later
If users genuinely need the external API’s response before they can proceed, you’re stuck waiting on it. But that’s less common than you might think.
The takeaway
Slow external API calls during user requests cause two problems: bad user experience and infrastructure strain. Adding resources helps marginally, but you’re treating symptoms.
The fix is to stop making users wait for things they don’t need to wait for. Push that work to a queue. Return fast. Process in the background. Send a confirmation when it’s done.
Upsun provides RabbitMQ, Redis, and workers for exactly this pattern.
Last modified on April 14, 2026