Every environment on Upsun gets HTTPS. Production, staging, that branch you created 10 minutes ago to test a database migration. No configuration, no waiting. Certificates appear automatically.
Making that work across thousands of projects and tens of thousands of environments requires a layered system. It started with wildcard certificates, hit DNS limits we didn’t anticipate, and ended up relying on per-project Let’s Encrypt automation with its own constraints.
The starting point: wildcard certificates on entrypoints
Each Upsun region has entrypoint nodes that handle incoming traffic. These entrypoints hold wildcard TLS certificates for the region’s domain, something like *.eu-5.platformsh.site. When you create an environment, it gets a subdomain under that wildcard, and the certificate covers it automatically.
This works well for simple cases. If your environment URL is main.eu-5.platformsh.site, the wildcard matches and you get HTTPS with zero provisioning delay.
But wildcard certificates only match a single DNS label. *.example.com covers foo.example.com but not foo.bar.example.com. And DNS labels have a hard length limit. We ran into both constraints.
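The single-label rule is easy to sketch. This is a simplification of RFC 6125 wildcard matching, not Upsun’s actual router code:

```python
def wildcard_covers(pattern: str, hostname: str) -> bool:
    """Return True if a wildcard certificate name covers a hostname.

    Per RFC 6125, "*" stands in for exactly one DNS label, so
    "*.example.com" covers "foo.example.com" but never
    "foo.bar.example.com".
    """
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    labels = hostname.lower().split(".")
    # The wildcard consumes the first label; the rest must match exactly.
    return len(labels) >= 2 and labels[0] != "" and ".".join(labels[1:]) == pattern[2:].lower()

print(wildcard_covers("*.example.com", "foo.example.com"))      # True
print(wildcard_covers("*.example.com", "foo.bar.example.com"))  # False
```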
Where the wildcard breaks down
Each environment gets a URL like 7smqkhc-abcdefgh1234567.eu-5.platformsh.site. That’s a single label before the region domain, and the wildcard covers it.
The problem is routes. In your .upsun/config.yaml, you can define routes like https://api.{default} or https://admin.{default}. These prepend an extra label to the environment URL, producing something like api.7smqkhc-abcdefgh1234567.eu-5.platformsh.site. The wildcard for *.eu-5.platformsh.site doesn’t cover that because there are now 2 labels before the region domain, not 1.
There’s also the label length limit. DNS labels (the parts between dots) can be at most 63 characters, per RFC 1035. Early on, we tried encoding more information into a single label using triple-dash separators: feature-login-redesign---main---abcdefgh1234567.eu-5.platformsh.site. This kept everything under the wildcard, but with longer branch names, we’d blow past the 63-character ceiling and DNS would refuse the domain entirely.
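A quick way to see the ceiling. The `encoded_label` helper below is a hypothetical reconstruction of the old triple-dash scheme, not the real encoder:

```python
MAX_LABEL_LEN = 63  # RFC 1035: a single DNS label is at most 63 octets

def encoded_label(branch: str, parent: str, project: str) -> str:
    # Hypothetical reconstruction of the old triple-dash encoding.
    return f"{branch}---{parent}---{project}"

short_label = encoded_label("fix-typo", "main", "abcdefgh1234567")
long_label = encoded_label("feature-checkout-flow-experimental-redesign", "main", "abcdefgh1234567")

print(len(short_label), len(short_label) <= MAX_LABEL_LEN)  # fits
print(len(long_label), len(long_label) <= MAX_LABEL_LEN)    # too long: DNS refuses the name
```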
Both problems point to the same conclusion: we can’t rely on a single wildcard certificate. We needed per-environment certificates that cover the exact set of domains each environment uses. That’s where Let’s Encrypt comes in.
Per-project Let’s Encrypt automation
We use Let’s Encrypt to provision certificates for each project’s environments. The mapping is straightforward: one Let’s Encrypt account per project, one certificate per environment. When you deploy an environment, the system determines which domains that environment needs and provisions a certificate covering them.
The certificate selection process runs on every deployment. It follows a priority order:
- If you’ve uploaded your own certificates (for custom domains), those are used first. The system picks certificates that cover the most domains, then favors longer validity.
- If any domains remain uncovered, a new Let’s Encrypt certificate is provisioned covering those domains.
The goal is one certificate per environment covering all its domains. User-provided certificates take priority for the domains they list, and any domains left uncovered are bundled into a single provisioned Let’s Encrypt certificate. This matters because Let’s Encrypt has hard limits on how many certificates you can request.
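That priority order can be sketched as a greedy selection. The data shapes here are assumed for illustration; the real selection logic is more involved:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Cert:
    domains: frozenset
    not_after: datetime
    user_provided: bool

def select(needed: set, available: list, now: datetime):
    """Greedy sketch: user-provided certs first, preferring widest
    coverage of still-uncovered domains, breaking ties on longer
    validity; leftover domains are flagged for Let's Encrypt."""
    uncovered = set(needed)
    chosen = []
    candidates = [c for c in available if c.user_provided and c.not_after > now]
    while uncovered and candidates:
        best = max(candidates, key=lambda c: (len(uncovered & c.domains), c.not_after))
        if not uncovered & best.domains:
            break
        chosen.append(best)
        uncovered -= best.domains
        candidates.remove(best)
    return chosen, uncovered  # `uncovered` would go into one new LE cert
```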
Custom certificates and the expiration boundary
You can upload your own certificates for custom domains through the console or API. When you do, the system treats them differently from automatically provisioned ones.
The most important difference is that user-provided certificates are never automatically renewed or deleted. The system doesn’t have access to your CA account or control over your DNS, so it has no way to request a replacement on your behalf. Renewal is your responsibility.
What the system does do is track expiration. A user-provided certificate is considered valid until 1 week before it expires. Once it crosses that threshold, the system treats the domain as uncovered and tries to provision a Let’s Encrypt certificate for it. If your custom cert covers shop.example.com and it’s about to expire, the system will automatically request an LE cert for that domain as a fallback. This acts as a safety net, but there are caveats. You’ll get a Let’s Encrypt DV certificate instead of your original one, which matters if you had an OV or EV certificate. And if your domain has a CAA record that doesn’t include Let’s Encrypt, the fallback provisioning will fail entirely.
For automatically provisioned certificates, the threshold is more conservative: 4 weeks before expiry. Let’s Encrypt certificates are valid for 90 days (roughly 12 weeks), so we start renewal with about a third of the certificate’s lifetime still remaining. That gives plenty of runway to handle transient failures, rate limits, or DNS propagation delays.
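The two thresholds amount to a small validity check. A minimal sketch, with illustrative names:

```python
from datetime import datetime, timedelta

USER_PROVIDED_MARGIN = timedelta(weeks=1)    # user certs: valid until 1 week before expiry
AUTO_PROVISIONED_MARGIN = timedelta(weeks=4)  # LE certs: renew 4 weeks before expiry

def still_valid(not_after: datetime, user_provided: bool, now: datetime) -> bool:
    """A cert counts as valid only while it is clear of its margin;
    past that point the system treats its domains as uncovered."""
    margin = USER_PROVIDED_MARGIN if user_provided else AUTO_PROVISIONED_MARGIN
    return now < not_after - margin
```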
There’s also a fallback of last resort. If Let’s Encrypt provisioning fails entirely (ACME challenge can’t complete, rate limits hit, DNS misconfigured), the system reuses an expired certificate rather than serving no certificate at all. An expired cert triggers a browser warning, which is bad, but it’s better than a connection failure that gives the user nothing actionable.
What goes into a certificate
A typical certificate for an Upsun environment covers multiple domains. Consider a project whose routes define both https://{default} and https://www.{default}, with a custom domain attached. The certificate for the production environment might list these domains in its Subject Alternative Names (SAN):
DNS: example.com
DNS: www.example.com
A preview branch with the same route configuration would get its own certificate covering the platform domains:
DNS: 4kxrats-abcdefgh1234567.eu-5.platformsh.site
DNS: www.4kxrats-abcdefgh1234567.eu-5.platformsh.site
All domains go into the SAN extension. The domains are sorted in reverse-DNS order (so com.example sorts before com.example.www) to keep related domains grouped together in the certificate.
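A plausible implementation of that reverse-DNS ordering:

```python
def reverse_dns_key(domain: str):
    # "www.example.com" → ("com", "example", "www")
    return tuple(reversed(domain.split(".")))

sans = ["www.example.com", "api.example.com", "example.com", "example.org"]
print(sorted(sans, key=reverse_dns_key))
# ['example.com', 'api.example.com', 'www.example.com', 'example.org']
```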
One detail worth calling out: we don’t set a Common Name (CN) at all. The CSR we send to Let’s Encrypt has an empty subject. We used to pick the shortest domain from the certificate’s SAN list as the CN. But the CN field is limited to 64 characters by the X.509 spec, creating the same length problem we saw with DNS labels. A domain like www.7smqkhc-abcdefgh1234567.eu-5.platformsh.site is already 48 characters, and longer environment hashes or region names could push past the 64-character CN limit.
This is fine because modern TLS clients determine the certificate’s validity from the SAN extension, not the CN. The CN has been effectively deprecated for hostname matching since RFC 6125 in 2011, and Let’s Encrypt explicitly supports certificates with an empty subject. Older clients that only check the CN would reject the certificate, but in practice that means software from before roughly 2010, which has bigger problems than certificate validation.
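Producing a CSR like this with the third-party Python `cryptography` package looks roughly as follows. This is an illustration of the empty-subject-plus-SAN shape, not the code we actually run:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([]))  # empty subject: no CN, no 64-character ceiling
    .add_extension(
        x509.SubjectAlternativeName([
            x509.DNSName("7smqkhc-abcdefgh1234567.eu-5.platformsh.site"),
            x509.DNSName("www.7smqkhc-abcdefgh1234567.eu-5.platformsh.site"),
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
```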
The system caps each environment at 100 domains, matching the Let’s Encrypt limit of 100 SANs per certificate. Most environments have fewer than 5, but projects with many custom domains and route configurations can accumulate more.
The Public Suffix List and why it matters
Before getting into Let’s Encrypt’s rate limits, it’s worth understanding a piece of internet infrastructure that makes our certificate provisioning viable at all: the Public Suffix List (PSL).
The PSL is a community-maintained list of domain suffixes under which people can register names. The obvious entries are things like com, org, and co.uk. Its original purpose is cookie scoping in browsers. Without it, a site hosted at foo.co.uk could set a cookie for all of .co.uk, affecting every other site under that suffix. The PSL tells browsers where the boundary is between “public suffix” and “registered domain,” so cookies can only be set at the right level. But the PSL has found uses well beyond browsers. Any software that needs to determine domain ownership boundaries can rely on it, and Let’s Encrypt is one of the most consequential examples.
Hosting providers can add their domains to the list. We have *.platformsh.site on the PSL.
Why does this matter? Let’s Encrypt uses the PSL to determine what counts as a “registered domain” (the eTLD+1) when applying rate limits. The wildcard entry means that each region subdomain like eu-5.platformsh.site is itself a public suffix (an eTLD), treated the same way as .com or .co.uk. So 7smqkhc-abcdefgh1234567.eu-5.platformsh.site is a registered domain in the same way that example.com is. Since all environments in a project share the same project ID in their URL, their certificate requests count against the same per-project budget.
Without the PSL entry, Let’s Encrypt would treat platformsh.site as the registered domain. Every certificate request for every project across all regions would count toward a single rate limit bucket. With thousands of projects, we’d burn through the 50-per-week limit almost instantly.
With the PSL entry, each project gets its own rate limit budget. This is the difference between the system working and not working.
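Here’s a toy version of the PSL lookup against a tiny rule subset, showing how the wildcard entry moves the registered-domain boundary. The real PSL algorithm also handles exception rules, which this sketch omits:

```python
# Toy subset of the Public Suffix List, including Upsun's wildcard entry.
PSL_RULES = {"com", "site", "*.platformsh.site"}

def public_suffix(hostname: str) -> str:
    labels = hostname.split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        wildcard = "*." + ".".join(labels[i + 1:])
        if candidate in PSL_RULES or wildcard in PSL_RULES:
            return candidate
    return labels[-1]  # default rule: treat the bare TLD as the suffix

def registered_domain(hostname: str) -> str:
    # The registered domain (eTLD+1) is the public suffix plus one label.
    suffix = public_suffix(hostname)
    labels = hostname.split(".")
    return ".".join(labels[-(suffix.count(".") + 2):])

print(registered_domain("www.example.com"))
# example.com
print(registered_domain("api.7smqkhc-abcdefgh1234567.eu-5.platformsh.site"))
# 7smqkhc-abcdefgh1234567.eu-5.platformsh.site
```

The wildcard rule makes `eu-5.platformsh.site` the public suffix, so each environment’s hostname is its own registered domain, and rate limits are scoped accordingly.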
Working within Let’s Encrypt’s constraints
Let’s Encrypt is free and automated, but it comes with rate limits that shape how we design the provisioning system.
The most relevant constraints:
- 100 names per certificate. Each certificate can cover at most 100 domain names (Subject Alternative Names). An environment with many custom domains and route configurations can approach this limit.
- 50 certificates per registered domain per week. This is a global limit across all accounts. Thanks to our PSL entry, the “registered domain” is at the per-environment level (e.g. 7smqkhc-abcdefgh1234567.eu-5.platformsh.site), not platformsh.site itself. Since environments within a project share a project ID, each project effectively gets its own budget of 50 new certificates per week.
- 300 new orders per account per 3 hours. Each account can submit up to 300 certificate orders in a 3-hour window, refilling at roughly one order every 36 seconds.
- 5 duplicate certificates per exact set of names per week. If you request a certificate for the exact same list of domains repeatedly, you’re limited to 5 per week.
As mentioned earlier, we use one Let’s Encrypt account per project. This is a natural scaling pattern: instead of one shared account that needs to handle thousands of projects, each project gets its own isolated account. This doesn’t help with the per-registered-domain limit, which is global regardless of account. But it isolates projects from each other on per-account limits. If one project is cycling through deployments rapidly and burning through its 300-order budget, that doesn’t affect any other project. And since each environment gets its own certificate, a project with 10 active environments means 10 certificates under that project’s account.
Even with the PSL giving each project its own rate limit bucket, the 50-per-week limit can still bite. A project with many active environments that are being frequently created and destroyed could approach it. In practice, this is rarely an issue because most certificate requests are renewals of existing certificates (which don’t count against the limit) rather than net-new issuances. But we have hit rate limits in the past, typically during large-scale infrastructure events where many certificates need to be re-provisioned simultaneously. When it happens, the system backs off and retries, and certificates catch up within hours.
We also self-rate-limit our requests to the ACME server at 5 requests per second per project, using a token bucket. This isn’t required by Let’s Encrypt, but it’s good citizenship. Hammering the ACME API with parallel requests from thousands of projects would create problems for both sides.
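A minimal token bucket along those lines. This is illustrative; the real limiter is assumed to differ:

```python
import time

class TokenBucket:
    """Minimal token bucket, sketching a self-imposed 5 requests/second
    limit on ACME calls."""

    def __init__(self, rate: float = 5.0, capacity: float = 5.0):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A caller that gets `False` back would sleep briefly and retry, smoothing bursts into the 5/s steady rate.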
Renewals
Let’s Encrypt certificates are valid for 90 days. The system automatically renews certificates that will expire within four weeks. This runs as part of the deployment process, so a redeployment also triggers a certificate check.
For environments that don’t see frequent deployments, the system still tracks certificate expiry and triggers renewal activities independently. You don’t need to redeploy to keep your certificates valid.
The renewal process follows the same selection algorithm as initial provisioning. Any domain configuration change (adding or removing custom domains, changing routes) triggers a new deployment. The certificate is provisioned at deploy time with the correct domain set, so renewals always handle the same domains.
Trade-offs
The wildcard-to-Let’s-Encrypt evolution solved the DNS label problem and gave us more flexibility, but it introduced operational complexity.
Wildcard certificates are simple. One certificate per region, no provisioning delay, no external dependencies. But they can’t handle multi-level subdomains, and they don’t work for custom domains at all.
Per-project Let’s Encrypt certificates handle arbitrary domain structures and custom domains. But they depend on an external CA, they’re subject to rate limits, and they add latency to the first deployment. The ACME challenge needs to complete before the certificate is ready.
The certificate selection algorithm adds its own complexity. Greedy set-cover with multiple tiebreaker criteria isn’t trivial to debug when a domain isn’t getting the certificate you’d expect. But it handles the common cases well: most projects have a handful of domains, the algorithm picks the right certificates, and the entire process is invisible to the user.
Every push gets HTTPS
The goal is that TLS is never something you think about. You push code, you get a URL, and that URL has a valid certificate. Behind the scenes, the system is checking domain coverage, selecting from existing certificates, provisioning new ones through ACME challenges, and renewing before expiry. It’s not glamorous infrastructure, but it’s the kind of thing that gets noticed when it breaks.

Last modified on April 27, 2026