When Docker made its enterprise-grade hardened container images free and open source in December 2025, the developer community celebrated. Minimal attack surfaces, non-root by default, continuous CVE scanning, automated updates: finally, someone was taking container security seriously at scale.

Here’s the thing: if you’ve been running on Upsun, you’ve had all of this since day one. Not as a new feature launch, not as a premium add-on, but as the invisible infrastructure that keeps hundreds of container images secure while you focus on shipping code.

This isn’t us claiming we invented container hardening (we didn’t). But we’ve been doing this work quietly, automatically, and at scale for years. Docker’s announcement validates the approach we’ve been running in production. Let’s talk about how it works.

Why hundreds of images, not one “hardened” base

We run a Debian shop. Every service we support (MariaDB, PostgreSQL, Redis, PHP, Ruby, Python, and more) gets its own set of container images. But here’s the complexity that Docker’s unified hardened images don’t address: each runtime needs separate images for every version we support.

Take PHP. We maintain dedicated images for PHP 8.5, 8.4, 8.3, 8.2, and older versions still in use. Why separate images instead of one hardened PHP image? ABI compatibility. If you’re running PHP 8.5 with a compiled binary extension and we suddenly upgrade the base image’s glibc version, your extension breaks. The same goes for any language with native extensions.

This isn’t a choice we made lightly. It’s how we prevent your application from breaking during routine platform updates. But it means we’re maintaining hundreds of container images, each needing security updates, package patches, and careful version management. This is where the real work happens.

YAML as infrastructure (before infrastructure as code was everywhere)

Here’s where our approach diverges from the Docker hardened images model. We don’t maintain these images the traditional way. There’s no `apt-get update` and `apt-get upgrade` running on each image. Instead, we use an internal YAML-based format that specifies every package and version installed in an image. Think of it as a Software Bill of Materials (SBOM) that doubles as the source of truth for building images.

Everyone talks about SBOMs now for supply chain security. We’ve been using them as our build system for years because it makes auditing straightforward and automation possible. When we need to check if an image needs updating, we don’t spin up containers. We parse the YAML, query the Debian repositories via HTTP, and compare versions. No packages changed? Nothing to do. New versions available? Build a new image with the latest packages.

This approach scales beautifully. Most of the time, there’s nothing to update. When there is, it’s just YAML parsing and HTTP requests to package repositories. No container orchestration, no build farms spinning up unnecessarily. The automation is boring, efficient, and runs every few hours without anyone noticing.
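The version check above can be sketched in a few lines. This is an illustrative sketch, not Upsun’s internal tooling: the `manifest` dict stands in for the YAML format, the `Packages` excerpt is a made-up repository index, and a real comparison would use dpkg version ordering rather than plain string inequality.

```python
# Illustrative sketch: compare the package versions pinned in an image
# manifest against a Debian "Packages" index. The manifest shape and the
# index excerpt are hypothetical; real tooling should compare versions
# with dpkg ordering, not string inequality.

def parse_packages_index(text: str) -> dict:
    """Extract {package: version} from a Debian Packages file."""
    versions, current = {}, None
    for line in text.splitlines():
        if line.startswith("Package: "):
            current = line[len("Package: "):].strip()
        elif line.startswith("Version: ") and current:
            versions[current] = line[len("Version: "):].strip()
            current = None
    return versions

def stale_packages(pinned: dict, repo: dict) -> dict:
    """Packages whose repository version differs from the pinned one."""
    return {pkg: repo[pkg] for pkg, version in pinned.items()
            if pkg in repo and repo[pkg] != version}

# Hypothetical excerpt of a Debian Packages index, fetched over HTTP.
index = """\
Package: openssl
Version: 3.0.15-1~deb12u1

Package: curl
Version: 7.88.1-10+deb12u8
"""

# Versions a (hypothetical) image manifest pins for one image.
manifest = {"openssl": "3.0.14-1~deb12u2", "curl": "7.88.1-10+deb12u8"}
print(stale_packages(manifest, parse_packages_index(index)))
# -> {'openssl': '3.0.15-1~deb12u1'}
```

The "nothing changed, nothing to do" case falls out naturally: when `stale_packages` returns an empty dict, no build is triggered.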

Continuous updates (the unglamorous kind)

Every two to four hours, we scan every image version we maintain. For each one, we check its designated Debian repositories for newer packages. If updates exist, we trigger a build with the latest versions. This has been running in the background for years, which means that within hours of a package appearing in a Debian repository, it’s in our images.

Docker’s hardened images promise continuous scanning and updates. We’ve been doing exactly that, automatically, every few hours, for hundreds of images. But here’s the part that takes real engineering: getting an image built and getting it to production safely are different problems.
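The decision each scan makes can be sketched as a filter over the fleet: only images with at least one out-of-date pinned package get queued for a rebuild. `Image` and `plan_builds` are illustrative names under assumed data shapes, not Upsun’s actual pipeline code.

```python
# Minimal sketch of the scan step: queue a rebuild only for images whose
# pinned packages have newer versions in their Debian repository.
# `Image` and `plan_builds` are illustrative, not real Upsun code.
from dataclasses import dataclass

@dataclass
class Image:
    name: str
    pinned: dict   # package -> version, as recorded in the YAML manifest

def plan_builds(images: list, repo: dict) -> list:
    """Names of images with at least one out-of-date pinned package."""
    return [img.name for img in images
            if any(repo.get(pkg, ver) != ver for pkg, ver in img.pinned.items())]

# Hypothetical repository state and fleet.
repo = {"openssl": "3.0.15-1~deb12u1", "libpq5": "15.8-0+deb12u1"}
images = [
    Image("php-8.4", {"openssl": "3.0.14-1~deb12u2"}),
    Image("postgresql-15", {"libpq5": "15.8-0+deb12u1"}),
]
print(plan_builds(images, repo))   # -> ['php-8.4']
```

Most scan cycles, `plan_builds` returns an empty list and nothing happens, which is exactly why the automation stays cheap.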

The testing gauntlet

Speed matters for security updates, but stability matters more, especially for databases. If we push a broken MariaDB update, you lose data. That’s unacceptable.

Here’s our process: we don’t deploy every new image we build. Multiple images get built every day as packages update, but we don’t immediately ship them to production. Instead, once a week, we snapshot the latest versions of all our images and run them through our full testing suite, including internal testing environments where we validate that everything works as expected.

The images that pass testing get scheduled for production deployment. From build to production usually takes one to two weeks. The images hit our internal testing regions first, then roll out to production environments. This gives us confidence that updates won’t break your applications.
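The weekly promotion step amounts to a filter over the latest builds: snapshot them, test them, and clear only the passing ones for rollout. A hedged sketch under assumed names; `promote`, the build IDs, and the stand-in test function are all hypothetical.

```python
# Hedged sketch of the weekly promotion described above: take a snapshot
# of the newest build of each image and clear only the ones that pass
# the full test suite for the staged rollout. All names are illustrative.
from typing import Callable

def promote(snapshot: dict, passes_tests: Callable[[str, str], bool]) -> list:
    """Image builds cleared for the staged production rollout."""
    return [f"{image}:{build}" for image, build in snapshot.items()
            if passes_tests(image, build)]

# Hypothetical weekly snapshot: image name -> latest build ID.
snapshot = {"php-8.4": "2026-04-07.1", "mariadb-11": "2026-04-07.3"}

# Stand-in for the real test suite: pretend the MariaDB build failed.
cleared = promote(snapshot, lambda image, build: image != "mariadb-11")
print(cleared)   # -> ['php-8.4:2026-04-07.1']
```

A failing build simply drops out of that week’s rollout; the next weekly snapshot picks up whatever fixed build lands in the meantime.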

The CVE exception

There’s one big exception to this measured cadence: critical CVEs. When a serious vulnerability drops in a package we use, we can fast-track images to production within hours.

This doesn’t happen often. In Upsun’s lifetime, we’ve needed emergency updates only a handful of times. But when we do, the fully automated pipeline makes it possible. We can identify affected images, build updates, run essential tests, and deploy, all in a compressed timeframe.

This capability exists because of Debian’s technical design. The metadata, repositories, version resolution, and documentation are all standardized and reliable. We can programmatically query everything we need to know about package versions and dependencies.
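The first step of that fast-track, identifying affected images, is possible precisely because the manifests make the question queryable. A hedged sketch with hypothetical data: `affected_images` and the fleet are illustrative, and string inequality again stands in for a proper dpkg "is the pinned version older than the fix?" comparison.

```python
# Hedged sketch of the first fast-track step: find the images that
# install the vulnerable package at a version other than the fixed one.
# Real tooling would use dpkg version ordering; string inequality is a
# stand-in. Fleet data is hypothetical.

def affected_images(fleet: dict, package: str, fixed_version: str) -> list:
    """Images pinning `package` at a version other than `fixed_version`."""
    return [name for name, pinned in fleet.items()
            if pinned.get(package) not in (None, fixed_version)]

# Hypothetical fleet: image name -> pinned package versions.
fleet = {
    "php-8.4": {"openssl": "3.0.14-1~deb12u2"},
    "mariadb-11": {"openssl": "3.0.15-1~deb12u1"},
    "redis-7": {"zlib1g": "1:1.2.13.dfsg-1"},
}
print(affected_images(fleet, "openssl", "3.0.15-1~deb12u1"))
# -> ['php-8.4']
```

Images that don’t ship the package at all, or already pin the fixed version, drop out immediately, which keeps the emergency rebuild set small.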

What this means for you (the actual value)

Here’s the thing about Docker’s hardened images announcement: it’s great news for teams managing their own infrastructure. They now have free access to enterprise-grade container security. That’s genuinely valuable.

But if you’re on Upsun, you’ve never had to think about any of this. Not because we hide complexity (we’re literally explaining it right now), but because this is what a proper Cloud Application Platform provides. When Docker releases hardened images, you don’t need to evaluate them for your stack. When a CVE drops in OpenSSL, you don’t need to coordinate updates across your infrastructure. When Debian publishes security patches, you don’t need to rebuild images and redeploy. This is all handled automatically, tested thoroughly, and deployed safely.

We handle the system packages. You handle your application dependencies: npm packages, Python libraries, Ruby gems. That’s the division of labor. That’s what “platform” actually means: not just hosting, but the invisible infrastructure work that keeps your application secure and running.

The catch: version lifecycle

There’s one thing you do need to think about: upgrading runtime versions. Because each runtime version is tied to a specific Debian base, old images eventually stop receiving updates once that Debian release reaches end of life. Debian 7 and Debian 8 are no longer maintained by Debian. If you’re running PHP 5.4 on one of those base systems, you’re not getting security patches for the underlying OS. The application runtime is outdated, and so is everything underneath it.

This is why we encourage regular upgrades. It’s not about chasing the latest features (though those are nice). It’s about staying on supported base systems that still receive security updates.

The validation nobody asked for (but we’ll take it)

Docker’s decision to make hardened images free and open source validates something we’ve known for years: container security at scale requires automation, continuous updates, and thorough testing. You can’t manually manage hundreds of images and stay secure.

We weren’t being visionary when we built this system. We were being practical. When you’re running a platform that hosts thousands of applications, each with its own runtime version requirements, automation isn’t optional. It’s survival. The boring, repetitive work of checking packages, building images, running tests, and deploying updates: that’s the infrastructure work that matters.

Docker’s announcement doesn’t change what we do. It just confirms we’ve been doing it right.
Last modified on April 14, 2026