For more than a decade, Docker’s name has been synonymous with containers — the lightweight, portable packaging format that quietly transformed how software gets built, shipped, and run. But in 2025, Docker made moves that signal something larger than an incremental product update. The company repositioned itself at the intersection of two of the most pressing concerns in modern software: supply chain security and the explosive rise of agentic AI. The result is a Docker that looks familiar on the surface, yet is fundamentally different underneath.
The twin pillars of this transformation — the free release of Docker Hardened Images and the launch of the Docker MCP Catalog and Toolkit — represent a deliberate bet that Docker’s next decade won’t be defined by containerization alone, but by who controls the secure, trusted infrastructure on which AI-powered development runs.
The Security Reckoning
The context for Docker’s pivot to security is hard to overstate. Supply chain attacks — exploiting vulnerabilities baked into the open source libraries and base images that power virtually every modern application — have surged in frequency and cost. By 2025, the damage had tripled from 2021 levels and was projected to exceed $60 billion globally. Every language, every ecosystem, every build and distribution step had become a potential target.
Against this backdrop, Docker launched its Docker Hardened Images (DHI) initiative in May 2025: a curated catalog of security-hardened, enterprise-grade container images designed to address software supply chain vulnerabilities head-on. The images are built on the widely adopted open source distributions Debian and Alpine, minimizing the attack surface by stripping out unnecessary components such as package managers and shells, and running as non-root users by default. Each image includes a complete software bill of materials (SBOM), transparent CVE data, cryptographic proof of authenticity, and SLSA Build Level 3 provenance — the gold standard of supply chain integrity.
“Security has to start at the earliest point in development, and needs to be universally available to every developer. By making hardened images freely available and providing tooling that works with today’s AI coding agents, we’re giving the entire industry and community the best possible baseline to build on.”
— Mark Cavage, President and COO, Docker, Inc.
Initially a commercial offering, DHI underwent a dramatic shift in December 2025 when Docker made the entire catalog — more than 1,000 images — freely available to all developers under the Apache 2.0 open source license, with no restrictions on use, sharing, or building. The move was described internally as “a fundamental reset of the container security market.”
What “Hardened” Actually Means
The engineering behind Docker Hardened Images goes beyond simply patching known CVEs. The images eliminate the tools attackers most commonly exploit: package managers that could be used to install malicious software at runtime, shells that allow arbitrary command execution, and unnecessary system utilities that expand the attack surface. By starting from this minimal foundation and enforcing non-root execution, DHI makes entire categories of attacks structurally impossible rather than just temporarily patched.
Docker claims the hardened images achieve up to 95% reductions in attack surface compared to traditional base images — a figure that has driven adoption among enterprises with strict compliance requirements. Adobe and Qualcomm are among the organizations that have staked their container security strategy on DHI for compliance in highly regulated environments, while startups like Attentive and Octopus Deploy have used it to accelerate their own compliance programs when selling to larger enterprise customers.
Minimal footprint: Package managers, shells, and unnecessary utilities removed by default — entire attack classes eliminated rather than patched.
Full transparency: Every image ships with a complete SBOM, public CVE data, and cryptographic provenance (SLSA Build Level 3).
Non-root by default: Containers run without elevated privileges, enforcing least-privilege principles from the first pull.
Enterprise SLAs: DHI Enterprise offers critical CVE remediation within seven days, with a roadmap toward 24-hour turnaround.
Extended Lifecycle Support: Vulnerability updates and provenance attestations for up to five years after upstream support ends.
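In practice, adopting a hardened base image mostly means changing the `FROM` line and using a multi-stage build, since the hardened runtime image has no shell or package manager to install dependencies with. A minimal sketch of the pattern — the `myorg/dhi-python` image name and paths are illustrative placeholders, not actual DHI catalog entries:

```dockerfile
# Build stage: a full-featured image where pip and a shell are still available
FROM python:3.12 AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt
COPY . .

# Runtime stage: a hardened, minimal image with no shell and no package manager.
# "myorg/dhi-python:3.12" is a placeholder; substitute the hardened image you actually use.
FROM myorg/dhi-python:3.12
COPY --from=build /install /usr/local
COPY --from=build /app /app
# Hardened images run as a non-root user by default, so no USER directive is needed.
# exec-form ENTRYPOINT is required: with no shell in the image, shell form cannot run.
ENTRYPOINT ["python", "/app/main.py"]
```

The exec-form `ENTRYPOINT` is not just style here: because the runtime image contains no `/bin/sh`, any shell-form command would fail at container start.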
The decision to open-source DHI wasn’t purely altruistic. Docker controls Docker Hub, which processes more than 20 billion image pulls each month. Raising the security baseline of those pulls raises the security baseline of the entire software industry — and positions Docker as the steward of that secure foundation. When Jonathan Bryce, Executive Director of the Cloud Native Computing Foundation, praised the move, he highlighted the broader ecosystem impact: giving developers access to secure, well-maintained building blocks “helps strengthen the software supply chain together.”
The AI Pivot: MCP and the Agent Economy
If Docker’s security moves addressed an existing crisis, its AI investments are a bet on where development is heading. The company’s second major initiative of 2025 — the Docker MCP Catalog and Docker MCP Toolkit — arrived in May as a response to the rapid emergence of agentic AI systems and the protocol that is becoming their connective tissue.
Model Context Protocol (MCP), originally developed by Anthropic, has emerged as a de facto open standard for enabling AI agents to communicate with external tools, databases, APIs, and services. Think of it as the USB-C port of AI integration: a universal connector that, once standardized, allows anything to plug into anything else. By mid-2025, the protocol had achieved broad industry alignment, and the pace of agentic AI development accelerated accordingly.
The challenge Docker identified was familiar: a powerful new technology had arrived with massive potential but a fragmented, insecure developer experience. MCP servers — the services that expose tools and data to AI agents — were scattered across GitHub repositories, required separate installation for every AI client, ran untrusted code directly on developer machines, and had no centralized security or authentication model. Configuring a GitHub MCP server for Claude meant configuring it again from scratch for Cursor, VS Code, and every other AI client.
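The per-client duplication looks like this in practice: each AI client keeps its own configuration file, and the same server definition must be repeated in every one of them. A typical `mcpServers` entry for a containerized GitHub server might look roughly like the following (the token placeholder and exact image tag are illustrative):

```json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": ["run", "-i", "--rm",
               "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
               "ghcr.io/github/github-mcp-server"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

Multiply that block by every client and every server — with credentials pasted into each copy — and the appeal of a single centralized gateway becomes obvious.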
“MCP has the potential to do for agentic AI interaction what containers did for app deployment — standardize and simplify a complex, fragmented landscape.”
— Docker Engineering Team
The MCP Catalog: A Hub for the Agent Era
The Docker MCP Catalog, integrated directly into Docker Hub, addresses the discovery problem by providing a curated, verified collection of over 300 MCP servers packaged as container images — complete with versioning, provenance, and security updates. Developers can find, evaluate, and launch tools from partners including Stripe, Grafana Labs, GitHub, MongoDB, Neo4j, Elastic, Pulumi, and Heroku from a single location, rather than hunting across the open web.
Crucially, because each MCP server runs as an isolated Docker container, the classic problems of environment conflicts, dependency collisions, and inconsistent behavior across machines are eliminated from the start. The same container-based isolation that made Docker indispensable for traditional application deployment applies with equal force to AI tool infrastructure.
The MCP Toolkit: Security and Simplicity for Agents
Discovery alone wasn’t sufficient. Docker’s MCP Toolkit — integrated into Docker Desktop — solves the operational complexity of managing MCP servers across projects and clients. Rather than configuring each server for each AI application separately, developers set up profiles once and connect all their clients to a centralized MCP Gateway. A “web-dev” profile might include GitHub and Playwright servers; a “backend” profile, database tools. Configure once, share across the team.
The security model of the MCP Toolkit reflects hard lessons from the traditional container security world. Each MCP server runs with strict resource limits — capped at 1 CPU and 2 GB of memory — with no default access to the host filesystem. All MCP server images are digitally signed and include SBOMs for full transparency. Built-in OAuth support and secure credential storage mean that API keys and tokens never get hardcoded into environment variables or configuration files. The Toolkit also introduced Dynamic MCP — a capability that allows AI agents to discover, add, and compose MCP servers on-demand during a conversation, without any manual reconfiguration.
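The Toolkit applies these constraints automatically, but the underlying mechanism is ordinary container sandboxing. A rough equivalent expressed as `docker run` flags — this is an illustrative sketch of the policy, not the Toolkit's literal invocation, and the image name is a placeholder:

```shell
# Approximate per-server constraints: hard CPU/memory caps, no writable
# filesystem layers, no privilege escalation, no Linux capabilities, and
# (by omission of any -v flags) no access to the host filesystem.
docker run --rm -i \
  --cpus=1 \
  --memory=2g \
  --read-only \
  --security-opt no-new-privileges \
  --cap-drop=ALL \
  example/mcp-server:latest
```

The point is that a compromised or malicious MCP server is boxed in by the runtime itself, not by trust in the server's own code.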
One-click connections to leading AI clients — Claude, Cursor, VS Code, Windsurf, continue.dev, and Goose — mean developers can go from browsing the catalog to having a running, secured MCP server connected to their AI tools in seconds, not hours.
Image signing and attestation: All catalog images are built and digitally signed by Docker, with SBOMs for full supply chain transparency.
Runtime isolation: Each MCP tool runs in its own container with CPU, memory, and filesystem limits enforced by default.
Credential management: Built-in OAuth support and secure storage prevent secrets from leaking into environment variables.
Hardened MCP Servers: Docker extended its DHI methodology to MCP server images, launching hardened versions of popular servers including Grafana, MongoDB, GitHub, and Context7.
A Year of Milestones
The scope of Docker’s 2025 transformation is best appreciated through its timeline. The company moved with unusual speed, shipping a series of interconnected announcements that each built on the last.
Q1 — Docker Model Runner: Docker enables developers to run large language models locally via a Docker Desktop extension, establishing a local-first AI development workflow without complex infrastructure setup.
May — Docker Hardened Images Launch: DHI debuts as a commercial product, with over 1,000 security-hardened container images offering full SBOM coverage, CVE transparency, and SLSA Build Level 3 provenance.
May — Docker MCP Catalog and Toolkit Beta: Docker launches its MCP ecosystem with 100+ verified servers and one-click integration with major AI clients, establishing a secure, centralized hub for AI agent tooling.
Dec — DHI Goes Free and Open Source: Docker open-sources its full DHI catalog under Apache 2.0, making over 1,000 hardened images available to every developer at no cost, a move praised by the CNCF as a watershed for supply chain security.
Dec — Hardened MCP Servers: DHI methodology is extended to MCP server images, applying minimal-footprint hardening to AI infrastructure components including Grafana, MongoDB, GitHub, and Context7.
The Competitive Landscape
Docker’s moves are not made in a vacuum. The container security market, valued at roughly $3 billion in 2025, is projected to exceed $20 billion over the next decade, attracting well-funded competitors. Chainguard, a direct rival in hardened images, offers nearly 500 minimal container images with a similar focus on reducing known vulnerabilities, and provides production images with patch SLAs as a commercial offering. Echo Software, another competitor, uses AI agents to autonomously build and maintain vulnerability-free container images and recently secured significant funding.
By making DHI free under Apache 2.0, Docker applied a classic open source playbook: raise the baseline security floor for the entire ecosystem, make it structurally difficult for competitors to charge for a commodity that Docker now provides at no cost, and compete on the value-add commercial tiers — DHI Enterprise with SLA-backed CVE remediation and Extended Lifecycle Support for regulated industries.
In the AI tooling space, Docker’s advantage is its scale. No other company has Docker Hub’s distribution reach — 20 billion monthly pulls, an established trust relationship with tens of millions of developers, and deep integration into the development toolchain. Applying that infrastructure to MCP servers gives Docker a credible claim to become the equivalent of Docker Hub for the agent era: the place where AI tools get discovered, verified, and run safely.
What This Means for Developers
For the individual developer, Docker’s 2025 initiatives translate into concrete, practical changes. The days of pulling a base image from Docker Hub and hoping it’s reasonably secure are over — or should be, now that a hardened alternative is free, low-friction, and actively maintained. Teams that previously layered security patches on top of unknown upstream risk can instead begin from a verified, minimal baseline. Docker’s own AI assistant can scan existing containers and automatically recommend or apply equivalent hardened images, reducing the migration burden that would otherwise make adoption impractical.
On the AI side, the MCP Toolkit changes the economics of integrating AI agents into development workflows. What previously required hours of per-client configuration — and left credential management as an afterthought — can now be accomplished in minutes from Docker Desktop, with security built in by default rather than bolted on later. As agentic AI systems move from clever prototypes to operational tools that update infrastructure, resolve customer issues, and manage SaaS environments, the blast radius of a compromised agent tool grows correspondingly. Docker’s approach of container-based isolation and strict resource limits is a direct response to that new risk surface.
“The real problem wasn’t what models say. It was what they could do. Once agents can act, blast radius matters more than the prompt.”
— Docker 2025 Year in Review
Looking Ahead
Docker’s trajectory in 2025 suggests a company that has internalized a lesson from its own history: the tools that won the container era weren’t the most technically sophisticated ones, but the ones that made the right thing the easy thing. Containers became universal not because they were theoretically superior, but because Docker made them frictionless.
The same logic now applies to security and AI. Docker Hardened Images are free because the cost of security should be zero — or close enough to zero that no team has an economic excuse to skip it. The MCP Catalog and Toolkit are integrated into Docker Desktop because the right developer experience for AI agents shouldn’t require a manual, error-prone setup process that most teams will cut corners on.
The next generation of container development will be AI-assisted, continuously hardened against supply chain threats, and built on infrastructure that knows its provenance from the first pull to the final deploy. Docker, betting everything on being that infrastructure, has made its strategy unmistakably clear. Whether it succeeds will depend on whether the ecosystem accepts its latest bid to define the standard — just as it did more than a decade ago with the original Docker container.