Introduction
The Model Context Protocol (MCP) became one of the most discussed developments of the first half of 2025 because it addresses a real gap in AI applications: large language models are powerful, but they are inherently nondeterministic. When consistent, repeatable actions are required, such as retrieving records from a database, filing a ticket, consulting a knowledge base, or executing a defined workflow, teams need a structured way to link the “fuzzy” reasoning of AI to reliable, code-based execution. MCP is designed to be that connection.
In this article, we explore advanced MCP use cases, focusing on real-world developer workflows and the operational challenges that quickly arise when moving from experimentation to daily use or enterprise deployment.
What MCP Is (and Why It Matters)
At its core, MCP enables AI clients (applications that interact with a large language model, or LLM) to connect with MCP servers (services that provide specific, well-defined capabilities). The AI decides what actions it wants to take, and the MCP server offers a consistent mechanism to execute those actions, returning structured results that the AI can use in its responses or subsequent steps.
MCP servers can expose several kinds of elements: tools, resources, prompts, and other components. The key takeaway is clear: MCP offers a standardized interface for integrating AI applications with external data and actions.
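To make this concrete, here is a minimal sketch of an MCP server built with the official Python SDK (`pip install "mcp[cli]"`). Only the `FastMCP` API is real; the `file_ticket` tool and its return value are invented for illustration.

```python
# Minimal MCP server sketch using the official Python SDK.
# The "file_ticket" tool is hypothetical; a real server would call
# an actual ticketing API instead of returning a fake ID.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-demo")

@mcp.tool()
def file_ticket(title: str, body: str) -> str:
    """Create a ticket in a (hypothetical) tracker and return its ID."""
    return f"TICKET-123: {title}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

An MCP client can now list this server’s tools, and the LLM can decide when to call `file_ticket`, while the deterministic work happens in plain code.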
The Problem: MCP’s Ecosystem Is Powerful and Chaotic
A recurring theme in the session was that the rapid adoption of MCP has created a messy, high-risk ecosystem.
- Discovery is hard. For common systems like GitHub, Jira, and data stores, there may be several competing MCP server implementations scattered across repositories, with varying quality and unclear maintenance status.
- Installation is risky and inconsistent. “Run this repo” may be acceptable for individual developers, but it presents serious problems for security teams, regulated environments, and less technical users.
- As AI becomes more capable and agentic, security risks increase. The potential impact of misconfigurations expands significantly. Prompt injection and unsanitized inputs are particularly hazardous when AI possesses the capability to read and write in real systems.
Docker positioned itself as an expert at exactly this problem: running third-party software in a controlled, isolated, and repeatable way.
Docker’s MCP Offering: Catalog, Toolkit, and Gateway
Docker’s MCP approach is based on three key elements:
- MCP Catalog
This is a curated collection of MCP servers available as Docker images on Docker Hub. The goal is to make MCP servers easier to discover and run by leveraging container packaging, provenance guarantees, and the standard distribution workflows developers already know.
- MCP Toolkit (Docker Desktop)
A component of Docker Desktop that serves as a central management plane for MCP servers, letting users:
- discover and enable,
- configure,
- connect to MCP clients,
- manage authentication flows,
- centrally observe and govern interactions.
- MCP Gateway (containerized Toolkit)
A container-based form of the toolkit for headless, CI, or production environments. This matters because MCP usage rarely stays local; teams want the same integration patterns in automated pipelines and deployed services.
An important architectural feature of the MCP Toolkit is that it functions as an MCP server, acting as a proxy to other MCP servers. This “hub-and-spoke” design enables centralized control over aspects such as configuration, secrets management, interception, and observability. As a result, it eliminates the need for each client to implement these capabilities independently.
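A rough sketch of that hub-and-spoke shape, in plain Python rather than Docker’s actual implementation: one proxy object owns the downstream servers and runs every call through a shared set of policy hooks. All names here are hypothetical.

```python
# Conceptual hub-and-spoke sketch: clients talk to one proxy, and the
# proxy applies policy before forwarding to any downstream MCP server.
from typing import Any, Callable

class GatewayProxy:
    def __init__(self) -> None:
        # name -> function that executes a tool call on that server
        self.servers: dict[str, Callable[[str, dict], Any]] = {}
        # hooks that run before every call; raising an error blocks the call
        self.interceptors: list[Callable[[str, str, dict], None]] = []

    def register(self, name: str, call_tool: Callable[[str, dict], Any]) -> None:
        self.servers[name] = call_tool

    def call(self, server: str, tool: str, args: dict) -> Any:
        for intercept in self.interceptors:
            intercept(server, tool, args)
        return self.servers[server](tool, args)

gateway = GatewayProxy()
gateway.register("github", lambda tool, args: {"ok": True})  # stand-in server
print(gateway.call("github", "create_issue", {"title": "demo"}))
```

Because every client talks to the proxy rather than to individual servers, configuration, secrets, and policy live in exactly one place.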
Practical Demo Workflow: From Knowledge Base to Issue Tracker
The key demo combined multiple MCP servers to automate a workflow that product or engineering teams might run weekly:
- Add a Notion MCP server (for searching and reading product feedback pages).
- Add a GitHub MCP server (for creating an issue in a repository).
- Instruct the AI: find product feedback in Notion, summarize it, and create a GitHub issue describing what should be done.
This is where MCP’s value becomes obvious: a single server is useful, but composition is transformative. The AI can coordinate a sequence of steps across systems and produce an output that is immediately actionable (a tracked issue), not merely descriptive.
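Under the hood, that composition is just a sequence of MCP tool calls. The sketch below drives the same sequence programmatically with the official Python SDK client; the gateway command, tool names, and arguments are assumptions for illustration, since in the demo the AI chooses them itself.

```python
# Sketch of the demo's tool-call sequence, scripted instead of AI-driven.
# "docker mcp gateway run" and the tool names/arguments are assumptions.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="docker", args=["mcp", "gateway", "run"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Step 1: pull feedback pages out of Notion.
            feedback = await session.call_tool("search", {"query": "product feedback"})
            summary = "summary of feedback goes here"  # an LLM would write this
            # Step 2: turn the summary into a tracked GitHub issue.
            await session.call_tool("create_issue", {
                "owner": "my-org", "repo": "my-repo",
                "title": "Weekly feedback summary", "body": summary,
            })

asyncio.run(main())
```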
Secrets Management: Reducing Exposure by Design
Configuration often involves credentials, which makes many teams anxious. Docker highlights a method for managing secrets centrally, providing only the necessary values to the specific MCP server that requires them.
In the session example:
- The Notion server receives the Notion integration token.
- Other tokens (e.g., unrelated API keys) are not exposed to the Notion container.
- Secrets are managed as Docker secrets, minimizing accidental leakage or broad credential exposure across different tools.
The operational goal is to prevent the common failure mode of “everything being dumped into environment variables everywhere,” which becomes unmanageable with multiple MCP servers and clients.
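The idea reduces to least-privilege injection: each server declares what it needs, and only those values ever cross its boundary. Here is a simplified sketch, using plain environment variables rather than Docker’s actual secrets mechanism, with all names and images as placeholders.

```python
# Simplified least-privilege secret injection. The real Toolkit uses
# Docker secrets; plain "-e" flags are shown only to keep the sketch
# short. All tokens, server names, and images are placeholders.
import subprocess

SECRETS = {"NOTION_TOKEN": "ntn-placeholder", "GITHUB_TOKEN": "ghp-placeholder"}
NEEDS = {"notion": ["NOTION_TOKEN"], "github": ["GITHUB_TOKEN"]}

def run_server(name: str, image: str) -> None:
    env_flags: list[str] = []
    for key in NEEDS[name]:
        env_flags += ["-e", f"{key}={SECRETS[key]}"]  # only declared secrets
    subprocess.run(["docker", "run", "--rm", "-i", *env_flags, image], check=True)

run_server("notion", "example/notion-mcp")  # never sees GITHUB_TOKEN
```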
Remote MCP Servers and OAuth: Bringing Vendor-Hosted Servers into the Mix
Docker also showcased support for remote MCP servers, including those provided directly by vendors. This is significant for two reasons:
- API compatibility and maintenance improve when the vendor ships the MCP server and keeps it aligned with their own platform changes.
- OAuth-based authorization becomes more standardized and user-friendly than manually managing long-lived tokens.
An example with Linear demonstrated how the MCP Toolkit can streamline an OAuth authorization flow, enabling access with minimal manual configuration. This approach matches what many enterprises prefer: short-lived, scoped authorization with clear procedures for revocation.
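For orientation, here is what the token half of such a flow looks like when done by hand, using only the standard library: the client exchanges a one-time authorization code for a short-lived access token. The endpoint, client ID, and redirect URI are placeholders; the Toolkit’s value is that it automates exactly this.

```python
# Manual version of the authorization-code-to-token exchange that the
# Toolkit automates. All endpoint and client values are placeholders.
import json
import urllib.parse
import urllib.request

def exchange_code(code: str) -> dict:
    data = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": "http://localhost:8765/callback",  # placeholder
        "client_id": "example-client-id",                  # placeholder
    }).encode()
    req = urllib.request.Request("https://auth.example.com/oauth/token", data=data)
    with urllib.request.urlopen(req) as resp:
        # Typically contains access_token, expires_in, and refresh_token.
        return json.loads(resp.read())
```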
The Security Reality: Prompt Injection + Tool Access
There is an important caveat: once an AI has access to tools, any untrusted text it reads becomes an attack surface. For example, a “malicious” GitHub issue might contain instructions designed to manipulate the AI into reading sensitive information, exfiltrating data, or executing harmful actions.
Docker’s answer is governance at the proxy layer, specifically through interceptors and policy controls:
- Secret scanning / secret blocking: inspect tool inputs/outputs for known secret patterns and prevent tool calls that would leak sensitive values.
- Custom interceptors: run scripts before tool calls to enforce policies (e.g., a “cross-repo blocker” that prevents the AI from hopping between repositories during a session).
This model treats the MCP Toolkit/Gateway as a policy enforcement point. Rather than trusting each MCP server (or the AI) to behave safely, you establish guardrails in the middle where you can observe and control traffic.
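A secret-scanning interceptor can be surprisingly small. The sketch below is a hypothetical policy hook in the style of the proxy sketch earlier, not Docker’s implementation; the two regexes match real GitHub and AWS credential formats.

```python
# Hypothetical secret-blocking interceptor: inspect tool arguments and
# refuse any call whose payload matches a known credential pattern.
import re

SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID
]

class BlockedCall(Exception):
    pass

def secret_blocker(server: str, tool: str, args: dict) -> None:
    payload = str(args)
    for pattern in SECRET_PATTERNS:
        if pattern.search(payload):
            raise BlockedCall(f"{server}.{tool}: payload matches a secret pattern")
```

Registered on a gateway, a hook like this runs before every forwarded call, so no individual MCP server or client has to reimplement the check.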
Dynamic MCP Servers: Letting the AI Configure Its Tooling (Safely)
One of the more advanced features demonstrated was dynamic MCP server management from within an AI session. Because the Toolkit is an MCP server itself, it can expose tools such as:
- add an MCP server,
- remove an MCP server,
- (in more advanced modes) chain calls and process data within a sandbox to reduce what must be passed through the LLM.
Practically, this means a user can say: “We need Linear for this — add it,” and the AI can perform the setup steps through the toolkit, instead of the user manually hunting down configuration files and commands.
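Because the management plane is itself an MCP server, those administrative actions are just tools. Here is a sketch using the official Python SDK; the tool names and the in-memory bookkeeping are hypothetical stand-ins for real enable/disable logic.

```python
# Sketch of a management plane exposed as MCP tools. FastMCP is the
# official SDK; the enable/disable bookkeeping here is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toolkit-admin")
enabled: set[str] = set()

@mcp.tool()
def add_mcp_server(name: str) -> str:
    """Enable a catalog server so the AI can use it in this session."""
    enabled.add(name)
    return f"enabled: {name}"

@mcp.tool()
def remove_mcp_server(name: str) -> str:
    """Disable a previously enabled server."""
    enabled.discard(name)
    return f"disabled: {name}"

if __name__ == "__main__":
    mcp.run()
```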
Profiles and Private Catalogs: Preparing for Real Enterprise Patterns
Toward the end, the session previewed concepts aimed at enterprise readiness:
- Profiles: curated sets of MCP servers and client configurations so that different AI clients can have different, controlled tool access (e.g., “marketing profile” vs. “developer profile”).
- Private catalogs: a way for organizations to host approved, internal MCP servers and expose only those to teams — reducing the risk of developers pulling random third-party servers into sensitive workflows.
The direction is clear: Docker is aiming to make MCP adoption scalable by giving organizations a practical operating model — approved servers, controlled distribution, consistent configuration, and enforceable security policies.
Running in Production and Working with Local Models
Finally, the session made clear that Docker’s MCP approach is not limited to Docker Desktop demos. The MCP Gateway enables the same patterns in headless and production contexts, configured through environment variables and standard container deployment practices.
It also touched on how this can pair with Docker Model Runner (running models locally). The important nuance is that a model runtime alone is not an MCP client; you still need an agent/client layer that can connect the model to MCP servers. In principle, you can combine:
- a locally running model (via Docker Model Runner),
- an MCP client/agent (e.g., an assistant application),
- and the MCP Toolkit/Gateway as the controlled bridge to MCP servers.
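A minimal sketch of that wiring, assuming Docker Model Runner exposes an OpenAI-compatible endpoint on its default host port and the gateway speaks stdio; the base URL, model tag, and gateway command are assumptions to verify against your own setup.

```python
# Sketch: local model + MCP gateway. The base_url, model tag, and
# gateway command are assumptions, not confirmed configuration.
import asyncio
from openai import OpenAI
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

llm = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="unused")

async def main() -> None:
    params = StdioServerParameters(command="docker", args=["mcp", "gateway", "run"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # advertise these to the model
            # A real agent loop would pass the tool schemas to the model and
            # route any tool calls it emits back through session.call_tool().
            reply = llm.chat.completions.create(
                model="ai/llama3.2",  # assumed local model tag
                messages=[{"role": "user", "content": "Summarize open feedback."}],
            )
            print(reply.choices[0].message.content)

asyncio.run(main())
```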
Key Takeaways
- MCP is compelling because it standardizes how AI applications call deterministic tools and access external context.
- The MCP ecosystem’s biggest risks are discovery, installation, and security, especially as tool access grows.
- Docker’s strategy is to “containerize and govern” MCP: a catalog of packaged servers, a toolkit/gateway that centralizes configuration and policy, and mechanisms like interceptors to manage real-world threats such as prompt injection.
- Advanced capabilities (remote servers with OAuth, dynamic server management, profiles, and private catalogs) signal an emphasis on enterprise scalability and operational control.
