{"id":198,"date":"2026-02-20T19:04:06","date_gmt":"2026-02-20T19:04:06","guid":{"rendered":"https:\/\/rebaihamida.com\/?p=198"},"modified":"2026-02-20T19:04:07","modified_gmt":"2026-02-20T19:04:07","slug":"building-ai-applications-with-docker-to-the-cloud-azure-a-hands-on-guide","status":"publish","type":"post","link":"https:\/\/rebaihamida.com\/?p=198","title":{"rendered":"Building AI Applications with Docker to the Cloud Azure: A Hands-On Guide"},"content":{"rendered":"\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=mDwhqsiohAk\">Hands-on From Docker to Azure: Secure AI Development<\/a><\/p>\n\n\n\n<p>We\u2019ll start by running a small AI workload locally. This will give us a baseline and help us understand the limitations of running directly on a developer machine.<\/p>\n\n\n\n<p>Then we\u2019ll run the same workload inside a container. This step shows how containers improve reproducibility and provide a controlled execution environment without changing the application itself.<\/p>\n\n\n\n<p>After that, we\u2019ll execute the workload inside a sandbox environment. This is particularly important when dealing with AI-generated code or agent-driven workflows, where isolation becomes essential for protecting the host system.<\/p>\n\n\n\n<p>Finally, I\u2019ll show how the same containerized workload can be moved to Azure. 
The key idea here is consistency: the application we run locally is the same one that runs in the cloud.<\/p>\n\n\n\n<p>The goal of this demo is not to focus on complex code, but to understand the workflow and the architecture that allow us to build AI systems safely and reliably.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Goals<\/h3>\n\n\n\n<p>This demo is about <strong>process and architecture<\/strong>, not model complexity.<\/p>\n\n\n\n<p>Before we start the live demonstration, I want to be very clear about its purpose.<\/p>\n\n\n\n<p>The goal of this demo is not to build a complex AI application or to dive deep into code. Instead, it\u2019s to show how we can execute AI workloads safely and consistently as we move from experimentation to production.<\/p>\n\n\n\n<p>First, we\u2019ll demonstrate safe execution of AI workloads by running them in controlled environments, reducing the risk of unintended side effects on the host system.<\/p>\n\n\n\n<p>Second, we\u2019ll focus on reproducible environments. The same workload should behave the same way regardless of where it runs, whether that\u2019s on a local machine or in the cloud.<\/p>\n\n\n\n<p>Third, we\u2019ll highlight container portability. 
The container image we build locally is the same artifact we deploy later, without modification.<\/p>\n\n\n\n<p>Finally, we\u2019ll show how these technical choices support responsible AI practices by enabling better isolation, governance, and operational control throughout the lifecycle.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Running Locally<\/h3>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/900\/1*R6WZSpeJJYUgQx2nvtR4iw.png\" alt=\"\"\/><\/figure>\n\n\n\n<p>Before we look at containers or sandbox environments, let\u2019s start with how most experiments are actually performed today.<\/p>\n\n\n\n<p>Typically, a developer runs an application directly on their machine. The application uses the local runtime, the local libraries, and whatever dependencies are installed in that environment.<\/p>\n\n\n\n<p>This approach works well at the beginning because it is fast and convenient. It allows us to prototype quickly and test ideas without much setup.<\/p>\n\n\n\n<p>However, over time, this model introduces several challenges.<\/p>\n\n\n\n<p>One of them is environment drift. As we install packages and update libraries, our environment slowly changes, and it becomes difficult to reproduce the same results later or on another machine.<\/p>\n\n\n\n<p>Another issue is dependency management. We often install libraries temporarily, and it becomes unclear which versions are actually required for the application to work correctly.<\/p>\n\n\n\n<p>There is also a security aspect to consider. 
When running scripts locally\u200a\u2014\u200aespecially code generated by AI\u200a\u2014\u200awe may be executing code that interacts with files, network resources, or credentials on our machine.<\/p>\n\n\n\n<p>So, running locally is useful and often necessary, but it is not always reliable or safe in the long term.<\/p>\n\n\n\n<p>Let\u2019s start by running our example locally, and then we\u2019ll see how we can improve this execution model step by step.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Step 1: Local Execution of a Student Analytics Assistant<\/h4>\n\n\n\n<p>For this demonstration, I wanted a simple and realistic scenario that everyone can understand quickly.<\/p>\n\n\n\n<p>The example we will use is a small analytics assistant that analyzes student performance data.<\/p>\n\n\n\n<p>The dataset contains information such as student names, subjects, scores, and study time.<br>&nbsp;The application reads this data and generates a few insights, like calculating averages and identifying students who may need additional support.<\/p>\n\n\n\n<p>This is intentionally a very simple application. The goal of the demo is not to build a complex AI model, but to illustrate the development workflow and the execution environment.<\/p>\n\n\n\n<p>We start by running this application locally, which is how most experiments begin. 
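<p>To make the example concrete, a stripped-down version of the assistant could look like the sketch below. The inline records and field names are hypothetical stand-ins; the real demo reads the dataset from a file.<\/p>

```python
# Minimal sketch of the student analytics assistant.
# The records below are hypothetical; the demo loads real data from a dataset file.
students = [
    {"name": "Amina", "subject": "Math", "score": 62, "study_hours": 3},
    {"name": "Omar", "subject": "Math", "score": 88, "study_hours": 6},
    {"name": "Lina", "subject": "Science", "score": 45, "study_hours": 2},
]

def average_score(records):
    """Return the mean score across all records."""
    return sum(r["score"] for r in records) / len(records)

def needs_support(records, threshold=50):
    """Return the names of students scoring below the threshold."""
    return [r["name"] for r in records if r["score"] < threshold]

if __name__ == "__main__":
    print(f"Average score: {average_score(students):.1f}")
    print(f"May need support: {needs_support(students)}")
```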
Developers typically install dependencies, run scripts, and test ideas directly on their machines.<\/p>\n\n\n\n<p>This works well at the beginning, but as experiments grow and AI-generated code becomes more common, this approach starts to introduce risks and limitations.<\/p>\n\n\n\n<p>So let\u2019s first run the application locally and observe the result, and then we\u2019ll look at how we can improve this workflow.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Running in a Container<\/h3>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/900\/1*1sx4hRyZ2YwOVlPMm63Uiw.png\" alt=\"\" style=\"aspect-ratio:2.980209545983702;width:688px;height:auto\"\/><\/figure>\n\n\n\n<p>In the previous step, we ran the application locally and saw that it worked, but we also discussed the limitations of running directly on a developer machine.<\/p>\n\n\n\n<p>The next step is to improve this execution model by running the same application inside a container.<\/p>\n\n\n\n<p>Instead of relying on whatever happens to be installed on the local machine, we define a controlled runtime that includes the exact dependencies the application needs. This environment is packaged together with the application, so it can be reproduced consistently anywhere.<\/p>\n\n\n\n<p>Running in a container gives us three key benefits.<\/p>\n\n\n\n<p>First, reproducibility. Anyone can run the container and get the same result, regardless of their local setup.<\/p>\n\n\n\n<p>Second, isolation. The application runs in its own environment, reducing the risk of conflicts or unintended interactions with the host system.<\/p>\n\n\n\n<p>And third, portability. 
The same container can run on another developer\u2019s machine, in a test environment, or in the cloud without modification.<\/p>\n\n\n\n<p>Let\u2019s now see how we can take our existing application and package it into a container by adding a Dockerfile and building our first image.<\/p>\n\n\n\n<p>Docker provides a command called docker init, which analyzes the project and generates the configuration needed to containerize the application.<\/p>\n\n\n\n<p>This allows us to go from a local application to a containerized workload in just a few steps.<\/p>\n\n\n\n<p>Let\u2019s start by running docker init and see what it generates for us.<\/p>\n\n\n\n<p>Now that the application is running, let\u2019s take a moment to see what actually happened behind the scenes.<\/p>\n\n\n\n<p>I\u2019ll open Docker Desktop to show you the artifacts that were created.<\/p>\n\n\n\n<p>Here we can see the <strong>image<\/strong> that was built. The image contains everything needed to run the application: the runtime, dependencies, and the application itself.<\/p>\n\n\n\n<p>And here we can see the <strong>container<\/strong>, which is the running instance created from that image.<\/p>\n\n\n\n<p>This distinction is important:<br>&nbsp;The image is the blueprint, and the container is the running process.<\/p>\n\n\n\n<p>This is what gives us reproducibility. 
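<p>For a small Python application like this one, the Dockerfile that docker init scaffolds looks roughly like the sketch below. Treat it as an illustration: the exact file depends on the answers given to docker init, and app.py and requirements.txt are assumed file names.<\/p>

```dockerfile
# Sketch of a docker init-style Dockerfile (file names are assumptions).
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image
COPY . .

# Run the analytics script when the container starts
CMD ["python", "app.py"]
```

<p>From there, docker build -t student-analytics . produces the image, and docker run student-analytics starts a container from it.<\/p>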
Anyone can take the same image and run the same container on another machine or in the cloud, and get the same behavior.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Running in a&nbsp;Sandbox<\/h3>\n\n\n\n<p>So far, we\u2019ve seen how to run an application locally and then how to run the same application inside a container.<\/p>\n\n\n\n<p>Containers already give us reproducibility and a level of isolation, which is a big improvement over running directly on a developer machine.<\/p>\n\n\n\n<p>But when we work with AI systems, especially agents or code generated by AI, we often need an additional level of protection.<\/p>\n\n\n\n<p>In these situations, we may be executing code that we did not write ourselves, or code that interacts with files, tools, or external services. That introduces new risks, even if the application is containerized.<\/p>\n\n\n\n<p>This is where sandbox environments become important.<\/p>\n\n\n\n<p>A sandbox allows us to execute workloads in a temporary and controlled environment that can be created and destroyed easily. 
It provides stronger isolation and reduces the impact of unexpected behavior.<\/p>\n\n\n\n<p>In other words, we can experiment freely while protecting the host system and keeping the environment clean.<\/p>\n\n\n\n<p>Let\u2019s now run this application inside a sandbox environment and observe how this execution model works in practice.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Docker Sandbox<\/h3>\n\n\n\n<p>Docker Sandbox provides an <strong>isolated development and execution environment<\/strong> designed to safely run code, tools, and workloads without affecting the host system.<\/p>\n\n\n\n<p>It is especially useful for:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AI-generated code<\/li>\n\n\n\n<li>Experimental scripts<\/li>\n\n\n\n<li>Agent-driven workflows<\/li>\n\n\n\n<li>Dependency-heavy workloads<\/li>\n<\/ul>\n\n\n\n<p>A sandbox behaves like a <strong>temporary development machine<\/strong> where you can install tools, run containers, and test code safely.<\/p>\n\n\n\n<p>Before we run anything, I copied the project to another directory called After. Next, I want to confirm that the sandbox capability is available in this Docker Desktop installation. The key point is that we\u2019re about to run the workload in an isolated environment, separate from my host machine.<\/p>\n\n\n\n<p>I\u2019m creating a sandbox VM bound to this workspace folder, using the command line to create a sandbox for the workspace.<\/p>\n\n\n\n<p>Think of it as a disposable, isolated development environment. If anything weird happens (a dependency mess, file changes, unexpected commands), I can destroy the whole sandbox and return to a clean state.<\/p>\n\n\n\n<p>We can run docker sandbox ls to check whether our sandbox was created.<\/p>\n\n\n\n<p>The next step is to launch an interactive agent session inside a sandbox with this command line: docker sandbox run claude<\/p>\n\n\n\n<p>If you want to execute agents using Claude, this is a great solution. 
As you can see here, it requires a subscription. In our example, we will just run our code inside the sandbox; it\u2019s not an AI agent.<\/p>\n\n\n\n<p>To go inside the sandbox, we use this command line: docker sandbox exec -it student-sbx bash<\/p>\n\n\n\n<p>Now I\u2019m <em>inside<\/em> the sandbox. This is not my host OS. This is the isolation boundary. From here, I can run experiments more safely\u200a\u2014\u200aespecially when code is AI-generated or not fully trusted.<\/p>\n\n\n\n<p>The sandbox is now ready as a fresh environment for testing your project.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Transition to&nbsp;Cloud<\/h3>\n\n\n\n<p>Up to now, we have been running everything locally\u200a\u2014\u200afirst directly on the machine, and then inside containers.<\/p>\n\n\n\n<p>One of the biggest advantages of containers is portability. The container image we built locally is not tied to this machine. It is a self-contained package that can run anywhere a container runtime is available.<\/p>\n\n\n\n<p>Moving to the cloud is therefore a straightforward process. First, we build the container image, which we have already done. Then we push that image to a container registry so it becomes accessible from the cloud environment. Finally, we deploy that same image to a managed service such as Azure Container Apps.<\/p>\n\n\n\n<p>The key point here is that we are not modifying the application. We are not changing dependencies or configuration. The same container image moves from local development to the cloud unchanged.<\/p>\n\n\n\n<p>This consistency is what makes containers such a powerful foundation for modern AI and cloud-native applications.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Azure Architecture<\/h3>\n\n\n\n<p>Let\u2019s look briefly at how this solution is structured in the cloud.<\/p>\n\n\n\n<p>At the center, we have Azure Container Apps. This is the service that runs our containerized application. 
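<p>The build, push, and deploy flow described earlier can be sketched with the Azure CLI as follows. All resource names here (mydemoregistry, demo-rg, student-analytics) are hypothetical placeholders, and an existing resource group and container registry are assumed.<\/p>

```shell
# Authenticate with Azure and the container registry (names are placeholders)
az login
az acr login --name mydemoregistry

# Tag the locally built image and push it to Azure Container Registry
docker tag student-analytics:latest mydemoregistry.azurecr.io/student-analytics:v1
docker push mydemoregistry.azurecr.io/student-analytics:v1

# Deploy the same image, unchanged, to Azure Container Apps
az containerapp up \
  --name student-analytics \
  --resource-group demo-rg \
  --image mydemoregistry.azurecr.io/student-analytics:v1
```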
It provides a managed runtime, so we don\u2019t need to manage virtual machines or orchestrators ourselves.<\/p>\n\n\n\n<p>The container image is stored in Azure Container Registry, which acts as a secure repository for our images. This allows us to version, store, and deploy containers in a controlled way.<\/p>\n\n\n\n<p>If the application needs AI capabilities, it can connect to Azure OpenAI or other AI services. This allows us to integrate language models or other AI features without embedding models directly into the container.<\/p>\n\n\n\n<p>Finally, Log Analytics provides monitoring and observability. It collects logs and metrics so we can understand how the application behaves in production and troubleshoot issues when necessary.<\/p>\n\n\n\n<p>The key benefits of this architecture are managed scaling, secure networking, and built-in observability, which allow teams to focus on the application rather than the infrastructure.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Responsible AI Development<\/h3>\n\n\n\n<p>When we talk about responsible AI, we often think about ethics and governance, but there is also a very practical dimension that concerns architecture and operations.<\/p>\n\n\n\n<p>In this context, I like to think of responsible AI development as resting on four pillars.<\/p>\n\n\n\n<p>The first is isolation. We need to ensure that workloads, especially experimental or AI-generated code, run in controlled environments so that failures or unexpected behavior do not impact other systems.<\/p>\n\n\n\n<p>The second pillar is identity. Applications should access services securely using managed identities or secure credentials, rather than hardcoded keys. This reduces security risks and improves traceability.<\/p>\n\n\n\n<p>The third pillar is observability. We need visibility into logs, metrics, and execution behavior to understand how systems operate and to detect issues early.<\/p>\n\n\n\n<p>And finally, cost awareness. 
AI workloads can scale quickly and consume significant resources, so monitoring usage and controlling costs must be part of the design from the beginning.<\/p>\n\n\n\n<p>Responsible AI is therefore not only an ethical topic\u200a\u2014\u200ait is also technical, operational, and financial. It is about building systems that are safe, reliable, and sustainable in real environments.<\/p>\n\n\n\n<p>Let me close by summarizing the journey we followed today.<\/p>\n\n\n\n<p>On the left, we start with the typical local development environment. It\u2019s fast and convenient, but it often becomes risky and difficult to control. Dependencies change, environments drift, and running AI-generated code can introduce security risks.<\/p>\n\n\n\n<p>In the middle, we introduced containers. Containers give us isolation, reproducibility, and controlled execution. The application, its dependencies, and its runtime are packaged together, creating a consistent and reliable environment.<\/p>\n\n\n\n<p>On the right, we moved that same container to the cloud. Using services like Azure Container Apps, we gain scalability, monitoring, and secure integration with AI services\u200a\u2014\u200awithout changing the application itself.<\/p>\n\n\n\n<p>So the key message of this session is that secure AI development is not about slowing down experimentation. It\u2019s about creating a reliable path from local experimentation to production by using isolation, reproducibility, and controlled execution as fundamental principles.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hands-on From Docker to Azure: Secure AI Development We\u2019ll start by running a small AI workload locally. This will give us a baseline and help us understand the limitations of running directly on a developer machine. Then we\u2019ll run the same workload inside a container. This step shows how containers improve reproducibility and provide a controlled execution environment without changing the application itself. 
It\u2019s about creating a reliable path from local experimentation to production by using isolation, reproducibility, and controlled execution as fundamental principles.<\/p>\n","protected":false},"author":1,"featured_media":199,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[5,4],"tags":[43,59,54,40,27,26],"class_list":["post-198","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-containers","tag-ai","tag-architecture","tag-azure","tag-cloud","tag-container","tag-docker"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Building AI Applications with Docker to the Cloud Azure: A Hands-On Guide - Next-Generation Tech Blogs<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/rebaihamida.com\/?p=198\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Building AI Applications with Docker to the Cloud Azure: A Hands-On Guide - Next-Generation Tech Blogs\" \/>\n<meta property=\"og:description\" content=\"Hands-on From Docker to Azure: Secure AI Development We\u2019ll start by running a small AI workload locally. This will give us a baseline and help us understand the limitations of running directly on a developer machine. Then we\u2019ll run the same workload inside a container. This step shows how containers improve reproducibility and provide a controlled execution environment without changing the application itself. After that, we\u2019ll execute the workload inside a sandbox environment. 
This is particularly important when dealing with AI-generated code or agent-driven workflows, where isolation becomes essential for protecting the host system. Finally, I\u2019ll show how the same containerized workload can be moved to Azure. The key idea here is consistency: the application we run locally is the same one that runs in the cloud. The goal of this demo is not to focus on complex code, but to understand the workflow and the architecture that allow us to build AI systems safely and reliably. Goals This demo is about process and architecture, not model complexity. Before we start the live demonstration, I want to be very clear about its purpose. The goal of this demo is not to build a complex AI application or to dive deep into code. Instead, it\u2019s to show how we can execute AI workloads safely and consistently as we move from experimentation to production. First, we\u2019ll demonstrate safe execution of AI workloads by running them in controlled environments, reducing the risk of unintended side effects on the host system. Second, we\u2019ll focus on reproducible environments. The same workload should behave the same way regardless of where it runs, whether that\u2019s on a local machine or in the cloud. Third, we\u2019ll highlight container portability. The container image we build locally is the same artifact we deploy later, without modification. Finally, we\u2019ll show how these technical choices support responsible AI practices by enabling better isolation, governance, and operational control throughout the lifecycle. Running Locally Before we look at containers or sandbox environments, let\u2019s start with how most experiments are actually performed today. Typically, a developer runs an application directly on their machine. The application uses the local runtime, the local libraries, and whatever dependencies are installed in that environment. This approach works well at the beginning because it is fast and convenient. 
It allows us to prototype quickly and test ideas without much setup. However, over time, this model introduces several challenges. One of them is environment drift. As we install packages and update libraries, our environment slowly changes, and it becomes difficult to reproduce the same results later or on another machine. Another issue is dependency management. We often install libraries temporarily, and it becomes unclear which versions are actually required for the application to work correctly. There is also a security aspect to consider. When running scripts locally\u200a\u2014\u200aespecially code generated by AI\u200a\u2014\u200awe may be executing code that interacts with files, network resources, or credentials on our machine. So, running locally is useful and often necessary, but it is not always reliable or safe in the long term. Let\u2019s start by running our example locally, and then we\u2019ll see how we can improve this execution model step by step. Step 1: Local Execution of a Student Analytics Assistant For this demonstration, I wanted a simple and realistic scenario that everyone can understand quickly. The example we will use is a small analytics assistant that analyzes student performance data. The dataset contains information such as student names, subjects, scores, and study time.&nbsp;The application reads this data and generates a few insights, like calculating averages and identifying students who may need additional support. This is intentionally a very simple application. The goal of the demo is not to build a complex AI model, but to illustrate the development workflow and the execution environment. We start by running this application locally, which is how most experiments begin. Developers typically install dependencies, run scripts, and test ideas directly on their machines. This works well at the beginning, but as experiments grow and AI-generated code becomes more common, this approach starts to introduce risks and limitations. 
So let\u2019s first run the application locally and observe the result, and then we\u2019ll look at how we can improve this workflow. Running in a Container In the previous step, we ran the application locally and saw that it worked, but we also discussed the limitations of running directly on a developer machine. The next step is to improve this execution model by running the same application inside a container. Instead of relying on whatever happens to be installed on the local machine, we define a controlled runtime that includes the exact dependencies the application needs. This environment is packaged together with the application, so it can be reproduced consistently anywhere. Running in a container gives us three key benefits. First, reproducibility. Anyone can run the container and get the same result, regardless of their local setup. Second, isolation. The application runs in its own environment, reducing the risk of conflicts or unintended interactions with the host system. And third, portability. The same container can run on another developer\u2019s machine, in a test environment, or in the cloud without modification. Let\u2019s now see how we can take our existing application and package it into a container by adding a Dockerfile and building our first image. Docker provides a command called docker init, which analyzes the project and generates the configuration needed to containerize the application. This allows us to go from a local application to a containerized workload in just a few steps. Let\u2019s start by running docker init and see what it generates for us. Now that the application is running, let\u2019s take a moment to see what actually happened behind the scenes. I\u2019ll open Docker Desktop to show you the artifacts that were created. Here we can see the image that was built. The image contains everything needed to run the application: the runtime, dependencies, and the application itself. 
And here we can see the container, which is the running instance created from that image. This distinction is important:&nbsp;The image is the blueprint, and the container is the running process. This is what gives us reproducibility. Anyone can take the same image and run the same container on another machine or in the cloud, and get the same behavior. Running in a&nbsp;Sandbox So far, we\u2019ve seen how to run an application locally and then how to run the same application inside a container. Containers already give us reproducibility and a level of isolation, which is a big improvement over running directly on a developer machine. But when we work with AI systems, especially agents or code generated by AI, we often need an additional level of protection. In these situations, we may be executing code that we did not write ourselves, or code that interacts with files, tools, or external services. That introduces new risks, even if the application is containerized. This is where sandbox environments become important. A sandbox allows us to execute workloads in a temporary and controlled environment that can be created and destroyed easily. It provides stronger isolation and reduces the impact of unexpected behavior. In other words, we can experiment freely while protecting the host system and keeping the environment clean. Let\u2019s now run this application inside a sandbox environment and observe how this execution model works in practice. Docker Sandbox \u2022Docker Sandbox provides an isolated development and execution environment designed to safely run code, tools, and workloads without affecting the host system. \u2022It is especially useful for: \u2022AI-generated code \u2022Experimental scripts \u2022Agent-driven workflows \u2022Dependency-heavy workloads A sandbox behaves like a temporary development machine where you can install tools, run containers, and test code safely. Before we run anything, I copied the project to another directory called After. 
Next, I want to confirm the sandbox capability is available in this Docker Desktop installation. The key point is: we\u2019re about to run the workload in an isolated environment separate from my host machine. I\u2019m creating a sandbox VM bound to this workspace folder. I will use this command line to create a sandbox for your workspace Think of it as a disposable, isolated development environment. If anything, weird happens\u200a\u2014\u200adependency mess, file changes, unexpected commands\u200a\u2014\u200aI can destroy the whole sandbox and return to a clean state. We will use docker sandbox ls to check if our sandbox is created or not Next step consist on Opening an interactive shell inside the sandbox using this command line: docker sandbox run claude If you want to execute agents using Claude, this is a great solution. As you can see here, it requires a subscription. In our example, we will just run our code inside the sandbox; it\u2019s not an AI agent. To go inside the sandbox using this command line. docker sandbox exec -it student-sbx bash Now I\u2019m inside the sandbox. This is not my host OS. This is the isolation boundary. From here, I can run experiments more safely\u200a\u2014\u200aespecially when code is AI-generated or not fully trusted. So your sandbox is ready as a new environment to use to test your project. Transition to&nbsp;Cloud Up to now, we have been running everything locally\u200a\u2014\u200afirst directly on the machine, and then inside containers. One of the biggest advantages of containers is portability. The container image we built locally is not tied to this machine. It is a self-contained package that can run anywhere a container runtime is available. Moving to the cloud is therefore a straightforward process. First, we build the container image, which we have already done. Then we push that image to a container registry so it becomes accessible from the cloud environment. 
Finally, we deploy that same image to a managed service such as Azure Container Apps. The key point here is that we are not modifying the application. We are not changing dependencies or configuration. The same container image moves from local development to the cloud unchanged. This consistency is what makes containers such a powerful foundation for modern AI and cloud-native applications. Azure Architecture Let\u2019s look briefly at how this solution is structured in the cloud. At the center, we have Azure Container Apps. This is the service that runs our containerized application. It provides a managed runtime, so we don\u2019t need to manage virtual machines or orchestrators ourselves. The container image is stored in Azure Container Registry, which acts as a secure repository for our images. This allows us to version, store, and deploy containers in a controlled way. If the application needs AI capabilities, it can connect to Azure OpenAI or other AI services. This allows us to integrate language models or other AI features without embedding models directly into the container. Finally, Log Analytics provides monitoring and observability. It collects logs and metrics so we can understand how the application behaves in production and troubleshoot issues when necessary. The key benefits of this architecture are managed scaling, secure networking, and built-in observability, which allow teams to focus on the application rather than the infrastructure. Responsible AI Development When we talk about responsible AI, we often think about ethics and governance, but there is also a very practical dimension that concerns architecture and operations. In this context, I like to think of responsible AI development as resting on four pillars. The first is isolation. We need to ensure that workloads, especially experimental or AI-generated code, run in controlled environments so that failures or unexpected behavior do not impact other systems. The second pillar is identity. 
Applications should access services securely using managed identities or secure credentials, rather than hardcoded keys. This reduces security risks and improves traceability. The third pillar is observability. We need visibility into logs, metrics, and execution behavior to understand how systems operate and to detect issues early. And finally, cost awareness. AI workloads can scale quickly and consume significant resources, so monitoring usage and controlling costs must be part of the design from the beginning. Responsible AI is therefore not only an ethical topic\u200a\u2014\u200ait is also technical, operational, and financial. It is about building systems that are safe, reliable, and sustainable in real environments. Let me close by summarizing the journey we followed today. On the left, we start with the typical local development environment. It\u2019s fast and convenient, but it often becomes risky and difficult to control. Dependencies change, environments drift, and running AI-generated code can introduce security risks. In the middle, we introduced containers. Containers give us isolation, reproducibility, and controlled execution. The application, its dependencies, and its runtime are packaged together, creating a consistent and reliable environment. On the right, we moved that same container to the cloud. Using services like Azure Container Apps, we gain scalability, monitoring, and secure integration with AI services\u200a\u2014\u200awithout changing the application itself. So the key message of this session is that secure AI development is not about slowing down experimentation. 
It\u2019s about creating a reliable path from local experimentation to production by using isolation, reproducibility, and controlled execution as fundamental principles.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/rebaihamida.com\/?p=198\" \/>\n<meta property=\"og:site_name\" content=\"Next-Generation Tech Blogs\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-20T19:04:06+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-20T19:04:07+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/rebaihamida.com\/wp-content\/uploads\/2026\/02\/imagedockerazure.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Hamida Rebai\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Hamida Rebai\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198\"},\"author\":{\"name\":\"Hamida Rebai\",\"@id\":\"http:\\\/\\\/rebaihamida.com\\\/#\\\/schema\\\/person\\\/f6dffae6f5fa8098da26264a0b318771\"},\"headline\":\"Building AI Applications with Docker to the Cloud Azure: A Hands-On Guide\",\"datePublished\":\"2026-02-20T19:04:06+00:00\",\"dateModified\":\"2026-02-20T19:04:07+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198\"},\"wordCount\":2218,\"commentCount\":0,\"publisher\":{\"@id\":\"http:\\\/\\\/rebaihamida.com\\\/#\\\/schema\\\/person\\\/f6dffae6f5fa8098da26264a0b318771\"},\"image\":{\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/rebaihamida.com\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/imagedockerazure.png\",\"keywords\":[\"AI\",\"architecture\",\"azure\",\"cloud\",\"Container\",\"Docker\"],\"articleSection\":[\"AI\",\"Containers\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/rebaihamida.com\\\/?p=198#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198\",\"url\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198\",\"name\":\"Building AI Applications with Docker to the Cloud Azure: A Hands-On Guide - Next-Generation Tech 
Blogs\",\"isPartOf\":{\"@id\":\"http:\\\/\\\/rebaihamida.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/rebaihamida.com\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/imagedockerazure.png\",\"datePublished\":\"2026-02-20T19:04:06+00:00\",\"dateModified\":\"2026-02-20T19:04:07+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/rebaihamida.com\\\/?p=198\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198#primaryimage\",\"url\":\"https:\\\/\\\/rebaihamida.com\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/imagedockerazure.png\",\"contentUrl\":\"https:\\\/\\\/rebaihamida.com\\\/wp-content\\\/uploads\\\/2026\\\/02\\\/imagedockerazure.png\",\"width\":1536,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/?p=198#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"http:\\\/\\\/rebaihamida.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Building AI Applications with Docker to the Cloud Azure: A Hands-On Guide\"}]},{\"@type\":\"WebSite\",\"@id\":\"http:\\\/\\\/rebaihamida.com\\\/#website\",\"url\":\"http:\\\/\\\/rebaihamida.com\\\/\",\"name\":\"Next-Generation Tech Blogs\",\"description\":\"Next-Generation Tech Blogs for Modern 
Thinkers\",\"publisher\":{\"@id\":\"http:\\\/\\\/rebaihamida.com\\\/#\\\/schema\\\/person\\\/f6dffae6f5fa8098da26264a0b318771\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"http:\\\/\\\/rebaihamida.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":[\"Person\",\"Organization\"],\"@id\":\"http:\\\/\\\/rebaihamida.com\\\/#\\\/schema\\\/person\\\/f6dffae6f5fa8098da26264a0b318771\",\"name\":\"Hamida Rebai\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/cropped-site-icon.png\",\"url\":\"https:\\\/\\\/rebaihamida.com\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/cropped-site-icon.png\",\"contentUrl\":\"https:\\\/\\\/rebaihamida.com\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/cropped-site-icon.png\",\"width\":512,\"height\":512,\"caption\":\"Hamida Rebai\"},\"logo\":{\"@id\":\"https:\\\/\\\/rebaihamida.com\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/cropped-site-icon.png\"},\"sameAs\":[\"http:\\\/\\\/rebaihamida.com\",\"https:\\\/\\\/www.linkedin.com\\\/in\\\/hamida-rebai-trabelsi\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@RebaHamidaMVP\"],\"url\":\"https:\\\/\\\/rebaihamida.com\\\/?author=1\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Building AI Applications with Docker to the Cloud Azure: A Hands-On Guide - Next-Generation Tech Blogs","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/rebaihamida.com\/?p=198","og_locale":"en_US","og_type":"article","og_title":"Building AI Applications with Docker to the Cloud Azure: A Hands-On Guide - Next-Generation Tech Blogs","og_description":"Hands-on From Docker to Azure: Secure AI Development We\u2019ll start by running a small AI workload locally. This will give us a baseline and help us understand the limitations of running directly on a developer machine. Then we\u2019ll run the same workload inside a container. This step shows how containers improve reproducibility and provide a controlled execution environment without changing the application itself. After that, we\u2019ll execute the workload inside a sandbox environment. This is particularly important when dealing with AI-generated code or agent-driven workflows, where isolation becomes essential for protecting the host system. Finally, I\u2019ll show how the same containerized workload can be moved to Azure. The key idea here is consistency: the application we run locally is the same one that runs in the cloud. The goal of this demo is not to focus on complex code, but to understand the workflow and the architecture that allow us to build AI systems safely and reliably. Goals This demo is about process and architecture, not model complexity. Before we start the live demonstration, I want to be very clear about its purpose. The goal of this demo is not to build a complex AI application or to dive deep into code. Instead, it\u2019s to show how we can execute AI workloads safely and consistently as we move from experimentation to production. 
First, we\u2019ll demonstrate safe execution of AI workloads by running them in controlled environments, reducing the risk of unintended side effects on the host system. Second, we\u2019ll focus on reproducible environments. The same workload should behave the same way regardless of where it runs, whether that\u2019s on a local machine or in the cloud. Third, we\u2019ll highlight container portability. The container image we build locally is the same artifact we deploy later, without modification. Finally, we\u2019ll show how these technical choices support responsible AI practices by enabling better isolation, governance, and operational control throughout the lifecycle. Running Locally Before we look at containers or sandbox environments, let\u2019s start with how most experiments are actually performed today. Typically, a developer runs an application directly on their machine. The application uses the local runtime, the local libraries, and whatever dependencies are installed in that environment. This approach works well at the beginning because it is fast and convenient. It allows us to prototype quickly and test ideas without much setup. However, over time, this model introduces several challenges. One of them is environment drift. As we install packages and update libraries, our environment slowly changes, and it becomes difficult to reproduce the same results later or on another machine. Another issue is dependency management. We often install libraries temporarily, and it becomes unclear which versions are actually required for the application to work correctly. There is also a security aspect to consider. When running scripts locally\u200a\u2014\u200aespecially code generated by AI\u200a\u2014\u200awe may be executing code that interacts with files, network resources, or credentials on our machine. So, running locally is useful and often necessary, but it is not always reliable or safe in the long term. 
Let\u2019s start by running our example locally, and then we\u2019ll see how we can improve this execution model step by step. Step 1: Local Execution of a Student Analytics Assistant For this demonstration, I wanted a simple and realistic scenario that everyone can understand quickly. The example we will use is a small analytics assistant that analyzes student performance data. The dataset contains information such as student names, subjects, scores, and study time.&nbsp;The application reads this data and generates a few insights, like calculating averages and identifying students who may need additional support. This is intentionally a very simple application. The goal of the demo is not to build a complex AI model, but to illustrate the development workflow and the execution environment. We start by running this application locally, which is how most experiments begin. Developers typically install dependencies, run scripts, and test ideas directly on their machines. This works well at the beginning, but as experiments grow and AI-generated code becomes more common, this approach starts to introduce risks and limitations. So let\u2019s first run the application locally and observe the result, and then we\u2019ll look at how we can improve this workflow. Running in a Container In the previous step, we ran the application locally and saw that it worked, but we also discussed the limitations of running directly on a developer machine. The next step is to improve this execution model by running the same application inside a container. Instead of relying on whatever happens to be installed on the local machine, we define a controlled runtime that includes the exact dependencies the application needs. This environment is packaged together with the application, so it can be reproduced consistently anywhere. Running in a container gives us three key benefits. First, reproducibility. Anyone can run the container and get the same result, regardless of their local setup. 
Second, isolation. The application runs in its own environment, reducing the risk of conflicts or unintended interactions with the host system. And third, portability. The same container can run on another developer\u2019s machine, in a test environment, or in the cloud without modification. Let\u2019s now see how we can take our existing application and package it into a container by adding a Dockerfile and building our first image. Docker provides a command called docker init, which analyzes the project and generates the configuration needed to containerize the application. This allows us to go from a local application to a containerized workload in just a few steps. Let\u2019s start by running docker init and see what it generates for us. Now that the application is running, let\u2019s take a moment to see what actually happened behind the scenes. I\u2019ll open Docker Desktop to show you the artifacts that were created. Here we can see the image that was built. The image contains everything needed to run the application: the runtime, dependencies, and the application itself. And here we can see the container, which is the running instance created from that image. This distinction is important:&nbsp;The image is the blueprint, and the container is the running process. This is what gives us reproducibility. Anyone can take the same image and run the same container on another machine or in the cloud, and get the same behavior. Running in a&nbsp;Sandbox So far, we\u2019ve seen how to run an application locally and then how to run the same application inside a container. Containers already give us reproducibility and a level of isolation, which is a big improvement over running directly on a developer machine. But when we work with AI systems, especially agents or code generated by AI, we often need an additional level of protection. 
In these situations, we may be executing code that we did not write ourselves, or code that interacts with files, tools, or external services. That introduces new risks, even if the application is containerized. This is where sandbox environments become important. A sandbox allows us to execute workloads in a temporary and controlled environment that can be created and destroyed easily. It provides stronger isolation and reduces the impact of unexpected behavior. In other words, we can experiment freely while protecting the host system and keeping the environment clean. Let\u2019s now run this application inside a sandbox environment and observe how this execution model works in practice. Docker Sandbox \u2022Docker Sandbox provides an isolated development and execution environment designed to safely run code, tools, and workloads without affecting the host system. \u2022It is especially useful for: \u2022AI-generated code \u2022Experimental scripts \u2022Agent-driven workflows \u2022Dependency-heavy workloads A sandbox behaves like a temporary development machine where you can install tools, run containers, and test code safely. Before we run anything, I copied the project to another directory called After. Next, I want to confirm the sandbox capability is available in this Docker Desktop installation. The key point is: we\u2019re about to run the workload in an isolated environment separate from my host machine. I\u2019m creating a sandbox VM bound to this workspace folder. I will use this command line to create a sandbox for your workspace Think of it as a disposable, isolated development environment. If anything, weird happens\u200a\u2014\u200adependency mess, file changes, unexpected commands\u200a\u2014\u200aI can destroy the whole sandbox and return to a clean state. 
We will use docker sandbox ls to check if our sandbox is created or not Next step consist on Opening an interactive shell inside the sandbox using this command line: docker sandbox run claude If you want to execute agents using Claude, this is a great solution. As you can see here, it requires a subscription. In our example, we will just run our code inside the sandbox; it\u2019s not an AI agent. To go inside the sandbox using this command line. docker sandbox exec -it student-sbx bash Now I\u2019m inside the sandbox. This is not my host OS. This is the isolation boundary. From here, I can run experiments more safely\u200a\u2014\u200aespecially when code is AI-generated or not fully trusted. So your sandbox is ready as a new environment to use to test your project. Transition to&nbsp;Cloud Up to now, we have been running everything locally\u200a\u2014\u200afirst directly on the machine, and then inside containers. One of the biggest advantages of containers is portability. The container image we built locally is not tied to this machine. It is a self-contained package that can run anywhere a container runtime is available. Moving to the cloud is therefore a straightforward process. First, we build the container image, which we have already done. Then we push that image to a container registry so it becomes accessible from the cloud environment. Finally, we deploy that same image to a managed service such as Azure Container Apps. The key point here is that we are not modifying the application. We are not changing dependencies or configuration. The same container image moves from local development to the cloud unchanged. This consistency is what makes containers such a powerful foundation for modern AI and cloud-native applications. Azure Architecture Let\u2019s look briefly at how this solution is structured in the cloud. At the center, we have Azure Container Apps. This is the service that runs our containerized application. 
It provides a managed runtime, so we don\u2019t need to manage virtual machines or orchestrators ourselves. The container image is stored in Azure Container Registry, which acts as a secure repository for our images. This allows us to version, store, and deploy containers in a controlled way. If the application needs AI capabilities, it can connect to Azure OpenAI or other AI services. This allows us to integrate language models or other AI features without embedding models directly into the container. Finally, Log Analytics provides monitoring and observability. It collects logs and metrics so we can understand how the application behaves in production and troubleshoot issues when necessary. The key benefits of this architecture are managed scaling, secure networking, and built-in observability, which allow teams to focus on the application rather than the infrastructure. Responsible AI Development When we talk about responsible AI, we often think about ethics and governance, but there is also a very practical dimension that concerns architecture and operations. In this context, I like to think of responsible AI development as resting on four pillars. The first is isolation. We need to ensure that workloads, especially experimental or AI-generated code, run in controlled environments so that failures or unexpected behavior do not impact other systems. The second pillar is identity. Applications should access services securely using managed identities or secure credentials, rather than hardcoded keys. This reduces security risks and improves traceability. The third pillar is observability. We need visibility into logs, metrics, and execution behavior to understand how systems operate and to detect issues early. And finally, cost awareness. AI workloads can scale quickly and consume significant resources, so monitoring usage and controlling costs must be part of the design from the beginning. 
<p>Responsible AI is therefore not only an ethical topic: it is also technical, operational, and financial. It is about building systems that are safe, reliable, and sustainable in real environments.</p>

<p>Let me close by summarizing the journey we followed today.</p>

<p>On the left, we start with the typical local development environment. It's fast and convenient, but it often becomes risky and difficult to control. Dependencies change, environments drift, and running AI-generated code can introduce security risks.</p>

<p>In the middle, we introduced containers. Containers give us isolation, reproducibility, and controlled execution. The application, its dependencies, and its runtime are packaged together, creating a consistent and reliable environment.</p>

<p>On the right, we moved that same container to the cloud. Using services like Azure Container Apps, we gain scalability, monitoring, and secure integration with AI services, without changing the application itself.</p>

<p>So the key message of this session is that secure AI development is not about slowing down experimentation. It's about creating a reliable path from local experimentation to production by using isolation, reproducibility, and controlled execution as fundamental principles.</p>