
Running Azure CLI in Docker container




I like to keep my machine nice and tidy. Thus, I really like to develop with and “install” third-party tools as Docker containers. In this instalment of my how-to articles, let’s see how to run the Azure CLI as a container on a macOS laptop.


Prepare directories

The Azure CLI stores temporary credentials on the local filesystem. By default, the location is ~/.azure. With the containerized approach, we can decide the location ourselves, but there is no reason to reinvent the wheel here.

So, if that directory does not exist, let’s create it:

mkdir ~/.azure

Sometimes we need to export some data with the Azure CLI, so let’s also create a “project directory” az_work within a projects directory:

cd ~/Devs/Projects
mkdir az_work

Available Azure CLI Containers

As always with container images, make sure you know what you are downloading from the internet. I like to stick with the official images as much as possible.

The Azure CLI is available in multiple versions, each denoted by a container image tag. The full list of available tags/versions is available here.

In most cases, container images are stored in Docker’s registry, Docker Hub, but Microsoft publishes its images in its own registry, the Microsoft Container Registry (MCR), so the image must be pulled from there.

To pull the image with CLI version 2.9.1, let’s run:

docker pull mcr.microsoft.com/azure-cli:2.9.1

It is good practice to always use a specific container image tag instead of the latest. Explicit is always better than implicit.

Running the Azure CLI Container

Running the container with default options, without giving any arguments, will require you to log in to Azure every time you run the container. This is because Docker containers are ephemeral, and their state is lost each time the container is stopped.

To make the login persistent — until the tokens expire of course — we will mount a local folder to persist the data. Remember the directory we created earlier? And while we are at it, let’s also mount the project directory for moving data into and out of the CLI.

# Move to project dir
cd ~/Devs/Projects/az_work
# Run the container
docker run --rm -it -v ~/.azure:/root/.azure -v $(pwd):/root mcr.microsoft.com/azure-cli:2.9.1

This drops us into the bash shell of the running container. The first thing we need to do is log in, so run az login. This shows a URL and a code; visit the URL in a browser and enter the code into the form that opens. Enter your Azure credentials, and select the tenant if necessary. If the login is successful, the CLI shows us the default tenant and subscription info.

The login info is now persisted to our host machine, thanks to the first volume mount.

Just to test things out, run the command az account show. It should return the same info as the login just before. We can now exit the container by typing exit.

So do we need to type all the commands and the super-long docker run every time we want to run az commands? Does not seem so handy after all…

Of course not. Let’s make a shell alias to ease things a bit. I use zsh, so I will add it to the .zshrc file, like so:

alias az='docker run --rm -it -v ~/.azure:/root/.azure -v $(pwd):/root mcr.microsoft.com/azure-cli:2.9.1 az '

Please take note of the trailing az followed by a blank space. This makes the Azure CLI commands really easy to run. After reloading the shell with source ~/.zshrc, let’s run az account show again, this time from the host, without entering the running container “manually”. Cool, right :)
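As a variation on the alias, the same thing can be written as a shell function. This is just a sketch, assuming the same mcr.microsoft.com/azure-cli:2.9.1 image as above; the upside is that the working-directory path gets quoted (handy if a project path contains spaces) and arguments are forwarded explicitly:

```shell
# Same idea as the alias, but as a zsh/bash function:
# "$(pwd)" is quoted, and "$@" forwards all arguments to az inside the container
az() {
  docker run --rm -it \
    -v ~/.azure:/root/.azure \
    -v "$(pwd)":/root \
    mcr.microsoft.com/azure-cli:2.9.1 az "$@"
}
```

Whether you prefer the alias or the function is a matter of taste; both expand $(pwd) at call time, so the mount always follows your current directory.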

Export data with the containerized CLI

The Azure CLI enables us to export command outputs, for example as JSON. Because we mounted the current host directory as a second volume in addition to the credential volume, we can simply run:

az account show >"account-info.json"

to save the data to a local file.

Use local data with containerized CLI

Some Azure CLI commands require/enable us to reference JSON documents in order to run. One example is creating an Azure Policy definition.

Let’s try this out too.

First we need a policy document. Let’s create a new file (name can be anything):

touch AuditStorageAccounts.json

… and enter the following content to it:

{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Storage/storageAccounts"
      },
      {
        "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction",
        "equals": "Allow"
      }
    ]
  },
  "then": {
    "effect": "audit"
  }
}

If you are curious, this defines a policy that will audit Storage Accounts which are open to public networks.
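If you prefer to create the file straight from the shell, the same document can be written with a heredoc and sanity-checked with Python’s stdlib JSON parser (python3 is assumed to be available, as it typically is on macOS):

```shell
# Write the policy document in one step...
cat > AuditStorageAccounts.json <<'EOF'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      { "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction", "equals": "Allow" }
    ]
  },
  "then": { "effect": "audit" }
}
EOF
# ...and verify it parses as JSON before handing it to the CLI
python3 -m json.tool AuditStorageAccounts.json >/dev/null && echo "valid JSON"
```

Catching a malformed document locally like this is quicker than waiting for the CLI to reject it.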

To run the command and create the policy definition, run:

az policy definition create --name 'audit-storage-accounts-open-to-public-networks' --display-name 'Audit Storage Accounts Open to Public Networks' --description 'This policy ensures that storage accounts with exposures to public networks are audited.' --rules AuditStorageAccounts.json --mode All --metadata category=myCategory

Since the command runs successfully, we have now verified that the Azure CLI works even when we provide data from local files to the container.

With the alias we created earlier, and as our tests above demonstrate, we can now use the Azure CLI just as if it were installed locally on the host machine.

Azure CLI interactive mode

Even the interactive mode works just as well. We can enter it by running az interactive.

Benefits of containerized Azure CLI

With the containerized approach, we get a few significant benefits over a locally installed CLI:

  1. We can easily run multiple versions of the Azure CLI.
  2. We are not polluting our host machine with a Homebrew-installed Python, on which the Azure CLI depends.
  3. We can easily delete an Azure CLI version should we want to, simply by running docker rmi [image-id].

Further reading