Docker Run Ansible Playbook




The community.docker collection offers several modules and plugins for orchestrating Docker containers and Docker Swarm.

Most of the modules and plugins in community.docker require the Docker SDK for Python. The SDK needs to be installed on the machines where the modules and plugins are executed, and for the Python version(s) with which the modules and plugins are executed. You can use the community.general.python_requirements_info module to make sure that the Docker SDK for Python is installed on the correct machine and for the Python version used by Ansible.
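As an illustrative sketch, such a check could look like the following task (the version constraint and the registered variable name are placeholders; `not_found` is the list of missing packages returned by the module):

```yaml
- name: Gather information about installed Python packages
  community.general.python_requirements_info:
    dependencies:
      - docker
  register: py_reqs

- name: Fail early if the Docker SDK for Python is missing
  ansible.builtin.assert:
    that: py_reqs.not_found | length == 0
```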

Note that plugins (inventory plugins and connection plugins) are always executed in the context of Ansible itself. If you use a plugin that requires the Docker SDK for Python, you need to install it on the machine running ansible or ansible-playbook and for the same Python interpreter used by Ansible. To see which Python is used, run ansible --version.

You can install the Docker SDK for Python for Python 2.7 or Python 3 as follows:
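For example, with pip (make sure pip belongs to the Python interpreter Ansible actually uses):

```shell
pip install docker
```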

For Python 2.6, you need a version before 2.0. For these versions, the SDK was called docker-py, so you need to install it as follows:
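For example:

```shell
pip install 'docker-py>=1.7.0'
```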

Please install only one of docker or docker-py. Installing both will result in a broken installation. If this happens, Ansible will detect it and inform you about it. If that happens, you must uninstall both and reinstall the correct version.

If in doubt, always install docker and never docker-py.

You can connect to a local or remote API using parameters passed to each task or by setting environment variables. The order of precedence is command line parameters and then environment variables. If neither a command line option nor an environment variable is found, Ansible uses the default value provided under Parameters.

Parameters¶

Most plugins and modules can be configured by the following parameters:

docker_host

The URL or Unix socket path used to connect to the Docker API. Defaults to unix:///var/run/docker.sock. To connect to a remote host, provide the TCP connection string (for example: tcp://192.0.2.23:2376). If TLS is used to encrypt the connection to the API, the module will automatically replace tcp in the connection URL with https.

api_version

The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported by the Docker SDK for Python installed.

timeout

The maximum amount of time in seconds to wait on a response from the API. Defaults to 60 seconds.

tls

Secure the connection to the API by using TLS without verifying the authenticity of the Docker host server. Defaults to false.

validate_certs

Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server. Defaults to false.

cacert_path

Use a CA certificate when performing server verification by providing the path to a CA certificate file.

cert_path

Path to the client’s TLS certificate file.

key_path

Path to the client’s TLS key file.

tls_hostname

When verifying the authenticity of the Docker Host server, provide the expected name of the server. Defaults to localhost.

ssl_version

Provide a valid SSL version number. The default value is determined by the Docker SDK for Python.
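As a sketch, several of these parameters combined in a single task (the daemon address and certificate paths are placeholders):

```yaml
- name: Retrieve facts from a remote Docker daemon over TLS
  community.docker.docker_host_info:
    docker_host: tcp://192.0.2.23:2376
    validate_certs: true
    cacert_path: /etc/docker/certs/ca.pem
    cert_path: /etc/docker/certs/cert.pem
    key_path: /etc/docker/certs/key.pem
    timeout: 120
```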

Environment variables¶

You can also control how the plugins and modules connect to the Docker API by setting the following environment variables.

For plugins, they have to be set for the environment Ansible itself runs in. For modules, they have to be set for the environment the modules are executed in. For modules running on remote machines, the environment variables have to be set on that machine for the user used to execute the modules with.

DOCKER_HOST

The URL or Unix socket path used to connect to the Docker API.


DOCKER_API_VERSION

The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported by the Docker SDK for Python.

DOCKER_TIMEOUT

The maximum amount of time in seconds to wait on a response from the API.

DOCKER_CERT_PATH

Path to the directory containing the client certificate, client key and CA certificate.

DOCKER_SSL_VERSION

Provide a valid SSL version number.

DOCKER_TLS

Secure the connection to the API by using TLS without verifying the authenticity of the Docker Host.

DOCKER_TLS_VERIFY

Secure the connection to the API by using TLS and verify the authenticity of the Docker Host.
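For example, the same connection settings expressed as environment variables before invoking ansible-playbook (the daemon address, certificate directory, and playbook name are placeholders):

```shell
export DOCKER_HOST=tcp://192.0.2.23:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/etc/docker/certs
ansible-playbook site.yml
```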

For working with a plain Docker daemon, that is without Swarm, there are connection plugins, an inventory plugin, and several modules available:

docker connection plugin

The community.docker.docker connection plugin uses the Docker CLI utility to connect to Docker containers and execute modules in them. It essentially wraps the docker exec and docker cp commands. This connection plugin is supported by the ansible.posix.synchronize module.
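A minimal sketch of an inventory entry that uses this connection plugin (the container name is illustrative):

```yaml
all:
  hosts:
    web01:
      ansible_connection: community.docker.docker
```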

docker_api connection plugin

The community.docker.docker_api connection plugin talks directly to the Docker daemon to connect to Docker containers and execute modules in them.

docker_containers inventory plugin

The community.docker.docker_containers inventory plugin allows you to dynamically add Docker containers from a Docker Daemon to your Ansible inventory. See Working with dynamic inventory for details on dynamic inventories.

The docker inventory script is deprecated. Please use the inventory plugin instead. The inventory plugin has several compatibility options. If you need to collect Docker containers from multiple Docker daemons, you need to add every Docker daemon as an individual inventory source.
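A minimal inventory source for this plugin might look like the following docker.yml file (the plugin key is required; the daemon address shown is the local default):

```yaml
plugin: community.docker.docker_containers
docker_host: unix:///var/run/docker.sock
```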

docker_host_info module

The community.docker.docker_host_info module allows you to retrieve information on a Docker daemon, such as all containers, images, volumes, networks and so on.

docker_login module

The community.docker.docker_login module allows you to log in and out of a remote registry, such as Docker Hub or a private registry. It provides similar functionality to the docker login and docker logout CLI commands.

docker_prune module

The community.docker.docker_prune module allows you to prune no longer needed containers, images, volumes and so on. It provides similar functionality to the docker system prune CLI command.

docker_image module

The community.docker.docker_image module provides full control over images, including: build, pull, push, tag and remove.

docker_image_info module

The community.docker.docker_image_info module allows you to list and inspect images.

docker_network module

The community.docker.docker_network module provides full control over Docker networks.

docker_network_info module

The community.docker.docker_network_info module allows you to inspect Docker networks.

docker_volume_info module

The community.docker.docker_volume_info module allows you to inspect Docker volumes.

docker_volume module

The community.docker.docker_volume module provides full control over Docker volumes.

docker_container module

The community.docker.docker_container module manages the container lifecycle by providing the ability to create, update, stop, start and destroy a Docker container.

docker_container_info module

The community.docker.docker_container_info module allows you to inspect a Docker container.

The community.docker.docker_compose module allows you to use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. It supports compose versions 1 and 2.

In addition to the Docker SDK for Python, you need to install docker-compose on the remote machines to use the module.
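A sketch of a task using the module (the project path is a placeholder):

```yaml
- name: Bring up the services defined in an existing compose file
  community.docker.docker_compose:
    project_src: /opt/myapp
    state: present
```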

The community.docker.docker_machine inventory plugin allows you to dynamically add Docker Machine hosts to your Ansible inventory.

The community.docker.docker_stack module allows you to control Docker stacks. Information on stacks can be retrieved by the community.docker.docker_stack_info module, and information on stack tasks can be retrieved by the community.docker.docker_stack_task_info module.

The community.docker collection provides multiple plugins and modules for managing Docker Swarms.

Swarm management¶

One inventory plugin and several modules are provided to manage Docker Swarms:

docker_swarm inventory plugin

The community.docker.docker_swarm inventory plugin allows you to dynamically add all Docker Swarm nodes to your Ansible inventory.

docker_swarm module

The community.docker.docker_swarm module allows you to globally configure Docker Swarm manager nodes to join and leave swarms, and to change the Docker Swarm configuration.

docker_swarm_info module

The community.docker.docker_swarm_info module allows you to retrieve information on Docker Swarm.

docker_node module

The community.docker.docker_node module allows you to manage Docker Swarm nodes.

docker_node_info module

The community.docker.docker_node_info module allows you to retrieve information on Docker Swarm nodes.

Configuration management¶


The community.docker collection offers modules to manage Docker Swarm configurations and secrets:

docker_config module

The community.docker.docker_config module allows you to create and modify Docker Swarm configs.

docker_secret module

The community.docker.docker_secret module allows you to create and modify Docker Swarm secrets.

Swarm services¶

Docker Swarm services can be created and updated with the community.docker.docker_swarm_service module, and information on them can be queried by the community.docker.docker_swarm_service_info module.


Still using Dockerfile to build images? Check out ansible-bender, and start building images from your Ansible playbooks.

Use Ansible Operator to launch your docker-compose file on OpenShift. Go from an app on your laptop to a fully scalable app in the cloud with Kubernetes in just a few moments.

Ansible Playbooks and Ad Hoc Commands

Ad hoc commands can run a single, simple task against a set of targeted hosts as a one-time command. The real power of Ansible, however, is in learning how to use playbooks to run multiple, complex tasks against a set of targeted hosts in an easily repeatable manner. A play is an ordered set of tasks run against hosts selected from your inventory. A playbook is a text file containing a list of one or more plays to run in a specific order.

Plays allow you to change a lengthy, complex set of manual administrative tasks into an easily repeatable routine with predictable and successful outcomes. In a playbook, you can save the sequence of tasks in a play into a human-readable and immediately runnable form. The tasks themselves, because of the way in which they are written, document the steps needed to deploy your application or infrastructure.

Formatting an Ansible Playbook


To help you understand the format of a playbook, review this ad hoc command:
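An ad hoc command along these lines (reconstructed from the description later in this section; the host name is illustrative):

```shell
ansible servera.lab.example.com -m user -a "name=newbie uid=4000 state=present"
```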

This can be rewritten as a single task play and saved in a playbook. The resulting playbook appears as follows:
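A sketch of the resulting single-task play, matching the keys (name, hosts, and tasks) and user module arguments described later in this section (the play name and host are illustrative):

```yaml
---
- name: Configure important user consistently
  hosts: servera.lab.example.com
  tasks:
    - name: newbie exists with UID 4000
      user:
        name: newbie
        uid: 4000
        state: present
```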

A playbook is a text file written in YAML format, and is normally saved with the extension .yml. The playbook uses indentation with space characters to indicate the structure of its data. YAML does not place strict requirements on how many spaces are used for the indentation, but there are two basic rules.

  • Data elements at the same level in the hierarchy (such as items in the same list) must have the same indentation.
  • Items that are children of another item must be indented more than their parents.

You can also add blank lines for readability. Only the space character can be used for indentation; tab characters are not allowed. If you use the vi text editor, you can apply some settings which might make it easier to edit your playbooks. For example, you can add the following line to your $HOME/.vimrc file, and when vi detects that you are editing a YAML file, it performs a 2-space indentation when you press the Tab key and autoindents subsequent lines.
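A commonly used vimrc setting that matches the behavior described (2-space autoindentation for YAML files) is:

```vim
autocmd FileType yaml setlocal ai ts=2 sw=2 et
```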

A playbook begins with a line consisting of three dashes (---) as a start-of-document marker. It may end with three dots (...) as an end-of-document marker, although in practice this is often omitted. In between those markers, the playbook is defined as a list of plays. An item in a YAML list starts with a single dash followed by a space. For example, a YAML list might appear as follows:
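For example (the list items are illustrative):

```yaml
- apache
- mysql
- php
```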


In the first example of this post, the line after --- begins with a dash and starts the first (and only) play in the list of plays. The play itself is a collection of key-value pairs. Keys in the same play should have the same indentation. The following example shows a YAML snippet with three keys. The first two keys have simple values. The third has a list of three items as a value.
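An illustrative snippet matching that description (the key names and values are placeholders):

```yaml
first_key: a simple value
second_key: another simple value
third_key:
  - item one
  - item two
  - item three
```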

The original example play has three keys, name, hosts, and tasks, because these keys all have the same indentation. The first line of the example play starts with a dash and a space (indicating the play is the first item of a list), and then the first key, the name attribute. The name key associates an arbitrary string with the play as a label. This identifies what the play is for. The name key is optional, but is recommended because it helps to document your playbook. This is especially useful when a playbook contains multiple plays.

The second key in the play is a hosts attribute, which specifies the hosts against which the play’s tasks are run. Like the argument for the ansible command, the hosts attribute takes a host pattern as a value, such as the names of managed hosts or groups in the inventory.

Finally, the last key in the play is the tasks attribute, whose value specifies a list of tasks to run for this play. This example has a single task, which runs the user module with specific arguments (to ensure user newbie exists and has UID 4000).

The tasks attribute is the part of the play that actually lists, in order, the tasks to be run on the managed hosts. Each task in the list is itself a collection of key-value pairs.

In this example, the only task in the play has two keys:

  • name is an optional label documenting the purpose of the task. It is a good idea to name all your tasks to help document the purpose of each step of the automation process.
  • user is the module to run for this task. Its arguments are passed as a collection of key-value pairs, which are children of the module (name, uid, and state).

The following is another example of a tasks attribute with multiple tasks, using the service module to ensure that several network services are enabled to start at boot:
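A sketch consistent with that description (the service names are illustrative):

```yaml
tasks:
  - name: web server is enabled
    service:
      name: httpd
      enabled: true

  - name: NTP server is enabled
    service:
      name: chronyd
      enabled: true

  - name: Postfix is enabled
    service:
      name: postfix
      enabled: true
```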

Note: The order in which the plays and tasks are listed in a playbook is important, because Ansible runs them in the same order.

Running Playbooks

The ansible-playbook command is used to run playbooks. The command is executed on the control node and the name of the playbook to be run is passed as an argument:
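For example (the playbook name is illustrative):

```shell
ansible-playbook site.yml
```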

When you run the playbook, output is generated to show the play and tasks being executed. The output also reports the results of each task executed. The following example shows the contents of a simple playbook, and then the result of running it.

Note that the value of the name key for each play and task is displayed when the playbook is run. (The Gathering Facts task is a special task that the setup module usually runs automatically at the start of a play. This is covered later in the course.) For playbooks with multiple plays and tasks, setting name attributes makes it easier to monitor the progress of a playbook’s execution.


You should also see that the latest httpd version installed task reports changed for servera.lab.example.com. This means that the task changed something on that host to ensure its specification was met. In this case, it means that the httpd package probably was not installed or was not the latest version. In general, tasks in Ansible Playbooks are idempotent, and it is safe to run a playbook multiple times. If the targeted managed hosts are already in the correct state, no changes should be made. For example, assume that the playbook from the previous example is run again:

This time, all tasks passed with status ok and no changes were reported.

Increasing Output Verbosity

The default output provided by the ansible-playbook command does not provide detailed task execution information. The ansible-playbook -v command provides additional information, with up to four total levels of verbosity.

Configuring the Output Verbosity of Playbook Execution

OPTION DESCRIPTION
-v The task results are displayed.
-vv Both task results and task configuration are displayed.
-vvv Includes information about connections to managed hosts.
-vvvv Adds extra verbosity options to the connection plugins, including the users being used on the managed hosts to execute scripts, and which scripts have been executed.

Syntax Verification

Prior to executing a playbook, it is good practice to perform a verification to ensure that the syntax of its contents is correct. The ansible-playbook command offers a --syntax-check option that you can use to verify the syntax of a playbook. The following example shows the successful syntax verification of a playbook.
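For example (the playbook name is illustrative):

```shell
ansible-playbook --syntax-check webserver.yml
```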


When syntax verification fails, a syntax error is reported. The output also includes the approximate location of the syntax issue in the playbook. The following example shows the failed syntax verification of a playbook where the space separator is missing after the name attribute for the play.


Executing a Dry Run

You can use the -C option to perform a dry run of the playbook execution. This causes Ansible to report what changes would have occurred if the playbook were executed, but does not make any actual changes to managed hosts. The following example shows the dry run of a playbook containing a single task for ensuring that the latest version of httpd package is installed on a managed host. Note that the dry run reports that the task would effect a change on the managed host.
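For example (the playbook name is illustrative):

```shell
ansible-playbook -C webserver.yml
```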