Tips that make my Ansible playbooks better

17/10/2019

I have been using Ansible for two years now, and feel that I am about halfway to mastering it. In the Big Data Platform Engineering team, it is our primary configuration management and orchestration tool. Over those years we have faced non-trivial challenges and learnt a lot by making a few mistakes. In this blog post I will share some tips that novice or intermediate Ansible users may find useful.

  1. The “!unsafe” tag

Templating is one of the core features of Ansible and allows the dynamic creation of file content using variables that are collected during the facts gathering stage of the playbook run. In the template file, variables are referenced using double curly bracket notation. For example:

{{ username }}

What if our template includes static content that is surrounded by double curly brackets and should not be parsed by the Jinja2 engine? I came across this problem whilst deploying the Kibana LogTrail plugin (a data visualisation tool for Elasticsearch clusters). The LogTrail configuration file is formatted as JSON, and one of the lines defines how a message should be displayed. For example:

"message_format": "{{{ docker.container.id }}} : {{{ message }}}"

In this example, the message displayed in LogTrail would be the values of the docker.container.id and message variables, delimited by a colon with leading and trailing spaces. For example:

ad6d5d32576a : hello world

Since we deploy our ELK stack with Ansible, all configuration is written in YAML. The LogTrail configuration file is templated from the corresponding group_vars file via the Ansible to_nice_json filter:

{{ logtrail_configuration | to_nice_json }}

As expected, without escaping the triple curly brackets, this will fail. To overcome this, my first attempt was to simply escape the triple curly brackets:

message_format: {{ "{{{" }} docker.container.id {{ "}}}" }} : {{ "{{{" }} message {{ "}}}" }}

This worked, but was difficult to read and therefore maintain. I spent some time looking for a better approach and found the documentation regarding Ansible’s !unsafe tag. It is very simple to use. Anything that appears on the same line after the tag will not be treated as a Jinja2 template. With the !unsafe tag, the syntax looks a lot cleaner:

message_format: !unsafe {{{ docker.container.id }}} : {{{ message }}}
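To show the tag in context, here is a hypothetical sketch of the relevant group_vars fragment that feeds the to_nice_json filter above (the keys surrounding message_format are illustrative, not LogTrail's exact schema):

```yaml
# Hypothetical group_vars fragment rendered by to_nice_json.
# All keys except message_format are illustrative placeholders.
logtrail_configuration:
  index_patterns:
    - es:
        default_index: "filebeat-*"
      fields:
        # !unsafe stops Jinja2 from trying to parse the curly brackets
        message_format: !unsafe "{{{ docker.container.id }}} : {{{ message }}}"
```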


  2. Using the encrypt_string feature with ansible-vault

The ansible-vault feature is widely used in our playbooks and I am sure most engineers are familiar with it. Nonetheless, I would like to share a little-known trick. Originally, ansible-vault was designed to encrypt whole files. Version 2.3 added a new feature: encrypting single values inside a YAML file. Its purpose is to allow clear-text YAML files, with only the fragments of sensitive data encrypted by Ansible Vault. For example:

secret_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  306136396236653930346364346431303865326437666431373232633331343537
  626461663838633234613139393538363635333266653762303562333264650a30
  393239323731306466653639656463666337616437666335386330656638623031
  3131646230343732356161623164663561613963333931303361326533630a3566
  373165353235386461336332393039366461643336363664356262653038313163
  3134343034623262353338626262386361316331633535623433363663

A neat trick is being able to create a complete, paste-ready block without the password ever being displayed or written to disk. A typical use case is securely storing internal credentials; in the ELK stack example, this could be a Logstash user authenticating against the Elasticsearch farm in order to send it data. In my early days, when I was new to Ansible, I would first create a random password with the pwgen command, copy and paste it into a temporary file, and finally cat it, piping the output to ansible-vault encrypt. Luckily, Ansible has a solution to this problem, and this one-liner produces a YAML-ready encrypted value for a given variable:

pwgen [pw_length] [num_pw] | ansible-vault encrypt_string --stdin-name secret_password

Pwgen is an excellent utility for generating random passwords, and the above pipeline ensures that the password is neither echoed to the terminal nor stored in the system's logs. All you need to do is supply a vault password when prompted, then copy and paste the encrypted block into the respective YAML file.
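Once pasted into a vars file, the variable behaves like any other: Ansible decrypts it transparently at runtime when the vault password is supplied. A minimal, hypothetical sketch of a task consuming it (the path and variable name are illustrative):

```yaml
# Hypothetical task: secret_password is the vaulted variable from the
# group_vars file and is decrypted transparently during the playbook run.
- name: Configure Logstash credentials for Elasticsearch
  lineinfile:
    path: /etc/logstash/credentials.conf   # illustrative path
    line: "ES_PASSWORD={{ secret_password }}"
    mode: '0600'
```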


  3. Using tags with Ansible roles

Roles are commonly used to create reusable playbooks that group together logically related tasks and configure a host for a particular purpose, e.g. a database server or a web server. Tags, on the other hand, allow you to run the tasks in a playbook selectively (side note: not everybody knows that one of Ansible's special tags is all, which is applied by default in every playbook run). At first glance, combining roles with tags may seem contradictory, and I generally consider it bad practice: if you need to apply a different set of tasks to a host, even if it is only a subset of the tasks from role A, create a new role B containing those tasks. Nonetheless, there have been instances where we needed to run certain tasks in a role selectively. Since every task in the role in question was already uniquely tagged, we didn't want to create a new role, and assumed we could do something like this:

- hosts: kafka_monitor
  roles:
    - { role: kafka, tags: kafka_collector }

If the playbook was run with the kafka_collector tag, we expected that only the tasks in the kafka role tagged kafka_collector would be executed. This wasn't the case: all the tasks ran. Adding a tag to a statically imported role actually applies that tag to every task in the role. This would have caused the full kafka role to be deployed unnecessarily, instead of the much lighter kafka_collector tasks as intended. In cases where you would prefer not to create a new role, you can solve the problem with the tasks_from parameter of include_role:

- hosts: kafka_monitor
  tasks:
    - name: run tasks from kafka_collector.yaml instead of main.yaml
      include_role:
        name: kafka
        tasks_from: kafka_collector.yaml
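For this to work, the role needs the corresponding task file alongside main.yaml. A hypothetical roles/kafka/tasks/kafka_collector.yaml might contain just the lightweight collector tasks (the package and service names here are illustrative):

```yaml
# Hypothetical roles/kafka/tasks/kafka_collector.yaml: only the
# collector tasks, leaving main.yaml for the full kafka deployment.
- name: Install kafka collector package
  package:
    name: kafka-collector   # illustrative package name
    state: present

- name: Ensure collector service is running
  service:
    name: kafka-collector
    state: started
    enabled: true
```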


  4. Editing YAML files in Vim

Whilst attending Red Hat's Ansible training course, I learnt some very helpful Vim settings for editing YAML files. More often than not, simple code edits are made directly in the console using Vim (or one of its flavours). YAML uses indentation-based scoping, so indentation errors in playbooks often lead to cryptic error messages: an extra space in the wrong place will most probably throw a syntax error during playbook execution, without an exact explanation. To reduce the likelihood of mistakes, you can add the following to your .vimrc file:

autocmd FileType yaml setlocal ai ts=2 sts=2 sw=2 et

What does this do?

– autocmd FileType yaml tells Vim to apply the subsequent settings to all YAML files. The file type is recognised correctly regardless of whether the file has a .yaml or .yml extension

– setlocal limits the scope to the current buffer or window

– ai means auto-indent. Whenever you hit enter, your cursor will automatically be placed at the same level of indentation as the previous line

– ts is tabstop and specifies how many columns a tab equals (in this case, one tab equals two columns)

– sts is soft tabstop and controls the number of spaces a tab is equal to in Vim’s insert mode

– sw is shiftwidth, and it controls how much a line gets shifted when using << and >>. By setting this to the same value as tabstop, we make sure that shifted lines and blocks of text use the correct indentation level

– et is expandtab and ensures that tabs are converted to spaces (the number of which are specified by ts)
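If you prefer self-documenting option names, the same line can be spelt out with the long forms of each setting:

```vim
" Equivalent to the abbreviated settings above, using full option names
autocmd FileType yaml setlocal autoindent tabstop=2 softtabstop=2 shiftwidth=2 expandtab
```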

When combined, these settings help to ensure that I don’t make indentation errors when I’m writing Ansible playbooks.


  5. Setting file and folder permissions with the file module

The file module is used to manipulate file and directory attributes. One of the most important attributes is mode, i.e. the permissions that will be applied to the file. The ansible-doc command specifies that mode expects a number in octal format, so unless you want to set the setuid, setgid, or sticky bit, always use a 4-digit bitmask, e.g. 0644. My systems administration experience has taught me to always follow this practice. However, we came across a case where a 3-digit bitmask slipped through, which led to unexpected behaviour. For example, an Ansible playbook was generating a self-signed certificate with an associated key, and we expected it to set restrictive permissions, e.g. 0400. Unfortunately, the mode values were provided in 3-digit format, which made the certificate and the key inaccessible to the user the process was running as. Here is an example of what was happening. First, let's create a world-accessible test file named /tmp/test.txt:

- name: Change file permissions
  file:
    path: /tmp/test.txt
    mode: 0777

After running the playbook, the file’s permissions are as expected:

-rwxrwxrwx  1 user1 data 0 Jul 01 11:58 test.txt*

Now try running the same piece of Ansible code with a 3-digit bitmask e.g. mode: 777. The file permissions now look very different:

-r----x--t  1 user1 data 0 Jul 01 11:59 test.txt*

Without the 0 prefix, Ansible doesn't know the mode is octal and instead interprets it as a decimal number: decimal 777 is 1411 in octal, which is exactly the permission set shown above. This is very different from the intended settings!
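You can verify what 777 interpreted as decimal maps to in octal with a quick shell one-liner (printf's %o conversion prints a decimal integer in octal):

```shell
# Decimal 777 is octal 1411: sticky bit set, then r-- --x --x,
# matching the -r----x--t permissions shown above.
printf '%o\n' 777
```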

I hope you will find the above tips useful!


Tomasz Papir-Zwierz, Big Data Platform Engineering
