SaltStack: A Practical Approach

I am a developer. But I’m also quite interested in the processes to deploy what I develop, something for which today many use the term DevOps. That’s a complex subject, but the main idea deals with strategies and technologies to automate or simplify the development of applications and management of resources to host such applications. You know, the boring part.

One of the more widely used tools for this is SaltStack, and on this occasion I'll discuss an approach to using it that has worked wonders for me so far.

What is SaltStack?

Salt is a system that delivers a dynamic communication bus for infrastructures that can be used for orchestration, remote execution, configuration management and much more.

Is that too broad? Don’t blame me: that’s taken directly from the official documentation. What I can add is how Salt works: it allows a master to tell minions to execute states with pillar as defined by the application of targeting rules against the minions’ grains. Did I make it worse? Here are some definitions that might help:

  • Master: the server that tells minions which states to execute
  • Minion: a client that executes states as instructed by the master
  • State: a command, procedure or check
  • Pillar: variable context of a minion, used by states (e.g., database connection parameters)
  • Grains: static attributes of a minion (e.g., OS)
  • Targeting: rules that determine which pillars and states apply to which minions

Now read that explanation again.

Pillar vs. Grain

Here’s one that’s confusing to newcomers. What’s the difference? How can I tell them apart?

Well, a grain's value is determined once, when the minion service starts, and generally remains the same forever, while a pillar's value is evaluated on every state execution. This means that grains are used to define general characteristics of a machine (such as its roles or environment), while pillars are used to define specific parameters for salt states (database connection, branch name, etc.).

In other words: grains let you pick which states are run on each machine, and pillars let you pick which parameters those states will use.
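You can see the distinction from the command line, assuming a running master and a minion named myapp-demo-0 (the minion name is illustrative; `grains.item` and `pillar.items` are standard Salt execution modules):

```shell
# Grains: static attributes, collected when the minion starts
sudo salt 'myapp-demo-0' grains.item os env roles

# Pillar: compiled fresh from the pillar tree for each run
sudo salt 'myapp-demo-0' pillar.items
```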


The Repository

First, the repository structure. Yes, a single repository, because keeping everything in one place requires the least work to maintain. It also makes the process of replacing your master very easy, even salt-able.

The main structure of the repository is the following:
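Based on the directories covered below, the layout looks roughly like this (a sketch; the exact file names inside pillar and salt are illustrative):

```
salt-config/
├── cloud/
│   ├── profiles
│   └── providers
├── pillar/
│   ├── top.sls
│   └── ...
└── salt/
    ├── top.sls
    └── ...
```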


Let’s explore the contents of each.


The cloud directory contains the configuration for Salt Cloud, the system used to provision virtual machines on public clouds such as AWS or Rackspace. This is how you should manage your minions: if you're using a cloud hosting provider and you're not using Salt Cloud, you're doing it wrong.

It consists of two files: providers and profiles.


Wait, isn’t keeping Salt Cloud configuration with salt and pillar weird? You’ll see why that works later.


Providers

Providers are used to abstract all the service-specific configuration away from profiles, so they should contain everything that your instances have in common.

Here’s the first few lines of a provider for a public box hosted in AWS EC2, using Ubuntu 14.04:

  driver: ec2
  image: ami-d05e75b8
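For reference, a fuller provider definition might look like the following sketch. The credential and SSH values are placeholders; the option names (`id`, `key`, `keyname`, `private_key`, `ssh_username`, `securitygroup`, `location`) are standard Salt Cloud EC2 provider settings:

```yaml
ec2-public-www:
  driver: ec2
  image: ami-d05e75b8
  # AWS credentials (placeholders)
  id: 'AKIA...'
  key: 'secret-key'
  # SSH access used for bootstrapping the minion
  keyname: my-keypair
  private_key: /etc/salt/my-keypair.pem
  ssh_username: ubuntu
  securitygroup: public-www
  location: us-east-1
```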


Profiles

Usually, your profiles will be very simple, since most of the configuration is handled by the provider. You might only need to specify a provider, an instance type (size) and some grains to target the minion:

  provider: ec2-public-www
  size: t2.micro
  grains:
    box_type: ec2
    app: myapp
    env: prod

Note that any attribute defined in a provider can be overridden in a profile, but if you need to do that, it probably means your level of abstraction is off.


Pillar

Pillars allow us to parametrize state execution by providing a context that is built dynamically immediately before states are applied. An example structure would be something like:
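Based on the pillar files referenced throughout this post, the tree looks roughly like this (prod.sls is assumed by symmetry with the other environments):

```
pillar/
├── top.sls
├── sonar.sls
└── myapp/
    ├── dev.sls
    ├── prod.sls
    ├── shared.sls
    └── google.sls
```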


Pillar Contents

Each file consists of a set of attributes (except the targeting file top.sls). For example, the dev environment pillar for myapp (pillar/myapp/dev.sls) contains:

  name: myapp
  settings: ''
  root: apps/myapp
  static_root: apps/myapp/static

All the pillar definitions applied to a machine are merged into a single (Python) dictionary, available to salt states and templates as {{ pillar }}. You’ll see now how useful that is.


Pillar Targeting

The targeting rules are defined in the file top.sls, usually using (only) grains. The following sample uses the role, app, env and sub_env grains:

'G@roles:qua or G@roles:ci':
  - match: compound
  - sonar

'G@app:myapp and G@env:prod':
  - match: compound
  - myapp.prod

'G@app:myapp and G@sub_env:shared':
  - match: compound
  - myapp.shared

Depending on the salt state targeting, some pillar data might be unnecessary for some boxes. In those cases it's good practice to be specific when targeting, both to avoid cluttering minions with useless information and to protect sensitive information (e.g., API keys). That's why we only apply the sonar pillar to qua and ci machines: other machines would never use that data.


Salt States

This directory defines the states that are applied to each machine. The structure is very similar to the pillar one:
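A sketch of the tree, reconstructed from the state names used in this post:

```
salt/
├── top.sls
├── core/
│   └── swap.sls
├── myapp/
│   └── service.sls
└── services/
    ├── jenkins.sls
    ├── nginx/
    │   ├── init.sls
    │   └── nginx.conf
    └── sonarqube/
        └── scanner.sls
```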



Targeting is also based on grains, but since states are not environment-specific we never use env or sub_env, as we do in pillar targeting. A sample from the file top.sls shows that:

'G@box_type:ec2':
  - match: compound
  - core.swap

'G@app:myapp':
  - match: compound
  - myapp.service
  - services.nginx

'G@roles:qua or G@roles:ci':
  - match: compound
  - services.jenkins
  - services.sonarqube.scanner

Using Pillar

Since the state targeting does not depend on environments, two machines with the same roles and application will execute exactly the same states regardless of environment. But in that case won’t two machines in different environments use the same database? No, because any environment-specific difference is managed by using pillar data to parametrize salt states. For instance, the state to clone the repository for myapp uses pillar a lot:

myapp-repository:   # state ID is illustrative
  git.latest:
    - name: {{ pillar['app']['repository']['url'] }}
    - target: {{ pillar['auth']['home'] }}/{{ pillar['app']['root'] }}
    - branch: {{ pillar['app']['repository']['branch'] }}
    - force_checkout: {{ pillar['app']['repository']['checkout'] }}
    - force_reset: {{ pillar['app']['repository']['checkout'] }}
    - user: {{ pillar['auth']['user'] }}
    - require:
      - file: ssh-config
      - ssh_known_hosts: ssh-github-host

We also use pillar in configuration files, in which case you need to specify the template engine on the managed file, or the pillar references in it won't be rendered. For example, the state that deploys the nginx configuration (located in services/nginx/init.sls) uses jinja:

nginx-config:   # state ID is illustrative
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://services/nginx/nginx.conf
    - template: jinja
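The template itself can then reference pillar data. A hypothetical fragment of services/nginx/nginx.conf, reusing the pillar keys shown earlier:

```
# Hypothetical fragment; pillar keys follow the examples above
server {
    listen 80;
    root {{ pillar['auth']['home'] }}/{{ pillar['app']['static_root'] }};
}
```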

OK, so the repository is ready. What now?

Using It

First you need to install salt-master in the box that will be the Salt master. I won’t go into that because the documentation is clear enough there. If you do need help with installation and configuration, refer to Installation and Configuring the Salt Master.

Set the Master Up

Start by cloning the configuration repository:

cd ~/dev
git clone
cd salt-config

And create the symbolic links for pillar, states and cloud configurations:

ln -sf $PWD/pillar /srv/pillar
ln -sf $PWD/salt /srv/salt
ln -sf $PWD/cloud/profiles /etc/salt/cloud.profiles
ln -sf $PWD/cloud/providers /etc/salt/cloud.providers

Now you see why we kept the Salt Cloud configuration in the repository. And it works flawlessly because, although the profiles and providers are static configuration, the Salt Cloud process only runs briefly when you invoke it, so the configuration is re-read every time you use it. No refresh is required.

And notice that the steps are simple enough to add them to the salt configuration, so you can create a new master when needed, using salt. Crazy, huh?
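For instance, the symlink steps could themselves be expressed as Salt states using the standard file.symlink state (a sketch; the file path and target paths are hypothetical):

```yaml
# Hypothetical salt/master/init.sls
/srv/pillar:
  file.symlink:
    - target: /home/admin/dev/salt-config/pillar
    - force: True

/srv/salt:
  file.symlink:
    - target: /home/admin/dev/salt-config/salt
    - force: True
```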

Start the Service

Now you can start the service:

sudo service salt-master start

And don’t forget to update the bootstrap script required to create new minions with Salt Cloud:

sudo salt-cloud -u

Managing Minions

Once Salt Master and Salt Cloud are set up, you can create and destroy minions easily.

You only need to specify the profile and box names:

sudo salt-cloud -p myapp-demo myapp-demo-0 myapp-demo-1

The virtual machines are assigned the grains defined in the profile when created, and then a highstate is applied to them automatically, so they're ready to work. Yes, all services up and running, database migrations run, etc. Ready ready. Of course, you still need to add the machines to the load balancer's listeners.

Destroying them is just as easy:

sudo salt-cloud -d myapp-demo-0 myapp-demo-1

Since non-responsive machines are deactivated automatically by the load balancer, you don’t need to update its listeners.

Adding Profiles

The only case that requires non-trivial work is when a new role, environment or application is introduced, because then a new cloud profile needs to be created.

We’ll review that process assuming that our myapp is hosted in AWS, and it has a dedicated server mode (that’s what our sub-environment meant). Now a new client has joined, and that requires the addition of a new database (RDS), some machines (EC2) and a subdomain (R53). Tough job, huh? Not anymore.


Create the Database

Here's where you create the database in RDS, or wherever you happen to host it. Just remember to write down the connection parameters; you'll need them later.


Update the Repository

There are three modifications to make to the repository:

  • Add application sub-environment pillar
  • Add pillar targeting to apply new sub-environment pillar
  • Add machine profile for sub-environment

Our new client is Google, so we want to add a sub-environment called google for the application myapp. The pillar would consist only of the database connection parameters, and would be located in pillar/myapp/google.sls.
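A sketch of what that pillar file might contain. The key names and all values below are hypothetical placeholders; only the file location comes from the text:

```yaml
# pillar/myapp/google.sls -- hypothetical keys, placeholder values
db:
  host: myapp-google.xxxxxxxx.us-east-1.rds.amazonaws.com
  port: 5432
  name: myapp_google
  user: myapp
  password: 'changeme'
```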

The additional targeting to the pillar/top.sls file would be:

'G@app:myapp and G@sub_env:google':
  - match: compound
  - myapp.google

And the profile to add to cloud/profiles:

myapp-google:
  provider: ec2-public-www
  size: t2.medium
  grains:
    box_type: ec2
    app: myapp
    env: prod
    sub_env: google

Adding Minions

Once the repository is updated (pulled), you can create the new machines. This step remains the same as for existing environments, so if we wanted to use three boxes for this sub-environment:

sudo salt-cloud -p myapp-google myapp-google-0 myapp-google-1 myapp-google-2

Once the machines are up and running, you need to add them to a new load balancer as listeners, and then create the new DNS record pointing to that load balancer.

And you’re done.

Next Step: Salty Jenkins

SaltStack is an amazing tool that achieves its goals with a far simpler, more robust approach than competing tools. But it's when combined with a continuous integration server that it really shines. Next time I'll show you how I use it with Jenkins to achieve crazy levels of deployment automation.

