Reproducibly provisioning Salt Minions on TransIP with Cloudinit

Part 2 of Avoiding the Cloud Series

This is Part 2 of the Avoiding the Cloud series. This step is built on TransIP, but it can be adapted to any provider that supports Cloudinit with Ubuntu 20.04 images.

In Part 1: Building a Docker cluster with Nomad, Consul and SaltStack on TransIP, we decided which technologies to use in our cluster stack, set out an IP address layout, and determined which node types are needed.

Ordering our VPSs

SaltStack works with a single Salt Master and multiple Salt Minions, and it works declaratively: on the Salt Master you specify the state a Minion should be in, then you tell the master to apply that state to one or more of your minions, and each minion's task is to make its actual state match the requested one.
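To get a feel for what that looks like (a minimal sketch; the state file and target minion are hypothetical), a state file /srv/salt/vim.sls on the Salt Master could contain:

vim:
  pkg.installed

and you would apply it to a single minion with:

$ sudo salt 'docker-01' state.apply vim

We'll write real states in Part 3; in this part we only need to get the Master and its Minions talking to each other.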

This is the only part of this series that is specific to TransIP. The other parts all depend on already provisioned infrastructure that is provider-independent.

Let’s start by ordering 1x Private Network and 5x VPSs from TransIP (i.e. 1x BladeVPS /X1 with Ubuntu 20.04 for our Salt Master and 4x BladeVPS /X4 with Ubuntu 20.04 for our first Consul server, Nomad server, Docker host and storage node). For the Salt Master, add an account named boss with your SSH key. The other servers don’t need any personalization at this point.

After a few minutes you will receive 5 e-mails, one for each new VPS, containing its host ID and IP addresses.

Our Salt Master

We start by setting up the VPS we’ll use as Salt Master. The Salt Master will be our first node inside the Private Network, so in the Control Panel go to the Private Network in your VPS list and connect all 5 new VPSs to it.

Now look up the IP address of the BladeVPS /X1 in the e-mails you received, and use it to SSH into our Salt Master with the SSH key we provided to Cloudinit.

$ ssh boss@<SALT-MASTER-IP>

Private IP

Let’s start by actually adding a private IP address to our private network interface. At TransIP you will probably get ens7 or ens8 as the private interface. To check, run:

$ ip a | grep ens

You’ll see one public interface with a public IP address configured, probably named ens3 or ens4, and one interface without an IP address attached, probably ens7 or ens8.

Let’s add a new file to the netplan config to configure our private interface.

$ sudo vi /etc/netplan/60-private-init.yaml

Or, if you are no vi fan, use nano to edit the file. From now on I won’t show the command for editing files; it’s up to you to choose your preferred editor!

And fill the file with:

network:
 version: 2
 renderer: networkd
 ethernets:
  ens7:
   dhcp4: false
   addresses: [ 192.168.0.2/24 ]

Make sure to replace ens7 in the file above with the right private interface name if it’s different. Now apply the netplan config with:

$ sudo netplan apply

Now you should see both interfaces with IP addresses attached when you run:

$ ip a | grep ens

Salt-master package

The version of SaltStack available in the default Ubuntu repositories is a bit behind, so we’ll use the official Salt repositories to get the newest version for Ubuntu 20.04 (Focal Fossa).

On our Salt Master we run:

# Download Salt GPG keyring
$ sudo curl -fsSL -o /usr/share/keyrings/salt-archive-keyring.gpg https://repo.saltproject.io/py3/ubuntu/20.04/amd64/latest/salt-archive-keyring.gpg

# Create Salt apt sources list file
$ echo "deb [signed-by=/usr/share/keyrings/salt-archive-keyring.gpg] https://repo.saltproject.io/py3/ubuntu/20.04/amd64/latest focal main" | sudo tee /etc/apt/sources.list.d/salt.list

# Install Salt master
$ sudo apt-get update
$ sudo apt-get install salt-master

Let’s force the Salt Master to bind to our private IP address by changing /etc/salt/master to contain:

interface: 192.168.0.2

Restart the Salt Master with:

$ sudo service salt-master restart
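Salt uses two TCP ports: 4505 (publish) and 4506 (request/return). To verify the master is now listening on the private address only, you can check with ss, for example:

$ sudo ss -ltn | grep -E '4505|4506'

Both ports should show 192.168.0.2 as the listening address.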

Opening up the firewall

And now we have an operational Salt Master. But what is a master without any minions? Let’s allow our future minions to contact us and heed our commands (Insert evil laughter: Muhahaha..):

$ sudo ufw allow in on ens7

Of course you’ll have to replace ens7 with ens8 if applicable here.
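If you’d rather not open the whole interface, a tighter alternative (a sketch; again, adjust the interface name if needed) is to allow only the two Salt ports:

$ sudo ufw allow in on ens7 to any port 4505,4506 proto tcp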

At this point you have an operational Salt Master which should be accessible for your minions!

Salt Minion Cloudinit

So now we need a way to provision Salt Minions. TransIP supports Cloudinit, which most major cloud providers support for provisioning a specific configuration onto a new VPS. We’d like a single Cloudinit file that can be used across all our cluster nodes as a common base to start from, and that base should include a Salt Minion service ready for use.

On each host, though, we’ll have to uniquely define:

  • The internal hostname (variable hostname)
  • The authorized public SSH key for our non-root account (variable public_ssh_key)
  • The private IP address to use (variable ip) within our private network IP layout
  • Whether the rootfs should be resized to fill the disk on the first run. By default this is true, but for our Gluster nodes we need to be able to disable it (variable resize_rootfs)

We’ll first step through each part of the cloud-config we use. A cloud-config file is a YAML-formatted text file that specifies all the actions to be taken after the host first boots. Ours also contains PHP code for quick templating of the host-unique values.

Hint: You can clone the end result from my transip-cloudinit repo.

#cloud-config
<?=$resize_rootfs.PHP_EOL?>
hostname: <?=$hostname.PHP_EOL?>
fqdn: <?=$hostname?>.dc1.<?=$domain.PHP_EOL?>

The file always starts with #cloud-config. By default the resize_rootfs variable is an empty string and thus the line is not present, except for storage nodes, where we need to do some manual actions first. (More on that later.)

We also define our node’s internal hostname, such as consul-server-01, and the FQDN, built from the hostname and our specified domain (e.g. example.com).
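For example, for our Gluster node (assuming the domain example.com from the config we’ll build later), the rendered header would read:

#cloud-config
resize_rootfs: false
hostname: gluster-01
fqdn: gluster-01.dc1.example.com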

users:
 - name: <?=$account_name.PHP_EOL?>
   ssh_authorized_keys:
     - <?=$ssh_public_key.PHP_EOL?>
   sudo: ['ALL=(ALL) NOPASSWD:ALL']
   shell: /bin/bash

If you don’t specify any additional users, a random password is generated for the root account and mailed to you. We’d prefer to have a non-root user account in place, so we specify a non-root user boss with an authorized public SSH key.

write_files:
- path: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
  content: |
        network: {config: disabled}

At first boot Cloudinit retrieves the provider’s network config for Netplan (which Ubuntu uses). We don’t want Cloudinit to redefine the network on every reboot, so we disable network configuration for future reboots by creating /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg.

- path: /etc/netplan/60-private-init.yaml
  content: |
    network:
     version: 2
     renderer: networkd
     ethernets:
      ens7:
       dhcp4: false
       addresses: [<?=$ip?>/24]
      ens8:
       dhcp4: false
       addresses: [<?=$ip?>/24]    
- path: /etc/sysctl.d/60-disable-ipv6.conf
  owner: root
  content: |
    net.ipv6.conf.all.disable_ipv6=1
    net.ipv6.conf.default.disable_ipv6=1    

After you connect your VPS to a Private Network on TransIP, the VPS gets a private network interface, but that interface still needs to be configured. I’ve seen both ens7 and ens8 being used at TransIP, and we won’t know in advance which one we’ll get. So we add an additional Netplan configuration file that defines both interfaces with the private IP address configured for our target node; only one of them will actually be present and active.

In addition we disable IPv6 as it’s not needed on our internal network.
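Once a node has booted, you can verify IPv6 is indeed off, for example with:

$ sysctl net.ipv6.conf.all.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1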

apt:
  preserve_sources_list: true
  sources:
    salt:
      source: 'deb [signed-by=/usr/share/keyrings/salt-archive-keyring.gpg] https://repo.saltproject.io/py3/ubuntu/20.04/amd64/latest focal main'

We add the SaltStack package repository for Focal Fossa (the codename for Ubuntu 20.04), so we can install the newest Salt minion. The Salt repository is signed with a GPG keyring. Unfortunately Cloudinit supports adding single GPG keys, but there is no easy way to add a GPG keyring in this step. So we’ll have to add the keyring later on in runcmd, and we don’t instruct Cloudinit to install the Salt minion here, as that would fail at this point.

runcmd:
- netplan --debug apply
- sleep 10
- sysctl -w net.ipv6.conf.all.disable_ipv6=1
- sysctl -w net.ipv6.conf.default.disable_ipv6=1

We apply our Netplan config, which configures our private network interface. We sleep afterwards to make sure DHCP for our public interface has completed before we start apt to download our packages.

- curl -fsSL -o /usr/share/keyrings/salt-archive-keyring.gpg https://repo.saltproject.io/py3/ubuntu/20.04/amd64/latest/salt-archive-keyring.gpg
- apt-get -y update
- apt-get -y install salt-minion

We download the GPG keyring for Salt, update apt and install our salt-minion package.

- [sed, -ir, -e, 's/^#master:.*$/master: <?=$salt_master_ip?>/', /etc/salt/minion]
- [sed, -ir, -e, 's/^#id:.*$/id: <?=$hostname?>/', /etc/salt/minion]
- [sed, -ir, -e, 's/^#rejected_retry:.*$/rejected_retry: True/', /etc/salt/minion]
- service salt-minion restart

And finally we instruct Cloudinit to configure our Salt minion to recognize our Salt Master at 192.168.0.2 and to use its own hostname as its Salt ID (and to keep retrying if the Salt Master rejects its key).

After restarting the Salt minion, we should see the minion pop up at the Salt master.
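On the Salt Master you can watch new minions arrive by listing the keys; the output will look something like this (here with one example minion, consul-server-01, still waiting for approval):

$ sudo salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
consul-server-01
Rejected Keys:

Accepting those keys is exactly what we’ll do once all nodes have been provisioned.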

Our full cloud_config.txt thus looks like:

#cloud-config
<?=$resize_rootfs?>

hostname: <?=$hostname.PHP_EOL?>
fqdn: <?=$hostname?>.dc1.<?=$domain.PHP_EOL?>
users:
 - name: <?=$account_name.PHP_EOL?>
   ssh_authorized_keys:
     - <?=$ssh_public_key.PHP_EOL?>
   sudo: ['ALL=(ALL) NOPASSWD:ALL']
   shell: /bin/bash
write_files:
- path: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
  content: |
        network: {config: disabled}
- path: /etc/netplan/60-private-init.yaml
  content: |
    network:
     version: 2
     renderer: networkd
     ethernets:
      ens7:
       dhcp4: false
       addresses: [<?=$ip?>/24]
      ens8:
       dhcp4: false
       addresses: [<?=$ip?>/24]    
- path: /etc/sysctl.d/60-disable-ipv6.conf
  owner: root
  content: |
    net.ipv6.conf.all.disable_ipv6=1
    net.ipv6.conf.default.disable_ipv6=1    
apt:
  preserve_sources_list: true
  sources:
    salt:
      source: 'deb [signed-by=/usr/share/keyrings/salt-archive-keyring.gpg] https://repo.saltproject.io/py3/ubuntu/20.04/amd64/latest focal main'
runcmd:
- netplan --debug apply
- sleep 10
- sysctl -w net.ipv6.conf.all.disable_ipv6=1
- sysctl -w net.ipv6.conf.default.disable_ipv6=1
- curl -fsSL -o /usr/share/keyrings/salt-archive-keyring.gpg https://repo.saltproject.io/py3/ubuntu/20.04/amd64/latest/salt-archive-keyring.gpg
- apt-get -y update
- apt-get -y install salt-minion
- [sed, -ir, -e, 's/^#master:.*$/master: <?=$salt_master_ip?>/', /etc/salt/minion]
- [sed, -ir, -e, 's/^#id:.*$/id: <?=$hostname?>/', /etc/salt/minion]
- [sed, -ir, -e, 's/^#rejected_retry:.*$/rejected_retry: True/', /etc/salt/minion]
- service salt-minion restart

TransIP PHP API for provisioning minions

TransIP has a well-documented API that allows you to perform all the control panel actions from code. To access the API you will need to generate either a time-limited Token or a semi-permanent Private Key to auto-generate new tokens. You can find the API key generation in your Control Panel under “My Account”. In our case we’ll generate a time-limited Access Token.

All configurable values we’ll put in config.php:

<?php
return array(
    'login' => 'example',
    'domain' => 'example.com',
    'account_name' => 'boss',
    'salt_master_ip' => '192.168.0.2',
    'api_token' => 'TRANSIP_API_TOKEN',
    'ssh_public_key' => 'ssh-ed25519 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA',
    'targets' => array(
        'example-vps3' => array(
            'ip' => '192.168.0.10',
            'hostname' => 'consul-server-01',
            ),
        'example-vps4' => array(
            'ip' => '192.168.0.20',
            'hostname' => 'nomad-server-01',
            ),
        'example-vps5' => array(
            'ip' => '192.168.0.30',
            'hostname' => 'docker-01',
            ),
        'example-vps6' => array(
            'ip' => '192.168.0.100',
            'hostname' => 'gluster-01',
            'resize_rootfs' => 'false',
        ),
    ),
);
?>

The values above the targets array are specific to your TransIP account (login and API token), your cluster’s domain name, your non-root account name, the Salt Master’s private IP and the personal SSH public key you want to use. You’ll need to personalize these values.

The targets config array defines the TransIP-internal VPS names and our config for them. TransIP generally names your VPSs as <TRANSIP_ACCOUNT_USERNAME>-vps<X>. For this article I’ve used the account name example.

As promised in the introduction, we’ll start with just one node of each type.

Then finally we need a script to interact with the TransIP API. TransIP provides a PHP API implementation that we can install with Composer, so we install both PHP and Composer and then pull in the API library:

$ sudo apt install php-cli composer
$ composer require transip/transip-api-php

For our install_vps.php script I’ll step through it section by section before posting the full file.

<?php
use Transip\Api\Library\TransipAPI;

require_once(__DIR__ . '/vendor/autoload.php');
$config = include(__DIR__ . '/config.php');
$targets = $config['targets'];

We start our script by pulling in the TransIP API library via Composer’s autoloader and loading our configuration from config.php.

if ($argc != 2)
{
    echo <<<HTML
To re-install VPS, use {$argv[0]} <host_id>

HTML;
    exit();
}

$host_id = $argv[1];
if (!isset($targets[$host_id]))
{
    echo <<<HTML
Unknown host id: {$host_id}

HTML;
    exit();
}

Next we check if the script is called with an argument and if that argument is a valid target specified in the configuration.

// Target specific template variables
//
$ip = $targets[$host_id]['ip'];
$hostname = $targets[$host_id]['hostname'];
$resize_rootfs = (isset($targets[$host_id]['resize_rootfs'])) ? 'resize_rootfs: '.$targets[$host_id]['resize_rootfs'] : '';

// Generic template variables
//
$account_name = $config['account_name'];
$domain = $config['domain'];
$ssh_public_key = $config['ssh_public_key'];
$salt_master_ip = $config['salt_master_ip'];

Then we prepare the template variables that are needed within the cloud-config template we put in cloud_config.txt. For the resize_rootfs variable we use an empty string if the variable is not set.

// Create templated cloud-config
//
ob_start();
include(__DIR__.'/cloud_config.txt');
$cloud_config = ob_get_clean();

$encoded_cloud_config = base64_encode($cloud_config);

With the template variables in hand, we start an output buffer and let PHP render the template, filling in the variables. The TransIP API requires the cloud-config to be base64-encoded.
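While developing, it can help to inspect the rendered cloud-config before sending it to the API. A simple way (a debugging aid, not part of the script in the repo) is to dump it to a file right after rendering:

// Optional: write the rendered cloud-config to disk for inspection
file_put_contents(__DIR__.'/rendered_'.$host_id.'.yaml', $cloud_config);

You can then check the resulting YAML by eye before triggering a re-install.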

// Connect with API and execute install
//
$api = new TransipAPI(
    $config['login'],
    '',
    true,
    $config['api_token']
);

try
{
    $response = $api->test()->test();
    if ($response === true)
        echo 'API connection successful!'.PHP_EOL;
}
catch (RuntimeException $e)
{
    echo 'Fatal Error: '.$e->getMessage().PHP_EOL;
    exit(1);
}

Now we create the API client and test our connection to the TransIP API.

echo "Forcing re-install of ".$host_id.PHP_EOL;

$response = $api->vpsOperatingSystems()->install(
    $host_id,
    'ubuntu-20.04',
    $hostname,
    $encoded_cloud_config,
    'cloudinit'
);

if ($response == null)
    echo "Re-installing".PHP_EOL;
else
    var_dump($response);
?>

And finally we tell the API to install a new operating system (ubuntu-20.04) on our target’s host ID, with our specified cloud-config.

The full install_vps.php thus reads:

<?php
use Transip\Api\Library\TransipAPI;

require_once(__DIR__ . '/vendor/autoload.php');
$config = include(__DIR__ . '/config.php');
$targets = $config['targets'];

if ($argc != 2)
{
    echo <<<HTML
To re-install VPS, use {$argv[0]} <host_id>

HTML;
    exit();
}

$host_id = $argv[1];
if (!isset($targets[$host_id]))
{
    echo <<<HTML
Unknown host id: {$host_id}

HTML;
    exit();
}

// Target specific template variables
//
$ip = $targets[$host_id]['ip'];
$hostname = $targets[$host_id]['hostname'];
$resize_rootfs = (isset($targets[$host_id]['resize_rootfs'])) ? 'resize_rootfs: '.$targets[$host_id]['resize_rootfs'] : '';

// Generic template variables
//
$account_name = $config['account_name'];
$domain = $config['domain'];
$ssh_public_key = $config['ssh_public_key'];
$salt_master_ip = $config['salt_master_ip'];

// Create templated cloud-config
//
ob_start();
include(__DIR__.'/cloud_config.txt');
$cloud_config = ob_get_clean();

$encoded_cloud_config = base64_encode($cloud_config);

// Connect with API and execute install
//
$api = new TransipAPI(
    $config['login'],
    '',
    true,
    $config['api_token']
);

try
{
    $response = $api->test()->test();
    if ($response === true)
        echo 'API connection successful!'.PHP_EOL;
}
catch (RuntimeException $e)
{
    echo 'Fatal Error: '.$e->getMessage().PHP_EOL;
    exit(1);
}

echo "Forcing re-install of ".$host_id.PHP_EOL;

$response = $api->vpsOperatingSystems()->install(
    $host_id,
    'ubuntu-20.04',
    $hostname,
    $encoded_cloud_config,
    'cloudinit'
);

if ($response == null)
    echo "Re-installing".PHP_EOL;
else
    var_dump($response);
?>

Creating reproducible Salt minions

To create our first four nodes, we run our install script like this:

$ php install_vps.php example-vps3
API connection successful!
Forcing re-install of example-vps3
Re-installing
$ php install_vps.php example-vps4
API connection successful!
Forcing re-install of example-vps4
Re-installing
$ php install_vps.php example-vps5
API connection successful!
Forcing re-install of example-vps5
Re-installing
$ php install_vps.php example-vps6
API connection successful!
Forcing re-install of example-vps6
Re-installing

This will create consul-server-01, nomad-server-01, docker-01 and gluster-01, all still bare Salt minions, ready to be provisioned.

To check that the installations completed correctly, we can accept the new minion keys on the Salt Master:

$ sudo salt-key -A --include-denied
The following keys are going to be accepted:
Unaccepted Keys:
consul-server-01
docker-01
gluster-01
nomad-server-01
Proceed? [n/Y] y
Key for minion consul-server-01 accepted.
Key for minion docker-01 accepted.
Key for minion gluster-01 accepted.
Key for minion nomad-server-01 accepted.

If there are any Salt minions pending approval, you grant them access to the Salt Master by accepting their keys, as shown above.

Now you can check that all nodes are responding with:

$ sudo salt '*' test.version
consul-server-01:
    3002.6
gluster-01:
    3002.6
docker-01:
    3002.6
nomad-server-01:
    3002.6

Keep in mind that every time you reinstall a node (for whatever reason) with install_vps.php, the Salt minion will generate a new key and will be denied access by the Salt Master. To prevent any issues, it’s better to remove the ‘old’ key of that Salt minion before you re-install, like this:

$ sudo salt-key -d node-01

Where you replace node-01 with the node’s hostname.
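For example, rebuilding docker-01 would look like this (salt-key on the Salt Master, the install script from wherever you use the TransIP API):

$ sudo salt-key -d docker-01
$ php install_vps.php example-vps5

After the re-install finishes, accept the minion’s new key again with salt-key -A.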

Finalizing the Gluster node

Wait a minute! Remember that we set one parameter differently for our Gluster node? We set resize_rootfs: false in its target configuration. We’ll need to do some manual steps on the Gluster node to make it ready for provisioning by SaltStack in the next part of this series.

So what happened? Well, by default Cloudinit resizes the rootfs to take up the full allotted disk space (150 GB in the case of a BladeVPS /X4). But for stability reasons it’s not advisable to have the Gluster volumes on the root filesystem of your nodes. So we want two partitions on the disk: one for the root filesystem and one for our Gluster volume data.

(Cloudinit does support configuring disks and partitions, but I could not get it to work correctly on a BladeVPS. If somebody manages to get it to work from the cloud-config, please comment below!)

You will need an interactive shell to perform the commands required, so you should SSH into the host using your authorized SSH key:

$ ssh boss@192.168.0.100
boss@gluster-01:~$

After Cloudinit runs with resize_rootfs: false, the root filesystem is only about 2 GB in size, while the root disk partition is 161 GB. So first you have to shrink the disk partition /dev/vda1 to a smaller size, like 20 GB: big enough for the root filesystem, logs, etc.

boss@gluster-01:~$ sudo parted
GNU Parted 3.3
Using /dev/vda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 161GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
14      1049kB  5243kB  4194kB                     bios_grub
15      5243kB  116MB   111MB   fat32              boot, esp
 1      116MB   161GB   161GB   ext4

(parted) resizepart 1
Warning: Partition /dev/vda1 is being used. Are you sure you want to continue?
Yes/No? yes
End?  [161GB]? 20G
Warning: Shrinking a partition can cause data loss, are you sure you want to continue?
Yes/No? yes
(parted) q
Information: You may need to update /etc/fstab.

Now that the disk partition is actually only 20 GB, we can resize the root filesystem to fill it with:

boss@gluster-01:~$ sudo resize2fs /dev/vda1
boss@gluster-01:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           394M  1.1M  393M   1% /run
/dev/vda1        18G  1.5G   17G   9% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0       56M   56M     0 100% /snap/core18/1988
/dev/loop1       33M   33M     0 100% /snap/snapd/11107
/dev/loop2       70M   70M     0 100% /snap/lxd/19188
/dev/vda15      105M  7.8M   97M   8% /boot/efi
tmpfs           394M     0  394M   0% /run/user/1000

Great. We can see that /dev/vda1 is now only about 18G in size. Now on to fdisk to create our Gluster data partition:

boss@gluster-01:~$ sudo fdisk /dev/vda

Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition number (2-13,16-128, default 2): <PRESS ENTER>
First sector (39062501-314572766, default 39063552): <PRESS ENTER>
Last sector, +/-sectors or +/-size{K,M,G,T,P} (39063552-314572766, default 314572766): <PRESS ENTER>

Created a new partition 2 of type 'Linux filesystem' and of size 131.4 GiB.

Command (m for help): p
Disk /dev/vda: 150 GiB, 161061273600 bytes, 314572800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt

Device        Start       End   Sectors   Size Type
/dev/vda1    227328  39062500  38835173  18.5G Linux filesystem
/dev/vda2  39063552 314572766 275509215 131.4G Linux filesystem
/dev/vda14     2048     10239      8192     4M BIOS boot
/dev/vda15    10240    227327    217088   106M EFI System

Partition table entries are not in disk order.

Command (m for help): w
The partition table has been altered.
Syncing disks.

Ok. Our second disk partition is in place. For Gluster we create an XFS filesystem on the partition:

boss@gluster-01:~$ sudo mkfs -t xfs -i size=512 /dev/vda2
meta-data=/dev/vda2              isize=512    agcount=4, agsize=8609663 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=34438651, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16815, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

And finally we add the partition to our standard mounts in /etc/fstab. (Feel free to pick a real volume name instead of storage.)

boss@gluster-01:~$ sudo bash
root@gluster-01:/home/boss# echo "/dev/vda2 /data/gluster/storage/brick1 xfs defaults 0 0" >> /etc/fstab
root@gluster-01:/home/boss# mkdir -p /data/gluster/storage/brick1 && mount -a && mkdir /data/gluster/storage/brick1/brick
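You can confirm the brick filesystem is mounted where we expect it by checking that df lists /dev/vda2 on /data/gluster/storage/brick1:

root@gluster-01:/home/boss# df -h /data/gluster/storage/brick1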

OK, now even our Gluster node is ready for further specialization.

Conclusion

At this point we have four bare Salt minions, ready to be provisioned for our purposes. We’ll start with provisioning the different nodes in Part 3: Provisioning Consul, Nomad and Gluster with SaltStack.
