Delegation, Rolling Updates, and Local Actions
==============================================

Ansible is great at doing things on one host on behalf of another, and in particular this is very applicable
when setting up continuous deployment infrastructure or zero downtime rolling updates.


Rolling Update Batch Size
`````````````````````````

.. versionadded:: 0.7

By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update
use case, you can define how many hosts Ansible should manage at a single time by using the 'serial' keyword::

    - name: test play
      hosts: webservers
      serial: 3

In the above example, if we had 100 hosts, 3 hosts in the group 'webservers' would complete the play
before Ansible moved on to the next 3 hosts.


Maximum Failure Percentage
``````````````````````````

.. versionadded:: 1.3

By default, Ansible will continue executing actions as long as there are hosts in the group that have not yet failed.
In some situations, such as with the rolling updates described above, it may be desirable to abort the play when a
certain threshold of failures has been reached. To achieve this, as of version 1.3 you can set a maximum failure
percentage on a play as follows::

    - hosts: webservers
      max_fail_percentage: 30
      serial: 10

In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted.

.. note::

    The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the play to abort
    when 2 of the systems failed, the percentage should be set at 49 rather than 50.

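
To make the threshold arithmetic concrete, here is how failure counts in the ten-host batches above compare
against the 30 percent limit::

    3 failed hosts out of 10  ->  30%  (does not exceed 30, the play continues)
    4 failed hosts out of 10  ->  40%  (exceeds 30, the rest of the play is aborted)
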

Delegation
``````````

.. versionadded:: 0.7

This isn't actually rolling update specific but comes up frequently in those cases.

If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task.
This is ideal for placing nodes in a load balanced pool, or removing them. It is also very useful for controlling
outage windows. Using this with the 'serial' keyword to control the number of hosts executing at one time is also
a good idea::

    ---
    - hosts: webservers
      serial: 5

      tasks:
      - name: take out of load balancer pool
        command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
        delegate_to: 127.0.0.1

      - name: actual steps would go here
        yum: name=acme-web-stack state=latest

      - name: add back to load balancer pool
        command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
        delegate_to: 127.0.0.1

These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that
you can use on a per-task basis: 'local_action'. Here is the same playbook as above, but using the shorthand
syntax for delegating to 127.0.0.1::

    ---
    # ...
      tasks:
      - name: take out of load balancer pool
        local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}

    # ...

      - name: add back to load balancer pool
        local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }}

A common pattern is to use a local action to call 'rsync' to recursively copy files to the managed servers.
Here is an example::

    ---
    # ...
      tasks:
      - name: recursively copy files from management server to target
        local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/

Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync
will need to ask for a passphrase.

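
If you need to set up an agent first, a minimal sketch of doing so on the machine running Ansible (the key path
here is only an example) might look like this::

    # start an agent for the current shell session and load a private key
    eval $(ssh-agent -s)
    ssh-add ~/.ssh/id_rsa
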

Local Playbooks
```````````````

It may be useful to run a playbook locally, rather than by connecting over SSH. This can be useful
for ensuring the configuration of a system by putting a playbook in a crontab. This may also be used
to run a playbook inside an OS installer, such as an Anaconda kickstart.

To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so::

    ansible-playbook playbook.yml --connection=local

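
For the crontab use case mentioned above, a minimal sketch of a cron entry (the schedule and paths are only examples,
and cron may need the full path to ansible-playbook) might be::

    # re-apply the local configuration every 30 minutes
    */30 * * * * ansible-playbook /path/to/playbook.yml --connection=local
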

Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook
use the default remote connection type::

    - hosts: 127.0.0.1
      connection: local