
Update tags and async sections, among other changes.

This commit is contained in:
Michael DeHaan 2013-09-29 19:03:51 -04:00
parent c10904bcef
commit ad2736b253
12 changed files with 377 additions and 5765 deletions

File diff suppressed because it is too large


@@ -4,7 +4,7 @@ API & Integrations
There are several interesting ways to use Ansible from an API perspective. You can use
the Ansible python API to control nodes, you can extend Ansible to respond to various python events,
and you can plug in inventory data from external data sources. Ansible is written in its own
API so you have a considerable amount of power across the board.
API so you have a considerable amount of power across the board. This chapter discusses the Python API.
.. contents:: `Table of contents`
:depth: 2
@@ -81,317 +81,3 @@ Advanced programmers may also wish to read the source to ansible itself, for
it uses the Runner() API (with all available options) to implement the
command line tools ``ansible`` and ``ansible-playbook``.
Plugins Available Online
------------------------
The remainder of features in the API docs have components available in `ansible-plugins <https://github.com/ansible/ansible/blob/devel/plugins>`_. Send us a github pull request if you develop any interesting features.
External Inventory Scripts
--------------------------
Often a user of a configuration management system will want to keep inventory
in a different system. Frequent examples include LDAP, `Cobbler <http://cobbler.github.com>`_,
or a piece of expensive enterprisey CMDB software. Ansible easily supports all
of these options via an external inventory system. The plugins directory contains some of these already -- including options for EC2/Eucalyptus and OpenStack, which will be detailed below.
It's possible to write an external inventory script in any language. If you are familiar with Puppet terminology, this concept is basically the same as 'external nodes', with the slight difference that it also defines which hosts are managed.
Script Conventions
``````````````````
When the external node script is called with the single argument '--list', the script must return a JSON hash/dictionary of all the groups to be managed.
Each group's value should be either a hash/dictionary containing a list of each host/IP, potential child groups, and potential group variables, or
simply a list of host/IP addresses, like so::
{
"databases" : {
"hosts" : [ "host1.example.com", "host2.example.com" ],
"vars" : {
"a" : true
}
},
"webservers" : [ "host2.example.com", "host3.example.com" ],
"atlanta" : {
"hosts" : [ "host1.example.com", "host4.example.com", "host5.example.com" ],
"vars" : {
"b" : false
},
"children": [ "marietta", "5points" ],
},
"marietta" : [ "host6.example.com" ],
"5points" : [ "host7.example.com" ]
}
.. versionadded:: 1.0
Before version 1.0, each group could only have a list of hostnames/IP addresses, like the webservers, marietta, and 5points groups above.
When called with the arguments '--host <hostname>' (where <hostname> is a host from above), the script must return either an empty JSON
hash/dictionary, or a hash/dictionary of variables to make available to templates and playbooks. Returning variables is optional;
if the script does not wish to do this, returning an empty hash/dictionary is the way to go::
{
"favcolor" : "red",
"ntpserver" : "wolf.example.com",
"monitoring" : "pack.example.com"
}
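Putting these conventions together, a minimal inventory script might look like the following sketch (the groups, hosts, and variables are invented for illustration)::
    #!/usr/bin/env python
    # A hypothetical inventory source; everything below is made up.
    import json
    import sys

    INVENTORY = {
        "webservers": ["host2.example.com", "host3.example.com"],
        "atlanta": {
            "hosts": ["host1.example.com"],
            "vars": {"b": False},
        },
    }

    HOSTVARS = {
        "host1.example.com": {"favcolor": "red"},
    }

    if len(sys.argv) == 2 and sys.argv[1] == '--list':
        print(json.dumps(INVENTORY))
    elif len(sys.argv) == 3 and sys.argv[1] == '--host':
        # Per the convention above, return an empty hash when a host
        # has no variables of its own.
        print(json.dumps(HOSTVARS.get(sys.argv[2], {})))
    else:
        sys.stderr.write("usage: --list | --host <hostname>\n")
        sys.exit(1)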
Tuning the External Inventory Script
````````````````````````````````````
.. versionadded:: 1.3
The stock inventory script system detailed above works for all versions of Ansible, but calling
'--host' for every host can be rather expensive, especially if it involves costly API calls to
a remote subsystem. In Ansible
1.3 or later, if the inventory script returns a top level element called "_meta", it is possible
to return all of the host variables in one inventory script call. When this meta element contains
a value for "hostvars", the inventory script will not be invoked with "--host" for each host. This
results in a significant performance increase for large numbers of hosts, and also makes client
side caching easier to implement for the inventory script.
The data to be added to the top level JSON dictionary looks like this::
{
# results of inventory script as above go here
# ...
"_meta" : {
"hostvars" : {
"moocow.example.com" : { "asdf" : 1234 },
"llama.example.com" : { "asdf" : 5678 },
}
}
}
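For illustration, a '--list' handler might fold collected host variables into ``_meta`` like this sketch (hostnames and values are taken from the example above)::
    import json

    def list_with_meta(groups, hostvars):
        # With "_meta" present, Ansible 1.3+ will not call the script
        # again with --host for each host.
        result = dict(groups)
        result["_meta"] = {"hostvars": hostvars}
        return json.dumps(result)

    print(list_with_meta(
        {"databases": ["moocow.example.com", "llama.example.com"]},
        {"moocow.example.com": {"asdf": 1234},
         "llama.example.com": {"asdf": 5678}}))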
Example: The Cobbler External Inventory Script
``````````````````````````````````````````````
It is expected that many Ansible users will also be `Cobbler <http://cobbler.github.com>`_ users. Cobbler has a generic
layer that allows it to represent data for multiple configuration management systems (even at the same time), and has
been referred to as a 'lightweight CMDB' by some admins. This particular script will communicate with Cobbler
using Cobbler's XMLRPC API.
To tie Ansible's inventory to Cobbler (optional), copy `this script <https://raw.github.com/ansible/ansible/devel/plugins/inventory/cobbler.py>`_ to /etc/ansible/hosts and `chmod +x` the file. cobblerd will now need
to be running when you are using Ansible.
Test the file by running `/etc/ansible/hosts` directly. You should see some JSON data output, but it may not have
anything in it just yet.
Let's explore what this does. In Cobbler, assume a scenario somewhat like the following::
cobbler profile add --name=webserver --distro=CentOS6-x86_64
cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3"
cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4"
cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5"
In the example above, the system 'foo.example.com' will be addressable by ansible directly, but will also be addressable when using the group names 'webserver' or 'atlanta'. Since Ansible uses SSH, we'll try to contact system foo over 'foo.example.com' only, never just 'foo'. Similarly, if you try "ansible foo" it wouldn't find the system... but "ansible 'foo*'" would, because the system DNS name starts with 'foo'.
The script doesn't just provide host and group info. In addition, as a bonus, when the 'setup' module is run (which happens automatically when using playbooks), the variables 'a', 'b', and 'c' will all be auto-populated in the templates::
# file: /srv/motd.j2
Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }}
Which could be executed just like this::
ansible webserver -m setup
ansible webserver -m template -a "src=/tmp/motd.j2 dest=/etc/motd"
.. note::
The name 'webserver' came from cobbler, as did the variables for
the config file. You can still pass in your own variables like
normal in Ansible, but variables from the external inventory script
will override any that have the same name.
So, with the template above (motd.j2), this would result in the following data being written to /etc/motd for system 'foo'::
Welcome, I am templated with a value of a=2, b=3, and c=4
And on system 'bar' (bar.example.com)::
Welcome, I am templated with a value of a=2, b=3, and c=5
And technically, though there is no major good reason to do it, this also works::
ansible webserver -m shell -a "echo {{ a }}"
So in other words, you can use those variables in arguments/actions as well. You might use this to name
a conf.d file appropriately or something similar. Who knows?
So that's the Cobbler integration support -- using the cobbler script as an example, it should be trivial to adapt Ansible to pull inventory, as well as variable information, from any data source. If you create anything interesting, please share with the mailing list, and we can keep it in the source code tree for others to use.
Example: AWS EC2 External Inventory Script
``````````````````````````````````````````
If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach. For this reason, you can use the `EC2 external inventory <https://raw.github.com/ansible/ansible/devel/plugins/inventory/ec2.py>`_ script.
You can use this script in one of two ways. The easiest is to use Ansible's ``-i`` command line option and specify the path to the script::
    ansible -i ec2.py -u ubuntu us-east-1d -m ping
The second option is to copy the script to `/etc/ansible/hosts` and `chmod +x` it. You will also need to copy the `ec2.ini <https://raw.github.com/ansible/ansible/devel/plugins/inventory/ec2.ini>`_ file to `/etc/ansible/ec2.ini`. Then you can run ansible as you would normally.
To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are a `variety of methods <http://docs.pythonboto.org/en/latest/boto_config_tut.html>`_ available, but the simplest is just to export two environment variables::
    export AWS_ACCESS_KEY_ID='AK123'
    export AWS_SECRET_ACCESS_KEY='abc123'
You can test the script by itself to make sure your config is correct::
    cd plugins/inventory
    ./ec2.py --list
After a few moments, you should see your entire EC2 inventory across all regions in JSON.
Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ``ec2.ini`` and list only the regions you are interested in. There are other config options in ``ec2.ini``, including cache control and destination variables.
At their heart, inventory files are simply a mapping from some name to a destination address. The default ``ec2.ini`` settings are configured for running Ansible from outside EC2 (from your laptop, for example). If you are running Ansible from within EC2, internal DNS names and IP addresses may make more sense than public DNS names. In this case, you can modify the ``destination_variable`` in ``ec2.ini`` to be the private DNS name of an instance. This is particularly important when running Ansible within a private subnet inside a VPC, where the only way to access an instance is via its private IP address. For VPC instances, ``vpc_destination_variable`` in ``ec2.ini`` provides a means of using whichever `boto.ec2.instance variable <http://docs.pythonboto.org/en/latest/ref/ec2.html#module-boto.ec2.instance>`_ makes the most sense for your use case.
The EC2 external inventory provides mappings to instances from several groups:
Instance ID
These are groups of one since instance IDs are unique.
e.g.
``i-00112233``
``i-a1b1c1d1``
Region
A group of all instances in an AWS region.
e.g.
``us-east-1``
``us-west-2``
Availability Zone
A group of all instances in an availability zone.
e.g.
``us-east-1a``
``us-east-1b``
Security Group
Instances belong to one or more security groups. A group is created for each security group, with all characters except alphanumerics and dashes (-) converted to underscores (_). Each group is prefixed by ``security_group_``; a sketch of this conversion appears after this list.
e.g.
``security_group_default``
``security_group_webservers``
``security_group_Pete_s_Fancy_Group``
Tags
Each instance can have a variety of key/value pairs associated with it called Tags. The most common tag key is 'Name', though anything is possible. Each key/value pair is its own group of instances, again with special characters converted to underscores, in the format ``tag_KEY_VALUE``
e.g.
``tag_Name_Web``
``tag_Name_redis-master-001``
``tag_aws_cloudformation_logical-id_WebServerGroup``
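The sanitization described above for security group and tag names can be sketched in a couple of lines; this approximates the rule rather than reproducing the exact code in ``ec2.py``::
    import re

    def to_safe(word):
        # Convert anything that is not alphanumeric, an underscore, or
        # a dash into an underscore, per the grouping rules above.
        return re.sub(r"[^A-Za-z0-9_-]", "_", word)

    print("security_group_" + to_safe("Pete's Fancy Group"))
    # -> security_group_Pete_s_Fancy_Group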
When Ansible is interacting with a specific server, the EC2 inventory script is called again with the ``--host HOST`` option. This looks up the HOST in the index cache to get the instance ID, and then makes an API call to AWS to get information about that specific instance. It then makes information about that instance available as variables to your playbooks. Each variable is prefixed by ``ec2_``. Here are some of the variables available:
- ec2_architecture
- ec2_description
- ec2_dns_name
- ec2_id
- ec2_image_id
- ec2_instance_type
- ec2_ip_address
- ec2_kernel
- ec2_key_name
- ec2_launch_time
- ec2_monitored
- ec2_ownerId
- ec2_placement
- ec2_platform
- ec2_previous_state
- ec2_private_dns_name
- ec2_private_ip_address
- ec2_public_dns_name
- ec2_ramdisk
- ec2_region
- ec2_root_device_name
- ec2_root_device_type
- ec2_security_group_ids
- ec2_security_group_names
- ec2_spot_instance_request_id
- ec2_state
- ec2_state_code
- ec2_state_reason
- ec2_status
- ec2_subnet_id
- ec2_tag_Name
- ec2_tenancy
- ec2_virtualization_type
- ec2_vpc_id
Both ``ec2_security_group_ids`` and ``ec2_security_group_names`` are comma-separated lists of all security groups. Each EC2 tag is a variable in the format ``ec2_tag_KEY``.
To see the complete list of variables available for an instance, run the script by itself::
cd plugins/inventory
./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com
Example: OpenStack Inventory Script
```````````````````````````````````
Though not detailed here in as much depth as the EC2 module, there's also an OpenStack Compute external inventory source in the plugins directory. It requires the Grizzly release of OpenStack or
later. See the inline comments in the module source for how to use it.
Callback Plugins
----------------
Ansible can be configured via code to respond to external events. This can include enhancing logging, signalling an external software
system, or even (yes, really) making sound effects. Some examples are contained in the plugins directory.
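As a sketch, a minimal logging callback could look like the following; the ``CallbackModule`` class name follows the plugin convention, while the hook methods shown (``runner_on_ok`` and friends) are assumptions based on the shipped examples::
    class CallbackModule(object):
        # Prints a line for each task result. The hook names here are
        # assumptions modeled on the examples in the plugins directory.

        def runner_on_ok(self, host, res):
            print("OK: %s" % host)

        def runner_on_failed(self, host, res, ignore_errors=False):
            print("FAILED: %s -> %s" % (host, res.get('msg', '')))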
Connection Type Plugins
-----------------------
By default, ansible ships with a 'paramiko' SSH, native ssh (just called 'ssh'), and 'local' connection type, as well as an accelerated connection type named 'fireball'. All of these can be used
in playbooks and with /usr/bin/ansible to decide how you want to talk to remote machines. The basics of these connection types
are covered in the 'getting started' section. Should you want to extend Ansible to support other transports (SNMP? Message bus?
Carrier Pigeon?) it's as simple as copying the format of one of the existing modules and dropping it into the connection plugins
directory. The value of 'smart' for a connection allows selection of paramiko or openssh based on system capabilities, and chooses
'ssh' if OpenSSH supports ControlPersist, in Ansible 1.2.1 and later. Previous versions did not support 'smart'.
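For orientation, the rough skeleton of a connection plugin follows; the method set mirrors what the shipped connection types implement, but the exact signatures are placeholders, not a stable contract::
    class Connection(object):
        # Skeleton of a transport plugin; treat these signatures as
        # placeholders sketched from the shipped connection types.

        def __init__(self, runner, host, port, *args, **kwargs):
            self.runner = runner
            self.host = host
            self.port = port

        def connect(self):
            # Open the transport here (SSH session, bus channel, ...).
            return self

        def exec_command(self, cmd, *args, **kwargs):
            # Run a command on the remote host and return its result.
            raise NotImplementedError

        def put_file(self, in_path, out_path):
            raise NotImplementedError

        def fetch_file(self, in_path, out_path):
            raise NotImplementedError

        def close(self):
            pass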
Lookup Plugins
--------------
Language constructs like "with_fileglob" and "with_items" are implemented via lookup plugins. Just like other plugin types, you can write your own.
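Here is a toy lookup plugin, sketched against the 1.x-era interface (the ``run()`` signature is an assumption)::
    class LookupModule(object):
        # A toy lookup that upper-cases its terms.

        def __init__(self, basedir=None, **kwargs):
            self.basedir = basedir

        def run(self, terms, inject=None, **kwargs):
            # Lookup plugins return a list, which is what drives the
            # corresponding with_<name> loop construct.
            return [str(t).upper() for t in terms]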
Vars Plugins
------------
Playbook constructs like 'host_vars' and 'group_vars' work via 'vars' plugins. They inject additional variable
data into ansible runs that did not come from an inventory, playbook, or command line. Note that variables
can also be returned from inventory, so in most cases, you won't need to write or understand vars_plugins.
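If you do need one, the expected shape is roughly the following sketch (the constructor and ``run()`` signature are assumptions)::
    class VarsModule(object):
        # Sketch of a vars plugin; the shape here is an assumption.

        def __init__(self, inventory):
            self.inventory = inventory

        def run(self, host):
            # Return a dict of extra variables for this host; the
            # value below is a made-up example.
            return {'datacenter': 'dc1'}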
Filter Plugins
--------------
If you want more Jinja2 filters available in your templates (filters like to_yaml and to_json are provided by default), you can add them by writing a filter plugin.
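A filter plugin is a class named ``FilterModule`` whose ``filters()`` method returns a dictionary mapping filter names to callables, for example::
    def reverse_string(value):
        # Usable in a template as {{ myvar | reverse_string }}.
        return value[::-1]

    class FilterModule(object):
        def filters(self):
            # Map filter names to callables; these are merged into
            # the Jinja2 environment.
            return {'reverse_string': reverse_string}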
Distributing Plugins
--------------------
.. versionadded:: 0.8
Plugins are loaded from both Python's site_packages (those that ship with ansible) and a configured plugins directory, which defaults
to /usr/share/ansible/plugins, in a subfolder for each plugin type::
* action_plugins
* lookup_plugins
* callback_plugins
* connection_plugins
* filter_plugins
* vars_plugins
To change this path, edit the ansible configuration file.
In addition, plugins can be shipped in a subdirectory relative to a top-level playbook, in folders named the same as indicated above.
.. seealso::
:doc:`modules`
List of built-in modules
`Mailing List <http://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel


@@ -1,397 +0,0 @@
API & Integrations
==================
There are several interesting ways to use Ansible from an API perspective. You can use
the Ansible python API to control nodes, you can extend Ansible to respond to various python events,
and you can plug in inventory data from external data sources. Ansible is written in its own
API so you have a considerable amount of power across the board.
.. contents:: `Table of contents`
:depth: 2
Python API
----------
The Python API is very powerful, and is how the ansible CLI and ansible-playbook
are implemented.
It's pretty simple::
import ansible.runner
runner = ansible.runner.Runner(
module_name='ping',
module_args='',
pattern='web*',
forks=10
)
datastructure = runner.run()
The run method returns results per host, grouped by whether they
could be contacted or not. Return types are module specific, as
expressed in the 'ansible-modules' documentation::
{
"dark" : {
"web1.example.com" : "failure message"
},
"contacted" : {
"web2.example.com" : 1
}
}
A module can return any type of JSON data it wants, so Ansible can
be used as a framework to rapidly build powerful applications and scripts.
Detailed API Example
````````````````````
The following script prints out the uptime information for all hosts::
#!/usr/bin/python
import ansible.runner
import sys
# construct the ansible runner and execute on all hosts
results = ansible.runner.Runner(
pattern='*', forks=10,
module_name='command', module_args='/usr/bin/uptime',
).run()
if results is None:
print "No hosts found"
sys.exit(1)
print "UP ***********"
for (hostname, result) in results['contacted'].items():
if 'failed' not in result:
print "%s >>> %s" % (hostname, result['stdout'])
print "FAILED *******"
for (hostname, result) in results['contacted'].items():
if 'failed' in result:
print "%s >>> %s" % (hostname, result['msg'])
print "DOWN *********"
for (hostname, result) in results['dark'].items():
print "%s >>> %s" % (hostname, result)
Advanced programmers may also wish to read the source to ansible itself, for
it uses the Runner() API (with all available options) to implement the
command line tools ``ansible`` and ``ansible-playbook``.


@@ -1,107 +1,16 @@
Inventory Plugins
=================
As described in `intro_inventory_dynamic`, ansible can pull inventory information from dynamic sources, including cloud sources.
.. contents:: `Table of contents`
:depth: 2
How do we write a new one?
Simple! We just create a script that can return JSON in the right format when fed the proper arguments.
Script Conventions
``````````````````
When the external node script is called with the single argument '--list', the script must return a JSON hash/dictionary of all the groups to be managed. Each group's value should be either a hash/dictionary containing a list of each host/IP, potential child groups, and potential group variables, or simply a list of host/IP addresses, like so::
{
"databases" : {
@@ -167,231 +76,3 @@ The data to be added to the top level JSON dictionary looks like this::
}


@@ -1,360 +1,33 @@
Plugin Development
==================
Ansible is pluggable in a lot of other ways separate from inventory scripts and callbacks. Many of these features are there to cover
fringe use cases and are infrequently needed, and others are pluggable simply because they are there to implement core features
in ansible and were most convenient to be made pluggable.
.. contents:: `Table of contents`
:depth: 2
Python API
----------
The Python API is very powerful, and is how the ansible CLI and ansible-playbook
are implemented.
It's pretty simple::
import ansible.runner
runner = ansible.runner.Runner(
module_name='ping',
module_args='',
pattern='web*',
forks=10
)
datastructure = runner.run()
The run method returns results per host, grouped by whether they
could be contacted or not. Return types are module specific, as
expressed in the 'ansible-modules' documentation.::
{
"dark" : {
"web1.example.com" : "failure message"
},
"contacted" : {
"web2.example.com" : 1
}
}
A module can return any type of JSON data it wants, so Ansible can
be used as a framework to rapidly build powerful applications and scripts.
Detailed API Example
````````````````````
The following script prints out the uptime information for all hosts::
#!/usr/bin/python
import ansible.runner
import sys
# construct the ansible runner and execute on all hosts
results = ansible.runner.Runner(
pattern='*', forks=10,
module_name='command', module_args='/usr/bin/uptime',
).run()
if results is None:
print "No hosts found"
sys.exit(1)
print "UP ***********"
for (hostname, result) in results['contacted'].items():
if not 'failed' in result:
print "%s >>> %s" % (hostname, result['stdout'])
print "FAILED *******"
for (hostname, result) in results['contacted'].items():
if 'failed' in result:
print "%s >>> %s" % (hostname, result['msg'])
print "DOWN *********"
for (hostname, result) in results['dark'].items():
print "%s >>> %s" % (hostname, result)
Advanced programmers may also wish to read the source to ansible itself, for
it uses the Runner() API (with all available options) to implement the
command line tools ``ansible`` and ``ansible-playbook``.
Plugins Available Online
------------------------
The remainder of features in the API docs have components available in `ansible-plugins <https://github.com/ansible/ansible/blob/devel/plugins>`_. Send us a github pull request if you develop any interesting features.
External Inventory Scripts
--------------------------
Often a user of a configuration management system will want to keep inventory
in a different system. Frequent examples include LDAP, `Cobbler <http://cobbler.github.com>`_,
or a piece of expensive enterprisey CMDB software. Ansible easily supports all
of these options via an external inventory system. The plugins directory contains some of these already -- including options for EC2/Eucalyptus and OpenStack, which will be detailed below.
It's possible to write an external inventory script in any language. If you are familiar with Puppet terminology, this concept is basically the same as 'external nodes', with the slight difference that it also defines which hosts are managed.
Script Conventions
``````````````````
When the external node script is called with the single argument '--list', the script must return a JSON hash/dictionary of all the groups to be managed.
Each group's value should be either a hash/dictionary containing a list of each host/IP, potential child groups, and potential group variables, or
simply a list of host/IP addresses, like so::
{
"databases" : {
"hosts" : [ "host1.example.com", "host2.example.com" ],
"vars" : {
"a" : true
}
},
"webservers" : [ "host2.example.com", "host3.example.com" ],
"atlanta" : {
"hosts" : [ "host1.example.com", "host4.example.com", "host5.example.com" ],
"vars" : {
"b" : false
},
"children": [ "marietta", "5points" ],
},
"marietta" : [ "host6.example.com" ],
"5points" : [ "host7.example.com" ]
}
.. versionadded:: 1.0
Before version 1.0, each group could only have a list of hostnames/IP addresses, like the webservers, marietta, and 5points groups above.
When called with the arguments '--host <hostname>' (where <hostname> is a host from above), the script must return either an empty JSON
hash/dictionary, or a hash/dictionary of variables to make available to templates and playbooks. Returning variables is optional,
if the script does not wish to do this, returning an empty hash/dictionary is the way to go::
{
"favcolor" : "red",
"ntpserver" : "wolf.example.com",
"monitoring" : "pack.example.com"
}
Tuning the External Inventory Script
````````````````````````````````````
.. versionadded:: 1.3
The stock inventory script system detailed above works for all versions of Ansible, but calling
'--host' for every host can be rather expensive, especially if it involves expensive API calls to
a remote subsystemm. In Ansible
1.3 or later, if the inventory script returns a top level element called "_meta", it is possible
to return all of the host variables in one inventory script call. When this meta element contains
a value for "hostvars", the inventory script will not be invoked with "--host" for each host. This
results in a significant performance increase for large numbers of hosts, and also makes client
side caching easier to implement for the inventory script.
The data to be added to the top level JSON dictionary looks like this::
{
# results of inventory script as above go here
# ...
"_meta" : {
"hostvars" : {
"moocow.example.com" : { "asdf" : 1234 },
"llama.example.com" : { "asdf" : 5678 },
}
}
}
Example: The Cobbler External Inventory Script
``````````````````````````````````````````````
It is expected that many Ansible users will also be `Cobbler <http://cobbler.github.com>`_ users. Cobbler has a generic
layer that allows it to represent data for multiple configuration management systems (even at the same time), and has
been referred to as a 'lightweight CMDB' by some admins. This particular script will communicate with Cobbler
using Cobbler's XMLRPC API.
To tie Ansible's inventory to Cobbler (optional), copy `this script <https://raw.github.com/ansible/ansible/devel/plugins/inventory/cobbler.py>`_ to /etc/ansible/hosts and `chmod +x` the file. cobblerd will now need
to be running when you are using Ansible.
Test the file by running `./etc/ansible/hosts` directly. You should see some JSON data output, but it may not have
anything in it just yet.
Let's explore what this does. In cobbler, assume a scenario somewhat like the following::
cobbler profile add --name=webserver --distro=CentOS6-x86_64
cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3"
cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4"
cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5"
In the example above, the system 'foo.example.com' will be addressable by ansible directly, but will also be addressable when using the group names 'webserver' or 'atlanta'. Since Ansible uses SSH, we'll try to contract system foo over 'foo.example.com', only, never just 'foo'. Similarly, if you try "ansible foo" it wouldn't find the system... but "ansible 'foo*'" would, because the system DNS name starts with 'foo'.
The script doesn't just provide host and group info. In addition, as a bonus, when the 'setup' module is run (which happens automatically when using playbooks), the variables 'a', 'b', and 'c' will all be auto-populated in the templates::
# file: /srv/motd.j2
Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }}
Which could be executed just like this::
ansible webserver -m setup
ansible webserver -m template -a "src=/tmp/motd.j2 dest=/etc/motd"
.. note::
The name 'webserver' came from cobbler, as did the variables for
the config file. You can still pass in your own variables like
normal in Ansible, but variables from the external inventory script
will override any that have the same name.
So, with the template above (motd.j2), this would result in the following data being written to /etc/motd for system 'foo'::
Welcome, I am templated with a value of a=2, b=3, and c=4
And on system 'bar' (bar.example.com)::
Welcome, I am templated with a value of a=2, b=3, and c=5
And technically, though there is no major good reason to do it, this also works::
ansible webserver -m shell -a "echo {{ a }}"
So in other words, you can use those variables in arguments/actions as well. You might use this to name
a conf.d file appropriately or something similar. Who knows?
So that's the Cobbler integration support -- using the cobbler script as an example, it should be trivial to adapt Ansible to pull inventory, as well as variable information, from any data source. If you create anything interesting, please share with the mailing list, and we can keep it in the source code tree for others to use.
Example: AWS EC2 External Inventory Script
``````````````````````````````````````````
If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach. For this reason, you can use the `EC2 external inventory <https://raw.github.com/ansible/ansible/devel/plugins/inventory/ec2.py>`_ script.
You can use this script in one of two ways. The easiest is to use Ansible's ``-i`` command line option and specify the path to the script::

    ansible -i ec2.py -u ubuntu us-east-1d -m ping
The second option is to copy the script to `/etc/ansible/hosts` and `chmod +x` it. You will also need to copy the `ec2.ini <https://raw.github.com/ansible/ansible/devel/plugins/inventory/ec2.ini>`_ file to `/etc/ansible/ec2.ini`. Then you can run ansible as you would normally.
To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are a `variety of methods <http://docs.pythonboto.org/en/latest/boto_config_tut.html>`_ available, but the simplest is just to export two environment variables::

    export AWS_ACCESS_KEY_ID='AK123'
    export AWS_SECRET_ACCESS_KEY='abc123'
You can test the script by itself to make sure your config is correct::

    cd plugins/inventory
    ./ec2.py --list
After a few moments, you should see your entire EC2 inventory across all regions in JSON.
Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ``ec2.ini`` and list only the regions you are interested in. There are other config options in ``ec2.ini`` including cache control, and destination variables.
At their heart, inventory files are simply a mapping from some name to a destination address. The default ``ec2.ini`` settings are configured for running Ansible from outside EC2 (from your laptop for example). If you are running Ansible from within EC2, internal DNS names and IP addresses may make more sense than public DNS names. In this case, you can modify the ``destination_variable`` in ``ec2.ini`` to be the private DNS name of an instance. This is particularly important when running Ansible within a private subnet inside a VPC, where the only way to access an instance is via its private IP address. For VPC instances, `vpc_destination_variable` in ``ec2.ini`` provides a means of using whichever `boto.ec2.instance variable <http://docs.pythonboto.org/en/latest/ref/ec2.html#module-boto.ec2.instance>`_ makes the most sense for your use case.
The EC2 external inventory provides mappings to instances from several groups:
Instance ID
These are groups of one since instance IDs are unique.
e.g.
``i-00112233``
``i-a1b1c1d1``
Region
A group of all instances in an AWS region.
e.g.
``us-east-1``
``us-west-2``
Availability Zone
A group of all instances in an availability zone.
e.g.
``us-east-1a``
``us-east-1b``
Security Group
Instances belong to one or more security groups. A group is created for each security group, with all characters except alphanumerics and dashes (-) converted to underscores (_). Each group is prefixed by ``security_group_``
e.g.
``security_group_default``
``security_group_webservers``
``security_group_Pete_s_Fancy_Group``
Tags
Each instance can have a variety of key/value pairs associated with it called Tags. The most common tag key is 'Name', though anything is possible. Each key/value pair is its own group of instances, again with special characters converted to underscores, in the format ``tag_KEY_VALUE``
e.g.
``tag_Name_Web``
``tag_Name_redis-master-001``
``tag_aws_cloudformation_logical-id_WebServerGroup``
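Any of these generated groups can be used as a host pattern directly. For example, assuming the setup described above and instances tagged with Name=Web, either of these (illustrative) commands would work::

    # target every instance carrying the tag Name=Web
    ansible -i ec2.py tag_Name_Web -m ping

    # or everything in a particular security group
    ansible -i ec2.py security_group_webservers -a "uptime"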
When Ansible is interacting with a specific server, the EC2 inventory script is called again with the ``--host HOST`` option. This looks up the HOST in the index cache to get the instance ID, and then makes an API call to AWS to get information about that specific instance. It then makes information about that instance available as variables to your playbooks. Each variable is prefixed by ``ec2_``. Here are some of the variables available:
- ec2_architecture
- ec2_description
- ec2_dns_name
- ec2_id
- ec2_image_id
- ec2_instance_type
- ec2_ip_address
- ec2_kernel
- ec2_key_name
- ec2_launch_time
- ec2_monitored
- ec2_ownerId
- ec2_placement
- ec2_platform
- ec2_previous_state
- ec2_private_dns_name
- ec2_private_ip_address
- ec2_public_dns_name
- ec2_ramdisk
- ec2_region
- ec2_root_device_name
- ec2_root_device_type
- ec2_security_group_ids
- ec2_security_group_names
- ec2_spot_instance_request_id
- ec2_state
- ec2_state_code
- ec2_state_reason
- ec2_status
- ec2_subnet_id
- ec2_tag_Name
- ec2_tenancy
- ec2_virtualization_type
- ec2_vpc_id
Both ``ec2_security_group_ids`` and ``ec2_security_group_names`` are comma-separated lists of all security groups. Each EC2 tag is a variable in the format ``ec2_tag_KEY``.
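These variables can be used like any other variables in plays and templates. As a small sketch (the tag group name is illustrative)::

    ---
    - hosts: tag_Name_Web
      tasks:
        - name: report where each instance is placed
          command: echo "instance {{ ec2_id }} is in {{ ec2_placement }}"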
To see the complete list of variables available for an instance, run the script by itself::
cd plugins/inventory
./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com
Example: OpenStack Inventory Script
```````````````````````````````````
Though not detailed here in as much depth as the EC2 script, there's also an OpenStack Compute external inventory source in the plugins directory. It requires the Grizzly release of OpenStack or
later. See the inline comments in the module source for how to use it.
Callback Plugins
----------------
Ansible can be configured via code to respond to external events. This can include enhancing logging, signalling an external software
system, or even (yes, really) making sound effects. Some examples are contained in the plugins directory.
This section explores these features, though they are generally not something most users will need to extend.
Connection Type Plugins
-----------------------
By default, ansible ships with a 'paramiko' SSH, native ssh (just called 'ssh'), and 'local' connection type, and an accelerated connection type named 'fireball' -- there are also some minor players like 'chroot' and 'jail'. All of these can be used
in playbooks and with /usr/bin/ansible to decide how you want to talk to remote machines. The basics of these connection types
are covered in the 'getting started' section. Should you want to extend Ansible to support other transports (SNMP? Message bus?
Carrier Pigeon?) it's as simple as copying the format of one of the existing modules and dropping it into the connection plugins
directory. The value of 'smart' for a connection allows selection of paramiko or OpenSSH based on system capabilities, choosing
'ssh' if OpenSSH supports ControlPersist, in Ansible 1.2.1 and later. Previous versions did not support 'smart'.
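As a quick illustration, the connection type can be chosen on the command line with ``-c`` (or per-play with the ``connection`` keyword)::

    # force the paramiko transport for an ad-hoc command
    ansible webservers -m ping -c paramiko

    # or explicitly request native ssh
    ansible webservers -m ping -c ssh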
More documentation on writing connection plugins is pending, though you can jump into lib/ansible/runner/connection_plugins and figure
things out pretty easily.
Lookup Plugins
--------------
Language constructs like "with_fileglob" and "with_items" are implemented via lookup plugins. Just like other plugin types, you can write your own.
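For instance, the fileglob lookup behind "with_fileglob" iterates over files on the control machine; a typical (illustrative) use looks like this::

    tasks:
      - name: copy each public key found locally up to the server
        copy: src={{ item }} dest=/etc/sshkeys/ owner=root mode=0600
        with_fileglob:
          - /playbooks/files/keys/*.pub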
More documentation on writing lookup plugins is pending, though you can jump into lib/ansible/runner/lookup_plugins and figure
things out pretty easily.
Vars Plugins
------------
Playbook constructs like 'host_vars' and 'group_vars' work via 'vars' plugins. They inject additional variable
data into ansible runs that did not come from an inventory, playbook, or command line. Note that variables
can also be returned from inventory, so in most cases, you won't need to write or understand vars_plugins.
More documentation on writing vars plugins is pending, though you can jump into lib/ansible/inventory/vars_plugins and figure
things out pretty easily.
If you find yourself wanting to write a vars_plugin, it's more likely you should write an inventory script instead.
Filter Plugins
--------------
If you want more Jinja2 filters available in a Jinja2 template (filters like to_yaml and to_json are provided by default), they can be extended by writing a filter plugin. Most of the time, when someone comes up with an idea for a new filter they would like to make available in a playbook, we'll just include them in 'core.py' instead.
Jump into lib/ansible/runner/filter_plugins/ for details.
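As a sketch of the shape such a plugin takes (the filter name, function, and file location are illustrative; compare with the bundled plugins for the authoritative pattern), a filter plugin is a class exposing a ``filters()`` dictionary::

    # filter_plugins/custom_filters.py (illustrative location)

    def reverse_string(value):
        '''Trivial example filter: reverse a string.'''
        return value[::-1]

    class FilterModule(object):
        '''Make the filters below available to Jinja2 templates.'''

        def filters(self):
            return {'reverse': reverse_string}

A template could then use ``{{ inventory_hostname | reverse }}``.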
Callbacks
---------
Callbacks are one of the more interesting plugin types. Additional callback plugins allow Ansible to respond to events with new behaviors.
Examples
++++++++
Example callbacks are shown `in github in the callbacks directory <https://github.com/ansible/ansible/tree/devel/plugins/callbacks>`_.
The 'log_plays' callback is an example of how to record playbook events to a log file, and the 'mail' callback sends email
when playbooks complete.
The 'osx_say' callback provided is particularly entertaining -- it will respond with computer synthesized speech on OS X in relation
to playbook events, and is guaranteed to entertain and/or annoy coworkers.
Configuring
+++++++++++
To activate a callback, drop it in a callback directory as configured in ansible.cfg.
Development
+++++++++++
More information will come later; for now, see the source of any of the existing callbacks and you should be able to get started quickly.
They are reasonably self-explanatory.
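For a sense of the shape, a minimal callback is just a class with methods named after the events it cares about (the log path below is an arbitrary example; see the bundled callbacks for the full set of event methods)::

    # log_ok_results.py -- drop into a configured callback directory
    class CallbackModule(object):
        '''Append a line to a log file for every task that returns OK.'''

        def runner_on_ok(self, host, res):
            with open('/tmp/ansible_ok.log', 'a') as f:
                f.write('%s: ok\n' % host)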
Distributing Plugins
--------------------
.. toctree::
   :maxdepth: 1

   developing_api
   developing_inventory
   developing_modules
   developing_plugins
   REST API <http://ansibleworks.com/ansibleworks-awx>
Miscellaneous
-------------
The contents of each variables file is a simple YAML dictionary.
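For illustration, such a file might contain entries like these (the names and values are arbitrary)::

    ---
    somevar: somevalue
    password: magic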
It's also possible to keep per-host and per-group variables in very
similar files, this is covered in :ref:`patterns`.
Prompting For Sensitive Data
````````````````````````````
You may wish to prompt the user for certain input, and can
do so with the similarly named 'vars_prompt' section. This has uses
beyond security, for instance, you may use the same playbook for all
software releases and would prompt for a particular release version
in a push-script::
---
- hosts: all
remote_user: root
vars:
from: "camelot"
vars_prompt:
name: "what is your name?"
quest: "what is your quest?"
favcolor: "what is your favorite color?"
There are full examples of both of these items in the github examples/playbooks directory.
If you have a variable that changes infrequently, it might make sense to
provide a default value that can be overridden. This can be accomplished using
the default argument::
vars_prompt:
- name: "release_version"
prompt: "Product release version"
default: "1.0"
An alternative form of vars_prompt allows for hiding input from the user, and may later support
some other options, but otherwise works equivalently::
vars_prompt:
- name: "some_password"
prompt: "Enter password"
private: yes
- name: "release_version"
prompt: "Product release version"
private: no
If `Passlib <http://pythonhosted.org/passlib/>`_ is installed, vars_prompt can also crypt the
entered value so you can use it, for instance, with the user module to define a password::
vars_prompt:
- name: "my_password2"
prompt: "Enter password2"
private: yes
encrypt: "md5_crypt"
confirm: yes
salt_size: 7
You can use any crypt scheme supported by 'Passlib':
- *des_crypt* - DES Crypt
- *bsdi_crypt* - BSDi Crypt
- *bigcrypt* - BigCrypt
- *crypt16* - Crypt16
- *md5_crypt* - MD5 Crypt
- *bcrypt* - BCrypt
- *sha1_crypt* - SHA-1 Crypt
- *sun_md5_crypt* - Sun MD5 Crypt
- *sha256_crypt* - SHA-256 Crypt
- *sha512_crypt* - SHA-512 Crypt
- *apr_md5_crypt* - Apache's MD5-Crypt variant
- *phpass* - PHPass Portable Hash
- *pbkdf2_digest* - Generic PBKDF2 Hashes
- *cta_pbkdf2_sha1* - Cryptacular's PBKDF2 hash
- *dlitz_pbkdf2_sha1* - Dwayne Litzenberger's PBKDF2 hash
- *scram* - SCRAM Hash
- *bsd_nthash* - FreeBSD's MCF-compatible nthash encoding
However, the only parameters accepted are 'salt' or 'salt_size'. You can use your own salt using
'salt', or have one generated automatically using 'salt_size'. If nothing is specified, a salt
of size 8 will be generated.
Passing Variables On The Command Line
`````````````````````````````````````
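Variables can also be supplied to a playbook at runtime with the ``--extra-vars`` (or ``-e``) option to `ansible-playbook`. As a quick sketch (the playbook name and the variables here are illustrative)::

    ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"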
Playbooks
=========
.. contents::
:depth: 2
Introduction
````````````
Playbooks are a completely different way to use ansible than in task execution mode, and are
particularly powerful. Simply put, playbooks are the basis for a really simple
configuration management and multi-machine deployment system,
unlike any that already exist, and one that is very well suited to deploying complex applications.
Playbooks can declare configurations, but they can also orchestrate steps of
any manual ordered process, even as different steps must bounce back and forth
between sets of machines in particular orders. They can launch tasks
synchronously or asynchronously.
While you might run the main /usr/bin/ansible program for ad-hoc
tasks, playbooks are more likely to be kept in source control and used
to push out your configuration or assure the configurations of your
remote systems are in spec.
Let's dive in and see how they work. As you go, you may wish to open
the `github examples directory <https://github.com/ansible/ansible/tree/devel/examples/playbooks>`_ in
another tab, so you can apply the theory to what things look like in practice.
There are also some full sets of playbooks illustrating a lot of these techniques in the
`ansible-examples repository <https://github.com/ansible/ansible-examples>`_.
There are also many jumping off points after you learn playbooks, so hop back to the documentation
index after you're done with this section.
Playbook Language Example
`````````````````````````
Playbooks are expressed in YAML format and have a minimum of syntax.
Each playbook is composed of one or more 'plays' in a list.
The goal of a play is to map a group of hosts to some well defined roles, represented by
things ansible calls tasks. At a basic level, a task is nothing more than a call
to an ansible module, which you should have learned about in earlier chapters.
By composing a playbook of multiple 'plays', it is possible to
orchestrate multi-machine deployments, running certain steps on all
machines in the webservers group, then certain steps on the database
server group, then more commands back on the webservers group, etc.
For starters, here's a playbook that contains just one play::
---
- hosts: webservers
vars:
http_port: 80
max_clients: 200
remote_user: root
tasks:
- name: ensure apache is at the latest version
yum: pkg=httpd state=latest
- name: write the apache config file
template: src=/srv/httpd.j2 dest=/etc/httpd.conf
notify:
- restart apache
- name: ensure apache is running
service: name=httpd state=started
handlers:
- name: restart apache
service: name=httpd state=restarted
Below, we'll break down what the various features of the playbook language are.
Basics
``````
Hosts and Users
+++++++++++++++
For each play in a playbook, you get to choose which machines in your infrastructure
to target and what remote user to complete the steps (called tasks) as.
The `hosts` line is a list of one or more groups or host patterns,
separated by colons, as described in the :ref:`patterns`
documentation. The `remote_user` is just the name of the user account::
---
- hosts: webservers
remote_user: root
.. Note::
The `remote_user` parameter was formerly called just `user`. It was renamed in Ansible 1.4 to make it more distinguishable from the `user` module (used to create users on remote systems).
Support for running things from sudo is also available::
---
- hosts: webservers
remote_user: yourname
sudo: yes
You can also use sudo on a particular task instead of the whole play::
---
- hosts: webservers
remote_user: yourname
tasks:
- service: name=nginx state=started
sudo: yes
You can also login as you, and then sudo to different users than root::
---
- hosts: webservers
remote_user: yourname
sudo: yes
sudo_user: postgres
If you need to specify a password to sudo, run `ansible-playbook` with ``--ask-sudo-pass`` (`-K`).
If you run a sudo playbook and the playbook seems to hang, it's probably stuck at the sudo prompt.
Just `Control-C` to kill it and run it again with `-K`.
.. important::
When using `sudo_user` to a user other than root, the module
arguments are briefly written into a random tempfile in /tmp.
These are deleted immediately after the command is executed. This
only occurs when sudoing from a user like 'bob' to 'timmy', not
when going from 'bob' to 'root', or logging in directly as 'bob' or
'root'. If it concerns you that this data is briefly readable
(not writable), avoid transferring unencrypted passwords with
`sudo_user` set. In other cases, '/tmp' is not used and this does
not come into play. Ansible also takes care to not log password
parameters.
Tasks list
++++++++++
Each play contains a list of tasks. Tasks are executed in order, one
at a time, against all machines matched by the host pattern,
before moving on to the next task. It is important to understand that, within a play,
all hosts are going to get the same task directives. It is the purpose of a play to map
a selection of hosts to tasks.
When running the playbook, which runs top to bottom, hosts with failed tasks are
taken out of the rotation for the entire playbook. If things fail, simply correct the playbook file and rerun.
The goal of each task is to execute a module, with very specific arguments.
Variables, as mentioned above, can be used in arguments to modules.
Modules are 'idempotent', meaning if you run them
again, they will make only the changes they must in order to bring the
system to the desired state. This makes it very safe to rerun
the same playbook multiple times. They won't change things
unless they have to change things.
The `command` and `shell` modules will typically rerun the same command again,
which is totally ok if the command is something like
'chmod' or 'setsebool', etc. Though there is a 'creates' flag available which can
be used to make these modules also idempotent.
Every task should have a `name`, which is included in the output from
running the playbook. This is output for humans, so it is
nice to have reasonably good descriptions of each task step. If the name
is not provided though, the string fed to 'action' will be used for
output.
Tasks can be declared using the legacy "action: module options" format, but
it is recommended that you use the more conventional "module: options" format.
This recommended format is used throughout the documentation, but you may
encounter the older format in some playbooks.
Here is what a basic task looks like, as with most modules,
the service module takes key=value arguments::
tasks:
- name: make sure apache is running
service: name=httpd state=running
The `command` and `shell` modules are the only modules that just take a list
of arguments and don't use the key=value form. This makes
them work just like you would expect. Simple::
tasks:
- name: disable selinux
command: /sbin/setenforce 0
The command and shell modules care about return codes, so if you have a command
whose successful exit code is not zero, you may wish to do this::
tasks:
- name: run this command and ignore the result
shell: /usr/bin/somecommand || /bin/true
Or this::
tasks:
- name: run this command and ignore the result
shell: /usr/bin/somecommand
ignore_errors: True
If the action line is getting too long for comfort you can break it on
a space and indent any continuation lines::
tasks:
- name: Copy ansible inventory file to client
copy: src=/etc/ansible/hosts dest=/etc/ansible/hosts
owner=root group=root mode=0644
Variables can be used in action lines. Suppose you defined
a variable called 'vhost' in the 'vars' section, you could do this::
tasks:
- name: create a virtual host file for {{ vhost }}
template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }}
Those same variables are usable in templates, which we'll get to later.
Now in a very basic playbook all the tasks will be listed directly in that play, though it will usually
make more sense to break up tasks using the 'include:' directive. We'll show that a bit later.
Action Shorthand
````````````````
.. versionadded:: 0.8
Ansible prefers listing modules like this in 0.8 and later::
template: src=templates/foo.j2 dest=/etc/foo.conf
You will notice in earlier versions, this was only available as::
action: template src=templates/foo.j2 dest=/etc/foo.conf
The old form continues to work in newer versions without any plan of deprecation.
Handlers: Running Operations On Change
``````````````````````````````````````
As we've mentioned, modules are written to be 'idempotent' and can relay when
they have made a change on the remote system. Playbooks recognize this and
have a basic event system that can be used to respond to change.
These 'notify' actions are triggered at the end of each block of tasks in a playbook, and will only be
triggered once even if notified by multiple different tasks.
For instance, multiple resources may indicate
that apache needs to be restarted because they have changed a config file,
but apache will only be bounced once to avoid unnecessary restarts.
Here's an example of restarting two services when the contents of a file
change, but only if the file changes::
- name: template configuration file
template: src=template.j2 dest=/etc/foo.conf
notify:
- restart memcached
- restart apache
The things listed in the 'notify' section of a task are called
handlers.
Handlers are lists of tasks, not really any different from regular
tasks, that are referenced by name. Handlers are what notifiers
notify. If nothing notifies a handler, it will not run. Regardless
of how many things notify a handler, it will run only once, after all
of the tasks complete in a particular play.
Here's an example handlers section::
handlers:
- name: restart memcached
service: name=memcached state=restarted
- name: restart apache
service: name=apache state=restarted
Handlers are best used to restart services and trigger reboots. You probably
won't need them for much else.
.. note::
Notify handlers are always run in the order written.
Roles are described later on. It's worthwhile to point out that handlers are
automatically processed between 'pre_tasks', 'roles', 'tasks', and 'post_tasks'
sections. If you ever want to flush all the handler commands immediately though,
in 1.2 and later, you can::
tasks:
- shell: some tasks go here
- meta: flush_handlers
- shell: some other tasks
In the above example any queued up handlers would be processed early when the 'meta'
statement was reached. This is a bit of a niche case but can come in handy from
time to time.
Executing A Playbook
````````````````````
Now that you've learned playbook syntax, how do you run a playbook? It's simple.
Let's run a playbook using a parallelism level of 10::
ansible-playbook playbook.yml -f 10
Tips and Tricks
```````````````
Look at the bottom of the playbook execution for a summary of the nodes that were targeted
and how they performed. General failures and fatal "unreachable" communication attempts are
kept separate in the counts.
If you ever want to see detailed output from successful modules as well as unsuccessful ones,
use the '--verbose' flag. This is available in Ansible 0.5 and later.
Ansible playbook output is vastly upgraded if the cowsay
package is installed. Try it!
To see what hosts would be affected by a playbook before you run it, you
can do this::
ansible-playbook playbook.yml --list-hosts
.. seealso::
:doc:`YAMLSyntax`
Learn about YAML syntax
:doc:`bestpractices`
Various tips about managing playbooks in the real world
:doc:`modules`
Learn about available modules
:doc:`moduledev`
Learn how to extend Ansible by writing your own modules
:doc:`patterns`
Learn about how to select hosts
`Github examples directory <https://github.com/ansible/ansible/tree/devel/examples/playbooks>`_
Complete playbook files from the github project source
`Mailing List <http://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
Asynchronous Actions and Polling
================================
By default tasks in playbooks block, meaning the connections stay open
until the task is done on each node. This may not always be desirable, or you may
be running operations that take longer than the SSH timeout.
The easiest way to avoid blocking is
to kick tasks off all at once and then poll until they are done.
You will also want to use asynchronous mode on very long running
operations that might be subject to timeout.
To launch a task asynchronously, specify its maximum runtime
and how frequently you would like to poll for status. The default
poll value is 10 seconds if you do not specify a value for `poll`::
---
- hosts: all
remote_user: root
tasks:
- name: simulate long running op (15 sec), wait for up to 45, poll every 5
command: /bin/sleep 15
async: 45
poll: 5
.. note::
There is no default for the async time limit. If you leave off the
'async' keyword, the task runs synchronously, which is Ansible's
default.
Alternatively, if you do not need to wait on the task to complete, you may
"fire and forget" by specifying a poll value of 0::
---
- hosts: all
remote_user: root
tasks:
- name: simulate long running op, allow to run for 45, fire and forget
command: /bin/sleep 15
async: 45
poll: 0
.. note::
You shouldn't "fire and forget" with operations that require
exclusive locks, such as yum transactions, if you expect to run other
commands later in the playbook against those same resources.
.. note::
Using a higher value for ``--forks`` will result in kicking off asynchronous
tasks even faster. This also increases the efficiency of polling.
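If you need to check back in on a fired-and-forgotten task later in the same play, one approach (a sketch; the ``async_status`` module ships with Ansible, and ``ansible_job_id`` is set on registered async results) is::

    ---
    - hosts: all
      remote_user: root
      tasks:
        - name: fire and forget a long running op
          command: /bin/sleep 15
          async: 45
          poll: 0
          register: sleeper

        - name: check on the job once, later in the play
          async_status: jid={{ sleeper.ansible_job_id }}
          register: job_result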
Conditionals
============
Often the result of a play may depend on the value of a variable, fact (something learned about the remote system),
or previous task result. In some cases, the values of variables may depend on other variables.
Further, additional groups can be created to manage hosts based on
whether the hosts match other criteria. There are many options to control execution flow in Ansible.
Let's dig into what they are.
.. contents::
:depth: 2
The When Statement
``````````````````
Sometimes you will want to skip a particular step on a particular host. This could be something
as simple as not installing a certain package if the operating system is a particular version,
or it could be something like performing some cleanup steps if a filesystem is getting full.
This is easy to do in Ansible, with the `when` clause, which contains a Jinja2 expression (see `playbooks_variables`).
It's actually pretty simple::
tasks:
- name: "shutdown Debian flavored systems"
command: /sbin/shutdown -t now
when: ansible_os_family == "Debian"
A number of Jinja2 "filters" can also be used in when statements, some of which are unique
and provided by Ansible. Suppose we want to ignore the error of one statement and then
decide to do something conditionally based on success or failure::
tasks:
- command: /bin/false
register: result
ignore_errors: True
- command: /bin/something
when: result|failed
- command: /bin/something_else
when: result|success
- command: /bin/still/something_else
when: result|skipped
Note that was a little bit of foreshadowing on the 'register' statement. We'll get to it a bit later in this chapter.
As a reminder, to see what facts are available on a particular system, you can do::
ansible hostname.example.com -m setup
Tip: Sometimes you'll get back a variable that's a string and you'll want to do a math operation comparison on it. You can do this like so::
tasks:
- shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_os_family == "RedHat" and ansible_lsb.major_release|int >= 6
.. note:: the above example requires the lsb_release package on the target host in order to return the ansible_lsb.major_release fact.
Variables defined in the playbooks or inventory can also be used. An example may be the execution of a task based on a variable's boolean value::
vars:
epic: true
Then a conditional execution might look like::
tasks:
- shell: echo "This certainly is epic!"
when: epic
or::
tasks:
- shell: echo "This certainly isn't epic!"
when: not epic
If a required variable has not been set, you can skip or fail using Jinja2's
`defined` test. For example::
tasks:
- shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
when: foo is defined
- fail: msg="Bailing out: this play requires 'bar'"
when: bar is not defined
This is especially useful in combination with the conditional import of vars
files (see below).
.. note:: When combining `when` with `with_items`, be aware that the `when` statement is processed separately for each item.
This is by design::
tasks:
- command: echo {{ item }}
with_items: [ 0, 2, 4, 6, 8, 10 ]
when: item > 5
Loading in Custom Facts
```````````````````````
It's also easy to provide your own facts if you want, which is covered in :doc:`moduledev`. To run them, just
make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned
there will be accessible to future tasks::
tasks:
- name: gather site specific fact data
action: site_facts
- command: /usr/bin/thingy
when: "{{ my_custom_fact_just_retrieved_from_the_remote_system }} == '1234'"
The Register Keyword
````````````````````
The 'register' keyword saves the result of a command in a variable. Use "-v" on the playbook command line to see
what kind of values are available, but there are many.
One useful trick with *when* is to key off the result of a last command. As an example::
tasks:
- template: src=/templates/foo.j2 dest=/etc/foo.conf
register: last_result
- command: echo 'the file has changed'
when: last_result.changed
{{ last_result }} is a variable set by the register directive. This assumes Ansible 0.8 and later.
Applying 'when' to roles and includes
`````````````````````````````````````
Note that if you have several tasks that all share the same conditional statement, you can affix the conditional
to a task include statement as below. Note this does not work with playbook includes, just task includes. All the tasks
get evaluated, but the conditional is applied to each and every task::
- include: tasks/sometasks.yml
when: "'reticulating splines' in output"
Or with a role::
- hosts: webservers
roles:
- { role: debian_stock_config, when: ansible_os_family == 'Debian' }
You will note a lot of 'skipped' output by default in Ansible when using this approach on systems that don't match the criteria.
Read up on the 'group_by' module in the `modules` docs for a more streamlined way to accomplish the same thing.
Conditional Imports
```````````````````
.. note:: This is an advanced topic that is infrequently used. You can probably skip this section.
Sometimes you will want to do certain things differently in a playbook based on certain criteria.
Having one playbook that works on multiple platforms and OS versions is a good example.
As an example, the name of the Apache package may be different between CentOS and Debian,
but it is easily handled with a minimum of syntax in an Ansible Playbook::
---
- hosts: all
remote_user: root
vars_files:
- "vars/common.yml"
- [ "vars/{{ ansible_os_family }}.yml", "vars/os_defaults.yml" ]
tasks:
- name: make sure apache is running
service: name={{ apache }} state=running
.. note::
The variable 'ansible_os_family' is being interpolated into
the list of filenames being defined for vars_files.
As a reminder, the various YAML files contain just keys and values::
---
# for vars/CentOS.yml
apache: httpd
somethingelse: 42
How does this work? If the operating system was 'CentOS', the first file Ansible would try to import
would be 'vars/CentOS.yml', followed by 'vars/os_defaults.yml' if that file
did not exist. If no files in the list were found, an error would be raised.
On Debian, it would first look for 'vars/Debian.yml' instead of 'vars/CentOS.yml', before
falling back on 'vars/os_defaults.yml'. Pretty simple.
To use this conditional import feature, you'll need facter or ohai installed prior to running the playbook, but
you can of course push this out with Ansible if you like::
# for facter
ansible -m yum -a "pkg=facter ensure=installed"
ansible -m yum -a "pkg=ruby-json ensure=installed"
# for ohai
ansible -m yum -a "pkg=ohai ensure=installed"
Ansible's approach to configuration -- separating variables from tasks, keeps your playbooks
from turning into arbitrary code with ugly nested ifs, conditionals, and so on - and results
in more streamlined & auditable configuration rules -- especially because there are a
minimum of decision points to track.
Selecting Files And Templates Based On Variables
````````````````````````````````````````````````
.. note:: This is an advanced topic that is infrequently used. You can probably skip this section.
Sometimes a configuration file you want to copy, or a template you will use may depend on a variable.
The following construct selects the first available file appropriate for the variables of a given host, which is often much cleaner than putting a lot of if conditionals in a template.
The following example shows how to template out a configuration file that was very different between, say, CentOS and Debian::
- name: template a file
template: src={{ item }} dest=/etc/myapp/foo.conf
with_first_found:
files:
- "{{ ansible_distribution }}.conf"
- default.conf
paths:
- search_location_one/somedir/
- /opt/other_location/somedir/
Register Variables
``````````````````
.. versionadded:: 0.7
Often in a playbook it may be useful to store the result of a given command in a variable and access
it later. Use of the command module in this way can in many ways eliminate the need to write site specific facts, for
instance, you could test for the existence of a particular program.
The 'register' keyword decides what variable to save a result in. The resulting variables can be used in templates, action lines, or *when* statements. It looks like this (in an obviously trivial example)::
- name: test play
hosts: all
tasks:
- shell: cat /etc/motd
register: motd_contents
- shell: echo "motd contains the word hi"
when: motd_contents.stdout.find('hi') != -1
As shown previously, the registered variable's string contents are accessible with the 'stdout' value.
The registered result can be used in the "with_items" of a task if it is converted into
a list (or already is a list) as shown below. "stdout_lines" is already available on the object as
well, though you could also call "home_dirs.stdout.split()" if you wanted, and could split by other
fields::
- name: registered variable usage as a with_items list
hosts: all
tasks:
- name: retrieve the list of home directories
command: ls /home
register: home_dirs
- name: add home dirs to the backup spooler
file: path=/mnt/bkspool/{{ item }} src=/home/{{ item }} state=link
with_items: home_dirs.stdout_lines
# same as with_items: home_dirs.stdout.split()