mirror of
https://github.com/ansible-collections/community.general.git
synced 2024-09-14 20:13:21 +02:00

Merge branch 'devel' of https://github.com/ansible/ansible into devel

Commit a62ac9f17e: 181 changed files with 2099 additions and 624 deletions
@@ -38,3 +38,6 @@ notifications:
     on_failure: always
     skip_join: true
     nick: ansibletravis
+  webhooks:
+    # trigger Buildtime Trend Service to parse Travis CI log
+    - https://buildtimetrend.herokuapp.com/travis
@@ -3,6 +3,14 @@ Ansible Changes By Release
 
 ## 2.2 TBD - ACTIVE DEVELOPMENT
 
+###Major Changes:
+
+* Added support for binary modules
+
+####New Modules:
+- aws
+  * ec2_customer_gateway
+
 ## 2.1 "The Song Remains the Same" - ACTIVE DEVELOPMENT
 
 ###Major Changes:
@@ -10,7 +10,7 @@ Released
 ++++++++
 
 2.1.0 "The Song Remains the Same"    in progress
-2.0.2 "Over the Hills and Far Away"  04-19-2015
+2.0.2 "Over the Hills and Far Away"  04-19-2016
 2.0.1 "Over the Hills and Far Away"  02-24-2016
 2.0.0 "Over the Hills and Far Away"  01-12-2016
 1.9.6 "Dancing In the Streets"       04-15-2016
ROADMAP.md (28 changes)
@@ -17,13 +17,13 @@ These roadmaps are the team's *best guess* roadmaps based on the Ansible team's
 ## Windows, General
 * Figuring out privilege escalation (runas w/ username/password)
 * Implement kerberos encryption over http
-* pywinrm conversion to requests (Some mess here on pywinrm/requests. will need docs etc.)
-* NTLM support
+* ~~pywinrm conversion to requests (Some mess here on pywinrm/requests. will need docs etc.)~~ DONE
+* ~~NTLM support~~ DONE
 
 ## Modules
 * Windows
-  * Finish cleaning up tests and support for post-beta release
-  * Strict mode cleanup (one module in core)
+  * ~~Finish cleaning up tests and support for post-beta release~~ DONE
+  * ~~Strict mode cleanup (one module in core)~~ DONE
   * Domain user/group management
   * Finish win\_host and win\_rm in the domain/workgroup modules.
     * Close 2 existing PRs (These were deemed insufficient)
@@ -42,16 +42,16 @@ These roadmaps are the team's *best guess* roadmaps based on the Ansible team's
 * VMware modules moved to official pyvmomi bindings
 * VMware inventory script updates for pyvmomi, adding tagging support
 * Azure (Notes: We've made progress here now that Microsoft has swapped out the code generator on the Azure Python SDK. We have basic modules working against all of these resources at this time. Could ship it against current SDK, but may break. Or should the version be pinned?)
-  * Minimal Azure coverage using new ARM api
-    * Resource Group
-    * Virtual Network
-    * Subnet
-    * Public IP
-    * Network Interface
-    * Storage Account
-    * Security Group
-    * Virtual Machine
-  * Update of inventory script to use new API, adding tagging support
+  * ~~Minimal Azure coverage using new ARM api~~ DONE
+    * ~~Resource Group~~ DONE
+    * ~~Virtual Network~~ DONE
+    * ~~Subnet~~ DONE
+    * ~~Public IP~~ DONE
+    * ~~Network Interface~~ DONE
+    * ~~Storage Account~~ DONE
+    * ~~Security Group~~ DONE
+    * ~~Virtual Machine~~ DONE
+  * ~~Update of inventory script to use new API, adding tagging support~~ DONE
 * Docker:
   * Start Docker module refactor
   * Update to match current docker CLI capabilities
@@ -5,6 +5,10 @@
 
 host = http://PATH_TO_COBBLER_SERVER/cobbler_api
 
+# If API needs authentication add 'username' and 'password' options here.
+#username = foo
+#password = bar
+
 # API calls to Cobbler can be slow. For this reason, we cache the results of an API
 # call. Set this to the path you want cache files to be written to. Two files
 # will be written to this directory:
@@ -120,6 +120,9 @@ class CobblerInventory(object):
     def _connect(self):
         if not self.conn:
             self.conn = xmlrpclib.Server(self.cobbler_host, allow_none=True)
+            self.token = None
+            if self.cobbler_username is not None:
+                self.token = self.conn.login(self.cobbler_username, self.cobbler_password)
 
     def is_cache_valid(self):
         """ Determines if the cache files have expired, or if it is still valid """
@@ -140,6 +143,12 @@ class CobblerInventory(object):
         config.read(os.path.dirname(os.path.realpath(__file__)) + '/cobbler.ini')
 
         self.cobbler_host = config.get('cobbler', 'host')
+        self.cobbler_username = None
+        self.cobbler_password = None
+        if config.has_option('cobbler', 'username'):
+            self.cobbler_username = config.get('cobbler', 'username')
+        if config.has_option('cobbler', 'password'):
+            self.cobbler_password = config.get('cobbler', 'password')
 
         # Cache related
         cache_path = config.get('cobbler', 'cache_path')
@@ -163,8 +172,10 @@ class CobblerInventory(object):
         self._connect()
         self.groups = dict()
         self.hosts = dict()
-        data = self.conn.get_systems()
+        if self.token is not None:
+            data = self.conn.get_systems(self.token)
+        else:
+            data = self.conn.get_systems()
 
         for host in data:
             # Get the FQDN for the host and add it to the right groups
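
Taken together, the three cobbler.py hunks above implement an optional-authentication flow: credentials are read from cobbler.ini, a token is obtained at connect time, and the token is passed to get_systems. A minimal standalone sketch of that flow (the server URL and credentials are placeholders, not values from this commit)::

    import xmlrpclib  # xmlrpc.client on Python 3

    conn = xmlrpclib.Server("http://cobbler.example.com/cobbler_api", allow_none=True)

    username, password = "foo", "bar"  # optional; may be absent from cobbler.ini
    token = None
    if username is not None:
        # login() returns a token that authenticated API calls expect
        token = conn.login(username, password)

    # Fall back to the unauthenticated call when no credentials are configured
    systems = conn.get_systems(token) if token is not None else conn.get_systems()
    for system in systems:
        print(system.get("hostname"))
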
@@ -84,9 +84,9 @@ to retrieve the kv_groups and kv_metadata based on your consul configuration.
 This is used to lookup groups for a node in the key value store. It specifies a
 path to which each discovered node's name will be added to create a key to query
 the key/value store. There it expects to find a comma separated list of group
-names to which the node should be added e.g. if the inventory contains
-'nyc-web-1' and kv_groups = 'ansible/groups' then the key
-'v1/kv/ansible/groups/nyc-web-1' will be queried for a group list. If this query
+names to which the node should be added e.g. if the inventory contains node
+'nyc-web-1' in datacenter 'nyc-dc1' and kv_groups = 'ansible/groups' then the key
+'ansible/groups/nyc-dc1/nyc-web-1' will be queried for a group list. If this query
 returned 'test,honeypot' then the node would be added to both groups.
 
 'kv_metadata':
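
As a concrete illustration of the lookup described above, the group key is just the configured kv_groups prefix joined with the datacenter and node name, and the stored value is split on commas. A rough sketch (``kv_store`` is a hypothetical key/value client, not part of the inventory script)::

    def groups_for_node(kv_groups, datacenter, node):
        """Build the consul KV key for a node and parse its group list."""
        key = '/'.join([kv_groups, datacenter, node])  # e.g. ansible/groups/nyc-dc1/nyc-web-1
        value = kv_store.get(key)                      # assumed KV lookup returning a string
        return value.split(',') if value else []

    # groups_for_node('ansible/groups', 'nyc-dc1', 'nyc-web-1') -> ['test', 'honeypot']
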
@@ -94,7 +94,9 @@ names to which the node should be added e.g. if the inventory contains
 kv_metadata is used to lookup metadata for each discovered node. Like kv_groups
 above it is used to build a path to lookup in the kv store where it expects to
 find a json dictionary of metadata entries. If found, each key/value pair in the
-dictionary is added to the metadata for the node.
+dictionary is added to the metadata for the node. e.g. for node 'nyc-web-1' in datacenter
+'nyc-dc1' and kv_metadata = 'ansible/metadata', the key
+'ansible/metadata/nyc-dc1/nyc-web-1' should contain '{"database": "postgres"}'
 
 'availability':
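
The kv_metadata lookup follows the same path-building rule, but expects a JSON dictionary whose entries are merged into the node's metadata. Roughly (again a hypothetical sketch reusing the assumed ``kv_store`` client, not the plugin's actual code)::

    import json

    def metadata_for_node(kv_metadata, datacenter, node):
        """Fetch and decode the per-node metadata dictionary, if any."""
        key = '/'.join([kv_metadata, datacenter, node])  # e.g. ansible/metadata/nyc-dc1/nyc-web-1
        value = kv_store.get(key)                        # assumed KV lookup returning a string
        return json.loads(value) if value else {}        # e.g. {"database": "postgres"}
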
@@ -45,3 +45,11 @@ gce_service_account_email_address =
 gce_service_account_pem_file_path =
 gce_project_id =
+
+[inventory]
+# The 'inventory_ip_type' parameter specifies whether 'ansible_ssh_host' should
+# contain the instance internal or external address. Values may be either
+# 'internal' or 'external'. If 'external' is specified but no external instance
+# address exists, the internal address will be used.
+# The INVENTORY_IP_TYPE environment variable will override this value.
+inventory_ip_type =
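
The override rule in the comment above is worth pinning down: the INVENTORY_IP_TYPE environment variable beats the ini value, and an empty setting means external addresses. A minimal sketch of that precedence (``config`` stands in for the script's SafeConfigParser)::

    import os

    ip_type = config.get('inventory', 'inventory_ip_type')  # '' when unset in gce.ini
    # Environment always wins over the configuration file
    ip_type = os.environ.get('INVENTORY_IP_TYPE', ip_type)
    use_internal = (ip_type.lower() == 'internal')           # anything else means external
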
@@ -69,7 +69,8 @@ Examples:
   $ contrib/inventory/gce.py --host my_instance
 
 Author: Eric Johnson <erjohnso@google.com>
-Version: 0.0.1
+Contributors: Matt Hite <mhite@hotmail.com>
+Version: 0.0.2
 '''
 
 __requires__ = ['pycrypto>=2.6']
@@ -83,7 +84,7 @@ except ImportError:
     pass
 
 USER_AGENT_PRODUCT="Ansible-gce_inventory_plugin"
-USER_AGENT_VERSION="v1"
+USER_AGENT_VERSION="v2"
 
 import sys
 import os
@@ -111,7 +112,11 @@ class GceInventory(object):
     def __init__(self):
         # Read settings and parse CLI arguments
         self.parse_cli_args()
+        self.config = self.get_config()
         self.driver = self.get_gce_driver()
+        self.ip_type = self.get_inventory_options()
+        if self.ip_type:
+            self.ip_type = self.ip_type.lower()
 
         # Just display data for specific host
         if self.args.host:
@@ -125,9 +130,13 @@ class GceInventory(object):
             pretty=self.args.pretty))
         sys.exit(0)
 
-    def get_gce_driver(self):
-        """Determine the GCE authorization settings and return a
-        libcloud driver.
+    def get_config(self):
+        """
+        Populates a SafeConfigParser object with defaults and
+        attempts to read an .ini-style configuration from the filename
+        specified in GCE_INI_PATH. If the environment variable is
+        not present, the filename defaults to gce.ini in the current
+        working directory.
         """
         gce_ini_default_path = os.path.join(
             os.path.dirname(os.path.realpath(__file__)), "gce.ini")
@@ -142,14 +151,32 @@ class GceInventory(object):
             'gce_service_account_pem_file_path': '',
             'gce_project_id': '',
             'libcloud_secrets': '',
+            'inventory_ip_type': '',
         })
         if 'gce' not in config.sections():
             config.add_section('gce')
-        config.read(gce_ini_path)
+        if 'inventory' not in config.sections():
+            config.add_section('inventory')
+
+        config.read(gce_ini_path)
+        return config
+
+    def get_inventory_options(self):
+        """Determine inventory options. Environment variables always
+        take precedence over configuration files."""
+        ip_type = self.config.get('inventory', 'inventory_ip_type')
+        # If the appropriate environment variables are set, they override
+        # other configuration
+        ip_type = os.environ.get('INVENTORY_IP_TYPE', ip_type)
+        return ip_type
+
+    def get_gce_driver(self):
+        """Determine the GCE authorization settings and return a
+        libcloud driver.
+        """
         # Attempt to get GCE params from a configuration file, if one
         # exists.
-        secrets_path = config.get('gce', 'libcloud_secrets')
+        secrets_path = self.config.get('gce', 'libcloud_secrets')
         secrets_found = False
         try:
             import secrets
@@ -175,10 +202,10 @@ class GceInventory(object):
             pass
         if not secrets_found:
             args = [
-                config.get('gce','gce_service_account_email_address'),
-                config.get('gce','gce_service_account_pem_file_path')
+                self.config.get('gce','gce_service_account_email_address'),
+                self.config.get('gce','gce_service_account_pem_file_path')
             ]
-            kwargs = {'project': config.get('gce', 'gce_project_id')}
+            kwargs = {'project': self.config.get('gce', 'gce_project_id')}
 
         # If the appropriate environment variables are set, they override
         # other configuration; process those into our args and kwargs.
@@ -218,6 +245,12 @@ class GceInventory(object):
                 md[entry['key']] = entry['value']
 
         net = inst.extra['networkInterfaces'][0]['network'].split('/')[-1]
+        # default to external IP unless user has specified they prefer internal
+        if self.ip_type == 'internal':
+            ssh_host = inst.private_ips[0]
+        else:
+            ssh_host = inst.public_ips[0] if len(inst.public_ips) >= 1 else inst.private_ips[0]
+
         return {
             'gce_uuid': inst.uuid,
             'gce_id': inst.id,

@@ -233,7 +266,7 @@ class GceInventory(object):
             'gce_metadata': md,
             'gce_network': net,
             # Hosts don't have a public name, so we add an IP
-            'ansible_ssh_host': inst.public_ips[0] if len(inst.public_ips) >= 1 else inst.private_ips[0]
+            'ansible_ssh_host': ssh_host
         }
 
     def get_instance(self, instance_name):
@@ -46,7 +46,7 @@ Ask for privilege escalation password.
 *-k*, *--ask-pass*::
 
 Prompt for the connection password, if it is needed for the transport used.
 For example, using ssh and not having a key-based authentication with ssh-agent.
 
 *--ask-su-pass*::
 
@@ -96,7 +96,7 @@ Level of parallelism. 'NUM' is specified as an integer, the default is 5.
 
 *-h*, *--help*::
 
-Show help page and exit
+Show help page and exit.
 
 *-i* 'PATH', *--inventory=*'PATH'::
 
@@ -128,7 +128,7 @@ environment variable.
 
 *--private-key=*'PRIVATE_KEY_FILE'::
 
-Use this file to authenticate the connection
+Use this file to authenticate the connection.
 
 *--start-at-task=*'START_AT'::
 
@@ -140,11 +140,11 @@ One-step-at-a-time: confirm each task before running.
 
 *-S*, --su*::
 
-Run operations with su (deprecated, use become)
+Run operations with su (deprecated, use become).
 
 *-R SU-USER*, *--su-user=*'SU_USER'::
 
-run operations with su as this user (default=root) (deprecated, use become)
+run operations with su as this user (default=root) (deprecated, use become).
 
 *-s*, *--sudo*::
 
@@ -178,7 +178,7 @@ Only run plays and tasks whose tags do not match these values.
 
 *--syntax-check*::
 
-Look for syntax errors in the playbook, but don't run anything
+Look for syntax errors in the playbook, but don't run anything.
 
 *-t*, 'TAGS', *--tags=*'TAGS'::
 
@@ -227,7 +227,7 @@ EXIT STATUS
 ENVIRONMENT
 -----------
 
-The following environment variables may be specified.
+The following environment variables may be specified:
 
 ANSIBLE_INVENTORY -- Override the default ansible inventory file
@@ -60,26 +60,56 @@ People
 ======
 Individuals who've been asked to become a part of this group have generally been contributing in significant ways to the Ansible community for some time. Should they agree, they are requested to add their names and GitHub IDs to this file, in the section below, via a pull request. Doing so indicates that these individuals agree to act in the ways that their fellow committers trust that they will act.
 
-* James Cammarata
-* Brian Coca
-* Matt Davis
-* Toshio Kuratomi
-* Jason McKerr
-* Robyn Bergeron
-* Greg DeKoenigsberg
-* Monty Taylor
-* Matt Martz
-* Nate Case
-* James Tanner
-* Peter Sprygada
-* Abhijit Menon-Sen
-* Michael Scherer
-* René Moser
-* David Shrewsbury
-* Sandra Wills
-* Graham Mainwaring
-* Jon Davila
-* Chris Houseknecht
-* Trond Hindenes
-* Jon Hawkesworth
-* Will Thames
++---------------------+----------------------+--------------------+----------------------+
+| Name                | Github ID            | IRC Nick           | Other                |
++=====================+======================+====================+======================+
+| James Cammarata     | jimi-c               | jimi               |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Brian Coca          | bcoca                | bcoca              | mdyson@cyberdyne.com |
++---------------------+----------------------+--------------------+----------------------+
+| Matt Davis          | nitzmahone           | nitzmahone         |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Toshio Kuratomi     | abadger              | abadger1999        |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Jason McKerr        | mckerrj              | newtMcKerr         |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Robyn Bergeron      | robynbergeron        | rbergeron          |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Greg DeKoenigsberg  | gregdek              | gregdek            |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Monty Taylor        | emonty               | mordred            |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Matt Martz          | sivel                | sivel              |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Nate Case           | qalthos              | Qalthos            |                      |
++---------------------+----------------------+--------------------+----------------------+
+| James Tanner        | jctanner             | jtanner            |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Peter Sprygada      | privateip            | privateip          |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Abhijit Menon-Sen   | amenonsen            | crab               |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Michael Scherer     | mscherer             | misc               |                      |
++---------------------+----------------------+--------------------+----------------------+
+| René Moser          | resmo                | resmo              |                      |
++---------------------+----------------------+--------------------+----------------------+
+| David Shrewsbury    | Shrews               | Shrews             |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Sandra Wills        | docschick            | docschick          |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Graham Mainwaring   | ghjm                 |                    |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Jon Davila          | defionscode          |                    |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Chris Houseknecht   | chouseknecht         |                    |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Trond Hindenes      | trondhindenes        |                    |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Jon Hawkesworth     | jhawkseworth         | jhawkseworth       |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Will Thames         | wilthames            | willthames         |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Ryan Brown          | ryansb               | ryansb             |                      |
++---------------------+----------------------+--------------------+----------------------+
+| Adrian Likins       | alikins              | alikins            |                      |
++---------------------+----------------------+--------------------+----------------------+
docsite/rst/developing_core.rst (new file, 24 lines)
@@ -0,0 +1,24 @@
+Developing the Ansible Core Engine
+==================================
+
+Although many of the pieces of the Ansible Core Engine are plugins that can be
+swapped out via playbook directives or configuration, there are still pieces
+of the Engine that are not modular. The documents here give insight into how
+those pieces work together.
+
+.. toctree::
+   :maxdepth: 1
+
+   developing_program_flow_modules
+
+.. seealso::
+
+   :doc:`developing_api`
+       Learn about the Python API for task execution
+   :doc:`developing_plugins`
+       Learn about developing plugins
+   `Mailing List <http://groups.google.com/group/ansible-devel>`_
+       The development mailing list
+   `irc.freenode.net <http://irc.freenode.net>`_
+       #ansible-devel IRC chat channel
@@ -48,8 +48,8 @@ the 'command' module could already be used to do this.
 
 Reading the modules that come with Ansible (linked above) is a great way to learn how to write
 modules. Keep in mind, though, that some modules in Ansible's source tree are internalisms,
-so look at :ref:`service` or :ref:`yum`, and don't stare too close into things like :ref:`async_wrapper` or
-you'll turn to stone. Nobody ever executes :ref:`async_wrapper` directly.
+so look at :ref:`service` or :ref:`yum`, and don't stare too close into things like ``async_wrapper`` or
+you'll turn to stone. Nobody ever executes ``async_wrapper`` directly.
 
 Ok, let's get going with an example. We'll use Python. For starters, save this as a file named :file:`timetest.py`::
@@ -204,6 +204,25 @@ This should return something like::
 
     {"changed": true, "time": "2012-03-14 12:23:00.000307"}
 
+.. _binary_module_reading_input:
+
+Binary Modules Input
+~~~~~~~~~~~~~~~~~~~~
+
+Support for binary modules was added in Ansible 2.2. When Ansible detects a binary module, it will proceed to
+supply the argument input as a file on ``argv[1]`` that is formatted as JSON. The JSON contents of that file
+would resemble something similar to the following payload for a module accepting the same arguments as the
+``ping`` module::
+
+    {
+        "data": "pong",
+        "_ansible_verbosity": 4,
+        "_ansible_diff": false,
+        "_ansible_debug": false,
+        "_ansible_check_mode": false,
+        "_ansible_no_log": false
+    }
+
 .. _module_provided_facts:
 
 Module Provided 'Facts'
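
Since a binary module receives its arguments as a JSON file named on ``argv[1]`` and reports its result as JSON on stdout, the equivalent skeleton looks roughly like this (a Python sketch for illustration only; a real binary module would be a compiled program, e.g. Go)::

    import json
    import sys

    def main():
        # Ansible passes the argument payload as a JSON-formatted file on argv[1]
        with open(sys.argv[1]) as f:
            args = json.load(f)

        # Internal '_ansible_*' keys carry runtime flags such as check mode
        check_mode = args.get('_ansible_check_mode', False)

        # A module reports its result as a single JSON document on stdout
        print(json.dumps({'changed': False, 'data': args.get('data', 'pong')}))

    if __name__ == '__main__':
        main()
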
@@ -538,11 +557,11 @@ When you look into the debug_dir you'll see a directory structure like this::
     that are passed to the module, this is the file to do it in.
 
 * The :file:`ansible` directory contains code from
-  :module:`ansible.module_utils` that is used by the module. Ansible includes
+  :mod:`ansible.module_utils` that is used by the module. Ansible includes
   files for any :mod:`ansible.module_utils` imports in the module but
   no files from any other module. So if your module uses
-  :module:`ansible.module_utils.url` Ansible will include it for you, but if
-  your module includes :module:`requests` then you'll have to make sure that
+  :mod:`ansible.module_utils.url` Ansible will include it for you, but if
+  your module includes :mod:`requests` then you'll have to make sure that
   the python requests library is installed on the system before running the
   module. You can modify files in this directory if you suspect that the
   module is having a problem in some of this boilerplate code rather than in
@@ -566,7 +585,7 @@ module file and test that the real module works via :command:`ansible` or
 The wrapper provides one more subcommand, ``excommunicate``. This
 subcommand is very similar to ``execute`` in that it invokes the exploded
 module on the arguments in the :file:`args`. The way it does this is
-different, however. ``excommunicate`` imports the :function:`main`
+different, however. ``excommunicate`` imports the :func:`main`
 function from the module and then calls that. This makes excommunicate
 execute the module in the wrapper's process. This may be useful for
 running the module under some graphical debuggers but it is very different
@@ -575,7 +594,7 @@ module file and test that the real module works via :command:`ansible` or
 with Ansible normally. Those are not bugs in the module; they're
 limitations of ``excommunicate``. Use at your own risk.
 
-.. _module_paths
+.. _module_paths:
 
 Module Paths
 ````````````
docsite/rst/developing_modules_python3.rst (new file, 150 lines)
@@ -0,0 +1,150 @@
+===========================
+Porting Modules to Python 3
+===========================
+
+Ansible modules are not the usual Python-3 porting exercise. There are two
+factors that make it harder to port them than most code:
+
+1. Many modules need to run on Python-2.4 in addition to Python-3.
+2. A lot of mocking has to go into unittesting a Python-3 module. So it's
+   harder to test that your porting has fixed everything or to make sure that
+   later commits haven't regressed.
+
+Which version of Python-3.x and which version of Python-2.x are our minimums?
+=============================================================================
+
+The short answer is Python-3.4 and Python-2.4 but please read on for more
+information.
+
+For Python-3 we are currently using Python-3.4 as a minimum. However, no long
+term supported Linux distributions currently ship with Python-3. When that
+occurs, we will probably take that as our minimum Python-3 version rather than
+Python-3.4. Thus far, Python-3 has been adding small changes that make it
+more compatible with Python-2 in its newer versions (For instance, Python-3.5
+added the ability to use percent-formatted byte strings.) so it should be more
+pleasant to use a newer version of Python-3 if it's available. At some point
+this will change but we'll just have to cross that bridge when we get to it.
+
+For Python-2 the default is for modules to run on Python-2.4. This allows
+users with older distributions that are stuck on Python-2.4 to manage their
+machines. Modules are allowed to drop support for Python-2.4 when one of
+their dependent libraries require a higher version of python. This is not an
+invitation to add unnecessary dependent libraries in order to force your
+module to be usable only with a newer version of Python. Instead it is an
+acknowledgment that some libraries (for instance, boto3 and docker-py) will
+only function with newer Python.
+
+.. note:: When will we drop support for Python-2.4?
+
+    The only long term supported distro that we know of with Python-2.4 is
+    RHEL5 (and its rebuilds like CentOS5) which is supported until April of
+    2017. We will likely end our support for Python-2.4 in modules in an
+    Ansible release around that time. We know of no long term supported
+    distributions with Python-2.5 so the new minimum Python-2 version will
+    likely be Python-2.6. This will let us take advantage of the
+    forwards-compat features of Python-2.6 so porting and maintenance of
+    Python-2/Python-3 code will be easier after that.
+
+Supporting only Python-2 or only Python-3
+=========================================
+
+Sometimes a module's dependent libraries only run on Python-2 or only run on
+Python-3. We do not yet have a strategy for these modules but we'll need to
+come up with one. I see three possibilities:
+
+1. We treat these libraries like any other libraries that may not be installed
+   on the system. When we import them we check if the import was successful.
+   If so, then we continue. If not we return an error about the library being
+   missing. Users will have to find out that the library is unavailable on
+   their version of Python either by searching for the library on their own or
+   reading the requirements section in :command:`ansible-doc`.
+
+2. The shebang line is the only metadata that Ansible extracts from a module
+   so we may end up using that to specify what we mean. Something like
+   ``#!/usr/bin/python`` means the module will run on both Python-2 and
+   Python-3, ``#!/usr/bin/python2`` means the module will only run on
+   Python-2, and ``#!/usr/bin/python3`` means the module will only run on
+   Python-3. Ansible's code will need to be modified to accommodate this.
+   For :command:`python2`, if ``ansible_python2_interpreter`` is not set, it
+   will have to fallback to ``ansible_python_interpreter`` and if that's not
+   set, fallback to ``/usr/bin/python``. For :command:`python3`, Ansible
+   will have to first try ``ansible_python3_interpreter`` and then fallback to
+   ``/usr/bin/python3`` as normal.
+
+3. We add a way for Ansible to retrieve metadata about modules. The metadata
+   will include the version of Python that is required.
+
+Methods 2 and 3 will both require that we modify modules or otherwise add this
+additional information somewhere.
+
+Tips, tricks, and idioms to adopt
+=================================
+
+Exceptions
+----------
+
+In code which already needs Python-2.6+ (For instance, because a library it
+depends on only runs on Python >= 2.6) it is okay to port directly to the new
+exception catching syntax::
+
+    try:
+        a = 2/0
+    except ValueError as e:
+        module.fail_json(msg="Tried to divide by zero!")
+
+For modules which also run on Python-2.4, we have to use an uglier
+construction to make this work under both Python-2.4 and Python-3::
+
+    from ansible.module_utils.pycompat import get_exception
+    [...]
+
+    try:
+        a = 2/0
+    except ValueError:
+        e = get_exception()
+        module.fail_json(msg="Tried to divide by zero!")
+
+Octal numbers
+-------------
+
+In Python-2.4, octal literals are specified as ``0755``. In Python-3, that is
+invalid and octals must be specified as ``0o755``. To bridge this gap,
+modules should create their octals like this::
+
+    # Can't use 0755 on Python-3 and can't use 0o755 on Python-2.4
+    EXECUTABLE_PERMS = int('0755', 8)
+
+Bundled six
+-----------
+
+The third-party python-six library exists to help projects create code that
+runs on both Python-2 and Python-3. Ansible includes version 1.4.1 in
+module_utils so that other modules can use it without requiring that it is
+installed on the remote system. To make use of it, import it like this::
+
+    from ansible.module_utils import six
+
+.. note:: Why version 1.4.1?
+
+    six-1.4.1 is the last version of python-six to support Python-2.4. As
+    long as Ansible modules need to run on Python-2.4 we won't be able to
+    update the bundled copy of six.
+
+Compile Test
+------------
+
+We have travis compiling all modules with various versions of Python to check
+that the modules conform to the syntax at those versions. When you've
+ported a module so that its syntax works with Python-3, we need to modify
+.travis.yml so that the module is included in the syntax check. Here's the
+relevant section of .travis.yml::
+
+    script:
+        [...]
+        - python3.4 -m compileall -fq system/ping.py
+        - python3.5 -m compileall -fq system/ping.py
+
+At the moment this is a whitelist. Just add your newly ported module to that
+line. Eventually, not compiling on Python-3 will be the exception. When that
+occurs, we will move to a blacklist for listing which modules do not compile
+under Python-3.
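
Putting the idioms from that new document together, a module body that must parse from Python-2.4 through Python-3 might look like this minimal sketch (``module`` stands in for an instantiated AnsibleModule; the helper names are illustrative, not from the commit)::

    import os

    from ansible.module_utils import six
    from ansible.module_utils.pycompat import get_exception

    # Can't use 0755 on Python-3 and can't use 0o755 on Python-2.4
    EXECUTABLE_PERMS = int('0755', 8)

    def make_executable(module, path):
        try:
            os.chmod(path, EXECUTABLE_PERMS)
        except OSError:
            e = get_exception()  # 2.4-safe equivalent of 'except OSError as e'
            module.fail_json(msg='chmod failed: %s' % e)

    def is_text(value):
        # six papers over the str/unicode vs str/bytes split
        return isinstance(value, six.string_types)
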
@@ -79,7 +79,7 @@ New-style powershell modules use the :ref:`module_replacer` framework for
 constructing modules. These modules get a library of powershell code embedded
 in them before being sent to the managed node.
 
-.. _flow_josnargs_modules:
+.. _flow_jsonargs_modules:
 
 JSONARGS
 ^^^^^^^^
@@ -325,7 +325,7 @@ string and substituted into the combined module file. In :ref:`ziploader`,
 the JSON-ified string is passed into the module via stdin. When
 a :class:`ansible.module_utils.basic.AnsibleModule` is instantiated,
 it parses this string and places the args into
-:attribute:`AnsibleModule.params` where it can be accessed by the module's
+:attr:`AnsibleModule.params` where it can be accessed by the module's
 other code.
 
 .. _flow_passing_module_constants:
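
For reference, module code reaches those parsed arguments through the ``params`` attribute; a minimal sketch of a module doing so::

    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(argument_spec=dict(
            name=dict(required=True),
            state=dict(default='present', choices=['present', 'absent']),
        ))
        # The JSON-ified argument string parsed at instantiation lands here
        name = module.params['name']
        module.exit_json(changed=False, name=name)

    if __name__ == '__main__':
        main()
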
@@ -357,7 +357,7 @@ For now, :code:`ANSIBLE_VERSION` is also available at its old location inside of
 :ref:`ziploader` passes these as part of the JSON-ified argument string via stdin.
 When
 :class:`ansible.module_utils.basic.AnsibleModule` is instantiated, it parses this
-string and places the constants into :attribute:`AnsibleModule.constants`
+string and places the constants into :attr:`AnsibleModule.constants`
 where other code can access it.
 
 Unlike the ``ANSIBLE_VERSION``, where some efforts were made to keep the old
@@ -329,7 +329,7 @@ be applied to single tasks only, once a playbook is completed.
 .. _interpolate_variables:
 
 When should I use {{ }}? Also, how to interpolate variables or dynamic variable names
-++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
 A steadfast rule is 'always use {{ }} except when `when:`'.
 Conditionals are always run through Jinja2 as to resolve the expression,
@@ -332,6 +332,7 @@ A sample azure_rm.ini file is included along with the inventory script in contri
 file will contain the following:
 
 .. code-block:: ini
+
     [azure]
     # Control which resource groups are included. By default all resources groups are included.
     # Set resource_groups to a comma separated list of resource groups names.
@@ -11,7 +11,7 @@ Introduction
 Ansible contains modules for managing Google Compute Engine resources, including creating instances, controlling network access, working with persistent disks, and managing
 load balancers. Additionally, there is an inventory plugin that can automatically suck down all of your GCE instances into Ansible dynamic inventory, and create groups by tag and other properties.
 
-The GCE modules all require the apache-libcloud module, which you can install from pip:
+The GCE modules all require the apache-libcloud module which you can install from pip:
 
 .. code-block:: bash
@@ -22,16 +22,19 @@ The GCE modules all require the apache-libcloud module, which you can install fr
 Credentials
 -----------
 
-To work with the GCE modules, you'll first need to get some credentials. You can create new one from the `console <https://console.developers.google.com/>`_ by going to the "APIs and Auth" section and choosing to create a new client ID for a service account. Once you've created a new client ID and downloaded (you must click **Generate new P12 Key**) the generated private key (in the `pkcs12 format <http://en.wikipedia.org/wiki/PKCS_12>`_), you'll need to convert the key by running the following command:
+To work with the GCE modules, you'll first need to get some credentials in the
+JSON format:
 
-.. code-block:: bash
+1. `Create a Service Account <https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount>`_
+2. `Download JSON credentials <https://support.google.com/cloud/answer/6158849?hl=en&ref_topic=6262490#serviceaccounts>`_
 
-    $ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out pkey.pem
+There are three different ways to provide credentials to Ansible so that it can talk with Google Cloud for provisioning and configuration actions:
 
-There are two different ways to provide credentials to Ansible so that it can talk with Google Cloud for provisioning and configuration actions:
+.. note:: If you would like to use JSON credentials you must have libcloud >= 0.17.0
 
 * by providing to the modules directly
 * by populating a ``secrets.py`` file
+* by setting environment variables
 
 Calling Modules By Passing Credentials
 ``````````````````````````````````````
@@ -39,7 +42,7 @@ Calling Modules By Passing Credentials
 For the GCE modules you can specify the credentials as arguments:
 
 * ``service_account_email``: email associated with the project
-* ``pem_file``: path to the pem file
+* ``credentials_file``: path to the JSON credentials file
 * ``project_id``: id of the project
 
 For example, to create a new instance using the cloud module, you can use the following configuration:
@@ -48,12 +51,12 @@ For example, to create a new instance using the cloud module, you can use the fo
 
     - name: Create instance(s)
       hosts: localhost
      connection: local
       gather_facts: no
 
       vars:
         service_account_email: unique-id@developer.gserviceaccount.com
-        pem_file: /path/to/project.pem
+        credentials_file: /path/to/project.json
         project_id: project-id
         machine_type: n1-standard-1
         image: debian-7
@@ -61,28 +64,50 @@ For example, to create a new instance using the cloud module, you can use the fo
       tasks:
 
        - name: Launch instances
         gce:
             instance_names: dev
             machine_type: "{{ machine_type }}"
             image: "{{ image }}"
             service_account_email: "{{ service_account_email }}"
-            pem_file: "{{ pem_file }}"
+            credentials_file: "{{ credentials_file }}"
             project_id: "{{ project_id }}"
 
-Calling Modules with secrets.py
-```````````````````````````````
+When running Ansible inside a GCE VM you can use the service account credentials from the local metadata server by
+setting both ``service_account_email`` and ``credentials_file`` to a blank string.
+
+Configuring Modules with secrets.py
+```````````````````````````````````
 
 Create a file ``secrets.py`` looking like the following, and put it in some folder which is in your ``$PYTHONPATH``:
 
 .. code-block:: python
 
-    GCE_PARAMS = ('i...@project.googleusercontent.com', '/path/to/project.pem')
+    GCE_PARAMS = ('i...@project.googleusercontent.com', '/path/to/project.json')
     GCE_KEYWORD_PARAMS = {'project': 'project_id'}
 
 Be sure to enter the email address from the created service account and not the one from your main account.
 
 Now the modules can be used as above, but the account information can be omitted.
 
+If you are running Ansible from inside a GCE VM with an authorized service account you can set the email address and
+credentials path as follows so that they get automatically picked up:
+
+.. code-block:: python
+
+    GCE_PARAMS = ('', '')
+    GCE_KEYWORD_PARAMS = {'project': 'project_id'}
+
+Configuring Modules with Environment Variables
+``````````````````````````````````````````````
+
+Set the following environment variables before running Ansible in order to configure your credentials:
+
+.. code-block:: bash
+
+    GCE_EMAIL
+    GCE_PROJECT
+    GCE_CREDENTIALS_FILE_PATH
+
 GCE Dynamic Inventory
 ---------------------
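
How those three variables map onto driver arguments is not spelled out on this page; a plausible sketch of the consuming side (only the three environment variable names come from the doc, everything else is illustrative)::

    import os

    service_account_email = os.environ.get('GCE_EMAIL', '')
    project_id = os.environ.get('GCE_PROJECT', '')
    credentials_file = os.environ.get('GCE_CREDENTIALS_FILE_PATH', '')

    # Blank email/credentials mean "use the GCE VM's metadata-server service account"
    args = (service_account_email, credentials_file)
    kwargs = {'project': project_id}
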
@@ -171,7 +196,7 @@ A playbook would look like this:
       machine_type: n1-standard-1 # default
       image: debian-7
       service_account_email: unique-id@developer.gserviceaccount.com
-      pem_file: /path/to/project.pem
+      credentials_file: /path/to/project.json
       project_id: project-id
 
     tasks:
@@ -181,7 +206,7 @@ A playbook would look like this:
        machine_type: "{{ machine_type }}"
        image: "{{ image }}"
        service_account_email: "{{ service_account_email }}"
-       pem_file: "{{ pem_file }}"
+       credentials_file: "{{ credentials_file }}"
        project_id: "{{ project_id }}"
        tags: webserver
       register: gce
|
||||||
machine_type: n1-standard-1 # default
|
machine_type: n1-standard-1 # default
|
||||||
image: debian-7
|
image: debian-7
|
||||||
service_account_email: unique-id@developer.gserviceaccount.com
|
service_account_email: unique-id@developer.gserviceaccount.com
|
||||||
pem_file: /path/to/project.pem
|
credentials_file: /path/to/project.json
|
||||||
project_id: project-id
|
project_id: project-id
|
||||||
|
|
||||||
roles:
|
roles:
|
||||||
|
@@ -238,13 +263,12 @@ a basic example of what is possible::
       args:
         fwname: "all-http"
         name: "default"
         allowed: "tcp:80"
         state: "present"
         service_account_email: "{{ service_account_email }}"
-        pem_file: "{{ pem_file }}"
+        credentials_file: "{{ credentials_file }}"
         project_id: "{{ project_id }}"
 
 By pointing your browser to the IP of the server, you should see a page welcoming you.
 
 Upgrades to this documentation are welcome, hit the github link at the top right of this page if you would like to make additions!
@@ -156,7 +156,7 @@ to the next section.
 Host Inventory
 ``````````````
 
-Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle his is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up Ansible via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, etc. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
+Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, etc. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
 
 In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
@@ -34,6 +34,28 @@ To tie Ansible's inventory to Cobbler (optional), copy `this script <https://raw
 to be running when you are using Ansible and you'll need to use Ansible's ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``).
 This particular script will communicate with Cobbler using Cobbler's XMLRPC API.
 
+A cobbler.ini file should also be added to /etc/ansible so Ansible knows where the Cobbler server is and so that some cache improvements can be used. For example::
+
+    [cobbler]
+
+    # Set Cobbler's hostname or IP address
+    host = http://127.0.0.1/cobbler_api
+
+    # API calls to Cobbler can be slow. For this reason, we cache the results of an API
+    # call. Set this to the path you want cache files to be written to. Two files
+    # will be written to this directory:
+    #   - ansible-cobbler.cache
+    #   - ansible-cobbler.index
+
+    cache_path = /tmp
+
+    # The number of seconds a cache file is considered valid. After this many
+    # seconds, a new API call will be made, and the cache file will be updated.
+
+    cache_max_age = 900
+
 First test the script by running ``/etc/ansible/cobbler.py`` directly. You should see some JSON data output, but it may not have anything in it just yet.
 
 Let's explore what this does. In cobbler, assume a scenario somewhat like the following::
@@ -111,7 +133,7 @@ If you use boto profiles to manage multiple AWS accounts, you can pass ``--profi
     aws_access_key_id = <prod access key>
     aws_secret_access_key = <prod secret key>
 
-You can then run ``ec2.py --profile prod`` to get the inventory for the prod account, this option is not supported by ``anisble-playbook`` though.
+You can then run ``ec2.py --profile prod`` to get the inventory for the prod account; this option is not supported by ``ansible-playbook`` though.
 But you can use the ``AWS_PROFILE`` variable - e.g. ``AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml``
 
 Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ``ec2.ini`` and list only the regions you are interested in. There are other config options in ``ec2.ini`` including cache control, and destination variables.
@@ -231,13 +253,13 @@ Source an OpenStack RC file::
 
 .. note::
 
-    An OpenStack RC file contains the environment variables required by the client tools to establish a connection with the cloud provider, such as the authentication URL, user name, password and region name. For more information on how to download, create or source an OpenStack RC file, please refer to http://docs.openstack.org/cli-reference/content/cli_openrc.html.
+    An OpenStack RC file contains the environment variables required by the client tools to establish a connection with the cloud provider, such as the authentication URL, user name, password and region name. For more information on how to download, create or source an OpenStack RC file, please refer to `Set environment variables using the OpenStack RC file <http://docs.openstack.org/cli-reference/common/cli_set_environment_variables_using_openstack_rc.html>`_.
 
 You can confirm the file has been successfully sourced by running a simple command, such as `nova list` and ensuring it returns no errors.
 
 .. note::
 
-    The OpenStack command line clients are required to run the `nova list` command. For more information on how to install them, please refer to http://docs.openstack.org/cli-reference/content/install_clients.html.
+    The OpenStack command line clients are required to run the `nova list` command. For more information on how to install them, please refer to `Install the OpenStack command-line clients <http://docs.openstack.org/cli-reference/common/cli_install_openstack_command_line_clients.html>`_.
 
 You can test the OpenStack dynamic inventory script manually to confirm it is working as expected::
@@ -203,7 +203,7 @@ As alluded to above, setting the following variables controls how ansible intera
 Host connection:
 
 ansible_connection
-    Connection type to the host. This can be the name of any of ansible's connection plugins. SSH protocol types are smart, ssh or paramiko. The default is smart. Non-SSH based types are described in the next section.
+    Connection type to the host. This can be the name of any of ansible's connection plugins. SSH protocol types are ``smart``, ``ssh`` or ``paramiko``. The default is smart. Non-SSH based types are described in the next section.
 
 .. include:: ansible_ssh_changes_note.rst
@@ -300,7 +300,7 @@ ansible_become
 ansible_docker_extra_args
     Could be a string with any additional arguments understood by Docker, which are not command specific. This parameter is mainly used to configure a remote Docker daemon to use.
 
-Here an example of how to instantly depoloy to created containers::
+Here is an example of how to instantly deploy to created containers::
 
     - name: create jenkins container
       docker:
@@ -44,7 +44,7 @@ Installing python-kerberos dependencies
     yum -y install python-devel krb5-devel krb5-libs krb5-workstation
 
     # Via Apt (Ubuntu)
-    sudo apt-get install python-dev libkrb5-dev
+    sudo apt-get install python-dev libkrb5-dev krb5-user
 
     # Via Portage (Gentoo)
     emerge -av app-crypt/mit-krb5
@@ -556,7 +556,7 @@ Ansible by default sets the loop variable `item` for each loop, which causes the
 As of Ansible 2.1, the `loop_control` option can be used to specify the name of the variable to be used for the loop::
 
     # main.yml
-    - include: test.yml outer_loop="{{ outer_item }}"
+    - include: inner.yml
       with_items:
         - 1
         - 2
@@ -565,7 +565,7 @@ As of Ansible 2.1, the `loop_control` option can be used to specify the name of
         loop_var: outer_item
 
     # inner.yml
-    - debug: msg="outer item={{ outer_loop }} inner item={{ item }}"
+    - debug: msg="outer item={{ outer_item }} inner item={{ item }}"
       with_items:
         - a
         - b
@@ -583,7 +583,7 @@ Because `loop_control` is not available in Ansible 2.0, when using an include wi
 for `item`::
 
     # main.yml
-    - include: test.yml
+    - include: inner.yml
       with_items:
        - 1
        - 2
@@ -289,6 +289,8 @@ def process_module(module, options, env, template, outputname, module_map, alias
             del doc['options'][k]['version_added']
         if not 'description' in doc['options'][k]:
             raise AnsibleError("Missing required description for option %s in %s " % (k, module))
+        if not 'required' in doc['options'][k]:
+            raise AnsibleError("Missing required 'required' for option %s in %s " % (k, module))
         if not isinstance(doc['options'][k]['description'],list):
             doc['options'][k]['description'] = [doc['options'][k]['description']]
@@ -476,7 +476,7 @@ class CLI(object):
                 display.display(text)
             else:
                 self.pager_pipe(text, os.environ['PAGER'])
-        elif subprocess.call('(less --version) 2> /dev/null', shell = True) == 0:
+        elif subprocess.call('(less --version) &> /dev/null', shell = True) == 0:
             self.pager_pipe(text, 'less')
         else:
             display.display(text)
@@ -219,7 +219,9 @@ class DocCLI(CLI):
             opt = doc['options'][o]
             desc = CLI.tty_ify(" ".join(opt['description']))
 
-            required = opt.get('required', False)
+            required = opt.get('required')
+            if required is None:
+                raise("Missing required field 'Required'")
             if not isinstance(required, bool):
                 raise("Incorrect value for 'Required', a boolean is needed.: %s" % required)
             if required:
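One caveat on the hunk above: ``raise("...")`` attempts to raise a plain string, which itself fails with a TypeError on modern Python. A hedged sketch of how such a validation step is conventionally written; this is illustrative only, not the committed code::

    # Sketch only -- illustrating the intended validation, not the committed code.
    class DocValidationError(Exception):
        """Raised when a module option is missing required documentation fields."""

    def validate_required_field(opt, option_name):
        required = opt.get('required')
        if required is None:
            raise DocValidationError("Missing 'required' field for option %s" % option_name)
        if not isinstance(required, bool):
            raise DocValidationError("'required' must be a boolean for option %s, got %r"
                                     % (option_name, required))
        return required

    # Example: validate_required_field({'required': True}, 'state') -> True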
@@ -388,12 +388,6 @@ def _get_shebang(interpreter, task_vars, args=tuple()):
 
     return (shebang, interpreter)
 
-def _get_facility(task_vars):
-    facility = C.DEFAULT_SYSLOG_FACILITY
-    if 'ansible_syslog_facility' in task_vars:
-        facility = task_vars['ansible_syslog_facility']
-    return facility
-
 def recursive_finder(name, data, py_module_names, py_module_cache, zf):
     """
     Using ModuleDepFinder, make sure we have all of the module_utils files that
@@ -490,6 +484,11 @@ def recursive_finder(name, data, py_module_names, py_module_cache, zf):
         # Save memory; the file won't have to be read again for this ansible module.
         del py_module_cache[py_module_file]
 
+def _is_binary(module_data):
+    textchars = bytearray(set([7, 8, 9, 10, 12, 13, 27]) | set(range(0x20, 0x100)) - set([0x7f]))
+    start = module_data[:1024]
+    return bool(start.translate(None, textchars))
+
 def _find_snippet_imports(module_name, module_data, module_path, module_args, task_vars, module_compression):
     """
     Given the source of the module, convert it to a Jinja2 template to insert
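The ``_is_binary`` heuristic added above deletes every byte it considers "texty" from the first 1024 bytes and declares the module binary if anything survives. A standalone sketch of the same idea, runnable on Python 3 (the sample inputs are made up)::

    # Sketch of the _is_binary heuristic above; sample data is made up.
    def is_binary(data):
        # Bytes we consider "text": BEL/BS/TAB/LF/FF/CR/ESC plus 0x20-0xff, minus DEL.
        textchars = bytearray(set([7, 8, 9, 10, 12, 13, 27]) | set(range(0x20, 0x100)) - set([0x7f]))
        # Delete all text bytes from the first 1024 bytes; anything left is "binary".
        return bool(data[:1024].translate(None, textchars))

    print(is_binary(b"#!/usr/bin/python\nprint('hi')\n"))  # False: pure text
    print(is_binary(b"\x7fELF\x02\x01\x01\x00"))           # True: ELF header bytes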
@@ -504,7 +503,9 @@ def _find_snippet_imports(module_name, module_data, module_path, module_args, ta
     # module_substyle is extra information that's useful internally.  It tells
     # us what we have to look to substitute in the module files and whether
     # we're using module replacer or ziploader to format the module itself.
-    if REPLACER in module_data:
+    if _is_binary(module_data):
+        module_substyle = module_style = 'binary'
+    elif REPLACER in module_data:
         # Do REPLACER before from ansible.module_utils because we need make sure
         # we substitute "from ansible.module_utils basic" for REPLACER
         module_style = 'new'
@@ -523,24 +524,16 @@ def _find_snippet_imports(module_name, module_data, module_path, module_args, ta
         module_substyle = module_style = 'non_native_want_json'
 
     shebang = None
-    # Neither old-style nor non_native_want_json modules should be modified
+    # Neither old-style, non_native_want_json nor binary modules should be modified
     # except for the shebang line (Done by modify_module)
-    if module_style in ('old', 'non_native_want_json'):
+    if module_style in ('old', 'non_native_want_json', 'binary'):
         return module_data, module_style, shebang
 
     output = BytesIO()
     py_module_names = set()
 
     if module_substyle == 'python':
-        # ziploader for new-style python classes
-        constants = dict(
-            SELINUX_SPECIAL_FS=C.DEFAULT_SELINUX_SPECIAL_FS,
-            SYSLOG_FACILITY=_get_facility(task_vars),
-            ANSIBLE_VERSION=__version__,
-        )
-        params = dict(ANSIBLE_MODULE_ARGS=module_args,
-                      ANSIBLE_MODULE_CONSTANTS=constants,
-                      )
+        params = dict(ANSIBLE_MODULE_ARGS=module_args,)
         python_repred_params = to_bytes(repr(json.dumps(params)), errors='strict')
 
         try:
@@ -690,7 +683,7 @@ def _find_snippet_imports(module_name, module_data, module_path, module_args, ta
         # The main event -- substitute the JSON args string into the module
         module_data = module_data.replace(REPLACER_JSONARGS, module_args_json)
 
-        facility = b'syslog.' + to_bytes(_get_facility(task_vars), errors='strict')
+        facility = b'syslog.' + to_bytes(task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY), errors='strict')
         module_data = module_data.replace(b'syslog.LOG_USER', facility)
 
         return (module_data, module_style, shebang)
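The replacement above happens at the byte level on the module's source: the literal ``syslog.LOG_USER`` is swapped for whatever facility the inventory supplied. A tiny sketch of that substitution (the module source and facility here are made-up examples)::

    # Sketch of the syslog-facility substitution above; the module source and
    # facility are made-up examples.
    module_data = b"import syslog\nsyslog.openlog('m', 0, syslog.LOG_USER)\n"
    facility = b'syslog.' + b'LOG_LOCAL3'  # e.g. from ansible_syslog_facility
    module_data = module_data.replace(b'syslog.LOG_USER', facility)
    print(module_data.decode())  # now calls syslog.openlog('m', 0, syslog.LOG_LOCAL3)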
@@ -731,7 +724,9 @@ def modify_module(module_name, module_path, module_args, task_vars=dict(), modul
 
     (module_data, module_style, shebang) = _find_snippet_imports(module_name, module_data, module_path, module_args, task_vars, module_compression)
 
-    if shebang is None:
+    if module_style == 'binary':
+        return (module_data, module_style, shebang)
+    elif shebang is None:
         lines = module_data.split(b"\n", 1)
         if lines[0].startswith(b"#!"):
             shebang = lines[0].strip()
@@ -295,10 +295,10 @@ class PlayIterator:
                 setup_block = self._blocks[0]
                 if setup_block.has_tasks() and len(setup_block.block) > 0:
                     task = setup_block.block[0]
                 if not peek:
                     # mark the host as having gathered facts, because we're
                     # returning the setup task to be executed
                     host.set_gathered_facts(True)
             else:
                 # This is the second trip through ITERATING_SETUP, so we clear
                 # the flag and move onto the next block in the list while setting
@@ -326,8 +326,7 @@ class PlayIterator:
                 if self._check_failed_state(state.tasks_child_state):
                     # failed child state, so clear it and move into the rescue portion
                     state.tasks_child_state = None
-                    state.fail_state |= self.FAILED_TASKS
-                    state.run_state = self.ITERATING_RESCUE
+                    self._set_failed_state(state)
                 else:
                     # get the next task recursively
                     if task is None or state.tasks_child_state.run_state == self.ITERATING_COMPLETE:
@@ -365,8 +364,7 @@ class PlayIterator:
                 (state.rescue_child_state, task) = self._get_next_task_from_state(state.rescue_child_state, host=host, peek=peek)
                 if self._check_failed_state(state.rescue_child_state):
                     state.rescue_child_state = None
-                    state.fail_state |= self.FAILED_RESCUE
-                    state.run_state = self.ITERATING_ALWAYS
+                    self._set_failed_state(state)
                 else:
                     if task is None or state.rescue_child_state.run_state == self.ITERATING_COMPLETE:
                         state.rescue_child_state = None
@@ -396,8 +394,7 @@ class PlayIterator:
                 (state.always_child_state, task) = self._get_next_task_from_state(state.always_child_state, host=host, peek=peek)
                 if self._check_failed_state(state.always_child_state):
                     state.always_child_state = None
-                    state.fail_state |= self.FAILED_ALWAYS
-                    state.run_state = self.ITERATING_COMPLETE
+                    self._set_failed_state(state)
                 else:
                     if task is None or state.always_child_state.run_state == self.ITERATING_COMPLETE:
                         state.always_child_state = None
@@ -466,7 +463,9 @@ class PlayIterator:
 
     def mark_host_failed(self, host):
         s = self.get_host_state(host)
+        display.debug("marking host %s failed, current state: %s" % (host, s))
         s = self._set_failed_state(s)
+        display.debug("^ failed state is now: %s" % s)
         self._host_states[host.name] = s
 
     def get_failed_hosts(self):
@@ -476,8 +475,7 @@ class PlayIterator:
         if state is None:
             return False
         elif state.fail_state != self.FAILED_NONE:
-            if state.run_state == self.ITERATING_RESCUE and state.fail_state&self.FAILED_RESCUE == 0 or \
-               state.run_state == self.ITERATING_ALWAYS and state.fail_state&self.FAILED_ALWAYS == 0:
+            if state.run_state == self.ITERATING_RESCUE and state.fail_state&self.FAILED_RESCUE == 0:
                 return False
             else:
                 return True
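The ``fail_state`` logic above treats failure as a bitmask with one bit per block section, so a failure in ``rescue`` can be distinguished from one in the main task list. A minimal sketch of the pattern; the concrete constant values are an assumption (simple powers of two), not taken from the source::

    # Sketch of the fail_state bitmask pattern used above; the concrete values
    # are assumed (powers of two), mirroring typical flag constants.
    FAILED_NONE, FAILED_SETUP, FAILED_TASKS, FAILED_RESCUE, FAILED_ALWAYS = 0, 1, 2, 4, 8

    fail_state = FAILED_NONE
    fail_state |= FAILED_TASKS        # a task in the block failed
    fail_state |= FAILED_RESCUE       # ...and then the rescue section failed too

    # Membership tests are bitwise ANDs, as in _check_failed_state above:
    print(bool(fail_state & FAILED_TASKS))   # True
    print(bool(fail_state & FAILED_ALWAYS))  # False: always section still clean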
@@ -232,7 +232,7 @@ class TaskExecutor:
             loop_var = self._task.loop_control.loop_var or 'item'
 
         if loop_var in task_vars:
-            raise AnsibleError("the loop variable '%s' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions" % loop_var)
+            display.warning("The loop variable '%s' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions and unexpected behavior." % loop_var)
 
         items = self._squash_items(items, loop_var, task_vars)
         for item in items:
@@ -269,59 +269,64 @@ class TaskExecutor:
         Squash items down to a comma-separated list for certain modules which support it
         (typically package management modules).
         '''
+        try:
             # _task.action could contain templatable strings (via action: and
             # local_action:) Template it before comparing.  If we don't end up
             # optimizing it here, the templatable string might use template vars
             # that aren't available until later (it could even use vars from the
             # with_items loop) so don't make the templated string permanent yet.
             templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=variables)
             task_action = self._task.action
             if templar._contains_vars(task_action):
                 task_action = templar.template(task_action, fail_on_undefined=False)
 
             if len(items) > 0 and task_action in self.SQUASH_ACTIONS:
                 if all(isinstance(o, string_types) for o in items):
                     final_items = []
 
                     name = None
                     for allowed in ['name', 'pkg', 'package']:
                         name = self._task.args.pop(allowed, None)
                         if name is not None:
                             break
 
                     # This gets the information to check whether the name field
                     # contains a template that we can squash for
                     template_no_item = template_with_item = None
                     if name:
                         if templar._contains_vars(name):
                             variables[loop_var] = '\0$'
                             template_no_item = templar.template(name, variables, cache=False)
                             variables[loop_var] = '\0@'
                             template_with_item = templar.template(name, variables, cache=False)
                             del variables[loop_var]
 
                     # Check if the user is doing some operation that doesn't take
                     # name/pkg or the name/pkg field doesn't have any variables
                     # and thus the items can't be squashed
                     if template_no_item != template_with_item:
                         for item in items:
                             variables[loop_var] = item
                             if self._task.evaluate_conditional(templar, variables):
                                 new_item = templar.template(name, cache=False)
                                 final_items.append(new_item)
                         self._task.args['name'] = final_items
                         # Wrap this in a list so that the calling function loop
                         # executes exactly once
                         return [final_items]
                     else:
                         # Restore the name parameter
                         self._task.args['name'] = name
             #elif:
                 # Right now we only optimize single entries.  In the future we
                 # could optimize more types:
                 # * lists can be squashed together
                 # * dicts could squash entries that match in all cases except the
                 #   name or pkg field.
+        except:
+            # Squashing is an optimization.  If it fails for any reason,
+            # simply use the unoptimized list of items.
+            pass
         return items
 
     def _execute(self, variables=None):
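The ``\0$`` / ``\0@`` sentinels above are a neat trick: the ``name`` template is rendered twice with two different throwaway values for the loop variable, and only if the two renderings differ does ``name`` really depend on the loop variable, making the items squashable. A standalone sketch using Jinja2 directly (the task data is made up)::

    # Sketch of the sentinel technique used by _squash_items above;
    # the template and items are made-up examples.
    from jinja2 import Template

    name_tmpl = "{{ item }}"          # e.g. a yum/apt task: name={{ item }}
    items = ["httpd", "mod_ssl"]

    def render(value):
        return Template(name_tmpl).render(item=value)

    depends_on_item = render("\0$") != render("\0@")

    if depends_on_item:
        # Squash: run the module once with the whole list.
        squashed = [[render(item) for item in items]]
        print(squashed)  # [['httpd', 'mod_ssl']]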
@@ -414,10 +419,10 @@ class TaskExecutor:
         self._task.args = dict((i[0], i[1]) for i in iteritems(self._task.args) if i[1] != omit_token)
 
         # Read some values from the task, so that we can modify them if need be
-        if self._task.until is not None:
+        if self._task.until:
             retries = self._task.retries
-            if retries <= 0:
-                retries = 1
+            if retries is None:
+                retries = 3
         else:
             retries = 1
@@ -431,7 +436,7 @@ class TaskExecutor:
 
         display.debug("starting attempt loop")
         result = None
-        for attempt in range(retries):
+        for attempt in range(1, retries + 1):
             display.debug("running the handler")
             try:
                 result = self._handler.run(task_vars=variables)
@@ -494,23 +499,23 @@ class TaskExecutor:
             _evaluate_changed_when_result(result)
             _evaluate_failed_when_result(result)
 
-            if attempt < retries - 1:
+            if retries > 1:
                 cond = Conditional(loader=self._loader)
                 cond.when = self._task.until
                 if cond.evaluate_conditional(templar, vars_copy):
                     break
                 else:
                     # no conditional check, or it failed, so sleep for the specified time
-                    result['attempts'] = attempt + 1
-                    result['retries'] = retries
-                    result['_ansible_retry'] = True
-                    display.debug('Retrying task, attempt %d of %d' % (attempt + 1, retries))
-                    self._rslt_q.put(TaskResult(self._host, self._task, result), block=False)
-                    time.sleep(delay)
+                    if attempt < retries:
+                        result['attempts'] = attempt
+                        result['_ansible_retry'] = True
+                        result['retries'] = retries
+                        display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
+                        self._rslt_q.put(TaskResult(self._host, self._task, result), block=False)
+                        time.sleep(delay)
         else:
             if retries > 1:
                 # we ran out of attempts, so mark the result as failed
-                result['attempts'] = retries
                 result['failed'] = True
 
         # do the final update of the local variables here, for both registered
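Together with the ``range(1, retries + 1)`` change earlier, the retry loop is now 1-based, so the attempt number reported to callbacks matches the human count and the final attempt is not followed by a sleep. A standalone sketch of the resulting shape (the condition and delay are made-up stand-ins)::

    # Sketch of the 1-based retry loop shape above; check() and DELAY are
    # made-up stand-ins for the task's `until` condition and `delay`.
    import time

    RETRIES, DELAY = 3, 0.1
    attempts_seen = []

    def check(result):
        return result >= 3  # pretend the task succeeds on the third try

    result = None
    for attempt in range(1, RETRIES + 1):
        result = attempt          # "run the handler"
        if check(result):
            break
        if attempt < RETRIES:
            attempts_seen.append(attempt)
            time.sleep(DELAY)     # wait before the next attempt
    else:
        print("failed after %d attempts" % RETRIES)

    print(result, attempts_seen)  # 3 [1, 2]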
@@ -595,14 +600,14 @@ class TaskExecutor:
             # since we're delegating, we don't want to use interpreter values
             # which would have been set for the original target host
             for i in variables.keys():
-                if i.startswith('ansible_') and i.endswith('_interpreter'):
+                if isinstance(i, string_types) and i.startswith('ansible_') and i.endswith('_interpreter'):
                     del variables[i]
             # now replace the interpreter values with those that may have come
             # from the delegated-to host
             delegated_vars = variables.get('ansible_delegated_vars', dict()).get(self._task.delegate_to, dict())
             if isinstance(delegated_vars, dict):
                 for i in delegated_vars:
-                    if i.startswith("ansible_") and i.endswith("_interpreter"):
+                    if isinstance(i, string_types) and i.startswith("ansible_") and i.endswith("_interpreter"):
                         variables[i] = delegated_vars[i]
 
         conn_type = self._play_context.connection
@@ -629,6 +634,8 @@ class TaskExecutor:
             raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
 
         if self._play_context.accelerate:
+            # accelerate is deprecated as of 2.1...
+            display.deprecated('Accelerated mode is deprecated. Consider using SSH with ControlPersist and pipelining enabled instead')
             # launch the accelerated daemon here
             ssh_connection = connection
             handler = self._shared_loader_obj.action_loader.get(
@@ -40,14 +40,16 @@ class TaskResult:
         return self._check_key('changed')
 
     def is_skipped(self):
+        # loop results
         if 'results' in self._result and self._task.loop:
-            flag = True
-            for res in self._result.get('results', []):
-                if isinstance(res, dict):
-                    flag &= res.get('skipped', False)
-            return flag
-        else:
-            return self._result.get('skipped', False)
+            results = self._result['results']
+            # Loop tasks are only considered skipped if all items were skipped.
+            # some squashed results (eg, yum) are not dicts and can't be skipped individually
+            if results and all(isinstance(res, dict) and res.get('skipped', False) for res in results):
+                return True
+
+        # regular tasks and squashed non-dict results
+        return self._result.get('skipped', False)
 
     def is_failed(self):
         if 'failed_when_result' in self._result or \
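The rewritten check fixes a real edge case: the old ``flag &= ...`` version started from ``True``, so an empty results list, or squashed non-dict results like yum's, could report the task as skipped. The ``all()`` form only returns True when every item is a dict that was actually skipped. A quick sketch contrasting the two (the result payloads are made up)::

    # Sketch contrasting the old and new skip checks; results are made up.
    def old_is_skipped(results):
        flag = True
        for res in results:
            if isinstance(res, dict):
                flag &= res.get('skipped', False)
        return flag

    def new_is_skipped(results):
        return bool(results) and all(isinstance(res, dict) and res.get('skipped', False)
                                     for res in results)

    squashed = ["installed httpd mod_ssl"]   # e.g. yum squashes items to one string
    print(old_is_skipped(squashed))          # True -- wrongly reported as skipped
    print(new_is_skipped(squashed))          # False -- non-dict result is not "skipped"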
@@ -204,7 +204,7 @@ class Inventory(object):
 
         # exclude hosts mentioned in any restriction (ex: failed hosts)
         if self._restriction is not None:
-            hosts = [ h for h in hosts if h in self._restriction ]
+            hosts = [ h for h in hosts if h.name in self._restriction ]
 
         seen = set()
         HOSTS_PATTERNS_CACHE[pattern_hash] = [x for x in hosts if x not in seen and not seen.add(x)]
@@ -600,7 +600,7 @@ class Inventory(object):
             return
         elif not isinstance(restriction, list):
             restriction = [ restriction ]
-        self._restriction = restriction
+        self._restriction = [ h.name for h in restriction ]
 
     def subset(self, subset_pattern):
         """
@@ -52,6 +52,7 @@ def get_file_parser(hostsfile, groups, loader):
     except:
         pass
 
+
     #FIXME: make this 'plugin loop'
     # script
     if loader.is_executable(hostsfile):
@@ -59,9 +60,9 @@ def get_file_parser(hostsfile, groups, loader):
             parser = InventoryScript(loader=loader, groups=groups, filename=hostsfile)
             processed = True
         except Exception as e:
-            myerr.append("The file %s is marked as executable, but failed to execute correctly. " % hostsfile + \
-                         "If this is not supposed to be an executable script, correct this with `chmod -x %s`." % hostsfile)
             myerr.append(str(e))
+    elif shebang_present:
+        myerr.append("The file %s looks like it should be an executable inventory script, but is not marked executable. Perhaps you want to correct this with `chmod +x %s`?" % (hostsfile, hostsfile))
 
     # YAML/JSON
     if not processed and os.path.splitext(hostsfile)[-1] in C.YAML_FILENAME_EXTENSIONS:
@@ -69,11 +70,7 @@ def get_file_parser(hostsfile, groups, loader):
             parser = InventoryYAMLParser(loader=loader, groups=groups, filename=hostsfile)
             processed = True
         except Exception as e:
-            if shebang_present and not loader.is_executable(hostsfile):
-                myerr.append("The file %s looks like it should be an executable inventory script, but is not marked executable. " % hostsfile + \
-                             "Perhaps you want to correct this with `chmod +x %s`?" % hostsfile)
-            else:
-                myerr.append(str(e))
+            myerr.append(str(e))
 
     # ini
     if not processed:
@@ -81,11 +78,7 @@ def get_file_parser(hostsfile, groups, loader):
             parser = InventoryINIParser(loader=loader, groups=groups, filename=hostsfile)
            processed = True
         except Exception as e:
-            if shebang_present and not loader.is_executable(hostsfile):
-                myerr.append("The file %s looks like it should be an executable inventory script, but is not marked executable. " % hostsfile + \
-                             "Perhaps you want to correct this with `chmod +x %s`?" % hostsfile)
-            else:
-                myerr.append(str(e))
+            myerr.append(str(e))
 
     if not processed and myerr:
         raise AnsibleError( '\n'.join(myerr) )
@@ -27,6 +27,7 @@ from ansible.inventory.group import Group
 from ansible.inventory.expand_hosts import detect_range
 from ansible.inventory.expand_hosts import expand_hostname_range
 from ansible.parsing.utils.addresses import parse_address
+from ansible.compat.six import string_types
 
 class InventoryParser(object):
     """
@@ -77,6 +78,11 @@ class InventoryParser(object):
                 self.groups[group] = Group(name=group)
 
             if isinstance(group_data, dict):
+                #make sure they are dicts
+                for section in ['vars', 'children', 'hosts']:
+                    if section in group_data and isinstance(group_data[section], string_types):
+                        group_data[section] = { group_data[section]: None}
+
                 if 'vars' in group_data:
                     for var in group_data['vars']:
                         if var != 'ansible_group_priority':
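The added loop normalizes YAML shorthand: a group section written as a bare string becomes a one-entry dict, so the rest of the parser only ever deals with dicts. A tiny sketch of that coercion (the sample group data is made up)::

    # Sketch of the string-to-dict normalization above; sample data is made up.
    group_data = {"hosts": "web1.example.com", "vars": {"http_port": 80}}

    for section in ["vars", "children", "hosts"]:
        if section in group_data and isinstance(group_data[section], str):
            # "hosts: web1.example.com" becomes {"web1.example.com": None}
            group_data[section] = {group_data[section]: None}

    print(group_data)  # {'hosts': {'web1.example.com': None}, 'vars': {'http_port': 80}}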
@@ -88,7 +88,7 @@ try:
     from azure.mgmt.compute.compute_management_client import ComputeManagementClient,\
                                                              ComputeManagementClientConfiguration
     from azure.storage.cloudstorageaccount import CloudStorageAccount
-except ImportError, exc:
+except ImportError as exc:
     HAS_AZURE_EXC = exc
     HAS_AZURE = False
@@ -323,7 +323,7 @@ class AzureRMModuleBase(object):
             return self.rm_client.resource_groups.get(resource_group)
         except CloudError:
             self.fail("Parameter error: resource group {0} not found".format(resource_group))
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Error retrieving resource group {0} - {1}".format(resource_group, str(exc)))
 
     def _get_profile(self, profile="default"):
@@ -331,7 +331,7 @@ class AzureRMModuleBase(object):
         try:
             config = ConfigParser.ConfigParser()
             config.read(path)
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Failed to access {0}. Check that the file exists and you have read "
                       "access. {1}".format(path, str(exc)))
         credentials = dict()
@@ -418,7 +418,7 @@ class AzureRMModuleBase(object):
                 self.log("Waiting for {0} sec".format(delay))
                 poller.wait(timeout=delay)
             return poller.result()
-        except Exception, exc:
+        except Exception as exc:
             self.log(str(exc))
             raise
 
@@ -465,13 +465,13 @@ class AzureRMModuleBase(object):
             account_keys = self.storage_client.storage_accounts.list_keys(resource_group_name, storage_account_name)
             keys['key1'] = account_keys.key1
             keys['key2'] = account_keys.key2
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Error getting keys for account {0} - {1}".format(storage_account_name, str(exc)))
 
         try:
             self.log('Create blob service')
             return CloudStorageAccount(storage_account_name, keys['key1']).create_block_blob_service()
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Error creating blob service client for storage account {0} - {1}".format(storage_account_name,
                                                                                                 str(exc)))
 
@@ -508,7 +508,7 @@ class AzureRMModuleBase(object):
         self.log('Creating default public IP {0}'.format(public_ip_name))
         try:
             poller = self.network_client.public_ip_addresses.create_or_update(resource_group, public_ip_name, params)
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Error creating {0} - {1}".format(public_ip_name, str(exc)))
 
         return self.get_poller_result(poller)
@@ -578,7 +578,7 @@ class AzureRMModuleBase(object):
             poller = self.network_client.network_security_groups.create_or_update(resource_group,
                                                                                   security_group_name,
                                                                                   parameters)
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Error creating default security rule {0} - {1}".format(security_group_name, str(exc)))
 
         return self.get_poller_result(poller)
@@ -589,7 +589,7 @@ class AzureRMModuleBase(object):
             # time we attempt to use the requested client.
             resource_client = self.rm_client
             resource_client.providers.register(key)
-        except Exception, exc:
+        except Exception as exc:
             self.fail("One-time registration of {0} failed - {1}".format(key, str(exc)))
 
     @property
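All of these hunks are the same mechanical fix: ``except Exception, exc`` is Python-2-only syntax, while ``except Exception as exc`` parses on Python 2.6+ and Python 3 alike. A minimal sketch of the portable import-guard idiom these files use (the module name is a made-up stand-in)::

    # Sketch of the portable exception syntax these hunks converge on.
    try:
        import module_that_does_not_exist  # hypothetical import used to trigger the error
    except ImportError as exc:
        # `as exc` works on Python 2.6+ and Python 3; `except ImportError, exc`
        # is a SyntaxError on Python 3.
        HAS_DEP = False
        DEP_ERROR = str(exc)
    else:
        HAS_DEP = True

    print(HAS_DEP)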
@@ -136,10 +136,10 @@ except ImportError:
     try:
         import simplejson as json
     except ImportError:
-        print('{"msg": "Error: ansible requires the stdlib json or simplejson module, neither was found!", "failed": true}')
+        print('\n{"msg": "Error: ansible requires the stdlib json or simplejson module, neither was found!", "failed": true}')
         sys.exit(1)
     except SyntaxError:
-        print('{"msg": "SyntaxError: probably due to installed simplejson being for a different python version", "failed": true}')
+        print('\n{"msg": "SyntaxError: probably due to installed simplejson being for a different python version", "failed": true}')
         sys.exit(1)
 
 HAVE_SELINUX=False
@@ -219,6 +219,9 @@ except ImportError:
 
 _literal_eval = literal_eval
 
+# Backwards compat.  These were present in basic.py before
+from ansible.module_utils.pycompat import get_exception
+
 # Internal global holding passed in params and constants.  This is consulted
 # in case multiple AnsibleModules are created.  Otherwise each AnsibleModule
 # would attempt to read from stdin.  Other code should not use this directly
@@ -253,21 +256,6 @@ EXEC_PERM_BITS = int('00111', 8) # execute permission bits
 DEFAULT_PERM = int('0666', 8)    # default file permission bits
 
-
-def get_exception():
-    """Get the current exception.
-
-    This code needs to work on Python 2.4 through 3.x, so we cannot use
-    "except Exception, e:" (SyntaxError on Python 3.x) nor
-    "except Exception as e:" (SyntaxError on Python 2.4-2.5).
-    Instead we must use ::
-
-        except Exception:
-            e = get_exception()
-
-    """
-    return sys.exc_info()[1]
-
 
 def get_platform():
     ''' what's the platform? example: Linux is a platform. '''
     return platform.system()
@@ -558,7 +546,7 @@ class AnsibleModule(object):
         self.run_command_environ_update = {}
 
         self.aliases = {}
-        self._legal_inputs = ['_ansible_check_mode', '_ansible_no_log', '_ansible_debug', '_ansible_diff', '_ansible_verbosity']
+        self._legal_inputs = ['_ansible_check_mode', '_ansible_no_log', '_ansible_debug', '_ansible_diff', '_ansible_verbosity', '_ansible_selinux_special_fs', '_ansible_version', '_ansible_syslog_facility']
 
         if add_file_common_args:
             for k, v in FILE_COMMON_ARGUMENTS.items():
@@ -574,7 +562,7 @@ class AnsibleModule(object):
         except Exception:
             e = get_exception()
             # Use exceptions here because it isn't safe to call fail_json until no_log is processed
-            print('{"failed": true, "msg": "Module alias error: %s"}' % str(e))
+            print('\n{"failed": true, "msg": "Module alias error: %s"}' % str(e))
             sys.exit(1)
 
         # Save parameter values that should never be logged
@@ -782,7 +770,7 @@ class AnsibleModule(object):
                 (device, mount_point, fstype, options, rest) = line.split(' ', 4)
 
                 if path_mount_point == mount_point:
-                    for fs in self.constants['SELINUX_SPECIAL_FS']:
+                    for fs in self._selinux_special_fs:
                         if fs in fstype:
                             special_context = self.selinux_context(path_mount_point)
                             return (True, special_context)
|
||||||
return aliases_results
|
return aliases_results
|
||||||
|
|
||||||
def _check_arguments(self, check_invalid_arguments):
|
def _check_arguments(self, check_invalid_arguments):
|
||||||
for (k,v) in self.params.items():
|
self._syslog_facility = 'LOG_USER'
|
||||||
|
for (k,v) in list(self.params.items()):
|
||||||
|
|
||||||
if k == '_ansible_check_mode' and v:
|
if k == '_ansible_check_mode' and v:
|
||||||
if not self.supports_check_mode:
|
if not self.supports_check_mode:
|
||||||
|
@ -1194,6 +1183,15 @@ class AnsibleModule(object):
|
||||||
elif k == '_ansible_verbosity':
|
elif k == '_ansible_verbosity':
|
||||||
self._verbosity = v
|
self._verbosity = v
|
||||||
|
|
||||||
|
elif k == '_ansible_selinux_special_fs':
|
||||||
|
self._selinux_special_fs = v
|
||||||
|
|
||||||
|
elif k == '_ansible_syslog_facility':
|
||||||
|
self._syslog_facility = v
|
||||||
|
|
||||||
|
elif k == '_ansible_version':
|
||||||
|
self.ansible_version = v
|
||||||
|
|
||||||
elif check_invalid_arguments and k not in self._legal_inputs:
|
elif check_invalid_arguments and k not in self._legal_inputs:
|
||||||
self.fail_json(msg="unsupported parameter for module: %s" % k)
|
self.fail_json(msg="unsupported parameter for module: %s" % k)
|
||||||
|
|
||||||
|
@ -1400,7 +1398,7 @@ class AnsibleModule(object):
|
||||||
# Return a jsonified string. Sometimes the controller turns a json
|
# Return a jsonified string. Sometimes the controller turns a json
|
||||||
# string into a dict/list so transform it back into json here
|
# string into a dict/list so transform it back into json here
|
||||||
if isinstance(value, (unicode, bytes)):
|
if isinstance(value, (unicode, bytes)):
|
||||||
return value
|
return value.strip()
|
||||||
else:
|
else:
|
||||||
if isinstance(value (list, tuple, dict)):
|
if isinstance(value (list, tuple, dict)):
|
||||||
return json.dumps(value)
|
return json.dumps(value)
|
||||||
|
@ -1497,7 +1495,7 @@ class AnsibleModule(object):
|
||||||
params = json.loads(buffer.decode('utf-8'))
|
params = json.loads(buffer.decode('utf-8'))
|
||||||
except ValueError:
|
except ValueError:
|
||||||
# This helper used too early for fail_json to work.
|
# This helper used too early for fail_json to work.
|
||||||
print('{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
|
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
|
|
||||||
if sys.version_info < (3,):
|
if sys.version_info < (3,):
|
||||||
|
@ -1505,16 +1503,15 @@ class AnsibleModule(object):
|
||||||
|
|
||||||
try:
|
try:
|
||||||
self.params = params['ANSIBLE_MODULE_ARGS']
|
self.params = params['ANSIBLE_MODULE_ARGS']
|
||||||
self.constants = params['ANSIBLE_MODULE_CONSTANTS']
|
|
||||||
except KeyError:
|
except KeyError:
|
||||||
# This helper used too early for fail_json to work.
|
# This helper used too early for fail_json to work.
|
||||||
print('{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS and ANSIBLE_MODULE_CONSTANTS in json data from stdin. Unable to figure out what parameters were passed", "failed": true}')
|
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS and ANSIBLE_MODULE_CONSTANTS in json data from stdin. Unable to figure out what parameters were passed", "failed": true}')
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
|
|
||||||
def _log_to_syslog(self, msg):
|
def _log_to_syslog(self, msg):
|
||||||
if HAS_SYSLOG:
|
if HAS_SYSLOG:
|
||||||
module = 'ansible-%s' % os.path.basename(__file__)
|
module = 'ansible-%s' % os.path.basename(__file__)
|
||||||
facility = getattr(syslog, self.constants.get('SYSLOG_FACILITY', 'LOG_USER'), syslog.LOG_USER)
|
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
|
||||||
syslog.openlog(str(module), 0, facility)
|
syslog.openlog(str(module), 0, facility)
|
||||||
syslog.syslog(syslog.LOG_INFO, msg)
|
syslog.syslog(syslog.LOG_INFO, msg)
|
||||||
|
|
||||||
|
@ -1700,7 +1697,7 @@ class AnsibleModule(object):
|
||||||
kwargs['invocation'] = {'module_args': self.params}
|
kwargs['invocation'] = {'module_args': self.params}
|
||||||
kwargs = remove_values(kwargs, self.no_log_values)
|
kwargs = remove_values(kwargs, self.no_log_values)
|
||||||
self.do_cleanup_files()
|
self.do_cleanup_files()
|
||||||
print(self.jsonify(kwargs))
|
print('\n%s' % self.jsonify(kwargs))
|
||||||
sys.exit(0)
|
sys.exit(0)
|
||||||
|
|
||||||
def fail_json(self, **kwargs):
|
def fail_json(self, **kwargs):
|
||||||
|
@ -1712,7 +1709,7 @@ class AnsibleModule(object):
|
||||||
kwargs['invocation'] = {'module_args': self.params}
|
kwargs['invocation'] = {'module_args': self.params}
|
||||||
kwargs = remove_values(kwargs, self.no_log_values)
|
kwargs = remove_values(kwargs, self.no_log_values)
|
||||||
self.do_cleanup_files()
|
self.do_cleanup_files()
|
||||||
print(self.jsonify(kwargs))
|
print('\n%s' % self.jsonify(kwargs))
|
||||||
sys.exit(1)
|
sys.exit(1)
|
||||||
|
|
||||||
def fail_on_missing_params(self, required_params=None):
|
def fail_on_missing_params(self, required_params=None):
|
||||||
|
|
|
@@ -37,7 +37,7 @@ try:
     from docker.constants import DEFAULT_TIMEOUT_SECONDS, DEFAULT_DOCKER_API_VERSION
     from docker.utils.types import Ulimit, LogConfig
     from docker import auth
-except ImportError, exc:
+except ImportError as exc:
     HAS_DOCKER_ERROR = str(exc)
     HAS_DOCKER_PY = False

@@ -161,9 +161,9 @@ class AnsibleDockerClient(Client):

         try:
             super(AnsibleDockerClient, self).__init__(**self._connect_params)
-        except APIError, exc:
+        except APIError as exc:
             self.fail("Docker API error: %s" % exc)
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Error connecting: %s" % exc)

     def log(self, msg, pretty_print=False):

@@ -262,7 +262,7 @@ class AnsibleDockerClient(Client):
         try:
             tls_config = TLSConfig(**kwargs)
             return tls_config
-        except TLSParameterError, exc:
+        except TLSParameterError as exc:
             self.fail("TLS config error: %s" % exc)

     def _get_connect_params(self):

@@ -372,9 +372,9 @@ class AnsibleDockerClient(Client):
                 if container['Id'] == name:
                     result = container
                     break
-        except SSLError, exc:
+        except SSLError as exc:
             self._handle_ssl_error(exc)
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Error retrieving container list: %s" % exc)

         if result is not None:

@@ -382,7 +382,7 @@ class AnsibleDockerClient(Client):
                 self.log("Inspecting container Id %s" % result['Id'])
                 result = self.inspect_container(container=result['Id'])
                 self.log("Completed container inspection")
-            except Exception, exc:
+            except Exception as exc:
                 self.fail("Error inspecting container: %s" % exc)

         return result

@@ -411,7 +411,7 @@ class AnsibleDockerClient(Client):
         if len(images) == 1:
             try:
                 inspection = self.inspect_image(images[0]['Id'])
-            except Exception, exc:
+            except Exception as exc:
                 self.fail("Error inspecting image %s:%s - %s" % (name, tag, str(exc)))
             return inspection

@@ -455,7 +455,7 @@ class AnsibleDockerClient(Client):
                                                   error_detail.get('message')))
                 else:
                     self.fail("Error pulling %s - %s" % (name, line.get('error')))
-        except Exception, exc:
+        except Exception as exc:
             self.fail("Error pulling image %s:%s - %s" % (name, tag, str(exc)))

         return self.find_image(name, tag)
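
The hunks above are mechanical Python 3 compatibility fixes: the `except E, exc:` comma syntax is Python 2 only, while `except E as exc:` parses on Python 2.6+ and on Python 3. A minimal sketch of the guarded-import idiom these utilities use (the surrounding module is hypothetical; the docker-py import itself is real):

    # Record availability of an optional dependency instead of failing at
    # import time; 'as' binds the exception on py2.6+ and py3 alike.
    HAS_DOCKER_PY = True
    HAS_DOCKER_ERROR = None

    try:
        from docker import auth  # noqa: F401
    except ImportError as exc:
        HAS_DOCKER_ERROR = str(exc)
        HAS_DOCKER_PY = False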
@@ -226,13 +226,13 @@ def ec2_connect(module):
     if region:
         try:
             ec2 = connect_to_aws(boto.ec2, region, **boto_params)
-        except (boto.exception.NoAuthHandlerFound, AnsibleAWSError), e:
+        except (boto.exception.NoAuthHandlerFound, AnsibleAWSError) as e:
             module.fail_json(msg=str(e))
     # Otherwise, no region so we fallback to the old connection method
     elif ec2_url:
         try:
             ec2 = boto.connect_ec2_endpoint(ec2_url, **boto_params)
-        except (boto.exception.NoAuthHandlerFound, AnsibleAWSError), e:
+        except (boto.exception.NoAuthHandlerFound, AnsibleAWSError) as e:
             module.fail_json(msg=str(e))
     else:
         module.fail_json(msg="Either region or ec2_url must be specified")

@@ -364,7 +364,10 @@ def boto3_tag_list_to_ansible_dict(tags_list):

     tags_dict = {}
     for tag in tags_list:
-        tags_dict[tag['Key']] = tag['Value']
+        if 'key' in tag:
+            tags_dict[tag['key']] = tag['value']
+        elif 'Key' in tag:
+            tags_dict[tag['Key']] = tag['Value']

     return tags_dict
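
The `boto3_tag_list_to_ansible_dict` change makes the helper tolerant of both tag casings boto3 can return ('Key'/'Value' for most services, lowercase 'key'/'value' for a few). A self-contained illustration of the patched behaviour:

    def boto3_tag_list_to_ansible_dict(tags_list):
        # Mirrors the helper above: accept either key casing.
        tags_dict = {}
        for tag in tags_list:
            if 'key' in tag:
                tags_dict[tag['key']] = tag['value']
            elif 'Key' in tag:
                tags_dict[tag['Key']] = tag['Value']
        return tags_dict

    print(boto3_tag_list_to_ansible_dict([{'Key': 'Name', 'Value': 'web1'},
                                          {'key': 'env', 'value': 'prod'}]))
    # -> {'Name': 'web1', 'env': 'prod'}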
@@ -43,6 +43,7 @@ def f5_argument_spec():
         user=dict(type='str', required=True),
         password=dict(type='str', aliases=['pass', 'pwd'], required=True, no_log=True),
         validate_certs = dict(default='yes', type='bool'),
+        server_port = dict(type='int', default=443, required=False),
         state = dict(type='str', default='present', choices=['present', 'absent']),
         partition = dict(type='str', default='Common')
     )

@@ -57,12 +58,16 @@ def f5_parse_arguments(module):
         if not hasattr(ssl, 'SSLContext'):
             module.fail_json(msg='bigsuds does not support verifying certificates with python < 2.7.9. Either update python or set validate_certs=False on the task')

-    return (module.params['server'],module.params['user'],module.params['password'],module.params['state'],module.params['partition'],module.params['validate_certs'])
+    return (module.params['server'],module.params['user'],module.params['password'],module.params['state'],module.params['partition'],module.params['validate_certs'],module.params['server_port'])

-def bigip_api(bigip, user, password, validate_certs):
+def bigip_api(bigip, user, password, validate_certs, port=443):
     try:
-        # bigsuds >= 1.0.3
-        api = bigsuds.BIGIP(hostname=bigip, username=user, password=password, verify=validate_certs)
+        if bigsuds.__version__ >= '1.0.4':
+            api = bigsuds.BIGIP(hostname=bigip, username=user, password=password, verify=validate_certs, port=port)
+        elif bigsuds.__version__ == '1.0.3':
+            api = bigsuds.BIGIP(hostname=bigip, username=user, password=password, verify=validate_certs)
+        else:
+            api = bigsuds.BIGIP(hostname=bigip, username=user, password=password)
     except TypeError:
         # bigsuds < 1.0.3, no verify param
         if validate_certs:

@@ -92,5 +97,3 @@ def fq_list_names(partition,list_names):
     if list_names is None:
         return None
     return map(lambda x: fq_name(partition,x),list_names)
-
-
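
With the new `server_port` option, callers of these helpers unpack an extra tuple element and pass it through. A sketch of the expected wiring (here `module` is assumed to be an AnsibleModule built from `f5_argument_spec()`):

    # Sketch only: unpack the seventh element added by this change.
    server, user, password, state, partition, validate_certs, port = \
        f5_parse_arguments(module)
    api = bigip_api(server, user, password, validate_certs, port=port)

Note the version gate in `bigip_api`: only bigsuds >= 1.0.4 accepts `port`, 1.0.3 accepts `verify` but not `port`, and older releases accept neither.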
@@ -156,6 +156,7 @@ class Facts(object):
                  { 'path' : '/usr/sbin/urpmi',       'name' : 'urpmi' },
                  { 'path' : '/usr/bin/pacman',       'name' : 'pacman' },
                  { 'path' : '/bin/opkg',             'name' : 'opkg' },
+                 { 'path' : '/usr/pkg/bin/pkgin',    'name' : 'pkgin' },
                  { 'path' : '/opt/local/bin/pkgin',  'name' : 'pkgin' },
                  { 'path' : '/opt/local/bin/port',   'name' : 'macports' },
                  { 'path' : '/usr/local/bin/brew',   'name' : 'homebrew' },

@@ -179,7 +180,7 @@ class Facts(object):
         # about those first.
         if load_on_init:
             self.get_platform_facts()
-            self.facts.update(Distribution().populate())
+            self.facts.update(Distribution(module).populate())
             self.get_cmdline()
             self.get_public_ssh_host_keys()
             self.get_selinux_facts()

@@ -604,6 +605,10 @@ class Distribution(object):
     This is unit tested. Please extend the tests to cover all distributions if you have them available.
     """

+    # every distribution name mentioned here, must have one of
+    # - allowempty == True
+    # - be listed in SEARCH_STRING
+    # - have a function get_distribution_DISTNAME implemented
     OSDIST_LIST = (
         {'path': '/etc/oracle-release', 'name': 'OracleLinux'},
         {'path': '/etc/slackware-version', 'name': 'Slackware'},

@@ -643,36 +648,32 @@ class Distribution(object):
         FreeBSD = 'FreeBSD', HPUX = 'HP-UX', openSUSE_Leap = 'Suse'
     )

-    def __init__(self):
+    def __init__(self, module):
         self.system = platform.system()
         self.facts = {}
+        self.module = module

     def populate(self):
-        if self.system == 'Linux':
-            self.get_distribution_facts()
+        self.get_distribution_facts()
         return self.facts

     def get_distribution_facts(self):

         # The platform module provides information about the running
         # system/distribution. Use this as a baseline and fix buggy systems
         # afterwards
+        self.facts['distribution'] = self.system
         self.facts['distribution_release'] = platform.release()
         self.facts['distribution_version'] = platform.version()

-        systems_platform_working = ('NetBSD', 'FreeBSD')
         systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'OpenBSD')

-        if self.system in systems_platform_working:
-            # the distribution is provided by platform module already and needs no fixes
-            pass
+        self.facts['distribution'] = self.system

-        elif self.system in systems_implemented:
-            self.facts['distribution'] = self.system
+        if self.system in systems_implemented:
             cleanedname = self.system.replace('-','')
             distfunc = getattr(self, 'get_distribution_'+cleanedname)
             distfunc()
-        else:
+        elif self.system == 'Linux':
             # try to find out which linux distribution this is
             dist = platform.dist()
             self.facts['distribution'] = dist[0].capitalize() or 'NA'

@@ -687,12 +688,12 @@ class Distribution(object):

             if not os.path.exists(path):
                 continue
+            # if allowempty is set, we only check for file existance but not content
+            if 'allowempty' in ddict and ddict['allowempty']:
+                self.facts['distribution'] = name
+                break
             if os.path.getsize(path) == 0:
-                if 'allowempty' in ddict and ddict['allowempty']:
-                    self.facts['distribution'] = name
-                    break
-                else:
-                    continue
+                continue

             data = get_file_content(path)
             if name in self.SEARCH_STRING:

@@ -707,13 +708,19 @@ class Distribution(object):
                         break
                 else:
                     # call a dedicated function for parsing the file content
-                    distfunc = getattr(self, 'get_distribution_' + name)
-                    parsed = distfunc(name, data, path)
-                    if parsed is None or parsed:
-                        # distfunc return False if parsing failed
-                        # break only if parsing was succesful
-                        # otherwise continue with other distributions
-                        break
+                    try:
+                        distfunc = getattr(self, 'get_distribution_' + name)
+                        parsed = distfunc(name, data, path)
+                        if parsed is None or parsed:
+                            # distfunc return False if parsing failed
+                            # break only if parsing was succesful
+                            # otherwise continue with other distributions
+                            break
+                    except AttributeError:
+                        # this should never happen, but if it does fail quitely and not with a traceback
+                        pass

         # to debug multiple matching release files, one can use:
         # self.facts['distribution_debug'].append({path + ' ' + name:

@@ -780,10 +787,6 @@ class Distribution(object):
         if release:
             self.facts['distribution_release'] = release.groups()[0]

-    def get_distribution_Archlinux(self, name, data, path):
-        self.facts['distribution'] = 'Archlinux'
-        self.facts['distribution_version'] = data
-
     def get_distribution_Alpine(self, name, data, path):
         self.facts['distribution'] = 'Alpine'
         self.facts['distribution_version'] = data
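
The new comments spell out the contract for `OSDIST_LIST`: every entry must either set `allowempty`, appear in `SEARCH_STRING`, or ship a `get_distribution_<name>` parser. A hypothetical new entry following that contract might look like this (the distribution name and release file are invented for illustration; the method would live on `Distribution`):

    OSDIST_LIST = (
        {'path': '/etc/foolinux-release', 'name': 'FooLinux'},
    )

    def get_distribution_FooLinux(self, name, data, path):
        # Return False when 'data' does not parse, so the caller falls
        # through to the remaining OSDIST_LIST entries.
        if 'FooLinux' not in data:
            return False
        self.facts['distribution'] = name
        self.facts['distribution_version'] = data.split()[-1]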
@@ -27,18 +27,29 @@
 # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 #

+import json
 import os
 import traceback
+from distutils.version import LooseVersion

-from libcloud.compute.types import Provider
-from libcloud.compute.providers import get_driver
+try:
+    from libcloud.compute.types import Provider
+    import libcloud
+    from libcloud.compute.providers import get_driver
+    HAS_LIBCLOUD_BASE = True
+except ImportError:
+    HAS_LIBCLOUD_BASE = False

 USER_AGENT_PRODUCT="Ansible-gce"
 USER_AGENT_VERSION="v1"

 def gce_connect(module, provider=None):
     """Return a Google Cloud Engine connection."""
+    if not HAS_LIBCLOUD_BASE:
+        module.fail_json(msg='libcloud must be installed to use this module')
+
     service_account_email = module.params.get('service_account_email', None)
+    credentials_file = module.params.get('credentials_file', None)
     pem_file = module.params.get('pem_file', None)
     project_id = module.params.get('project_id', None)

@@ -50,6 +61,8 @@ def gce_connect(module, provider=None):
         project_id = os.environ.get('GCE_PROJECT', None)
     if not pem_file:
         pem_file = os.environ.get('GCE_PEM_FILE_PATH', None)
+    if not credentials_file:
+        credentials_file = os.environ.get('GCE_CREDENTIALS_FILE_PATH', pem_file)

     # If we still don't have one or more of our credentials, attempt to
     # get the remaining values from the libcloud secrets file.

@@ -62,32 +75,48 @@ def gce_connect(module, provider=None):
         if hasattr(secrets, 'GCE_PARAMS'):
             if not service_account_email:
                 service_account_email = secrets.GCE_PARAMS[0]
-            if not pem_file:
-                pem_file = secrets.GCE_PARAMS[1]
+            if not credentials_file:
+                credentials_file = secrets.GCE_PARAMS[1]
         keyword_params = getattr(secrets, 'GCE_KEYWORD_PARAMS', {})
         if not project_id:
             project_id = keyword_params.get('project', None)

     # If we *still* don't have the credentials we need, then it's time to
     # just fail out.
-    if service_account_email is None or pem_file is None or project_id is None:
+    if service_account_email is None or credentials_file is None or project_id is None:
         module.fail_json(msg='Missing GCE connection parameters in libcloud '
                              'secrets file.')
         return None
+    else:
+        # We have credentials but lets make sure that if they are JSON we have the minimum
+        # libcloud requirement met
+        try:
+            # Try to read credentials as JSON
+            with open(credentials_file) as credentials:
+                json.loads(credentials.read())
+            # If the credentials are proper JSON and we do not have the minimum
+            # required libcloud version, bail out and return a descriptive error
+            if LooseVersion(libcloud.__version__) < '0.17.0':
+                module.fail_json(msg='Using JSON credentials but libcloud minimum version not met. '
+                                     'Upgrade to libcloud>=0.17.0.')
+                return None
+        except ValueError as e:
+            # Not JSON
+            pass

     # Allow for passing in libcloud Google DNS (e.g, Provider.GOOGLE)
     if provider is None:
         provider = Provider.GCE

     try:
-        gce = get_driver(provider)(service_account_email, pem_file,
+        gce = get_driver(provider)(service_account_email, credentials_file,
                                    datacenter=module.params.get('zone', None),
                                    project=project_id)
         gce.connection.user_agent_append("%s/%s" % (
             USER_AGENT_PRODUCT, USER_AGENT_VERSION))
-    except (RuntimeError, ValueError), e:
+    except (RuntimeError, ValueError) as e:
         module.fail_json(msg=str(e), changed=False)
-    except Exception, e:
+    except Exception as e:
         module.fail_json(msg=unexpected_error_msg(e), changed=False)

     return gce
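
The JSON-credentials guard relies on `distutils.version.LooseVersion`, which compares dotted version strings component by component rather than lexically. A standalone check in the same spirit (assuming libcloud is installed):

    from distutils.version import LooseVersion

    import libcloud

    # '0.9.1' < '0.17.0' under LooseVersion, although a plain string
    # comparison would get this wrong ('0.9.1' > '0.17.0' lexically).
    if LooseVersion(libcloud.__version__) < LooseVersion('0.17.0'):
        raise SystemExit('JSON credentials require libcloud>=0.17.0')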
@@ -158,7 +158,8 @@ class Netconf(object):

             self.config = Config(self.device)

-        except Exception, exc:
+        except Exception:
+            exc = get_exception()
             self._fail('unable to connect to %s: %s' % (host, str(exc)))

     def run_commands(self, commands, **kwargs):

@@ -169,9 +170,11 @@ class Netconf(object):
             try:
                 resp = self.device.cli(command=cmd, format=fmt)
                 response.append(resp)
-            except (ValueError, RpcError), exc:
+            except (ValueError, RpcError):
+                exc = get_exception()
                 self._fail('Unable to get cli output: %s' % str(exc))
-            except Exception, exc:
+            except Exception:
+                exc = get_exception()
                 self._fail('Uncaught exception - please report: %s' % str(exc))

         return response

@@ -180,14 +183,16 @@ class Netconf(object):
         try:
             self.config.unlock()
             self._locked = False
-        except UnlockError, exc:
+        except UnlockError:
+            exc = get_exception()
             self.module.log('unable to unlock config: {0}'.format(str(exc)))

     def lock_config(self):
         try:
             self.config.lock()
             self._locked = True
-        except LockError, exc:
+        except LockError:
+            exc = get_exception()
             self.module.log('unable to lock config: {0}'.format(str(exc)))

     def check_config(self):

@@ -200,7 +205,8 @@ class Netconf(object):
             if confirm and confirm > 0:
                 kwargs['confirm'] = confirm
             return self.config.commit(**kwargs)
-        except CommitError, exc:
+        except CommitError:
+            exc = get_exception()
             msg = 'Unable to commit configuration: {0}'.format(str(exc))
             self._fail(msg=msg)

@@ -215,7 +221,8 @@ class Netconf(object):
         try:
             self.config.load(candidate, format=format, merge=merge,
                              overwrite=overwrite)
-        except ConfigLoadError, exc:
+        except ConfigLoadError:
+            exc = get_exception()
             msg = 'Unable to load config: {0}'.format(str(exc))
             self._fail(msg=msg)

@@ -234,7 +241,8 @@ class Netconf(object):

         try:
             result = self.config.rollback(identifier)
-        except Exception, exc:
+        except Exception:
+            exc = get_exception()
             msg = 'Unable to rollback config: {0}'.format(str(exc))
             self._fail(msg=msg)

@@ -350,6 +358,8 @@ def get_module(**kwargs):
         module.fail_json(msg='paramiko is required but does not appear to be installed')
     elif module.params['transport'] == 'netconf' and not HAS_PYEZ:
         module.fail_json(msg='junos-eznc >= 1.2.2 is required but does not appear to be installed')
+    elif module.params['transport'] == 'netconf' and not HAS_JXMLEASE:
+        module.fail_json(msg='jxmlease is required but does not appear to be installed')

     module.connect()
     return module
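
The new jxmlease requirement follows the same optional-dependency convention as HAS_PYEZ: import at module load, record availability in a flag, and fail with a clear message only when the chosen transport actually needs the library. A sketch of the guard the flag implies (the flag name mirrors the check in get_module() above):

    # Import-guard convention assumed by the HAS_JXMLEASE check above.
    try:
        import jxmlease  # noqa: F401
        HAS_JXMLEASE = True
    except ImportError:
        HAS_JXMLEASE = False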
@@ -229,7 +229,7 @@ class NetworkConfig(object):
         if self._device_os == 'junos':
             return updates

-        diffs = dict()
+        diffs = collections.OrderedDict()
         for update in updates:
             if replace == 'block' and update.parents:
                 update = update.parents[-1]

@@ -382,7 +382,7 @@ class Conditional(object):
         return self.number(value) <= self.value

     def contains(self, value):
-        return self.value in value
+        return str(self.value) in value
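
Switching `diffs` from `dict()` to `collections.OrderedDict()` preserves the order in which config updates were discovered; on the Python 2 interpreters Ansible supported, plain dicts do not keep insertion order, so the rendered diff could otherwise replay dependent lines out of order. A minimal demonstration:

    import collections

    # An OrderedDict replays updates in the order they were recorded,
    # even on interpreters where plain dict iteration order is arbitrary.
    diffs = collections.OrderedDict()
    diffs['interface Ethernet1'] = ['description uplink']
    diffs['interface Ethernet2'] = ['shutdown']
    assert list(diffs) == ['interface Ethernet1', 'interface Ethernet2']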
lib/ansible/module_utils/pycompat.py (new file, 44 lines)
@@ -0,0 +1,44 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c) 2016, Toshio Kuratomi <tkuratomi@ansible.com>
# Copyright (c) 2015, Marius Gedminas
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
#    * Redistributions of source code must retain the above copyright
#      notice, this list of conditions and the following disclaimer.
#    * Redistributions in binary form must reproduce the above copyright notice,
#      this list of conditions and the following disclaimer in the documentation
#      and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#

import sys

def get_exception():
    """Get the current exception.

    This code needs to work on Python 2.4 through 3.x, so we cannot use
    "except Exception, e:" (SyntaxError on Python 3.x) nor
    "except Exception as e:" (SyntaxError on Python 2.4-2.5).
    Instead we must use ::

        except Exception:
            e = get_exception()

    """
    return sys.exc_info()[1]
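
`get_exception()` exists because neither `except E, e` (a SyntaxError on Python 3) nor `except E as e` (a SyntaxError on Python 2.4-2.5) parses everywhere Ansible modules had to run; `sys.exc_info()[1]` is the lowest common denominator. A usage sketch (`risky_operation` is a hypothetical stand-in):

    def risky_operation():
        # Hypothetical stand-in for a call that may raise.
        raise ValueError('boom')

    try:
        risky_operation()
    except ValueError:
        exc = get_exception()  # defined in pycompat.py above
        print('operation failed: %s' % str(exc))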
@@ -163,7 +163,7 @@ def rax_find_volume(module, rax_module, name):
         volume = cbs.find(name=name)
     except rax_module.exc.NotFound:
         volume = None
-    except Exception, e:
+    except Exception as e:
         module.fail_json(msg='%s' % e)
     return volume

@@ -263,7 +263,7 @@ def rax_required_together():

 def setup_rax_module(module, rax_module, region_required=True):
     """Set up pyrax in a standard way for all modules"""
-    rax_module.USER_AGENT = 'ansible/%s %s' % (module.constants['ANSIBLE_VERSION'],
+    rax_module.USER_AGENT = 'ansible/%s %s' % (module.ansible_version,
                                                rax_module.USER_AGENT)

     api_key = module.params.get('api_key')

@@ -302,7 +302,7 @@ def setup_rax_module(module, rax_module, region_required=True):
                            os.environ.get('RAX_CREDS_FILE'))
             region = (region or os.environ.get('RAX_REGION') or
                       rax_module.get_setting('region'))
-        except KeyError, e:
+        except KeyError as e:
             module.fail_json(msg='Unable to load %s' % e.message)

         try:

@@ -317,7 +317,7 @@ def setup_rax_module(module, rax_module, region_required=True):
                 rax_module.set_credential_file(credentials, region=region)
             else:
                 raise Exception('No credentials supplied!')
-        except Exception, e:
+        except Exception as e:
             if e.message:
                 msg = str(e.message)
             else:
@@ -31,6 +31,7 @@ try:
 except ImportError:
     HAS_PARAMIKO = False

+from ansible.module_utils.basic import get_exception
+
 ANSI_RE = re.compile(r'(\x1b\[\?1h\x1b=)')

@@ -135,7 +136,8 @@ class Shell(object):
                 if self.read(window):
                     resp = self.strip(recv.getvalue())
                     return self.sanitize(cmd, resp)
-        except ShellError, exc:
+        except ShellError:
+            exc = get_exception()
             exc.command = cmd
             raise
lib/ansible/module_utils/six.py (new file, 577 lines)
@@ -0,0 +1,577 @@
"""Utilities for writing code that runs on Python 2 and 3"""

# Copyright (c) 2010-2013 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import operator
import sys
import types

__author__ = "Benjamin Peterson <benjamin@python.org>"
__version__ = "1.4.1"


# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3

if PY3:
    string_types = str,
    integer_types = int,
    class_types = type,
    text_type = str
    binary_type = bytes

    MAXSIZE = sys.maxsize
else:
    string_types = basestring,
    integer_types = (int, long)
    class_types = (type, types.ClassType)
    text_type = unicode
    binary_type = str

    if sys.platform.startswith("java"):
        # Jython always uses 32 bits.
        MAXSIZE = int((1 << 31) - 1)
    else:
        # It's possible to have sizeof(long) != sizeof(Py_ssize_t).
        class X(object):
            def __len__(self):
                return 1 << 31
        try:
            len(X())
        except OverflowError:
            # 32-bit
            MAXSIZE = int((1 << 31) - 1)
        else:
            # 64-bit
            MAXSIZE = int((1 << 63) - 1)
        del X


def _add_doc(func, doc):
    """Add documentation to a function."""
    func.__doc__ = doc


def _import_module(name):
    """Import module, returning the module after the last dot."""
    __import__(name)
    return sys.modules[name]


class _LazyDescr(object):

    def __init__(self, name):
        self.name = name

    def __get__(self, obj, tp):
        result = self._resolve()
        setattr(obj, self.name, result)
        # This is a bit ugly, but it avoids running this again.
        delattr(tp, self.name)
        return result


class MovedModule(_LazyDescr):

    def __init__(self, name, old, new=None):
        super(MovedModule, self).__init__(name)
        if PY3:
            if new is None:
                new = name
            self.mod = new
        else:
            self.mod = old

    def _resolve(self):
        return _import_module(self.mod)


class MovedAttribute(_LazyDescr):

    def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
        super(MovedAttribute, self).__init__(name)
        if PY3:
            if new_mod is None:
                new_mod = name
            self.mod = new_mod
            if new_attr is None:
                if old_attr is None:
                    new_attr = name
                else:
                    new_attr = old_attr
            self.attr = new_attr
        else:
            self.mod = old_mod
            if old_attr is None:
                old_attr = name
            self.attr = old_attr

    def _resolve(self):
        module = _import_module(self.mod)
        return getattr(module, self.attr)


class _MovedItems(types.ModuleType):
    """Lazy loading of moved objects"""


_moved_attributes = [
    MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
    MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
    MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
    MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
    MovedAttribute("map", "itertools", "builtins", "imap", "map"),
    MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
    MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
    MovedAttribute("reduce", "__builtin__", "functools"),
    MovedAttribute("StringIO", "StringIO", "io"),
    MovedAttribute("UserString", "UserString", "collections"),
    MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
    MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
    MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),

    MovedModule("builtins", "__builtin__"),
    MovedModule("configparser", "ConfigParser"),
    MovedModule("copyreg", "copy_reg"),
    MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
    MovedModule("http_cookies", "Cookie", "http.cookies"),
    MovedModule("html_entities", "htmlentitydefs", "html.entities"),
    MovedModule("html_parser", "HTMLParser", "html.parser"),
    MovedModule("http_client", "httplib", "http.client"),
    MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
    MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
    MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
    MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
    MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
    MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
    MovedModule("cPickle", "cPickle", "pickle"),
    MovedModule("queue", "Queue"),
    MovedModule("reprlib", "repr"),
    MovedModule("socketserver", "SocketServer"),
    MovedModule("tkinter", "Tkinter"),
    MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
    MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
    MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
    MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
    MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
    MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
    MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
    MovedModule("tkinter_colorchooser", "tkColorChooser",
                "tkinter.colorchooser"),
    MovedModule("tkinter_commondialog", "tkCommonDialog",
                "tkinter.commondialog"),
    MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
    MovedModule("tkinter_font", "tkFont", "tkinter.font"),
    MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
    MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
                "tkinter.simpledialog"),
    MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
    MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
    MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
    MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
    MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
    setattr(_MovedItems, attr.name, attr)
del attr

moves = sys.modules[__name__ + ".moves"] = _MovedItems(__name__ + ".moves")


class Module_six_moves_urllib_parse(types.ModuleType):
    """Lazy loading of moved objects in six.moves.urllib_parse"""


_urllib_parse_moved_attributes = [
    MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
    MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
    MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
    MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
    MovedAttribute("urljoin", "urlparse", "urllib.parse"),
    MovedAttribute("urlparse", "urlparse", "urllib.parse"),
    MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
    MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
    MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
    MovedAttribute("quote", "urllib", "urllib.parse"),
    MovedAttribute("quote_plus", "urllib", "urllib.parse"),
    MovedAttribute("unquote", "urllib", "urllib.parse"),
    MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
    MovedAttribute("urlencode", "urllib", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
    setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr

sys.modules[__name__ + ".moves.urllib_parse"] = Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse")
sys.modules[__name__ + ".moves.urllib.parse"] = Module_six_moves_urllib_parse(__name__ + ".moves.urllib.parse")


class Module_six_moves_urllib_error(types.ModuleType):
    """Lazy loading of moved objects in six.moves.urllib_error"""


_urllib_error_moved_attributes = [
    MovedAttribute("URLError", "urllib2", "urllib.error"),
    MovedAttribute("HTTPError", "urllib2", "urllib.error"),
    MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
    setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr

sys.modules[__name__ + ".moves.urllib_error"] = Module_six_moves_urllib_error(__name__ + ".moves.urllib_error")
sys.modules[__name__ + ".moves.urllib.error"] = Module_six_moves_urllib_error(__name__ + ".moves.urllib.error")


class Module_six_moves_urllib_request(types.ModuleType):
    """Lazy loading of moved objects in six.moves.urllib_request"""


_urllib_request_moved_attributes = [
    MovedAttribute("urlopen", "urllib2", "urllib.request"),
    MovedAttribute("install_opener", "urllib2", "urllib.request"),
    MovedAttribute("build_opener", "urllib2", "urllib.request"),
    MovedAttribute("pathname2url", "urllib", "urllib.request"),
    MovedAttribute("url2pathname", "urllib", "urllib.request"),
    MovedAttribute("getproxies", "urllib", "urllib.request"),
    MovedAttribute("Request", "urllib2", "urllib.request"),
    MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
    MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
    MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
    MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
    MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
    MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
    MovedAttribute("FileHandler", "urllib2", "urllib.request"),
    MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
    MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
    MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
    MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
    MovedAttribute("urlretrieve", "urllib", "urllib.request"),
    MovedAttribute("urlcleanup", "urllib", "urllib.request"),
    MovedAttribute("URLopener", "urllib", "urllib.request"),
    MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
    setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr

sys.modules[__name__ + ".moves.urllib_request"] = Module_six_moves_urllib_request(__name__ + ".moves.urllib_request")
sys.modules[__name__ + ".moves.urllib.request"] = Module_six_moves_urllib_request(__name__ + ".moves.urllib.request")


class Module_six_moves_urllib_response(types.ModuleType):
    """Lazy loading of moved objects in six.moves.urllib_response"""


_urllib_response_moved_attributes = [
    MovedAttribute("addbase", "urllib", "urllib.response"),
    MovedAttribute("addclosehook", "urllib", "urllib.response"),
    MovedAttribute("addinfo", "urllib", "urllib.response"),
    MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
    setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr

sys.modules[__name__ + ".moves.urllib_response"] = Module_six_moves_urllib_response(__name__ + ".moves.urllib_response")
sys.modules[__name__ + ".moves.urllib.response"] = Module_six_moves_urllib_response(__name__ + ".moves.urllib.response")


class Module_six_moves_urllib_robotparser(types.ModuleType):
    """Lazy loading of moved objects in six.moves.urllib_robotparser"""


_urllib_robotparser_moved_attributes = [
    MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
    setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr

sys.modules[__name__ + ".moves.urllib_robotparser"] = Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib_robotparser")
sys.modules[__name__ + ".moves.urllib.robotparser"] = Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser")


class Module_six_moves_urllib(types.ModuleType):
    """Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
    parse = sys.modules[__name__ + ".moves.urllib_parse"]
    error = sys.modules[__name__ + ".moves.urllib_error"]
    request = sys.modules[__name__ + ".moves.urllib_request"]
    response = sys.modules[__name__ + ".moves.urllib_response"]
    robotparser = sys.modules[__name__ + ".moves.urllib_robotparser"]


sys.modules[__name__ + ".moves.urllib"] = Module_six_moves_urllib(__name__ + ".moves.urllib")


def add_move(move):
    """Add an item to six.moves."""
    setattr(_MovedItems, move.name, move)


def remove_move(name):
    """Remove item from six.moves."""
    try:
        delattr(_MovedItems, name)
    except AttributeError:
        try:
            del moves.__dict__[name]
        except KeyError:
            raise AttributeError("no such move, %r" % (name,))


if PY3:
    _meth_func = "__func__"
    _meth_self = "__self__"

    _func_closure = "__closure__"
    _func_code = "__code__"
    _func_defaults = "__defaults__"
    _func_globals = "__globals__"

    _iterkeys = "keys"
    _itervalues = "values"
    _iteritems = "items"
    _iterlists = "lists"
else:
    _meth_func = "im_func"
    _meth_self = "im_self"

    _func_closure = "func_closure"
    _func_code = "func_code"
    _func_defaults = "func_defaults"
    _func_globals = "func_globals"

    _iterkeys = "iterkeys"
    _itervalues = "itervalues"
    _iteritems = "iteritems"
    _iterlists = "iterlists"


try:
    advance_iterator = next
except NameError:
    def advance_iterator(it):
        return it.next()
next = advance_iterator


try:
    callable = callable
except NameError:
    def callable(obj):
        return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)


if PY3:
    def get_unbound_function(unbound):
        return unbound

    create_bound_method = types.MethodType

    Iterator = object
else:
    def get_unbound_function(unbound):
        return unbound.im_func

    def create_bound_method(func, obj):
        return types.MethodType(func, obj, obj.__class__)

    class Iterator(object):

        def next(self):
            return type(self).__next__(self)

    callable = callable
_add_doc(get_unbound_function,
         """Get the function out of a possibly unbound function""")


get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)


def iterkeys(d, **kw):
    """Return an iterator over the keys of a dictionary."""
    return iter(getattr(d, _iterkeys)(**kw))

def itervalues(d, **kw):
    """Return an iterator over the values of a dictionary."""
    return iter(getattr(d, _itervalues)(**kw))

def iteritems(d, **kw):
    """Return an iterator over the (key, value) pairs of a dictionary."""
    return iter(getattr(d, _iteritems)(**kw))

def iterlists(d, **kw):
    """Return an iterator over the (key, [values]) pairs of a dictionary."""
    return iter(getattr(d, _iterlists)(**kw))


if PY3:
    def b(s):
        return s.encode("latin-1")
    def u(s):
        return s
    unichr = chr
    if sys.version_info[1] <= 1:
        def int2byte(i):
            return bytes((i,))
    else:
        # This is about 2x faster than the implementation above on 3.2+
        int2byte = operator.methodcaller("to_bytes", 1, "big")
    byte2int = operator.itemgetter(0)
    indexbytes = operator.getitem
    iterbytes = iter
    import io
    StringIO = io.StringIO
    BytesIO = io.BytesIO
else:
    def b(s):
        return s
    def u(s):
        return unicode(s, "unicode_escape")
    unichr = unichr
    int2byte = chr
    def byte2int(bs):
        return ord(bs[0])
    def indexbytes(buf, i):
        return ord(buf[i])
    def iterbytes(buf):
        return (ord(byte) for byte in buf)
    import StringIO
    StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")


if PY3:
    import builtins
    exec_ = getattr(builtins, "exec")

    def reraise(tp, value, tb=None):
        if value.__traceback__ is not tb:
            raise value.with_traceback(tb)
        raise value

    print_ = getattr(builtins, "print")
    del builtins

else:
    def exec_(_code_, _globs_=None, _locs_=None):
        """Execute code in a namespace."""
        if _globs_ is None:
            frame = sys._getframe(1)
            _globs_ = frame.f_globals
            if _locs_ is None:
                _locs_ = frame.f_locals
            del frame
        elif _locs_ is None:
            _locs_ = _globs_
        exec("""exec _code_ in _globs_, _locs_""")

    exec_("""def reraise(tp, value, tb=None):
    raise tp, value, tb
""")

    def print_(*args, **kwargs):
        """The new-style print function."""
        fp = kwargs.pop("file", sys.stdout)
        if fp is None:
            return
        def write(data):
            if not isinstance(data, basestring):
                data = str(data)
            fp.write(data)
        want_unicode = False
        sep = kwargs.pop("sep", None)
        if sep is not None:
            if isinstance(sep, unicode):
                want_unicode = True
            elif not isinstance(sep, str):
                raise TypeError("sep must be None or a string")
        end = kwargs.pop("end", None)
        if end is not None:
            if isinstance(end, unicode):
                want_unicode = True
            elif not isinstance(end, str):
                raise TypeError("end must be None or a string")
        if kwargs:
            raise TypeError("invalid keyword arguments to print()")
        if not want_unicode:
            for arg in args:
                if isinstance(arg, unicode):
                    want_unicode = True
                    break
        if want_unicode:
            newline = unicode("\n")
            space = unicode(" ")
        else:
            newline = "\n"
            space = " "
        if sep is None:
            sep = space
        if end is None:
            end = newline
        for i, arg in enumerate(args):
            if i:
                write(sep)
            write(arg)
        write(end)

_add_doc(reraise, """Reraise an exception.""")


def with_metaclass(meta, *bases):
    """Create a base class with a metaclass."""
    return meta("NewBase", bases, {})

def add_metaclass(metaclass):
    """Class decorator for creating a class with a metaclass."""
    def wrapper(cls):
        orig_vars = cls.__dict__.copy()
        orig_vars.pop('__dict__', None)
        orig_vars.pop('__weakref__', None)
        for slots_var in orig_vars.get('__slots__', ()):
            orig_vars.pop(slots_var)
        return metaclass(cls.__name__, cls.__bases__, orig_vars)
    return wrapper
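
Bundling six into module_utils lets modules write version-neutral code without requiring six on the managed host. A small taste of what it provides (import path follows the new file's location):

    from ansible.module_utils.six import PY3, iteritems, string_types

    settings = {'timeout': 30, 'host': 'example.com'}
    for key, value in iteritems(settings):   # items() on py3, iteritems() on py2
        if isinstance(value, string_types):  # str on py3, basestring on py2
            print('%s is a string' % key)
    print('running on Python 3: %s' % PY3)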
@@ -893,7 +893,7 @@ def fetch_url(module, url, data=None, headers=None, method=None,
                    url_password=password, http_agent=http_agent, force_basic_auth=force_basic_auth,
                    follow_redirects=follow_redirects)
         info.update(r.info())
-        info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), url=r.geturl(), status=r.getcode()))
+        info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), url=r.geturl(), status=r.code))
     except NoSSLError:
         e = get_exception()
         distribution = get_distribution()
@ -46,7 +46,8 @@ def vca_argument_spec():
|
||||||
api_version=dict(default=DEFAULT_VERSION),
|
api_version=dict(default=DEFAULT_VERSION),
|
||||||
service_type=dict(default=DEFAULT_SERVICE_TYPE, choices=SERVICE_MAP.keys()),
|
service_type=dict(default=DEFAULT_SERVICE_TYPE, choices=SERVICE_MAP.keys()),
|
||||||
vdc_name=dict(),
|
vdc_name=dict(),
|
||||||
gateway_name=dict(default='gateway')
|
gateway_name=dict(default='gateway'),
|
||||||
|
verify_certs=dict(type='bool', default=True)
|
||||||
)
|
)
|
||||||
|
|
||||||
class VcaAnsibleModule(AnsibleModule):
|
class VcaAnsibleModule(AnsibleModule):
|
||||||
|
@ -110,7 +111,7 @@ class VcaAnsibleModule(AnsibleModule):
|
||||||
|
|
||||||
def create_instance(self):
|
def create_instance(self):
|
||||||
service_type = self.params.get('service_type', DEFAULT_SERVICE_TYPE)
|
service_type = self.params.get('service_type', DEFAULT_SERVICE_TYPE)
|
||||||
if service_type == 'vcd':
|
if service_type == 'vcd':
|
||||||
host = self.params['host']
|
host = self.params['host']
|
||||||
else:
|
else:
|
||||||
host = LOGIN_HOST[service_type]
|
host = LOGIN_HOST[service_type]
|
||||||
|
@@ -130,8 +131,12 @@ class VcaAnsibleModule(AnsibleModule):
         service_type = self.params['service_type']
         password = self.params['password']

-        if not self.vca.login(password=password):
-            self.fail('Login to VCA failed', response=self.vca.response.content)
+        login_org = None
+        if service_type == 'vcd':
+            login_org = self.params['org']
+
+        if not self.vca.login(password=password, org=login_org):
+            self.fail('Login to VCA failed', response=self.vca.response)

         try:
             method_name = 'login_%s' % service_type
@@ -139,8 +144,8 @@ class VcaAnsibleModule(AnsibleModule):
             meth()
         except AttributeError:
             self.fail('no login method exists for service_type %s' % service_type)
-        except VcaError, e:
-            self.fail(e.message, response=self.vca.response.content, **e.kwargs)
+        except VcaError as e:
+            self.fail(e.message, response=self.vca.response, **e.kwargs)

     def login_vca(self):
         instance_id = self.params['instance_id']
@@ -155,14 +160,14 @@ class VcaAnsibleModule(AnsibleModule):

         org = self.params['org']
         if not org:
-            raise VcaError('missing required or for service_type vchs')
+            raise VcaError('missing required org for service_type vchs')

         self.vca.login_to_org(service_id, org)

     def login_vcd(self):
         org = self.params['org']
         if not org:
-            raise VcaError('missing required or for service_type vchs')
+            raise VcaError('missing required org for service_type vcd')

         if not self.vca.token:
             raise VcaError('unable to get token for service_type vcd')
@@ -313,7 +318,7 @@ def vca_login(module):
             _vchs_login(vca, password, service, org)
         elif service_type == 'vcd':
             _vcd_login(vca, password, org)
-    except VcaError, e:
+    except VcaError as e:
         module.fail_json(msg=e.message, **e.kwargs)

     return vca
@@ -194,9 +194,9 @@ def connect_to_api(module, disconnect_atexit=True):

     try:
         service_instance = connect.SmartConnect(host=hostname, user=username, pwd=password)
-    except vim.fault.InvalidLogin, invalid_login:
+    except vim.fault.InvalidLogin as invalid_login:
         module.fail_json(msg=invalid_login.msg, apierror=str(invalid_login))
-    except requests.ConnectionError, connection_error:
+    except requests.ConnectionError as connection_error:
         if '[SSL: CERTIFICATE_VERIFY_FAILED]' in str(connection_error) and not validate_certs:
             context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
             context.verify_mode = ssl.CERT_NONE
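The exception-handling hunks above all make the same mechanical change: the Python 2-only comma form of except clauses becomes the `as` form, which parses on both Python 2.6+ and Python 3. A minimal illustration:

    try:
        raise ValueError("boom")
    except ValueError as e:   # accepted by Python 2.6+ and 3.x
        print(e)              # 'except ValueError, e:' is a SyntaxError on 3.x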
@@ -1 +1 @@
-Subproject commit bb9572ca861ff35ce85a34087be892e25a268391
+Subproject commit 92bf802cb82844783a2b678b0e709bdd82c1103d
@@ -1 +1 @@
-Subproject commit 7fd4180857f856a59792724e02e95dd99c067083
+Subproject commit e710dc47fe35fa2e05f57c184f34e2763f9ac864
@@ -66,7 +66,7 @@ class Block(Base, Become, Conditional, Taggable):
         all_vars = self.vars.copy()

         if self._role:
-            all_vars.update(self._role.get_vars(self._dep_chain))
+            all_vars.update(self._role.get_vars(self._dep_chain, include_params=False))
         if self._parent_block:
             all_vars.update(self._parent_block.get_vars())
         if self._task_include:
@@ -96,7 +96,7 @@ class PlaybookInclude(Base, Conditional, Taggable):
             # plays. If so, we can take a shortcut here and simply prepend them to
             # those attached to each block (if any)
             if forward_conditional:
-                for task_block in entry.tasks:
+                for task_block in entry.pre_tasks + entry.roles + entry.tasks + entry.post_tasks:
                     task_block.when = self.when[:] + task_block.when

         return pb
@@ -84,7 +84,7 @@ class Task(Base, Conditional, Taggable, Become):
     _notify = FieldAttribute(isa='list')
     _poll = FieldAttribute(isa='int')
     _register = FieldAttribute(isa='string')
-    _retries = FieldAttribute(isa='int', default=3)
+    _retries = FieldAttribute(isa='int')
     _until = FieldAttribute(isa='list', default=[])

     def __init__(self, block=None, role=None, task_include=None):
@@ -136,7 +136,7 @@ class PluginLoader:
     def _all_directories(self, dir):
         results = []
         results.append(dir)
-        for root, subdirs, files in os.walk(dir):
+        for root, subdirs, files in os.walk(dir, followlinks=True):
            if '__init__.py' in files:
                for x in subdirs:
                    results.append(os.path.join(root,x))
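A short sketch of what followlinks=True changes; the directory names are hypothetical stand-ins:

    import os

    # 'plugins' is a hypothetical tree where 'plugins/extra' is a symlink to
    # another directory. With the default followlinks=False the walk never
    # descends through the symlink; with followlinks=True the linked plugin
    # directories are visited and collected.
    for root, subdirs, files in os.walk('plugins', followlinks=True):
        print(root, subdirs, files)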
@@ -35,6 +35,7 @@ from ansible.compat.six import binary_type, text_type, iteritems, with_metaclass
 from ansible import constants as C
 from ansible.errors import AnsibleError, AnsibleConnectionFailure
 from ansible.executor.module_common import modify_module
+from ansible.release import __version__
 from ansible.parsing.utils.jsonify import jsonify
 from ansible.utils.unicode import to_bytes, to_unicode
@@ -147,7 +148,7 @@ class ActionBase(with_metaclass(ABCMeta, object)):
         # insert shared code and arguments into the module
         (module_data, module_style, module_shebang) = modify_module(module_name, module_path, module_args, task_vars=task_vars, module_compression=self._play_context.module_compression)

-        return (module_style, module_shebang, module_data)
+        return (module_style, module_shebang, module_data, module_path)

     def _compute_environment_string(self):
         '''
@@ -240,7 +241,8 @@ class ActionBase(with_metaclass(ABCMeta, object)):
             raise AnsibleConnectionFailure(output)

         try:
-            rc = self._connection._shell.join_path(result['stdout'].strip(), u'').splitlines()[-1]
+            stdout_parts = result['stdout'].strip().split('%s=' % basefile, 1)
+            rc = self._connection._shell.join_path(stdout_parts[-1], u'').splitlines()[-1]
         except IndexError:
             # stdout was empty or just space, set to / to trigger error in next if
             rc = '/'
@@ -291,7 +293,7 @@ class ActionBase(with_metaclass(ABCMeta, object)):

         return remote_path

-    def _fixup_perms(self, remote_path, remote_user, execute=False, recursive=True):
+    def _fixup_perms(self, remote_path, remote_user, execute=True, recursive=True):
         """
         We need the files we upload to be readable (and sometimes executable)
         by the user being sudo'd to but we want to limit other people's access
@@ -324,7 +326,7 @@ class ActionBase(with_metaclass(ABCMeta, object)):
            # contain a path to a tmp dir but doesn't know if it needs to
            # exist or not.  If there's no path, then there's no need for us
            # to do work
-           self._display.debug('_fixup_perms called with remote_path==None. Sure this is correct?')
+           display.debug('_fixup_perms called with remote_path==None. Sure this is correct?')
            return remote_path

        if self._play_context.become and self._play_context.become_user not in ('root', remote_user):
@@ -360,7 +362,7 @@ class ActionBase(with_metaclass(ABCMeta, object)):
                if C.ALLOW_WORLD_READABLE_TMPFILES:
                    # fs acls failed -- do things this insecure way only
                    # if the user opted in in the config file
-                   self._display.warning('Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user which may be insecure. For information on securing this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user')
+                   display.warning('Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user which may be insecure. For information on securing this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user')
                    res = self._remote_chmod('a+%s' % mode, remote_path, recursive=recursive)
                    if res['rc'] != 0:
                        raise AnsibleError('Failed to set file mode on remote files (rc: {0}, err: {1})'.format(res['rc'], res['stderr']))
@@ -480,21 +482,49 @@ class ActionBase(with_metaclass(ABCMeta, object)):
         else:
             return initial_fragment

-    def _filter_leading_non_json_lines(self, data):
+    @staticmethod
+    def _filter_non_json_lines(data):
         '''
         Used to avoid random output from SSH at the top of JSON output, like messages from
         tcagetattr, or where dropbear spews MOTD on every single command (which is nuts).

-        need to filter anything which starts not with '{', '[', ', '=' or is an empty line.
-        filter only leading lines since multiline JSON is valid.
+        need to filter anything which does not start with '{', '[', or is an empty line.
+        Have to be careful how we filter trailing junk as multiline JSON is valid.
         '''
-        idx = 0
-        for line in data.splitlines(True):
-            if line.startswith((u'{', u'[')):
-                break
-            idx = idx + len(line)
-
-        return data[idx:]
+        # Filter initial junk
+        lines = data.splitlines()
+        for start, line in enumerate(lines):
+            line = line.strip()
+            if line.startswith(u'{'):
+                endchar = u'}'
+                break
+            elif line.startswith(u'['):
+                endchar = u']'
+                break
+        else:
+            display.debug('No start of json char found')
+            raise ValueError('No start of json char found')
+
+        # Filter trailing junk
+        lines = lines[start:]
+        lines.reverse()
+        for end, line in enumerate(lines):
+            if line.strip().endswith(endchar):
+                break
+        else:
+            display.debug('No end of json char found')
+            raise ValueError('No end of json char found')
+
+        if end < len(lines) - 1:
+            # Trailing junk is uncommon and can point to things the user might
+            # want to change.  So print a warning if we find any
+            trailing_junk = lines[:end]
+            trailing_junk.reverse()
+            display.warning('Module invocation had junk after the JSON data: %s' % '\n'.join(trailing_junk))
+
+        lines = lines[end:]
+        lines.reverse()
+        return '\n'.join(lines)

     def _strip_success_message(self, data):
         '''
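A standalone sketch of the trimming algorithm the new staticmethod implements, with the display calls dropped for brevity:

    def filter_non_json_lines(data):
        # Drop junk before the first '{'/'[' and after the matching '}'/']',
        # while keeping multiline JSON intact.
        lines = data.splitlines()
        for start, line in enumerate(lines):
            stripped = line.strip()
            if stripped.startswith('{'):
                endchar = '}'
                break
            elif stripped.startswith('['):
                endchar = ']'
                break
        else:
            raise ValueError('No start of json char found')
        lines = lines[start:]
        lines.reverse()
        for end, line in enumerate(lines):
            if line.strip().endswith(endchar):
                break
        else:
            raise ValueError('No end of json char found')
        lines = lines[end:]
        lines.reverse()
        return '\n'.join(lines)

    print(filter_non_json_lines('MOTD noise\n{"changed": false}\ntrailing junk'))
    # -> {"changed": false}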
@@ -539,10 +569,19 @@ class ActionBase(with_metaclass(ABCMeta, object)):
             module_args['_ansible_diff'] = self._play_context.diff

         # let module know our verbosity
-        module_args['_ansible_verbosity'] = self._display.verbosity
+        module_args['_ansible_verbosity'] = display.verbosity

-        (module_style, shebang, module_data) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
-        if not shebang:
+        # give the module information about the ansible version
+        module_args['_ansible_version'] = __version__
+
+        # set the syslog facility to be used in the module
+        module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)
+
+        # let module know about filesystems that selinux treats specially
+        module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS
+
+        (module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
+        if not shebang and module_style != 'binary':
             raise AnsibleError("module (%s) is missing interpreter line" % module_name)

         # a remote tmp path may be necessary and not already created
@@ -552,15 +591,18 @@ class ActionBase(with_metaclass(ABCMeta, object)):
             tmp = self._make_tmp_path(remote_user)

         if tmp:
-            remote_module_filename = self._connection._shell.get_remote_filename(module_name)
+            remote_module_filename = self._connection._shell.get_remote_filename(module_path)
             remote_module_path = self._connection._shell.join_path(tmp, remote_module_filename)
-            if module_style in ['old', 'non_native_want_json']:
+            if module_style in ('old', 'non_native_want_json', 'binary'):
                 # we'll also need a temp file to hold our module arguments
                 args_file_path = self._connection._shell.join_path(tmp, 'args')

         if remote_module_path or module_style != 'new':
             display.debug("transferring module to remote")
-            self._transfer_data(remote_module_path, module_data)
+            if module_style == 'binary':
+                self._transfer_file(module_path, remote_module_path)
+            else:
+                self._transfer_data(remote_module_path, module_data)
             if module_style == 'old':
                 # we need to dump the module args to a k=v string in a file on
                 # the remote system, which can be read and parsed by the module
@@ -568,7 +610,7 @@ class ActionBase(with_metaclass(ABCMeta, object)):
                 for k,v in iteritems(module_args):
                     args_data += '%s="%s" ' % (k, pipes.quote(text_type(v)))
                 self._transfer_data(args_file_path, args_data)
-            elif module_style == 'non_native_want_json':
+            elif module_style in ('non_native_want_json', 'binary'):
                 self._transfer_data(args_file_path, json.dumps(module_args))
             display.debug("done transferring module to remote")
@@ -627,7 +669,7 @@ class ActionBase(with_metaclass(ABCMeta, object)):

     def _parse_returned_data(self, res):
         try:
-            data = json.loads(self._filter_leading_non_json_lines(res.get('stdout', u'')))
+            data = json.loads(self._filter_non_json_lines(res.get('stdout', u'')))
         except ValueError:
             # not valid json, lets try to capture error
             data = dict(failed=True, parsed=False)
@@ -54,15 +54,18 @@ class ActionModule(ActionBase):
         module_args['_ansible_no_log'] = True

         # configure, upload, and chmod the target module
-        (module_style, shebang, module_data) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
-        self._transfer_data(remote_module_path, module_data)
+        (module_style, shebang, module_data, module_path) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
+        if module_style == 'binary':
+            self._transfer_file(module_path, remote_module_path)
+        else:
+            self._transfer_data(remote_module_path, module_data)

         # configure, upload, and chmod the async_wrapper module
-        (async_module_style, shebang, async_module_data) = self._configure_module(module_name='async_wrapper', module_args=dict(), task_vars=task_vars)
+        (async_module_style, shebang, async_module_data, _) = self._configure_module(module_name='async_wrapper', module_args=dict(), task_vars=task_vars)
         self._transfer_data(async_module_path, async_module_data)

         argsfile = None
-        if module_style == 'non_native_want_json':
+        if module_style in ('non_native_want_json', 'binary'):
             argsfile = self._transfer_data(self._connection._shell.join_path(tmp, 'arguments'), json.dumps(module_args))
         elif module_style == 'old':
             args_data = ""
@@ -93,6 +93,17 @@ class ActionModule(ActionBase):
             except IOError:
                 return dict(failed=True, msg='unable to load src file')

+            # Create a template search path in the following order:
+            # [working_path, self_role_path, dependent_role_paths, dirname(source)]
+            searchpath = [working_path]
+            if self._task._role is not None:
+                searchpath.append(self._task._role._role_path)
+                dep_chain = self._task._block.get_dep_chain()
+                if dep_chain is not None:
+                    for role in dep_chain:
+                        searchpath.append(role._role_path)
+            searchpath.append(os.path.dirname(source))
+            self._templar.environment.loader.searchpath = searchpath
             self._task.args['src'] = self._templar.template(template_data)
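A runnable sketch of the search-path idea behind the hunk above: Jinja2 consults each directory of a FileSystemLoader in order, so earlier entries shadow later ones. The temp directories stand in for the working, role, and template dirs:

    import os
    import tempfile
    from jinja2 import Environment, FileSystemLoader

    first = tempfile.mkdtemp()   # e.g. the playbook working path
    second = tempfile.mkdtemp()  # e.g. a role path
    with open(os.path.join(second, 'motd.j2'), 'w') as f:
        f.write('hello from the role dir')
    env = Environment(loader=FileSystemLoader([first, second]))
    # 'first' has no motd.j2, so lookup falls through to 'second'
    print(env.get_template('motd.j2').render())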
@@ -322,7 +322,12 @@ class ActionModule(ActionBase):
             self._task.args['rsync_path'] = '"%s"' % rsync_path

         if use_ssh_args:
-            self._task.args['ssh_args'] = C.ANSIBLE_SSH_ARGS
+            ssh_args = [
+                getattr(self._play_context, 'ssh_args', ''),
+                getattr(self._play_context, 'ssh_common_args', ''),
+                getattr(self._play_context, 'ssh_extra_args', ''),
+            ]
+            self._task.args['ssh_args'] = ' '.join([a for a in ssh_args if a])

         # run the module and store the result
         result.update(self._execute_module('synchronize', task_vars=task_vars))
lib/ansible/plugins/cache/jsonfile.py
@@ -62,13 +62,16 @@ class CacheModule(BaseCacheModule):
         return None

     def get(self, key):
-        if self.has_expired(key) or key == "":
-            raise KeyError
+        """ This checks the in memory cache first as the fact was not expired at 'gather time'
+        and it would be problematic if the key did expire after some long running tasks and
+        user gets 'undefined' error in the same play """

         if key in self._cache:
             return self._cache.get(key)

+        if self.has_expired(key) or key == "":
+            raise KeyError
+
         cachefile = "%s/%s" % (self._cache_dir, key)
         try:
             with codecs.open(cachefile, 'r', encoding='utf-8') as f:
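A toy model of the reordering above, assuming nothing about the real plugin beyond what the hunk shows: consulting the in-memory copy first means a key that was fresh at gather time stays usable even after its timeout elapses mid-play:

    import time

    class ToyCache(object):
        def __init__(self, timeout):
            self._cache = {}
            self._stamp = {}
            self._timeout = timeout

        def set(self, key, value):
            self._cache[key] = value
            self._stamp[key] = time.time()

        def get(self, key):
            if key in self._cache:      # in-memory hit: never expires mid-play
                return self._cache[key]
            if time.time() - self._stamp.get(key, 0) > self._timeout:
                raise KeyError(key)     # the old code raised here even on hits
            # ... fall through to the persistent store here ...

    cache = ToyCache(timeout=1)
    cache.set('ansible_facts', {'os': 'Linux'})
    time.sleep(2)
    print(cache.get('ansible_facts'))   # no KeyError despite the expired timeout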
@@ -133,8 +133,12 @@ class Connection(ConnectionBase):
         ## Next, additional arguments based on the configuration.

         # sftp batch mode allows us to correctly catch failed transfers, but can
-        # be disabled if the client side doesn't support the option.
+        # be disabled if the client side doesn't support the option. However,
+        # sftp batch mode does not prompt for passwords so it must be disabled
+        # if not using controlpersist and using sshpass
         if binary == 'sftp' and C.DEFAULT_SFTP_BATCH_MODE:
+            if self._play_context.password:
+                self._add_args('disable batch mode for sshpass', ['-o', 'BatchMode=no'])
             self._command += ['-b', '-']

         self._command += ['-C']
@@ -33,7 +33,6 @@ from ansible.errors import AnsibleError, AnsibleConnectionFailure
 try:
     import winrm
     from winrm import Response
-    from winrm.exceptions import WinRMTransportError
     from winrm.protocol import Protocol
 except ImportError:
     raise AnsibleError("winrm is not installed")
@@ -63,7 +62,7 @@ class Connection(ConnectionBase):
     '''WinRM connections over HTTP/HTTPS.'''

     transport = 'winrm'
-    module_implementation_preferences = ('.ps1', '')
+    module_implementation_preferences = ('.ps1', '.exe', '')
     become_methods = []
     allow_executable = False
@@ -122,7 +121,7 @@ class Connection(ConnectionBase):

         # warn for kwargs unsupported by the installed version of pywinrm
         for arg in unsupported_args:
-            display.warning("ansible_winrm_{0} unsupported by pywinrm (are you running the right pywinrm version?)".format(arg))
+            display.warning("ansible_winrm_{0} unsupported by pywinrm (is an up-to-date version of pywinrm installed?)".format(arg))

         # arg names we're going passing directly
         internal_kwarg_mask = set(['self', 'endpoint', 'transport', 'username', 'password'])
@@ -147,9 +146,8 @@ class Connection(ConnectionBase):
         display.vvvvv('WINRM CONNECT: transport=%s endpoint=%s' % (transport, endpoint), host=self._winrm_host)
         try:
             protocol = Protocol(endpoint, transport=transport, **self._winrm_kwargs)
-            # send keepalive message to ensure we're awake
-            # TODO: is this necessary?
-            # protocol.send_message(xmltodict.unparse(rq))
+            # open the shell from connect so we know we're able to talk to the server
             if not self.shell_id:
                 self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
                 display.vvvvv('WINRM OPEN SHELL: %s' % self.shell_id, host=self._winrm_host)
@@ -163,7 +161,7 @@ class Connection(ConnectionBase):
                 if m:
                     code = int(m.groups()[0])
                     if code == 401:
-                        err_msg = 'the username/password specified for this server was incorrect'
+                        err_msg = 'the specified credentials were rejected by the server'
                     elif code == 411:
                         return protocol
             errors.append(u'%s: %s' % (transport, err_msg))
@@ -282,7 +280,7 @@ class Connection(ConnectionBase):
            try:
                result.std_err = self.parse_clixml_stream(result.std_err)
            except:
-               # unsure if we're guaranteed a valid xml doc- keep original output just in case
+               # unsure if we're guaranteed a valid xml doc- use raw output in case of error
                pass

        return (result.status_code, result.std_out, result.std_err)
@@ -294,7 +292,7 @@ class Connection(ConnectionBase):
     def parse_clixml_stream(self, clixml_doc, stream_name='Error'):
         clear_xml = clixml_doc.replace('#< CLIXML\r\n', '')
         doc = xmltodict.parse(clear_xml)
-        lines = [l.get('#text', '') for l in doc.get('Objs', {}).get('S', {}) if l.get('@S') == stream_name]
+        lines = [l.get('#text', '').replace('_x000D__x000A_', '') for l in doc.get('Objs', {}).get('S', {}) if l.get('@S') == stream_name]
         return '\r\n'.join(lines)

     # FUTURE: determine buffer size at runtime via remote winrm config?
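A small sketch of the cleanup the changed line performs, assuming xmltodict is installed: PowerShell CLIXML encodes CR/LF as the literal token _x000D__x000A_, so stripping it avoids doubled blank lines in std_err. The sample document is illustrative:

    import xmltodict

    clixml = ('<Objs xmlns="http://schemas.microsoft.com/powershell/2004/04">'
              '<S S="Error">oops_x000D__x000A_</S></Objs>')
    doc = xmltodict.parse(clixml)
    s = doc['Objs']['S']
    # a single <S> parses to a dict, multiple to a list; normalise to a list
    entries = [s] if isinstance(s, dict) else s
    lines = [l.get('#text', '').replace('_x000D__x000A_', '')
             for l in entries if l.get('@S') == 'Error']
    print('\r\n'.join(lines))  # -> oops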
@@ -15,7 +15,7 @@
 # You should have received a copy of the GNU General Public License
 # along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
 #
-# USAGE: {{ lookup('hashi_vault', 'secret=secret/hello token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200')}}
+# USAGE: {{ lookup('hashi_vault', 'secret=secret/hello:value token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200')}}
 #
 # You can skip setting the url if you set the VAULT_ADDR environment variable
 # or if you want it to default to localhost:8200
@@ -47,9 +47,23 @@ class HashiVault:
         except ImportError:
             AnsibleError("Please pip install hvac to use this module")

-        self.url = kwargs.pop('url')
-        self.secret = kwargs.pop('secret')
-        self.token = kwargs.pop('token')
+        self.url = kwargs.get('url', ANSIBLE_HASHI_VAULT_ADDR)
+
+        self.token = kwargs.get('token')
+        if self.token==None:
+            raise AnsibleError("No Vault Token specified")
+
+        # split secret arg, which has format 'secret/hello:value' into secret='secret/hello' and secret_field='value'
+        s = kwargs.get('secret')
+        if s==None:
+            raise AnsibleError("No secret specified")
+
+        s_f = s.split(':')
+        self.secret = s_f[0]
+        if len(s_f)>=2:
+            self.secret_field = s_f[1]
+        else:
+            self.secret_field = 'value'

         self.client = hvac.Client(url=self.url, token=self.token)
@@ -62,20 +76,27 @@ class HashiVault:
         data = self.client.read(self.secret)
         if data is None:
             raise AnsibleError("The secret %s doesn't seem to exist" % self.secret)
-        else:
-            return data['data']['value']
+
+        if self.secret_field=='':  # secret was specified with trailing ':'
+            return data['data']
+
+        if self.secret_field not in data['data']:
+            raise AnsibleError("The secret %s does not contain the field '%s'. " % (self.secret, self.secret_field))
+
+        return data['data'][self.secret_field]


 class LookupModule(LookupBase):

     def run(self, terms, variables, **kwargs):

         vault_args = terms[0].split(' ')
         vault_dict = {}
         ret = []

         for param in vault_args:
-            key, value = param.split('=')
+            try:
+                key, value = param.split('=')
+            except ValueError as e:
+                raise AnsibleError("hashi_vault plugin needs key=value pairs, but received %s" % terms)
             vault_dict[key] = value

         vault_conn = HashiVault(**vault_dict)
@@ -84,4 +105,6 @@ class LookupModule(LookupBase):
             key = term.split()[0]
             value = vault_conn.get()
             ret.append(value)

         return ret
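A minimal sketch of the new 'secret=path:field' parsing the hashi_vault hunks introduce, with illustrative values taken from the plugin's own usage comment:

    def split_secret(term):
        # 'secret/hello:value' -> ('secret/hello', 'value'); the field part
        # defaults to 'value' when no colon is given, as in the hunk above.
        s_f = term.split(':')
        secret = s_f[0]
        field = s_f[1] if len(s_f) >= 2 else 'value'
        return secret, field

    print(split_secret('secret/hello'))           # ('secret/hello', 'value')
    print(split_secret('secret/hello:password'))  # ('secret/hello', 'password')
    print(split_secret('secret/hello:'))          # ('secret/hello', '') -> whole dict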
@@ -50,7 +50,8 @@ class ShellBase(object):
         return os.path.join(*args)

     # some shells (eg, powershell) are snooty about filenames/extensions, this lets the shell plugin have a say
-    def get_remote_filename(self, base_name):
+    def get_remote_filename(self, pathname):
+        base_name = os.path.basename(pathname.strip())
         return base_name.strip()

     def path_has_trailing_slash(self, path):
@@ -134,7 +135,7 @@ class ShellBase(object):
         basetmp = self.join_path(basetmpdir, basefile)

         cmd = 'mkdir -p %s echo %s %s' % (self._SHELL_SUB_LEFT, basetmp, self._SHELL_SUB_RIGHT)
-        cmd += ' %s echo %s echo %s %s' % (self._SHELL_AND, self._SHELL_SUB_LEFT, basetmp, self._SHELL_SUB_RIGHT)
+        cmd += ' %s echo %s=%s echo %s %s' % (self._SHELL_AND, basefile, self._SHELL_SUB_LEFT, basetmp, self._SHELL_SUB_RIGHT)

         # change the umask in a subshell to achieve the desired mode
         # also for directories created with `mkdir -p`
@@ -164,7 +165,13 @@ class ShellBase(object):
         # don't quote the cmd if it's an empty string, because this will break pipelining mode
         if cmd.strip() != '':
             cmd = pipes.quote(cmd)
-        cmd_parts = [env_string.strip(), shebang.replace("#!", "").strip(), cmd]
+
+        cmd_parts = []
+        if shebang:
+            shebang = shebang.replace("#!", "").strip()
+        else:
+            shebang = ""
+        cmd_parts.extend([env_string.strip(), shebang, cmd])
         if arg_path is not None:
             cmd_parts.append(arg_path)
         new_cmd = " ".join(cmd_parts)
@@ -54,10 +54,12 @@ class ShellModule(object):
             return path
         return '\'%s\'' % path

-    # powershell requires that script files end with .ps1
-    def get_remote_filename(self, base_name):
-        if not base_name.strip().lower().endswith('.ps1'):
-            return base_name.strip() + '.ps1'
+    def get_remote_filename(self, pathname):
+        # powershell requires that script files end with .ps1
+        base_name = os.path.basename(pathname.strip())
+        name, ext = os.path.splitext(base_name.strip())
+        if ext.lower() not in ['.ps1', '.exe']:
+            return name + '.ps1'
         return base_name.strip()
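A runnable sketch of the extension handling above: .ps1 and .exe names pass through, anything else gets the .ps1 suffix PowerShell insists on. The paths are illustrative:

    import os

    def remote_filename(pathname):
        base_name = os.path.basename(pathname.strip())
        name, ext = os.path.splitext(base_name.strip())
        if ext.lower() not in ['.ps1', '.exe']:
            return name + '.ps1'
        return base_name.strip()

    print(remote_filename('/path/to/win_ping'))        # win_ping.ps1
    print(remote_filename('/path/to/helloworld.exe'))  # helloworld.exe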
@@ -146,6 +148,10 @@ class ShellModule(object):
             cmd_parts.insert(0, '&')
         elif shebang and shebang.startswith('#!'):
             cmd_parts.insert(0, shebang[2:])
+        elif not shebang:
+            # The module is assumed to be a binary
+            cmd_parts[0] = self._unquote(cmd_parts[0])
+            cmd_parts.append(arg_path)
         script = '''
             Try
             {
@@ -349,7 +349,7 @@ class StrategyBase:
             # be a host that is not really in inventory at all
             if task.delegate_to is not None and task.delegate_facts:
                 task_vars = self._variable_manager.get_vars(loader=self._loader, play=iterator._play, host=host, task=task)
-                task_vars = self.add_tqm_variables(task_vars, play=iterator._play)
+                self.add_tqm_variables(task_vars, play=iterator._play)
                 loop_var = 'item'
                 if task.loop_control:
                     loop_var = task.loop_control.loop_var or 'item'
@@ -377,9 +377,9 @@ class StrategyBase:
                     facts = result[4]
                     for target_host in host_list:
                         if task.action == 'set_fact':
-                            self._variable_manager.set_nonpersistent_facts(target_host, facts)
+                            self._variable_manager.set_nonpersistent_facts(target_host, facts.copy())
                         else:
-                            self._variable_manager.set_host_facts(target_host, facts)
+                            self._variable_manager.set_host_facts(target_host, facts.copy())
             elif result[0].startswith('v2_runner_item') or result[0] == 'v2_runner_retry':
                 self._tqm.send_callback(result[0], result[1])
             elif result[0] == 'v2_on_file_diff':
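A toy illustration of the aliasing bug the .copy() calls above avoid: without the copy, every target host stores a reference to the same dict, so a later mutation for one host leaks to all of them.

    facts = {'role': 'db'}
    shared = {}
    for host in ('alpha', 'beta'):
        shared[host] = facts           # every host points at the same dict
    shared['alpha']['role'] = 'web'
    print(shared['beta']['role'])      # 'web' -- the change leaked across hosts

    facts = {'role': 'db'}
    isolated = {}
    for host in ('alpha', 'beta'):
        isolated[host] = facts.copy()  # independent snapshot per host
    isolated['alpha']['role'] = 'web'
    print(isolated['beta']['role'])    # 'db' -- no cross-host bleed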
@@ -58,7 +58,7 @@ class StrategyModule(StrategyBase):
         work_to_do = True
         while work_to_do and not self._tqm._terminated:

-            hosts_left = [host for host in self._inventory.get_hosts(iterator._play.hosts) if host.name not in self._tqm._unreachable_hosts and not iterator.is_failed(host)]
+            hosts_left = [host for host in self._inventory.get_hosts(iterator._play.hosts) if host.name not in self._tqm._unreachable_hosts]
             if len(hosts_left) == 0:
                 self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
                 result = False
@@ -123,6 +123,7 @@ class StrategyModule(StrategyBase):
                     # if there is metadata, check to see if the allow_duplicates flag was set to true
                     if task._role._metadata is None or task._role._metadata and not task._role._metadata.allow_duplicates:
                         display.debug("'%s' skipped because role has already run" % task)
+                        del self._blocked_hosts[host_name]
                         continue

                 if task.action == 'meta':
@@ -191,6 +192,9 @@ class StrategyModule(StrategyBase):
             # pause briefly so we don't spin lock
             time.sleep(0.001)

+        # collect all the final results
+        results = self._wait_on_pending_results(iterator)
+
         # run the base class run() method, which executes the cleanup function
         # and runs any outstanding handlers which have been triggered
         return super(StrategyModule, self).run(iterator, play_context, result)
@@ -163,7 +163,7 @@ class StrategyModule(StrategyBase):

             try:
                 display.debug("getting the remaining hosts for this loop")
-                hosts_left = [host for host in self._inventory.get_hosts(iterator._play.hosts) if host.name not in self._tqm._unreachable_hosts and not iterator.is_failed(host)]
+                hosts_left = [host for host in self._inventory.get_hosts(iterator._play.hosts) if host.name not in self._tqm._unreachable_hosts]
                 display.debug("done getting the remaining hosts for this loop")

                 # queue up this task for each host in the inventory
@@ -34,7 +34,7 @@ options:
         aliases: ['pass', 'pwd']
     org:
         description:
-            - The org to login to for creating vapp, mostly set when the service_type is vdc.
+            - The org to login to for creating vapp. This option is required when the C(service_type) is I(vdc).
         required: false
         default: None
     instance_id:
@@ -324,7 +324,7 @@ class VariableManager:

         if task:
             if task._role:
-                all_vars = combine_vars(all_vars, task._role.get_vars())
+                all_vars = combine_vars(all_vars, task._role.get_vars(include_params=False))
                 all_vars = combine_vars(all_vars, task._role.get_role_params(task._block._dep_chain))
             all_vars = combine_vars(all_vars, task.get_vars())
@@ -99,3 +99,9 @@ class HostVars(collections.Mapping):
     def __len__(self):
         return len(self._inventory.get_hosts(ignore_limits_and_restrictions=True))

+    def __repr__(self):
+        out = {}
+        for host in self._inventory.get_hosts(ignore_limits_and_restrictions=True):
+            name = host.name
+            out[name] = self.get(name)
+        return repr(out)
@@ -3,7 +3,7 @@
 BASEDIR=${1-"."}

 URLLIB_USERS=$(find "$BASEDIR" -name '*.py' -exec grep -H urlopen \{\} \;)
-URLLIB_USERS=$(echo "$URLLIB_USERS" | sed '/\(\n\|lib\/ansible\/module_utils\/urls.py\|lib\/ansible\/compat\/six\/_six.py\|.tox\)/d')
+URLLIB_USERS=$(echo "$URLLIB_USERS" | sed '/\(\n\|lib\/ansible\/module_utils\/urls.py\|lib\/ansible\/module_utils\/six.py\|lib\/ansible\/compat\/six\/_six.py\|.tox\)/d')
 URLLIB_USERS=$(echo "$URLLIB_USERS" | sed '/^[^:]\+:#/d')
 if test -n "$URLLIB_USERS" ; then
     printf "$URLLIB_USERS"
@@ -23,7 +23,9 @@ VAULT_PASSWORD_FILE = vault-password
 CONSUL_RUNNING := $(shell python consul_running.py)
 EUID := $(shell id -u -r)

-all: setup test_test_infra parsing test_var_precedence unicode test_templating_settings environment test_connection non_destructive destructive includes blocks pull check_mode test_hash test_handlers test_group_by test_vault test_tags test_lookup_paths no_log test_gathering_facts
+UNAME := $(shell uname | tr '[:upper:]' '[:lower:]')
+
+all: setup test_test_infra parsing test_var_precedence unicode test_templating_settings environment test_connection non_destructive destructive includes blocks pull check_mode test_hash test_handlers test_group_by test_vault test_tags test_lookup_paths no_log test_gathering_facts test_binary_modules

 test_test_infra:
 	# ensure fail/assert work locally and can stop execution with non-zero exit code
@@ -284,3 +286,17 @@ test_lookup_paths: setup
 no_log: setup
 	# This test expects 7 loggable vars and 0 non loggable ones, if either mismatches it fails, run the ansible-playbook command to debug
 	[ "$$(ansible-playbook no_log_local.yml -i $(INVENTORY) -e outputdir=$(TEST_DIR) -vvvvv | awk --source 'BEGIN { logme = 0; nolog = 0; } /LOG_ME/ { logme += 1;} /DO_NOT_LOG/ { nolog += 1;} END { printf "%d/%d", logme, nolog; }')" = "6/0" ]
+
+test_binary_modules:
+	mytmpdir=$(MYTMPDIR); \
+	ls -al $$mytmpdir; \
+	curl https://storage.googleapis.com/golang/go1.6.2.$(UNAME)-amd64.tar.gz | tar -xz -C $$mytmpdir; \
+	[ $$? != 0 ] && wget -qO- https://storage.googleapis.com/golang/go1.6.2.$(UNAME)-amd64.tar.gz | tar -xz -C $$mytmpdir; \
+	ls -al $$mytmpdir; \
+	cd library; \
+	GOROOT=$$mytmpdir/go GOOS=linux GOARCH=amd64 $$mytmpdir/go/bin/go build -o helloworld_linux helloworld.go; \
+	GOROOT=$$mytmpdir/go GOOS=windows GOARCH=amd64 $$mytmpdir/go/bin/go build -o helloworld_win32nt.exe helloworld.go; \
+	GOROOT=$$mytmpdir/go GOOS=darwin GOARCH=amd64 $$mytmpdir/go/bin/go build -o helloworld_darwin helloworld.go; \
+	cd ..; \
+	rm -rf $$mytmpdir; \
+	ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook test_binary_modules.yml -i $(INVENTORY) -v $(TEST_FLAGS)
@@ -21,4 +21,5 @@
     - { role: test_zypper, tags: test_zypper}
     - { role: test_zypper_repository, tags: test_zypper_repository}
     - { role: test_uri, tags: test_uri }
+    - { role: test_get_url, tags: test_get_url }
    - { role: test_apache2_module, tags: test_apache2_module }
test/integration/library/.gitignore (new file)
@@ -0,0 +1 @@
+helloworld_*
test/integration/library/helloworld.go (new file)
@@ -0,0 +1,89 @@
+// This file is part of Ansible
+//
+// Ansible is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+//
+// Ansible is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
+
+package main
+
+import (
+    "encoding/json"
+    "fmt"
+    "io/ioutil"
+    "os"
+)
+
+type ModuleArgs struct {
+    Name string
+}
+
+type Response struct {
+    Msg     string `json:"msg"`
+    Changed bool   `json:"changed"`
+    Failed  bool   `json:"failed"`
+}
+
+func ExitJson(responseBody Response) {
+    returnResponse(responseBody)
+}
+
+func FailJson(responseBody Response) {
+    responseBody.Failed = true
+    returnResponse(responseBody)
+}
+
+func returnResponse(responseBody Response) {
+    var response []byte
+    var err error
+    response, err = json.Marshal(responseBody)
+    if err != nil {
+        response, _ = json.Marshal(Response{Msg: "Invalid response object"})
+    }
+    fmt.Println(string(response))
+    if responseBody.Failed {
+        os.Exit(1)
+    } else {
+        os.Exit(0)
+    }
+}
+
+func main() {
+    var response Response
+
+    if len(os.Args) != 2 {
+        response.Msg = "No argument file provided"
+        FailJson(response)
+    }
+
+    argsFile := os.Args[1]
+
+    text, err := ioutil.ReadFile(argsFile)
+    if err != nil {
+        response.Msg = "Could not read configuration file: " + argsFile
+        FailJson(response)
+    }
+
+    var moduleArgs ModuleArgs
+    err = json.Unmarshal(text, &moduleArgs)
+    if err != nil {
+        response.Msg = "Configuration file not valid JSON: " + argsFile
+        FailJson(response)
+    }
+
+    var name string = "World"
+    if moduleArgs.Name != "" {
+        name = moduleArgs.Name
+    }
+
+    response.Msg = "Hello, " + name + "!"
+    ExitJson(response)
+}
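A sketch of the binary-module contract helloworld.go implements: the controller writes the module args as JSON to a file, passes its path as argv[1], and reads one JSON object back on stdout. The binary path below is hypothetical and assumes the Makefile target above has built it:

    import json
    import subprocess
    import tempfile

    with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
        json.dump({'Name': 'Ansible'}, f)
    out = subprocess.check_output(['./library/helloworld_linux', f.name])
    # -> {'msg': 'Hello, Ansible!', 'changed': False, 'failed': False}
    print(json.loads(out.decode('utf-8')))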
@@ -38,7 +38,6 @@
     - { role: test_command_shell, tags: test_command_shell }
     - { role: test_script, tags: test_script }
     - { role: test_authorized_key, tags: test_authorized_key }
-    - { role: test_get_url, tags: test_get_url }
     - { role: test_embedded_module, tags: test_embedded_module }
     - { role: test_add_host, tags: test_add_host }
     - { role: test_binary, tags: test_binary }
@@ -0,0 +1,3 @@
+badssl_host: wrong.host.badssl.com
+httpbin_host: httpbin.org
+sni_host: sni.velox.ch
test/integration/roles/prepare_http_tests/tasks/main.yml (new file)
@@ -0,0 +1,35 @@
+# The docker --link functionality gives us an ENV var we can key off of to see if we have access to
+# the httptester container
+- set_fact:
+    has_httptester: "{{ lookup('env', 'ANSIBLE.HTTP.TESTS_PORT_80_TCP_ADDR') != '' }}"
+
+# If we are running with access to a httptester container, grab it's cacert and install it
+- block:
+    # Override hostname defaults with httptester linked names
+    - include_vars: httptester.yml
+
+    - name: RedHat - Enable the dynamic CA configuration feature
+      command: update-ca-trust force-enable
+      when: ansible_os_family == 'RedHat'
+
+    - name: RedHat - Retrieve test cacert
+      get_url:
+        url: "http://ansible.http.tests/cacert.pem"
+        dest: "/etc/pki/ca-trust/source/anchors/ansible.pem"
+      when: ansible_os_family == 'RedHat'
+
+    - name: Debian - Retrieve test cacert
+      get_url:
+        url: "http://ansible.http.tests/cacert.pem"
+        dest: "/usr/local/share/ca-certificates/ansible.crt"
+      when: ansible_os_family == 'Debian'
+
+    - name: Redhat - Update ca trust
+      command: update-ca-trust extract
+      when: ansible_os_family == 'RedHat'
+
+    - name: Debian - Update ca certificates
+      command: update-ca-certificates
+      when: ansible_os_family == 'Debian'
+
+  when: has_httptester|bool
@@ -0,0 +1,4 @@
+# these are fake hostnames provided by docker link for the httptester container
+badssl_host: fail.ansible.http.tests
+httpbin_host: ansible.http.tests
+sni_host: sni1.ansible.http.tests
@@ -52,15 +52,15 @@
     loop_var: postgresql_package_item
   when: ansible_pkg_mgr == 'apt'

-- name: Initialize postgres (systemd)
+- name: Initialize postgres (RedHat systemd)
   command: postgresql-setup initdb
   when: ansible_distribution == "Fedora" or (ansible_os_family == "RedHat" and ansible_distribution_major_version|int >= 7)

-- name: Initialize postgres (sysv)
+- name: Initialize postgres (RedHat sysv)
   command: /sbin/service postgresql initdb
   when: ansible_os_family == "RedHat" and ansible_distribution_major_version|int <= 6

-- name: Iniitalize postgres (upstart)
+- name: Iniitalize postgres (Debian)
   command: /usr/bin/pg_createcluster {{ pg_ver }} main
   # Sometimes package install creates the db cluster, sometimes this step is needed
   ignore_errors: True
test/integration/roles/setup_postgresql_db/vars/Debian-8.yml (new file)
@@ -0,0 +1,10 @@
+postgresql_service: "postgresql"
+
+postgresql_packages:
+  - "postgresql"
+  - "postgresql-common"
+  - "python-psycopg2"
+
+pg_hba_location: "/etc/postgresql/9.4/main/pg_hba.conf"
+pg_dir: "/var/lib/postgresql/9.4/main"
+pg_ver: 9.4
@@ -0,0 +1,10 @@
+postgresql_service: "postgresql"
+
+postgresql_packages:
+  - "postgresql"
+  - "postgresql-common"
+  - "python-psycopg2"
+
+pg_hba_location: "/etc/postgresql/9.5/main/pg_hba.conf"
+pg_dir: "/var/lib/postgresql/9.5/main"
+pg_ver: 9.5
@@ -21,11 +21,11 @@
   zypper: name=apache2 state=present
   when: "ansible_os_family == 'Suse'"

-- name: disable alias module
-  apache2_module: name=alias state=absent
+- name: disable userdir module
+  apache2_module: name=userdir state=absent

-- name: disable alias module, second run
-  apache2_module: name=alias state=absent
+- name: disable userdir module, second run
+  apache2_module: name=userdir state=absent
   register: disable

 - name: ensure apache2_module is idempotent
@@ -33,8 +33,8 @@
     that:
       - 'not disable.changed'

-- name: enable alias module
-  apache2_module: name=alias state=present
+- name: enable userdir module
+  apache2_module: name=userdir state=present
   register: enable

 - name: ensure changed on successful enable
@@ -42,8 +42,8 @@
     that:
      - 'enable.changed'

-- name: enable alias module, second run
-  apache2_module: name=alias state=present
+- name: enable userdir module, second run
+  apache2_module: name=userdir state=present
   register: enabletwo

 - name: ensure apache2_module is idempotent
@ -51,8 +51,8 @@
|
||||||
that:
|
that:
|
||||||
- 'not enabletwo.changed'
|
- 'not enabletwo.changed'
|
||||||
|
|
||||||
- name: disable alias module, final run
|
- name: disable userdir module, final run
|
||||||
apache2_module: name=alias state=absent
|
apache2_module: name=userdir state=absent
|
||||||
register: disablefinal
|
register: disablefinal
|
||||||
|
|
||||||
- name: ensure changed on successful disable
|
- name: ensure changed on successful disable
|
||||||
|
|
|
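The renamed tasks keep the test's idempotency pattern intact: apply the same module twice, register the second result, and assert that nothing changed. A minimal generic sketch of that pattern (module and path are illustrative):

    - name: first run should report a change
      file: path=/tmp/idempotency_check state=touch
      register: first_run

    - name: repeating an idempotent task should not
      file: path=/tmp/idempotency_check state=file
      register: second_run

    - assert:
        that:
          - 'first_run.changed'
          - 'not second_run.changed'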
@@ -21,7 +21,7 @@
   register: apt_result

 - name: check hello with dpkg
-  shell: dpkg --get-selections | fgrep hello
+  shell: dpkg-query -l hello
   failed_when: False
   register: dpkg_result

@@ -47,7 +47,7 @@
   register: apt_result

 - name: check hello with dpkg
-  shell: dpkg --get-selections | fgrep hello
+  shell: dpkg-query -l hello
   failed_when: False
   register: dpkg_result

@@ -89,7 +89,7 @@
   register: apt_result

 - name: check hello with wildcard with dpkg
-  shell: dpkg --get-selections | fgrep hello
+  shell: dpkg-query -l hello
   failed_when: False
   register: dpkg_result

@@ -103,10 +103,10 @@
   - "dpkg_result.rc == 0"

 - name: check hello version
-  shell: dpkg -s hello | grep Version | sed -r 's/Version:\s+([a-zA-Z0-9.-]+)\s*$/\1/'
+  shell: dpkg -s hello | grep Version | awk '{print $2}'
   register: hello_version
 - name: check hello architecture
-  shell: dpkg -s hello | grep Architecture | sed -r 's/Architecture:\s+([a-zA-Z0-9.-]+)\s*$/\1/'
+  shell: dpkg -s hello | grep Architecture | awk '{print $2}'
   register: hello_architecture

 - name: uninstall hello with apt
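The switch to dpkg-query tightens the check: `dpkg --get-selections | fgrep hello` matches any selection line that merely contains "hello", while `dpkg-query -l hello` should exit non-zero when no package matches, so the registered `rc` becomes the signal. Condensed from the hunks above:

    - name: check whether hello is installed
      shell: dpkg-query -l hello
      failed_when: False
      register: dpkg_result

    - assert:
        that:
          - "dpkg_result.rc == 0"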
@@ -17,5 +17,5 @@
 # along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

 - include: 'apt.yml'
-  when: ansible_distribution in ('Ubuntu', 'Debian')
+  when: ansible_distribution in ('Ubuntu') and ansible_distribution_version|version_compare('16.04', '<')
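The new condition relies on the `version_compare` filter, which orders version strings numerically rather than lexically (so '9.04' sorts below '16.04', where a plain string comparison would not). A small illustrative use of the same filter:

    - name: run only on pre-16.04 Ubuntu
      debug: msg="legacy apt code path"
      when: ansible_distribution == 'Ubuntu' and ansible_distribution_version|version_compare('16.04', '<')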
test/integration/roles/test_binary_modules/tasks/main.yml (new file, 54 lines)
@@ -0,0 +1,54 @@
+- debug: var=ansible_system
+
+- name: ping
+  ping:
+  when: ansible_system != 'Win32NT'
+
+- name: win_ping
+  win_ping:
+  when: ansible_system == 'Win32NT'
+
+- name: Hello, World!
+  action: "helloworld_{{ ansible_system|lower }}"
+  register: hello_world
+
+- assert:
+    that:
+      - 'hello_world.msg == "Hello, World!"'
+
+- name: Hello, Ansible!
+  action: "helloworld_{{ ansible_system|lower }}"
+  args:
+    name: Ansible
+  register: hello_ansible
+
+- assert:
+    that:
+      - 'hello_ansible.msg == "Hello, Ansible!"'
+
+- name: Async Hello, World!
+  action: "helloworld_{{ ansible_system|lower }}"
+  async: 1
+  poll: 1
+  when: ansible_system != 'Win32NT'
+  register: async_hello_world
+
+- assert:
+    that:
+      - 'async_hello_world.msg == "Hello, World!"'
+  when: not async_hello_world|skipped
+
+- name: Async Hello, Ansible!
+  action: "helloworld_{{ ansible_system|lower }}"
+  args:
+    name: Ansible
+  async: 1
+  poll: 1
+  when: ansible_system != 'Win32NT'
+  register: async_hello_ansible
+
+- assert:
+    that:
+      - 'async_hello_ansible.msg == "Hello, Ansible!"'
+  when: not async_hello_ansible|skipped
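The templated `action:` line is what exercises the per-platform binary modules: the module name is resolved from facts at run time. As an illustration (assuming a Linux host, where `ansible_system` is 'Linux'), the first test task is equivalent to:

    # action: "helloworld_{{ ansible_system|lower }}" renders as:
    - name: Hello, World!
      helloworld_linux:
      register: hello_world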
@@ -1,3 +1,4 @@
 dependencies:
   - prepare_tests
+  - prepare_http_tests
@@ -66,27 +66,21 @@
   - result.failed

 - name: test https fetch
-  get_url: url="https://raw.githubusercontent.com/ansible/ansible/devel/README.md" dest={{output_dir}}/get_url.txt force=yes
+  get_url: url="https://{{ httpbin_host }}/get" dest={{output_dir}}/get_url.txt force=yes
   register: result

 - name: assert the get_url call was successful
   assert:
     that:
       - result.changed
       - '"OK" in result.msg'

 - name: test https fetch to a site with mismatched hostname and certificate
   get_url:
-    url: "https://www.kennethreitz.org/"
+    url: "https://{{ badssl_host }}/"
     dest: "{{ output_dir }}/shouldnotexist.html"
   ignore_errors: True
   register: result
-  # kennethreitz having trouble staying up.  Eventually need to install our own
-  # certs & web server to test this... also need to install and test it with
-  # a proxy so the complications are inevitable
-  until: "'read operation timed out' not in result.msg"
-  retries: 30
-  delay: 10

 - stat:
     path: "{{ output_dir }}/shouldnotexist.html"

@@ -101,16 +95,13 @@
 - name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no
   get_url:
-    url: "https://www.kennethreitz.org/"
-    dest: "{{ output_dir }}/kreitz.html"
+    url: "https://{{ badssl_host }}/"
+    dest: "{{ output_dir }}/get_url_no_validate.html"
     validate_certs: no
   register: result
-  until: "'read operation timed out' not in result.msg"
-  retries: 30
-  delay: 10

 - stat:
-    path: "{{ output_dir }}/kreitz.html"
+    path: "{{ output_dir }}/get_url_no_validate.html"
   register: stat_result

 - name: Assert that the file was downloaded

@@ -119,48 +110,44 @@
   - "result.changed == true"
   - "stat_result.stat.exists == true"

-# At the moment, AWS can't make an https request to velox.ch... connection
-# timed out.  So we'll use a different test until/unless the problem is resolved
-## SNI Tests
-## SNI is only built into the stdlib from python-2.7.9 onwards
-#- name: Test that SNI works
-#  get_url:
-#    # A test site that returns a page with information on what SNI information
-#    # the client sent.  A failure would have the string: did not send a TLS server name indication extension
-#    url: 'https://foo.sni.velox.ch/'
-#    dest: "{{ output_dir }}/sni.html"
-#  register: get_url_result
-#  ignore_errors: True
-#
-#- command: "grep 'sent the following TLS server name indication extension' {{ output_dir}}/sni.html"
-#  register: data_result
-#  when: "{{ python_has_ssl_context }}"
-#
-#- debug: var=get_url_result
-#- name: Assert that SNI works with this python version
-#  assert:
-#    that:
-#      - 'data_result.rc == 0'
-#      - '"failed" not in get_url_result'
-#  when: "{{ python_has_ssl_context }}"
-#
-## If the client doesn't support SNI then get_url should have failed with a certificate mismatch
-#- name: Assert that hostname verification failed because SNI is not supported on this version of python
-#  assert:
-#    that:
-#      - 'get_url_result["failed"]'
-#  when: "{{ not python_has_ssl_context }}"
+# SNI Tests
+# SNI is only built into the stdlib from python-2.7.9 onwards
+- name: Test that SNI works
+  get_url:
+    url: 'https://{{ sni_host }}/'
+    dest: "{{ output_dir }}/sni.html"
+  register: get_url_result
+  ignore_errors: True
+
+- command: "grep '{{ sni_host }}' {{ output_dir}}/sni.html"
+  register: data_result
+  when: "{{ python_has_ssl_context }}"
+
+- debug: var=get_url_result
+- name: Assert that SNI works with this python version
+  assert:
+    that:
+      - 'data_result.rc == 0'
+      - '"failed" not in get_url_result'
+  when: "{{ python_has_ssl_context }}"
+
+# If the client doesn't support SNI then get_url should have failed with a certificate mismatch
+- name: Assert that hostname verification failed because SNI is not supported on this version of python
+  assert:
+    that:
+      - 'get_url_result["failed"]'
+  when: "{{ not python_has_ssl_context }}"

 # These tests are just side effects of how the site is hosted.  It's not
 # specifically a test site.  So the tests may break due to the hosting changing
 - name: Test that SNI works
   get_url:
-    url: 'https://www.mnot.net/blog/2014/05/09/if_you_can_read_this_youre_sniing'
+    url: 'https://{{ sni_host }}/'
     dest: "{{ output_dir }}/sni.html"
   register: get_url_result
   ignore_errors: True

-- command: "grep '<h2>If You Can Read This, You.re SNIing</h2>' {{ output_dir}}/sni.html"
+- command: "grep '{{ sni_host }}' {{ output_dir}}/sni.html"
   register: data_result
   when: "{{ python_has_ssl_context }}"

@@ -182,12 +169,12 @@
 - name: Test get_url with redirect
   get_url:
-    url: 'http://httpbin.org/redirect/6'
+    url: 'http://{{ httpbin_host }}/redirect/6'
     dest: "{{ output_dir }}/redirect.json"

 - name: Test that setting file modes work
   get_url:
-    url: 'http://httpbin.org/'
+    url: 'http://{{ httpbin_host }}/'
     dest: '{{ output_dir }}/test'
     mode: '0707'
   register: result

@@ -204,7 +191,7 @@
 - name: Test that setting file modes on an already downloaded file work
   get_url:
-    url: 'http://httpbin.org/'
+    url: 'http://{{ httpbin_host }}/'
     dest: '{{ output_dir }}/test'
     mode: '0070'
   register: result
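Throughout these hunks the hard-coded endpoints move into `httpbin_host`, `badssl_host`, and `sni_host`, which the newly added `prepare_http_tests` dependency is presumably responsible for supplying. A hedged sketch of what such a vars file might contain (the values are assumptions, not from this commit):

    httpbin_host: httpbin.org
    badssl_host: wrong.host.badssl.com
    sni_host: sni.example.org   # hypothetical SNI-only host

Note that conditionals kept verbatim from the commit, such as `when: "{{ python_has_ssl_context }}"`, wrap the variable in a template; later Ansible style prefers the bare form `when: python_has_ssl_context`.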
@@ -84,7 +84,7 @@
 # ENV LOOKUP

 - name: get first environment var name
-  shell: env | head -n1 | cut -d\= -f1
+  shell: env | fgrep -v '.' | head -n1 | cut -d\= -f1
   register: known_var_name

 - name: get first environment var value
@@ -13,7 +13,7 @@
 - include: 'sysv_setup.yml'
   when: ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] and (ansible_distribution_version|version_compare('6', '>=') and ansible_distribution_version|version_compare('7', '<'))
 - include: 'systemd_setup.yml'
-  when: (ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] and (ansible_distribution_version|version_compare('7', '>=') and ansible_distribution_version|version_compare('8', '<'))) or ansible_distribution == 'Fedora' or (ansible_distribution == 'Ubuntu' and ansible_distribution_version|version_compare('15.04', '>='))
+  when: (ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux'] and (ansible_distribution_version|version_compare('7', '>=') and ansible_distribution_version|version_compare('8', '<'))) or ansible_distribution == 'Fedora' or (ansible_distribution == 'Ubuntu' and ansible_distribution_version|version_compare('15.04', '>=')) or (ansible_distribution == 'Debian' and ansible_distribution_version|version_compare('8', '>='))
 - include: 'upstart_setup.yml'
   when: ansible_distribution == 'Ubuntu' and ansible_distribution_version|version_compare('15.04', '<')
@@ -1,18 +1,18 @@
 - name: install the systemd unit file
-  copy: src=ansible.systemd dest=/usr/lib/systemd/system/ansible_test.service
+  copy: src=ansible.systemd dest=/etc/systemd/system/ansible_test.service
   register: install_systemd_result

 - name: install a broken systemd unit file
-  file: src=ansible_test.service path=/usr/lib/systemd/system/ansible_test_broken.service state=link
+  file: src=ansible_test.service path=/etc/systemd/system/ansible_test_broken.service state=link
   register: install_broken_systemd_result

 - name: assert that the systemd unit file was installed
   assert:
     that:
-      - "install_systemd_result.dest == '/usr/lib/systemd/system/ansible_test.service'"
+      - "install_systemd_result.dest == '/etc/systemd/system/ansible_test.service'"
       - "install_systemd_result.state == 'file'"
       - "install_systemd_result.mode == '0644'"
       - "install_systemd_result.checksum == 'ca4b413fdf3cb2002f51893b9e42d2e449ec5afb'"
-      - "install_broken_systemd_result.dest == '/usr/lib/systemd/system/ansible_test_broken.service'"
+      - "install_broken_systemd_result.dest == '/etc/systemd/system/ansible_test_broken.service'"
       - "install_broken_systemd_result.state == 'link'"
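Moving the unit file from /usr/lib/systemd/system to /etc/systemd/system is the portable choice: /usr/lib (or /lib on Debian-family systems) is reserved for units shipped by packages, while /etc holds locally installed units and exists on every systemd distribution. A short sketch of the install step plus the reload systemd generally wants afterwards (the daemon-reload task is an illustrative addition, not part of this diff):

    - name: install a locally managed unit
      copy: src=ansible.systemd dest=/etc/systemd/system/ansible_test.service

    - name: make systemd re-read unit files
      command: systemctl daemon-reload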
@@ -1,2 +1,3 @@
 dependencies:
   - prepare_tests
+  - prepare_http_tests
@@ -94,16 +94,10 @@
 - name: test https fetch to a site with mismatched hostname and certificate
   uri:
-    url: "https://www.kennethreitz.org/"
+    url: "https://{{ badssl_host }}/"
     dest: "{{ output_dir }}/shouldnotexist.html"
   ignore_errors: True
   register: result
-  # kennethreitz having trouble staying up.  Eventually need to install our own
-  # certs & web server to test this... also need to install and test it with
-  # a proxy so the complications are inevitable
-  until: "'read operation timed out' not in result.msg"
-  retries: 30
-  delay: 10

 - stat:
     path: "{{ output_dir }}/shouldnotexist.html"

@@ -123,13 +117,10 @@
 - name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no
   uri:
-    url: "https://www.kennethreitz.org/"
+    url: "https://{{ badssl_host }}/"
     dest: "{{ output_dir }}/kreitz.html"
     validate_certs: no
   register: result
-  until: "'read operation timed out' not in result.msg"
-  retries: 30
-  delay: 10

 - stat:
     path: "{{ output_dir }}/kreitz.html"

@@ -143,7 +134,7 @@
 - name: test redirect without follow_redirects
   uri:
-    url: 'http://httpbin.org/redirect/2'
+    url: 'http://{{ httpbin_host }}/redirect/2'
     follow_redirects: 'none'
     status_code: 302
   register: result

@@ -151,21 +142,21 @@
 - name: Assert location header
   assert:
     that:
-      - 'result.location|default("") == "http://httpbin.org/relative-redirect/1"'
+      - 'result.location|default("") == "http://{{ httpbin_host }}/relative-redirect/1"'

 - name: Check SSL with redirect
   uri:
-    url: 'https://httpbin.org/redirect/2'
+    url: 'https://{{ httpbin_host }}/redirect/2'
   register: result

 - name: Assert SSL with redirect
   assert:
     that:
-      - 'result.url|default("") == "https://httpbin.org/get"'
+      - 'result.url|default("") == "https://{{ httpbin_host }}/get"'

 - name: redirect to bad SSL site
   uri:
-    url: 'http://wrong.host.badssl.com'
+    url: 'http://{{ badssl_host }}'
   register: result
   ignore_errors: true

@@ -173,30 +164,30 @@
   assert:
     that:
       - result|failed
-      - '"wrong.host.badssl.com" in result.msg'
+      - 'badssl_host in result.msg'

 - name: test basic auth
   uri:
-    url: 'http://httpbin.org/basic-auth/user/passwd'
+    url: 'http://{{ httpbin_host }}/basic-auth/user/passwd'
     user: user
     password: passwd

 - name: test basic forced auth
   uri:
-    url: 'http://httpbin.org/hidden-basic-auth/user/passwd'
+    url: 'http://{{ httpbin_host }}/hidden-basic-auth/user/passwd'
     force_basic_auth: true
     user: user
     password: passwd

 - name: test PUT
   uri:
-    url: 'http://httpbin.org/put'
+    url: 'http://{{ httpbin_host }}/put'
     method: PUT
     body: 'foo=bar'

 - name: test OPTIONS
   uri:
-    url: 'http://httpbin.org/'
+    url: 'http://{{ httpbin_host }}/'
     method: OPTIONS
   register: result

@@ -211,9 +202,13 @@
   set_fact:
     is_ubuntu_precise: "{{ ansible_distribution == 'Ubuntu' and ansible_distribution_release == 'precise' }}"

+# These tests are just side effects of how the site is hosted.  It's not
+# specifically a test site.  So the tests may break due to the hosting
+# changing. Eventually we need to standup a webserver with SNI as part of the
+# test run.
 - name: Test that SNI succeeds on python versions that have SNI
   uri:
-    url: 'https://sni.velox.ch'
+    url: 'https://{{ sni_host }}/'
     return_content: true
   when: ansible_python.has_sslcontext
   register: result

@@ -222,12 +217,12 @@
   assert:
     that:
       - result|success
-      - '"Great! Your client" in result.content'
+      - 'sni_host == result.content'
   when: ansible_python.has_sslcontext

 - name: Verify SNI verification fails on old python without urllib3 contrib
   uri:
-    url: 'https://sni.velox.ch'
+    url: 'https://{{ sni_host }}'
   ignore_errors: true
   when: not ansible_python.has_sslcontext
   register: result

@@ -253,7 +248,7 @@
 - name: Verify SNI verification succeeds on old python with urllib3 contrib
   uri:
-    url: 'https://sni.velox.ch'
+    url: 'https://{{ sni_host }}'
     return_content: true
   when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
   register: result

@@ -262,7 +257,7 @@
   assert:
     that:
       - result|success
-      - '"Great! Your client" in result.content'
+      - 'sni_host == result.content'
   when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool

 - name: Uninstall ndg-httpsclient and urllib3

@@ -282,7 +277,7 @@
 - name: validate the status_codes are correct
   uri:
-    url: https://httpbin.org/status/202
+    url: "https://{{ httpbin_host }}/status/202"
     status_code: 202
     method: POST
     body: foo
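One pattern worth noting in the assertions above: entries under `that:` are already Jinja expressions, so a variable can appear bare (`badssl_host in result.msg`) or be templated inside a quoted string (`"https://{{ httpbin_host }}/get"`). A compact sketch combining both forms (assuming httpbin's usual redirect behaviour):

    - name: fetch through the parametrized host
      uri:
        url: "http://{{ httpbin_host }}/redirect/2"
      register: result

    - assert:
        that:
          - 'result.url|default("") == "http://{{ httpbin_host }}/get"'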
@ -7,3 +7,8 @@ uri_os_packages:
|
||||||
- python-pyasn1
|
- python-pyasn1
|
||||||
- python-openssl
|
- python-openssl
|
||||||
- python-urllib3
|
- python-urllib3
|
||||||
|
|
||||||
|
# Needs to be a url to a site that is hosted using SNI.
|
||||||
|
# Eventually we should make this a test server that we stand up as part of the test run.
|
||||||
|
#SNI_URI: 'https://sni.velox.ch'
|
||||||
|
SNI_URI: "https://www.mnot.net/blog/2014/05/09/if_you_can_read_this_youre_sniing"
|
||||||
|
|
|
@@ -1,7 +1,6 @@
 ---

 test_win_get_url_link: http://docs.ansible.com
-test_win_get_url_path: "C:\\Users\\{{ansible_ssh_user}}\\docs_index.html"
 test_win_get_url_invalid_link: http://docs.ansible.com/skynet_module.html
 test_win_get_url_invalid_path: "Q:\\Filez\\Cyberdyne.html"
-test_win_get_url_dir_path: "C:\\Users\\{{ansible_ssh_user}}"
+test_win_get_url_path: "{{ test_win_get_url_dir_path }}\\docs_index.html"
@@ -16,6 +16,14 @@
 # You should have received a copy of the GNU General Public License
 # along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

+- name: get tempdir path
+  raw: $env:TEMP
+  register: tempdir
+
+- name: set output path dynamically
+  set_fact:
+    test_win_get_url_dir_path: "{{ tempdir.stdout_lines[0] }}"
+
 - name: remove test file if it exists
   raw: >
     PowerShell -Command Remove-Item "{{test_win_get_url_path}}" -Force
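The added tasks show the usual trick for discovering remote paths on Windows targets: `raw` runs without any module payload, so `$env:TEMP` works even before facts are gathered, and `stdout_lines[0]` strips the trailing newline. A generic sketch of the same pattern (the `test_output_path` name is illustrative, not from the commit):

    - name: discover the remote user's temp directory
      raw: $env:TEMP
      register: tempdir

    - name: derive a working path from it
      set_fact:
        test_output_path: "{{ tempdir.stdout_lines[0] }}\\output.txt"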
@@ -3,4 +3,3 @@
 # Parameters to pass to test scripts.
 test_win_script_value: VaLuE
 test_win_script_splat: "@{This='THIS'; That='THAT'; Other='OTHER'}"
-test_win_script_filename: "C:/Users/{{ansible_ssh_user}}/testing_win_script.txt"
@@ -16,6 +16,14 @@
 # You should have received a copy of the GNU General Public License
 # along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

+- name: get tempdir path
+  raw: $env:TEMP
+  register: tempdir
+
+- name: set script path dynamically
+  set_fact:
+    test_win_script_filename: "{{ tempdir.stdout_lines[0] }}/testing_win_script.txt"
+
 - name: run simple test script
   script: test_script.ps1
   register: test_script_result
test/integration/test_binary_modules.yml (new file, 6 lines)
@@ -0,0 +1,6 @@
+- hosts: all
+  roles:
+    - role: test_binary_modules
+      tags:
+        - test_binary_modules
Some files were not shown because too many files have changed in this diff.