Mirror of https://github.com/ansible-collections/community.general.git (synced 2024-09-14 20:13:21 +02:00)

commit 92fd1c6578

Merge remote-tracking branch 'upstream/devel' into devel

Conflicts:
    library/hg

25 changed files with 534 additions and 345 deletions

CHANGELOG.md (74 lines changed)

@@ -3,52 +3,76 @@ Ansible Changes By Release

 1.0 "Eruption" -- release pending -- changes unsorted for now

-* default_sudo_exe parameter can be set in config to use sudo alternatives
+New modules:

 * new sysctl module
 * new pacman module (Arch linux)
-* added when_failed and when_changed
-* when_set and when_unset can take more than one var (when_set: $a and $b and $c)
 * new apt_key module
+* hg module now in core
+* new ec2_facts module
+* added pkgin module for Joyent SmartOS
+
+New config settings:
+
+* sudo_exe parameter can be set in config to use sudo alternatives
+* sudo_flags parameter can alter the flags used with sudo
+
+New playbook/language features:
+
+* added when_failed and when_changed
+* task includes can now be of infinite depth
+* when_set and when_unset can take more than one var (when_set: $a and $b and $c)
+* added the with_sequence lookup plugin
+* can override "connection:" on an indvidual task
+* parameterized playbook includes can now define complex variables (not just all on one line)
+* making inventory variables available for use in vars_files paths
+* messages when skipping plays are now more clear
+
+Module fixes and new flags:
+
+* ability to use raw module without python on remote system
 * fix for service status checking on Ubuntu
 * service module now responds to additional exit code for SERVICE_UNAVAILABLE
-* usage of run_command standardized between module implementations
 * fix for raw module with '-c local'
-* fixes to git module
+* various fixes to git module
 * ec2 module now reports the public DNS name
-* added the with_sequence lookup plugin
-* various fixes for variable resolution in playbooks
-* task includes can now be of infinite depth
 * can pass executable= to the raw module to specify alternative shells
-* fixes for handling of "~" in some paths
-* can override "connection:" on an indvidual task
 * fix for postgres module when user contains a "-"
-* various other database module fixes
 * added additional template variables -- $template_fullpath and $template_run_date
 * raise errors on invalid arguments used with a task include statement
-* making inventory variables available for use in vars_files paths
-* various fixes to DWIM'ing of relative paths
-* ability to use raw module without python on remote system
 * shell/command module takes a executable= parameter to specify a different shell than /bin/sh
 * added return code and error output to the raw module
 * added support for @reboot to the cron module
-* hostname patterns in the inventory file can now use alphabetic ranges
-* whitespace is now allowed around group variables in the inventory file
-* parameterized playbook includes can now define complex variables (not just all on one line)
 * misc fixes to the pip module
-* inventory scripts can now define groups of groups and group vars (need example for docs?)
 * nagios module can schedule downtime for all services on the host
-* various patterns can now take a regex vs a glob if they start with "~" (need docs on which!)
-* /bin/ansible now takes a --list-hosts just like ansible-playbook did
 * various subversion module improvements
 * various mail module improvements
-* allow intersecting host patterns by using "&" ("webservers:!debian:&datacenter1")
-* messages when skipping plays are now more clear
 * SELinux fix for files created by authorized_key module
 * "template override" ??
-* lots of documentation tweaks
-* handle tilde shell character for --private-key
+* get_url module can now send user/password authorization
+* ec2 module can now deploy multiple simultaneous instances
+* fix for apt_key modules stalling in some situations
+* fix to enable Jinja2 {% include %} to work again in template
+* ec2 module is now powered by Boto
+* setup module can now detect if package manager is using pacman

-* ...
+Core fixes and new behaviors:
+
+* various fixes for variable resolution in playbooks
+* fixes for handling of "~" in some paths
+* various fixes to DWIM'ing of relative paths
+* /bin/ansible now takes a --list-hosts just like ansible-playbook did
+* various patterns can now take a regex vs a glob if they start with "~" (need docs on which!) - also /usr/bin/ansible
+* allow intersecting host patterns by using "&" ("webservers:!debian:&datacenter1")
+* handle tilde shell character for --private-key
+* hash merging policy is now selectable in the config file, can choose to override or merge
+* environment variables now available for setting all plugin paths (ANSIBLE_CALLBACK_PLUGINS, etc)
+
+Inventory files/scripts:
+
+* hostname patterns in the inventory file can now use alphabetic ranges
+* whitespace is now allowed around group variables in the inventory file
+* inventory scripts can now define groups of groups and group vars (need example for docs?)

 0.9 "Dreams" -- Nov 30 2012

Makefile (4 lines changed)

@@ -180,3 +180,7 @@ modulejs:

 webdocs:
 	(cd docsite; make docs)

+# just for quick testing of all the module docs
+webdocs2:
+	(cd docsite; make modules)

@@ -33,6 +33,7 @@ from ansible import errors
 from ansible.utils import module_docs
 import ansible.constants as C
 from ansible.utils import version
+import traceback

 MODULEDIR = C.DEFAULT_MODULE_PATH

@@ -75,6 +76,7 @@ def print_man(doc):
             opt_leadin = "-"

         print "%s %s" % (opt_leadin, o)

         desc = "".join(opt['description'])

         if 'choices' in opt:
@@ -162,7 +164,8 @@ def main():
                 desc = desc + '...'
             print "%-20s %-60.60s" % (module, desc)
         except:
-            sys.stderr.write("ERROR: module %s missing documentation\n" % module)
+            traceback.print_exc()
+            sys.stderr.write("ERROR: module %s has a documentation error formatting or is missing documentation\n" % module)
             pass

     sys.exit()
@@ -184,10 +187,11 @@ def main():
         try:
             doc = module_docs.get_docstring(filename)
         except:
-            sys.stderr.write("ERROR: module %s missing documentation\n" % module)
+            traceback.print_exc()
+            sys.stderr.write("ERROR: module %s has a documentation error formatting or is missing documentation\n" % module)
             continue

-        if not doc is None:
+        if doc is not None:

             all_keys = []
             for (k,v) in doc['options'].iteritems():

@@ -76,6 +76,9 @@ remote_port=22

 sudo_exe=sudo

+# the default flags passed to sudo
+# sudo_flags=-H
+
 # how to handle hash defined in several places
 # hash can be merged, or replaced
 # if you use replace, and have multiple hashes named 'x', the last defined

@@ -11,7 +11,7 @@ fi
 # The below is an alternative to readlink -fn which doesn't exist on OS X
 # Source: http://stackoverflow.com/a/1678636
 FULL_PATH=`python -c "import os; print(os.path.realpath('$HACKING_DIR'))"`
-ANSIBLE_HOME=`dirname $FULL_PATH`
+ANSIBLE_HOME=`dirname "$FULL_PATH"`

 PREFIX_PYTHONPATH="$ANSIBLE_HOME/lib"
 PREFIX_PATH="$ANSIBLE_HOME/bin"

@@ -92,14 +92,15 @@ DEFAULT_MANAGED_STR = get_config(p, DEFAULTS, 'ansible_managed', None,
 DEFAULT_SYSLOG_FACILITY = get_config(p, DEFAULTS, 'syslog_facility', 'ANSIBLE_SYSLOG_FACILITY', 'LOG_USER')
 DEFAULT_KEEP_REMOTE_FILES = get_config(p, DEFAULTS, 'keep_remote_files', 'ANSIBLE_KEEP_REMOTE_FILES', '0')
 DEFAULT_SUDO_EXE = get_config(p, DEFAULTS, 'sudo_exe', 'ANSIBLE_SUDO_EXE', 'sudo')
+DEFAULT_SUDO_FLAGS = get_config(p, DEFAULTS, 'sudo_flags', 'ANSIBLE_SUDO_FLAGS', '-H')
 DEFAULT_HASH_BEHAVIOUR = get_config(p, DEFAULTS, 'hash_behaviour', 'ANSIBLE_HASH_BEHAVIOUR', 'replace')

-DEFAULT_ACTION_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'action_plugins', None, '/usr/share/ansible_plugins/action_plugins'))
-DEFAULT_CALLBACK_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'callback_plugins', None, '/usr/share/ansible_plugins/callback_plugins'))
-DEFAULT_CONNECTION_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'connection_plugins', None, '/usr/share/ansible_plugins/connection_plugins'))
-DEFAULT_LOOKUP_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'lookup_plugins', None, '/usr/share/ansible_plugins/lookup_plugins'))
-DEFAULT_VARS_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'vars_plugins', None, '/usr/share/ansible_plugins/vars_plugins'))
-DEFAULT_FILTER_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'filter_plugins', None, '/usr/share/ansible_plugins/filter_plugins'))
+DEFAULT_ACTION_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '/usr/share/ansible_plugins/action_plugins'))
+DEFAULT_CALLBACK_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'callback_plugins', 'ANSIBLE_CALLBACK_PLUGINS', '/usr/share/ansible_plugins/callback_plugins'))
+DEFAULT_CONNECTION_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'connection_plugins', 'ANSIBLE_CONNECTION_PLUGINS', '/usr/share/ansible_plugins/connection_plugins'))
+DEFAULT_LOOKUP_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'lookup_plugins', 'ANSIBLE_LOOKUP_PLUGINS', '/usr/share/ansible_plugins/lookup_plugins'))
+DEFAULT_VARS_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'vars_plugins', 'ANSIBLE_VARS_PLUGINS', '/usr/share/ansible_plugins/vars_plugins'))
+DEFAULT_FILTER_PLUGIN_PATH = shell_expand_path(get_config(p, DEFAULTS, 'filter_plugins', 'ANSIBLE_FILTER_PLUGINS', '/usr/share/ansible_plugins/filter_plugins'))

 # non-configurable things
 DEFAULT_SUDO_PASS = None

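The new ANSIBLE_*_PLUGINS variables rely on get_config()'s environment-then-ini-then-default lookup. A simplified stand-in of that helper, for illustration only (the real implementation lives elsewhere in constants.py and may differ in details):

    import os

    def get_config(parser, section, key, env_var, default):
        # Environment variable wins, then the ini file, then the built-in default.
        if env_var is not None:
            value = os.environ.get(env_var)
            if value is not None:
                return value
        if parser is not None and parser.has_option(section, key):
            return parser.get(section, key)
        return default

    # e.g. ANSIBLE_CALLBACK_PLUGINS=/opt/plugins now overrides callback_plugins from ansible.cfg
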
@@ -481,7 +481,7 @@ class AnsibleModule(object):
         if spec is None:
             return
         for check in spec:
-            counts = [ self.count_terms([field]) for field in check ]
+            counts = [ self._count_terms([field]) for field in check ]
             non_zero = [ c for c in counts if c > 0 ]
             if len(non_zero) > 0:
                 if 0 in counts:
@@ -677,7 +677,7 @@ class AnsibleModule(object):
             self.set_context_if_different(src, context, False)
         os.rename(src, dest)

-    def run_command(self, args, check_rc=False, close_fds=False, executable=None):
+    def run_command(self, args, check_rc=False, close_fds=False, executable=None, data=None):
         '''
         Execute a command, returns rc, stdout, and stderr.
         args is the command to run
@@ -700,12 +700,20 @@ class AnsibleModule(object):
             self.fail_json(rc=257, cmd=args, msg=msg)
         rc = 0
         msg = None
+        st_in = None
+        if data:
+            st_in = subprocess.PIPE
         try:
             cmd = subprocess.Popen(args,
                                    executable=executable,
                                    shell=shell,
                                    close_fds=close_fds,
-                                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+                                   stdin=st_in,
+                                   stdout=subprocess.PIPE,
+                                   stderr=subprocess.PIPE)
+            if data:
+                cmd.stdin.write(data)
+                cmd.stdin.write('\n')
             out, err = cmd.communicate()
             rc = cmd.returncode
         except (OSError, IOError), e:

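As a rough standalone illustration of the new data= behaviour (a stripped-down helper, not the AnsibleModule method itself; 'cat' is just a stand-in command):

    import subprocess

    def run_command(args, data=None, shell=False):
        # Open a stdin pipe only when there is data to feed, as in the change above.
        st_in = subprocess.PIPE if data else None
        proc = subprocess.Popen(args, shell=shell, stdin=st_in,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # communicate() writes the payload, closes stdin and collects output in one
        # step, which avoids the deadlock risk of writing and reading separately.
        out, err = proc.communicate(data)
        return proc.returncode, out, err

    rc, out, err = run_command(['cat'], data=b'passed via stdin\n')
    print(rc, out)
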
@@ -101,6 +101,10 @@ class Play(object):
         ''' handle task and handler include statements '''

         results = []
+        if tasks is None:
+            # support empty handler files, and the like.
+            tasks = []
+
         for x in tasks:
             task_vars = self.vars.copy()
             task_vars.update(vars)

@@ -142,7 +142,9 @@ class Connection(object):
         except socket.timeout:
             raise errors.AnsibleError('ssh timed out waiting for sudo.\n' + sudo_output)

-        return (chan.recv_exit_status(), chan.makefile('wb', bufsize), chan.makefile('rb', bufsize), chan.makefile_stderr('rb', bufsize))
+        stdout = ''.join(chan.makefile('rb', bufsize))
+        stderr = ''.join(chan.makefile_stderr('rb', bufsize))
+        return (chan.recv_exit_status(), '', stdout, stderr)

     def put_file(self, in_path, out_path):
         ''' transfer a file from local to remote '''

@@ -590,6 +590,7 @@ def make_sudo_cmd(sudo_user, executable, cmd):
     # the -p option.
     randbits = ''.join(chr(random.randint(ord('a'), ord('z'))) for x in xrange(32))
     prompt = '[sudo via ansible, key=%s] password: ' % randbits
-    sudocmd = '%s -k && %s -S -p "%s" -u %s %s -c %s' % (
-        C.DEFAULT_SUDO_EXE, C.DEFAULT_SUDO_EXE, prompt, sudo_user, executable or '$SHELL', pipes.quote(cmd))
+    sudocmd = '%s -k && %s %s -S -p "%s" -u %s %s -c %s' % (
+        C.DEFAULT_SUDO_EXE, C.DEFAULT_SUDO_EXE, C.DEFAULT_SUDO_FLAGS,
+        prompt, sudo_user, executable or '$SHELL', pipes.quote(cmd))
     return ('/bin/sh -c ' + pipes.quote(sudocmd), prompt)

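For orientation, roughly what the amended format string yields. All values below are placeholders; the real ones come from C.DEFAULT_SUDO_EXE, C.DEFAULT_SUDO_FLAGS and the caller:

    import pipes  # shlex.quote is the modern equivalent

    sudo_exe, sudo_flags = 'sudo', '-H'   # stand-ins for C.DEFAULT_SUDO_EXE / C.DEFAULT_SUDO_FLAGS
    prompt = '[sudo via ansible, key=abcdefgh] password: '
    sudo_user, executable, cmd = 'root', '/bin/bash', 'whoami'

    sudocmd = '%s -k && %s %s -S -p "%s" -u %s %s -c %s' % (
        sudo_exe, sudo_exe, sudo_flags, prompt, sudo_user, executable, pipes.quote(cmd))
    print(sudocmd)
    # sudo -k && sudo -H -S -p "[sudo via ansible, key=abcdefgh] password: " -u root /bin/bash -c whoami
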
@@ -43,7 +43,6 @@ def get_docstring(filename, verbose=False):
             if isinstance(child, ast.Assign):
                 if 'DOCUMENTATION' in (t.id for t in child.targets):
                     doc = yaml.load(child.value.s)

     except:
         if verbose == True:
             traceback.print_exc()

@@ -70,7 +70,7 @@ options:
 author: Matthew Williams
 notes: []
 examples:
-    - code: "apt: pkg=foo update-cache=yes"
+    - code: "apt: pkg=foo update_cache=yes"
       description: Update repositories cache and install C(foo) package
     - code: "apt: pkg=foo state=removed"
       description: Remove C(foo) package
@@ -78,9 +78,9 @@ examples:
       description: Install the package C(foo)
     - code: "apt: pkg=foo=1.00 state=installed"
      description: Install the version '1.00' of package C(foo)
-    - code: "apt: pkg=nginx state=latest default-release=squeeze-backports update-cache=yes"
+    - code: "apt: pkg=nginx state=latest default_release=squeeze-backports update_cache=yes"
       description: Update the repository cache and update package C(ngnix) to latest version using default release C(squeeze-backport)
-    - code: "apt: pkg=openjdk-6-jdk state=latest install-recommends=no"
+    - code: "apt: pkg=openjdk-6-jdk state=latest install_recommends=no"
       description: Install latest version of C(openjdk-6-jdk) ignoring C(install-reccomends)
 '''

library/apt_key (223 lines changed)

@@ -22,7 +22,7 @@
 DOCUMENTATION = '''
 ---
 module: apt_key
-author: Jayson Vantuyl
+author: Jayson Vantuyl & others
 version_added: 1.0
 short_description: Add or remove an apt key
 description:
@@ -59,195 +59,116 @@ examples:
   description: Remove a Apt specific signing key
 '''

+# FIXME: standardize into module_common
 from urllib2 import urlopen, URLError
 from traceback import format_exc
-from subprocess import Popen, PIPE, call
 from re import compile as re_compile
+# FIXME: standardize into module_common
 from distutils.spawn import find_executable
 from os import environ
 from sys import exc_info
+import traceback

 match_key = re_compile("^gpg:.*key ([0-9a-fA-F]+):.*$")

 REQUIRED_EXECUTABLES=['gpg', 'grep', 'apt-key']


-def find_missing_binaries():
-    return [missing for missing in REQUIRED_EXECUTABLES if not find_executable(missing)]
+def check_missing_binaries(module):
+    missing = [e for e in REQUIRED_EXECUTABLES if not find_executable(e)]
+    if len(missing):
+        module.fail_json(msg="binaries are missing", names=all)

+def all_keys(module):
+    (rc, out, err) = module.run_command("apt-key list")
+    results = []
+    lines = out.split('\n')
+    for line in lines:
+        if line.startswith("pub"):
+            tokens = line.split()
+            code = tokens[1]
+            (len_type, real_code) = code.split("/")
+            results.append(real_code)
+    return results

-def get_key_ids(key_data):
-    p = Popen("gpg --list-only --import -", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
-    (stdo, stde) = p.communicate(key_data)
-
-    if p.returncode > 0:
-        raise Exception("error running GPG to retrieve keys")
-
-    output = stdo + stde
-
-    for line in output.split('\n'):
-        match = match_key.match(line)
-        if match:
-            yield match.group(1)
-
-
-def key_present(key_id):
-    return call("apt-key list | 2>&1 grep -q %s" % key_id, shell=True) == 0
+def key_present(module, key_id):
+    (rc, out, err) = module.run_command("apt-key list | 2>&1 grep -q %s" % key_id)
+    return rc == 0


-def download_key(url):
+def download_key(module, url):
+    # FIXME: move get_url code to common, allow for in-memory D/L, support proxies
+    # and reuse here
     if url is None:
-        raise Exception("Needed URL but none specified")
-    connection = urlopen(url)
-    if connection is None:
-        raise Exception("error connecting to download key from %r" % url)
-    return connection.read()
+        module.fail_json(msg="needed a URL but was not specified")
+    try:
+        connection = urlopen(url)
+        if connection is None:
+            module.fail_json("error connecting to download key from url")
+        data = connection.read()
+        return data
+    except:
+        module.fail_json(msg="error getting key id from url", traceback=format_exc())


-def add_key(key):
-    return call("apt-key add -", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
-    (_, _) = p.communicate(key)
-    return p.returncode == 0
+def add_key(module, key):
+    cmd = "apt-key add -"
+    (rc, out, err) = module.run_command(cmd, data=key, check_rc=True)
+    return True


 def remove_key(key_id):
-    return call('apt-key del %s' % key_id, shell=True) == 0
-
-def return_values(tb=False):
-    if tb:
-        return {'exception': format_exc()}
-    else:
-        return {}
-
-
-# use cues from the environment to mock out functions for testing
-if 'ANSIBLE_TEST_APT_KEY' in environ:
-    orig_download_key = download_key
-    KEY_ADDED=0
-    KEY_REMOVED=0
-    KEY_DOWNLOADED=0
-
-    def download_key(url):
-        global KEY_DOWNLOADED
-        KEY_DOWNLOADED += 1
-        return orig_download_key(url)
-
-    def find_missing_binaries():
-        return []
-
-    def add_key(key):
-        global KEY_ADDED
-        KEY_ADDED += 1
-        return True
-
-    def remove_key(key_id):
-        global KEY_REMOVED
-        KEY_REMOVED += 1
-        return True
-
-    def return_values(tb=False):
-        extra = dict(
-            added=KEY_ADDED,
-            removed=KEY_REMOVED,
-            downloaded=KEY_DOWNLOADED
-        )
-        if tb:
-            extra['exception'] = format_exc()
-        return extra
-
-    if environ.get('ANSIBLE_TEST_APT_KEY') == 'none':
-        def key_present(key_id):
-            return False
-    else:
-        def key_present(key_id):
-            return key_id == environ['ANSIBLE_TEST_APT_KEY']
+    # FIXME: use module.run_command, fail at point of error and don't discard useful stdin/stdout
+    cmd = 'apt-key del %s'
+    (rc, out, err) = module.run_command(cmd, check_rc=True)
+    return True


 def main():
     module = AnsibleModule(
         argument_spec=dict(
             id=dict(required=False, default=None),
             url=dict(required=False),
+            data=dict(required=False),
+            key=dict(required=False),
             state=dict(required=False, choices=['present', 'absent'], default='present')
-        )
+        ),
     )

-    expected_key_id = module.params['id']
+    key_id = module.params['id']
     url = module.params['url']
+    data = module.params['data']
     state = module.params['state']
     changed = False

+    # FIXME: I think we have a common facility for this, if not, want
+    check_missing_binaries(module)

-    missing = find_missing_binaries()
-
-    if missing:
-        module.fail_json(msg="can't find needed binaries to run", missing=missing,
-                         **return_values())
+    keys = all_keys(module)

     if state == 'present':
-        if expected_key_id and key_present(expected_key_id):
-            # key is present, nothing to do
-            pass
+        if key_id and key_id in keys:
+            module.exit_json(changed=False)
         else:
-            # download key
-            try:
-                key = download_key(url)
-                (key_id,) = tuple(get_key_ids(key)) # TODO: support multiple key ids?
-            except Exception:
-                module.fail_json(
-                    msg="error getting key id from url",
-                    **return_values(True)
-                )
-
-            # sanity check downloaded key
-            if expected_key_id and key_id != expected_key_id:
-                module.fail_json(
-                    msg="expected key id %s, got key id %s" % (expected_key_id, key_id),
-                    **return_values()
-                )
-
-            # actually add key
-            if key_present(key_id):
-                changed=False
-            elif add_key(key):
-                changed=True
-            else:
-                module.fail_json(
-                    msg="failed to add key id %s" % key_id,
-                    **return_values()
-                )
+            if not data:
+                data = download_key(module, url)
+            if key_id and key_id in keys:
+                module.exit_json(changed=False)
+            else:
+                add_key(module, data)
+                changed=False
+                keys2 = all_keys(module)
+                if len(keys) != len(keys2):
+                    changed=True
+                if key_id and not key_id in keys2:
+                    module.fail_json(msg="key does not seem to have been added", id=key_id)
+                module.exit_json(changed=changed)
     elif state == 'absent':
-        # optionally download the key and get the id
-        if not expected_key_id:
-            try:
-                key = download_key(url)
-                (key_id,) = tuple(get_key_ids(key)) # TODO: support multiple key ids?
-            except Exception:
-                module.fail_json(
-                    msg="error getting key id from url",
-                    **return_values(True)
-                )
-        else:
-            key_id = expected_key_id
-
-        # actually remove key
-        if key_present(key_id):
+        if not key_id:
+            module.fail_json(msg="key is required")
+        if key_id in keys:
             if remove_key(key_id):
                 changed=True
             else:
+                # FIXME: module.fail_json or exit-json immediately at point of failure
                 module.fail_json(msg="error removing key_id", **return_values(True))
-    else:
-        module.fail_json(
-            msg="unexpected state: %s" % state,
-            **return_values()
-        )

     module.exit_json(changed=changed, **return_values())

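A standalone sketch of the key-listing approach the rewritten apt_key module takes (collecting short key ids from the "pub" lines of apt-key list output). The helper name and the sample text below are made up for illustration:

    def parse_key_ids(apt_key_list_output):
        # Lines of interest look like:
        #   pub   4096R/7BD9BF62 2011-08-19 [expires: 2024-06-14]
        results = []
        for line in apt_key_list_output.split('\n'):
            if line.startswith('pub'):
                code = line.split()[1]            # e.g. "4096R/7BD9BF62"
                _len_type, real_code = code.split('/')
                results.append(real_code)
        return results

    sample = """pub   4096R/7BD9BF62 2011-08-19
    uid                  nginx signing key <signing-key@nginx.com>
    """
    print(parse_key_ids(sample))   # ['7BD9BF62']
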
@@ -91,12 +91,12 @@ def main():

     module = AnsibleModule(
         # not checking because of daisy chain to file module
-        check_invalid_arguments = False,
         argument_spec = dict(
             src = dict(required=True),
             dest = dict(required=True),
             backup=dict(default=False, choices=BOOLEANS),
-        )
+        ),
+        add_file_common_args=True
     )

     changed=False
@@ -124,11 +124,11 @@ def main():
         shutil.copy(path, dest)
         changed = True

+    file_args = module.load_file_common_arguments(module.params)
+    changed = module.set_file_attributes_if_different(file_args, changed)
     # Mission complete
     module.exit_json(src=src, dest=dest, md5sum=destmd5,
-                     changed=changed, msg="OK",
-                     daisychain="file", daisychain_args=module.params)
+                     changed=changed, msg="OK")

 # this is magic, see lib/ansible/module_common.py
 #<<INCLUDE_ANSIBLE_MODULE_COMMON>>

library/ec2 (25 lines changed)

@@ -66,7 +66,7 @@ options:
     aliases: []
   ec2_url:
     description:
-      - url to use to connect to ec2 or your cloud (for example U(https://ec2.amazonaws.com) when using Amazon ec2 directly and not Eucalyptus)
+      - url to use to connect to ec2 or your Eucalyptus cloud (for example (https://ec2.amazonaws.com) when using Amazon ec2 directly and not Eucalyptus)
     required: False
     default: null
     aliases: []
@@ -82,6 +82,12 @@ options:
     required: False
     default: null
     aliases: []
+  count:
+    description:
+      - number of instances to launch
+    required: False
+    default: 1
+    aliases: []
   user_data:
     version_added: "0.9"
     description:
@@ -90,10 +96,10 @@ options:
     default: null
     aliases: []
 examples:
-  - code: "local_action: ec2 keypair=admin instance_type=m1.large image=emi-40603AD1 wait=true group=webserver"
+  - code: "local_action: ec2 keypair=admin instance_type=m1.large image=emi-40603AD1 wait=true group=webserver count=3"
     description: "Examples from Ansible Playbooks"
 requirements: [ "boto" ]
-author: Seth Vidal, Tim Gerla
+author: Seth Vidal, Tim Gerla, Lester Wade
 '''

 import sys
@@ -113,7 +119,7 @@ def main():
         instance_type = dict(aliases=['type']),
         image = dict(required=True),
         kernel = dict(),
-        #count = dict(default='1'), # maybe someday
+        count = dict(default='1'),
         ramdisk = dict(),
         wait = dict(choices=BOOLEANS, default=False),
         ec2_url = dict(aliases=['EC2_URL']),
@@ -127,7 +133,7 @@ def main():
     group = module.params.get('group')
     instance_type = module.params.get('instance_type')
     image = module.params.get('image')
-    #count = module.params.get('count')
+    count = module.params.get('count')
     kernel = module.params.get('kernel')
     ramdisk = module.params.get('ramdisk')
     wait = module.params.get('wait')
@@ -148,10 +154,12 @@ def main():
         ec2 = boto.connect_ec2_endpoint(ec2_url, ec2_access_key, ec2_secret_key)
     else: # otherwise it's Amazon.
         ec2 = boto.connect_ec2(ec2_access_key, ec2_secret_key)

+    # Both min_count and max_count equal count parameter. This means the launch request is explicit (we want count, or fail) in how many instances we want.
+
     try:
         res = ec2.run_instances(image, key_name = key_name,
-                                min_count = 1, max_count = 1,
+                                min_count = count, max_count = count,
                                 security_groups = [group],
                                 instance_type = instance_type,
                                 kernel_id = kernel,
@@ -171,9 +179,8 @@ def main():
         res_list = res.connection.get_all_instances(instids)
         this_res = res_list[0]
         num_running = len([ i for i in this_res.instances if i.state=='running' ])
-        time.sleep(2)
+        time.sleep(5)

-    # there's only one - but maybe one day there could be more
     instances = []
     for inst in this_res.instances:
         d = {

library/ec2_facts (new file, 122 lines)

@@ -0,0 +1,122 @@
#!/usr/bin/python -tt
# -*- coding: utf-8 -*-

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION="""
---
module: ec2_facts
short_description: Gathers facts about remote hosts within ec2 (aws)
options: {}
description:
    - This module fetches data from the metadata servers in ec2 (aws).
      Eucalyptus cloud provides a similar service and this module should
      work this cloud provider as well.
notes:
    - Parameters to filter on ec2_facts may be added later.
examples:
    - code: ansible all -m ec2_facts
      description: Obtain facts from ec2 metatdata servers. You will need to run an instance within ec2.
author: "Silviu Dicu <silviudicu@gmail.com>"
"""

import urllib2
import socket
import re

socket.setdefaulttimeout(5)

class Ec2Metadata(object):

    ec2_metadata_uri = 'http://169.254.169.254/latest/meta-data/'
    ec2_sshdata_uri = 'http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key'
    ec2_userdata_uri = 'http://169.254.169.254/latest/user-data/'

    def __init__(self, ec2_metadata_uri=None, ec2_sshdata_uri=None, ec2_userdata_uri=None):
        self.uri_meta = ec2_metadata_uri or self.ec2_metadata_uri
        self.uri_user = ec2_userdata_uri or self.ec2_userdata_uri
        self.uri_ssh = ec2_sshdata_uri or self.ec2_sshdata_uri
        self._data = {}
        self._prefix = 'ansible_ec2_%s'

    def _fetch(self, url):
        try:
            return urllib2.urlopen(url).read()
        except urllib2.HTTPError:
            return
        except urllib2.URLError:
            return

    def _mangle_fields(self, fields, uri, filter_patterns=['public-keys-0']):
        new_fields = {}
        for key, value in fields.iteritems():
            split_fields = key[len(uri):].split('/')
            if len(split_fields) > 1 and split_fields[1]:
                new_key = "-".join(split_fields)
                new_fields[self._prefix % new_key] = value
            else:
                new_key = "".join(split_fields)
                new_fields[self._prefix % new_key] = value
        for pattern in filter_patterns:
            for key in new_fields.keys():
                match = re.search(pattern, key)
                if match: new_fields.pop(key)
        return new_fields

    def fetch(self, uri, recurse=True):
        raw_subfields = self._fetch(uri)
        if not raw_subfields:
            return
        subfields = raw_subfields.split('\n')
        for field in subfields:
            if field.endswith('/') and recurse:
                self.fetch(uri + field)
            if uri.endswith('/'):
                new_uri = uri + field
            else:
                new_uri = uri + '/' + field
            if new_uri not in self._data and not new_uri.endswith('/'):
                content = self._fetch(new_uri)
                if field == 'security-groups':
                    sg_fields = ",".join(content.split('\n'))
                    self._data['%s' % (new_uri)] = sg_fields
                else:
                    self._data['%s' % (new_uri)] = content

    def run(self):
        self.fetch(self.uri_meta) # populate _data
        data = self._mangle_fields(self._data,
                                   self.uri_meta)
        data[self._prefix % 'user-data'] = self._fetch(self.uri_user)
        data[self._prefix % 'public-key'] = self._fetch(self.uri_ssh)
        return data


def main():
    ec2_facts = Ec2Metadata().run()
    ec2_facts_result = {
        "changed" : False,
        "ansible_facts" : ec2_facts
    }
    module = AnsibleModule(
        argument_spec = dict()
    )
    module.exit_json(**ec2_facts_result)

# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>

main()

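For a sense of the fact keys this module produces, a small sketch that applies the same ansible_ec2_ prefixing idea to a hand-written metadata dict (no network access; the simplification and the sample paths and values below are invented, not part of the module):

    def mangle(fields, base):
        # Same idea as Ec2Metadata._mangle_fields: strip the base URI, join nested
        # path components with '-', and prefix every key with ansible_ec2_.
        out = {}
        for key, value in fields.items():
            parts = [p for p in key[len(base):].split('/') if p]
            out['ansible_ec2_%s' % '-'.join(parts)] = value
        return out

    base = 'http://169.254.169.254/latest/meta-data/'
    sample = {
        base + 'instance-id': 'i-0123456789abcdef0',
        base + 'placement/availability-zone': 'us-east-1a',
    }
    print(mangle(sample, base))
    # {'ansible_ec2_instance-id': '...', 'ansible_ec2_placement-availability-zone': 'us-east-1a'}
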
@@ -143,7 +143,7 @@ def daemonize_self(module, password, port, minutes):
     os.dup2(dev_null.fileno(), sys.stderr.fileno())
     log("daemonizing successful (%s,%s)" % (password, port))

-def command(data):
+def command(module, data):
     if 'cmd' not in data:
         return dict(failed=True, msg='internal error: cmd is required')
     if 'tmp_path' not in data:
@@ -220,7 +220,7 @@ def serve(module, password, port, minutes):
             response = {}

             if mode == 'command':
-                response = command(data)
+                response = command(module, data)
             elif mode == 'put':
                 response = put(data)
             elif mode == 'fetch':

@@ -35,7 +35,7 @@ version_added: "0.6"
 options:
   url:
     description:
-      - HTTP, HTTPS, or FTP URL
+      - HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
     required: true
     default: null
     aliases: []
@@ -63,18 +63,18 @@ examples:
  - code: "get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf mode=0440"
    description: "Example from Ansible Playbooks"
 notes:
-    - This module doesn't yet support configuration for proxies or passwords.
+    - This module doesn't yet support configuration for proxies.
 # informational: requirements for nodes
 requirements: [ urllib2, urlparse ]
 author: Jan-Piet Mens
 '''

-HAS_URLLIB2=True
+HAS_URLLIB2 = True
 try:
     import urllib2
 except ImportError:
-    HAS_URLLIB2=False
-HAS_URLPARSE=True
+    HAS_URLLIB2 = False
+HAS_URLPARSE = True

 try:
     import urlparse
@@ -100,6 +100,29 @@ def url_do_get(module, url, dest):
     USERAGENT = 'ansible-httpget'
     info = dict(url=url, dest=dest)
     r = None
+    parsed = urlparse.urlparse(url)
+    if '@' in parsed.netloc:
+        credentials = parsed.netloc.split('@')[0]
+        if ':' in credentials:
+            username, password = credentials.split(':')
+            netloc = parsed.netloc.split('@')[1]
+            parsed = list(parsed)
+            parsed[1] = netloc
+
+            passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
+            # this creates a password manager
+            passman.add_password(None, netloc, username, password)
+            # because we have put None at the start it will always
+            # use this username/password combination for urls
+            # for which `theurl` is a super-url
+
+            authhandler = urllib2.HTTPBasicAuthHandler(passman)
+            # create the AuthHandler
+
+            opener = urllib2.build_opener(authhandler)
+            urllib2.install_opener(opener)
+            #reconstruct url without credentials
+            url = urlparse.urlunparse(parsed)
+
     request = urllib2.Request(url)
     request.add_header('User-agent', USERAGENT)
@@ -232,8 +255,7 @@ def main():

     # Mission complete
     module.exit_json(url=url, dest=dest, src=tmpsrc, md5sum=md5sum_src,
-                     changed=changed, msg=info.get('msg',''),
-                     daisychain="file", daisychain_args=info.get('daisychain_args',''))
+                     changed=changed, msg=info.get('msg', ''))

 # this is magic, see lib/ansible/module_common.py
 #<<INCLUDE_ANSIBLE_MODULE_COMMON>>

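The credential-stripping logic added above, as a standalone hedged sketch. The helper name, example URL and credentials are made up; urlparse is the Python 2 module used by the module itself (urllib.parse on Python 3):

    import urlparse   # Python 2, matching the module above

    def split_credentials(url):
        # Pull user:pass out of URLs shaped like scheme://user:pass@host/path and
        # return (url_without_credentials, username, password).
        parsed = urlparse.urlparse(url)
        if '@' not in parsed.netloc:
            return url, None, None
        credentials, netloc = parsed.netloc.split('@', 1)
        username, _, password = credentials.partition(':')
        cleaned = list(parsed)
        cleaned[1] = netloc
        return urlparse.urlunparse(cleaned), username, password

    print(split_credentials('https://jan:secret@example.com/path/file.conf'))
    # ('https://example.com/path/file.conf', 'jan', 'secret')
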
library/hg (66 lines changed)

@@ -122,32 +122,30 @@ def _undo_hgrc(hgrc, vals):
     parser.write(f)
     f.close()

-def _hg_command(args_list):
-    cmd = ['hg'] + args_list
-    p = Popen(cmd, stdout=PIPE, stderr=PIPE)
-    out, err = p.communicate()
-    return out, err, p.returncode
+def _hg_command(module, args_list):
+    (rc, out, err) = module.run_command(['hg'] + args_list)
+    return (out, err, rc)

-def _hg_discard(dest):
-    out, err, code = _hg_command(['up', '-C', '-R', dest])
+def _hg_discard(module, dest):
+    out, err, code = _hg_command(module, ['up', '-C', '-R', dest])
     if code != 0:
         raise HgError(err)

-def _hg_purge(dest):
+def _hg_purge(module, dest):
     hgrc = os.path.join(dest, '.hg/hgrc')
     purge_option = [('extensions', 'purge', '')]
     _set_hgrc(hgrc, purge_option)
-    out, err, code = _hg_command(['purge', '-R', dest])
+    out, err, code = _hg_command(module, ['purge', '-R', dest])
     if code == 0:
         _undo_hgrc(hgrc, purge_option)
     else:
         raise HgError(err)

-def _hg_verify(dest):
+def _hg_verify(module, dest):
     error1 = "hg verify failed."
     error2 = "{dest} is not a repository.".format(dest=dest)

-    out, err, code = _hg_command(['verify', '-R', dest])
+    out, err, code = _hg_command(module, ['verify', '-R', dest])
     if code == 1:
         raise HgError(error1, stderr=err)
     elif code == 255:
@@ -155,7 +153,7 @@ def _hg_verify(dest):
     elif code == 0:
         return True

-def _post_op_hg_revision_check(dest, revision):
+def _post_op_hg_revision_check(module, dest, revision):
     """
     Verify the tip is the same as `revision`.

@@ -170,13 +168,13 @@ def _post_op_hg_revision_check(dest, revision):
     err2 = "tip is different from %s. See below for extended summary." % revision

     if revision == 'default':
-        out, err, code = _hg_command(['pull', '-R', dest])
+        out, err, code = _hg_command(module, ['pull', '-R', dest])
         if "no changes found" in out:
             return True
         else:
             raise HgError(err2, stderr=out)
     else:
-        out, err, code = _hg_command(['tip', '-R', dest])
+        out, err, code = _hg_command(module, ['tip', '-R', dest])
         if revision in out: # revision should be part of the output (changeset: $revision ...)
             return True
         else:
@@ -185,45 +183,45 @@ def _post_op_hg_revision_check(dest, revision):
         else: # hg tip is fine, but tip != revision
             raise HgError(err2, stderr=out)

-def force_and_clean(dest):
-    _hg_discard(dest)
-    _hg_purge(dest)
+def force_and_clean(module, dest):
+    _hg_discard(module, dest)
+    _hg_purge(module, dest)

-def pull_and_update(repo, dest, revision, force):
+def pull_and_update(module, repo, dest, revision, force):
     if force == 'yes':
-        force_and_clean(dest)
+        force_and_clean(module, dest)

-    if _hg_verify(dest):
+    if _hg_verify(module, dest):
         cmd1 = ['pull', '-R', dest, '-r', revision]
-        out, err, code = _hg_command(cmd1)
+        out, err, code = _hg_command(module, cmd1)

         if code == 1:
             raise HgError("Unable to perform pull on %s" % dest, stderr=err)
         elif code == 0:
             cmd2 = ['update', '-R', dest, '-r', revision]
-            out, err, code = _hg_command(cmd2)
+            out, err, code = _hg_command(module, cmd2)t
             if code == 1:
                 raise HgError("There are unresolved files in %s" % dest, stderr=err)
             elif code == 0:
                 # so far pull and update seems to be working, check revision and $revision are equal
-                _post_op_hg_revision_check(dest, revision)
+                _post_op_hg_revision_check(module, dest, revision)
                 return True
         # when code aren't 1 or 0 in either command
         raise HgError("", stderr=err)

-def clone(repo, dest, revision, force):
+def clone(module, repo, dest, revision, force):
     if os.path.exists(dest):
-        if _hg_verify(dest): # make sure it's a real repo
-            if _post_op_hg_revision_check(dest, revision): # make sure revision and $revision are equal
+        if _hg_verify(module, dest): # make sure it's a real repo
+            if _post_op_hg_revision_check(module, dest, revision): # make sure revision and $revision are equal
                 if force == 'yes':
-                    force_and_clean(dest)
+                    force_and_clean(module, dest)
                 return False

     cmd = ['clone', repo, dest, '-r', revision]
-    out, err, code = _hg_command(cmd)
+    out, err, code = _hg_command(module, cmd)
     if code == 0:
-        _hg_verify(dest)
-        _post_op_hg_revision_check(dest, revision)
+        _hg_verify(module, dest)
+        _post_op_hg_revision_check(module, dest, revision)
         return True
     else:
         raise HgError(err, stderr='')
@@ -250,15 +248,11 @@ def main():
             shutil.rmtree(dest)
             changed = True
         elif state == 'present':
-            changed = clone(repo, dest, revision, force)
+            changed = clone(module, repo, dest, revision, force)
         elif state == 'latest':
-            changed = pull_and_update(repo, dest, revision, force)
+            changed = pull_and_update(module, repo, dest, revision, force)

         module.exit_json(dest=dest, changed=changed)
-    #except HgError as e:
-    #    module.fail_json(msg=str(e), params=module.params)
-    #except IOError as e:
-    #    module.fail_json(msg=str(e), params=module.params)
     except Exception as e:
         module.fail_json(msg=str(e), params=module.params)

@@ -153,8 +153,6 @@ def do_ini(module, filename, section=None, option=None, value=None, state='prese
 def main():

     module = AnsibleModule(
-        # not checking because of daisy chain to file module
-        check_invalid_arguments = False,
         argument_spec = dict(
             dest = dict(required=True),
             section = dict(required=True),
@@ -162,7 +160,8 @@ def main():
             value = dict(required=False),
             backup = dict(default='no', choices=BOOLEANS),
             state = dict(default='present', choices=['present', 'absent'])
-        )
+        ),
+        add_file_common_args = True
     )

     info = dict()
@@ -176,14 +175,11 @@ def main():

     changed = do_ini(module, dest, section, option, value, state, backup)

-    info['daisychain_args'] = module.params
-    info['daisychain_args']['state'] = 'file'
-    info['daisychain_args']['dest'] = dest
+    file_args = module.load_file_common_arguments(module.params)
+    changed = module.set_file_attributes_if_different(file_args, changed)

     # Mission complete
-    module.exit_json(dest=dest,
-                     changed=changed, msg="OK",
-                     daisychain="file", daisychain_args=info.get('daisychain_args',''))
+    module.exit_json(dest=dest, changed=changed, msg="OK")

 # this is magic, see lib/ansible/module_common.py
 #<<INCLUDE_ANSIBLE_MODULE_COMMON>>

140
library/pkgin
Executable file
140
library/pkgin
Executable file
|
@ -0,0 +1,140 @@
|
||||||
|
#!/usr/bin/python -tt
# -*- coding: utf-8 -*-

# (c) 2013, Shaun Zinck
# Written by Shaun Zinck <shaun.zinck at gmail.com>
# Based on pacman module written by Afterburn <http://github.com/afterburn>
# that was based on apt module written by Matthew Williams <matthew@flowroute.com>
#
# This module is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This software is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this software. If not, see <http://www.gnu.org/licenses/>.


DOCUMENTATION = '''
---
module: pkgin
short_description: Package manager for SmartOS
description:
    - Manages SmartOS packages
version_added: "1.0"
options:
    name:
        description:
            - name of package to install/remove
        required: true
    state:
        description:
            - state of the package
        choices: [ 'present', 'absent' ]
        required: false
        default: present
author: Shaun Zinck
notes: []
examples:
    - code: "pkgin: name=foo state=present"
      description: install package foo
    - code: "pkgin: name=foo state=absent"
      description: remove package foo
    - code: "pkgin: name=foo,bar state=absent"
      description: remove packages foo and bar
'''


import json
import shlex
import os
import sys

PKGIN_PATH = "/opt/local/bin/pkgin"


def query_package(module, name, state="present"):

    if state == "present":

        rc, out, err = module.run_command("%s list | grep ^%s" % (PKGIN_PATH, name))

        if rc == 0:
            return True

        return False


def remove_packages(module, packages):

    remove_c = 0
    # Using a for loop in case of error, we can report the package that failed
    for package in packages:
        # Query the package first, to see if we even need to remove
        if not query_package(module, package):
            continue

        rc, out, err = module.run_command("%s -y remove %s" % (PKGIN_PATH, package))

        if query_package(module, package):
            module.fail_json(msg="failed to remove %s: %s" % (package, out))

        remove_c += 1

    if remove_c > 0:

        module.exit_json(changed=True, msg="removed %s package(s)" % remove_c)

    module.exit_json(changed=False, msg="package(s) already absent")


def install_packages(module, packages):

    install_c = 0

    for package in packages:
        if query_package(module, package):
            continue

        rc, out, err = module.run_command("%s -y install %s" % (PKGIN_PATH, package))

        if not query_package(module, package):
            module.fail_json(msg="failed to install %s: %s" % (package, out))

        install_c += 1

    if install_c > 0:
        module.exit_json(changed=True, msg="present %s package(s)" % (install_c))

    module.exit_json(changed=False, msg="package(s) already present")


def main():
    module = AnsibleModule(
        argument_spec = dict(
            state = dict(default="present", choices=["present","absent"]),
            name  = dict(aliases=["pkg"], required=True)))

    if not os.path.exists(PKGIN_PATH):
        module.fail_json(msg="cannot find pkgin, looking for %s" % (PKGIN_PATH))

    p = module.params

    pkgs = p["name"].split(",")

    if p["state"] == "present":
        install_packages(module, pkgs)

    elif p["state"] == "absent":
        remove_packages(module, pkgs)

# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>

main()
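For context on query_package above: it decides whether a package is installed by grepping the output of pkgin list, which relies on the shell to interpret the pipe. Below is a hedged sketch of the same name-prefix check done in plain Python outside the module framework; PKGIN_PATH and the prefix match are taken from the module, everything else is assumed for illustration:

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Sketch only: the same "is it installed?" check as query_package above,
# but matching the package name in Python instead of piping 'pkgin list'
# through grep. Nothing here is part of the committed module.

import os
import subprocess

PKGIN_PATH = "/opt/local/bin/pkgin"

def package_installed(name):
    if not os.path.exists(PKGIN_PATH):
        return False
    p = subprocess.Popen([PKGIN_PATH, "list"],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    # the module's grep matches lines that begin with the package name
    for line in out.splitlines():
        if line.startswith(name):
            return True
    return False

if __name__ == '__main__':
    print package_installed("foo")   # "foo" is just a placeholder name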
@@ -82,10 +82,11 @@ class Facts(object):
     # A list of dicts. If there is a platform with more than one
     # package manager, put the preferred one last. If there is an
     # ansible module, use that as the value for the 'name' key.
     PKG_MGRS = [ { 'path' : '/usr/bin/yum',         'name' : 'yum' },
                  { 'path' : '/usr/bin/apt-get',     'name' : 'apt' },
                  { 'path' : '/usr/bin/zypper',      'name' : 'zypper' },
-                 { 'path' : '/usr/bin/pacman',      'name' : 'pacman' } ]
+                 { 'path' : '/usr/bin/pacman',      'name' : 'pacman' },
+                 { 'path' : '/opt/local/bin/pkgin', 'name' : 'pkgin' } ]
 
     def __init__(self):
         self.facts = {}
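The comment on PKG_MGRS explains the ordering: when a platform ships more than one of these tools, the preferred manager is listed last. A rough sketch of the selection that ordering implies, where the last existing path wins; detect_pkg_mgr is an illustrative name, not the actual Facts method:

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Rough sketch of the selection the PKG_MGRS comment implies: walk the list
# in order and let the last existing path win, so the preferred manager
# (listed last) overrides earlier matches.

import os

PKG_MGRS = [ { 'path' : '/usr/bin/yum',         'name' : 'yum' },
             { 'path' : '/usr/bin/apt-get',     'name' : 'apt' },
             { 'path' : '/usr/bin/zypper',      'name' : 'zypper' },
             { 'path' : '/usr/bin/pacman',      'name' : 'pacman' },
             { 'path' : '/opt/local/bin/pkgin', 'name' : 'pkgin' } ]

def detect_pkg_mgr():
    pkg_mgr = 'unknown'
    for mgr in PKG_MGRS:
        if os.path.exists(mgr['path']):
            pkg_mgr = mgr['name']
    return pkg_mgr

if __name__ == '__main__':
    print detect_pkg_mgr()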
@@ -24,47 +24,43 @@ DOCUMENTATION = '''
 module: sysctl
 short_description: Permit to handle sysctl.conf entries
 description:
-    - This module handle the entries in C(/etc/sysctl.conf),
-      and perform a I(/sbin/sysctl -p) after any change
+    - This module manipulates sysctl entries and performs a I(/sbin/sysctl -p) after changing them.
 version_added: "0.6"
 options:
     name:
         description:
-            - |
-              also known as "key",
-              this is the short path, point separated to the sysctl entry eg: C(vm.swappiness)"
+            - this is the short path, dot separated, to the sysctl entry
         required: true
         default: null
         aliases: [ 'key' ]
     value:
         description:
-            - "value to affect to the sysctl entry, to not provide if state=absent"
+            - set the sysctl value to this entry
         required: false
         default: null
         aliases: [ 'val' ]
     state:
         description:
-            - state=present the entry is added if not exist, or updated if exist
-              state=absent the entry is removed if exist
+            - whether the entry should be present or absent
         choices: [ "present", "absent" ]
         default: present
     checks:
         description:
-            - C(checks)=I(none) no smart/facultative checks will be made
-              C(checks)=I(before) some checks performed before any update (ie. does the sysctl key is writable ?)
-              C(checks)=I(after) some checks performed after an update (ie. does kernel give back the setted value ?)
-              C(checks)=I(both) all the smart checks I(before and after) are performed
+            - if C(checks)=I(none), no smart/facultative checks will be made
+            - if C(checks)=I(before), some checks are performed before any update (i.e. is the sysctl key writable?)
+            - if C(checks)=I(after), some checks are performed after an update (i.e. does the kernel report back the value that was set?)
+            - if C(checks)=I(both), all the smart checks I(before and after) are performed
         choices: [ "none", "before", "after", "both" ]
         default: both
     reload:
         description:
-            - C(reload=yes) perform a I(/sbin/sysctl -p) if C(sysctl_file) updated !
-              C(reload=no) do not reload I(sysctl) even if C(sysctl_file) updated !
+            - if C(reload=yes), performs a I(/sbin/sysctl -p) if the C(sysctl_file) is updated
+            - if C(reload=no), does not reload I(sysctl) even if the C(sysctl_file) is updated
         choices: [ yes, no ]
         default: yes
     sysctl_file:
         description:
-            - specify the absolute path to C(/etc/sysctl.conf)
+            - specifies the absolute path to C(sysctl.conf), if not /etc/sysctl.conf
         required: false
         default: /etc/sysctl.conf
 examples:
 
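The reworded name description points at the underlying mapping: a key such as C(vm.swappiness) is the dot-separated form of a path under /proc/sys, and reading that path back is one way the checks=after behaviour described above could verify a change. A small illustrative sketch of that mapping, with helper names invented for the example and not taken from the module:

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Illustrative sketch: translate a dot-separated sysctl key into its
# /proc/sys path and read back the value the kernel currently reports.

import os

def sysctl_path(key):
    # vm.swappiness -> /proc/sys/vm/swappiness
    return os.path.join('/proc/sys', key.replace('.', '/'))

def read_sysctl(key):
    f = open(sysctl_path(key))
    try:
        return f.read().strip()
    finally:
        f.close()

if __name__ == '__main__':
    print read_sysctl('vm.swappiness')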
@@ -1,7 +1,7 @@
 #Maintainer: Michel Blanc <mblanc@erasme.org>
 pkgname=ansible-git
-pkgver=20130109
-pkgrel=2
+pkgver=20130123
+pkgrel=1
 pkgdesc="A radically simple deployment, model-driven configuration management, and command execution framework"
 arch=('any')
 url="http://ansible.cc"
@@ -290,63 +290,3 @@ class TestRunner(unittest.TestCase):
         print result
         assert result['changed'] == False
 
-    def test_apt_key(self):
-        try:
-            key_file = self._get_test_file("apt_key.gpg")
-            key_file_url = 'file://' + urllib2.quote(key_file)
-            key_id = '473041FA'
-
-            os.environ['ANSIBLE_TEST_APT_KEY'] = 'none'
-            # key missing, should download and add
-            result = self._run('apt_key', ['state=present', 'url=' + key_file_url])
-            assert 'failed' not in result
-            assert result['added'] == 1
-            assert result['downloaded'] == 1
-            assert result['removed'] == 0
-            assert result['changed']
-
-            os.environ["ANSIBLE_TEST_APT_KEY"] = key_id
-            # key missing, shouldn't download, no changes
-            result = self._run('apt_key', ['id=12345678', 'state=absent', 'url=' + key_file_url])
-            assert 'failed' not in result
-            assert result['added'] == 0
-            assert result['downloaded'] == 0
-            assert result['removed'] == 0
-            assert not result['changed']
-            # key missing, should download and fail sanity check, no changes
-            result = self._run('apt_key', ['id=12345678', 'state=present', 'url=' + key_file_url])
-            assert 'failed' in result
-            assert result['added'] == 0
-            assert result['downloaded'] == 1
-            assert result['removed'] == 0
-            # key present, shouldn't download, no changes
-            result = self._run('apt_key', ['id=' + key_id, 'state=present', 'url=' + key_file_url])
-            assert 'failed' not in result
-            assert result['added'] == 0
-            assert result['downloaded'] == 0
-            assert result['removed'] == 0
-            assert not result['changed']
-            # key present, should download to get key id
-            result = self._run('apt_key', ['state=present', 'url=' + key_file_url])
-            assert 'failed' not in result
-            assert result['added'] == 0
-            assert result['downloaded'] == 1
-            assert result['removed'] == 0
-            assert not result['changed']
-            # key present, should download to get key id and remove
-            result = self._run('apt_key', ['state=absent', 'url=' + key_file_url])
-            assert 'failed' not in result
-            assert result['added'] == 0
-            assert result['downloaded'] == 1
-            assert result['removed'] == 1
-            assert result['changed']
-            # key present, should remove but not download
-            result = self._run('apt_key', ['id=' + key_id, 'state=absent', 'url=' + key_file_url])
-            assert 'failed' not in result
-            assert result['added'] == 0
-            assert result['downloaded'] == 0
-            assert result['removed'] == 1
-            assert result['changed']
-        finally:
-            # always clean up the environment
-            os.environ.pop('ANSIBLE_TEST_APT_KEY', None)