Mirror of https://github.com/ansible-collections/community.general.git
Synced 2024-09-14 20:13:21 +02:00

Merge branch 'devel' of github.com:ansible/ansible into devel

Conflicts:
	library/monitoring/pagerduty

Commit 265d9adbb9
330 changed files with 22275 additions and 5935 deletions
.gitignore (vendored): 5 changes

@@ -39,3 +39,8 @@ debian/
*.swp
*.swo
credentials.yml
# test output
.coverage
results.xml
coverage.xml
/test/units/cover-html
CHANGELOG.md: 80 changes

@@ -6,17 +6,91 @@ Ansible Changes By Release

Major features/changes:

* The deprecated legacy variable templating system has finally been removed. Use {{ foo }} always, not $foo or ${foo}.
* Any data file can also be JSON. Use sparingly -- with great power comes great responsibility. Starting a file with "{" or "[" denotes JSON.
* Added 'gathering' param for ansible.cfg to change the default gather_facts policy.
* Accelerate improvements:
  - multiple users can connect with different keys, when `accelerate_multi_key = yes` is specified in the ansible.cfg.
  - daemon lifetime is now based on the time from the last activity, not the time from the daemon's launch.
* ansible-playbook now accepts --force-handlers to run handlers even if tasks result in failures

New Modules:

* packaging: cpanm
* files: replace
* packaging: cpanm (Perl)
* packaging: portage
* packaging: composer (PHP)
* packaging: homebrew_tap (OS X)
* packaging: homebrew_cask (OS X)
* packaging: apt_rpm
* packaging: layman
* monitoring: logentries
* monitoring: rollbar_deployment
* monitoring: librato_annotation
* notification: nexmo (SMS)
* notification: twilio (SMS)
* notification: slack (Slack.com)
* notification: typetalk (Typetalk.in)
* notification: sns (Amazon)
* system: debconf
* system: ufw
* system: locale_gen
* system: alternatives
* system: capabilities
* net_infrastructure: bigip_facts
* net_infrastructure: dnssimple
* net_infrastructure: lldp
* web_infrastructure: apache2_module
* cloud: digital_ocean_domain
* cloud: digital_ocean_sshkey
* cloud: rax_identity
* cloud: rax_cbs (cloud block storage)
* cloud: rax_cbs_attachments
* cloud: ec2_asg (configure autoscaling groups)
* cloud: ec2_scaling_policy
* cloud: ec2_metric_alarm

Other notable changes:

* info pending
* example callback plugin added for hipchat
* added example inventory plugin for vcenter/vsphere
* added example inventory plugin for doing really trivial inventory from SSH config files
* libvirt module now supports destroyed and paused as states
* s3 module can specify metadata
* security token additions to ec2 modules
* setup module code moved into module_utils/, facts now accessible by other modules
* synchronize module sets relative dirs based on inventory or role path
* misc bugfixes and other parameters
* the ec2_key module now has wait/wait_timeout parameters
* added version_compare filter (see docs; a rough illustration follows this list)
* added ability for module documentation YAML to utilize shared module snippets for common args
* apt module now accepts "deb" parameter to install local dpkg files
* regex_replace filter plugin added
* ... to be filled in from changelogs ...
*
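A rough Python illustration of the ordering semantics a version_compare-style filter relies on; this is our sketch using distutils, not the filter's actual implementation:

    from distutils.version import LooseVersion

    # numeric-aware comparison: '1.10' is newer than '1.9'
    assert LooseVersion('1.9') < LooseVersion('1.10')
    # plain string comparison gets this wrong
    assert not ('1.9' < '1.10')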
## 1.5 "Love Walks In" - Feb 28, 2014
## 1.5.4 "Love Walks In" - April 1, 2014

- Security fix for safe_eval, which further hardens the checking of the evaluation function.
- Changing order of variable precedence for system facts, to ensure that inventory variables take precedence over any facts that may be set on a host.

## 1.5.3 "Love Walks In" - March 13, 2014

- Fix validate_certs and run_command errors from previous release
- Fixes to the git module related to host key checking

## 1.5.2 "Love Walks In" - March 11, 2014

- Fix module errors in airbrake and apt from previous release

## 1.5.1 "Love Walks In" - March 10, 2014

- Force command action to not be executed by the shell unless specifically enabled.
- Validate SSL certs accessed through urllib*.
- Implement new default cipher class AES256 in ansible-vault.
- Misc bug fixes.

## 1.5 "Love Walks In" - February 28, 2014

Major features/changes:
@@ -66,8 +66,10 @@ Functions and Methods

* In general, functions should not be 'too long' and should describe a meaningful amount of work
* When code gets too nested, that's usually a sign the loop body could benefit from being a function
* Parts of our existing code are not the best examples of this at times.
* Functions should have names that describe what they do, along with docstrings
* Functions should be named with_underscores
* "Don't repeat yourself" is generally a good philosophy

Variables
=========

@@ -76,6 +78,16 @@ Variables

* Ansible python code uses identifiers like 'ClassesLikeThis' and 'variables_like_this'
* Module parameters should also use_underscores and not runtogether

Module Security
===============

* Modules must take steps to avoid passing user input to the shell and always check return codes
* always use module.run_command instead of subprocess or Popen or os.system -- this is mandatory
* if you need the shell you must pass use_unsafe_shell=True to module.run_command
* if you do not need the shell, avoid using the shell
* any variables that can come from user input with use_unsafe_shell=True must be wrapped by pipes.quote(x)
* downloads of https:// resource urls must import module_utils.urls and use the fetch_url method
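A minimal sketch of the run_command rules above, assuming `module` is an already-instantiated AnsibleModule and 'path' is a hypothetical user-supplied parameter; this is illustrative, not code from any shipped module:

    import pipes

    user_path = module.params['path']           # user input: never trust it
    cmd = "ls -l %s" % pipes.quote(user_path)   # pipes.quote neutralizes shell metacharacters
    rc, out, err = module.run_command(cmd, use_unsafe_shell=True)
    if rc != 0:                                 # always check the return code
        module.fail_json(msg="listing failed", rc=rc, stderr=err)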
Misc Preferences
================

@@ -149,16 +161,19 @@ All contributions to the core repo should preserve original licenses and new con

Module Documentation
====================

All module pull requests must include a DOCUMENTATION docstring (YAML format,
see other modules for examples) as well as an EXAMPLES docstring, which is free form.

When adding new modules, any new parameter must have a "version_added" attribute.
When submitting a new module, the module should have a "version_added" attribute in the
pull request as well, set to the current development version.

Be sure to check grammar and spelling.

It's frequently the case that modules get submitted with YAML that isn't valid,
so you can run "make webdocs" from the checkout to preview your module's documentation.
If it fails to build, take a look at your DOCUMENTATION string;
you might also have a Python syntax error in there.
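As a hedged illustration of the layout described above (module name, option, and version are placeholders, not a real module):

    DOCUMENTATION = '''
    ---
    module: example_stub
    short_description: Illustrates the DOCUMENTATION/EXAMPLES layout
    description:
      - A placeholder module used only to show the docstring shape.
    options:
      name:
        description:
          - Name of the thing to operate on.
        required: true
    version_added: "1.6"
    '''

    EXAMPLES = '''
    - example_stub: name=demo
    '''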
Python Imports
==============
@@ -29,13 +29,9 @@ content up on places like github to share with others.

Sharing A Feature Idea
----------------------

If you have an idea for a new feature, you can open a new ticket at
[github.com/ansible/ansible](https://github.com/ansible/ansible), though in general we like to
talk about feature ideas first and bring lots of people into the discussion. Consider stopping
by the
[Ansible project mailing list](https://groups.google.com/forum/#!forum/ansible-project) ([Subscribe](https://groups.google.com/forum/#!forum/ansible-project/join))
or #ansible on irc.freenode.net. There is an overview of more mailing lists
later in this document.
Ideas are very welcome and the best place to share them is the [Ansible project mailing list](https://groups.google.com/forum/#!forum/ansible-project) ([Subscribe](https://groups.google.com/forum/#!forum/ansible-project/join)) or #ansible on irc.freenode.net.

While you can file a feature request on GitHub, pull requests are a much better way to get your feature added than submitting a feature request. Open source is all about itch scratching, and it's less likely that someone else will have the same itches as you. We keep code reasonably simple on purpose so it's easy to dive in and make additions, but be sure to read the "Contributing Code" section below too -- it doesn't hurt to have a discussion about a feature first -- we're inclined to have preferences about how incoming features might be implemented, and that can save confusion later.

Helping with Documentation
--------------------------

@@ -58,18 +54,24 @@ The Ansible project keeps its source on github at

and takes contributions through
[github pull requests](https://help.github.com/articles/using-pull-requests).

It is usually a good idea to join the ansible-devel list to discuss any large features prior to submission, and this especially helps in avoiding duplicate work or efforts where we decide, upon seeing a pull request for the first time, that revisions are needed. (This is not usually needed for module development.)

Note that we do keep Ansible to a particular aesthetic, so if you are unclear about whether a feature
is a good fit or not, having the discussion on the development list is often a lot easier than having
to modify a pull request later.

When submitting patches, be sure to run the unit tests first ("make tests") and always use
"git rebase" vs "git merge" (aliasing git pull to git pull --rebase is a great idea) to
avoid merge commits in your submissions. There are also integration tests that can be run in the "tests/integration" directory.

In order to keep the history clean and better audit incoming code, we will require resubmission of pull requests that contain merge commits. Use "git pull --rebase" vs "git pull" and "git rebase" vs "git merge". Also be sure to use topic branches to keep your additions on different branches, such that they won't pick up stray commits later.

We’ll then review your contributions and engage with you about questions and so on.

As we have a very large and active community, it may take a while to get your contributions
in! See the notes about priorities in a later section for understanding our work queue.

Patches should be made against the 'devel' branch.

Contributions can be for new features like modules, or to fix bugs you or others have found. If you
are interested in writing new modules to be included in the core Ansible distribution, please refer
@@ -87,6 +89,8 @@ required. You're now live!

Reporting A Bug
---------------

Ansible practices responsible disclosure - if this is a security-related bug, email security@ansible.com instead of filing a ticket or posting to the Google Group, and you will receive a prompt response.

Bugs should be reported to [github.com/ansible/ansible](http://github.com/ansible/ansible) after
signing up for a free github account. Before reporting a bug, please use the bug/issue search
to see if the issue has already been reported.

@@ -108,6 +112,44 @@ the mailing list or IRC first. As we are a very high volume project, if you det

you do have a bug, please be sure to open the issue yourself to ensure we have a record of
it. Don’t rely on someone else in the community to file the bug report for you.

It may take some time to get to your report, see "A Note About Priorities" below.

A Note About Priorities
=======================

Ansible was one of the top 5 projects with the most OSS contributors on GitHub in 2013, and well over
600 people have added code to the project. As a result, we have a LOT of incoming activity to process.

In the interest of transparency, we're telling you how we do this.

In our bug tracker you'll notice some labels - P1, P2, P3, P4, and P5. These are our internal
priority orders that we use to sort tickets.

With some exceptions for easy merges (like documentation typos, for instance),
we're going to spend most of our time working on P1 and P2 items first, including pull requests.
These usually relate to important
bugs or features affecting large segments of the userbase. So if you see something categorized
"P3 or P4", and it's not appearing to get a lot of immediate attention, this is why.

These labels don't really have a definition - they are a simple ordering. However, something
affecting a major module (yum, apt, etc) is likely to be prioritized higher than a module
affecting a smaller number of users.

Since we place a strong emphasis on testing and code review, it may take a few months for a minor feature to get merged.

Don't worry though -- we'll also take periodic sweeps through the lower priority queues and give
them some attention as well, particularly in the area of new module changes. So it doesn't necessarily
mean that we'll be exhausting all of the higher-priority queues before getting to your ticket.

Release Numbering
=================

Releases ending in ".0" are major releases and this is where all new features land. Releases ending
in another integer, like "0.X.1" and "0.X.2", are dot releases, and these are only going to contain
bugfixes. Typically we don't do dot releases for minor releases, but may occasionally decide to cut
dot releases containing a large number of smaller fixes if it's still a fairly long time before
the next release comes out.

Online Resources
================
@@ -165,11 +207,10 @@ we post with an @ansible.com address.

Community Code of Conduct
-------------------------

Ansible’s community welcomes users of all types, backgrounds, and skill levels. Please
treat others as you expect to be treated, keep discussions positive, and avoid discrimination, profanity, allegations of Cthulhu worship, or engaging in controversial debates (except vi vs emacs is cool).

Posts to mailing lists should remain focused around Ansible and IT automation. Abuse of these community guidelines will not be tolerated and may result in banning from community resources.

Contributors License Agreement
------------------------------
Makefile: 3 changes

@@ -20,7 +20,7 @@ OS = $(shell uname -s)
# Manpages are currently built with asciidoc -- would like to move to markdown
# This doesn't evaluate until it's called. The -D argument is the
# directory of the target file ($@), kinda like `dirname`.
MANPAGES := docs/man/man1/ansible.1 docs/man/man1/ansible-playbook.1 docs/man/man1/ansible-pull.1 docs/man/man1/ansible-doc.1
MANPAGES := docs/man/man1/ansible.1 docs/man/man1/ansible-playbook.1 docs/man/man1/ansible-pull.1 docs/man/man1/ansible-doc.1 docs/man/man1/ansible-galaxy.1 docs/man/man1/ansible-vault.1
ifneq ($(shell which a2x 2>/dev/null),)
ASCII2MAN = a2x -D $(dir $@) -d manpage -f manpage $<
ASCII2HTMLMAN = a2x -D docs/html/man/ -d manpage -f xhtml

@@ -172,3 +172,4 @@ deb: debian
webdocs: $(MANPAGES)
	(cd docsite/; make docs)

docs: $(MANPAGES)
@@ -1,4 +1,5 @@
[![PyPI version](https://badge.fury.io/py/ansible.png)](http://badge.fury.io/py/ansible)
[![PyPI version](https://badge.fury.io/py/ansible.png)](http://badge.fury.io/py/ansible) [![PyPI downloads](https://pypip.in/d/ansible/badge.png)](https://pypi.python.org/pypi/ansible)

Ansible
=======
@@ -14,6 +14,11 @@ Active Development
Previous
++++++++

=======
1.6   "The Cradle Will Rock"  - NEXT
1.5.3 "Love Walks In" -------- 03-13-2014
1.5.2 "Love Walks In" -------- 03-11-2014
1.5.1 "Love Walks In" -------- 03-10-2014
1.5   "Love Walks In" -------- 02-28-2014
1.4.5 "Could This Be Magic?" - 02-12-2014
1.4.4 "Could This Be Magic?" - 01-06-2014
@@ -128,14 +128,11 @@ class Cli(object):
            this_path = os.path.expanduser(options.vault_password_file)
            try:
                f = open(this_path, "rb")
                tmp_vault_pass=f.read()
                tmp_vault_pass=f.read().strip()
                f.close()
            except (OSError, IOError), e:
                raise errors.AnsibleError("Could not read %s: %s" % (this_path, e))

            # get rid of newline chars
            tmp_vault_pass = tmp_vault_pass.strip()

            if not options.ask_vault_pass:
                vault_pass = tmp_vault_pass
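One note on the .strip() change above: editors typically append a trailing newline to a password file, and without stripping it, that newline would silently become part of the vault password. A minimal sketch with a hypothetical path:

    # read a vault password file, discarding the trailing newline
    with open("/path/to/vault_pass.txt", "rb") as f:
        vault_pass = f.read().strip()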
@@ -160,8 +157,6 @@ class Cli(object):

        if options.su_user or options.ask_su_pass:
            options.su = True
        elif options.sudo_user or options.ask_sudo_pass:
            options.sudo = True
        options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER
        options.su_user = options.su_user or C.DEFAULT_SU_USER
        if options.tree:
@@ -98,7 +98,7 @@ def get_man_text(doc):
    if 'option_keys' in doc and len(doc['option_keys']) > 0:
        text.append("Options (= is mandatory):\n")

    for o in doc['option_keys']:
    for o in sorted(doc['option_keys']):
        opt = doc['options'][o]

        if opt.get('required', False):

@@ -146,10 +146,15 @@ def get_snippet_text(doc):
    text.append("- name: %s" % (desc))
    text.append("  action: %s" % (doc['module']))

    for o in doc['options']:
    for o in sorted(doc['options'].keys()):
        opt = doc['options'][o]
        desc = tty_ify("".join(opt['description']))
        s = o + "="

        if opt.get('required', False):
            s = o + "="
        else:
            s = o

        text.append("  %-20s # %s" % (s, desc))
    text.append('')
@@ -170,7 +170,7 @@ def build_option_parser(action):
        parser.set_usage("usage: %prog init [options] role_name")
        parser.add_option(
            '-p', '--init-path', dest='init_path', default="./",
            help='The path in which the skeleton role will be created.'
            help='The path in which the skeleton role will be created. '
                 'The default is the current working directory.')
    elif action == "install":
        parser.set_usage("usage: %prog install [options] [-r FILE | role_name(s)[,version] | tar_file(s)]")

@@ -181,7 +181,7 @@
            '-n', '--no-deps', dest='no_deps', action='store_true', default=False,
            help='Don\'t download roles listed as dependencies')
        parser.add_option(
            '-r', '--role-file', dest='role_file',
            help='A file containing a list of roles to be imported')
    elif action == "remove":
        parser.set_usage("usage: %prog remove role1 role2 ...")

@@ -192,7 +192,7 @@
    if action != "init":
        parser.add_option(
            '-p', '--roles-path', dest='roles_path', default=C.DEFAULT_ROLES_PATH,
            help='The path to the directory containing your roles.'
            help='The path to the directory containing your roles. '
                 'The default is the roles_path configured in your '
                 'ansible.cfg file (/etc/ansible/roles if not configured)')

@@ -655,7 +655,7 @@ def execute_install(args, options, parser):

        if role_name == "" or role_name.startswith("#"):
            continue
        elif role_name.find(',') != -1:
        elif ',' in role_name:
            role_name,role_version = role_name.split(',',1)
            role_name = role_name.strip()
            role_version = role_version.strip()
@@ -78,6 +78,8 @@ def main(args):
        help="one-step-at-a-time: confirm each task before running")
    parser.add_option('--start-at-task', dest='start_at',
        help="start the playbook at the task matching this name")
    parser.add_option('--force-handlers', dest='force_handlers', action='store_true',
        help="run handlers even if a task fails")

    options, args = parser.parse_args(args)

@@ -122,14 +124,11 @@ def main(args):
            this_path = os.path.expanduser(options.vault_password_file)
            try:
                f = open(this_path, "rb")
                tmp_vault_pass=f.read()
                tmp_vault_pass=f.read().strip()
                f.close()
            except (OSError, IOError), e:
                raise errors.AnsibleError("Could not read %s: %s" % (this_path, e))

            # get rid of newline chars
            tmp_vault_pass = tmp_vault_pass.strip()

            if not options.ask_vault_pass:
                vault_pass = tmp_vault_pass

@@ -137,7 +136,7 @@ def main(args):
    for extra_vars_opt in options.extra_vars:
        if extra_vars_opt.startswith("@"):
            # Argument is a YAML file (JSON is a subset of YAML)
            extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml_from_file(extra_vars_opt[1:]))
            extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml_from_file(extra_vars_opt[1:], vault_password=vault_pass))
        elif extra_vars_opt and extra_vars_opt[0] in '[{':
            # Arguments as YAML
            extra_vars = utils.combine_vars(extra_vars, utils.parse_yaml(extra_vars_opt))
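The branch above implements the -e/--extra-vars dispatch: a leading "@" names a vars file (YAML or JSON, since JSON is a subset of YAML), a leading "[" or "{" is parsed inline, and anything else falls through to key=value parsing. A standalone sketch of just the classification; the helper name is ours, not Ansible's:

    def classify_extra_vars(opt):
        """Mirror the -e dispatch rules: file reference, inline YAML/JSON, or key=value."""
        if opt.startswith("@"):
            return "file"
        elif opt and opt[0] in "[{":
            return "inline"
        return "key=value"

    assert classify_extra_vars("@vars.yml") == "file"
    assert classify_extra_vars('{"a": 1}') == "inline"
    assert classify_extra_vars("a=1 b=2") == "key=value"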
@@ -194,7 +193,8 @@ def main(args):
            su=options.su,
            su_pass=su_pass,
            su_user=options.su_user,
            vault_password=vault_pass
            vault_password=vault_pass,
            force_handlers=options.force_handlers
        )

        if options.listhosts or options.listtasks or options.syntax:

@@ -206,12 +206,12 @@ def main(args):
            playnum += 1
            play = ansible.playbook.Play(pb, play_ds, play_basedir)
            label = play.name
            if options.listhosts:
                hosts = pb.inventory.list_hosts(play.hosts)
                print ' play #%d (%s): host count=%d' % (playnum, label, len(hosts))
                for host in hosts:
                    print ' %s' % host
            if options.listtasks:
                hosts = pb.inventory.list_hosts(play.hosts)

            # Filter all tasks by given tags
            if pb.only_tags != 'all':
                if options.subset and not hosts:
                    continue
                matched_tags, unmatched_tags = play.compare_tags(pb.only_tags)

                # Remove skipped tasks

@@ -223,6 +223,13 @@ def main(args):

            if unknown_tags:
                continue

            if options.listhosts:
                print ' play #%d (%s): host count=%d' % (playnum, label, len(hosts))
                for host in hosts:
                    print ' %s' % host

            if options.listtasks:
                print ' play #%d (%s):' % (playnum, label)

                for task in play.tasks():
@@ -44,6 +44,8 @@ import subprocess
import sys
import datetime
import socket
import random
import time
from ansible import utils
from ansible.utils import cmd_functions
from ansible import errors

@@ -102,6 +104,8 @@ def main(args):
        help='purge checkout after playbook run')
    parser.add_option('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true',
        help='only run the playbook if the repository has been updated')
    parser.add_option('-s', '--sleep', dest='sleep', default=None,
        help='sleep for random interval (between 0 and n number of seconds) before starting. this is a useful way to disperse git requests')
    parser.add_option('-f', '--force', dest='force', default=False,
        action='store_true',
        help='run the playbook even if the repository could '

@@ -117,6 +121,8 @@ def main(args):
        'Defaults to behavior of repository module.')
    parser.add_option('-i', '--inventory-file', dest='inventory',
        help="location of the inventory host file")
    parser.add_option('-e', '--extra-vars', dest="extra_vars", action="append",
        help="set additional variables as key=value or YAML/JSON", default=[])
    parser.add_option('-v', '--verbose', default=False, action="callback",
        callback=increment_debug,
        help='Pass -vvvv to ansible-playbook')

@@ -126,6 +132,8 @@ def main(args):
        'Default is %s.' % DEFAULT_REPO_TYPE)
    parser.add_option('--vault-password-file', dest='vault_password_file',
        help="vault password file")
    parser.add_option('-K', '--ask-sudo-pass', default=False, dest='ask_sudo_pass', action='store_true',
        help='ask for sudo password')
    options, args = parser.parse_args(args)

    hostname = socket.getfqdn()

@@ -162,7 +170,18 @@ def main(args):
        inv_opts, base_opts, options.module_name, repo_opts
    )

    # RUN THE CHECKOUT COMMAND
    if options.sleep:
        try:
            secs = random.randint(0, int(options.sleep))
        except ValueError:
            parser.error("%s is not a number." % options.sleep)
            return 1

        print >>sys.stderr, "Sleeping for %d seconds..." % secs
        time.sleep(secs)

    # RUN THE CHECKOUT COMMAND
    rc, out, err = cmd_functions.run_cmd(cmd, live=True)

    if rc != 0:

@@ -185,6 +204,10 @@ def main(args):
        cmd += " --vault-password-file=%s" % options.vault_password_file
    if options.inventory:
        cmd += ' -i "%s"' % options.inventory
    for ev in options.extra_vars:
        cmd += ' -e "%s"' % ev
    if options.ask_sudo_pass:
        cmd += ' -K'
    os.chdir(options.dest)

    # RUN THE PLAYBOOK COMMAND
@@ -52,7 +52,7 @@ def build_option_parser(action):
        sys.exit()

    # options for all actions
    #parser.add_option('-c', '--cipher', dest='cipher', default="AES", help="cipher to use")
    #parser.add_option('-c', '--cipher', dest='cipher', default="AES256", help="cipher to use")
    parser.add_option('--debug', dest='debug', action="store_true", help="debug")
    parser.add_option('--vault-password-file', dest='password_file',
        help="vault password file")

@@ -105,7 +105,6 @@ def _read_password(filename):
    f = open(filename, "rb")
    data = f.read()
    f.close()
    # get rid of newline chars
    data = data.strip()
    return data

@@ -119,7 +118,7 @@ def execute_create(args, options, parser):
    else:
        password = _read_password(options.password_file)

    cipher = 'AES'
    cipher = 'AES256'
    if hasattr(options, 'cipher'):
        cipher = options.cipher

@@ -133,7 +132,7 @@ def execute_decrypt(args, options, parser):
    else:
        password = _read_password(options.password_file)

    cipher = 'AES'
    cipher = 'AES256'
    if hasattr(options, 'cipher'):
        cipher = options.cipher

@@ -161,15 +160,12 @@ def execute_edit(args, options, parser):

def execute_encrypt(args, options, parser):

    if len(args) > 1:
        raise errors.AnsibleError("'create' does not accept more than one filename")

    if not options.password_file:
        password, new_password = utils.ask_vault_passwords(ask_vault_pass=True, confirm_vault=True)
    else:
        password = _read_password(options.password_file)

    cipher = 'AES'
    cipher = 'AES256'
    if hasattr(options, 'cipher'):
        cipher = options.cipher
docs/man/man1/ansible-galaxy.1 (new file, 180 lines)

@@ -0,0 +1,180 @@
'\" t
.\" Title: ansible-galaxy
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 03/16/2014
.\" Manual: System administration commands
.\" Source: Ansible 1.6
.\" Language: English
.\"
.TH "ANSIBLE\-GALAXY" "1" "03/16/2014" "Ansible 1\&.6" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
.ad l
.\" -----------------------------------------------------------------
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
ansible-galaxy \- manage roles using galaxy\&.ansible\&.com
.SH "SYNOPSIS"
.sp
ansible\-galaxy [init|info|install|list|remove] [\-\-help] [options] \&...
.SH "DESCRIPTION"
.sp
\fBAnsible Galaxy\fR is a shared repository for Ansible roles (added in ansible version 1\&.2)\&. The ansible\-galaxy command can be used to manage these roles, or to create a skeleton framework for roles you\(cqd like to upload to Galaxy\&.
.SH "COMMON OPTIONS"
.PP
\fB\-h\fR, \fB\-\-help\fR
.RS 4
Show a help message related to the given sub\-command\&.
.RE
.SH "INSTALL"
.sp
The \fBinstall\fR sub\-command is used to install roles\&.
.SS "USAGE"
.sp
$ ansible\-galaxy install [options] [\-r FILE | role_name(s)[,version] | tar_file(s)]
.sp
Roles can be installed in several different ways:
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
A username\&.rolename[,version] \- this will install a single role\&. The Galaxy API will be contacted to provide the information about the role, and the corresponding \&.tar\&.gz will be downloaded from
\fBgithub\&.com\fR\&. If the version is omitted, the most recent version available will be installed\&.
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
A file name, using
\fB\-r\fR
\- this will install multiple roles listed one per line\&. The format of each line is the same as above: username\&.rolename[,version]
.RE
.sp
.RS 4
.ie n \{\
\h'-04'\(bu\h'+03'\c
.\}
.el \{\
.sp -1
.IP \(bu 2.3
.\}
A \&.tar\&.gz of a valid role you\(cqve downloaded directly from
\fBgithub\&.com\fR\&. This is mainly useful when the system running Ansible does not have access to the Galaxy API, for instance when behind a firewall or proxy\&.
.RE
.SS "OPTIONS"
.PP
\fB\-f\fR, \fB\-\-force\fR
.RS 4
Force overwriting an existing role\&.
.RE
.PP
\fB\-i\fR, \fB\-\-ignore\-errors\fR
.RS 4
Ignore errors and continue with the next specified role\&.
.RE
.PP
\fB\-n\fR, \fB\-\-no\-deps\fR
.RS 4
Don\(cqt download roles listed as dependencies\&.
.RE
.PP
\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR
.RS 4
The path to the directory containing your roles\&. The default is the
\fBroles_path\fR
configured in your
\fBansible\&.cfg\fR
file (/etc/ansible/roles if not configured)
.RE
.PP
\fB\-r\fR \fIROLE_FILE\fR, \fB\-\-role\-file=\fR\fIROLE_FILE\fR
.RS 4
A file containing a list of roles to be imported, as specified above\&. This option cannot be used if a rolename or \&.tar\&.gz have been specified\&.
.RE
.SH "REMOVE"
.sp
The \fBremove\fR sub\-command is used to remove one or more roles\&.
.SS "USAGE"
.sp
$ ansible\-galaxy remove role1 role2 \&...
.SS "OPTIONS"
.PP
\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR
.RS 4
The path to the directory containing your roles\&. The default is the
\fBroles_path\fR
configured in your
\fBansible\&.cfg\fR
file (/etc/ansible/roles if not configured)
.RE
.SH "INIT"
.sp
The \fBinit\fR command is used to create an empty role suitable for uploading to https://galaxy\&.ansible\&.com (or for roles in general)\&.
.SS "USAGE"
.sp
$ ansible\-galaxy init [options] role_name
.SS "OPTIONS"
.PP
\fB\-f\fR, \fB\-\-force\fR
.RS 4
Force overwriting an existing role\&.
.RE
.PP
\fB\-p\fR \fIINIT_PATH\fR, \fB\-\-init\-path=\fR\fIINIT_PATH\fR
.RS 4
The path in which the skeleton role will be created\&. The default is the current working directory\&.
.RE
.SH "LIST"
.sp
The \fBlist\fR sub\-command is used to show what roles are currently installed\&. You can specify a role name, and if installed, only that role will be shown\&.
.SS "USAGE"
.sp
$ ansible\-galaxy list [role_name]
.SS "OPTIONS"
.PP
\fB\-p\fR \fIROLES_PATH\fR, \fB\-\-roles\-path=\fR\fIROLES_PATH\fR
.RS 4
The path to the directory containing your roles\&. The default is the
\fBroles_path\fR
configured in your
\fBansible\&.cfg\fR
file (/etc/ansible/roles if not configured)
.RE
.SH "AUTHOR"
.sp
Ansible was originally written by Michael DeHaan\&. See the AUTHORS file for a complete list of contributors\&.
.SH "COPYRIGHT"
.sp
Copyright \(co 2014, Michael DeHaan
.sp
Ansible is released under the terms of the GPLv3 License\&.
.SH "SEE ALSO"
.sp
\fBansible\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
docs/man/man1/ansible-galaxy.1.asciidoc.in (new file, 167 lines)

@@ -0,0 +1,167 @@
ansible-galaxy(1)
=================
:doctype: manpage
:man source: Ansible
:man version: %VERSION%
:man manual: System administration commands

NAME
----
ansible-galaxy - manage roles using galaxy.ansible.com


SYNOPSIS
--------
ansible-galaxy [init|info|install|list|remove] [--help] [options] ...


DESCRIPTION
-----------

*Ansible Galaxy* is a shared repository for Ansible roles (added in
ansible version 1.2). The ansible-galaxy command can be used to manage
these roles, or to create a skeleton framework for roles you'd like
to upload to Galaxy.

COMMON OPTIONS
--------------

*-h*, *--help*::

Show a help message related to the given sub-command.


INSTALL
-------

The *install* sub-command is used to install roles.

USAGE
~~~~~

$ ansible-galaxy install [options] [-r FILE | role_name(s)[,version] | tar_file(s)]

Roles can be installed in several different ways:

* A username.rolename[,version] - this will install a single role. The Galaxy
  API will be contacted to provide the information about the role, and the
  corresponding .tar.gz will be downloaded from *github.com*. If the version
  is omitted, the most recent version available will be installed.

* A file name, using *-r* - this will install multiple roles listed one per
  line. The format of each line is the same as above: username.rolename[,version]

* A .tar.gz of a valid role you've downloaded directly from *github.com*. This
  is mainly useful when the system running Ansible does not have access to
  the Galaxy API, for instance when behind a firewall or proxy.


OPTIONS
~~~~~~~

*-f*, *--force*::

Force overwriting an existing role.

*-i*, *--ignore-errors*::

Ignore errors and continue with the next specified role.

*-n*, *--no-deps*::

Don't download roles listed as dependencies.

*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::

The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)

*-r* 'ROLE_FILE', *--role-file=*'ROLE_FILE'::

A file containing a list of roles to be imported, as specified above. This
option cannot be used if a rolename or .tar.gz have been specified.

REMOVE
------

The *remove* sub-command is used to remove one or more roles.

USAGE
~~~~~

$ ansible-galaxy remove role1 role2 ...

OPTIONS
~~~~~~~

*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::

The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)

INIT
----

The *init* command is used to create an empty role suitable for uploading
to https://galaxy.ansible.com (or for roles in general).

USAGE
~~~~~

$ ansible-galaxy init [options] role_name

OPTIONS
~~~~~~~

*-f*, *--force*::

Force overwriting an existing role.

*-p* 'INIT_PATH', *--init-path=*'INIT_PATH'::

The path in which the skeleton role will be created. The default is the current
working directory.

LIST
----

The *list* sub-command is used to show what roles are currently installed.
You can specify a role name, and if installed, only that role will be shown.

USAGE
~~~~~

$ ansible-galaxy list [role_name]

OPTIONS
~~~~~~~

*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::

The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)


AUTHOR
------

Ansible was originally written by Michael DeHaan. See the AUTHORS file
for a complete list of contributors.


COPYRIGHT
---------

Copyright © 2014, Michael DeHaan

Ansible is released under the terms of the GPLv3 License.


SEE ALSO
--------

*ansible*(1), *ansible-pull*(1), *ansible-doc*(1)

Extensive documentation is available in the documentation site:
<http://docs.ansible.com>. IRC and mailing list info can be found
in file CONTRIBUTING.md, available in: <https://github.com/ansible/ansible>
@@ -91,6 +91,66 @@ Prompt for the password to use for playbook plays that request sudo access, if a

Desired sudo user (default=root)\&.
.RE
.PP
\fB\-S\fR, \fB\-\-su\fR
.RS 4
run operations with su\&.
.RE
.PP
\fB\-\-ask\-su\-pass\fR
.RS 4
Prompt for the password to use for playbook plays that request su access, if any\&.
.RE
.PP
\fB\-R\fR \fISU_USER\fR, \fB\-\-su\-user=\fR\fISU_USER\fR
.RS 4
Desired su user (default=root)\&.
.RE
.PP
\fB\-\-ask\-vault\-pass\fR
.RS 4
Ask for vault password\&.
.RE
.PP
\fB\-\-vault\-password\-file=\fR\fIVAULT_PASSWORD_FILE\fR
.RS 4
Vault password file\&.
.RE
.PP
\fB\-\-force\-handlers\fR
.RS 4
Run play handlers even if a task fails\&.
.RE
.PP
\fB\-\-list\-hosts\fR
.RS 4
Outputs a list of matching hosts without executing anything else\&.
.RE
.PP
\fB\-\-list\-tasks\fR
.RS 4
List all tasks that would be executed\&.
.RE
.PP
\fB\-\-start\-at\-task=\fR\fISTART_AT\fR
.RS 4
Start the playbook at the task matching this name\&.
.RE
.PP
\fB\-\-step\fR
.RS 4
one-step-at-a-time: confirm each task before running\&.
.RE
.PP
\fB\-\-syntax\-check\fR
.RS 4
Perform a syntax check on the playbook, but do not execute it\&.
.RE
.PP
\fB\-\-private\-key\fR
.RS 4
Use this file to authenticate the connection\&.
.RE
.PP
\fB\-t\fR \fITAGS\fR, \fB\-\-tags=\fR\fITAGS\fR
.RS 4
Only run plays and tasks tagged with these values\&.

@@ -147,6 +207,13 @@ is mostly useful for crontab or kickstarts\&.
.RS 4
Further limits the selected host/group patterns\&.
.RE

.PP
\fB\-\-version\fR
.RS 4
Show program's version number and exit\&.
.RE

.SH "ENVIRONMENT"
.sp
The following environment variables may be specified\&.
@@ -76,11 +76,11 @@ access, if any.

Desired sudo user (default=root).

*-t*, 'TAGS', *'--tags=*'TAGS'::
*-t*, 'TAGS', *--tags=*'TAGS'::

Only run plays and tasks tagged with these values.

*'--skip-tags=*'SKIP_TAGS'::
*--skip-tags=*'SKIP_TAGS'::

Only run plays and tasks whose tags do not match these values.
|
103
docs/man/man1/ansible-vault.1
Normal file
103
docs/man/man1/ansible-vault.1
Normal file
|
@ -0,0 +1,103 @@
|
|||
'\" t
|
||||
.\" Title: ansible-vault
|
||||
.\" Author: [see the "AUTHOR" section]
|
||||
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
|
||||
.\" Date: 03/17/2014
|
||||
.\" Manual: System administration commands
|
||||
.\" Source: Ansible 1.6
|
||||
.\" Language: English
|
||||
.\"
|
||||
.TH "ANSIBLE\-VAULT" "1" "03/17/2014" "Ansible 1\&.6" "System administration commands"
|
||||
.\" -----------------------------------------------------------------
|
||||
.\" * Define some portability stuff
|
||||
.\" -----------------------------------------------------------------
|
||||
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
.\" http://bugs.debian.org/507673
|
||||
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
|
||||
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
.ie \n(.g .ds Aq \(aq
|
||||
.el .ds Aq '
|
||||
.\" -----------------------------------------------------------------
|
||||
.\" * set default formatting
|
||||
.\" -----------------------------------------------------------------
|
||||
.\" disable hyphenation
|
||||
.nh
|
||||
.\" disable justification (adjust text to left margin only)
|
||||
.ad l
|
||||
.\" -----------------------------------------------------------------
|
||||
.\" * MAIN CONTENT STARTS HERE *
|
||||
.\" -----------------------------------------------------------------
|
||||
.SH "NAME"
|
||||
ansible-vault \- manage encrypted YAML data\&.
|
||||
.SH "SYNOPSIS"
|
||||
.sp
|
||||
ansible\-vault [create|decrypt|edit|encrypt|rekey] [\-\-help] [options] file_name
|
||||
.SH "DESCRIPTION"
|
||||
.sp
|
||||
\fBansible\-vault\fR can encrypt any structured data file used by Ansible\&. This can include \fBgroup_vars/\fR or \fBhost_vars/\fR inventory variables, variables loaded by \fBinclude_vars\fR or \fBvars_files\fR, or variable files passed on the ansible\-playbook command line with \fB\-e @file\&.yml\fR or \fB\-e @file\&.json\fR\&. Role variables and defaults are also included!
|
||||
.sp
|
||||
Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with vault\&. If you\(cqd like to not betray what variables you are even using, you can go as far to keep an individual task file entirely encrypted\&.
|
||||
.SH "COMMON OPTIONS"
|
||||
.sp
|
||||
The following options are available to all sub\-commands:
|
||||
.PP
|
||||
\fB\-\-vault\-password\-file=\fR\fIFILE\fR
|
||||
.RS 4
|
||||
A file containing the vault password to be used during the encryption/decryption steps\&. Be sure to keep this file secured if it is used\&.
|
||||
.RE
|
||||
.PP
|
||||
\fB\-h\fR, \fB\-\-help\fR
|
||||
.RS 4
|
||||
Show a help message related to the given sub\-command\&.
|
||||
.RE
|
||||
.PP
|
||||
\fB\-\-debug\fR
|
||||
.RS 4
|
||||
Enable debugging output for troubleshooting\&.
|
||||
.RE
|
||||
.SH "CREATE"
|
||||
.sp
|
||||
\fB$ ansible\-vault create [options] FILE\fR
|
||||
.sp
|
||||
The \fBcreate\fR sub\-command is used to initialize a new encrypted file\&.
|
||||
.sp
|
||||
First you will be prompted for a password\&. The password used with vault currently must be the same for all files you wish to use together at the same time\&.
|
||||
.sp
|
||||
After providing a password, the tool will launch whatever editor you have defined with $EDITOR, and defaults to vim\&. Once you are done with the editor session, the file will be saved as encrypted data\&.
|
||||
.sp
|
||||
The default cipher is AES (which is shared\-secret based)\&.
|
||||
.SH "EDIT"
|
||||
.sp
|
||||
\fB$ ansible\-vault edit [options] FILE\fR
|
||||
.sp
|
||||
The \fBedit\fR sub\-command is used to modify a file which was previously encrypted using ansible\-vault\&.
|
||||
.sp
|
||||
This command will decrypt the file to a temporary file and allow you to edit the file, saving it back when done and removing the temporary file\&.
|
||||
.SH "REKEY"
|
||||
.sp
|
||||
*$ ansible\-vault rekey [options] FILE_1 [FILE_2, \&..., FILE_N]
|
||||
.sp
|
||||
The \fBrekey\fR command is used to change the password on a vault\-encrypted files\&. This command can update multiple files at once, and will prompt for both the old and new passwords before modifying any data\&.
|
||||
.SH "ENCRYPT"
|
||||
.sp
|
||||
*$ ansible\-vault encrypt [options] FILE_1 [FILE_2, \&..., FILE_N]
|
||||
.sp
|
||||
The \fBencrypt\fR sub\-command is used to encrypt pre\-existing data files\&. As with the \fBrekey\fR command, you can specify multiple files in one command\&.
|
||||
.SH "DECRYPT"
|
||||
.sp
|
||||
*$ ansible\-vault decrypt [options] FILE_1 [FILE_2, \&..., FILE_N]
|
||||
.sp
|
||||
The \fBdecrypt\fR sub\-command is used to remove all encryption from data files\&. The files will be stored as plain\-text YAML once again, so be sure that you do not run this command on data files with active passwords or other sensitive data\&. In most cases, users will want to use the \fBedit\fR sub\-command to modify the files securely\&.
|
||||
.SH "AUTHOR"
|
||||
.sp
|
||||
Ansible was originally written by Michael DeHaan\&. See the AUTHORS file for a complete list of contributors\&.
|
||||
.SH "COPYRIGHT"
|
||||
.sp
|
||||
Copyright \(co 2014, Michael DeHaan
|
||||
.sp
|
||||
Ansible is released under the terms of the GPLv3 License\&.
|
||||
.SH "SEE ALSO"
|
||||
.sp
|
||||
\fBansible\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1)
|
||||
.sp
|
||||
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
|
docs/man/man1/ansible-vault.1.asciidoc.in (new file, 126 lines)

@@ -0,0 +1,126 @@
ansible-vault(1)
================
:doctype: manpage
:man source: Ansible
:man version: %VERSION%
:man manual: System administration commands

NAME
----
ansible-vault - manage encrypted YAML data.


SYNOPSIS
--------
ansible-vault [create|decrypt|edit|encrypt|rekey] [--help] [options] file_name


DESCRIPTION
-----------

*ansible-vault* can encrypt any structured data file used by Ansible. This can include
*group_vars/* or *host_vars/* inventory variables, variables loaded by *include_vars* or
*vars_files*, or variable files passed on the ansible-playbook command line with
*-e @file.yml* or *-e @file.json*. Role variables and defaults are also included!

Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with
vault. If you’d like to not betray what variables you are even using, you can go as far as
to keep an individual task file entirely encrypted.


COMMON OPTIONS
--------------

The following options are available to all sub-commands:

*--vault-password-file=*'FILE'::

A file containing the vault password to be used during the encryption/decryption
steps. Be sure to keep this file secured if it is used.

*-h*, *--help*::

Show a help message related to the given sub-command.

*--debug*::

Enable debugging output for troubleshooting.

CREATE
------

*$ ansible-vault create [options] FILE*

The *create* sub-command is used to initialize a new encrypted file.

First you will be prompted for a password. The password used with vault currently
must be the same for all files you wish to use together at the same time.

After providing a password, the tool will launch whatever editor you have defined
with $EDITOR, and defaults to vim. Once you are done with the editor session, the
file will be saved as encrypted data.

The default cipher is AES (which is shared-secret based).

EDIT
----

*$ ansible-vault edit [options] FILE*

The *edit* sub-command is used to modify a file which was previously encrypted
using ansible-vault.

This command will decrypt the file to a temporary file and allow you to edit the
file, saving it back when done and removing the temporary file.

REKEY
-----

*$ ansible-vault rekey [options] FILE_1 [FILE_2, ..., FILE_N]*

The *rekey* command is used to change the password on vault-encrypted files.
This command can update multiple files at once, and will prompt for both the
old and new passwords before modifying any data.

ENCRYPT
-------

*$ ansible-vault encrypt [options] FILE_1 [FILE_2, ..., FILE_N]*

The *encrypt* sub-command is used to encrypt pre-existing data files. As with the
*rekey* command, you can specify multiple files in one command.

DECRYPT
-------

*$ ansible-vault decrypt [options] FILE_1 [FILE_2, ..., FILE_N]*

The *decrypt* sub-command is used to remove all encryption from data files. The files
will be stored as plain-text YAML once again, so be sure that you do not run this
command on data files with active passwords or other sensitive data. In most cases,
users will want to use the *edit* sub-command to modify the files securely.


AUTHOR
------

Ansible was originally written by Michael DeHaan. See the AUTHORS file
for a complete list of contributors.


COPYRIGHT
---------

Copyright © 2014, Michael DeHaan

Ansible is released under the terms of the GPLv3 License.


SEE ALSO
--------

*ansible*(1), *ansible-pull*(1), *ansible-doc*(1)

Extensive documentation is available in the documentation site:
<http://docs.ansible.com>. IRC and mailing list info can be found
in file CONTRIBUTING.md, available in: <https://github.com/ansible/ansible>
@@ -123,7 +123,7 @@ a lot shorter than this::

    for arg in arguments:

        # ignore any arguments without an equals in it
        if arg.find("=") != -1:
        if "=" in arg:

            (key, value) = arg.split("=")
@@ -140,16 +140,16 @@ Then you can use the facts inside your template, like this::

.. _programatic_access_to_a_variable:

How do I access a variable name programatically?
++++++++++++++++++++++++++++++++++++++++++++++++
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++

An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied
via a role parameter or other input. Variable names can be built by adding strings together, like so::

    {{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

The trick about going through hostvars is neccessary because it's a dictionary of the entire namespace of variables. 'inventory_hostname'
is a magic variable that indiciates the current host you are looping over in the host loop.
The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. 'inventory_hostname'
is a magic variable that indicates the current host you are looping over in the host loop.

.. _first_host_in_a_group:
@ -179,17 +179,7 @@ Notice how we interchanged the bracket syntax for dots -- that can be done anywh
|
|||
How do I copy files recursively onto a target host?
|
||||
+++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
The "copy" module doesn't handle recursive copies of directories. A common solution to do this is to use a local action to call 'rsync' to recursively copy files to the managed servers.
|
||||
|
||||
Here is an example::
|
||||
|
||||
---
|
||||
# ...
|
||||
tasks:
|
||||
- name: recursively copy files from management server to target
|
||||
local_action: command rsync -a /path/to/files $inventory_hostname:/path/to/target/
|
||||
|
||||
Note that you'll need passphrase-less SSH or ssh-agent set up to let rsync copy without prompting for a passphrase or password.
|
||||
The "copy" module has a recursive parameter, though if you want to do something more efficient for a large number of files, take a look at the "synchronize" module instead, which wraps rsync. See the module index for info on both of these modules.

.. _shell_env:

@ -256,7 +246,7 @@ Great question! Documentation for Ansible is kept in the main project git repos

How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++

If you would like to keep secret data in your Ansible content and still share it publically or keep things in source control, see :doc:`playbooks_vault`.
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :doc:`playbooks_vault`.

.. _i_dont_see_my_question:


@ -129,7 +129,7 @@ it will be automatically discoverable via a dynamic group like so::

    - ping

Using this philosophy can be a great way to manage groups dynamically, without
having to maintain seperate inventory.
having to maintain separate inventory.

.. _aws_pull:

245
docsite/rst/guide_gce.rst
Normal file

@ -0,0 +1,245 @@
Google Cloud Platform Guide
===========================

.. _gce_intro:

Introduction
------------

.. note:: This section of the documentation is under construction. We are in the process of adding more examples about all of the GCE modules and how they work together. Upgrades via github pull requests are welcome!

Ansible contains modules for managing Google Compute Engine resources, including creating instances, controlling network access, working with persistent disks, and managing
load balancers. Additionally, there is an inventory plugin that can automatically suck down all of your GCE instances into Ansible dynamic inventory, and create groups by tag and other properties.

The GCE modules all require the apache-libcloud module, which you can install from pip:

.. code-block:: bash

    $ pip install apache-libcloud

.. note:: If you're using Ansible on Mac OS X, libcloud also needs to access a CA cert chain. You'll need to download one (you can get one `here <http://curl.haxx.se/docs/caextract.html>`_).

Credentials
-----------

To work with the GCE modules, you'll first need to get some credentials. You can create a new one from the `console <https://console.developers.google.com/>`_ by going to the "APIs and Auth" section. Once you've created a new client ID and downloaded the generated private key (in the `pkcs12 format <http://en.wikipedia.org/wiki/PKCS_12>`_), you'll need to convert the key by running the following command:

.. code-block:: bash

    $ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out pkey.pem

There are two different ways to provide credentials to Ansible so that it can talk with Google Cloud for provisioning and configuration actions:

* by providing them to the modules directly
* by populating a ``secrets.py`` file

Calling Modules By Passing Credentials
``````````````````````````````````````

For the GCE modules you can specify the credentials as arguments:

* ``service_account_email``: email associated with the project
* ``pem_file``: path to the pem file
* ``project_id``: id of the project

For example, to create a new instance using the cloud module, you can use the following configuration:

.. code-block:: yaml

    - name: Create instance(s)
      hosts: localhost
      connection: local
      gather_facts: no

      vars:
        service_account_email: unique-id@developer.gserviceaccount.com
        pem_file: /path/to/project.pem
        project_id: project-id
        machine_type: n1-standard-1
        image: debian-7

      tasks:

        - name: Launch instances
          gce:
            instance_names: dev
            machine_type: "{{ machine_type }}"
            image: "{{ image }}"
            service_account_email: "{{ service_account_email }}"
            pem_file: "{{ pem_file }}"
            project_id: "{{ project_id }}"

Calling Modules with secrets.py
```````````````````````````````

Create a file ``secrets.py`` that looks like the following, and put it in some folder which is in your ``$PYTHONPATH``:

.. code-block:: python

    GCE_PARAMS = ('i...@project.googleusercontent.com', '/path/to/project.pem')
    GCE_KEYWORD_PARAMS = {'project': 'project-name'}

Now the modules can be used as above, but the account information can be omitted.
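
For instance, with ``secrets.py`` on your ``$PYTHONPATH``, a quick ad-hoc
sketch might look like this (the module arguments mirror the playbook above
and are illustrative only):

.. code-block:: bash

    $ ansible localhost -c local -m gce -a "instance_names=dev machine_type=n1-standard-1 image=debian-7"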

GCE Dynamic Inventory
---------------------

The best way to interact with your hosts is to use the gce inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed.

Note that when using the inventory script ``gce.py``, you also need to populate the ``gce.ini`` file that you can find in the plugins/inventory directory of the ansible checkout.

To use the GCE dynamic inventory script, copy ``gce.py`` from ``plugins/inventory`` into your inventory directory and make it executable. You can specify credentials for ``gce.py`` using the ``GCE_INI_PATH`` environment variable -- the default is to look for gce.ini in the same directory as the inventory script.

Let's see if inventory is working:

.. code-block:: bash

    $ ./gce.py --list

You should see output describing the hosts you have, if any, running in Google Compute Engine.

Now let's see if we can use the inventory script to talk to Google.

.. code-block:: bash

    $ GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup
    hostname | success >> {
      "ansible_facts": {
        "ansible_all_ipv4_addresses": [
          "x.x.x.x"
        ],

As with all dynamic inventory plugins in Ansible, you can configure the inventory path in ansible.cfg. The recommended way to use the inventory is to create an ``inventory`` directory, and place both the ``gce.py`` script and a file containing ``localhost`` in it. This can allow for cloud inventory to be used alongside local inventory (such as a physical datacenter) or machines running in different providers.

Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead of an individual file will cause ansible to evaluate each file in that directory for inventory.

Let's once again use our inventory script to see if it can talk to Google Cloud:

.. code-block:: bash

    $ ansible all -i inventory/ -m setup
    hostname | success >> {
      "ansible_facts": {
        "ansible_all_ipv4_addresses": [
          "x.x.x.x"
        ],

The output should be similar to the previous command. If you want less output and just want to check for SSH connectivity, use "-m ping" instead.

Use Cases
---------

For the following use case, let's use this small shell script as a wrapper.

.. code-block:: bash

    #!/bin/bash
    PLAYBOOK="$1"

    if [ -z "$PLAYBOOK" ]; then
        echo "You need to pass a playbook as an argument to this script."
        exit 1
    fi

    export SSL_CERT_FILE=$(pwd)/cacert.pem
    export ANSIBLE_HOST_KEY_CHECKING=False

    if [ ! -f "$SSL_CERT_FILE" ]; then
        curl -O http://curl.haxx.se/ca/cacert.pem
    fi

    ansible-playbook -v -i inventory/ "$PLAYBOOK"


Create an instance
``````````````````

The GCE module provides the ability to provision instances within Google Compute Engine. The provisioning task is typically performed from your Ansible control server against Google Cloud's API.

A playbook would look like this:

.. code-block:: yaml

    - name: Create instance(s)
      hosts: localhost
      gather_facts: no
      connection: local

      vars:
        machine_type: n1-standard-1 # default
        image: debian-7
        service_account_email: unique-id@developer.gserviceaccount.com
        pem_file: /path/to/project.pem
        project_id: project-id

      tasks:
        - name: Launch instances
          gce:
            instance_names: dev
            machine_type: "{{ machine_type }}"
            image: "{{ image }}"
            service_account_email: "{{ service_account_email }}"
            pem_file: "{{ pem_file }}"
            project_id: "{{ project_id }}"
            tags: webserver
          register: gce

        - name: Wait for SSH to come up
          wait_for: host={{ item.public_ip }} port=22 delay=10 timeout=60
          with_items: gce.instance_data

        - name: Add the new instances to the in-memory group 'new_instances'
          add_host: hostname={{ item.public_ip }} groupname=new_instances
          with_items: gce.instance_data

    - name: Manage new instances
      hosts: new_instances
      connection: ssh
      roles:
        - base_configuration
        - production_server

Note that use of the "add_host" module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines
in the 'new_instances' group, if so desired. Any sort of arbitrary configuration is possible at this point.

Configuring instances in a group
````````````````````````````````

All of the created instances in GCE are grouped by tag. Since this is a cloud, it's probably best to ignore hostnames and just focus on group management.

Normally we'd also use roles here, but the following example is a simple one. Here we will also use the "gce_net" module to open up access to port 80 on
these nodes.

The variables in the 'vars' section could also be kept in a 'vars_files' file or something encrypted with Ansible-vault, if you so choose. This is just
a basic example of what is possible::

    - name: Setup web servers
      hosts: tag_webserver
      gather_facts: no

      vars:
        machine_type: n1-standard-1 # default
        image: debian-7
        service_account_email: unique-id@developer.gserviceaccount.com
        pem_file: /path/to/project.pem
        project_id: project-id

      tasks:

        - name: Install lighttpd
          apt: pkg=lighttpd state=installed
          sudo: True

        - name: Allow HTTP
          local_action: gce_net
          args:
            fwname: "all-http"
            name: "default"
            allowed: "tcp:80"
            state: "present"
            service_account_email: "{{ service_account_email }}"
            pem_file: "{{ pem_file }}"
            project_id: "{{ project_id }}"

By pointing your browser to the IP of the server, you should see a page welcoming you.
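
A quick way to verify the same thing from the command line (the address is
illustrative):

.. code-block:: bash

    $ curl -I http://x.x.x.x/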

Upgrades to this documentation are welcome, hit the github link at the top right of this page if you would like to make additions!

@ -11,7 +11,7 @@ Introduction

Ansible contains a number of core modules for interacting with Rackspace Cloud.

The purpose of this section is to explain how to put Ansible modules together
(and use inventory scripts) to use Ansible in Rackspace Cloud context.
(and use inventory scripts) to use Ansible in a Rackspace Cloud context.

Prerequisites for using the rax modules are minimal. In addition to ansible itself,
all of the modules require and are tested against pyrax 1.5 or higher.

@ -32,7 +32,7 @@ to add localhost to the inventory file. (Ansible may not require this manual st

    [localhost]
    localhost ansible_connection=local

In playbook steps we'll typically be using the following pattern:
In playbook steps, we'll typically be using the following pattern:

.. code-block:: yaml


@ -66,21 +66,19 @@ https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authentic

Running from a Python Virtual Environment (Optional)
++++++++++++++++++++++++++++++++++++++++++++++++++++

Special considerations need to
be taken if pyrax is not installed globally but instead using a python virtualenv (it's fine if you install it globally).
Most users will not be using virtualenv, but some users, particularly Python developers sometimes like to.

Ansible assumes, unless otherwise instructed, that the python binary will live at
/usr/bin/python. This is done so via the interpret line in the modules, however
when instructed using ansible_python_interpreter, ansible will use this specified path instead for finding
python.

If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the interpreter line in modules, however when instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion as one may assume that modules running on 'localhost', or perhaps running via 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:

.. code-block:: ini

    [localhost]
    localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python

.. note::

    pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.

.. _provisioning:

Provisioning

@ -88,16 +86,20 @@ Provisioning

Now for the fun parts.

The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the
provisioning task will be performed from your Ansible control server against the Rackspace cloud API.
The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons:

- Avoiding installing the pyrax library on remote nodes
- No need to encrypt and distribute credentials to remote nodes
- Speed and simplicity

.. note::

    Authentication with the Rackspace-related modules is handled by either
    specifying your username and API key as environment variables or passing
    them as module arguments.
    them as module arguments, or by specifying the location of a credentials
    file.
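
For example, the environment variable form might look like this (the variable
names here are an assumption based on the rax modules of this era, and the
values are illustrative)::

    $ export RAX_USERNAME=yourusername
    $ export RAX_API_KEY=yourapikey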

Here is a basic example of provisioning a instance in ad-hoc mode:
Here is a basic example of provisioning an instance in ad-hoc mode:

.. code-block:: bash

@ -119,8 +121,9 @@ Here's what it would look like in a playbook, assuming the parameters were defin

        wait: yes
      register: rax

By registering the return value of the step, it is then possible to dynamically add the resulting hosts to inventory (temporarily, in memory).
This facilitates performing configuration actions on the hosts immediately in a subsequent task::
The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called "raxhosts", with each node's hostname, IP address, and root password being added to the inventory.

.. code-block:: yaml

    - name: Add the instances we created (by public IP) to the group 'raxhosts'
      local_action:
@ -132,7 +135,9 @@ This facilitates performing configuration actions on the hosts immediately in a
|
|||
with_items: rax.success
|
||||
when: rax.action == 'create'
|
||||
|
||||
With the host group now created, a second play in your provision playbook could now configure them, for example::
|
||||
With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
- name: Configuration play
|
||||
hosts: raxhosts
|
||||
|
@ -141,7 +146,6 @@ With the host group now created, a second play in your provision playbook could
|
|||
- ntp
|
||||
- webserver
|
||||
|
||||
|
||||
The method above ties the configuration of a host with the provisioning step. This isn't always what you want, and leads us
|
||||
to the next section.
|
||||
|
||||
|
@ -150,41 +154,28 @@ to the next section.
|
|||
Host Inventory
|
||||
``````````````
|
||||
|
||||
Once your nodes are spun up, you'll probably want to talk to them again.
|
||||
Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle his is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up Ansible via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, etc. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
|
||||
|
||||
The best way to handle his is to use the rax inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what
|
||||
nodes you have to manage.
|
||||
|
||||
You might want to use this even if you are spinning up Ansible via other tools, including the Rackspace Cloud user interface.
|
||||
|
||||
The inventory plugin can be used to group resources by their meta data. Utilizing meta data is highly
|
||||
recommended in rax and can provide an easy way to sort between host groups and roles.
|
||||
|
||||
If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file,
|
||||
though this is less recommended.
|
||||
|
||||
In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common
|
||||
directory and be sure the scripts are chmod +x, and the INI-based ones are not.
|
||||
In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
|
||||
|
||||
.. _raxpy:
|
||||
|
||||
rax.py
|
||||
++++++
|
||||
|
||||
To use the rackspace dynamic inventory script, copy ``rax.py`` from ``plugins/inventory`` into your inventory directory and make it executable. You can specify credentials for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable.
|
||||
To use the rackspace dynamic inventory script, copy ``rax.py`` into your inventory directory and make it executable. You can specify a credentails file for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable.
|
||||
|
||||
.. note:: Dynamic inventory scripts (like ``rax.py``) are saved in ``/usr/share/ansible/inventory`` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to ``$VIRTUALENV/share/inventory``.
|
||||
|
||||
.. note:: Users of :doc:`tower` will note that dynamic inventory is natively supported by Tower, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps::
|
||||
|
||||
$ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup
|
||||
|
||||
``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a
|
||||
comma separated list of regions.
|
||||
``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a comma separated list of regions.
|
||||
|
||||
When using ``rax.py``, you will not have a 'localhost' defined in the inventory.
|
||||
|
||||
As mentioned previously, you will often be running most of these modules outside of the host loop,
|
||||
and will need 'localhost' defined. The recommended way to do this, would be to create an ``inventory`` directory,
|
||||
and place both the ``rax.py`` script and a file containing ``localhost`` in it.
|
||||
As mentioned previously, you will often be running most of these modules outside of the host loop, and will need 'localhost' defined. The recommended way to do this, would be to create an ``inventory`` directory, and place both the ``rax.py`` script and a file containing ``localhost`` in it.
|
||||
|
||||
Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead
|
||||
of an individual file, will cause ansible to evaluate each file in that directory for inventory.
|
||||
|
@ -295,8 +286,7 @@ following information, which will be utilized for inventory and variables.
|
|||
Standard Inventory
|
||||
++++++++++++++++++
|
||||
|
||||
When utilizing a standard ini formatted inventory file (as opposed to the inventory plugin),
|
||||
it may still be adventageous to retrieve discoverable hostvar information from the Rackspace API.
|
||||
When utilizing a standard ini formatted inventory file (as opposed to the inventory plugin), it may still be adventageous to retrieve discoverable hostvar information from the Rackspace API.
|
||||
|
||||
This can be achieved with the ``rax_facts`` module and an inventory file similar to the following:
|
||||
|
||||
@ -579,7 +569,7 @@ Autoscaling with Tower

:doc:`tower` also contains a very nice feature for auto-scaling use cases.
In this mode, a simple curl script can call a defined URL and the server will "dial out" to the requester
and configure an instance that is spinning up. This can be a great way to reconfigure ephmeral nodes.
and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes.
See the Tower documentation for more details.

A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded

@ -587,9 +577,16 @@ and less information has to be shared with remote hosts.

.. _pending_information:

Pending Information
```````````````````
Orchestration in the Rackspace Cloud
++++++++++++++++++++++++++++++++++++

Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:

* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
* Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively


More to come!


@ -172,7 +172,7 @@ Here's another example, from the same template::

    {% endfor %}

This loops over all of the hosts in the group called ``monitoring``, and adds an ACCEPT line for
each monitoring hosts's default IPV4 address to the current machine's iptables configuration, so that Nagios can monitor those hosts.
each monitoring host's default IPV4 address to the current machine's iptables configuration, so that Nagios can monitor those hosts.

You can learn a lot more about Jinja2 and its capabilities `here <http://jinja.pocoo.org/docs/>`_, and you
can read more about Ansible variables in general in the :doc:`playbooks_variables` section.
@ -184,7 +184,7 @@ The Rolling Upgrade
|
|||
|
||||
Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is where Ansible's
|
||||
orchestration features come into play. While some applications use the term 'orchestration' to mean basic ordering or command-blasting, Ansible
|
||||
referes to orchestration as 'conducting machines like an orchestra', and has a pretty sophisticated engine for it.
|
||||
refers to orchestration as 'conducting machines like an orchestra', and has a pretty sophisticated engine for it.
|
||||
|
||||
Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called ``rolling_upgrade.yml``.
|
||||
|
||||
|
@ -201,7 +201,7 @@ The next part is the update play. The first part looks like this::
|
|||
user: root
|
||||
serial: 1
|
||||
|
||||
This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once. If it's not specified, Ansible will paralleize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time.
|
||||
This is just a normal play definition, operating on the ``webservers`` group. The ``serial`` keyword tells Ansible how many servers to operate on at once. If it's not specified, Ansible will parallelize these operations up to the default "forks" limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set ``serial`` to 1, for one host at a time. If you have 100, maybe you could set ``serial`` to 10, for ten at a time.
|
||||
|
||||
Here is the next part of the update play::
|
||||
|
||||
|
|
|
@ -7,7 +7,7 @@ Introduction
|
|||
````````````
|
||||
|
||||
Vagrant is a tool to manage virtual machine environments, and allows you to
|
||||
configure and use reproducable work environments on top of various
|
||||
configure and use reproducible work environments on top of various
|
||||
virtualization and cloud platforms. It also has integration with Ansible as a
|
||||
provisioner for these virtual machines, and the two tools work together well.
|
||||
|
||||
|
|
|
@ -8,8 +8,9 @@ This section is new and evolving. The idea here is explore particular use cases
|
|||
|
||||
guide_aws
|
||||
guide_rax
|
||||
guide_gce
|
||||
guide_vagrant
|
||||
guide_rolling_upgrade
|
||||
|
||||
Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/Digital Ocean, Continous Deployment, and more.
|
||||
Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/Digital Ocean, Continuous Deployment, and more.
|
||||
|
||||
|
|
|
@ -3,7 +3,7 @@ Ansible Guru
|
|||
|
||||
While many users should be able to get on fine with the documentation, mailing list, and IRC, sometimes you want a bit more.
|
||||
|
||||
`Ansible Guru <http://ansible.com/ansible-guru>`_ is an offering from Ansible, Inc that helps users who would like more dedicated help with Ansible, including building playbooks, best practices, architecture suggestions, and more -- all from our awesome support and services team. It also includes some useful discounts and also some free T-shirts, though you shoudn't get it just for the free shirts! It's a great way to train up to becoming an Ansible expert.
|
||||
`Ansible Guru <http://ansible.com/ansible-guru>`_ is an offering from Ansible, Inc that helps users who would like more dedicated help with Ansible, including building playbooks, best practices, architecture suggestions, and more -- all from our awesome support and services team. It also includes some useful discounts and also some free T-shirts, though you shouldn't get it just for the free shirts! It's a great way to train up to becoming an Ansible expert.
|
||||
|
||||
For those interested, click through the link above. You can sign up in minutes!
|
||||
|
||||
|
|
|
@ -16,7 +16,7 @@ We believe simplicity is relevant to all sizes of environments and design for bu
|
|||
Ansible manages machines in an agentless manner. There is never a question of how to
|
||||
upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. As OpenSSH is one of the most peer reviewed open source components, the security exposure of using the tool is greatly reduced. Ansible is decentralized -- it relies on your existing OS credentials to control access to remote machines; if needed it can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
|
||||
|
||||
This documentation covers the current released version of Ansible (1.5) and also some development version features (1.6). For recent features, in each section, the version of Ansible where the feature is added is indicated. Ansible, Inc releases a new major release of Ansible approximately every 2 months. The core application evolves somewhat conservatively, valuing simplicity in language design and setup, while the community around new modules and plugins being developed and contributed moves very very quickly, typically adding 20 or so new modules in each release.
|
||||
This documentation covers the current released version of Ansible (1.5.3) and also some development version features (1.6). For recent features, in each section, the version of Ansible where the feature is added is indicated. Ansible, Inc releases a new major release of Ansible approximately every 2 months. The core application evolves somewhat conservatively, valuing simplicity in language design and setup, while the community around new modules and plugins being developed and contributed moves very very quickly, typically adding 20 or so new modules in each release.
|
||||
|
||||
.. _an_introduction:
|
||||
|
||||
|
|
|
@ -248,7 +248,7 @@ Be sure to use a high enough ``--forks`` value if you want to get all of your jo
|
|||
very quickly. After the time limit (in seconds) runs out (``-B``), the process on
|
||||
the remote nodes will be terminated.
|
||||
|
||||
Typically you'll be only be backgrounding long-running
|
||||
Typically you'll only be backgrounding long-running
|
||||
shell commands or software upgrades only. Backgrounding the copy module does not do a background file transfer. :doc:`Playbooks <playbooks>` also support polling, and have a simplified syntax for this.
|
||||
|
||||
.. _checking_facts:
|
||||
|
|

@ -211,6 +211,16 @@ is very very conservative::

    forks=5

.. _gathering:

gathering
=========

New in 1.6, the 'gathering' setting controls the default policy of facts gathering (variables discovered about remote systems).

The value 'implicit' is the default, meaning facts will be gathered per play unless 'gather_facts: False' is set in the play. The value 'explicit' is the inverse, facts will not be gathered unless directly requested in the play.

The value 'smart' means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run. This option can be useful for those wishing to save fact gathering time.
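
For example, to opt into the time-saving 'smart' behavior described above, set
this in the '[defaults]' section of ansible.cfg::

    gathering = smart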

hash_behaviour
==============


@ -310,6 +320,13 @@ different locations::

Most users will not need to use this feature. See :doc:`developing_plugins` for more details

.. _module_lang:

module_lang
===========

This is to set the default language to communicate between the module and the system. By default, the value is 'C'.

.. _module_name:

module_name


@ -422,6 +439,10 @@ choose to establish a convention to checkout roles in /opt/mysite/roles like so:

    roles_path = /opt/mysite/roles

Additional paths can be provided separated by colon characters, in the same way as other pathstrings::

    roles_path = /opt/mysite/roles:/opt/othersite/roles

Roles will be first searched for in the playbook directory. Should a role not be found, it will indicate all the possible paths
that were searched.

@ -622,4 +643,29 @@ This setting controls the timeout for the socket connect call, and should be kep
|
|||
|
||||
Note, this value can be set to less than one second, however it is probably not a good idea to do so unless you're on a very fast and reliable LAN. If you're connecting to systems over the internet, it may be necessary to increase this timeout.
|
||||
|
||||
.. _accelerate_daemon_timeout:
|
||||
|
||||
accelerate_daemon_timeout
|
||||
=========================
|
||||
|
||||
.. versionadded:: 1.6
|
||||
|
||||
This setting controls the timeout for the accelerated daemon, as measured in minutes. The default daemon timeout is 30 minutes::
|
||||
|
||||
accelerate_daemon_timeout = 30
|
||||
|
||||
Note, prior to 1.6, the timeout was hard-coded from the time of the daemon's launch. For version 1.6+, the timeout is now based on the last activity to the daemon and is configurable via this option.
|
||||
|
||||
.. _accelerate_multi_key:
|
||||
|
||||
accelerate_multi_key
|
||||
====================
|
||||
|
||||
.. versionadded:: 1.6
|
||||
|
||||
If enabled, this setting allows multiple private keys to be uploaded to the daemon. Any clients connecting to the daemon must also enable this option::
|
||||
|
||||
accelerate_multi_key = yes
|
||||
|
||||
New clients first connect to the target node over SSH to upload the key, which is done via a local socket file, so they must have the same access as the user that launched the daemon originally.
|
||||
|
||||
|
|

@ -28,11 +28,11 @@ It is expected that many Ansible users with a reasonable amount of physical hard

While primarily used to kickoff OS installations and manage DHCP and DNS, Cobbler has a generic
layer that allows it to represent data for multiple configuration management systems (even at the same time), and has
been referred to as a 'lightweight CMDB' by some admins. This particular script will communicate with Cobbler
using Cobbler's XMLRPC API.
been referred to as a 'lightweight CMDB' by some admins.

To tie Ansible's inventory to Cobbler (optional), copy `this script <https://raw.github.com/ansible/ansible/devel/plugins/inventory/cobbler.py>`_ to /etc/ansible and `chmod +x` the file. cobblerd will now need
to be running when you are using Ansible and you'll need to use Ansible's ``-i`` command line option (e.g. ``-i /etc/ansible/cobbler.py``).
This particular script will communicate with Cobbler using Cobbler's XMLRPC API.

First test the script by running ``/etc/ansible/cobbler.py`` directly. You should see some JSON data output, but it may not have anything in it just yet.


@ -204,6 +204,18 @@ You may also wish to install from ports, run:

    $ sudo make -C /usr/ports/sysutils/ansible install

.. _from_brew:

Latest Releases Via Homebrew (Mac OSX)
++++++++++++++++++++++++++++++++++++++

To install on a Mac, make sure you have Homebrew, then run:

.. code-block:: bash

    $ brew update
    $ brew install ansible

.. _from_pip:

Latest Releases Via Pip


@ -17,7 +17,7 @@ handle executing system commands.

Let's review how we execute three different modules from the command line::

    ansible webservers -m service -a "name=httpd state=running"
    ansible webservers -m service -a "name=httpd state=started"
    ansible webservers -m ping
    ansible webservers -m command -a "/sbin/reboot -t now"


@ -8,7 +8,7 @@ You Might Not Need This!

Are you running Ansible 1.5 or later? If so, you may not need accelerate mode due to a new feature called "SSH pipelining" and should read the :ref:`pipelining` section of the documentation.

For users on 1.5 and later, accelerate mode only makes sense if you are (A) are managing from an Enterprise Linux 6 or earlier host
For users on 1.5 and later, accelerate mode only makes sense if you (A) are managing from an Enterprise Linux 6 or earlier host
and still are on paramiko, or (B) can't enable TTYs with sudo as described in the pipelining docs.

If you can use pipelining, Ansible will reduce the amount of files transferred over the wire,


@ -76,4 +76,11 @@ As noted above, accelerated mode also supports running tasks via sudo, however t

* You must remove requiretty from your sudoers options.
* Prompting for the sudo password is not yet supported, so the NOPASSWD option is required for sudo'ed commands.

As of Ansible version `1.6`, you can also allow the use of multiple keys for connections from multiple Ansible management nodes. To do so, add the following option
to your `ansible.cfg` configuration::

    accelerate_multi_key = yes

When enabled, the daemon will open a UNIX socket file (by default `$ANSIBLE_REMOTE_TEMP/.ansible-accelerate/.local.socket`). New connections over SSH can
use this socket file to upload new keys to the daemon.


@ -51,6 +51,8 @@ The top level of the directory would contain files and directories like so::

        foo.sh            # <-- script files for use with the script resource
    vars/                 #
        main.yml          # <-- variables associated with this role
    meta/                 #
        main.yml          # <-- role dependencies

webtier/                  # same kind of structure as "common" was above, done for the webtier role
monitoring/               # ""
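
A minimal sketch of what that ``meta/main.yml`` file might contain (the role
name is illustrative)::

    ---
    dependencies:
      - { role: common }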

@ -223,8 +225,8 @@ What about just the first 10, and then the next 10?::

And of course just basic ad-hoc stuff is also possible::

    ansible -i production -m ping
    ansible -i production -m command -a '/sbin/reboot' --limit boston
    ansible boston -i production -m ping
    ansible boston -i production -m command -a '/sbin/reboot'

And there are some useful commands to know (at least in 1.1 and higher)::


@ -23,7 +23,7 @@ The environment can also be stored in a variable, and accessed like so::

    - hosts: all
      remote_user: root

      # here we make a variable named "env" that is a dictionary
      # here we make a variable named "proxy_env" that is a dictionary
      vars:
        proxy_env:
          http_proxy: http://proxy.example.com:8080


@ -350,7 +350,7 @@ Assuming you load balance your checkout location, ansible-pull scales essentiall

Run ``ansible-pull --help`` for details.

There's also a `clever playbook <https://github.com/ansible/ansible-examples/blob/master/language_features/ansible_pull.yml>`_ available to using ansible in push mode to configure ansible-pull via a crontab!
There's also a `clever playbook <https://github.com/ansible/ansible-examples/blob/master/language_features/ansible_pull.yml>`_ available to configure ansible-pull via a crontab from push mode.

.. _tips_and_tricks:


@ -370,7 +370,7 @@ package is installed. Try it!

To see what hosts would be affected by a playbook before you run it, you
can do this::

    ansible-playbook playbook.yml --list-hosts.
    ansible-playbook playbook.yml --list-hosts

.. seealso::


@ -7,6 +7,8 @@ in Ansible, and are typically used to load variables or templates with informati

.. note:: This is considered an advanced feature, and many users will probably not rely on these features.

.. note:: Lookups occur on the local computer, not on the remote computer.

.. contents:: Topics

.. _getting_file_contents:


@ -250,7 +250,7 @@ that matches a given criteria, and some of the filenames are determined by varia

    - name: INTERFACES | Create Ansible header for /etc/network/interfaces
      template: src={{ item }} dest=/etc/foo.conf
      with_first_found:
        - "{{ansible_virtualization_type}_foo.conf"
        - "{{ansible_virtualization_type}}_foo.conf"
        - "default_foo.conf"

This tool also has a long form version that allows for configurable search paths. Here's an example::


@ -101,7 +101,7 @@ Inside a template you automatically have access to all of the variables that are

it's more than that -- you can also read variables about other hosts. We'll show how to do that in a bit.

.. note:: ansible allows Jinja2 loops and conditionals in templates, but in playbooks, we do not use them. Ansible
   templates are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-generate
   playbooks are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-generate
   pieces of files, or to have other ecosystem tools read Ansible files. Not everyone will need this but it can unlock
   possibilities.
@ -208,11 +208,62 @@ To get the symmetric difference of 2 lists (items exclusive to each list)::
|
|||
|
||||
{{ list1 | symmetric_difference(list2) }}
|
||||
|
||||
.. _version_comparison_filters:
|
||||
|
||||
Version Comparison Filters
|
||||
--------------------------
|
||||
|
||||
.. versionadded:: 1.6
|
||||
|
||||
To compare a version number, such as checking if the ``ansible_distribution_version``
|
||||
version is greater than or equal to '12.04', you can use the ``version_compare`` filter::
|
||||
|
||||
The ``version_compare`` filter can also be used to evaluate the ``ansible_distribution_version``::
|
||||
|
||||
{{ ansible_distribution_version | version_compare('12.04', '>=') }}
|
||||
|
||||
If ``ansible_distribution_version`` is greater than or equal to 12, this filter will return True, otherwise
|
||||
it will return False.
|
||||
|
||||
The ``version_compare`` filter accepts the following operators::
|
||||
|
||||
<, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne
|
||||
|
||||
This filter also accepts a 3rd parameter, ``strict`` which defines if strict version parsing should
|
||||
be used. The default is ``False``, and if set as ``True`` will use more strict version parsing::
|
||||
|
||||
{{ sample_version_var | version_compare('1.0', operator='lt', strict=True) }}
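
A minimal sketch of how this filter might gate a task in a playbook (the
package name is illustrative)::

    - name: install a package only on Ubuntu 12.04 or newer
      apt: pkg=foo state=installed
      when: ansible_distribution_version | version_compare('12.04', '>=')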

.. _random_filter:

Random Number Filter
--------------------

.. versionadded:: 1.6

To get a random number from 0 to supplied end::

    {{ 59 |random}} * * * * root /script/from/cron

Get a random number from 0 to 100 but in steps of 10::

    {{ 100 |random(step=10) }} => 70

Get a random number from 1 to 100 but in steps of 10::

    {{ 100 |random(1, 10) }} => 31
    {{ 100 |random(start=1, step=10) }} => 51


.. _other_useful_filters:

Other Useful Filters
--------------------

To concatenate a list into a string::

    {{ list | join(" ") }}

To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt'::

    {{ path | basename }}


@ -240,6 +291,14 @@ doesn't know it is a boolean value::

    - debug: msg=test
      when: some_string_value | bool

To replace text in a string with regex, use the "regex_replace" filter::

    # convert "ansible" to "able"
    {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}

    # convert "foobar" to "bar"
    {{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}

A few useful filters are typically added with each new Ansible release. The development documentation shows
how to extend Ansible filters by writing your own as plugins, though in general, we encourage new ones
to be added to core so everyone can make use of them.
@ -837,8 +896,11 @@ If multiple variables of the same name are defined in different places, they win
|
|||
* -e variables always win
|
||||
* then comes "most everything else"
|
||||
* then comes variables defined in inventory
|
||||
* then comes facts discovered about a system
|
||||
* then "role defaults", which are the most "defaulty" and lose in priority to everything.
|
||||
|
||||
.. note:: In versions prior to 1.5.4, facts discovered about a system were in the "most everything else" category above.
|
||||
|
||||
That seems a little theoretical. Let's show some examples and where you would choose to put what based on the kind of
|
||||
control you might want over values.
|
||||
|
||||
|
@ -880,7 +942,7 @@ See :doc:`playbooks_roles` for more info about this::
|
|||
|
||||
---
|
||||
# file: roles/x/defaults/main.yml
|
||||
# if not overriden in inventory or as a parameter, this is the value that will be used
|
||||
# if not overridden in inventory or as a parameter, this is the value that will be used
|
||||
http_port: 80
|
||||
|
||||
if you are writing a role and want to ensure the value in the role is absolutely used in that role, and is not going to be overridden
|
||||
|

@ -14,7 +14,7 @@ What Can Be Encrypted With Vault

The vault feature can encrypt any structured data file used by Ansible. This can include "group_vars/" or "host_vars/" inventory variables, variables loaded by "include_vars" or "vars_files", or variable files passed on the ansible-playbook command line with "-e @file.yml" or "-e @file.json". Role variables and defaults are also included!

Because Ansible tasks, handlers, and so on are also data, these two can also be encrypted with vault. If you'd like to not betray what variables you are even using, you can go as far to keep an individual task file entirely encrypted. However, that might be a little much and could annoy your coworkers :)
Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with vault. If you'd like to not betray what variables you are even using, you can go as far to keep an individual task file entirely encrypted. However, that might be a little much and could annoy your coworkers :)

.. _creating_files:


@ -22,8 +22,17 @@ sudo_user = root

#ask_pass = True
transport = smart
remote_port = 22
module_lang = C

# additional paths to search for roles in, colon seperated
# plays will gather facts by default, which contain information about
# the remote system.
#
# smart - gather by default, but don't regather if already gathered
# implicit - gather by default, turn off with gather_facts: False
# explicit - do not gather by default, must say gather_facts: True
gathering = implicit

# additional paths to search for roles in, colon separated
#roles_path = /etc/ansible/roles

# uncomment this to disable SSH key host checking


@ -82,7 +91,7 @@ ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid}

# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False

# set plugin path directories here, seperate with colons
# set plugin path directories here, separate with colons
action_plugins = /usr/share/ansible_plugins/action_plugins
callback_plugins = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins


@ -98,6 +107,20 @@ filter_plugins = /usr/share/ansible_plugins/filter_plugins

# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1

# the CA certificate path used for validating SSL certs. This path
# should exist on the controlling node, not the target nodes
# common locations:
# RHEL/CentOS: /etc/pki/tls/certs/ca-bundle.crt
# Fedora     : /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
# Ubuntu     : /usr/share/ca-certificates/cacert.org/cacert.org.crt
#ca_file_path =

# the http user-agent string to use when fetching urls. Some web server
# operators block the default urllib user agent as it is frequently used
# by malicious attacks/scripts, so we set it to something unique to
# avoid issues.
#http_user_agent = ansible-agent

[paramiko_connection]

# uncomment this line to cause the paramiko connection plugin to not record new host


@ -145,3 +168,14 @@ filter_plugins = /usr/share/ansible_plugins/filter_plugins

accelerate_port = 5099
accelerate_timeout = 30
accelerate_connect_timeout = 5.0

# The daemon timeout is measured in minutes. This time is measured
# from the last activity to the accelerate daemon.
accelerate_daemon_timeout = 30

# If set to yes, accelerate_multi_key will allow multiple
# private keys to be uploaded to it, though each user must
# have access to the system via SSH to add a new key. The default
# is "no".
#accelerate_multi_key = yes


@ -17,7 +17,7 @@ and do not wish to install them from your operating system package manager, you

can install them from pip

    $ easy_install pip               # if pip is not already available
    $ pip install pyyaml jinja2
    $ pip install pyyaml jinja2 nose passlib pycrypto

From there, follow ansible instructions on docs.ansible.com as normal.


@ -185,7 +185,7 @@ def process_module(module, options, env, template, outputname, module_map):

    fname = module_map[module]

    # ignore files with extensions
    if os.path.basename(fname).find(".") != -1:
    if "." in os.path.basename(fname):
        return

    # use ansible core library to parse out doc metadata YAML and plaintext examples


@ -93,6 +93,10 @@ def boilerplate_module(modfile, args, interpreter):

        # Argument is a YAML file (JSON is a subset of YAML)
        complex_args = utils.combine_vars(complex_args, utils.parse_yaml_from_file(args[1:]))
        args=''
    elif args.startswith("{"):
        # Argument is a YAML document (not a file)
        complex_args = utils.combine_vars(complex_args, utils.parse_yaml(args))
        args=''

    inject = {}
    if interpreter:


@ -115,6 +115,12 @@ def log_unflock(runner):

    except OSError:
        pass

def set_playbook(callback, playbook):
    ''' used to notify callback plugins of playbook context '''
    callback.playbook = playbook
    for callback_plugin in callback_plugins:
        callback_plugin.playbook = playbook

def set_play(callback, play):
    ''' used to notify callback plugins of context '''
    callback.play = play

@ -250,7 +256,7 @@ def regular_generic_msg(hostname, result, oneline, caption):

def banner_cowsay(msg):

    if msg.find(": [") != -1:
    if ": [" in msg:
        msg = msg.replace("[","")
    if msg.endswith("]"):
        msg = msg[:-1]

@ -15,7 +15,6 @@

# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

import os
import sys
import constants

@ -37,7 +36,7 @@ else:

    # curses returns an error (e.g. could not find terminal)
    ANSIBLE_COLOR=False

if os.getenv("ANSIBLE_FORCE_COLOR") is not None:
if constants.ANSIBLE_FORCE_COLOR:
    ANSIBLE_COLOR=True

# --- begin "pretty"


@ -93,8 +93,8 @@ else:

    DIST_MODULE_PATH = '/usr/share/ansible/'

# check all of these extensions when looking for yaml files for things like
# group variables
YAML_FILENAME_EXTENSIONS = [ "", ".yml", ".yaml" ]
# group variables -- really anything we can load
YAML_FILENAME_EXTENSIONS = [ "", ".yml", ".yaml", ".json" ]

# sections in config file
DEFAULTS='defaults'
@ -134,6 +134,7 @@ DEFAULT_SU = get_config(p, DEFAULTS, 'su', 'ANSIBLE_SU', False, boolean=True)
|
|||
DEFAULT_SU_FLAGS = get_config(p, DEFAULTS, 'su_flags', 'ANSIBLE_SU_FLAGS', '')
|
||||
DEFAULT_SU_USER = get_config(p, DEFAULTS, 'su_user', 'ANSIBLE_SU_USER', 'root')
|
||||
DEFAULT_ASK_SU_PASS = get_config(p, DEFAULTS, 'ask_su_pass', 'ANSIBLE_ASK_SU_PASS', False, boolean=True)
|
||||
DEFAULT_GATHERING = get_config(p, DEFAULTS, 'gathering', 'ANSIBLE_GATHERING', 'implicit').lower()
|
||||
|
||||
DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '/usr/share/ansible_plugins/action_plugins')
|
||||
DEFAULT_CALLBACK_PLUGIN_PATH = get_config(p, DEFAULTS, 'callback_plugins', 'ANSIBLE_CALLBACK_PLUGINS', '/usr/share/ansible_plugins/callback_plugins')
|
||||
|
@ -143,6 +144,7 @@ DEFAULT_VARS_PLUGIN_PATH = get_config(p, DEFAULTS, 'vars_plugins', '
|
|||
DEFAULT_FILTER_PLUGIN_PATH = get_config(p, DEFAULTS, 'filter_plugins', 'ANSIBLE_FILTER_PLUGINS', '/usr/share/ansible_plugins/filter_plugins')
|
||||
DEFAULT_LOG_PATH = shell_expand_path(get_config(p, DEFAULTS, 'log_path', 'ANSIBLE_LOG_PATH', ''))
|
||||
|
||||
ANSIBLE_FORCE_COLOR = get_config(p, DEFAULTS, 'force_color', 'ANSIBLE_FORCE_COLOR', None, boolean=True)
|
||||
ANSIBLE_NOCOLOR = get_config(p, DEFAULTS, 'nocolor', 'ANSIBLE_NOCOLOR', None, boolean=True)
|
||||
ANSIBLE_NOCOWS = get_config(p, DEFAULTS, 'nocows', 'ANSIBLE_NOCOWS', None, boolean=True)
|
||||
DISPLAY_SKIPPED_HOSTS = get_config(p, DEFAULTS, 'display_skipped_hosts', 'DISPLAY_SKIPPED_HOSTS', True, boolean=True)
|
||||
|
@ -160,9 +162,11 @@ ZEROMQ_PORT = get_config(p, 'fireball_connection', 'zeromq_po
|
|||
ACCELERATE_PORT = get_config(p, 'accelerate', 'accelerate_port', 'ACCELERATE_PORT', 5099, integer=True)
|
||||
ACCELERATE_TIMEOUT = get_config(p, 'accelerate', 'accelerate_timeout', 'ACCELERATE_TIMEOUT', 30, integer=True)
|
||||
ACCELERATE_CONNECT_TIMEOUT = get_config(p, 'accelerate', 'accelerate_connect_timeout', 'ACCELERATE_CONNECT_TIMEOUT', 1.0, floating=True)
|
||||
ACCELERATE_DAEMON_TIMEOUT = get_config(p, 'accelerate', 'accelerate_daemon_timeout', 'ACCELERATE_DAEMON_TIMEOUT', 30, integer=True)
|
||||
ACCELERATE_KEYS_DIR = get_config(p, 'accelerate', 'accelerate_keys_dir', 'ACCELERATE_KEYS_DIR', '~/.fireball.keys')
|
||||
ACCELERATE_KEYS_DIR_PERMS = get_config(p, 'accelerate', 'accelerate_keys_dir_perms', 'ACCELERATE_KEYS_DIR_PERMS', '700')
|
||||
ACCELERATE_KEYS_FILE_PERMS = get_config(p, 'accelerate', 'accelerate_keys_file_perms', 'ACCELERATE_KEYS_FILE_PERMS', '600')
|
||||
ACCELERATE_MULTI_KEY = get_config(p, 'accelerate', 'accelerate_multi_key', 'ACCELERATE_MULTI_KEY', False, boolean=True)
|
||||
PARAMIKO_PTY = get_config(p, 'paramiko_connection', 'pty', 'ANSIBLE_PARAMIKO_PTY', True, boolean=True)
|
||||
|
||||
# characters included in auto-generated passwords
|
||||
|
|
|
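Since ".json" is now an accepted extension, a group variables file can be plain JSON. A minimal illustrative example (file name and keys are hypothetical):

    # group_vars/webservers.json -- loaded because ".json" is now listed in
    # YAML_FILENAME_EXTENSIONS; data beginning with "{" or "[" is parsed as JSON
    {
        "http_port": 8080,
        "max_clients": 200
    }
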
@@ -99,12 +99,40 @@ class Inventory(object):
            self.host_list = os.path.join(self.host_list, "")
            self.parser = InventoryDirectory(filename=host_list)
            self.groups = self.parser.groups.values()
        elif utils.is_executable(host_list):
            self.parser = InventoryScript(filename=host_list)
            self.groups = self.parser.groups.values()
        else:
            self.parser = InventoryParser(filename=host_list)
            self.groups = self.parser.groups.values()
            # check to see if the specified file starts with a
            # shebang (#!/), so if an error is raised by the parser
            # class we can show a more apropos error
            shebang_present = False
            try:
                inv_file = open(host_list)
                first_line = inv_file.readlines()[0]
                inv_file.close()
                if first_line.startswith('#!'):
                    shebang_present = True
            except:
                pass

            if utils.is_executable(host_list):
                try:
                    self.parser = InventoryScript(filename=host_list)
                    self.groups = self.parser.groups.values()
                except:
                    if not shebang_present:
                        raise errors.AnsibleError("The file %s is marked as executable, but failed to execute correctly. " % host_list + \
                            "If this is not supposed to be an executable script, correct this with `chmod -x %s`." % host_list)
                    else:
                        raise
            else:
                try:
                    self.parser = InventoryParser(filename=host_list)
                    self.groups = self.parser.groups.values()
                except:
                    if shebang_present:
                        raise errors.AnsibleError("The file %s looks like it should be an executable inventory script, but is not marked executable. " % host_list + \
                            "Perhaps you want to correct this with `chmod +x %s`?" % host_list)
                    else:
                        raise

            utils.plugins.vars_loader.add_directory(self.basedir(), with_subdir=True)
        else:
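The shebang probe above exists so a better error can be raised when an inventory source is, or should be, a dynamic inventory script. A minimal sketch of such a script (host names hypothetical); it must be executable and start with a shebang:

    #!/usr/bin/env python
    # minimal dynamic inventory sketch; mark it `chmod +x` so
    # utils.is_executable() routes it to InventoryScript
    import json
    print json.dumps({"webservers": ["web1.example.com", "web2.example.com"]})
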
@@ -208,12 +236,14 @@ class Inventory(object):
        """

        # The regex used to match on the range, which can be [x] or [x-y].
        pattern_re = re.compile("^(.*)\[([0-9]+)(?:(?:-)([0-9]+))?\](.*)$")
        pattern_re = re.compile("^(.*)\[([-]?[0-9]+)(?:(?:-)([0-9]+))?\](.*)$")
        m = pattern_re.match(pattern)
        if m:
            (target, first, last, rest) = m.groups()
            first = int(first)
            if last:
                if first < 0:
                    raise errors.AnsibleError("invalid range: negative indices cannot be used as the first item in a range")
                last = int(last)
            else:
                last = first
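With the widened regex, a leading negative index is accepted for single subscripts while still being rejected as the start of a range; illustrative patterns:

    webservers[0]      # first host in the group
    webservers[-1]     # last host in the group
    webservers[0-25]   # a slice of the group
    webservers[-2-5]   # error: negative first item in a range
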
@@ -245,10 +275,13 @@ class Inventory(object):
            right = 0
        left=int(left)
        right=int(right)
        if left != right:
            return hosts[left:right]
        else:
            return [ hosts[left] ]
        try:
            if left != right:
                return hosts[left:right]
            else:
                return [ hosts[left] ]
        except IndexError:
            raise errors.AnsibleError("no hosts matching the pattern '%s' were found" % pat)

    def _create_implicit_localhost(self, pattern):
        new_host = Host(pattern)
@@ -363,9 +396,9 @@ class Inventory(object):
        vars_results = [ plugin.run(host, vault_password=vault_password) for plugin in self._vars_plugins ]
        for updated in vars_results:
            if updated is not None:
                vars.update(updated)
                vars = utils.combine_vars(vars, updated)

        vars.update(host.get_variables())
        vars = utils.combine_vars(vars, host.get_variables())
        if self.parser is not None:
            vars = utils.combine_vars(vars, self.parser.get_host_variables(host))
        return vars

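combine_vars replaces the bare dict.update calls so that variable precedence can honor the configured hash behaviour. A minimal sketch of the idea, not the actual utils implementation:

    def combine_vars_sketch(a, b, hash_behaviour='replace'):
        # 'replace' (the default): keys from b win outright, like dict.update
        # 'merge': nested dictionaries are merged recursively instead
        result = dict(a)
        if hash_behaviour == 'merge':
            for k, v in b.iteritems():
                if isinstance(result.get(k), dict) and isinstance(v, dict):
                    result[k] = combine_vars_sketch(result[k], v, 'merge')
                else:
                    result[k] = v
        else:
            result.update(b)
        return result
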
@@ -41,10 +41,7 @@ def detect_range(line = None):

    Returns True if the given line contains a pattern, else False.
    '''
    if (line.find("[") != -1 and
        line.find(":") != -1 and
        line.find("]") != -1 and
        line.index("[") < line.index(":") < line.index("]")):
    if 0 <= line.find("[") < line.find(":") < line.find("]"):
        return True
    else:
        return False

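For reference, detect_range is what decides whether an inventory line gets expanded (host names illustrative):

    detect_range("web[01:50].example.com")   # True  -> expands to web01 ... web50
    detect_range("db-[a:f].example.com")     # True  -> letter ranges expand as well
    detect_range("plain.example.com")        # False -> used verbatim
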
@@ -16,6 +16,7 @@
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

import ansible.constants as C
from ansible import utils

class Host(object):
    ''' a single ansible host '''

@@ -56,7 +57,7 @@ class Host(object):
        results = {}
        groups = self.get_groups()
        for group in sorted(groups, key=lambda g: g.depth):
            results.update(group.get_variables())
            results = utils.combine_vars(results, group.get_variables())
        results.update(self.vars)
        results['inventory_hostname'] = self.name
        results['inventory_hostname_short'] = self.name.split('.')[0]

@@ -23,6 +23,7 @@ from ansible.inventory.group import Group
from ansible.inventory.expand_hosts import detect_range
from ansible.inventory.expand_hosts import expand_hostname_range
from ansible import errors
from ansible import utils
import shlex
import re
import ast

@@ -47,6 +48,20 @@ class InventoryParser(object):
        self._parse_group_variables()
        return self.groups

    @staticmethod
    def _parse_value(v):
        if "#" not in v:
            try:
                return ast.literal_eval(v)
            # Using explicit exceptions.
            # Likely a string that literal_eval does not like. We will then just set it.
            except ValueError:
                # For some reason this was thought to be malformed.
                pass
            except SyntaxError:
                # Is this a hash with an equals at the end?
                pass
        return v

    # [webservers]
    # alpha
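The new helper centralizes the literal_eval fallback; its behaviour on a few illustrative values:

    InventoryParser._parse_value("42")           # -> 42 (parsed by ast.literal_eval)
    InventoryParser._parse_value("True")         # -> True
    InventoryParser._parse_value("some_string")  # -> 'some_string' (ValueError, kept as string)
    InventoryParser._parse_value("a=")           # -> 'a=' (SyntaxError, kept as string)
    InventoryParser._parse_value("x # y")        # -> 'x # y' (contains '#', never evaluated)
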
@@ -65,10 +80,10 @@ class InventoryParser(object):
        active_group_name = 'ungrouped'

        for line in self.lines:
            line = line.split("#")[0].strip()
            line = utils.before_comment(line).strip()
            if line.startswith("[") and line.endswith("]"):
                active_group_name = line.replace("[","").replace("]","")
                if line.find(":vars") != -1 or line.find(":children") != -1:
                if ":vars" in line or ":children" in line:
                    active_group_name = active_group_name.rsplit(":", 1)[0]
                    if active_group_name not in self.groups:
                        new_group = self.groups[active_group_name] = Group(name=active_group_name)

@@ -94,11 +109,11 @@ class InventoryParser(object):
                # FQDN foo.example.com
                if hostname.count(".") == 1:
                    (hostname, port) = hostname.rsplit(".", 1)
                elif (hostname.find("[") != -1 and
                      hostname.find("]") != -1 and
                      hostname.find(":") != -1 and
                elif ("[" in hostname and
                      "]" in hostname and
                      ":" in hostname and
                      (hostname.rindex("]") < hostname.rindex(":")) or
                      (hostname.find("]") == -1 and hostname.find(":") != -1)):
                      ("]" not in hostname and ":" in hostname)):
                    (hostname, port) = hostname.rsplit(":", 1)

                hostnames = []
@@ -122,12 +137,7 @@ class InventoryParser(object):
                        (k,v) = t.split("=", 1)
                    except ValueError, e:
                        raise errors.AnsibleError("Invalid ini entry: %s - %s" % (t, str(e)))
                    try:
                        host.set_variable(k,ast.literal_eval(v))
                    except:
                        # most likely a string that literal_eval
                        # doesn't like, so just set it
                        host.set_variable(k,v)
                    host.set_variable(k, self._parse_value(v))
            self.groups[active_group_name].add_host(host)

    # [southeast:children]

@@ -141,7 +151,7 @@ class InventoryParser(object):
            line = line.strip()
            if line is None or line == '':
                continue
            if line.startswith("[") and line.find(":children]") != -1:
            if line.startswith("[") and ":children]" in line:
                line = line.replace("[","").replace(":children]","")
                group = self.groups.get(line, None)
                if group is None:

@@ -166,7 +176,7 @@ class InventoryParser(object):
        group = None
        for line in self.lines:
            line = line.strip()
            if line.startswith("[") and line.find(":vars]") != -1:
            if line.startswith("[") and ":vars]" in line:
                line = line.replace("[","").replace(":vars]","")
                group = self.groups.get(line, None)
                if group is None:

@@ -178,16 +188,11 @@ class InventoryParser(object):
            elif line == '':
                pass
            elif group:
                if line.find("=") == -1:
                if "=" not in line:
                    raise errors.AnsibleError("variables assigned to group must be in key=value form")
                else:
                    (k, v) = [e.strip() for e in line.split("=", 1)]
                    # When the value is a single-quoted or double-quoted string
                    if re.match(r"^(['\"]).*\1$", v):
                        # Unquote the string
                        group.set_variable(k, re.sub(r"^['\"]|['\"]$", '', v))
                    else:
                        group.set_variable(k, v)
                    group.set_variable(k, self._parse_value(v))

    def get_host_variables(self, host):
        return {}

@@ -86,7 +86,7 @@ def _load_vars_from_path(path, results, vault_password=None):
    if stat.S_ISDIR(pathstat.st_mode):

        # support organizing variables across multiple files in a directory
        return True, _load_vars_from_folder(path, results)
        return True, _load_vars_from_folder(path, results, vault_password=vault_password)

    # regular file
    elif stat.S_ISREG(pathstat.st_mode):

@@ -105,7 +105,7 @@ def _load_vars_from_path(path, results, vault_password=None):
    raise errors.AnsibleError("Expected a variable file or directory "
        "but found a non-file object at path %s" % (path, ))

def _load_vars_from_folder(folder_path, results):
def _load_vars_from_folder(folder_path, results, vault_password=None):
    """
    Load all variables within a folder recursively.
    """

@@ -123,9 +123,10 @@ def _load_vars_from_folder(folder_path, results):
    # filesystem lists them.
    names.sort()

    paths = [os.path.join(folder_path, name) for name in names]
    # do not parse hidden files or dirs, e.g. .svn/
    paths = [os.path.join(folder_path, name) for name in names if not name.startswith('.')]
    for path in paths:
        _found, results = _load_vars_from_path(path, results)
        _found, results = _load_vars_from_path(path, results, vault_password=vault_password)
    return results

@@ -95,7 +95,7 @@ class ModuleReplacer(object):

        for line in lines:

            if line.find(REPLACER) != -1:
            if REPLACER in line:
                output.write(self.slurp(os.path.join(self.snippet_path, "basic.py")))
                snippet_names.append('basic')
            elif line.startswith('from ansible.module_utils.'):

@@ -103,7 +103,7 @@ class ModuleReplacer(object):
                import_error = False
                if len(tokens) != 3:
                    import_error = True
                if line.find(" import *") == -1:
                if " import *" not in line:
                    import_error = True
                if import_error:
                    raise errors.AnsibleError("error importing module in %s, expecting format like 'from ansible.module_utils.basic import *'" % module_path)

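A module that wants the shared boilerplate therefore ends with exactly the form the replacer expects; the star-import line is substituted with the contents of module_utils/basic.py before the module is shipped to the target:

    # last lines of a module file
    from ansible.module_utils.basic import *
    main()
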
@@ -46,6 +46,7 @@ BOOLEANS = BOOLEANS_TRUE + BOOLEANS_FALSE

import os
import re
import pipes
import shlex
import subprocess
import sys

@@ -54,11 +55,13 @@ import types
import time
import shutil
import stat
import tempfile
import traceback
import grp
import pwd
import platform
import errno
import tempfile

try:
    import json

@@ -112,8 +115,11 @@ FILE_COMMON_ARGUMENTS=dict(
    backup = dict(),
    force = dict(),
    remote_src = dict(), # used by assemble
    delimiter = dict(), # used by assemble
    directory_mode = dict(), # used by copy
)


def get_platform():
    ''' what's the platform?  example: Linux is a platform. '''
    return platform.system()

@@ -188,7 +194,7 @@ class AnsibleModule(object):
        os.environ['LANG'] = MODULE_LANG
        (self.params, self.args) = self._load_params()

        self._legal_inputs = [ 'CHECKMODE', 'NO_LOG' ]
        self._legal_inputs = ['CHECKMODE', 'NO_LOG']

        self.aliases = self._handle_aliases()

@@ -214,6 +220,9 @@ class AnsibleModule(object):
        if not self.no_log:
            self._log_invocation()

        # finally, make sure we're in a sane working dir
        self._set_cwd()

    def load_file_common_arguments(self, params):
        '''
        many modules deal with files, this encapsulates common
@@ -461,7 +470,7 @@ class AnsibleModule(object):
            changed = True
        return changed

    def set_file_attributes_if_different(self, file_args, changed):
    def set_fs_attributes_if_different(self, file_args, changed):
        # set modes owners and context as needed
        changed = self.set_context_if_different(
            file_args['path'], file_args['secontext'], changed

@@ -478,19 +487,10 @@ class AnsibleModule(object):
        return changed

    def set_directory_attributes_if_different(self, file_args, changed):
        changed = self.set_context_if_different(
            file_args['path'], file_args['secontext'], changed
        )
        changed = self.set_owner_if_different(
            file_args['path'], file_args['owner'], changed
        )
        changed = self.set_group_if_different(
            file_args['path'], file_args['group'], changed
        )
        changed = self.set_mode_if_different(
            file_args['path'], file_args['mode'], changed
        )
        return changed
        return self.set_fs_attributes_if_different(file_args, changed)

    def set_file_attributes_if_different(self, file_args, changed):
        return self.set_fs_attributes_if_different(file_args, changed)

    def add_path_info(self, kwargs):
        '''
@@ -571,8 +571,9 @@ class AnsibleModule(object):

    def _check_invalid_arguments(self):
        for (k,v) in self.params.iteritems():
            if k in ('CHECKMODE', 'NO_LOG'):
                continue
            # these should be in legal inputs already
            #if k in ('CHECKMODE', 'NO_LOG'):
            #    continue
            if k not in self._legal_inputs:
                self.fail_json(msg="unsupported parameter for module: %s" % k)

@@ -686,6 +687,8 @@ class AnsibleModule(object):
                if not isinstance(value, list):
                    if isinstance(value, basestring):
                        self.params[k] = value.split(",")
                    elif isinstance(value, int) or isinstance(value, float):
                        self.params[k] = [ str(value) ]
                    else:
                        is_invalid = True
            elif wanted == 'dict':

@@ -805,6 +808,12 @@ class AnsibleModule(object):
        else:
            msg = 'Invoked'

        # 6655 - allow for accented characters
        try:
            msg = unicode(msg).encode('utf8')
        except UnicodeDecodeError, e:
            pass

        if (has_journal):
            journal_args = ["MESSAGE=%s %s" % (module, msg)]
            journal_args.append("MODULE=%s" % os.path.basename(__file__))

@@ -815,10 +824,30 @@ class AnsibleModule(object):
            except IOError, e:
                # fall back to syslog since logging to journal failed
                syslog.openlog(str(module), 0, syslog.LOG_USER)
                syslog.syslog(syslog.LOG_NOTICE, unicode(msg).encode('utf8'))
                syslog.syslog(syslog.LOG_NOTICE, msg) #1
        else:
            syslog.openlog(str(module), 0, syslog.LOG_USER)
            syslog.syslog(syslog.LOG_NOTICE, unicode(msg).encode('utf8'))
            syslog.syslog(syslog.LOG_NOTICE, msg) #2

    def _set_cwd(self):
        try:
            cwd = os.getcwd()
            if not os.access(cwd, os.F_OK|os.R_OK):
                raise
            return cwd
        except:
            # we don't have access to the cwd, probably because of sudo.
            # Try and move to a neutral location to prevent errors
            for cwd in [os.path.expandvars('$HOME'), tempfile.gettempdir()]:
                try:
                    if os.access(cwd, os.F_OK|os.R_OK):
                        os.chdir(cwd)
                        return cwd
                except:
                    pass
        # we won't error here, as it may *not* be a problem,
        # and we don't want to break modules unnecessarily
        return None

    def get_bin_path(self, arg, required=False, opt_dirs=[]):
        '''
@@ -865,6 +894,9 @@ class AnsibleModule(object):
        for encoding in ("utf-8", "latin-1", "unicode_escape"):
            try:
                return json.dumps(data, encoding=encoding)
            # Old systems using the simplejson module do not support the encoding keyword.
            except TypeError, e:
                return json.dumps(data)
            except UnicodeDecodeError, e:
                continue
        self.fail_json(msg='Invalid unicode encoding encountered')
@@ -944,11 +976,12 @@ class AnsibleModule(object):
        it uses os.rename to ensure this as it is an atomic operation, rest of the function is
        to work around limitations, corner cases and ensure selinux context is saved if possible'''
        context = None
        dest_stat = None
        if os.path.exists(dest):
            try:
                st = os.stat(dest)
                os.chmod(src, st.st_mode & 07777)
                os.chown(src, st.st_uid, st.st_gid)
                dest_stat = os.stat(dest)
                os.chmod(src, dest_stat.st_mode & 07777)
                os.chown(src, dest_stat.st_uid, dest_stat.st_gid)
            except OSError, e:
                if e.errno != errno.EPERM:
                    raise

@@ -958,8 +991,10 @@ class AnsibleModule(object):
        if self.selinux_enabled():
            context = self.selinux_default_context(dest)

        creating = not os.path.exists(dest)

        try:
            # Optimistically try a rename, solves some corner cases and can avoid useless work.
            # Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
            os.rename(src, dest)
        except (IOError,OSError), e:
            # only try workarounds for errno 18 (cross device), 1 (not permitted) and 13 (permission denied)
@@ -968,31 +1003,40 @@ class AnsibleModule(object):

            dest_dir = os.path.dirname(dest)
            dest_file = os.path.basename(dest)
            tmp_dest = "%s/.%s.%s.%s" % (dest_dir,dest_file,os.getpid(),time.time())
            tmp_dest = tempfile.NamedTemporaryFile(
                prefix=".ansible_tmp", dir=dest_dir, suffix=dest_file)

            try: # leaves tmp file behind when sudo and not root
                if os.getenv("SUDO_USER") and os.getuid() != 0:
                    # cleanup will happen by 'rm' of tempdir
                    shutil.copy(src, tmp_dest)
                    # copy2 will preserve some metadata
                    shutil.copy2(src, tmp_dest.name)
                else:
                    shutil.move(src, tmp_dest)
                    shutil.move(src, tmp_dest.name)
                if self.selinux_enabled():
                    self.set_context_if_different(tmp_dest, context, False)
                os.rename(tmp_dest, dest)
                    self.set_context_if_different(
                        tmp_dest.name, context, False)
                if dest_stat:
                    os.chown(tmp_dest.name, dest_stat.st_uid, dest_stat.st_gid)
                os.rename(tmp_dest.name, dest)
            except (shutil.Error, OSError, IOError), e:
                self.cleanup(tmp_dest)
                self.cleanup(tmp_dest.name)
                self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, e))

        if creating and os.getenv("SUDO_USER"):
            os.chown(dest, os.getuid(), os.getgid())

        if self.selinux_enabled():
            # rename might not preserve context
            self.set_context_if_different(dest, context, False)

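The block above is a variant of the classic write-to-a-temp-file-in-the-destination-directory pattern; a standalone simplified sketch (no SELinux, ownership, or sudo handling):

    import os, shutil, tempfile

    def atomic_write_sketch(src, dest):
        # stage next to dest so os.rename never crosses a filesystem boundary
        tmp = tempfile.NamedTemporaryFile(prefix=".ansible_tmp",
                                          dir=os.path.dirname(dest), delete=False)
        tmp.close()
        shutil.copy2(src, tmp.name)   # copy2 preserves some metadata
        os.rename(tmp.name, dest)     # atomic on POSIX within one filesystem
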
    def run_command(self, args, check_rc=False, close_fds=False, executable=None, data=None, binary_data=False, path_prefix=None):
    def run_command(self, args, check_rc=False, close_fds=False, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False):
        '''
        Execute a command, returns rc, stdout, and stderr.
        args is the command to run
        If args is a list, the command will be run with shell=False.
        Otherwise, the command will be run with shell=True when args is a string.
        If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
        If args is a string and use_unsafe_shell=True it will run with shell=True.
        Other arguments:
        - check_rc (boolean)  Whether to call fail_json in case of
                              non zero RC.  Default is False.

@@ -1001,13 +1045,24 @@ class AnsibleModule(object):
        - executable (string)  See documentation for subprocess.Popen().
                               Default is None.
        '''

        shell = False
        if isinstance(args, list):
            shell = False
        elif isinstance(args, basestring):
            if use_unsafe_shell:
                args = " ".join([pipes.quote(x) for x in args])
                shell = True
        elif isinstance(args, basestring) and use_unsafe_shell:
            shell = True
        elif isinstance(args, basestring):
            args = shlex.split(args.encode('utf-8'))
        else:
            msg = "Argument 'args' to run_command must be list or string"
            self.fail_json(rc=257, cmd=args, msg=msg)

        # expand things like $HOME and ~
        if not shell:
            args = [ os.path.expandvars(os.path.expanduser(x)) for x in args ]

        rc = 0
        msg = None
        st_in = None

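Module code then calls it along these lines (paths and commands are illustrative):

    # list form: shell=False, arguments passed verbatim
    rc, out, err = module.run_command(['/usr/bin/git', 'status'], cwd='/srv/repo')

    # string form with shell features requires an explicit opt-in
    rc, out, err = module.run_command("grep foo /etc/hosts | wc -l", use_unsafe_shell=True)
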
@@ -1017,41 +1072,85 @@ class AnsibleModule(object):
        if path_prefix:
            env['PATH']="%s:%s" % (path_prefix, env['PATH'])

        # create a printable version of the command for use
        # in reporting later, which strips out things like
        # passwords from the args list
        if isinstance(args, list):
            clean_args = " ".join(pipes.quote(arg) for arg in args)
        else:
            clean_args = args

        # all clean strings should return two match groups,
        # where the first is the CLI argument and the second
        # is the password/key/phrase that will be hidden
        clean_re_strings = [
            # this removes things like --password, --pass, --pass-wd, etc.
            # optionally followed by an '=' or a space. The password can
            # be quoted or not too, though it does not care about quotes
            # that are not balanced
            # source: http://blog.stevenlevithan.com/archives/match-quoted-string
            r'([-]{0,2}pass[-]?(?:word|wd)?[=\s]?)((?:["\'])?(?:[^\s])*(?:\1)?)',
            # TODO: add more regex checks here
        ]
        for re_str in clean_re_strings:
            r = re.compile(re_str)
            clean_args = r.sub(r'\1********', clean_args)

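Applied on its own, the masking regex behaves like this (command line illustrative):

    import re
    r = re.compile(r'([-]{0,2}pass[-]?(?:word|wd)?[=\s]?)((?:["\'])?(?:[^\s])*(?:\1)?)')
    print r.sub(r'\1********', "mysql --user=admin --password=s3cret")
    # -> mysql --user=admin --password=********
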
        if data:
            st_in = subprocess.PIPE

        kwargs = dict(
            executable=executable,
            shell=shell,
            close_fds=close_fds,
            stdin=st_in,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )

        if path_prefix:
            kwargs['env'] = env
        if cwd and os.path.isdir(cwd):
            kwargs['cwd'] = cwd

        # store the pwd
        prev_dir = os.getcwd()

        # make sure we're in the right working directory
        if cwd and os.path.isdir(cwd):
            try:
                os.chdir(cwd)
            except (OSError, IOError), e:
                self.fail_json(rc=e.errno, msg="Could not open %s , %s" % (cwd, str(e)))

        try:
            if path_prefix is not None:
                cmd = subprocess.Popen(args,
                                       executable=executable,
                                       shell=shell,
                                       close_fds=close_fds,
                                       stdin=st_in,
                                       stdout=subprocess.PIPE,
                                       stderr=subprocess.PIPE,
                                       env=env)
            else:
                cmd = subprocess.Popen(args,
                                       executable=executable,
                                       shell=shell,
                                       close_fds=close_fds,
                                       stdin=st_in,
                                       stdout=subprocess.PIPE,
                                       stderr=subprocess.PIPE)

            cmd = subprocess.Popen(args, **kwargs)

            if data:
                if not binary_data:
                    data += '\\n'
                    data += '\n'
            out, err = cmd.communicate(input=data)
            rc = cmd.returncode
        except (OSError, IOError), e:
            self.fail_json(rc=e.errno, msg=str(e), cmd=args)
            self.fail_json(rc=e.errno, msg=str(e), cmd=clean_args)
        except:
            self.fail_json(rc=257, msg=traceback.format_exc(), cmd=args)
            self.fail_json(rc=257, msg=traceback.format_exc(), cmd=clean_args)

        if rc != 0 and check_rc:
            msg = err.rstrip()
            self.fail_json(cmd=args, rc=rc, stdout=out, stderr=err, msg=msg)
            self.fail_json(cmd=clean_args, rc=rc, stdout=out, stderr=err, msg=msg)

        # reset the pwd
        os.chdir(prev_dir)

        return (rc, out, err)

    def append_to_file(self, filename, str):
        filename = os.path.expandvars(os.path.expanduser(filename))
        fh = open(filename, 'a')
        fh.write(str)
        fh.close()

    def pretty_bytes(self,size):
        ranges = (
            (1<<70L, 'ZB'),

@@ -1068,4 +1167,5 @@ class AnsibleModule(object):
                break
        return '%.2f %s' % (float(size)/ limit, suffix)


def get_module_path():
    return os.path.dirname(os.path.realpath(__file__))

@@ -1,3 +1,31 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
#    * Redistributions of source code must retain the above copyright
#      notice, this list of conditions and the following disclaimer.
#    * Redistributions in binary form must reproduce the above copyright notice,
#      this list of conditions and the following disclaimer in the documentation
#      and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

try:
    from distutils.version import LooseVersion
    HAS_LOOSE_VERSION = True

@@ -14,33 +42,44 @@ AWS_REGIONS = ['ap-northeast-1',
               'us-west-2']


def ec2_argument_keys_spec():
def aws_common_argument_spec():
    return dict(
        ec2_url=dict(),
        aws_secret_key=dict(aliases=['ec2_secret_key', 'secret_key'], no_log=True),
        aws_access_key=dict(aliases=['ec2_access_key', 'access_key']),
        validate_certs=dict(default=True, type='bool'),
        security_token=dict(no_log=True),
        profile=dict(),
    )
    return spec


def ec2_argument_spec():
    spec = ec2_argument_keys_spec()
    spec = aws_common_argument_spec()
    spec.update(
        dict(
            region=dict(aliases=['aws_region', 'ec2_region'], choices=AWS_REGIONS),
            validate_certs=dict(default=True, type='bool'),
            ec2_url=dict(),
        )
    )
    return spec

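A cloud module builds its own spec on top of the shared one; the extra parameters below are illustrative:

    argument_spec = ec2_argument_spec()
    argument_spec.update(dict(
        instance_ids=dict(type='list'),
        wait=dict(type='bool', default=False),
    ))
    module = AnsibleModule(argument_spec=argument_spec)
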
def get_ec2_creds(module):
def boto_supports_profile_name():
    return hasattr(boto.ec2.EC2Connection, 'profile_name')


def get_aws_connection_info(module):

    # Check module args for credentials, then check environment vars
    # access_key

    ec2_url = module.params.get('ec2_url')
    ec2_secret_key = module.params.get('aws_secret_key')
    ec2_access_key = module.params.get('aws_access_key')
    access_key = module.params.get('aws_access_key')
    secret_key = module.params.get('aws_secret_key')
    security_token = module.params.get('security_token')
    region = module.params.get('region')
    profile_name = module.params.get('profile')
    validate_certs = module.params.get('validate_certs')

    if not ec2_url:
        if 'EC2_URL' in os.environ:

@@ -48,21 +87,27 @@ def get_ec2_creds(module):
        elif 'AWS_URL' in os.environ:
            ec2_url = os.environ['AWS_URL']

    if not ec2_access_key:
    if not access_key:
        if 'EC2_ACCESS_KEY' in os.environ:
            ec2_access_key = os.environ['EC2_ACCESS_KEY']
            access_key = os.environ['EC2_ACCESS_KEY']
        elif 'AWS_ACCESS_KEY_ID' in os.environ:
            ec2_access_key = os.environ['AWS_ACCESS_KEY_ID']
            access_key = os.environ['AWS_ACCESS_KEY_ID']
        elif 'AWS_ACCESS_KEY' in os.environ:
            ec2_access_key = os.environ['AWS_ACCESS_KEY']
            access_key = os.environ['AWS_ACCESS_KEY']
        else:
            # in case access_key came in as empty string
            access_key = None

    if not ec2_secret_key:
    if not secret_key:
        if 'EC2_SECRET_KEY' in os.environ:
            ec2_secret_key = os.environ['EC2_SECRET_KEY']
            secret_key = os.environ['EC2_SECRET_KEY']
        elif 'AWS_SECRET_ACCESS_KEY' in os.environ:
            ec2_secret_key = os.environ['AWS_SECRET_ACCESS_KEY']
            secret_key = os.environ['AWS_SECRET_ACCESS_KEY']
        elif 'AWS_SECRET_KEY' in os.environ:
            ec2_secret_key = os.environ['AWS_SECRET_KEY']
            secret_key = os.environ['AWS_SECRET_KEY']
        else:
            # in case secret_key came in as empty string
            secret_key = None

    if not region:
        if 'EC2_REGION' in os.environ:

@@ -71,39 +116,75 @@ def get_ec2_creds(module):
            region = os.environ['AWS_REGION']
        else:
            # boto.config.get returns None if config not found
            region = boto.config.get('Boto', 'aws_region')
            region = boto.config.get('Boto', 'aws_region')
            if not region:
                region = boto.config.get('Boto', 'ec2_region')

    return ec2_url, ec2_access_key, ec2_secret_key, region
    if not security_token:
        if 'AWS_SECURITY_TOKEN' in os.environ:
            security_token = os.environ['AWS_SECURITY_TOKEN']
        else:
            # in case security_token came in as empty string
            security_token = None

    boto_params = dict(aws_access_key_id=access_key,
                       aws_secret_access_key=secret_key,
                       security_token=security_token)

    # profile_name only works as a key in boto >= 2.24
    # so only set profile_name if passed as an argument
    if profile_name:
        if not boto_supports_profile_name():
            module.fail_json("boto does not support profile_name before 2.24")
        boto_params['profile_name'] = profile_name

    if validate_certs and HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"):
        boto_params['validate_certs'] = validate_certs

    return region, ec2_url, boto_params


def get_ec2_creds(module):
    ''' for compatibility mode with old modules that don't/can't yet
        use ec2_connect method '''
    region, ec2_url, boto_params = get_aws_connection_info(module)
    return ec2_url, boto_params['aws_access_key_id'], boto_params['aws_secret_access_key'], region


def boto_fix_security_token_in_profile(conn, profile_name):
    ''' monkey patch for boto issue boto/boto#2100 '''
    profile = 'profile ' + profile_name
    if boto.config.has_option(profile, 'aws_security_token'):
        conn.provider.set_security_token(boto.config.get(profile, 'aws_security_token'))
    return conn


def connect_to_aws(aws_module, region, **params):
    conn = aws_module.connect_to_region(region, **params)
    if params.get('profile_name'):
        conn = boto_fix_security_token_in_profile(conn, params['profile_name'])
    return conn


def ec2_connect(module):

    """ Return an ec2 connection"""

    ec2_url, aws_access_key, aws_secret_key, region = get_ec2_creds(module)
    validate_certs = module.params.get('validate_certs', True)
    region, ec2_url, boto_params = get_aws_connection_info(module)

    # If we have a region specified, connect to its endpoint.
    if region:
        try:
            if HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"):
                ec2 = boto.ec2.connect_to_region(region, aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key, validate_certs=validate_certs)
            else:
                ec2 = boto.ec2.connect_to_region(region, aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key)
            ec2 = connect_to_aws(boto.ec2, region, **boto_params)
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg = str(e))
            module.fail_json(msg=str(e))
    # Otherwise, no region so we fallback to the old connection method
    elif ec2_url:
        try:
            if HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"):
                ec2 = boto.connect_ec2_endpoint(ec2_url, aws_access_key, aws_secret_key, validate_certs=validate_certs)
            else:
                ec2 = boto.connect_ec2_endpoint(ec2_url, aws_access_key, aws_secret_key)
            ec2 = boto.connect_ec2_endpoint(ec2_url, **boto_params)
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg = str(e))
            module.fail_json(msg=str(e))
    else:
        module.fail_json(msg="Either region or ec2_url must be specified")
    return ec2

    return ec2

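Other boto services can reuse the same helpers; a sketch for an autoscaling connection (assumes a module object and an imported boto.ec2.autoscale, as in the AWS modules):

    region, ec2_url, boto_params = get_aws_connection_info(module)
    try:
        asg = connect_to_aws(boto.ec2.autoscale, region, **boto_params)
    except boto.exception.NoAuthHandlerFound, e:
        module.fail_json(msg=str(e))
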
lib/ansible/module_utils/facts.py (new file, 2345 lines; diff suppressed because it is too large)
@@ -1,3 +1,32 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Franck Cuny <franck.cuny@gmail.com>, 2014
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
#    * Redistributions of source code must retain the above copyright
#      notice, this list of conditions and the following disclaimer.
#    * Redistributions in binary form must reproduce the above copyright notice,
#      this list of conditions and the following disclaimer in the documentation
#      and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#

USER_AGENT_PRODUCT="Ansible-gce"
USER_AGENT_VERSION="v1"

@@ -1,4 +1,36 @@
def add_git_host_key(module, url, accept_hostkey=True):
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
#    * Redistributions of source code must retain the above copyright
#      notice, this list of conditions and the following disclaimer.
#    * Redistributions in binary form must reproduce the above copyright notice,
#      this list of conditions and the following disclaimer in the documentation
#      and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import hmac
from hashlib import sha1
HASHED_KEY_MAGIC = "|1|"

def add_git_host_key(module, url, accept_hostkey=True, create_dir=True):

    """ idempotently add a git url hostkey """

@@ -8,7 +40,7 @@ def add_git_host_key(module, url, accept_hostkey=True):
    known_host = check_hostkey(module, fqdn)
    if not known_host:
        if accept_hostkey:
            rc, out, err = add_host_key(module, fqdn)
            rc, out, err = add_host_key(module, fqdn, create_dir=create_dir)
            if rc != 0:
                module.fail_json(msg="failed to add %s hostkey: %s" % (fqdn, out + err))
        else:

@@ -30,41 +62,94 @@ def get_fqdn(repo_url):

    return result


def check_hostkey(module, fqdn):
    return not not_in_host_file(module, fqdn)

    """ use ssh-keygen to check if key is known """
    # this is a variant of code found in connection_plugins/paramiko.py and we should modify
    # the paramiko code to import and use this.

    result = False
    keygen_cmd = module.get_bin_path('ssh-keygen', True)
    this_cmd = keygen_cmd + " -H -F " + fqdn
    rc, out, err = module.run_command(this_cmd)
def not_in_host_file(self, host):

    if rc == 0 and out != "":
        result = True

    if 'USER' in os.environ:
        user_host_file = os.path.expandvars("~${USER}/.ssh/known_hosts")
    else:
        # Check the main system location
        this_cmd = keygen_cmd + " -H -f /etc/ssh/ssh_known_hosts -F " + fqdn
        rc, out, err = module.run_command(this_cmd)
        user_host_file = "~/.ssh/known_hosts"
    user_host_file = os.path.expanduser(user_host_file)

        if rc == 0:
            if out != "":
                result = True
    host_file_list = []
    host_file_list.append(user_host_file)
    host_file_list.append("/etc/ssh/ssh_known_hosts")
    host_file_list.append("/etc/ssh/ssh_known_hosts2")

    return result
    hfiles_not_found = 0
    for hf in host_file_list:
        if not os.path.exists(hf):
            hfiles_not_found += 1
            continue

def add_host_key(module, fqdn, key_type="rsa"):
        try:
            host_fh = open(hf)
        except IOError, e:
            hfiles_not_found += 1
            continue
        else:
            data = host_fh.read()
            host_fh.close()

        for line in data.split("\n"):
            if line is None or " " not in line:
                continue
            tokens = line.split()
            if tokens[0].find(HASHED_KEY_MAGIC) == 0:
                # this is a hashed known host entry
                try:
                    (kn_salt,kn_host) = tokens[0][len(HASHED_KEY_MAGIC):].split("|",2)
                    hash = hmac.new(kn_salt.decode('base64'), digestmod=sha1)
                    hash.update(host)
                    if hash.digest() == kn_host.decode('base64'):
                        return False
                except:
                    # invalid hashed host key, skip it
                    continue
            else:
                # standard host file entry
                if host in tokens[0]:
                    return False

    return True

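The hashed branch above follows OpenSSH's HashKnownHosts format: a token of the form |1|<b64 salt>|<b64 digest> stores HMAC-SHA1(salt, hostname). A standalone sketch of the comparison:

    import hmac
    from hashlib import sha1

    def token_matches_host(token, hostname):
        # token is the first field of a hashed known_hosts line
        salt_b64, digest_b64 = token[len("|1|"):].split("|", 1)
        mac = hmac.new(salt_b64.decode('base64'), hostname, sha1)
        return mac.digest() == digest_b64.decode('base64')
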
def add_host_key(module, fqdn, key_type="rsa", create_dir=False):

    """ use ssh-keyscan to add the hostkey """

    result = False
    keyscan_cmd = module.get_bin_path('ssh-keyscan', True)

    if not os.path.exists(os.path.expanduser("~/.ssh/")):
        module.fail_json(msg="%s does not exist" % os.path.expanduser("~/.ssh/"))
    if 'USER' in os.environ:
        user_ssh_dir = os.path.expandvars("~${USER}/.ssh/")
        user_host_file = os.path.expandvars("~${USER}/.ssh/known_hosts")
    else:
        user_ssh_dir = "~/.ssh/"
        user_host_file = "~/.ssh/known_hosts"
    user_ssh_dir = os.path.expanduser(user_ssh_dir)

    if not os.path.exists(user_ssh_dir):
        if create_dir:
            try:
                os.makedirs(user_ssh_dir, 0700)
            except:
                module.fail_json(msg="failed to create host key directory: %s" % user_ssh_dir)
        else:
            module.fail_json(msg="%s does not exist" % user_ssh_dir)
    elif not os.path.isdir(user_ssh_dir):
        module.fail_json(msg="%s is not a directory" % user_ssh_dir)

    this_cmd = "%s -t %s %s" % (keyscan_cmd, key_type, fqdn)

    this_cmd = "%s -t %s %s >> ~/.ssh/known_hosts" % (keyscan_cmd, key_type, fqdn)
    rc, out, err = module.run_command(this_cmd)
    module.append_to_file(user_host_file, out)

    return rc, out, err

@@ -1,5 +1,32 @@
import os
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
#    * Redistributions of source code must retain the above copyright
#      notice, this list of conditions and the following disclaimer.
#    * Redistributions in binary form must reproduce the above copyright notice,
#      this list of conditions and the following disclaimer in the documentation
#      and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import os

def rax_argument_spec():
    return dict(

lib/ansible/module_utils/redhat.py (new file, 252 lines)
@@ -0,0 +1,252 @@
import os
import re
import types
import ConfigParser
import shlex


class RegistrationBase(object):
    def __init__(self, module, username=None, password=None):
        self.module = module
        self.username = username
        self.password = password

    def configure(self):
        raise NotImplementedError("Must be implemented by a sub-class")

    def enable(self):
        # Remove any existing redhat.repo
        redhat_repo = '/etc/yum.repos.d/redhat.repo'
        if os.path.isfile(redhat_repo):
            os.unlink(redhat_repo)

    def register(self):
        raise NotImplementedError("Must be implemented by a sub-class")

    def unregister(self):
        raise NotImplementedError("Must be implemented by a sub-class")

    def unsubscribe(self):
        raise NotImplementedError("Must be implemented by a sub-class")

    def update_plugin_conf(self, plugin, enabled=True):
        plugin_conf = '/etc/yum/pluginconf.d/%s.conf' % plugin
        if os.path.isfile(plugin_conf):
            cfg = ConfigParser.ConfigParser()
            cfg.read([plugin_conf])
            if enabled:
                cfg.set('main', 'enabled', 1)
            else:
                cfg.set('main', 'enabled', 0)
            fd = open(plugin_conf, 'rwa+')
            cfg.write(fd)
            fd.close()

    def subscribe(self, **kwargs):
        raise NotImplementedError("Must be implemented by a sub-class")


class Rhsm(RegistrationBase):
    def __init__(self, module, username=None, password=None):
        RegistrationBase.__init__(self, module, username, password)
        self.config = self._read_config()
        self.module = module

    def _read_config(self, rhsm_conf='/etc/rhsm/rhsm.conf'):
        '''
            Load RHSM configuration from /etc/rhsm/rhsm.conf.
            Returns:
             * ConfigParser object
        '''

        # Read RHSM defaults ...
        cp = ConfigParser.ConfigParser()
        cp.read(rhsm_conf)

        # Add support for specifying a default value w/o having to standup some configuration
        # Yeah, I know this should be subclassed ... but, oh well
        def get_option_default(self, key, default=''):
            sect, opt = key.split('.', 1)
            if self.has_section(sect) and self.has_option(sect, opt):
                return self.get(sect, opt)
            else:
                return default

        cp.get_option = types.MethodType(get_option_default, cp, ConfigParser.ConfigParser)

        return cp

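The patched-in method then permits dotted lookups with a default, e.g. (key and default value are illustrative):

    cp = Rhsm(module)._read_config()
    hostname = cp.get_option('server.hostname', default='subscription.rhn.redhat.com')
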
    def enable(self):
        '''
            Enable the system to receive updates from subscription-manager.
            This involves updating affected yum plugins and removing any
            conflicting yum repositories.
        '''
        RegistrationBase.enable(self)
        self.update_plugin_conf('rhnplugin', False)
        self.update_plugin_conf('subscription-manager', True)

    def configure(self, **kwargs):
        '''
            Configure the system as directed for registration with RHN
            Raises:
              * Exception - if error occurs while running command
        '''
        args = ['subscription-manager', 'config']

        # Pass supplied **kwargs as parameters to subscription-manager.  Ignore
        # non-configuration parameters and replace '_' with '.'.  For example,
        # 'server_hostname' becomes '--system.hostname'.
        for k,v in kwargs.items():
            if re.search(r'^(system|rhsm)_', k):
                args.append('--%s=%s' % (k.replace('_','.'), v))

        self.module.run_command(args, check_rc=True)

    @property
    def is_registered(self):
        '''
            Determine whether the current system is registered.
            Returns:
              * Boolean - whether the current system is currently registered to
                          RHN.
        '''
        # Quick version...
        if False:
            return os.path.isfile('/etc/pki/consumer/cert.pem') and \
                   os.path.isfile('/etc/pki/consumer/key.pem')

        args = ['subscription-manager', 'identity']
        rc, stdout, stderr = self.module.run_command(args, check_rc=False)
        if rc == 0:
            return True
        else:
            return False

    def register(self, username, password, autosubscribe, activationkey):
        '''
            Register the current system to the provided RHN server
            Raises:
              * Exception - if error occurs while running command
        '''
        args = ['subscription-manager', 'register']

        # Generate command arguments
        if activationkey:
            args.append('--activationkey "%s"' % activationkey)
        else:
            if autosubscribe:
                args.append('--autosubscribe')
            if username:
                args.extend(['--username', username])
            if password:
                args.extend(['--password', password])

        # Do the needful...
        rc, stderr, stdout = self.module.run_command(args, check_rc=True)

    def unsubscribe(self):
        '''
            Unsubscribe a system from all subscribed channels
            Raises:
              * Exception - if error occurs while running command
        '''
        args = ['subscription-manager', 'unsubscribe', '--all']
        rc, stderr, stdout = self.module.run_command(args, check_rc=True)

    def unregister(self):
        '''
            Unregister a currently registered system
            Raises:
              * Exception - if error occurs while running command
        '''
        args = ['subscription-manager', 'unregister']
        rc, stderr, stdout = self.module.run_command(args, check_rc=True)

    def subscribe(self, regexp):
        '''
            Subscribe current system to available pools matching the specified
            regular expression
            Raises:
              * Exception - if error occurs while running command
        '''

        # Available pools ready for subscription
        available_pools = RhsmPools(self.module)

        for pool in available_pools.filter(regexp):
            pool.subscribe()


class RhsmPool(object):
    '''
        Convenience class for housing subscription information
    '''

    def __init__(self, module, **kwargs):
        self.module = module
        for k,v in kwargs.items():
            setattr(self, k, v)

    def __str__(self):
        return str(self.__getattribute__('_name'))

    def subscribe(self):
        args = "subscription-manager subscribe --pool %s" % self.PoolId
        rc, stdout, stderr = self.module.run_command(args, check_rc=True)
        if rc == 0:
            return True
        else:
            return False


class RhsmPools(object):
    """
        This class is used for manipulating pools subscriptions with RHSM
    """
    def __init__(self, module):
        self.module = module
        self.products = self._load_product_list()

    def __iter__(self):
        return self.products.__iter__()

    def _load_product_list(self):
        """
            Loads the list of all available pools for the system into a data structure
        """
        args = "subscription-manager list --available"
        rc, stdout, stderr = self.module.run_command(args, check_rc=True)

        products = []
        for line in stdout.split('\n'):
            # Remove leading+trailing whitespace
            line = line.strip()
            # An empty line implies the end of an output group
            if len(line) == 0:
                continue
            # If a colon ':' is found, parse
            elif ':' in line:
                (key, value) = line.split(':',1)
                key = key.strip().replace(" ", "")  # To unify
                value = value.strip()
                if key in ['ProductName', 'SubscriptionName']:
                    # Remember the name for later processing
                    products.append(RhsmPool(self.module, _name=value, key=value))
                elif products:
                    # Associate value with most recently recorded product
                    products[-1].__setattr__(key, value)
                # FIXME - log some warning?
                #else:
                #    warnings.warn("Unhandled subscription key/value: %s/%s" % (key,value))
        return products

    def filter(self, regexp='^$'):
        '''
            Return a list of RhsmPools whose name matches the provided regular expression
        '''
        r = re.compile(regexp)
        for product in self.products:
            if r.search(product._name):
                yield product

lib/ansible/module_utils/urls.py (new file, 319 lines)
@@ -0,0 +1,319 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
#    * Redistributions of source code must retain the above copyright
#      notice, this list of conditions and the following disclaimer.
#    * Redistributions in binary form must reproduce the above copyright notice,
#      this list of conditions and the following disclaimer in the documentation
#      and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

try:
    import urllib
    HAS_URLLIB = True
except:
    HAS_URLLIB = False

try:
    import urllib2
    HAS_URLLIB2 = True
except:
    HAS_URLLIB2 = False

try:
    import urlparse
    HAS_URLPARSE = True
except:
    HAS_URLPARSE = False

try:
    import ssl
    HAS_SSL=True
except:
    HAS_SSL=False

import socket
import tempfile


# This is a dummy cacert provided for Mac OS since you need at least 1
# ca cert, regardless of validity, for Python on Mac OS to use the
# keychain functionality in OpenSSL for validating SSL certificates.
# See: http://mercurial.selenic.com/wiki/CACertificates#Mac_OS_X_10.6_and_higher
DUMMY_CA_CERT = """-----BEGIN CERTIFICATE-----
MIICvDCCAiWgAwIBAgIJAO8E12S7/qEpMA0GCSqGSIb3DQEBBQUAMEkxCzAJBgNV
BAYTAlVTMRcwFQYDVQQIEw5Ob3J0aCBDYXJvbGluYTEPMA0GA1UEBxMGRHVyaGFt
MRAwDgYDVQQKEwdBbnNpYmxlMB4XDTE0MDMxODIyMDAyMloXDTI0MDMxNTIyMDAy
MlowSTELMAkGA1UEBhMCVVMxFzAVBgNVBAgTDk5vcnRoIENhcm9saW5hMQ8wDQYD
VQQHEwZEdXJoYW0xEDAOBgNVBAoTB0Fuc2libGUwgZ8wDQYJKoZIhvcNAQEBBQAD
gY0AMIGJAoGBANtvpPq3IlNlRbCHhZAcP6WCzhc5RbsDqyh1zrkmLi0GwcQ3z/r9
gaWfQBYhHpobK2Tiq11TfraHeNB3/VfNImjZcGpN8Fl3MWwu7LfVkJy3gNNnxkA1
4Go0/LmIvRFHhbzgfuo9NFgjPmmab9eqXJceqZIlz2C8xA7EeG7ku0+vAgMBAAGj
gaswgagwHQYDVR0OBBYEFPnN1nPRqNDXGlCqCvdZchRNi/FaMHkGA1UdIwRyMHCA
FPnN1nPRqNDXGlCqCvdZchRNi/FaoU2kSzBJMQswCQYDVQQGEwJVUzEXMBUGA1UE
CBMOTm9ydGggQ2Fyb2xpbmExDzANBgNVBAcTBkR1cmhhbTEQMA4GA1UEChMHQW5z
aWJsZYIJAO8E12S7/qEpMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEA
MUB80IR6knq9K/tY+hvPsZer6eFMzO3JGkRFBh2kn6JdMDnhYGX7AXVHGflrwNQH
qFy+aenWXsC0ZvrikFxbQnX8GVtDADtVznxOi7XzFw7JOxdsVrpXgSN0eh0aMzvV
zKPZsZ2miVGclicJHzm5q080b1p/sZtuKIEZk6vZqEg=
-----END CERTIFICATE-----
"""


class RequestWithMethod(urllib2.Request):
    '''
    Workaround for using DELETE/PUT/etc with urllib2
    Originally contained in library/net_infrastructure/dnsmadeeasy
    '''

    def __init__(self, url, method, data=None, headers={}):
        self._method = method
        urllib2.Request.__init__(self, url, data, headers)

    def get_method(self):
        if self._method:
            return self._method
        else:
            return urllib2.Request.get_method(self)

class SSLValidationHandler(urllib2.BaseHandler):
|
||||
'''
|
||||
A custom handler class for SSL validation.
|
||||
|
||||
Based on:
|
||||
http://stackoverflow.com/questions/1087227/validate-ssl-certificates-with-python
|
||||
http://techknack.net/python-urllib2-handlers/
|
||||
'''
|
||||
|
||||
def __init__(self, module, hostname, port):
|
||||
self.module = module
|
||||
self.hostname = hostname
|
||||
self.port = port
|
||||
|
||||
def get_ca_certs(self):
|
||||
# tries to find a valid CA cert in one of the
|
||||
# standard locations for the current distribution
|
||||
|
||||
ca_certs = []
|
||||
paths_checked = []
|
||||
platform = get_platform()
|
||||
distribution = get_distribution()
|
||||
|
||||
# build a list of paths to check for .crt/.pem files
|
||||
# based on the platform type
|
||||
paths_checked.append('/etc/ssl/certs')
|
||||
if platform == 'Linux':
|
||||
paths_checked.append('/etc/pki/ca-trust/extracted/pem')
|
||||
paths_checked.append('/etc/pki/tls/certs')
|
||||
paths_checked.append('/usr/share/ca-certificates/cacert.org')
|
||||
elif platform == 'FreeBSD':
|
||||
paths_checked.append('/usr/local/share/certs')
|
||||
elif platform == 'OpenBSD':
|
||||
paths_checked.append('/etc/ssl')
|
||||
elif platform == 'NetBSD':
|
||||
ca_certs.append('/etc/openssl/certs')
|
||||
|
||||
# fall back to a user-deployed cert in a standard
|
||||
# location if the OS platform one is not available
|
||||
paths_checked.append('/etc/ansible')
|
||||
|
||||
tmp_fd, tmp_path = tempfile.mkstemp()
|
||||
|
||||
# Write the dummy ca cert if we are running on Mac OS X
|
||||
if platform == 'Darwin':
|
||||
os.write(tmp_fd, DUMMY_CA_CERT)
|
||||
|
||||
# for all of the paths, find any .crt or .pem files
|
||||
# and compile them into single temp file for use
|
||||
# in the ssl check to speed up the test
|
||||
for path in paths_checked:
|
||||
if os.path.exists(path) and os.path.isdir(path):
|
||||
dir_contents = os.listdir(path)
|
||||
for f in dir_contents:
|
||||
full_path = os.path.join(path, f)
|
||||
if os.path.isfile(full_path) and os.path.splitext(f)[1] in ('.crt','.pem'):
|
||||
try:
|
||||
cert_file = open(full_path, 'r')
|
||||
os.write(tmp_fd, cert_file.read())
|
||||
cert_file.close()
|
||||
except:
|
||||
pass
|
||||
|
||||
return (tmp_path, paths_checked)
|
||||
|
||||
def http_request(self, req):
|
||||
tmp_ca_cert_path, paths_checked = self.get_ca_certs()
|
||||
try:
|
||||
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED)
|
||||
ssl_s.connect((self.hostname, self.port))
|
||||
ssl_s.close()
|
||||
except (ssl.SSLError, socket.error), e:
|
||||
# fail if we tried all of the certs but none worked
|
||||
if 'connection refused' in str(e).lower():
|
||||
self.module.fail_json(msg='Failed to connect to %s:%s.' % (self.hostname, self.port))
|
||||
else:
|
||||
self.module.fail_json(
|
||||
msg='Failed to validate the SSL certificate for %s:%s. ' % (self.hostname, self.port) + \
|
||||
'Use validate_certs=no or make sure your managed systems have a valid CA certificate installed. ' + \
|
||||
'Paths checked for this platform: %s' % ", ".join(paths_checked)
|
||||
)
|
||||
try:
|
||||
# cleanup the temp file created, don't worry
|
||||
# if it fails for some reason
|
||||
os.remove(tmp_ca_cert_path)
|
||||
except:
|
||||
pass
|
||||
|
||||
return req
|
||||
|
||||
https_request = http_request
|
||||
|
||||
|
||||
def url_argument_spec():
|
||||
'''
|
||||
Creates an argument spec that can be used with any module
|
||||
that will be requesting content via urllib/urllib2
|
||||
'''
|
||||
return dict(
|
||||
url = dict(),
|
||||
force = dict(default='no', aliases=['thirsty'], type='bool'),
|
||||
http_agent = dict(default='ansible-httpget'),
|
||||
use_proxy = dict(default='yes', type='bool'),
|
||||
validate_certs = dict(default='yes', type='bool'),
|
||||
)
|
||||
|
||||
|
||||
def fetch_url(module, url, data=None, headers=None, method=None,
|
||||
use_proxy=False, force=False, last_mod_time=None, timeout=10):
|
||||
'''
|
||||
Fetches a file from an HTTP/FTP server using urllib2
|
||||
'''
|
||||
|
||||
if not HAS_URLLIB:
|
||||
module.fail_json(msg='urllib is not installed')
|
||||
if not HAS_URLLIB2:
|
||||
module.fail_json(msg='urllib2 is not installed')
|
||||
elif not HAS_URLPARSE:
|
||||
module.fail_json(msg='urlparse is not installed')
|
||||
|
||||
r = None
|
||||
handlers = []
|
||||
info = dict(url=url)
|
||||
|
||||
# Get validate_certs from the module params
|
||||
validate_certs = module.params.get('validate_certs', True)
|
||||
|
||||
parsed = urlparse.urlparse(url)
|
||||
if parsed[0] == 'https':
|
||||
if not HAS_SSL and validate_certs:
|
||||
module.fail_json(msg='SSL validation is not available in your version of python. You can use validate_certs=no, however this is unsafe and not recommended')
|
||||
elif validate_certs:
|
||||
# do the cert validation
|
||||
netloc = parsed[1]
|
||||
if '@' in netloc:
|
||||
netloc = netloc.split('@', 1)[1]
|
||||
if ':' in netloc:
|
||||
hostname, port = netloc.split(':', 1)
|
||||
else:
|
||||
hostname = netloc
|
||||
port = 443
|
||||
# create the SSL validation handler and
|
||||
# add it to the list of handlers
|
||||
ssl_handler = SSLValidationHandler(module, hostname, port)
|
||||
handlers.append(ssl_handler)
|
||||
|
||||
if parsed[0] != 'ftp' and '@' in parsed[1]:
|
||||
credentials, netloc = parsed[1].split('@', 1)
|
||||
if ':' in credentials:
|
||||
username, password = credentials.split(':', 1)
|
||||
else:
|
||||
username = credentials
|
||||
password = ''
|
||||
parsed = list(parsed)
|
||||
parsed[1] = netloc
|
||||
|
||||
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
|
||||
# this creates a password manager
|
||||
passman.add_password(None, netloc, username, password)
|
||||
# because we have put None at the start it will always
|
||||
# use this username/password combination for urls
|
||||
# for which `theurl` is a super-url
|
||||
|
||||
authhandler = urllib2.HTTPBasicAuthHandler(passman)
|
||||
# create the AuthHandler
|
||||
handlers.append(authhandler)
|
||||
|
||||
#reconstruct url without credentials
|
||||
url = urlparse.urlunparse(parsed)
|
||||
|
||||
if not use_proxy:
|
||||
proxyhandler = urllib2.ProxyHandler({})
|
||||
handlers.append(proxyhandler)
|
||||
|
||||
opener = urllib2.build_opener(*handlers)
|
||||
urllib2.install_opener(opener)
|
||||
|
||||
if method:
|
||||
if method.upper() not in ('OPTIONS','GET','HEAD','POST','PUT','DELETE','TRACE','CONNECT'):
|
||||
module.fail_json(msg='invalid HTTP request method; %s' % method.upper())
|
||||
request = RequestWithMethod(url, method.upper(), data)
|
||||
else:
|
||||
request = urllib2.Request(url, data)
|
||||
|
||||
# add the custom agent header, to help prevent issues
|
||||
# with sites that block the default urllib agent string
|
||||
request.add_header('User-agent', module.params.get('http_agent'))
|
||||
|
||||
# if we're ok with getting a 304, set the timestamp in the
|
||||
# header, otherwise make sure we don't get a cached copy
|
||||
if last_mod_time and not force:
|
||||
tstamp = last_mod_time.strftime('%a, %d %b %Y %H:%M:%S +0000')
|
||||
request.add_header('If-Modified-Since', tstamp)
|
||||
else:
|
||||
request.add_header('cache-control', 'no-cache')
|
||||
|
||||
# user defined headers now, which may override things we've set above
|
||||
if headers:
|
||||
if not isinstance(headers, dict):
|
||||
module.fail_json("headers provided to fetch_url() must be a dict")
|
||||
for header in headers:
|
||||
request.add_header(header, headers[header])
|
||||
|
||||
try:
|
||||
if sys.version_info < (2,6,0):
|
||||
# urlopen in python prior to 2.6.0 did not
|
||||
# have a timeout parameter
|
||||
r = urllib2.urlopen(request, None)
|
||||
else:
|
||||
r = urllib2.urlopen(request, None, timeout)
|
||||
info.update(r.info())
|
||||
info['url'] = r.geturl() # The URL goes in too, because of redirects.
|
||||
info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), status=200))
|
||||
except urllib2.HTTPError, e:
|
||||
info.update(dict(msg=str(e), status=e.code))
|
||||
except urllib2.URLError, e:
|
||||
code = int(getattr(e, 'code', -1))
|
||||
info.update(dict(msg="Request failed: %s" % str(e), status=code))
|
||||
|
||||
return r, info
|
||||
|
|
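A minimal usage sketch for the snippet above: a module embedding module_utils/urls.py builds its argument spec from url_argument_spec() and calls fetch_url(), which returns the response object (None on failure) plus an info dict carrying 'status' and 'msg'. The exact extra options a real module layers on top will vary.

    # Python 2 sketch of a module consuming this snippet
    module = AnsibleModule(argument_spec=url_argument_spec())
    response, info = fetch_url(module, module.params['url'],
                               use_proxy=module.params['use_proxy'],
                               force=module.params['force'])
    if info['status'] != 200:
        module.fail_json(msg="fetch failed: %s" % info['msg'])
    body = response.read()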
lib/ansible/playbook/__init__.py

@ -29,7 +29,11 @@ from play import Play
import StringIO
import pipes

# the setup cache stores all variables about a host
# gathered during the setup step, while the vars cache
# holds all other variables about a host
SETUP_CACHE = collections.defaultdict(dict)
VARS_CACHE  = collections.defaultdict(dict)

class PlayBook(object):
    '''

@ -73,6 +77,7 @@ class PlayBook(object):
        su_user          = False,
        su_pass          = False,
        vault_password   = False,
        force_handlers   = False,
    ):

        """

@ -92,9 +97,12 @@ class PlayBook(object):
        sudo:             if not specified per play, requests all plays use sudo mode
        inventory:        can be specified instead of host_list to use a pre-existing inventory object
        check:            don't change anything, just try to detect some potential changes
        any_errors_fatal: terminate the entire execution immediately when one of the hosts has failed
        force_handlers:   continue to notify and run handlers even if a task fails
        """

        self.SETUP_CACHE = SETUP_CACHE
        self.VARS_CACHE  = VARS_CACHE

        arguments = []
        if playbook is None:

@ -140,6 +148,7 @@ class PlayBook(object):
        self.su_user          = su_user
        self.su_pass          = su_pass
        self.vault_password   = vault_password
        self.force_handlers   = force_handlers

        self.callbacks.playbook = self
        self.runner_callbacks.playbook = self

@ -166,6 +175,7 @@ class PlayBook(object):
        self.filename = playbook
        (self.playbook, self.play_basedirs) = self._load_playbook_from_file(playbook, vars)
        ansible.callbacks.load_callback_plugins()
        ansible.callbacks.set_playbook(self.callbacks, self)

        # *****************************************************

@ -300,7 +310,7 @@ class PlayBook(object):
            # since these likely got killed by async_wrapper
            for host in poller.hosts_to_poll:
                reason = { 'failed' : 1, 'rc' : None, 'msg' : 'timed out' }
                self.runner_callbacks.on_async_failed(host, reason, poller.jid)
                self.runner_callbacks.on_async_failed(host, reason, poller.runner.vars_cache[host]['ansible_job_id'])
                results['contacted'][host] = reason

        return results

@ -335,6 +345,7 @@ class PlayBook(object):
            default_vars=task.default_vars,
            private_key_file=self.private_key_file,
            setup_cache=self.SETUP_CACHE,
            vars_cache=self.VARS_CACHE,
            basedir=task.play.basedir,
            conditional=task.when,
            callbacks=self.runner_callbacks,

@ -371,7 +382,7 @@ class PlayBook(object):
            results = self._async_poll(poller, task.async_seconds, task.async_poll_interval)
        else:
            for (host, res) in results.get('contacted', {}).iteritems():
                self.runner_callbacks.on_async_ok(host, res, poller.jid)
                self.runner_callbacks.on_async_ok(host, res, poller.runner.vars_cache[host]['ansible_job_id'])

        contacted = results.get('contacted',{})
        dark      = results.get('dark', {})

@ -402,6 +413,10 @@ class PlayBook(object):
            ansible.callbacks.set_task(self.runner_callbacks, None)
            return True

        # template ignore_errors
        cond = template(play.basedir, task.ignore_errors, task.module_vars, expand_lists=False)
        task.ignore_errors = utils.check_conditional(cond, play.basedir, task.module_vars, fail_on_undefined=C.DEFAULT_UNDEFINED_VAR_BEHAVIOR)

        # load up an appropriate ansible runner to run the task in parallel
        results = self._run_task_internal(task)

@ -426,8 +441,6 @@ class PlayBook(object):
                else:
                    facts = result.get('ansible_facts', {})
                    self.SETUP_CACHE[host].update(facts)
            # extra vars need to always trump - so update again following the facts
            self.SETUP_CACHE[host].update(self.extra_vars)
            if task.register:
                if 'stdout' in result and 'stdout_lines' not in result:
                    result['stdout_lines'] = result['stdout'].splitlines()

@ -475,11 +488,15 @@ class PlayBook(object):
    def _do_setup_step(self, play):
        ''' get facts from the remote system '''

        if play.gather_facts is False:
            return {}

        host_list = self._trim_unavailable_hosts(play._play_hosts)

        if play.gather_facts is None and C.DEFAULT_GATHERING == 'smart':
            host_list = [h for h in host_list if h not in self.SETUP_CACHE or 'module_setup' not in self.SETUP_CACHE[h]]
            if len(host_list) == 0:
                return {}
        elif play.gather_facts is False or (play.gather_facts is None and C.DEFAULT_GATHERING == 'explicit'):
            return {}

        self.callbacks.on_setup()
        self.inventory.restrict_to(host_list)

@ -500,6 +517,7 @@ class PlayBook(object):
            remote_port=play.remote_port,
            private_key_file=self.private_key_file,
            setup_cache=self.SETUP_CACHE,
            vars_cache=self.VARS_CACHE,
            callbacks=self.runner_callbacks,
            sudo=play.sudo,
            sudo_user=play.sudo_user,

@ -560,7 +578,7 @@ class PlayBook(object):

    def _run_play(self, play):
        ''' run a list of tasks for a given pattern, in order '''

        self.callbacks.on_play_start(play.name)
        # Get the hosts for this play
        play._play_hosts = self.inventory.list_hosts(play.hosts)

@ -589,6 +607,7 @@ class PlayBook(object):
                play_hosts.append(all_hosts.pop())
            serialized_batch.append(play_hosts)

        task_errors = False
        for on_hosts in serialized_batch:

            # restrict the play to just the hosts we have in our on_hosts block that are

@ -599,41 +618,12 @@ class PlayBook(object):
            for task in play.tasks():

                if task.meta is not None:

                    # meta tasks are an internalism and are not valid for end-user playbook usage
                    # here a meta task is a placeholder that signals handlers should be run

                    # meta tasks can force handlers to run mid-play
                    if task.meta == 'flush_handlers':
                        fired_names = {}
                        for handler in play.handlers():
                            if len(handler.notified_by) > 0:
                                self.inventory.restrict_to(handler.notified_by)
                        self.run_handlers(play)

                                # Resolve the variables first
                                handler_name = template(play.basedir, handler.name, handler.module_vars)
                                if handler_name not in fired_names:
                                    self._run_task(play, handler, True)
                                # prevent duplicate handler includes from running more than once
                                fired_names[handler_name] = 1

                                host_list = self._trim_unavailable_hosts(play._play_hosts)
                                if handler.any_errors_fatal and len(host_list) < hosts_count:
                                    play.max_fail_pct = 0
                                if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count):
                                    host_list = None
                                if not host_list:
                                    self.callbacks.on_no_hosts_remaining()
                                    return False

                                self.inventory.lift_restriction()
                                new_list = handler.notified_by[:]
                                for host in handler.notified_by:
                                    if host in on_hosts:
                                        while host in new_list:
                                            new_list.remove(host)
                                handler.notified_by = new_list

                        continue
                    # skip calling the handler till the play is finished
                    continue

                # only run the task if the requested tags match
                should_run = False

@ -666,15 +656,74 @@ class PlayBook(object):
                    play.max_fail_pct = 0

                # If threshold for max nodes failed is exceeded, bail out.
                if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count):
                    host_list = None
                if play.serial > 0:
                    # if serial is set, we need to shorten the size of host_count
                    play_count = len(play._play_hosts)
                    if (play_count - len(host_list)) > int((play.max_fail_pct)/100.0 * play_count):
                        host_list = None
                else:
                    if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count):
                        host_list = None

                # if no hosts remain, drop out
                if not host_list:
                    self.callbacks.on_no_hosts_remaining()
                    return False
                    if self.force_handlers:
                        task_errors = True
                        break
                    else:
                        self.callbacks.on_no_hosts_remaining()
                        return False

            # lift restrictions after each play finishes
            self.inventory.lift_also_restriction()

            if task_errors and not self.force_handlers:
                # if there were failed tasks and handler execution
                # is not forced, quit the play with an error
                return False
            else:
                # no errors, go ahead and execute all handlers
                if not self.run_handlers(play):
                    return False

        return True

    def run_handlers(self, play):
        on_hosts = play._play_hosts
        hosts_count = len(on_hosts)
        for task in play.tasks():
            if task.meta is not None:

                fired_names = {}
                for handler in play.handlers():
                    if len(handler.notified_by) > 0:
                        self.inventory.restrict_to(handler.notified_by)

                        # Resolve the variables first
                        handler_name = template(play.basedir, handler.name, handler.module_vars)
                        if handler_name not in fired_names:
                            self._run_task(play, handler, True)
                        # prevent duplicate handler includes from running more than once
                        fired_names[handler_name] = 1

                        host_list = self._trim_unavailable_hosts(play._play_hosts)
                        if handler.any_errors_fatal and len(host_list) < hosts_count:
                            play.max_fail_pct = 0
                        if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count):
                            host_list = None
                        if not host_list and not self.force_handlers:
                            self.callbacks.on_no_hosts_remaining()
                            return False

                        self.inventory.lift_restriction()
                        new_list = handler.notified_by[:]
                        for host in handler.notified_by:
                            if host in on_hosts:
                                while host in new_list:
                                    new_list.remove(host)
                        handler.notified_by = new_list

                continue

        return True
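A quick worked example of the failure-threshold arithmetic above (the numbers are illustrative, not from the commit): with max_fail_percentage 30 and 10 hosts in the play, the play aborts only when the number of failed hosts strictly exceeds int(30/100.0 * 10) = 3, i.e. on the 4th failure.

    # Python 2 sketch of the threshold test used in _run_play
    hosts_count  = 10           # hosts the play started with
    host_list    = 6 * ['h']    # hosts still available after 4 failures
    max_fail_pct = 30
    if (hosts_count - len(host_list)) > int(max_fail_pct / 100.0 * hosts_count):
        print "bailing out: 4 failures > threshold of 3"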
lib/ansible/playbook/play.py

@ -26,6 +26,7 @@ import pipes
import shlex
import os
import sys
import uuid

class Play(object):

@ -92,6 +93,10 @@ class Play(object):

        self._update_vars_files_for_host(None)

        # apply any extra_vars specified on the command line now
        if type(self.playbook.extra_vars) == dict:
            self.vars = utils.combine_vars(self.vars, self.playbook.extra_vars)

        # template everything to be efficient, but do not prematurely template
        # tasks/handlers as they may have inventory scope overrides
        _tasks = ds.pop('tasks', [])

@ -117,7 +122,6 @@ class Play(object):
        self.sudo             = ds.get('sudo', self.playbook.sudo)
        self.sudo_user        = ds.get('sudo_user', self.playbook.sudo_user)
        self.transport        = ds.get('connection', self.playbook.transport)
        self.gather_facts     = ds.get('gather_facts', True)
        self.remote_port      = self.remote_port
        self.any_errors_fatal = utils.boolean(ds.get('any_errors_fatal', 'false'))
        self.accelerate       = utils.boolean(ds.get('accelerate', 'false'))

@ -126,7 +130,13 @@ class Play(object):
        self.max_fail_pct     = int(ds.get('max_fail_percentage', 100))
        self.su               = ds.get('su', self.playbook.su)
        self.su_user          = ds.get('su_user', self.playbook.su_user)
        #self.vault_password   = vault_password

        # gather_facts is not a simple boolean, as None means that a 'smart'
        # fact gathering mode will be used, so we need to be careful here as
        # calling utils.boolean(None) returns False
        self.gather_facts = ds.get('gather_facts', None)
        if self.gather_facts:
            self.gather_facts = utils.boolean(self.gather_facts)

        # Fail out if user specifies a sudo param with a su param in a given play
        if (ds.get('sudo') or ds.get('sudo_user')) and (ds.get('su') or ds.get('su_user')):

@ -134,6 +144,7 @@ class Play(object):
                '("su", "su_user") cannot be used together')

        load_vars = {}
        load_vars['role_names'] = ds.get('role_names',[])
        load_vars['playbook_dir'] = self.basedir
        if self.playbook.inventory.basedir() is not None:
            load_vars['inventory_dir'] = self.playbook.inventory.basedir()

@ -141,6 +152,8 @@ class Play(object):
        self._tasks      = self._load_tasks(self._ds.get('tasks', []), load_vars)
        self._handlers   = self._load_tasks(self._ds.get('handlers', []), load_vars)

        # apply any missing tags to role tasks
        self._late_merge_role_tags()

        if self.sudo_user != 'root':
            self.sudo = True

@ -227,6 +240,25 @@ class Play(object):
            if meta_data:
                allow_dupes = utils.boolean(meta_data.get('allow_duplicates',''))

            # if any tags were specified as role/dep variables, merge
            # them into the current dep_vars so they're passed on to any
            # further dependencies too, and so we only have one place
            # (dep_vars) to look for tags going forward
            def __merge_tags(var_obj):
                old_tags = dep_vars.get('tags', [])
                if isinstance(old_tags, basestring):
                    old_tags = [old_tags, ]
                if isinstance(var_obj, dict):
                    new_tags = var_obj.get('tags', [])
                    if isinstance(new_tags, basestring):
                        new_tags = [new_tags, ]
                else:
                    new_tags = []
                return list(set(old_tags).union(set(new_tags)))

            dep_vars['tags'] = __merge_tags(role_vars)
            dep_vars['tags'] = __merge_tags(passed_vars)

            # if tags are set from this role, merge them
            # into the tags list for the dependent role
            if "tags" in passed_vars:

@ -235,7 +267,7 @@ class Play(object):
                    included_dep_vars = included_role_dep[2]
                    if included_dep_name == dep:
                        if "tags" in included_dep_vars:
                            included_dep_vars["tags"] = list(set(included_dep_vars["tags"] + passed_vars["tags"]))
                            included_dep_vars["tags"] = list(set(included_dep_vars["tags"]).union(set(passed_vars["tags"])))
                        else:
                            included_dep_vars["tags"] = passed_vars["tags"][:]

@ -254,13 +286,6 @@ class Play(object):
            if 'role' in dep_vars:
                del dep_vars['role']

            if "tags" in passed_vars:
                if not self._is_valid_tag(passed_vars["tags"]):
                    # one of the tags specified for this role was in the
                    # skip list, or we're limiting the tags and it didn't
                    # match one, so we just skip it completely
                    continue

            if not allow_dupes:
                if dep in self.included_roles:
                    # skip back to the top, since we don't want to

@ -343,6 +368,13 @@ class Play(object):

        roles = self._build_role_dependencies(roles, [], self.vars)

        # give each role a uuid
        for idx, val in enumerate(roles):
            this_uuid = str(uuid.uuid4())
            roles[idx][-2]['role_uuid'] = this_uuid

        role_names = []

        for (role,role_path,role_vars,default_vars) in roles:
            # special vars must be extracted from the dict to the included tasks
            special_keys = [ "sudo", "sudo_user", "when", "with_items" ]

@ -374,6 +406,7 @@ class Play(object):
            else:
                role_name = role

            role_names.append(role_name)
            if os.path.isfile(task):
                nt = dict(include=pipes.quote(task), vars=role_vars, default_vars=default_vars, role_name=role_name)
                for k in special_keys:

@ -420,6 +453,7 @@ class Play(object):
        ds['tasks'] = new_tasks
        ds['handlers'] = new_handlers
        ds['vars_files'] = new_vars_files
        ds['role_names'] = role_names

        self.default_vars = self._load_role_defaults(defaults_files)

@ -434,6 +468,7 @@ class Play(object):
            os.path.join(basepath, 'main'),
            os.path.join(basepath, 'main.yml'),
            os.path.join(basepath, 'main.yaml'),
            os.path.join(basepath, 'main.json'),
        )
        if sum([os.path.isfile(x) for x in mains]) > 1:
            raise errors.AnsibleError("found multiple main files at %s, only one allowed" % (basepath))

@ -498,7 +533,11 @@ class Play(object):
                include_vars = {}
                for k in x:
                    if k.startswith("with_"):
                        utils.deprecated("include + with_items is a removed deprecated feature", "1.5", removed=True)
                        if original_file:
                            offender = " (in %s)" % original_file
                        else:
                            offender = ""
                        utils.deprecated("include + with_items is a removed deprecated feature" + offender, "1.5", removed=True)
                    elif k.startswith("when_"):
                        utils.deprecated("\"when_<criteria>:\" is a removed deprecated feature, use the simplified 'when:' conditional directly", None, removed=True)
                    elif k == 'when':

@ -545,9 +584,9 @@ class Play(object):
                include_filename = utils.path_dwim(dirname, include_file)
                data = utils.parse_yaml_from_file(include_filename, vault_password=self.vault_password)
                if 'role_name' in x and data is not None:
                    for x in data:
                        if 'include' in x:
                            x['role_name'] = new_role
                    for y in data:
                        if isinstance(y, dict) and 'include' in y:
                            y['role_name'] = new_role
                loaded = self._load_tasks(data, mv, default_vars, included_sudo_vars, list(included_additional_conditions), original_file=include_filename, role_name=new_role)
                results += loaded
            elif type(x) == dict:

@ -671,11 +710,15 @@ class Play(object):
        unmatched_tags:    tags that were found within the current play but do not match
                           any provided by the user '''

        # gather all the tags in all the tasks into one list
        # gather all the tags in all the tasks and handlers into one list
        # FIXME: isn't this in self.tags already?

        all_tags = []
        for task in self._tasks:
            if not task.meta:
                all_tags.extend(task.tags)
        for handler in self._handlers:
            all_tags.extend(handler.tags)

        # compare the lists of tags using sets and return the matched and unmatched
        all_tags_set = set(all_tags)

@ -687,50 +730,113 @@ class Play(object):

    # *************************************************

    def _late_merge_role_tags(self):
        # build a local dict of tags for roles
        role_tags = {}
        for task in self._ds['tasks']:
            if 'role_name' in task:
                this_role = task['role_name'] + "-" + task['vars']['role_uuid']

                if this_role not in role_tags:
                    role_tags[this_role] = []

                if 'tags' in task['vars']:
                    if isinstance(task['vars']['tags'], basestring):
                        role_tags[this_role] += shlex.split(task['vars']['tags'])
                    else:
                        role_tags[this_role] += task['vars']['tags']

        # apply each role's tags to its tasks
        for idx, val in enumerate(self._tasks):
            if getattr(val, 'role_name', None) is not None:
                this_role = val.role_name + "-" + val.module_vars['role_uuid']
                if this_role in role_tags:
                    self._tasks[idx].tags = sorted(set(self._tasks[idx].tags + role_tags[this_role]))

    # *************************************************

    def _has_vars_in(self, msg):
        return ((msg.find("$") != -1) or (msg.find("{{") != -1))
        return "$" in msg or "{{" in msg

    # *************************************************

    def _update_vars_files_for_host(self, host, vault_password=None):

        def generate_filenames(host, inject, filename):

            """ Render the raw filename into 3 forms """

            filename2 = template(self.basedir, filename, self.vars)
            filename3 = filename2
            if host is not None:
                filename3 = template(self.basedir, filename2, inject)
            if self._has_vars_in(filename3) and host is not None:
                # allow play scoped vars and host scoped vars to template the filepath
                inject.update(self.vars)
                filename4 = template(self.basedir, filename3, inject)
                filename4 = utils.path_dwim(self.basedir, filename4)
            else:
                filename4 = utils.path_dwim(self.basedir, filename3)
            return filename2, filename3, filename4

        def update_vars_cache(host, inject, data, filename):

            """ update a host's vars cache with new var data """

            data = utils.combine_vars(inject, data)
            self.playbook.VARS_CACHE[host].update(data)
            self.playbook.callbacks.on_import_for_host(host, filename4)

        def process_files(filename, filename2, filename3, filename4, host=None):

            """ pseudo-algorithm for deciding where new vars should go """

            data = utils.parse_yaml_from_file(filename4, vault_password=self.vault_password)
            if data:
                if type(data) != dict:
                    raise errors.AnsibleError("%s must be stored as a dictionary/hash" % filename4)
                if host is not None:
                    if self._has_vars_in(filename2) and not self._has_vars_in(filename3):
                        # running a host specific pass and has host specific variables
                        # load into setup cache
                        update_vars_cache(host, inject, data, filename4)
                    elif self._has_vars_in(filename3) and not self._has_vars_in(filename4):
                        # handle mixed scope variables in filepath
                        update_vars_cache(host, inject, data, filename4)

                elif not self._has_vars_in(filename4):
                    # found a non-host specific variable, load into vars and NOT
                    # the setup cache
                    if host is not None:
                        self.vars.update(data)
                    else:
                        self.vars = utils.combine_vars(self.vars, data)

        # Enforce that vars_files is always a list
        if type(self.vars_files) != list:
            self.vars_files = [ self.vars_files ]

        # Build an inject if this is a host run started by self.update_vars_files
        if host is not None:
            inject = {}
            inject.update(self.playbook.inventory.get_variables(host, vault_password=vault_password))
            inject.update(self.playbook.SETUP_CACHE[host])
            inject.update(self.playbook.SETUP_CACHE.get(host, {}))
            inject.update(self.playbook.VARS_CACHE.get(host, {}))
        else:
            inject = None

        for filename in self.vars_files:

            if type(filename) == list:

                # loop over all filenames, loading the first one, and failing if # none found
                # loop over all filenames, loading the first one, and failing if none found
                found = False
                sequence = []
                for real_filename in filename:
                    filename2 = template(self.basedir, real_filename, self.vars)
                    filename3 = filename2
                    if host is not None:
                        filename3 = template(self.basedir, filename2, inject)
                    filename4 = utils.path_dwim(self.basedir, filename3)
                    filename2, filename3, filename4 = generate_filenames(host, inject, real_filename)
                    sequence.append(filename4)
                    if os.path.exists(filename4):
                        found = True
                        data = utils.parse_yaml_from_file(filename4, vault_password=self.vault_password)
                        if type(data) != dict:
                            raise errors.AnsibleError("%s must be stored as a dictionary/hash" % filename4)
                        if host is not None:
                            if self._has_vars_in(filename2) and not self._has_vars_in(filename3):
                                # this filename has variables in it that were fact specific
                                # so it needs to be loaded into the per host SETUP_CACHE
                                self.playbook.SETUP_CACHE[host].update(data)
                                self.playbook.callbacks.on_import_for_host(host, filename4)
                        elif not self._has_vars_in(filename4):
                            # found a non-host specific variable, load into vars and NOT
                            # the setup cache
                            self.vars.update(data)
                        process_files(filename, filename2, filename3, filename4, host=host)
                    elif host is not None:
                        self.playbook.callbacks.on_not_import_for_host(host, filename4)
                    if found:

@ -742,24 +848,11 @@ class Play(object):

            else:
                # just one filename supplied, load it!

                filename2 = template(self.basedir, filename, self.vars)
                filename3 = filename2
                if host is not None:
                    filename3 = template(self.basedir, filename2, inject)
                filename4 = utils.path_dwim(self.basedir, filename3)
                filename2, filename3, filename4 = generate_filenames(host, inject, filename)
                if self._has_vars_in(filename4):
                    continue
                new_vars = utils.parse_yaml_from_file(filename4, vault_password=self.vault_password)
                if new_vars:
                    if type(new_vars) != dict:
                        raise errors.AnsibleError("%s must be stored as dictionary/hash: %s" % (filename4, type(new_vars)))
                    if host is not None and self._has_vars_in(filename2) and not self._has_vars_in(filename3):
                        # running a host specific pass and has host specific variables
                        # load into setup cache
                        self.playbook.SETUP_CACHE[host] = utils.combine_vars(
                            self.playbook.SETUP_CACHE[host], new_vars)
                        self.playbook.callbacks.on_import_for_host(host, filename4)
                    elif host is None:
                        # running a non-host specific pass and we can update the global vars instead
                        self.vars = utils.combine_vars(self.vars, new_vars)
                process_files(filename, filename2, filename3, filename4, host=host)

        # finally, update the VARS_CACHE for the host, if it is set
        if host is not None:
            self.playbook.VARS_CACHE[host].update(self.playbook.extra_vars)
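To make the three filename forms above concrete: for a vars_files entry like "vars/{{ ansible_os_family }}.yml", filename2 is the entry templated with play vars only, filename3 additionally applies host-scoped vars, and filename4 is filename3 resolved to a real path via path_dwim. The cache decision then keys off which form still contains unresolved variables. A sketch of that rule (the helper names here are placeholders, not functions from the source):

    # Python 2 sketch of the decision rule in process_files()
    if has_vars_in(filename2) and not has_vars_in(filename3):
        # the path itself was host specific, so the data is host specific
        load_into_host_cache(data)
    elif not has_vars_in(filename4):
        # fully resolved without host vars: play-scoped data
        load_into_play_vars(data)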
lib/ansible/playbook/task.py

@ -85,7 +85,7 @@ class Task(object):
            elif x.startswith("with_"):

                if isinstance(ds[x], basestring) and ds[x].lstrip().startswith("{{"):
                    utils.warning("It is unneccessary to use '{{' in loops, leave variables in loop expressions bare.")
                    utils.warning("It is unnecessary to use '{{' in loops, leave variables in loop expressions bare.")

                plugin_name = x.replace("with_","")
                if plugin_name in utils.plugins.lookup_loader:

@ -97,7 +97,7 @@ class Task(object):

            elif x in [ 'changed_when', 'failed_when', 'when']:
                if isinstance(ds[x], basestring) and ds[x].lstrip().startswith("{{"):
                    utils.warning("It is unneccessary to use '{{' in conditionals, leave variables in loop expressions bare.")
                    utils.warning("It is unnecessary to use '{{' in conditionals, leave variables in loop expressions bare.")
            elif x.startswith("when_"):
                utils.deprecated("The 'when_' conditional has been removed. Switch to using the regular unified 'when' statements as described on docs.ansible.com.","1.5", removed=True)

@ -206,8 +206,12 @@ class Task(object):
        self.changed_when = ds.get('changed_when', None)
        self.failed_when = ds.get('failed_when', None)

        self.async_seconds = int(ds.get('async', 0))  # not async by default
        self.async_poll_interval = int(ds.get('poll', 10))  # default poll = 10 seconds
        self.async_seconds = ds.get('async', 0)  # not async by default
        self.async_seconds = template.template_from_string(play.basedir, self.async_seconds, self.module_vars)
        self.async_seconds = int(self.async_seconds)
        self.async_poll_interval = ds.get('poll', 10)  # default poll = 10 seconds
        self.async_poll_interval = template.template_from_string(play.basedir, self.async_poll_interval, self.module_vars)
        self.async_poll_interval = int(self.async_poll_interval)
        self.notify = ds.get('notify', [])
        self.first_available_file = ds.get('first_available_file', None)
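The effect of the hunk above is that async:/poll: values are now templated before the int() cast, so a task can take them from a variable. A sketch of the flow (the variable name and literal vars dict are illustrative assumptions; the real call passes self.module_vars as in the hunk):

    # Python 2 sketch: a task value like async: "{{ build_timeout }}"
    raw = "{{ build_timeout }}"
    raw = template.template_from_string(play.basedir, raw, {'build_timeout': '300'})
    secs = int(raw)   # -> 300 once the variable resolves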
lib/ansible/runner/__init__.py

@ -28,10 +28,10 @@ import collections
import socket
import base64
import sys
import shlex
import pipes
import jinja2
import subprocess
import getpass

import ansible.constants as C
import ansible.inventory

@ -81,18 +81,19 @@ def _executor_hook(job_queue, result_queue, new_stdin):
        traceback.print_exc()

class HostVars(dict):
    ''' A special view of setup_cache that adds values from the inventory when needed. '''
    ''' A special view of vars_cache that adds values from the inventory when needed. '''

    def __init__(self, setup_cache, inventory):
        self.setup_cache = setup_cache
    def __init__(self, vars_cache, inventory, vault_password=None):
        self.vars_cache = vars_cache
        self.inventory = inventory
        self.lookup = dict()
        self.update(setup_cache)
        self.update(vars_cache)
        self.vault_password = vault_password

    def __getitem__(self, host):
        if host not in self.lookup:
            result = self.inventory.get_variables(host)
            result.update(self.setup_cache.get(host, {}))
            result = self.inventory.get_variables(host, vault_password=self.vault_password)
            result.update(self.vars_cache.get(host, {}))
            self.lookup[host] = result
        return self.lookup[host]

@ -118,6 +119,7 @@ class Runner(object):
        background=0,                       # async poll every X seconds, else 0 for non-async
        basedir=None,                       # directory of playbook, if applicable
        setup_cache=None,                   # used to share fact data w/ other tasks
        vars_cache=None,                    # used to store variables about hosts
        transport=C.DEFAULT_TRANSPORT,      # 'ssh', 'paramiko', 'local'
        conditional='True',                 # run only if this fact expression evals to true
        callbacks=None,                     # used for output

@ -155,6 +157,7 @@ class Runner(object):
        self.check            = check
        self.diff             = diff
        self.setup_cache      = utils.default(setup_cache, lambda: collections.defaultdict(dict))
        self.vars_cache       = utils.default(vars_cache, lambda: collections.defaultdict(dict))
        self.basedir          = utils.default(basedir, lambda: os.getcwd())
        self.callbacks        = utils.default(callbacks, lambda: DefaultRunnerCallbacks())
        self.generated_jid    = str(random.randint(0, 999999999999))

@ -243,7 +246,7 @@ class Runner(object):
        """
        if complex_args is None:
            return module_args
        if type(complex_args) != dict:
        if not isinstance(complex_args, dict):
            raise errors.AnsibleError("complex arguments are not a dictionary: %s" % complex_args)
        for (k,v) in complex_args.iteritems():
            if isinstance(v, basestring):

@ -292,7 +295,7 @@ class Runner(object):
            raise errors.AnsibleError("environment must be a dictionary, received %s" % enviro)
        result = ""
        for (k,v) in enviro.iteritems():
            result = "%s=%s %s" % (k, pipes.quote(str(v)), result)
            result = "%s=%s %s" % (k, pipes.quote(unicode(v)), result)
        return result

    # *****************************************************

@ -415,7 +418,7 @@ class Runner(object):

        environment_string = self._compute_environment_string(inject)

        if tmp.find("tmp") != -1 and (self.sudo or self.su) and (self.sudo_user != 'root' or self.su_user != 'root'):
        if "tmp" in tmp and ((self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root')):
            # deal with possible umask issues once sudo'ed to other user
            cmd_chmod = "chmod a+r %s" % remote_module_path
            self._low_level_exec_command(conn, cmd_chmod, tmp, sudoable=False)

@ -444,7 +447,7 @@ class Runner(object):
            else:
                argsfile = self._transfer_str(conn, tmp, 'arguments', args)

            if (self.sudo or self.su) and (self.sudo_user != 'root' or self.su_user != 'root'):
            if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root'):
                # deal with possible umask issues once sudo'ed to other user
                cmd_args_chmod = "chmod a+r %s" % argsfile
                self._low_level_exec_command(conn, cmd_args_chmod, tmp, sudoable=False)

@ -469,7 +472,7 @@ class Runner(object):
            cmd = " ".join([environment_string.strip(), shebang.replace("#!","").strip(), cmd])
            cmd = cmd.strip()

        if tmp.find("tmp") != -1 and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
        if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if not self.sudo or self.su or self.sudo_user == 'root' or self.su_user == 'root':
                # not sudoing or sudoing to root, so can cleanup files in the same step
                cmd = cmd + "; rm -rf %s >/dev/null 2>&1" % tmp

@ -485,8 +488,8 @@ class Runner(object):
        else:
            res = self._low_level_exec_command(conn, cmd, tmp, sudoable=sudoable, in_data=in_data)

        if tmp.find("tmp") != -1 and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if (self.sudo or self.su) and (self.sudo_user != 'root' or self.su_user != 'root'):
        if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root'):
                # not sudoing to root, so maybe can't delete files as that other user
                # have to clean up temp files as original user in a second step
                cmd2 = "rm -rf %s >/dev/null 2>&1" % tmp

@ -508,10 +511,15 @@ class Runner(object):
            fileno = None

        try:
            self._new_stdin = new_stdin
            if not new_stdin and fileno is not None:
                self._new_stdin = os.fdopen(os.dup(fileno))
            else:
                self._new_stdin = new_stdin
                try:
                    self._new_stdin = os.fdopen(os.dup(fileno))
                except OSError, e:
                    # couldn't dupe stdin, most likely because it's
                    # not a valid file descriptor, so we just rely on
                    # using the one that was passed in
                    pass

            exec_rc = self._executor_internal(host, new_stdin)
            if type(exec_rc) != ReturnData:

@ -544,13 +552,21 @@ class Runner(object):
            # fireball, local, etc
            port = self.remote_port

        # merge the VARS and SETUP caches for this host
        combined_cache = self.setup_cache.copy()
        combined_cache.get(host, {}).update(self.vars_cache.get(host, {}))

        # use combined_cache and host_variables to template the module_vars
        module_vars_inject = utils.combine_vars(combined_cache.get(host, {}), host_variables)
        module_vars = template.template(self.basedir, self.module_vars, module_vars_inject)

        inject = {}
        inject = utils.combine_vars(inject, self.default_vars)
        inject = utils.combine_vars(inject, host_variables)
        inject = utils.combine_vars(inject, self.module_vars)
        inject = utils.combine_vars(inject, self.setup_cache[host])
        inject = utils.combine_vars(inject, module_vars)
        inject = utils.combine_vars(inject, combined_cache.get(host, {}))
        inject.setdefault('ansible_ssh_user', self.remote_user)
        inject['hostvars'] = HostVars(self.setup_cache, self.inventory)
        inject['hostvars'] = HostVars(combined_cache, self.inventory, vault_password=self.vault_pass)
        inject['group_names'] = host_variables.get('group_names', [])
        inject['groups'] = self.inventory.groups_list()
        inject['vars'] = self.module_vars

@ -612,7 +628,6 @@ class Runner(object):
            if self.background > 0:
                raise errors.AnsibleError("lookup plugins (with_*) cannot be used with async tasks")

            aggregrate = {}
            all_comm_ok = True
            all_changed = False
            all_failed = False

@ -711,10 +726,18 @@ class Runner(object):
        actual_transport = inject.get('ansible_connection', self.transport)
        actual_private_key_file = inject.get('ansible_ssh_private_key_file', self.private_key_file)
        actual_private_key_file = template.template(self.basedir, actual_private_key_file, inject, fail_on_undefined=True)
        self.sudo = utils.boolean(inject.get('ansible_sudo', self.sudo))
        self.sudo_user = inject.get('ansible_sudo_user', self.sudo_user)
        self.sudo_pass = inject.get('ansible_sudo_pass', self.sudo_pass)
        self.su = inject.get('ansible_su', self.su)
        self.su_pass = inject.get('ansible_su_pass', self.su_pass)

        # select default root user in case self.sudo requested
        # but no user specified; happens e.g. in host vars when
        # just ansible_sudo=True is specified
        if self.sudo and self.sudo_user is None:
            self.sudo_user = 'root'

        if actual_private_key_file is not None:
            actual_private_key_file = os.path.expanduser(actual_private_key_file)

@ -750,6 +773,7 @@ class Runner(object):
        # user/pass may still contain variables at this stage
        actual_user = template.template(self.basedir, actual_user, inject)
        actual_pass = template.template(self.basedir, actual_pass, inject)
        self.sudo_pass = template.template(self.basedir, self.sudo_pass, inject)

        # make actual_user available as __magic__ ansible_ssh_user variable
        inject['ansible_ssh_user'] = actual_user

@ -842,22 +866,25 @@ class Runner(object):

        changed_when = self.module_vars.get('changed_when')
        failed_when = self.module_vars.get('failed_when')
        if changed_when is not None or failed_when is not None:
        if (changed_when is not None or failed_when is not None) and self.background == 0:
            register = self.module_vars.get('register')
            if register is not None:
            if register is not None:
                if 'stdout' in data:
                    data['stdout_lines'] = data['stdout'].splitlines()
                inject[register] = data
            if changed_when is not None:
                data['changed'] = utils.check_conditional(changed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars)
            if failed_when is not None:
                data['failed_when_result'] = data['failed'] = utils.check_conditional(failed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars)
            # only run the final checks if the async_status has finished,
            # or if we're not running an async_status check at all
            if (module_name == 'async_status' and "finished" in data) or module_name != 'async_status':
                if changed_when is not None and 'skipped' not in data:
                    data['changed'] = utils.check_conditional(changed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars)
                if failed_when is not None:
                    data['failed_when_result'] = data['failed'] = utils.check_conditional(failed_when, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars)

        if is_chained:
            # no callbacks
            return result
        if 'skipped' in data:
            self.callbacks.on_skipped(host)
            self.callbacks.on_skipped(host, inject.get('item',None))
        elif not result.is_successful():
            ignore_errors = self.module_vars.get('ignore_errors', False)
            self.callbacks.on_failed(host, data, ignore_errors)

@ -875,7 +902,7 @@ class Runner(object):
            return False

    def _late_needs_tmp_path(self, conn, tmp, module_style):
        if tmp.find("tmp") != -1:
        if "tmp" in tmp:
            # tmp has already been created
            return False
        if not conn.has_pipelining or not C.ANSIBLE_SSH_PIPELINING or C.DEFAULT_KEEP_REMOTE_FILES or self.su:

@ -908,6 +935,12 @@ class Runner(object):
            if conn.user == sudo_user or conn.user == su_user:
                sudoable = False
                su = False
        else:
            # assume connection type is local if no user attribute
            this_user = getpass.getuser()
            if this_user == sudo_user or this_user == su_user:
                sudoable = False
                su = False

        if su:
            rc, stdin, stdout, stderr = conn.exec_command(cmd,

@ -986,11 +1019,11 @@ class Runner(object):

        basefile = 'ansible-tmp-%s-%s' % (time.time(), random.randint(0, 2**48))
        basetmp = os.path.join(C.DEFAULT_REMOTE_TMP, basefile)
        if (self.sudo or self.su) and (self.sudo_user != 'root' or self.su != 'root') and basetmp.startswith('$HOME'):
        if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root') and basetmp.startswith('$HOME'):
            basetmp = os.path.join('/tmp', basefile)

        cmd = 'mkdir -p %s' % basetmp
        if self.remote_user != 'root' or ((self.sudo or self.su) and (self.sudo_user != 'root' or self.su != 'root')):
        if self.remote_user != 'root' or ((self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root')):
            cmd += ' && chmod a+rx %s' % basetmp
        cmd += ' && echo %s' % basetmp

@ -1075,9 +1108,22 @@ class Runner(object):
            job_queue.put(host)
        result_queue = manager.Queue()

        try:
            fileno = sys.stdin.fileno()
        except ValueError:
            fileno = None

        workers = []
        for i in range(self.forks):
            new_stdin = os.fdopen(os.dup(sys.stdin.fileno()))
            new_stdin = None
            if fileno is not None:
                try:
                    new_stdin = os.fdopen(os.dup(fileno))
                except OSError, e:
                    # couldn't dupe stdin, most likely because it's
                    # not a valid file descriptor, so we just rely on
                    # using the one that was passed in
                    pass
            prc = multiprocessing.Process(target=_executor_hook,
                args=(job_queue, result_queue, new_stdin))
            prc.start()
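The os.dup() guards added above matter when ansible is embedded and sys.stdin is not a real file (for example, replaced by a StringIO, or running with no controlling terminal): fileno() raises ValueError and dup() raises OSError. A standalone sketch of the pattern:

    # Python 2 sketch of the defensive stdin duplication used above
    import os, sys
    try:
        fileno = sys.stdin.fileno()   # ValueError if stdin isn't a real file
    except ValueError:
        fileno = None
    new_stdin = None
    if fileno is not None:
        try:
            new_stdin = os.fdopen(os.dup(fileno))
        except OSError:
            pass                      # fall back to the stdin that was passed in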
lib/ansible/runner/action_plugins/assemble.py

@ -31,18 +31,43 @@ class ActionModule(object):
    def __init__(self, runner):
        self.runner = runner

    def _assemble_from_fragments(self, src_path, delimiter=None):
    def _assemble_from_fragments(self, src_path, delimiter=None, compiled_regexp=None):
        ''' assemble a file from a directory of fragments '''
        tmpfd, temp_path = tempfile.mkstemp()
        tmp = os.fdopen(tmpfd,'w')
        delimit_me = False
        add_newline = False

        for f in sorted(os.listdir(src_path)):
            if compiled_regexp and not compiled_regexp.search(f):
                continue
            fragment = "%s/%s" % (src_path, f)
            if delimit_me and delimiter:
                tmp.write(delimiter)
            if os.path.isfile(fragment):
                tmp.write(file(fragment).read())
            if not os.path.isfile(fragment):
                continue
            fragment_content = file(fragment).read()

            # always put a newline between fragments if the previous fragment didn't end with a newline.
            if add_newline:
                tmp.write('\n')

            # delimiters should only appear between fragments
            if delimit_me:
                if delimiter:
                    # un-escape anything like newlines
                    delimiter = delimiter.decode('unicode-escape')
                    tmp.write(delimiter)
                    # always make sure there's a newline after the
                    # delimiter, so lines don't run together
                    if delimiter[-1] != '\n':
                        tmp.write('\n')

            tmp.write(fragment_content)
            delimit_me = True
            if fragment_content.endswith('\n'):
                add_newline = False
            else:
                add_newline = True

        tmp.close()
        return temp_path

@ -52,6 +77,7 @@ class ActionModule(object):
        options = {}
        if complex_args:
            options.update(complex_args)

        options.update(utils.parse_kv(module_args))

        src = options.get('src', None)

@ -59,6 +85,7 @@ class ActionModule(object):
        delimiter = options.get('delimiter', None)
        remote_src = utils.boolean(options.get('remote_src', 'yes'))

        if src is None or dest is None:
            result = dict(failed=True, msg="src and dest are required")
            return ReturnData(conn=conn, comm_ok=False, result=result)
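A note on the delimiter handling above: the option arrives as a plain string, so a user-supplied "\n" is still the two characters backslash-n until decode('unicode-escape') turns it into a real newline. A sketch with an illustrative delimiter value:

    # Python 2 sketch of the un-escaping step
    delimiter = '# ----\\n'                    # as parsed from module args
    delimiter = delimiter.decode('unicode-escape')
    assert delimiter.endswith('\n')            # now a real newline, so no extra one is written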
@ -33,7 +33,7 @@ class ActionModule(object):
            module_name = 'command'
            module_args += " #USE_SHELL"

        if tmp.find("tmp") == -1:
        if "tmp" not in tmp:
            tmp = self.runner._make_tmp_path(conn)

        (module_path, is_new_style, shebang) = self.runner._copy_module(conn, tmp, module_name, module_args, inject, complex_args=complex_args)
lib/ansible/runner/action_plugins/copy.py

@ -54,6 +54,16 @@ class ActionModule(object):
        raw = utils.boolean(options.get('raw', 'no'))
        force = utils.boolean(options.get('force', 'yes'))

        # content with newlines is going to be escaped to safely load in yaml
        # now we need to unescape it so that the newlines are evaluated properly
        # when writing the file to disk
        if content:
            if isinstance(content, unicode):
                try:
                    content = content.decode('unicode-escape')
                except UnicodeDecodeError:
                    pass

        if (source is None and content is None and not 'first_available_file' in inject) or dest is None:
            result = dict(failed=True, msg="src (or content) and dest are required")
            return ReturnData(conn=conn, result=result)

@ -325,7 +335,7 @@ class ActionModule(object):
            src = open(source)
            src_contents = src.read(8192)
            st = os.stat(source)
            if src_contents.find("\x00") != -1:
            if "\x00" in src_contents:
                diff['src_binary'] = 1
            elif st[stat.ST_SIZE] > utils.MAX_FILE_SIZE_FOR_DIFF:
                diff['src_larger'] = utils.MAX_FILE_SIZE_FOR_DIFF
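The unescape step above mirrors the assemble change: content written as `copy: content="line1\nline2" dest=...` reaches the plugin with a literal backslash-n, and decode('unicode-escape') restores real newlines before the file is written. A sketch:

    # Python 2 sketch of the content un-escaping
    content = u'line1\\nline2'                  # as received from YAML/k=v parsing
    content = content.decode('unicode-escape')  # -> u'line1\nline2'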
@ -83,7 +83,8 @@ class ActionModule(object):
            inv_group = ansible.inventory.Group(name=group)
            inventory.add_group(inv_group)
        for host in hosts:
            del self.runner.inventory._vars_per_host[host]
            if host in self.runner.inventory._vars_per_host:
                del self.runner.inventory._vars_per_host[host]
            inv_host = inventory.get_host(host)
            if not inv_host:
                inv_host = ansible.inventory.Host(name=host)
lib/ansible/runner/action_plugins/pause.py

@ -77,11 +77,11 @@ class ActionModule(object):
        # Is 'prompt' a key in 'args'?
        elif 'prompt' in args:
            self.pause_type = 'prompt'
            self.prompt = "[%s]\n%s: " % (hosts, args['prompt'])
            self.prompt = "[%s]\n%s:\n" % (hosts, args['prompt'])
        # If 'args' is empty, this is the default prompted pause
        elif len(args.keys()) == 0:
            self.pause_type = 'prompt'
            self.prompt = "[%s]\nPress enter to continue: " % hosts
            self.prompt = "[%s]\nPress enter to continue:\n" % hosts
        # I have no idea what you're trying to do. But it's so wrong.
        else:
            raise ae("invalid pause type given. must be one of: %s" % \
@ -128,7 +128,7 @@ class ActionModule(object):
        result = handler.run(conn, tmp, 'raw', module_args, inject)

        # clean up after
        if tmp.find("tmp") != -1 and not C.DEFAULT_KEEP_REMOTE_FILES:
        if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES:
            self.runner._low_level_exec_command(conn, 'rm -rf %s >/dev/null 2>&1' % tmp, tmp)

        result.result['changed'] = True
|
|
@@ -26,26 +26,54 @@ class ActionModule(object):
 
     def __init__(self, runner):
         self.runner = runner
+        self.inject = None
+
+    def _get_absolute_path(self, path=None):
+        if 'vars' in self.inject:
+            if '_original_file' in self.inject['vars']:
+                # roles
+                path = utils.path_dwim_relative(self.inject['_original_file'], 'files', path, self.runner.basedir)
+            elif 'inventory_dir' in self.inject['vars']:
+                # non-roles
+                abs_dir = os.path.abspath(self.inject['vars']['inventory_dir'])
+                path = os.path.join(abs_dir, path)
+
+        return path
 
     def _process_origin(self, host, path, user):
 
         if not host in ['127.0.0.1', 'localhost']:
-            return '%s@%s:%s' % (user, host, path)
+            if user:
+                return '%s@%s:%s' % (user, host, path)
+            else:
+                return '%s:%s' % (host, path)
         else:
+            if not ':' in path:
+                if not path.startswith('/'):
+                    path = self._get_absolute_path(path=path)
             return path
 
     def _process_remote(self, host, path, user):
         transport = self.runner.transport
         return_data = None
         if not host in ['127.0.0.1', 'localhost'] or transport != "local":
-            return_data = '%s@%s:%s' % (user, host, path)
+            if user:
+                return_data = '%s@%s:%s' % (user, host, path)
+            else:
+                return_data = '%s:%s' % (host, path)
         else:
             return_data = path
 
+        if not ':' in return_data:
+            if not return_data.startswith('/'):
+                return_data = self._get_absolute_path(path=return_data)
+
         return return_data
 
     def setup(self, module_name, inject):
         ''' Always default to localhost as delegate if None defined '''
+
+        self.inject = inject
 
         # Store original transport and sudo values.
         self.original_transport = inject.get('ansible_connection', self.runner.transport)

@@ -65,6 +93,8 @@ class ActionModule(object):
 
         ''' generates params and passes them on to the rsync module '''
 
+        self.inject = inject
+
         # load up options
         options = {}
         if complex_args:

@@ -122,13 +152,14 @@ class ActionModule(object):
         if process_args or use_delegate:
 
             user = None
-            if use_delegate:
-                user = inject['hostvars'][conn.delegate].get('ansible_ssh_user')
-
-            if not use_delegate or not user:
-                user = inject.get('ansible_ssh_user',
-                                  self.runner.remote_user)
+            if utils.boolean(options.get('set_remote_user', 'yes')):
+                if use_delegate:
+                    user = inject['hostvars'][conn.delegate].get('ansible_ssh_user')
+
+                if not use_delegate or not user:
+                    user = inject.get('ansible_ssh_user',
+                                      self.runner.remote_user)
 
             if use_delegate:
                 # FIXME
                 private_key = inject.get('ansible_ssh_private_key_file', self.runner.private_key_file)

@@ -167,12 +198,15 @@ class ActionModule(object):
         if rsync_path:
             options['rsync_path'] = '"' + rsync_path + '"'
 
-        module_items = ' '.join(['%s=%s' % (k, v) for (k,
-                v) in options.items()])
-
+        module_args = ""
         if self.runner.noop_on_check(inject):
-            module_items += " CHECKMODE=True"
+            module_args = "CHECKMODE=True"
 
-        return self.runner._execute_module(conn, tmp, 'synchronize',
-                module_items, inject=inject)
+        # run the module and store the result
+        result = self.runner._execute_module(conn, tmp, 'synchronize', module_args, complex_args=options, inject=inject)
+
+        # reset the sudo property
+        self.runner.sudo = self.original_sudo
+
+        return result
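With the user handling above, a host with no user set no longer renders as `None@host:path`. A standalone sketch of the patched path-building logic (host, path, and user values are made up):

    def process_origin(host, path, user):
        # mirrors the patched _process_origin: only prepend user@ when one is set
        if host not in ('127.0.0.1', 'localhost'):
            if user:
                return '%s@%s:%s' % (user, host, path)
            return '%s:%s' % (host, path)
        return path

    print process_origin('web1', '/srv/www', 'deploy')  # deploy@web1:/srv/www
    print process_origin('web1', '/srv/www', None)      # web1:/srv/www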
@@ -85,7 +85,7 @@ class ActionModule(object):
 
         # template the source data locally & get ready to transfer
         try:
-            resultant = template.template_from_file(self.runner.basedir, source, inject)
+            resultant = template.template_from_file(self.runner.basedir, source, inject, vault_password=self.runner.vault_pass)
         except Exception, e:
             result = dict(failed=True, msg=str(e))
             return ReturnData(conn=conn, comm_ok=False, result=result)

@@ -123,7 +123,8 @@ class ActionModule(object):
                 return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=dict(before_header=dest, after_header=source, before=dest_contents, after=resultant))
             else:
                 res = self.runner._execute_module(conn, tmp, 'copy', module_args, inject=inject, complex_args=complex_args)
-                res.diff = dict(before=dest_contents, after=resultant)
+                if res.result.get('changed', False):
+                    res.diff = dict(before=dest_contents, after=resultant)
                 return res
         else:
             return self.runner._execute_module(conn, tmp, 'file', module_args, inject=inject, complex_args=complex_args)
@ -22,10 +22,10 @@ import socket
|
|||
import struct
|
||||
import time
|
||||
from ansible.callbacks import vvv, vvvv
|
||||
from ansible.errors import AnsibleError, AnsibleFileNotFound
|
||||
from ansible.runner.connection_plugins.ssh import Connection as SSHConnection
|
||||
from ansible.runner.connection_plugins.paramiko_ssh import Connection as ParamikoConnection
|
||||
from ansible import utils
|
||||
from ansible import errors
|
||||
from ansible import constants
|
||||
|
||||
# the chunk size to read and send, assuming mtu 1500 and
|
||||
|
@ -85,7 +85,15 @@ class Connection(object):
|
|||
utils.AES_KEYS = self.runner.aes_keys
|
||||
|
||||
def _execute_accelerate_module(self):
|
||||
args = "password=%s port=%s debug=%d ipv6=%s" % (base64.b64encode(self.key.__str__()), str(self.accport), int(utils.VERBOSITY), self.runner.accelerate_ipv6)
|
||||
args = "password=%s port=%s minutes=%d debug=%d ipv6=%s" % (
|
||||
base64.b64encode(self.key.__str__()),
|
||||
str(self.accport),
|
||||
constants.ACCELERATE_DAEMON_TIMEOUT,
|
||||
int(utils.VERBOSITY),
|
||||
self.runner.accelerate_ipv6,
|
||||
)
|
||||
if constants.ACCELERATE_MULTI_KEY:
|
||||
args += " multi_key=yes"
|
||||
inject = dict(password=self.key)
|
||||
if getattr(self.runner, 'accelerate_inventory_host', False):
|
||||
inject = utils.combine_vars(inject, self.runner.inventory.get_variables(self.runner.accelerate_inventory_host))
|
||||
|
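The daemon now receives its idle timeout (`minutes=...`) and, when configured, `multi_key=yes` in its argument string. Roughly what that string looks like (all values below are illustrative, not from a real run):

    import base64
    args = "password=%s port=%s minutes=%d debug=%d ipv6=%s" % (
        base64.b64encode("SECRETKEY"), "5099", 30, 0, False)
    print args  # password=U0VDUkVUS0VZ port=5099 minutes=30 debug=0 ipv6=False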
@@ -109,33 +117,38 @@ class Connection(object):
                 while tries > 0:
                     try:
                         self.conn.connect((self.host,self.accport))
-                        if not self.validate_user():
-                            # the accelerated daemon was started with a
-                            # different remote_user. The above command
-                            # should have caused the accelerate daemon to
-                            # shutdown, so we'll reconnect.
-                            wrong_user = True
                         break
-                    except:
-                        vvvv("failed, retrying...")
+                    except socket.error:
+                        vvvv("connection to %s failed, retrying..." % self.host)
                         time.sleep(0.1)
                         tries -= 1
                 if tries == 0:
                     vvv("Could not connect via the accelerated connection, exceeded # of tries")
-                    raise errors.AnsibleError("Failed to connect")
+                    raise AnsibleError("FAILED")
                 elif wrong_user:
                     vvv("Restarting daemon with a different remote_user")
-                    raise errors.AnsibleError("Wrong user")
+                    raise AnsibleError("WRONG_USER")
 
                 self.conn.settimeout(constants.ACCELERATE_TIMEOUT)
+                if not self.validate_user():
+                    # the accelerated daemon was started with a
+                    # different remote_user. The above command
+                    # should have caused the accelerate daemon to
+                    # shutdown, so we'll reconnect.
+                    wrong_user = True
 
-        except:
+        except AnsibleError, e:
             if allow_ssh:
+                if "WRONG_USER" in e:
+                    vvv("Switching users, waiting for the daemon on %s to shutdown completely..." % self.host)
+                    time.sleep(5)
                 vvv("Falling back to ssh to startup accelerated mode")
                 res = self._execute_accelerate_module()
                 if not res.is_successful():
-                    raise errors.AnsibleError("Failed to launch the accelerated daemon on %s (reason: %s)" % (self.host,res.result.get('msg')))
+                    raise AnsibleError("Failed to launch the accelerated daemon on %s (reason: %s)" % (self.host,res.result.get('msg')))
                 return self.connect(allow_ssh=False)
             else:
-                raise errors.AnsibleError("Failed to connect to %s:%s" % (self.host,self.accport))
+                raise AnsibleError("Failed to connect to %s:%s" % (self.host,self.accport))
         self.is_connected = True
         return self

@@ -163,11 +176,12 @@ class Connection(object):
                 if not d:
                     vvvv("%s: received nothing, bailing out" % self.host)
                     return None
+                vvvv("%s: received %d bytes" % (self.host, len(d)))
                 data += d
             vvvv("%s: received all of the data, returning" % self.host)
             return data
         except socket.timeout:
-            raise errors.AnsibleError("timed out while waiting to receive data")
+            raise AnsibleError("timed out while waiting to receive data")
 
     def validate_user(self):
         '''

@@ -176,6 +190,7 @@ class Connection(object):
         daemon to exit if they don't match
         '''
 
+        vvvv("%s: sending request for validate_user" % self.host)
         data = dict(
             mode='validate_user',
             username=self.user,

@@ -183,15 +198,16 @@ class Connection(object):
         data = utils.jsonify(data)
         data = utils.encrypt(self.key, data)
         if self.send_data(data):
-            raise errors.AnsibleError("Failed to send command to %s" % self.host)
+            raise AnsibleError("Failed to send command to %s" % self.host)
 
+        vvvv("%s: waiting for validate_user response" % self.host)
         while True:
             # we loop here while waiting for the response, because a
             # long running command may cause us to receive keepalive packets
             # ({"pong":"true"}) rather than the response we want.
             response = self.recv_data()
             if not response:
-                raise errors.AnsibleError("Failed to get a response from %s" % self.host)
+                raise AnsibleError("Failed to get a response from %s" % self.host)
             response = utils.decrypt(self.key, response)
             response = utils.parse_json(response)
             if "pong" in response:

@@ -199,11 +215,11 @@ class Connection(object):
                 vvvv("%s: received a keepalive packet" % self.host)
                 continue
             else:
-                vvvv("%s: received the response" % self.host)
+                vvvv("%s: received the validate_user response: %s" % (self.host, response))
                 break
 
         if response.get('failed'):
-            raise errors.AnsibleError("Error while validating user: %s" % response.get("msg"))
+            return False
         else:
             return response.get('rc') == 0

@@ -211,10 +227,10 @@ class Connection(object):
         ''' run a command on the remote host '''
 
         if su or su_user:
-            raise errors.AnsibleError("Internal Error: this module does not support running commands via su")
+            raise AnsibleError("Internal Error: this module does not support running commands via su")
 
         if in_data:
-            raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
+            raise AnsibleError("Internal Error: this module does not support optimized module pipelining")
 
         if executable == "":
             executable = constants.DEFAULT_EXECUTABLE

@@ -233,7 +249,7 @@ class Connection(object):
         data = utils.jsonify(data)
         data = utils.encrypt(self.key, data)
         if self.send_data(data):
-            raise errors.AnsibleError("Failed to send command to %s" % self.host)
+            raise AnsibleError("Failed to send command to %s" % self.host)
 
         while True:
             # we loop here while waiting for the response, because a

@@ -241,7 +257,7 @@ class Connection(object):
             # ({"pong":"true"}) rather than the response we want.
             response = self.recv_data()
             if not response:
-                raise errors.AnsibleError("Failed to get a response from %s" % self.host)
+                raise AnsibleError("Failed to get a response from %s" % self.host)
             response = utils.decrypt(self.key, response)
             response = utils.parse_json(response)
             if "pong" in response:

@@ -260,7 +276,7 @@ class Connection(object):
         vvv("PUT %s TO %s" % (in_path, out_path), host=self.host)
 
         if not os.path.exists(in_path):
-            raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path)
+            raise AnsibleFileNotFound("file or module does not exist: %s" % in_path)
 
         fd = file(in_path, 'rb')
         fstat = os.stat(in_path)

@@ -279,27 +295,27 @@ class Connection(object):
                 data = utils.encrypt(self.key, data)
 
                 if self.send_data(data):
-                    raise errors.AnsibleError("failed to send the file to %s" % self.host)
+                    raise AnsibleError("failed to send the file to %s" % self.host)
 
                 response = self.recv_data()
                 if not response:
-                    raise errors.AnsibleError("Failed to get a response from %s" % self.host)
+                    raise AnsibleError("Failed to get a response from %s" % self.host)
                 response = utils.decrypt(self.key, response)
                 response = utils.parse_json(response)
 
                 if response.get('failed',False):
-                    raise errors.AnsibleError("failed to put the file in the requested location")
+                    raise AnsibleError("failed to put the file in the requested location")
         finally:
             fd.close()
             vvvv("waiting for final response after PUT")
             response = self.recv_data()
             if not response:
-                raise errors.AnsibleError("Failed to get a response from %s" % self.host)
+                raise AnsibleError("Failed to get a response from %s" % self.host)
             response = utils.decrypt(self.key, response)
             response = utils.parse_json(response)
 
             if response.get('failed',False):
-                raise errors.AnsibleError("failed to put the file in the requested location")
+                raise AnsibleError("failed to put the file in the requested location")
 
     def fetch_file(self, in_path, out_path):
         ''' save a remote file to the specified path '''

@@ -309,7 +325,7 @@ class Connection(object):
         data = utils.jsonify(data)
         data = utils.encrypt(self.key, data)
         if self.send_data(data):
-            raise errors.AnsibleError("failed to initiate the file fetch with %s" % self.host)
+            raise AnsibleError("failed to initiate the file fetch with %s" % self.host)
 
         fh = open(out_path, "w")
         try:

@@ -317,11 +333,11 @@ class Connection(object):
             while True:
                 response = self.recv_data()
                 if not response:
-                    raise errors.AnsibleError("Failed to get a response from %s" % self.host)
+                    raise AnsibleError("Failed to get a response from %s" % self.host)
                 response = utils.decrypt(self.key, response)
                 response = utils.parse_json(response)
                 if response.get('failed', False):
-                    raise errors.AnsibleError("Error during file fetch, aborting")
+                    raise AnsibleError("Error during file fetch, aborting")
                 out = base64.b64decode(response['data'])
                 fh.write(out)
                 bytes += len(out)

@@ -330,7 +346,7 @@ class Connection(object):
                 data = utils.jsonify(dict())
                 data = utils.encrypt(self.key, data)
                 if self.send_data(data):
-                    raise errors.AnsibleError("failed to send ack during file fetch")
+                    raise AnsibleError("failed to send ack during file fetch")
                 if response.get('last', False):
                     break
         finally:
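Several of the receive loops above skip keepalive frames until a real response arrives. The pattern in isolation (simulated frames; the real code decrypts and JSON-parses each frame first):

    frames = [{"pong": "true"}, {"pong": "true"}, {"rc": 0}]  # simulated wire traffic
    response = None
    for frame in frames:
        if "pong" in frame:
            continue    # keepalive sent during a long-running command; keep waiting
        response = frame
        break
    print response      # {'rc': 0}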
lib/ansible/runner/connection_plugins/libvirt_lxc.py (new file)

@@ -0,0 +1,121 @@
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# (c) 2013, Michael Scherer <misc@zarb.org>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

import distutils.spawn
import os
import subprocess
from ansible import errors
from ansible.callbacks import vvv

class Connection(object):
    ''' Local lxc based connections '''

    def _search_executable(self, executable):
        cmd = distutils.spawn.find_executable(executable)
        if not cmd:
            raise errors.AnsibleError("%s command not found in PATH" % executable)
        return cmd

    def _check_domain(self, domain):
        p = subprocess.Popen([self.cmd, '-q', '-c', 'lxc:///', 'dominfo', domain],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        p.communicate()
        if p.returncode:
            raise errors.AnsibleError("%s is not a lxc defined in libvirt" % domain)

    def __init__(self, runner, host, port, *args, **kwargs):
        self.lxc = host

        self.cmd = self._search_executable('virsh')

        self._check_domain(host)

        self.runner = runner
        self.host = host
        # port is unused, since this is local
        self.port = port

    def connect(self, port=None):
        ''' connect to the lxc; nothing to do here '''

        vvv("THIS IS A LOCAL LXC DIR", host=self.lxc)

        return self

    def _generate_cmd(self, executable, cmd):
        if executable:
            local_cmd = [self.cmd, '-q', '-c', 'lxc:///', 'lxc-enter-namespace', self.lxc, '--', executable, '-c', cmd]
        else:
            local_cmd = '%s -q -c lxc:/// lxc-enter-namespace %s -- %s' % (self.cmd, self.lxc, cmd)
        return local_cmd

    def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False, executable='/bin/sh'):
        ''' run a command on the chroot '''

        # We enter lxc as root so sudo stuff can be ignored
        local_cmd = self._generate_cmd(executable, cmd)

        vvv("EXEC %s" % (local_cmd), host=self.lxc)
        p = subprocess.Popen(local_cmd, shell=isinstance(local_cmd, basestring),
                             cwd=self.runner.basedir,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        stdout, stderr = p.communicate()
        return (p.returncode, '', stdout, stderr)

    def _normalize_path(self, path, prefix):
        if not path.startswith(os.path.sep):
            path = os.path.join(os.path.sep, path)
        normpath = os.path.normpath(path)
        return os.path.join(prefix, normpath[1:])

    def put_file(self, in_path, out_path):
        ''' transfer a file from local to lxc '''

        out_path = self._normalize_path(out_path, '/')
        vvv("PUT %s TO %s" % (in_path, out_path), host=self.lxc)

        local_cmd = [self.cmd, '-q', '-c', 'lxc:///', 'lxc-enter-namespace', self.lxc, '--', '/bin/tee', out_path]
        vvv("EXEC %s" % (local_cmd), host=self.lxc)

        p = subprocess.Popen(local_cmd, cwd=self.runner.basedir,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate(open(in_path, 'rb').read())

    def fetch_file(self, in_path, out_path):
        ''' fetch a file from lxc to local '''

        in_path = self._normalize_path(in_path, '/')
        vvv("FETCH %s TO %s" % (in_path, out_path), host=self.lxc)

        local_cmd = [self.cmd, '-q', '-c', 'lxc:///', 'lxc-enter-namespace', self.lxc, '--', '/bin/cat', in_path]
        vvv("EXEC %s" % (local_cmd), host=self.lxc)

        p = subprocess.Popen(local_cmd, cwd=self.runner.basedir,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        open(out_path, 'wb').write(stdout)

    def close(self):
        ''' terminate the connection; nothing to do here '''
        pass
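For exec_command the plugin shells into the container through libvirt rather than ssh. Roughly the command it ends up running (container name `mycontainer` is hypothetical):

    # equivalent shell invocation:
    #   virsh -q -c lxc:/// lxc-enter-namespace mycontainer -- /bin/sh -c 'uptime'
    local_cmd = ['virsh', '-q', '-c', 'lxc:///', 'lxc-enter-namespace',
                 'mycontainer', '--', '/bin/sh', '-c', 'uptime']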
@@ -68,9 +68,9 @@ class Connection(object):
         cp_in_use = False
         cp_path_set = False
         for arg in self.common_args:
-            if arg.find("ControlPersist") != -1:
+            if "ControlPersist" in arg:
                 cp_in_use = True
-            if arg.find("ControlPath") != -1:
+            if "ControlPath" in arg:
                 cp_path_set = True
 
         if cp_in_use and not cp_path_set:

@@ -98,6 +98,28 @@ class Connection(object):
 
         return self
 
+    def _run(self, cmd, indata):
+        if indata:
+            # do not use pseudo-pty
+            p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
+                                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+            stdin = p.stdin
+        else:
+            # try to use a pseudo-pty
+            try:
+                # Make sure stdin is a proper (pseudo) pty to avoid: tcgetattr errors
+                master, slave = pty.openpty()
+                p = subprocess.Popen(cmd, stdin=slave,
+                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+                stdin = os.fdopen(master, 'w', 0)
+                os.close(slave)
+            except:
+                p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
+                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+                stdin = p.stdin
+
+        return (p, stdin)
+
     def _password_cmd(self):
         if self.password:
             try:
@@ -116,6 +138,64 @@ class Connection(object):
             os.write(self.wfd, "%s\n" % self.password)
             os.close(self.wfd)
 
+    def _communicate(self, p, stdin, indata, su=False, sudoable=False, prompt=None):
+        fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
+        fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
+        # We can't use p.communicate here because the ControlMaster may have stdout open as well
+        stdout = ''
+        stderr = ''
+        rpipes = [p.stdout, p.stderr]
+        if indata:
+            try:
+                stdin.write(indata)
+                stdin.close()
+            except:
+                raise errors.AnsibleError('SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh')
+        # Read stdout/stderr from process
+        while True:
+            rfd, wfd, efd = select.select(rpipes, [], rpipes, 1)
+
+            # fail early if the sudo/su password is wrong
+            if self.runner.sudo and sudoable and self.runner.sudo_pass:
+                incorrect_password = gettext.dgettext(
+                    "sudo", "Sorry, try again.")
+                if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)):
+                    raise errors.AnsibleError('Incorrect sudo password')
+
+            if self.runner.su and su and self.runner.su_pass:
+                incorrect_password = gettext.dgettext(
+                    "su", "Sorry")
+                if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)):
+                    raise errors.AnsibleError('Incorrect su password')
+
+            if p.stdout in rfd:
+                dat = os.read(p.stdout.fileno(), 9000)
+                stdout += dat
+                if dat == '':
+                    rpipes.remove(p.stdout)
+            if p.stderr in rfd:
+                dat = os.read(p.stderr.fileno(), 9000)
+                stderr += dat
+                if dat == '':
+                    rpipes.remove(p.stderr)
+            # only break out if no pipes are left to read or
+            # the pipes are completely read and
+            # the process is terminated
+            if (not rpipes or not rfd) and p.poll() is not None:
+                break
+            # No pipes are left to read but process is not yet terminated
+            # Only then it is safe to wait for the process to be finished
+            # NOTE: Actually p.poll() is always None here if rpipes is empty
+            elif not rpipes and p.poll() == None:
+                p.wait()
+                # The process is terminated. Since no pipes to read from are
+                # left, there is no need to call select() again.
+                break
+        # close stdin after process is terminated and stdout/stderr are read
+        # completely (see also issue #848)
+        stdin.close()
+        return (p.returncode, stdout, stderr)
+
     def not_in_host_file(self, host):
         if 'USER' in os.environ:
             user_host_file = os.path.expandvars("~${USER}/.ssh/known_hosts")
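`_communicate` replaces `Popen.communicate()` with a select()-based loop because, with ControlMaster, ssh may keep stdout open. The read pattern in miniature (standalone Python 2 sketch):

    import fcntl, os, select, subprocess

    p = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # ensure the pipes are in blocking mode, as _communicate does
    for pipe in (p.stdout, p.stderr):
        fcntl.fcntl(pipe, fcntl.F_SETFL, fcntl.fcntl(pipe, fcntl.F_GETFL) & ~os.O_NONBLOCK)

    stdout, rpipes = '', [p.stdout, p.stderr]
    while rpipes:
        rfd, _, _ = select.select(rpipes, [], rpipes, 1)
        for pipe in rfd:
            dat = os.read(pipe.fileno(), 9000)
            if dat == '':
                rpipes.remove(pipe)      # EOF on this pipe
            elif pipe is p.stdout:
                stdout += dat
    p.wait()
    print stdout.strip()                 # hello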
@@ -137,7 +217,7 @@ class Connection(object):
             data = host_fh.read()
             host_fh.close()
             for line in data.split("\n"):
-                if line is None or line.find(" ") == -1:
+                if line is None or " " not in line:
                     continue
                 tokens = line.split()
                 if tokens[0].find(self.HASHED_KEY_MAGIC) == 0:

@@ -157,7 +237,7 @@ class Connection(object):
                     return False
 
         if (hfiles_not_found == len(host_file_list)):
-            print "previous known host file not found"
+            vvv("EXEC previous known host file not found for %s" % host)
             return True
 
     def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su_user=None, su=False):

@@ -184,6 +264,7 @@ class Connection(object):
             sudocmd, prompt, success_key = utils.make_su_cmd(su_user, executable, cmd)
             ssh_cmd.append(sudocmd)
         elif not self.runner.sudo or not sudoable:
+            prompt = None
             if executable:
                 ssh_cmd.append(executable + ' -c ' + pipes.quote(cmd))
             else:

@@ -203,24 +284,7 @@ class Connection(object):
             fcntl.lockf(self.runner.output_lockfile, fcntl.LOCK_EX)
 
         # create process
-        if in_data:
-            # do not use pseudo-pty
-            p = subprocess.Popen(ssh_cmd, stdin=subprocess.PIPE,
-                                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-            stdin = p.stdin
-        else:
-            # try to use a pseudo-pty
-            try:
-                # Make sure stdin is a proper (pseudo) pty to avoid: tcgetattr errors
-                master, slave = pty.openpty()
-                p = subprocess.Popen(ssh_cmd, stdin=slave,
-                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-                stdin = os.fdopen(master, 'w', 0)
-                os.close(slave)
-            except:
-                p = subprocess.Popen(ssh_cmd, stdin=subprocess.PIPE,
-                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-                stdin = p.stdin
+        (p, stdin) = self._run(ssh_cmd, in_data)
 
         self._send_password()
 

@@ -269,62 +333,16 @@ class Connection(object):
                 stdin.write(self.runner.sudo_pass + '\n')
             elif su:
                 stdin.write(self.runner.su_pass + '\n')
-        fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
-        fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
-        # We can't use p.communicate here because the ControlMaster may have stdout open as well
-        stdout = ''
-        stderr = ''
-        rpipes = [p.stdout, p.stderr]
-        if in_data:
-            try:
-                stdin.write(in_data)
-                stdin.close()
-            except:
-                raise errors.AnsibleError('SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh')
-        while True:
-            rfd, wfd, efd = select.select(rpipes, [], rpipes, 1)
-
-            # fail early if the sudo/su password is wrong
-            if self.runner.sudo and sudoable and self.runner.sudo_pass:
-                incorrect_password = gettext.dgettext(
-                    "sudo", "Sorry, try again.")
-                if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)):
-                    raise errors.AnsibleError('Incorrect sudo password')
-
-            if self.runner.su and su and self.runner.sudo_pass:
-                incorrect_password = gettext.dgettext(
-                    "su", "Sorry")
-                if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)):
-                    raise errors.AnsibleError('Incorrect su password')
-
-            if p.stdout in rfd:
-                dat = os.read(p.stdout.fileno(), 9000)
-                stdout += dat
-                if dat == '':
-                    rpipes.remove(p.stdout)
-            if p.stderr in rfd:
-                dat = os.read(p.stderr.fileno(), 9000)
-                stderr += dat
-                if dat == '':
-                    rpipes.remove(p.stderr)
-            # only break out if we've emptied the pipes, or there is nothing to
-            # read from and the process has finished.
-            if (not rpipes or not rfd) and p.poll() is not None:
-                break
-            # Calling wait while there are still pipes to read can cause a lock
-            elif not rpipes and p.poll() == None:
-                p.wait()
-                # the process has finished and the pipes are empty,
-                # if we loop and do the select it waits all the timeout
-                break
-        stdin.close() # close stdin after we read from stdout (see also issue #848)
+        (returncode, stdout, stderr) = self._communicate(p, stdin, in_data, su=su, sudoable=sudoable, prompt=prompt)
 
         if C.HOST_KEY_CHECKING and not_in_host_file:
             # lock around the initial SSH connectivity so the user prompt about whether to add
             # the host to known hosts is not intermingled with multiprocess output.
             fcntl.lockf(self.runner.output_lockfile, fcntl.LOCK_UN)
             fcntl.lockf(self.runner.process_lockfile, fcntl.LOCK_UN)
-        controlpersisterror = stderr.find('Bad configuration option: ControlPersist') != -1 or stderr.find('unknown configuration option: ControlPersist') != -1
+        controlpersisterror = 'Bad configuration option: ControlPersist' in stderr or \
+                              'unknown configuration option: ControlPersist' in stderr
 
         if C.HOST_KEY_CHECKING:
             if ssh_cmd[0] == "sshpass" and p.returncode == 6:

@@ -332,7 +350,7 @@ class Connection(object):
 
         if p.returncode != 0 and controlpersisterror:
             raise errors.AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" (or ansible_ssh_args in the config file) before running again')
-        if p.returncode == 255 and in_data:
+        if p.returncode == 255 and (in_data or self.runner.module_name == 'raw'):
             raise errors.AnsibleError('SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh')
 
         return (p.returncode, '', stdout, stderr)

@@ -356,12 +374,13 @@ class Connection(object):
         cmd += ["sftp"] + self.common_args + [host]
         indata = "put %s %s\n" % (pipes.quote(in_path), pipes.quote(out_path))
 
-        p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
-                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-        self._send_password()
-        stdout, stderr = p.communicate(indata)
+        (p, stdin) = self._run(cmd, indata)
 
-        if p.returncode != 0:
+        self._send_password()
+
+        (returncode, stdout, stderr) = self._communicate(p, stdin, indata)
+
+        if returncode != 0:
             raise errors.AnsibleError("failed to transfer file to %s:\n%s\n%s" % (out_path, stdout, stderr))
 
     def fetch_file(self, in_path, out_path):
@@ -23,8 +23,11 @@ import types
 import pipes
 import glob
 import re
+import operator as py_operator
 from ansible import errors
 from ansible.utils import md5s
+from distutils.version import LooseVersion, StrictVersion
+from random import SystemRandom
 
 def to_nice_yaml(*a, **kw):
     '''Make verbose, human readable yaml'''
@@ -42,8 +45,6 @@ def failed(*a, **kw):
     ''' Test if task result yields failed '''
     item = a[0]
     if type(item) != dict:
-        print "DEBUG: GOT A"
-        print item
         raise errors.AnsibleFilterError("|failed expects a dictionary")
     rc = item.get('rc',0)
     failed = item.get('failed',False)
@@ -129,6 +130,15 @@ def search(value, pattern='', ignorecase=False):
     ''' Perform a `re.search` returning a boolean '''
     return regex(value, pattern, ignorecase, 'search')
 
+def regex_replace(value='', pattern='', replacement='', ignorecase=False):
+    ''' Perform a `re.sub` returning a string '''
+    if ignorecase:
+        flags = re.I
+    else:
+        flags = 0
+    _re = re.compile(pattern, flags=flags)
+    return _re.sub(replacement, value)
+
 def unique(a):
     return set(a)
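`regex_replace` is a thin wrapper over `re.sub`. The same logic called directly, outside Jinja2 (standalone sketch with an invented input):

    import re

    def regex_replace(value='', pattern='', replacement='', ignorecase=False):
        flags = re.I if ignorecase else 0
        return re.compile(pattern, flags=flags).sub(replacement, value)

    print regex_replace('ansible-1.6', r'-\d.*$', '')   # ansible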
@@ -144,6 +154,37 @@ def symmetric_difference(a, b):
 def union(a, b):
     return set(a).union(b)
 
+def version_compare(value, version, operator='eq', strict=False):
+    ''' Perform a version comparison on a value '''
+    op_map = {
+        '==': 'eq', '=': 'eq', 'eq': 'eq',
+        '<': 'lt', 'lt': 'lt',
+        '<=': 'le', 'le': 'le',
+        '>': 'gt', 'gt': 'gt',
+        '>=': 'ge', 'ge': 'ge',
+        '!=': 'ne', '<>': 'ne', 'ne': 'ne'
+    }
+
+    if strict:
+        Version = StrictVersion
+    else:
+        Version = LooseVersion
+
+    if operator in op_map:
+        operator = op_map[operator]
+    else:
+        raise errors.AnsibleFilterError('Invalid operator type')
+
+    try:
+        method = getattr(py_operator, operator)
+        return method(Version(str(value)), Version(str(version)))
+    except Exception, e:
+        raise errors.AnsibleFilterError('Version comparison: %s' % e)
+
+def rand(end, start=0, step=1):
+    r = SystemRandom()
+    return r.randrange(start, end, step)
+
 class FilterModule(object):
     ''' Ansible core jinja2 filters '''
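`version_compare` leans on the `distutils` version classes plus the stdlib `operator` module; the core of it in a few standalone lines (versions are examples):

    import operator as py_operator
    from distutils.version import LooseVersion

    print py_operator.ge(LooseVersion('1.6'), LooseVersion('1.5.5'))   # True
    print py_operator.lt(LooseVersion('1.10'), LooseVersion('1.9'))    # False: components compare numerically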
@@ -198,6 +239,7 @@ class FilterModule(object):
         'match': match,
         'search': search,
         'regex': regex,
+        'regex_replace': regex_replace,
 
         # list
         'unique' : unique,

@@ -205,5 +247,11 @@ class FilterModule(object):
         'difference': difference,
         'symmetric_difference': symmetric_difference,
         'union': union,
 
+        # version comparison
+        'version_compare': version_compare,
+
+        # random numbers
+        'random': rand,
+
     }
@@ -16,6 +16,7 @@
 # along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
 
 from ansible import utils
+import os
 import urllib2
 try:
     import json

@@ -24,6 +25,8 @@ except ImportError:
 
 # this can be made configurable, but should not use ansible.cfg
 ANSIBLE_ETCD_URL = 'http://127.0.0.1:4001'
+if os.getenv('ANSIBLE_ETCD_URL') is not None:
+    ANSIBLE_ETCD_URL = os.environ['ANSIBLE_ETCD_URL']
 
 class etcd():
     def __init__(self, url=ANSIBLE_ETCD_URL):

@@ -62,7 +65,7 @@ class LookupModule(object):
 
     def run(self, terms, inject=None, **kwargs):
 
-            terms = utils.listify_lookup_plugin_terms(terms, self.basedir, inject)
+        terms = utils.listify_lookup_plugin_terms(terms, self.basedir, inject)
 
         if isinstance(terms, basestring):
             terms = [ terms ]
@@ -32,6 +32,17 @@ class LookupModule(object):
 
         ret = []
         for term in terms:
+            '''
+            http://docs.python.org/2/library/subprocess.html#popen-constructor
+
+            The shell argument (which defaults to False) specifies whether to use the
+            shell as the program to execute. If shell is True, it is recommended to pass
+            args as a string rather than as a sequence
+
+            https://github.com/ansible/ansible/issues/6550
+            '''
+            term = str(term)
+
             p = subprocess.Popen(term, cwd=self.basedir, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
             (stdout, stderr) = p.communicate()
             if p.returncode == 0:
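Because the lookup uses `shell=True`, subprocess expects a single string, and templating can hand the plugin a non-string term, hence the `str(term)` guard. The string form in isolation:

    import subprocess
    # with shell=True the whole string is handed to /bin/sh
    p = subprocess.Popen('echo $HOME', shell=True, stdout=subprocess.PIPE)
    print p.communicate()[0].strip()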
@@ -30,18 +30,21 @@ class AsyncPoller(object):
         self.hosts_to_poll = []
         self.completed = False
 
-        # Get job id and which hosts to poll again in the future
-        jid = None
+        # flag to determine if at least one host was contacted
+        self.active = False
         # True to work with & below
         skipped = True
         for (host, res) in results['contacted'].iteritems():
             if res.get('started', False):
                 self.hosts_to_poll.append(host)
                 jid = res.get('ansible_job_id', None)
+                self.runner.vars_cache[host]['ansible_job_id'] = jid
+                self.active = True
             else:
                 skipped = skipped & res.get('skipped', False)
                 self.results['contacted'][host] = res
         for (host, res) in results['dark'].iteritems():
+            self.runner.vars_cache[host]['ansible_job_id'] = ''
             self.results['dark'][host] = res
 
         if not skipped:

@@ -49,14 +52,13 @@ class AsyncPoller(object):
                 raise errors.AnsibleError("unexpected error: unable to determine jid")
             if len(self.hosts_to_poll)==0:
                 raise errors.AnsibleError("unexpected error: no hosts to poll")
-        self.jid = jid
 
     def poll(self):
         """ Poll the job status.
 
            Returns the changes in this iteration."""
         self.runner.module_name = 'async_status'
-        self.runner.module_args = "jid=%s" % self.jid
+        self.runner.module_args = "jid={{ansible_job_id}}"
         self.runner.pattern = "*"
         self.runner.background = 0
         self.runner.complex_args = None

@@ -75,13 +77,14 @@ class AsyncPoller(object):
                     self.results['contacted'][host] = res
                     poll_results['contacted'][host] = res
                     if res.get('failed', False) or res.get('rc', 0) != 0:
-                        self.runner.callbacks.on_async_failed(host, res, self.jid)
+                        self.runner.callbacks.on_async_failed(host, res, self.runner.vars_cache[host]['ansible_job_id'])
                     else:
-                        self.runner.callbacks.on_async_ok(host, res, self.jid)
+                        self.runner.callbacks.on_async_ok(host, res, self.runner.vars_cache[host]['ansible_job_id'])
             for (host, res) in results['dark'].iteritems():
                 self.results['dark'][host] = res
                 poll_results['dark'][host] = res
-                self.runner.callbacks.on_async_failed(host, res, self.jid)
+                if host in self.hosts_to_poll:
+                    self.runner.callbacks.on_async_failed(host, res, self.runner.vars_cache[host].get('ansible_job_id','XX'))
 
             self.hosts_to_poll = hosts
             if len(hosts)==0:

@@ -92,7 +95,7 @@ class AsyncPoller(object):
     def wait(self, seconds, poll_interval):
         """ Wait a certain time for job completion, check status every poll_interval. """
-        # jid is None when all hosts were skipped
-        if self.jid is None:
+        if not self.active:
             return self.results
 
         clock = seconds - poll_interval

@@ -103,7 +106,7 @@ class AsyncPoller(object):
 
             for (host, res) in poll_results['polled'].iteritems():
                 if res.get('started'):
-                    self.runner.callbacks.on_async_poll(host, res, self.jid, clock)
+                    self.runner.callbacks.on_async_poll(host, res, self.runner.vars_cache[host]['ansible_job_id'], clock)
 
             clock = clock - poll_interval
|
|||
from ansible.utils import template
|
||||
from ansible.callbacks import display
|
||||
import ansible.constants as C
|
||||
import ast
|
||||
import time
|
||||
import StringIO
|
||||
import stat
|
||||
|
@ -42,6 +43,7 @@ import traceback
|
|||
import getpass
|
||||
import sys
|
||||
import textwrap
|
||||
import json
|
||||
|
||||
#import vault
|
||||
from vault import VaultLib
|
||||
|
@ -98,7 +100,7 @@ def key_for_hostname(hostname):
|
|||
raise errors.AnsibleError('ACCELERATE_KEYS_DIR is not a directory.')
|
||||
|
||||
if stat.S_IMODE(os.stat(key_path).st_mode) != int(C.ACCELERATE_KEYS_DIR_PERMS, 8):
|
||||
raise errors.AnsibleError('Incorrect permissions on ACCELERATE_KEYS_DIR (%s)' % (C.ACCELERATE_KEYS_DIR,))
|
||||
raise errors.AnsibleError('Incorrect permissions on the private key directory. Use `chmod 0%o %s` to correct this issue, and make sure any of the keys files contained within that directory are set to 0%o' % (int(C.ACCELERATE_KEYS_DIR_PERMS, 8), C.ACCELERATE_KEYS_DIR, int(C.ACCELERATE_KEYS_FILE_PERMS, 8)))
|
||||
|
||||
key_path = os.path.join(key_path, hostname)
|
||||
|
||||
|
@@ -112,7 +114,7 @@ def key_for_hostname(hostname):
         return key
     else:
         if stat.S_IMODE(os.stat(key_path).st_mode) != int(C.ACCELERATE_KEYS_FILE_PERMS, 8):
-            raise errors.AnsibleError('Incorrect permissions on ACCELERATE_KEYS_FILE (%s)' % (key_path,))
+            raise errors.AnsibleError('Incorrect permissions on the key file for this host. Use `chmod 0%o %s` to correct this issue.' % (int(C.ACCELERATE_KEYS_FILE_PERMS, 8), key_path))
         fh = open(key_path)
         key = AesKey.Read(fh.read())
         fh.close()
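Both new messages embed the expected mode with `%o`, so the user sees a ready-to-run chmod. How that formatting behaves (the perms value and path are illustrative; the constants are stored as octal strings in the config):

    perms = int('600', 8)
    print 'Use `chmod 0%o %s` to correct this issue.' % (perms, '/path/to/keyfile')
    # Use `chmod 0600 /path/to/keyfile` to correct this issue.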
@@ -192,7 +194,7 @@ def check_conditional(conditional, basedir, inject, fail_on_undefined=False):
 
     conditional = conditional.replace("jinja2_compare ","")
     # allow variable names
-    if conditional in inject and str(inject[conditional]).find('-') == -1:
+    if conditional in inject and '-' not in str(inject[conditional]):
         conditional = inject[conditional]
     conditional = template.template(basedir, conditional, inject, fail_on_undefined=fail_on_undefined)
     original = str(conditional).replace("jinja2_compare ","")

@@ -205,9 +207,9 @@ def check_conditional(conditional, basedir, inject, fail_on_undefined=False):
         # variable was undefined. If we happened to be
         # looking for an undefined variable, return True,
         # otherwise fail
-        if conditional.find("is undefined") != -1:
+        if "is undefined" in conditional:
             return True
-        elif conditional.find("is defined") != -1:
+        elif "is defined" in conditional:
             return False
         else:
             raise errors.AnsibleError("error while evaluating conditional: %s" % original)

@@ -313,7 +315,7 @@ def parse_json(raw_data):
             raise
 
     for t in tokens:
-        if t.find("=") == -1:
+        if "=" not in t:
             raise errors.AnsibleError("failed to parse: %s" % orig_data)
         (key,value) = t.split("=", 1)
         if key == 'changed' or 'failed':
@@ -330,9 +332,9 @@ def parse_json(raw_data):
 
 def smush_braces(data):
     ''' smush Jinja2 braces so unresolved templates like {{ foo }} don't get parsed weird by key=value code '''
-    while data.find('{{ ') != -1:
+    while '{{ ' in data:
         data = data.replace('{{ ', '{{')
-    while data.find(' }}') != -1:
+    while ' }}' in data:
         data = data.replace(' }}', '}}')
     return data
@@ -350,14 +352,30 @@ def smush_ds(data):
     else:
         return data
 
-def parse_yaml(data):
-    ''' convert a yaml string to a data structure '''
-    return smush_ds(yaml.safe_load(data))
+def parse_yaml(data, path_hint=None):
+    ''' convert a yaml string to a data structure.  Also supports JSON, ssssssh!!! '''
+
+    stripped_data = data.lstrip()
+    loaded = None
+    if stripped_data.startswith("{") or stripped_data.startswith("["):
+        # since the line starts with { or [ we can infer this is a JSON document.
+        try:
+            loaded = json.loads(data)
+        except ValueError, ve:
+            if path_hint:
+                raise errors.AnsibleError(path_hint + ": " + str(ve))
+            else:
+                raise errors.AnsibleError(str(ve))
+    else:
+        # else this is pretty sure to be a YAML document
+        loaded = yaml.safe_load(data)
+
+    return smush_ds(loaded)
 
 def process_common_errors(msg, probline, column):
     replaced = probline.replace(" ","")
 
-    if replaced.find(":{{") != -1 and replaced.find("}}") != -1:
+    if ":{{" in replaced and "}}" in replaced:
         msg = msg + """
 This one looks easy to fix.  YAML thought it was looking for the start of a
 hash/dictionary and was confused to see a second "{".  Most likely this was
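The `{` / `[` sniff in isolation: a leading brace or bracket routes the document through the JSON parser, everything else stays YAML. A standalone sketch:

    import json, yaml
    for doc in ('{"a": 1}', 'a: 1'):
        if doc.lstrip().startswith(('{', '[')):
            print 'json:', json.loads(doc)
        else:
            print 'yaml:', yaml.safe_load(doc)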
@@ -407,7 +425,7 @@ Or:
             match = True
         elif middle.startswith('"') and not middle.endswith('"'):
             match = True
-        if len(middle) > 0 and middle[0] in [ '"', "'" ] and middle[-1] in [ '"', "'" ] and probline.count("'") > 2 or probline.count("'") > 2:
+        if len(middle) > 0 and middle[0] in [ '"', "'" ] and middle[-1] in [ '"', "'" ] and probline.count("'") > 2 or probline.count('"') > 2:
             unbalanced = True
         if match:
             msg = msg + """
@@ -512,7 +530,7 @@ def parse_yaml_from_file(path, vault_password=None):
             data = vault.decrypt(data)
 
     try:
-        return parse_yaml(data)
+        return parse_yaml(data, path_hint=path)
     except yaml.YAMLError, exc:
         process_yaml_error(exc, data, path)
@@ -522,10 +540,16 @@ def parse_kv(args):
     if args is not None:
         # attempting to split a unicode here does bad things
         args = args.encode('utf-8')
-        vargs = [x.decode('utf-8') for x in shlex.split(args, posix=True)]
-        #vargs = shlex.split(str(args), posix=True)
+        try:
+            vargs = shlex.split(args, posix=True)
+        except ValueError, ve:
+            if 'no closing quotation' in str(ve).lower():
+                raise errors.AnsibleError("error parsing argument string, try quoting the entire line.")
+            else:
+                raise
+        vargs = [x.decode('utf-8') for x in vargs]
         for x in vargs:
-            if x.find("=") != -1:
+            if "=" in x:
                 k, v = x.split("=",1)
                 options[k]=v
     return options
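`shlex.split` is what makes quoted values work in key=value strings, and an unbalanced quote is exactly what triggers the new "no closing quotation" error path. A standalone sketch:

    import shlex
    print shlex.split('src=/etc/motd dest="/tmp/my motd"', posix=True)
    # ['src=/etc/motd', 'dest=/tmp/my motd']
    try:
        shlex.split('msg="unbalanced', posix=True)
    except ValueError, ve:
        print ve   # No closing quotation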
@@ -566,12 +590,15 @@ def md5(filename):
         return None
     digest = _md5()
     blocksize = 64 * 1024
-    infile = open(filename, 'rb')
-    block = infile.read(blocksize)
-    while block:
-        digest.update(block)
-        block = infile.read(blocksize)
-    infile.close()
+    try:
+        infile = open(filename, 'rb')
+        block = infile.read(blocksize)
+        while block:
+            digest.update(block)
+            block = infile.read(blocksize)
+        infile.close()
+    except IOError, e:
+        raise errors.AnsibleError("error while accessing the file %s, error was: %s" % (filename, e))
     return digest.hexdigest()
 
 def default(value, function):
@@ -787,6 +814,12 @@ def ask_vault_passwords(ask_vault_pass=False, ask_new_vault_pass=False, confirm_
     if new_vault_pass != new_vault_pass2:
         raise errors.AnsibleError("Passwords do not match")
 
+    # enforce no newline chars at the end of passwords
+    if vault_pass:
+        vault_pass = vault_pass.strip()
+    if new_vault_pass:
+        new_vault_pass = new_vault_pass.strip()
+
     return vault_pass, new_vault_pass
 
 def ask_passwords(ask_pass=False, ask_sudo_pass=False, ask_su_pass=False, ask_vault_pass=False):
@@ -945,51 +978,95 @@ def is_list_of_strings(items):
             return False
     return True
 
-def safe_eval(str, locals=None, include_exceptions=False):
+def safe_eval(expr, locals={}, include_exceptions=False):
     '''
     this is intended for allowing things like:
     with_items: a_list_variable
     where Jinja2 would return a string
     but we do not want to allow it to call functions (outside of Jinja2, where
     the env is constrained)
+
+    Based on:
+    http://stackoverflow.com/questions/12523516/using-ast-and-whitelists-to-make-pythons-eval-safe
     '''
-    # FIXME: is there a more native way to do this?
 
-    def is_set(var):
-        return not var.startswith("$") and not '{{' in var
+    # this is the whitelist of AST nodes we are going to
+    # allow in the evaluation. Any node type other than
+    # those listed here will raise an exception in our custom
+    # visitor class defined below.
+    SAFE_NODES = set(
+        (
+            ast.Expression,
+            ast.Compare,
+            ast.Str,
+            ast.List,
+            ast.Tuple,
+            ast.Dict,
+            ast.Call,
+            ast.Load,
+            ast.BinOp,
+            ast.UnaryOp,
+            ast.Num,
+            ast.Name,
+            ast.Add,
+            ast.Sub,
+            ast.Mult,
+            ast.Div,
+        )
+    )
 
-    def is_unset(var):
-        return var.startswith("$") or '{{' in var
+    # AST node types were expanded after 2.6
+    if not sys.version.startswith('2.6'):
+        SAFE_NODES.union(
+            set(
+                (ast.Set,)
+            )
+        )
 
-    # do not allow method calls to modules
-    if not isinstance(str, basestring):
+    # builtin functions that are not safe to call
+    INVALID_CALLS = (
+        'classmethod', 'compile', 'delattr', 'eval', 'execfile', 'file',
+        'filter', 'help', 'input', 'object', 'open', 'raw_input', 'reduce',
+        'reload', 'repr', 'setattr', 'staticmethod', 'super', 'type',
+    )
+
+    class CleansingNodeVisitor(ast.NodeVisitor):
+        def generic_visit(self, node):
+            if type(node) not in SAFE_NODES:
+                #raise Exception("invalid expression (%s) type=%s" % (expr, type(node)))
+                raise Exception("invalid expression (%s)" % expr)
+            super(CleansingNodeVisitor, self).generic_visit(node)
+        def visit_Call(self, call):
+            if call.func.id in INVALID_CALLS:
+                raise Exception("invalid function: %s" % call.func.id)
+
+    if not isinstance(expr, basestring):
         # already templated to a datastructure, perhaps?
         if include_exceptions:
-            return (str, None)
-        return str
-    if re.search(r'\w\.\w+\(', str):
-        if include_exceptions:
-            return (str, None)
-        return str
-    # do not allow imports
-    if re.search(r'import \w+', str):
-        if include_exceptions:
-            return (str, None)
-        return str
+            return (expr, None)
+        return expr
 
     try:
-        result = None
-        if not locals:
-            result = eval(str)
-        else:
-            result = eval(str, None, locals)
+        parsed_tree = ast.parse(expr, mode='eval')
+        cnv = CleansingNodeVisitor()
+        cnv.visit(parsed_tree)
+        compiled = compile(parsed_tree, expr, 'eval')
+        result = eval(compiled, {}, locals)
+
         if include_exceptions:
             return (result, None)
         else:
             return result
+    except SyntaxError, e:
+        # special handling for syntax errors, we just return
+        # the expression string back as-is
+        if include_exceptions:
+            return (expr, None)
+        return expr
     except Exception, e:
         if include_exceptions:
-            return (str, e)
-        return str
+            return (expr, e)
+        return expr
 
 
 def listify_lookup_plugin_terms(terms, basedir, inject):
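The rewritten `safe_eval` parses first, walks the tree against the whitelist, and only then evaluates. The mechanics in a few standalone lines (node names vary slightly across Python versions):

    import ast
    tree = ast.parse("[1, 2] + [3]", mode='eval')
    print sorted(set(type(n).__name__ for n in ast.walk(tree)))
    # e.g. ['Add', 'BinOp', 'Expression', 'List', 'Load', 'Num']
    print eval(compile(tree, '<expr>', 'eval'))   # [1, 2, 3]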
@@ -1001,12 +1078,12 @@ def listify_lookup_plugin_terms(terms, basedir, inject):
     # with_items: {{ alist }}
 
     stripped = terms.strip()
-    if not (stripped.startswith('{') or stripped.startswith('[')) and not stripped.startswith("/"):
+    if not (stripped.startswith('{') or stripped.startswith('[')) and not stripped.startswith("/") and not stripped.startswith('set(['):
         # if not already a list, get ready to evaluate with Jinja2
         # not sure why the "/" is in above code :)
         try:
             new_terms = template.template(basedir, "{{ %s }}" % terms, inject)
-            if isinstance(new_terms, basestring) and new_terms.find("{{") != -1:
+            if isinstance(new_terms, basestring) and "{{" in new_terms:
                 pass
             else:
                 terms = new_terms
@@ -1071,3 +1148,13 @@ def random_password(length=20, chars=C.DEFAULT_PASSWORD_CHARS):
         password.append(new_char)
 
     return ''.join(password)
+
+def before_comment(msg):
+    ''' what's the part of a string before a comment? '''
+    msg = msg.replace("\#","**NOT_A_COMMENT**")
+    msg = msg.split("#")[0]
+    msg = msg.replace("**NOT_A_COMMENT**","#")
+    return msg
@@ -23,6 +23,8 @@ import ast
 import yaml
 import traceback
 
+from ansible import utils
+
 # modules that are ok that they do not have documentation strings
 BLACKLIST_MODULES = [
     'async_wrapper', 'accelerate', 'async_status'

@@ -34,6 +36,10 @@ def get_docstring(filename, verbose=False):
     in the given file.
     Parse DOCUMENTATION from YAML and return the YAML doc or None
     together with EXAMPLES, as plain text.
+
+    DOCUMENTATION can be extended using documentation fragments
+    loaded by the PluginLoader from the module_docs_fragments
+    directory.
     """
 
     doc = None
@@ -46,6 +52,41 @@ def get_docstring(filename, verbose=False):
             if isinstance(child, ast.Assign):
                 if 'DOCUMENTATION' in (t.id for t in child.targets):
                     doc = yaml.safe_load(child.value.s)
+                    fragment_slug = doc.get('extends_documentation_fragment',
+                                            'doesnotexist').lower()
+
+                    # Allow the module to specify a var other than DOCUMENTATION
+                    # to pull the fragment from, using dot notation as a separator
+                    if '.' in fragment_slug:
+                        fragment_name, fragment_var = fragment_slug.split('.', 1)
+                        fragment_var = fragment_var.upper()
+                    else:
+                        fragment_name, fragment_var = fragment_slug, 'DOCUMENTATION'
+
+                    if fragment_slug != 'doesnotexist':
+                        fragment_class = utils.plugins.fragment_loader.get(fragment_name)
+                        assert fragment_class is not None
+
+                        fragment_yaml = getattr(fragment_class, fragment_var, '{}')
+                        fragment = yaml.safe_load(fragment_yaml)
+
+                        if fragment.has_key('notes'):
+                            notes = fragment.pop('notes')
+                            if notes:
+                                if not doc.has_key('notes'):
+                                    doc['notes'] = []
+                                doc['notes'].extend(notes)
+
+                        if 'options' not in fragment.keys():
+                            raise Exception("missing options in fragment, possibly misformatted?")
+
+                        for key, value in fragment.items():
+                            if not doc.has_key(key):
+                                doc[key] = value
+                            else:
+                                doc[key].update(value)
+
                 if 'EXAMPLES' in (t.id for t in child.targets):
                     plainexamples = child.value.s[1:]  # Skip first empty line
         except:
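A module opts into a shared fragment with `extends_documentation_fragment`; the loader then merges the fragment's options and notes into the module's own DOCUMENTATION. A hypothetical module-side snippet (module name and option invented for illustration):

    DOCUMENTATION = '''
    module: ec2_demo                      # hypothetical module name
    short_description: demo of shared doc fragments
    extends_documentation_fragment: aws   # pulls in the shared AWS options below
    options:
      name:
        description:
          - resource name
        required: true
    '''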
lib/ansible/utils/module_docs_fragments/__init__.py (new, empty file)

lib/ansible/utils/module_docs_fragments/aws.py (new file)
@ -0,0 +1,76 @@
|
|||
# (c) 2014, Will Thames <will@thames.id.au>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.


class ModuleDocFragment(object):

    # AWS only documentation fragment
    DOCUMENTATION = """
options:
  ec2_url:
    description:
      - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
    required: false
    default: null
    aliases: []
  aws_secret_key:
    description:
      - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used.
    required: false
    default: null
    aliases: [ 'ec2_secret_key', 'secret_key' ]
  aws_access_key:
    description:
      - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used.
    required: false
    default: null
    aliases: [ 'ec2_access_key', 'access_key' ]
  validate_certs:
    description:
      - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
    required: false
    default: "yes"
    choices: ["yes", "no"]
    aliases: []
    version_added: "1.5"
  profile:
    description:
      - Uses a boto profile. Only works with boto >= 2.24.0.
    required: false
    default: null
    aliases: []
    version_added: "1.6"
  security_token:
    description:
      - Security token to authenticate against AWS.
    required: false
    default: null
    aliases: []
    version_added: "1.6"
requirements:
  - boto
notes:
  - "The following environment variables can be used: C(AWS_ACCESS_KEY) or
    C(EC2_ACCESS_KEY) or C(AWS_ACCESS_KEY_ID),
    C(AWS_SECRET_KEY) or C(EC2_SECRET_KEY) or C(AWS_SECRET_ACCESS_KEY),
    C(AWS_REGION) or C(EC2_REGION), C(AWS_SECURITY_TOKEN)."
  - Ansible uses the boto configuration file (typically ~/.boto) if no
    credentials are provided. See http://boto.readthedocs.org/en/latest/boto_config_tut.html
  - C(AWS_REGION) or C(EC2_REGION) can typically be used to specify the
    AWS region, when required, but this can also be configured in the boto config file.
"""
58 lib/ansible/utils/module_docs_fragments/files.py Normal file

@@ -0,0 +1,58 @@
# (c) 2014, Matt Martz <matt@sivel.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.


class ModuleDocFragment(object):

    # Standard files documentation fragment
    DOCUMENTATION = """
options:
  path:
    description:
      - 'path to the file being managed.  Aliases: I(dest), I(name)'
    required: true
    default: []
    aliases: ['dest', 'name']
  state:
    description:
      - If C(directory), all immediate subdirectories will be created if they
        do not exist. If C(file), the file will NOT be created if it does not
        exist, see the M(copy) or M(template) module if you want that behavior.
        If C(link), the symbolic link will be created or changed. Use C(hard)
        for hardlinks. If C(absent), directories will be recursively deleted,
        and files or symlinks will be unlinked. If C(touch) (new in 1.4), an empty file will
        be created if the C(path) does not exist, while an existing file or
        directory will receive updated file access and modification times (similar
        to the way `touch` works from the command line).
    required: false
    default: file
    choices: [ file, link, directory, hard, touch, absent ]
  src:
    required: false
    default: null
    choices: []
    description:
      - path of the file to link to (applies only to C(state=link) or C(state=hard)). Will accept absolute,
        relative and nonexisting (with C(force)) paths. Relative paths are not expanded.
  recurse:
    required: false
    default: "no"
    choices: [ "yes", "no" ]
    version_added: "1.1"
    description:
      - recursively set the specified file attributes (applies only to state=directory)
"""
122 lib/ansible/utils/module_docs_fragments/rackspace.py Normal file

@@ -0,0 +1,122 @@
# (c) 2014, Matt Martz <matt@sivel.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.


class ModuleDocFragment(object):

    # Standard Rackspace only documentation fragment
    DOCUMENTATION = """
options:
  api_key:
    description:
      - Rackspace API key (overrides I(credentials))
    aliases:
      - password
  credentials:
    description:
      - File to find the Rackspace credentials in (ignored if I(api_key) and
        I(username) are provided)
    default: null
    aliases:
      - creds_file
  env:
    description:
      - Environment as configured in ~/.pyrax.cfg,
        see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration)
    version_added: 1.5
  region:
    description:
      - Region to create an instance in
    default: DFW
  username:
    description:
      - Rackspace username (overrides I(credentials))
  verify_ssl:
    description:
      - Whether or not to require SSL validation of API endpoints
    version_added: 1.5
requirements:
  - pyrax
notes:
  - The following environment variables can be used, C(RAX_USERNAME),
    C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION).
  - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) point to a credentials file
    appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating)
  - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file
  - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
"""

    # Documentation fragment including attributes to enable communication
    # with other OpenStack clouds. Not all rax modules support this.
    OPENSTACK = """
options:
  api_key:
    description:
      - Rackspace API key (overrides I(credentials))
    aliases:
      - password
  auth_endpoint:
    description:
      - The URI of the authentication service
    default: https://identity.api.rackspacecloud.com/v2.0/
    version_added: 1.5
  credentials:
    description:
      - File to find the Rackspace credentials in (ignored if I(api_key) and
        I(username) are provided)
    default: null
    aliases:
      - creds_file
  env:
    description:
      - Environment as configured in ~/.pyrax.cfg,
        see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration)
    version_added: 1.5
  identity_type:
    description:
      - Authentication mechanism to use, such as rackspace or keystone
    default: rackspace
    version_added: 1.5
  region:
    description:
      - Region to create an instance in
    default: DFW
  tenant_id:
    description:
      - The tenant ID used for authentication
    version_added: 1.5
  tenant_name:
    description:
      - The tenant name used for authentication
    version_added: 1.5
  username:
    description:
      - Rackspace username (overrides I(credentials))
  verify_ssl:
    description:
      - Whether or not to require SSL validation of API endpoints
    version_added: 1.5
requirements:
  - pyrax
notes:
  - The following environment variables can be used, C(RAX_USERNAME),
    C(RAX_API_KEY), C(RAX_CREDS_FILE), C(RAX_CREDENTIALS), C(RAX_REGION).
  - C(RAX_CREDENTIALS) and C(RAX_CREDS_FILE) point to a credentials file
    appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating)
  - C(RAX_USERNAME) and C(RAX_API_KEY) obviate the use of a credentials file
  - C(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
"""
@@ -30,7 +30,7 @@ _basedirs = []

def push_basedir(basedir):
    # avoid pushing the same absolute dir more than once
-    basedir = os.path.abspath(basedir)
+    basedir = os.path.realpath(basedir)
    if basedir not in _basedirs:
        _basedirs.insert(0, basedir)

@@ -99,7 +99,7 @@ class PluginLoader(object):
        ret = []
        ret += self._extra_dirs
        for basedir in _basedirs:
-            fullpath = os.path.abspath(os.path.join(basedir, self.subdir))
+            fullpath = os.path.realpath(os.path.join(basedir, self.subdir))
            if os.path.isdir(fullpath):
                files = glob.glob("%s/*" % fullpath)
                for file in files:

@@ -111,7 +111,7 @@ class PluginLoader(object):
        # look in any configured plugin paths, allow one level deep for subcategories
        configured_paths = self.config.split(os.pathsep)
        for path in configured_paths:
-            path = os.path.abspath(os.path.expanduser(path))
+            path = os.path.realpath(os.path.expanduser(path))
            contents = glob.glob("%s/*" % path)
            for c in contents:
                if os.path.isdir(c) and c not in ret:

@@ -131,7 +131,7 @@ class PluginLoader(object):
        ''' Adds an additional directory to the search path '''

        self._paths = None
-        directory = os.path.abspath(directory)
+        directory = os.path.realpath(directory)

        if directory is not None:
            if with_subdir:

@@ -240,4 +240,9 @@ filter_loader = PluginLoader(
    'filter_plugins'
)

+fragment_loader = PluginLoader(
+    'ModuleDocFragment',
+    'ansible.utils.module_docs_fragments',
+    os.path.join(os.path.dirname(__file__), 'module_docs_fragments'),
+    '',
+)
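On the consumer side, module_docs can now resolve a fragment through this loader and fold it into a module's parsed DOCUMENTATION. A sketch, under the assumption that PluginLoader exposes a get()-style lookup and that doc is the module's parsed docs dict:

# hypothetical consumer sketch; fragment_loader and doc come from context
import yaml

fragment_name = doc.get('extends_documentation_fragment', 'DEFAULT')
fragment_class = fragment_loader.get(fragment_name)    # assumed lookup API
if fragment_class is not None:
    fragment = yaml.safe_load(fragment_class.DOCUMENTATION)
    for key, value in fragment.items():
        if key not in doc:
            doc[key] = value
        else:
            doc[key].update(value)    # e.g. merge the shared 'options'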
@@ -1,9 +1,12 @@
def isprintable(instring):
-    #http://stackoverflow.com/a/3637294
-    import string
-    printset = set(string.printable)
-    isprintable = set(instring).issubset(printset)
-    return isprintable
+    if isinstance(instring, str):
+        #http://stackoverflow.com/a/3637294
+        import string
+        printset = set(string.printable)
+        isprintable = set(instring).issubset(printset)
+        return isprintable
+    else:
+        return True

def count_newlines_from_end(str):
    i = len(str)
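The isinstance guard matters on Python 2, where unicode input would otherwise be tested character-by-character against the byte-oriented string.printable set. A quick illustration:

# Python 2 semantics, matching the function above
print isprintable("hello\n")       # True: every char is in string.printable
print isprintable("\x00binary")    # False: NUL is not printable
print isprintable(u"h\xe9llo")     # True: unicode input is now passed through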
@@ -88,8 +88,14 @@ def lookup(name, *args, **kwargs):
    vars = kwargs.get('vars', None)

    if instance is not None:
-        ran = instance.run(*args, inject=vars, **kwargs)
-        return ",".join(ran)
+        # safely catch run failures per #5059
+        try:
+            ran = instance.run(*args, inject=vars, **kwargs)
+        except Exception, e:
+            ran = None
+        if ran:
+            ran = ",".join(ran)
+        return ran
    else:
        raise errors.AnsibleError("lookup plugin (%s) not found" % name)
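The net effect is that a failing lookup now degrades to None instead of aborting templating. The pattern in isolation (the failing callable is invented for the demo):

def safe_join(run):
    try:
        ran = run()
    except Exception:
        ran = None            # swallow the plugin failure, per #5059
    if ran:
        ran = ",".join(ran)
    return ran

def boom():
    raise IOError("file not found")

print safe_join(lambda: ["a", "b"])   # a,b
print safe_join(boom)                 # None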
@@ -193,7 +199,7 @@ class J2Template(jinja2.environment.Template):
    def new_context(self, vars=None, shared=False, locals=None):
        return jinja2.runtime.Context(self.environment, vars.add_locals(locals), self.name, self.blocks)

-def template_from_file(basedir, path, vars):
+def template_from_file(basedir, path, vars, vault_password=None):
    ''' run a file through the templating engine '''

    fail_on_undefined = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR

@@ -310,7 +316,13 @@ def template_from_string(basedir, data, vars, fail_on_undefined=False):
    if os.path.exists(filesdir):
        basedir = filesdir

-    data = data.decode('utf-8')
+    # 6227
+    if isinstance(data, unicode):
+        try:
+            data = data.decode('utf-8')
+        except UnicodeEncodeError, e:
+            pass

    try:
        t = environment.from_string(data)
    except Exception, e:

@@ -332,7 +344,10 @@ def template_from_string(basedir, data, vars, fail_on_undefined=False):
        res = jinja2.utils.concat(rf)
    except TypeError, te:
        if 'StrictUndefined' in str(te):
-            raise errors.AnsibleUndefinedVariable("unable to look up a name or access an attribute in template string")
+            raise errors.AnsibleUndefinedVariable(
+                "Unable to look up a name or access an attribute in template string. " + \
+                "Make sure your variable name does not contain invalid characters like '-'."
+            )
        else:
            raise errors.AnsibleError("an unexpected type error occured. Error was %s" % te)
    return res
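The extra hint about '-' exists because Jinja2 parses a dashed name as arithmetic, so a variable literally named my-var can never resolve. A small demonstration of the parse (the printed node shape varies slightly by Jinja2 version):

import jinja2

env = jinja2.Environment()
print env.parse("{{ my-var }}")
# -> Template(body=[Output(nodes=[Sub(left=Name('my', ...), right=Name('var', ...))])])
# i.e. "my-var" is two undefined names and a subtraction, never a single variable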
@@ -19,6 +19,7 @@
# installs ansible and sets it up to run on cron.

import os
+import shlex
import shutil
import tempfile
from io import BytesIO

@@ -30,6 +31,26 @@ from binascii import hexlify
from binascii import unhexlify
from ansible import constants as C

+try:
+    from Crypto.Hash import SHA256, HMAC
+    HAS_HASH = True
+except ImportError:
+    HAS_HASH = False
+
+# Counter import fails for 2.0.1, requires >= 2.6.1 from pip
+try:
+    from Crypto.Util import Counter
+    HAS_COUNTER = True
+except ImportError:
+    HAS_COUNTER = False
+
+# KDF import fails for 2.0.1, requires >= 2.6.1 from pip
+try:
+    from Crypto.Protocol.KDF import PBKDF2
+    HAS_PBKDF2 = True
+except ImportError:
+    HAS_PBKDF2 = False
+
# AES IMPORTS
try:
    from Crypto.Cipher import AES as AES

@@ -37,15 +58,17 @@ try:
except ImportError:
    HAS_AES = False

+CRYPTO_UPGRADE = "ansible-vault requires a newer version of pycrypto than the one installed on your platform. You may fix this with OS-specific commands such as: yum install python-devel; rpm -e --nodeps python-crypto; pip install pycrypto"
+
HEADER='$ANSIBLE_VAULT'
-CIPHER_WHITELIST=['AES']
+CIPHER_WHITELIST=['AES', 'AES256']

class VaultLib(object):

    def __init__(self, password):
        self.password = password
        self.cipher_name = None
-        self.version = '1.0'
+        self.version = '1.1'

    def is_encrypted(self, data):
        if data.startswith(HEADER):
@@ -59,7 +82,8 @@ class VaultLib(object):
            raise errors.AnsibleError("data is already encrypted")

        if not self.cipher_name:
-            raise errors.AnsibleError("the cipher must be set before encrypting data")
+            self.cipher_name = "AES256"
+            #raise errors.AnsibleError("the cipher must be set before encrypting data")

        if 'Vault' + self.cipher_name in globals() and self.cipher_name in CIPHER_WHITELIST:
            cipher = globals()['Vault' + self.cipher_name]

@@ -67,13 +91,17 @@ class VaultLib(object):
        else:
            raise errors.AnsibleError("%s cipher could not be found" % self.cipher_name)

+        """
+        # combine sha + data
+        this_sha = sha256(data).hexdigest()
+        tmp_data = this_sha + "\n" + data
+        """

-        # encrypt sha + data
-        tmp_data = this_cipher.encrypt(tmp_data, self.password)
+        enc_data = this_cipher.encrypt(data, self.password)

        # add header
-        tmp_data = self._add_headers_and_hexify_encrypted_data(tmp_data)
+        tmp_data = self._add_header(enc_data)
        return tmp_data

    def decrypt(self, data):

@@ -83,8 +111,8 @@ class VaultLib(object):
        if not self.is_encrypted(data):
            raise errors.AnsibleError("data is not encrypted")

-        # clean out header, hex and sha
-        data = self._split_headers_and_get_unhexified_data(data)
+        # clean out header
+        data = self._split_header(data)

        # create the cipher object
        if 'Vault' + self.cipher_name in globals() and self.cipher_name in CIPHER_WHITELIST:

@@ -95,34 +123,29 @@ class VaultLib(object):

        # try to unencrypt data
        data = this_cipher.decrypt(data, self.password)

-        # split out sha and verify decryption
-        split_data = data.split("\n")
-        this_sha = split_data[0]
-        this_data = '\n'.join(split_data[1:])
-        test_sha = sha256(this_data).hexdigest()
-        if this_sha != test_sha:
+        if data is None:
            raise errors.AnsibleError("Decryption failed")

-        return this_data
+        return data

-    def _add_headers_and_hexify_encrypted_data(self, data):
-        # combine header and hexlified encrypted data in 80 char columns
+    def _add_header(self, data):
+        # combine header and encrypted data in 80 char columns

-        tmpdata = hexlify(data)
-        tmpdata = [tmpdata[i:i+80] for i in range(0, len(tmpdata), 80)]
+        #tmpdata = hexlify(data)
+        tmpdata = [data[i:i+80] for i in range(0, len(data), 80)]

        if not self.cipher_name:
            raise errors.AnsibleError("the cipher must be set before adding a header")

        dirty_data = HEADER + ";" + str(self.version) + ";" + self.cipher_name + "\n"

        for l in tmpdata:
            dirty_data += l + '\n'

        return dirty_data

-    def _split_headers_and_get_unhexified_data(self, data):
+    def _split_header(self, data):
        # used by decrypt

        tmpdata = data.split('\n')
@@ -130,14 +153,22 @@ class VaultLib(object):

        self.version = str(tmpheader[1].strip())
        self.cipher_name = str(tmpheader[2].strip())
-        clean_data = ''.join(tmpdata[1:])
+        clean_data = '\n'.join(tmpdata[1:])

+        """
+        # strip out newline, join, unhex
+        clean_data = [ x.strip() for x in clean_data ]
+        clean_data = unhexlify(''.join(clean_data))
+        """

        return clean_data

+    def __enter__(self):
+        return self
+
+    def __exit__(self, *err):
+        pass
+
class VaultEditor(object):
    # uses helper methods for write_file(self, filename, data)
    # to write a file so that code isn't duplicated for simple
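Taken together, _add_header and _split_header define the on-disk shape of a version 1.1 vault file: one header line of the form HEADER;version;cipher, then the payload wrapped at 80 characters. A representative example (the payload digits below are made up):

$ANSIBLE_VAULT;1.1;AES256
62313365396662343061393464336163383764373764613633653634306231386433626436623361
6134333665353966363534333632666535333761666131620a6635376464366438396165316435
...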
@@ -153,12 +184,14 @@ class VaultEditor(object):
    def create_file(self):
        """ create a new encrypted file """

+        if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH:
+            raise errors.AnsibleError(CRYPTO_UPGRADE)
+
        if os.path.isfile(self.filename):
            raise errors.AnsibleError("%s exists, please use 'edit' instead" % self.filename)

        # drop the user into vim on file
-        EDITOR = os.environ.get('EDITOR','vim')
-        call([EDITOR, self.filename])
+        call(self._editor_shell_command(self.filename))
        tmpdata = self.read_data(self.filename)
        this_vault = VaultLib(self.password)
        this_vault.cipher_name = self.cipher_name

@@ -166,6 +199,10 @@ class VaultEditor(object):
        self.write_data(enc_data, self.filename)

    def decrypt_file(self):
+
+        if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH:
+            raise errors.AnsibleError(CRYPTO_UPGRADE)
+
        if not os.path.isfile(self.filename):
            raise errors.AnsibleError("%s does not exist" % self.filename)

@@ -173,12 +210,18 @@ class VaultEditor(object):
        this_vault = VaultLib(self.password)
        if this_vault.is_encrypted(tmpdata):
            dec_data = this_vault.decrypt(tmpdata)
-            self.write_data(dec_data, self.filename)
+            if dec_data is None:
+                raise errors.AnsibleError("Decryption failed")
+            else:
+                self.write_data(dec_data, self.filename)
        else:
            raise errors.AnsibleError("%s is not encrypted" % self.filename)

    def edit_file(self):
+
+        if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH:
+            raise errors.AnsibleError(CRYPTO_UPGRADE)
+
        # decrypt to tmpfile
        tmpdata = self.read_data(self.filename)
        this_vault = VaultLib(self.password)

@@ -187,13 +230,14 @@ class VaultEditor(object):
        self.write_data(dec_data, tmp_path)

        # drop the user into vim on the tmp file
-        EDITOR = os.environ.get('EDITOR','vim')
-        call([EDITOR, tmp_path])
+        call(self._editor_shell_command(tmp_path))
        new_data = self.read_data(tmp_path)

-        # create new vault and set cipher to old
+        # create new vault
        new_vault = VaultLib(self.password)
-        new_vault.cipher_name = this_vault.cipher_name
+
+        # we want the cipher to default to AES256
+        #new_vault.cipher_name = this_vault.cipher_name

        # encrypt new data and write out to tmp
        enc_data = new_vault.encrypt(new_data)

@@ -203,6 +247,10 @@ class VaultEditor(object):
        self.shuffle_files(tmp_path, self.filename)

    def encrypt_file(self):
+
+        if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH:
+            raise errors.AnsibleError(CRYPTO_UPGRADE)
+
        if not os.path.isfile(self.filename):
            raise errors.AnsibleError("%s does not exist" % self.filename)

@@ -216,14 +264,20 @@ class VaultEditor(object):
        raise errors.AnsibleError("%s is already encrypted" % self.filename)

    def rekey_file(self, new_password):
+
+        if not HAS_AES or not HAS_COUNTER or not HAS_PBKDF2 or not HAS_HASH:
+            raise errors.AnsibleError(CRYPTO_UPGRADE)
+
        # decrypt
        tmpdata = self.read_data(self.filename)
        this_vault = VaultLib(self.password)
        dec_data = this_vault.decrypt(tmpdata)

-        # create new vault, set cipher to old and password to new
+        # create new vault
        new_vault = VaultLib(new_password)
-        new_vault.cipher_name = this_vault.cipher_name
+
+        # we want to force cipher to the default
+        #new_vault.cipher_name = this_vault.cipher_name

        # re-encrypt data and re-write file
        enc_data = new_vault.encrypt(dec_data)
@@ -248,17 +302,27 @@ class VaultEditor(object):
            os.remove(dest)
        shutil.move(src, dest)

+    def _editor_shell_command(self, filename):
+        EDITOR = os.environ.get('EDITOR','vim')
+        editor = shlex.split(EDITOR)
+        editor.append(filename)
+
+        return editor
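Because the editor command is now tokenized with shlex.split rather than passed as a single argv element, multi-word EDITOR settings work. A quick illustration (the editor command is just an example):

import os, shlex

os.environ['EDITOR'] = 'subl -w'               # any multi-word editor command
EDITOR = os.environ.get('EDITOR', 'vim')
print shlex.split(EDITOR) + ['secrets.yml']    # ['subl', '-w', 'secrets.yml']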
########################################
#               CIPHERS                #
########################################

class VaultAES(object):

+    # this version has been obsoleted by the VaultAES256 class
+    # which uses encrypt-then-mac (fixing order) and also improving the KDF used
+    # code remains for upgrade purposes only
    # http://stackoverflow.com/a/16761459

    def __init__(self):
        if not HAS_AES:
-            raise errors.AnsibleError("pycrypto is not installed. Fix this with your package manager, for instance, yum-install python-crypto OR (apt equivalent)")
+            raise errors.AnsibleError(CRYPTO_UPGRADE)

    def aes_derive_key_and_iv(self, password, salt, key_length, iv_length):
@@ -278,7 +342,12 @@ class VaultAES(object):

        """ Read plaintext data from in_file and write encrypted to out_file """

-        in_file = BytesIO(data)
+        # combine sha + data
+        this_sha = sha256(data).hexdigest()
+        tmp_data = this_sha + "\n" + data
+
+        in_file = BytesIO(tmp_data)
        in_file.seek(0)
        out_file = BytesIO()

@@ -301,14 +370,21 @@ class VaultAES(object):
            out_file.write(cipher.encrypt(chunk))

        out_file.seek(0)
-        return out_file.read()
+        enc_data = out_file.read()
+        tmp_data = hexlify(enc_data)
+
+        return tmp_data


    def decrypt(self, data, password, key_length=32):

        """ Read encrypted data from in_file and write decrypted to out_file """

        # http://stackoverflow.com/a/14989032

+        data = ''.join(data.split('\n'))
+        data = unhexlify(data)
+
        in_file = BytesIO(data)
        in_file.seek(0)
        out_file = BytesIO()

@@ -330,6 +406,127 @@ class VaultAES(object):

        # reset the stream pointer to the beginning
        out_file.seek(0)
-        return out_file.read()
+        new_data = out_file.read()
+
+        # split out sha and verify decryption
+        split_data = new_data.split("\n")
+        this_sha = split_data[0]
+        this_data = '\n'.join(split_data[1:])
+        test_sha = sha256(this_data).hexdigest()
+
+        if this_sha != test_sha:
+            raise errors.AnsibleError("Decryption failed")
+
+        #return out_file.read()
+        return this_data
class VaultAES256(object):

    """
    Vault implementation using AES-CTR with an HMAC-SHA256 authentication code.
    Keys are derived using PBKDF2
    """

    # http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html

    def __init__(self):

        if not HAS_PBKDF2 or not HAS_COUNTER or not HAS_HASH:
            raise errors.AnsibleError(CRYPTO_UPGRADE)

    def gen_key_initctr(self, password, salt):
        # 16 for AES 128, 32 for AES256
        keylength = 32

        # match the size used for counter.new to avoid extra work
        ivlength = 16

        hash_function = SHA256

        # make two keys and one iv
        pbkdf2_prf = lambda p, s: HMAC.new(p, s, hash_function).digest()

        derivedkey = PBKDF2(password, salt, dkLen=(2 * keylength) + ivlength,
                            count=10000, prf=pbkdf2_prf)

        key1 = derivedkey[:keylength]
        key2 = derivedkey[keylength:(keylength * 2)]
        iv = derivedkey[(keylength * 2):(keylength * 2) + ivlength]

        return key1, key2, hexlify(iv)
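gen_key_initctr stretches one password into two independent 32-byte keys (one for AES, one for the HMAC) plus a 16-byte counter seed. A standalone sketch of the same byte split using only the stdlib (hashlib.pbkdf2_hmac exists from Python 2.7.8; values here are illustrative):

import hashlib, os
from binascii import hexlify

password, salt = 'secret', os.urandom(32)
blob = hashlib.pbkdf2_hmac('sha256', password, salt, 10000, dklen=32 + 32 + 16)
aes_key, hmac_key, ctr_seed = blob[:32], blob[32:64], blob[64:80]
print hexlify(ctr_seed)   # the 128-bit initial counter value, as in gen_key_initctr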
    def encrypt(self, data, password):

        salt = os.urandom(32)
        key1, key2, iv = self.gen_key_initctr(password, salt)

        # PKCS#7 PAD DATA http://tools.ietf.org/html/rfc5652#section-6.3
        bs = AES.block_size
        padding_length = (bs - len(data) % bs) or bs
        data += padding_length * chr(padding_length)

        # COUNTER.new PARAMETERS
        # 1) nbits (integer) - Length of the counter, in bits.
        # 2) initial_value (integer) - initial value of the counter. "iv" from gen_key_initctr

        ctr = Counter.new(128, initial_value=long(iv, 16))

        # AES.new PARAMETERS
        # 1) AES key, must be either 16, 24, or 32 bytes long -- "key" from gen_key_initctr
        # 2) MODE_CTR, is the recommended mode
        # 3) counter=<CounterObject>

        cipher = AES.new(key1, AES.MODE_CTR, counter=ctr)

        # ENCRYPT PADDED DATA
        cryptedData = cipher.encrypt(data)

        # COMBINE SALT, DIGEST AND DATA
        hmac = HMAC.new(key2, cryptedData, SHA256)
        message = "%s\n%s\n%s" % ( hexlify(salt), hmac.hexdigest(), hexlify(cryptedData) )
        message = hexlify(message)
        return message
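So an AES256 payload, before the vault header is attached, is hexlify(hexlify(salt) + "\n" + hmac_hex + "\n" + hexlify(ciphertext)); the outer hexlify is what lets _add_header above wrap it into clean 80-column lines. A toy illustration (all hex strings invented):

from binascii import hexlify

# layout of the payload produced by encrypt() above
inner = "%s\n%s\n%s" % ("ab" * 32, "cd" * 32, "ef" * 16)  # hex(salt) / hmac digest / hex(ciphertext)
print hexlify(inner)[:48] + "..."                         # what ends up under the header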
    def decrypt(self, data, password):

        # SPLIT SALT, DIGEST, AND DATA
        data = ''.join(data.split("\n"))
        data = unhexlify(data)
        salt, cryptedHmac, cryptedData = data.split("\n", 2)
        salt = unhexlify(salt)
        cryptedData = unhexlify(cryptedData)

        key1, key2, iv = self.gen_key_initctr(password, salt)

        # EXIT EARLY IF DIGEST DOESN'T MATCH
        hmacDecrypt = HMAC.new(key2, cryptedData, SHA256)
        if not self.is_equal(cryptedHmac, hmacDecrypt.hexdigest()):
            return None

        # SET THE COUNTER AND THE CIPHER
        ctr = Counter.new(128, initial_value=long(iv, 16))
        cipher = AES.new(key1, AES.MODE_CTR, counter=ctr)

        # DECRYPT PADDED DATA
        decryptedData = cipher.decrypt(cryptedData)

        # UNPAD DATA
        padding_length = ord(decryptedData[-1])
        decryptedData = decryptedData[:-padding_length]

        return decryptedData

    def is_equal(self, a, b):
        # http://codahale.com/a-lesson-in-timing-attacks/
        if len(a) != len(b):
            return False

        result = 0
        for x, y in zip(a, b):
            result |= ord(x) ^ ord(y)
        return result == 0
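is_equal compares the HMAC digests in constant time so the check leaks no timing signal. On Python 2.7.7+/3.3+ the stdlib provides the same guarantee; a drop-in sketch:

import hmac

def is_equal(a, b):
    # stdlib constant-time comparison, same contract as the loop above
    return hmac.compare_digest(a, b)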
@@ -196,7 +196,7 @@ def main():
        template_parameters=dict(required=False, type='dict', default={}),
        state=dict(default='present', choices=['present', 'absent']),
        template=dict(default=None, required=True),
-        disable_rollback=dict(default=False),
+        disable_rollback=dict(default=False, type='bool'),
        tags=dict(default=None)
    )
)

@@ -250,7 +250,7 @@ def main():
        operation = 'CREATE'
    except Exception, err:
        error_msg = boto_exception(err)
-        if 'AlreadyExistsException' in error_msg:
+        if 'AlreadyExistsException' in error_msg or 'already exists' in error_msg:
            update = True
        else:
            module.fail_json(msg=error_msg)
@@ -20,7 +20,7 @@ DOCUMENTATION = '''
module: digital_ocean
short_description: Create/delete a droplet/SSH_key in DigitalOcean
description:
-     - Create/delete a droplet in DigitalOcean and optionally waits for it to be 'running', or deploy an SSH key.
+     - Create/delete a droplet in DigitalOcean and optionally wait for it to be 'running', or deploy an SSH key.
version_added: "1.3"
options:
  command:

@@ -35,10 +35,10 @@ options:
    choices: ['present', 'active', 'absent', 'deleted']
  client_id:
    description:
-     - Digital Ocean manager id.
+     - DigitalOcean manager id.
  api_key:
    description:
-     - Digital Ocean api key.
+     - DigitalOcean api key.
  id:
    description:
     - Numeric, the droplet id you want to operate on.

@@ -47,34 +47,40 @@ options:
     - String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key.
  unique_name:
    description:
-     - Bool, require unique hostnames. By default, digital ocean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence.
+     - Bool, require unique hostnames. By default, DigitalOcean allows multiple hosts with the same name. Setting this to "yes" allows only one host per name. Useful for idempotence.
    version_added: "1.4"
    default: "no"
    choices: [ "yes", "no" ]
  size_id:
    description:
-     - Numeric, this is the id of the size you would like the droplet created at.
+     - Numeric, this is the id of the size you would like the droplet created with.
  image_id:
    description:
     - Numeric, this is the id of the image you would like the droplet created with.
  region_id:
    description:
-     - "Numeric, this is the id of the region you would like your server"
+     - "Numeric, this is the id of the region you would like your server to be created in."
  ssh_key_ids:
    description:
-     - Optional, comma separated list of ssh_key_ids that you would like to be added to the server
+     - Optional, comma separated list of ssh_key_ids that you would like to be added to the server.
  virtio:
    description:
-     - "Bool, turn on virtio driver in droplet for improved network and storage I/O"
+     - "Bool, turn on virtio driver in droplet for improved network and storage I/O."
    version_added: "1.4"
    default: "yes"
    choices: [ "yes", "no" ]
  private_networking:
    description:
-     - "Bool, add an additional, private network interface to droplet for inter-droplet communication"
+     - "Bool, add an additional, private network interface to droplet for inter-droplet communication."
    version_added: "1.4"
    default: "no"
    choices: [ "yes", "no" ]
+  backups_enabled:
+    description:
+     - Optional, Boolean, enables backups for your droplet.
+    version_added: "1.6"
+    default: "no"
+    choices: [ "yes", "no" ]
  wait:
    description:
     - Wait for the droplet to be in state 'running' before returning. If wait is "no" an ip_address may not be returned.

@@ -164,11 +170,11 @@ try:
    import dopy
    from dopy.manager import DoError, DoManager
except ImportError, e:
-    print "failed=True msg='dopy >= 0.2.2 required for this module'"
+    print "failed=True msg='dopy >= 0.2.3 required for this module'"
    sys.exit(1)

-if dopy.__version__ < '0.2.2':
-    print "failed=True msg='dopy >= 0.2.2 required for this module'"
+if dopy.__version__ < '0.2.3':
+    print "failed=True msg='dopy >= 0.2.3 required for this module'"
    sys.exit(1)

class TimeoutError(DoError):

@@ -229,8 +235,8 @@ class Droplet(JsonfyMixIn):
        cls.manager = DoManager(client_id, api_key)

    @classmethod
-    def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False):
-        json = cls.manager.new_droplet(name, size_id, image_id, region_id, ssh_key_ids, virtio, private_networking)
+    def add(cls, name, size_id, image_id, region_id, ssh_key_ids=None, virtio=True, private_networking=False, backups_enabled=False):
+        json = cls.manager.new_droplet(name, size_id, image_id, region_id, ssh_key_ids, virtio, private_networking, backups_enabled)
        droplet = cls(json)
        return droplet

@@ -333,7 +339,8 @@ def core(module):
            region_id=getkeyordie('region_id'),
            ssh_key_ids=module.params['ssh_key_ids'],
            virtio=module.params['virtio'],
-            private_networking=module.params['private_networking']
+            private_networking=module.params['private_networking'],
+            backups_enabled=module.params['backups_enabled'],
        )

        if droplet.is_powered_on():

@@ -348,7 +355,7 @@ def core(module):

    elif state in ('absent', 'deleted'):
        # First, try to find a droplet by id.
-        droplet = Droplet.find(id=getkeyordie('id'))
+        droplet = Droplet.find(module.params['id'])

        # If we couldn't find the droplet and the user is allowing unique
        # hostnames, then check to see if a droplet with the specified

@@ -392,8 +399,9 @@ def main():
        image_id = dict(type='int'),
        region_id = dict(type='int'),
        ssh_key_ids = dict(default=''),
-        virtio = dict(type='bool', choices=BOOLEANS, default='yes'),
-        private_networking = dict(type='bool', choices=BOOLEANS, default='no'),
+        virtio = dict(type='bool', default='yes'),
+        private_networking = dict(type='bool', default='no'),
+        backups_enabled = dict(type='bool', default='no'),
        id = dict(aliases=['droplet_id'], type='int'),
        unique_name = dict(type='bool', default='no'),
        wait = dict(type='bool', default=True),
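A hedged playbook sketch of the new backups_enabled flag (all ids and credentials below are placeholders):

- digital_ocean: >
      state=present
      command=droplet
      name=backup-me
      client_id=XXX
      api_key=XXX
      size_id=1
      region_id=2
      image_id=3
      backups_enabled=yes
      wait_timeout=500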
242 library/cloud/digital_ocean_domain Normal file

@@ -0,0 +1,242 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: digital_ocean_domain
short_description: Create/delete a DNS record in DigitalOcean
description:
     - Create/delete a DNS record in DigitalOcean.
version_added: "1.6"
options:
  state:
    description:
     - Indicate desired state of the target.
    default: present
    choices: ['present', 'active', 'absent', 'deleted']
  client_id:
    description:
     - DigitalOcean manager id.
  api_key:
    description:
     - DigitalOcean api key.
  id:
    description:
     - Numeric, the droplet id you want to operate on.
  name:
    description:
     - String, this is the name of the droplet - must be formatted by hostname rules, or the name of a SSH key, or the name of a domain.
  ip:
    description:
     - The IP address to point a domain at.

notes:
  - Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY.
'''


EXAMPLES = '''
# Create a domain record

- digital_ocean_domain: >
      state=present
      name=my.digitalocean.domain
      ip=127.0.0.1

# Create a droplet and a corresponding domain record

- digital_ocean_droplet: >
      state=present
      name=test_droplet
      size_id=1
      region_id=2
      image_id=3
  register: test_droplet

- digital_ocean_domain: >
      state=present
      name={{ test_droplet.name }}.my.domain
      ip={{ test_droplet.ip_address }}
'''

import sys
import os
import time

try:
    from dopy.manager import DoError, DoManager
except ImportError as e:
    print "failed=True msg='dopy required for this module'"
    sys.exit(1)

class TimeoutError(DoError):
    def __init__(self, msg, id):
        super(TimeoutError, self).__init__(msg)
        self.id = id

class JsonfyMixIn(object):
    def to_json(self):
        return self.__dict__

class DomainRecord(JsonfyMixIn):
    manager = None

    def __init__(self, json):
        self.__dict__.update(json)
    update_attr = __init__

    def update(self, data = None, record_type = None):
        json = self.manager.edit_domain_record(self.domain_id,
                                               self.id,
                                               record_type if record_type is not None else self.record_type,
                                               data if data is not None else self.data)
        self.__dict__.update(json)
        return self

    def destroy(self):
        json = self.manager.destroy_domain_record(self.domain_id, self.id)
        return json

class Domain(JsonfyMixIn):
    manager = None

    def __init__(self, domain_json):
        self.__dict__.update(domain_json)

    def destroy(self):
        self.manager.destroy_domain(self.id)

    def records(self):
        json = self.manager.all_domain_records(self.id)
        return map(DomainRecord, json)

    @classmethod
    def add(cls, name, ip):
        json = cls.manager.new_domain(name, ip)
        return cls(json)

    @classmethod
    def setup(cls, client_id, api_key):
        cls.manager = DoManager(client_id, api_key)
        DomainRecord.manager = cls.manager

    @classmethod
    def list_all(cls):
        domains = cls.manager.all_domains()
        return map(cls, domains)

    @classmethod
    def find(cls, name=None, id=None):
        if name is None and id is None:
            return False

        domains = Domain.list_all()

        if id is not None:
            for domain in domains:
                if domain.id == id:
                    return domain

        if name is not None:
            for domain in domains:
                if domain.name == name:
                    return domain

        return False

def core(module):
    def getkeyordie(k):
        v = module.params[k]
        if v is None:
            module.fail_json(msg='Unable to load %s' % k)
        return v

    try:
        # params['client_id'] will be None even if client_id is not passed in
        client_id = module.params['client_id'] or os.environ['DO_CLIENT_ID']
        api_key = module.params['api_key'] or os.environ['DO_API_KEY']
    except KeyError, e:
        module.fail_json(msg='Unable to load %s' % e.message)

    changed = True
    state = module.params['state']

    Domain.setup(client_id, api_key)
    if state in ('present'):
        domain = Domain.find(id=module.params["id"])

        if not domain:
            domain = Domain.find(name=getkeyordie("name"))

        if not domain:
            domain = Domain.add(getkeyordie("name"),
                                getkeyordie("ip"))
            module.exit_json(changed=True, domain=domain.to_json())
        else:
            records = domain.records()
            at_record = None
            for record in records:
                if record.name == "@":
                    at_record = record

            if not at_record.data == getkeyordie("ip"):
                record.update(data=getkeyordie("ip"), record_type='A')
                module.exit_json(changed=True, domain=Domain.find(id=record.domain_id).to_json())

        module.exit_json(changed=False, domain=domain.to_json())

    elif state in ('absent'):
        domain = None
        if "id" in module.params:
            domain = Domain.find(id=module.params["id"])

        if not domain and "name" in module.params:
            domain = Domain.find(name=module.params["name"])

        if not domain:
            module.exit_json(changed=False, msg="Domain not found.")

        event_json = domain.destroy()
        module.exit_json(changed=True, event=event_json)


def main():
    module = AnsibleModule(
        argument_spec = dict(
            state = dict(choices=['active', 'present', 'absent', 'deleted'], default='present'),
            client_id = dict(aliases=['CLIENT_ID'], no_log=True),
            api_key = dict(aliases=['API_KEY'], no_log=True),
            name = dict(type='str'),
            id = dict(aliases=['droplet_id'], type='int'),
            ip = dict(type='str'),
        ),
        required_one_of = (
            ['id', 'name'],
        ),
    )

    try:
        core(module)
    except TimeoutError as e:
        module.fail_json(msg=str(e), id=e.id)
    except (DoError, Exception) as e:
        module.fail_json(msg=str(e))

# import module snippets
from ansible.module_utils.basic import *

main()
178 library/cloud/digital_ocean_sshkey Normal file

@@ -0,0 +1,178 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-

# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: digital_ocean_sshkey
short_description: Create/delete an SSH key in DigitalOcean
description:
     - Create/delete an SSH key.
version_added: "1.6"
options:
  state:
    description:
     - Indicate desired state of the target.
    default: present
    choices: ['present', 'absent']
  client_id:
    description:
     - DigitalOcean manager id.
  api_key:
    description:
     - DigitalOcean api key.
  id:
    description:
     - Numeric, the SSH key id you want to operate on.
  name:
    description:
     - String, this is the name of an SSH key to create or destroy.
  ssh_pub_key:
    description:
     - The public SSH key you want to add to your account.

notes:
  - Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY.
'''


EXAMPLES = '''
# Ensure a SSH key is present
# If a key matches this name, will return the ssh key id and changed = False
# If no existing key matches this name, a new key is created, the ssh key id is returned and changed = True

- digital_ocean_sshkey: >
      state=present
      name=my_ssh_key
      ssh_pub_key='ssh-rsa AAAA...'
      client_id=XXX
      api_key=XXX

'''

import sys
import os
import time

try:
    from dopy.manager import DoError, DoManager
except ImportError as e:
    print "failed=True msg='dopy required for this module'"
    sys.exit(1)

class TimeoutError(DoError):
    def __init__(self, msg, id):
        super(TimeoutError, self).__init__(msg)
        self.id = id

class JsonfyMixIn(object):
    def to_json(self):
        return self.__dict__

class SSH(JsonfyMixIn):
    manager = None

    def __init__(self, ssh_key_json):
        self.__dict__.update(ssh_key_json)
    update_attr = __init__

    def destroy(self):
        self.manager.destroy_ssh_key(self.id)
        return True

    @classmethod
    def setup(cls, client_id, api_key):
        cls.manager = DoManager(client_id, api_key)

    @classmethod
    def find(cls, name):
        if not name:
            return False
        keys = cls.list_all()
        for key in keys:
            if key.name == name:
                return key
        return False

    @classmethod
    def list_all(cls):
        json = cls.manager.all_ssh_keys()
        return map(cls, json)

    @classmethod
    def add(cls, name, key_pub):
        json = cls.manager.new_ssh_key(name, key_pub)
        return cls(json)

def core(module):
    def getkeyordie(k):
        v = module.params[k]
        if v is None:
            module.fail_json(msg='Unable to load %s' % k)
        return v

    try:
        # params['client_id'] will be None even if client_id is not passed in
        client_id = module.params['client_id'] or os.environ['DO_CLIENT_ID']
        api_key = module.params['api_key'] or os.environ['DO_API_KEY']
    except KeyError, e:
        module.fail_json(msg='Unable to load %s' % e.message)

    changed = True
    state = module.params['state']

    SSH.setup(client_id, api_key)
    name = getkeyordie('name')
    if state in ('present'):
        key = SSH.find(name)
        if key:
            module.exit_json(changed=False, ssh_key=key.to_json())
        key = SSH.add(name, getkeyordie('ssh_pub_key'))
        module.exit_json(changed=True, ssh_key=key.to_json())

    elif state in ('absent'):
        key = SSH.find(name)
        if not key:
            module.exit_json(changed=False, msg='SSH key with the name of %s is not found.' % name)
        key.destroy()
        module.exit_json(changed=True)

def main():
    module = AnsibleModule(
        argument_spec = dict(
            state = dict(choices=['present', 'absent'], default='present'),
            client_id = dict(aliases=['CLIENT_ID'], no_log=True),
            api_key = dict(aliases=['API_KEY'], no_log=True),
            name = dict(type='str'),
            id = dict(aliases=['droplet_id'], type='int'),
            ssh_pub_key = dict(type='str'),
        ),
        required_one_of = (
            ['id', 'name'],
        ),
    )

    try:
        core(module)
    except TimeoutError as e:
        module.fail_json(msg=str(e), id=e.id)
    except (DoError, Exception) as e:
        module.fail_json(msg=str(e))

# import module snippets
from ansible.module_utils.basic import *

main()
@ -148,7 +148,7 @@ options:
|
|||
- Set the state of the container
|
||||
required: false
|
||||
default: present
|
||||
choices: [ "present", "stopped", "absent", "killed", "restarted" ]
|
||||
choices: [ "present", "running", "stopped", "absent", "killed", "restarted" ]
|
||||
aliases: []
|
||||
privileged:
|
||||
description:
|
||||
|
@ -169,6 +169,20 @@ options:
|
|||
default: null
|
||||
aliases: []
|
||||
version_added: "1.5"
|
||||
stdin_open:
|
||||
description:
|
||||
- Keep stdin open
|
||||
required: false
|
||||
default: false
|
||||
aliases: []
|
||||
version_added: "1.6"
|
||||
tty:
|
||||
description:
|
||||
- Allocate a pseudo-tty
|
||||
required: false
|
||||
default: false
|
||||
aliases: []
|
||||
version_added: "1.6"
|
||||
author: Cove Schneider, Joshua Conner, Pavel Antonov
|
||||
requirements: [ "docker-py >= 0.3.0" ]
|
||||
'''
|
||||
|
@ -287,6 +301,7 @@ import sys
|
|||
from urlparse import urlparse
|
||||
try:
|
||||
import docker.client
|
||||
import docker.utils
|
||||
from requests.exceptions import *
|
||||
except ImportError, e:
|
||||
HAS_DOCKER_PY = False
|
||||
|
@ -331,7 +346,7 @@ class DockerManager:
|
|||
if self.module.params.get('volumes'):
|
||||
self.binds = {}
|
||||
self.volumes = {}
|
||||
vols = self.parse_list_from_param('volumes')
|
||||
vols = self.module.params.get('volumes')
|
||||
for vol in vols:
|
||||
parts = vol.split(":")
|
||||
# host mount (e.g. /mnt:/tmp, bind mounts host's /tmp to /mnt in the container)
|
||||
|
@ -345,46 +360,32 @@ class DockerManager:
|
|||
self.lxc_conf = None
|
||||
if self.module.params.get('lxc_conf'):
|
||||
self.lxc_conf = []
|
||||
options = self.parse_list_from_param('lxc_conf')
|
||||
options = self.module.params.get('lxc_conf')
|
||||
for option in options:
|
||||
parts = option.split(':')
|
||||
self.lxc_conf.append({"Key": parts[0], "Value": parts[1]})
|
||||
|
||||
self.exposed_ports = None
|
||||
if self.module.params.get('expose'):
|
||||
expose = self.parse_list_from_param('expose')
|
||||
self.exposed_ports = self.get_exposed_ports(expose)
|
||||
self.exposed_ports = self.get_exposed_ports(self.module.params.get('expose'))
|
||||
|
||||
self.port_bindings = None
|
||||
if self.module.params.get('ports'):
|
||||
ports = self.parse_list_from_param('ports')
|
||||
self.port_bindings = self.get_port_bindings(ports)
|
||||
self.port_bindings = self.get_port_bindings(self.module.params.get('ports'))
|
||||
|
||||
self.links = None
|
||||
if self.module.params.get('links'):
|
||||
links = self.parse_list_from_param('links')
|
||||
self.links = dict(map(lambda x: x.split(':'), links))
|
||||
self.links = dict(map(lambda x: x.split(':'), self.module.params.get('links')))
|
||||
|
||||
self.env = None
|
||||
if self.module.params.get('env'):
|
||||
env = self.parse_list_from_param('env')
|
||||
self.env = dict(map(lambda x: x.split("="), env))
|
||||
self.env = dict(map(lambda x: x.split("="), self.module.params.get('env')))
|
||||
|
||||
# connect to docker server
|
||||
docker_url = urlparse(module.params.get('docker_url'))
|
||||
self.client = docker.Client(base_url=docker_url.geturl())
|
||||
|
||||
|
||||
def parse_list_from_param(self, param_name, delimiter=','):
|
||||
"""
|
||||
Get a list from a module parameter, whether it's specified as a delimiter-separated string or is already in list form.
|
||||
"""
|
||||
param_list = self.module.params.get(param_name)
|
||||
if not isinstance(param_list, list):
|
||||
param_list = param_list.split(delimiter)
|
||||
return param_list
|
||||
|
||||
|
||||
def get_exposed_ports(self, expose_list):
|
||||
"""
|
||||
Parse the ports and protocols (TCP/UDP) to expose in the docker-py `create_container` call from the docker CLI-style syntax.
|
||||
|
@ -409,7 +410,9 @@ class DockerManager:
|
|||
"""
|
||||
binds = {}
|
||||
for port in ports:
|
||||
parts = port.split(':')
|
||||
# ports could potentially be an array like [80, 443], so we make sure they're strings
|
||||
# before splitting
|
||||
parts = str(port).split(':')
|
||||
container_port = parts[-1]
|
||||
if '/' not in container_port:
|
||||
container_port = int(parts[-1])
|
||||
|
@ -522,15 +525,19 @@ class DockerManager:
|
|||
'command': self.module.params.get('command'),
|
||||
'ports': self.exposed_ports,
|
||||
'volumes': self.volumes,
|
||||
'volumes_from': self.module.params.get('volumes_from'),
|
||||
'mem_limit': _human_to_bytes(self.module.params.get('memory_limit')),
|
||||
'environment': self.env,
|
||||
'dns': self.module.params.get('dns'),
|
||||
'hostname': self.module.params.get('hostname'),
|
||||
'detach': self.module.params.get('detach'),
|
||||
'name': self.module.params.get('name'),
|
||||
'stdin_open': self.module.params.get('stdin_open'),
|
||||
'tty': self.module.params.get('tty'),
|
||||
}
|
||||
|
||||
if docker.utils.compare_version('1.10', self.client.version()['ApiVersion']) < 0:
|
||||
params['dns'] = self.module.params.get('dns')
|
||||
params['volumes_from'] = self.module.params.get('volumes_from')
|
||||
|
||||
def do_create(count, params):
|
||||
results = []
|
||||
for _ in range(count):
|
||||
|
@ -558,6 +565,11 @@ class DockerManager:
|
|||
'privileged': self.module.params.get('privileged'),
|
||||
'links': self.links,
|
||||
}
|
||||
|
||||
if docker.utils.compare_version('1.10', self.client.version()['ApiVersion']) >= 0:
|
||||
params['dns'] = self.module.params.get('dns')
|
||||
params['volumes_from'] = self.module.params.get('volumes_from')
|
||||
|
||||
for i in containers:
|
||||
self.client.start(i['Id'], **params)
|
||||
self.increment_counter('started')
|
||||
|
@ -616,12 +628,12 @@ def main():
|
|||
count = dict(default=1),
|
||||
image = dict(required=True),
|
||||
command = dict(required=False, default=None),
|
||||
expose = dict(required=False, default=None),
|
||||
ports = dict(required=False, default=None),
|
||||
expose = dict(required=False, default=None, type='list'),
|
||||
ports = dict(required=False, default=None, type='list'),
|
||||
publish_all_ports = dict(default=False, type='bool'),
|
||||
volumes = dict(default=None),
|
||||
volumes = dict(default=None, type='list'),
|
||||
volumes_from = dict(default=None),
|
||||
links = dict(default=None),
|
||||
links = dict(default=None, type='list'),
|
||||
memory_limit = dict(default=0),
|
||||
memory_swap = dict(default=0),
|
||||
docker_url = dict(default='unix://var/run/docker.sock'),
|
||||
|
@ -629,13 +641,15 @@ def main():
|
|||
password = dict(),
|
||||
email = dict(),
|
||||
hostname = dict(default=None),
|
||||
env = dict(),
|
||||
env = dict(type='list'),
|
||||
dns = dict(),
|
||||
detach = dict(default=True, type='bool'),
|
||||
state = dict(default='present', choices=['absent', 'present', 'stopped', 'killed', 'restarted']),
|
||||
state = dict(default='running', choices=['absent', 'present', 'running', 'stopped', 'killed', 'restarted']),
|
||||
debug = dict(default=False, type='bool'),
|
||||
privileged = dict(default=False, type='bool'),
|
||||
lxc_conf = dict(default=None),
|
||||
stdin_open = dict(default=False, type='bool'),
|
||||
tty = dict(default=False, type='bool'),
|
||||
lxc_conf = dict(default=None, type='list'),
|
||||
name = dict(default=None)
|
||||
)
|
||||
)
|
||||
|
@@ -662,25 +676,35 @@ def main():
      changed = False

      # start/stop containers
-     if state == "present":
+     if state in [ "running", "present" ]:

-         # make sure a container with `name` is running
-         if name and "/" + name not in map(lambda x: x.get('Name'), running_containers):
+         # make sure a container with `name` exists, if not create and start it
+         if name and "/" + name not in map(lambda x: x.get('Name'), deployed_containers):
              containers = manager.create_containers(1)
-             manager.start_containers(containers)
+             if state == "present": #otherwise it get (re)started later anyways..
+                 manager.start_containers(containers)
+             running_containers = manager.get_running_containers()
+             deployed_containers = manager.get_deployed_containers()

-         # start more containers if we don't have enough
-         elif delta > 0:
-             containers = manager.create_containers(delta)
-             manager.start_containers(containers)
-
-         # stop containers if we have too many
-         elif delta < 0:
-             containers_to_stop = running_containers[0:abs(delta)]
-             containers = manager.stop_containers(containers_to_stop)
-             manager.remove_containers(containers_to_stop)
-
-         facts = manager.get_running_containers()
+         if state == "running":
+             # make sure a container with `name` is running
+             if name and "/" + name not in map(lambda x: x.get('Name'), running_containers):
+                 manager.start_containers(deployed_containers)
+
+             # start more containers if we don't have enough
+             elif delta > 0:
+                 containers = manager.create_containers(delta)
+                 manager.start_containers(containers)
+
+             # stop containers if we have too many
+             elif delta < 0:
+                 containers_to_stop = running_containers[0:abs(delta)]
+                 containers = manager.stop_containers(containers_to_stop)
+                 manager.remove_containers(containers_to_stop)
+
+             facts = manager.get_running_containers()
+         else:
+             facts = manager.get_deployed_containers()

      # stop and remove containers
      elif state == "absent":
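Editor's note: the rewritten block reconciles a desired container count against what is actually running: delta is the shortfall (positive) or surplus (negative). A stripped-down sketch of that reconciliation, with the manager calls passed in as stubs (names assumed from the hunk, not the module's API):

def reconcile(desired_count, running, create, start, stop, remove):
    # delta > 0 means too few containers are up; delta < 0 means too many
    delta = desired_count - len(running)
    if delta > 0:
        fresh = create(delta)
        start(fresh)
    elif delta < 0:
        surplus = running[0:abs(delta)]
        stop(surplus)
        remove(surplus)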
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/python
 #
 # (c) 2014, Pavel Antonov <antonov@adwz.ru>
@@ -137,6 +137,9 @@ class DockerImageManager:
          self.changed = True

          for chunk in stream:
+             if not chunk:
+                 continue
+
              chunk_json = json.loads(chunk)

              if 'error' in chunk_json:
@@ -67,6 +67,13 @@ options:
     required: true
     default: null
     aliases: []
+  spot_price:
+    version_added: "1.5"
+    description:
+      - Maximum spot price to bid. If not set, a regular on-demand instance is requested. A spot request is made with this maximum bid. When it is filled, the instance is started.
+    required: false
+    default: null
+    aliases: []
   image:
     description:
       - I(emi) (or I(ami)) to use for the instance
@@ -97,24 +104,12 @@ options:
       - how long before wait gives up, in seconds
     default: 300
     aliases: []
-  ec2_url:
+  spot_wait_timeout:
+    version_added: "1.5"
     description:
-      - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used
-    required: false
-    default: null
+      - how long to wait for the spot instance request to be fulfilled
+    default: 600
     aliases: []
-  aws_secret_key:
-    description:
-      - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used.
-    required: false
-    default: null
-    aliases: [ 'ec2_secret_key', 'secret_key' ]
-  aws_access_key:
-    description:
-      - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used.
-    required: false
-    default: null
-    aliases: [ 'ec2_access_key', 'access_key' ]
   count:
     description:
       - number of instances to launch
@@ -157,7 +152,7 @@ options:
     default: null
     aliases: []
   assign_public_ip:
-    version_added: "1.4"
+    version_added: "1.5"
     description:
       - when provisioning within vpc, assign a public IP address. Boto library must be 2.13.0+
     required: false
@@ -184,6 +179,12 @@ options:
     required: false
     default: null
     aliases: []
+  source_dest_check:
+    version_added: "1.6"
+    description:
+      - Enable or Disable the Source/Destination checks (for NAT instances and Virtual Routers)
+    required: false
+    default: true
   state:
     version_added: "1.3"
     description:
|
|||
required: false
|
||||
default: null
|
||||
aliases: []
|
||||
ebs_optimized:
|
||||
version_added: "1.6"
|
||||
description:
|
||||
- whether instance is using optimized EBS volumes, see U(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html)
|
||||
required: false
|
||||
default: false
|
||||
exact_count:
|
||||
version_added: "1.5"
|
||||
description:
|
||||
|
@@ -212,17 +219,9 @@ options:
     required: false
     default: null
     aliases: []
-  validate_certs:
-    description:
-      - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
-    required: false
-    default: "yes"
-    choices: ["yes", "no"]
-    aliases: []
-    version_added: "1.5"

 requirements: [ "boto" ]
 author: Seth Vidal, Tim Gerla, Lester Wade
+extends_documentation_fragment: aws
 '''

 EXAMPLES = '''
@@ -253,7 +252,7 @@ EXAMPLES = '''
     db: postgres
     monitoring: yes

-# Single instance with additional IOPS volume from snapshot
+# Single instance with additional IOPS volume from snapshot and volume delete on termination
 local_action:
     module: ec2
     key_name: mykey

@@ -268,6 +267,7 @@ local_action:
       device_type: io1
       iops: 1000
       volume_size: 100
+      delete_on_termination: true
     monitoring: yes

 # Multiple groups example
@@ -311,6 +311,19 @@ local_action:
     vpc_subnet_id: subnet-29e63245
     assign_public_ip: yes

+# Spot instance example
+- local_action:
+    module: ec2
+    spot_price: 0.24
+    spot_wait_timeout: 600
+    keypair: mykey
+    group_id: sg-1dc53f72
+    instance_type: m1.small
+    image: ami-6e649707
+    wait: yes
+    vpc_subnet_id: subnet-29e63245
+    assign_public_ip: yes
+
 # Launch instances, runs some tasks
 # and then terminate them
@@ -557,7 +570,8 @@ def get_instance_info(inst):
                      'root_device_type': inst.root_device_type,
                      'root_device_name': inst.root_device_name,
                      'state': inst.state,
-                     'hypervisor': inst.hypervisor}
+                     'hypervisor': inst.hypervisor,
+                     'ebs_optimized': inst.ebs_optimized}
     try:
         instance_info['virtualization_type'] = getattr(inst,'virtualization_type')
     except AttributeError:
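Editor's note: the try/except shown truncated above guards a getattr against boto Instance objects that predate the attribute. Since getattr accepts a default, an equivalent one-liner (my simplification, not the committed code) would be getattr(inst, 'virtualization_type', None), as this self-contained check demonstrates:

class OldInstance(object):
    pass  # stands in for a boto Instance lacking the attribute

inst = OldInstance()
# same effect as the try/except: a missing attribute yields None
print(getattr(inst, 'virtualization_type', None))   # None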
@@ -620,6 +634,17 @@ def create_block_device(module, ec2, volume):
                              delete_on_termination=volume.get('delete_on_termination', False),
                              iops=volume.get('iops'))

+def boto_supports_param_in_spot_request(ec2, param):
+    """
+    Check if Boto library has a <param> in its request_spot_instances() method. For example, the placement_group parameter wasn't added until 2.3.0.
+
+    ec2: authenticated ec2 connection object
+
+    Returns:
+        True if boto library has the named param as an argument on the request_spot_instances method, else False
+    """
+    method = getattr(ec2, 'request_spot_instances')
+    return param in method.func_code.co_varnames
+
 def enforce_count(module, ec2):
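Editor's note: the new helper feature-detects boto by inspecting the compiled argument names of request_spot_instances; note it expects the parameter name as a string, not the parameter's value. The same Python 2 introspection trick on an ordinary function, for illustration:

def request(price, count=1, placement_group=None):
    return (price, count, placement_group)

# func_code.co_varnames lists a function's parameters (and locals) in
# Python 2, so membership tells us whether a keyword argument exists
print('placement_group' in request.func_code.co_varnames)   # True
print('spot_type' in request.func_code.co_varnames)         # False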
@@ -644,7 +669,6 @@ def enforce_count(module, ec2):

         for inst in instance_dict_array:
             instances.append(inst)
-
     elif len(instances) > exact_count:
         changed = True
         to_remove = len(instances) - exact_count
@@ -690,6 +714,7 @@ def create_instances(module, ec2, override_count=None):
     group_id = module.params.get('group_id')
     zone = module.params.get('zone')
     instance_type = module.params.get('instance_type')
+    spot_price = module.params.get('spot_price')
     image = module.params.get('image')
     if override_count:
         count = override_count
@@ -700,6 +725,7 @@ def create_instances(module, ec2, override_count=None):
     ramdisk = module.params.get('ramdisk')
     wait = module.params.get('wait')
     wait_timeout = int(module.params.get('wait_timeout'))
+    spot_wait_timeout = int(module.params.get('spot_wait_timeout'))
     placement_group = module.params.get('placement_group')
     user_data = module.params.get('user_data')
     instance_tags = module.params.get('instance_tags')
@@ -708,8 +734,10 @@ def create_instances(module, ec2, override_count=None):
     private_ip = module.params.get('private_ip')
     instance_profile_name = module.params.get('instance_profile_name')
     volumes = module.params.get('volumes')
+    ebs_optimized = module.params.get('ebs_optimized')
     exact_count = module.params.get('exact_count')
     count_tag = module.params.get('count_tag')
+    source_dest_check = module.boolean(module.params.get('source_dest_check'))

     # group_id and group_name are exclusive of each other
     if group_id and group_name:
@@ -760,18 +788,16 @@ def create_instances(module, ec2, override_count=None):
     try:
         params = {'image_id': image,
                   'key_name': key_name,
-                  'client_token': id,
-                  'min_count': count_remaining,
-                  'max_count': count_remaining,
                   'monitoring_enabled': monitoring,
                   'placement': zone,
-                  'placement_group': placement_group,
                   'instance_type': instance_type,
                   'kernel_id': kernel,
                   'ramdisk_id': ramdisk,
-                  'private_ip_address': private_ip,
                   'user_data': user_data}

+        if ebs_optimized:
+            params['ebs_optimized'] = ebs_optimized
+
         if boto_supports_profile_name_arg(ec2):
             params['instance_profile_name'] = instance_profile_name
         else:

@@ -788,13 +814,19 @@ def create_instances(module, ec2, override_count=None):
                 msg="assign_public_ip only available with vpc_subnet_id")

         else:
-            interface = boto.ec2.networkinterface.NetworkInterfaceSpecification(
-                subnet_id=vpc_subnet_id,
-                groups=group_id,
-                associate_public_ip_address=assign_public_ip)
+            if private_ip:
+                interface = boto.ec2.networkinterface.NetworkInterfaceSpecification(
+                    subnet_id=vpc_subnet_id,
+                    private_ip_address=private_ip,
+                    groups=group_id,
+                    associate_public_ip_address=assign_public_ip)
+            else:
+                interface = boto.ec2.networkinterface.NetworkInterfaceSpecification(
+                    subnet_id=vpc_subnet_id,
+                    groups=group_id,
+                    associate_public_ip_address=assign_public_ip)
             interfaces = boto.ec2.networkinterface.NetworkInterfaceCollection(interface)
-            params['network_interfaces'] = interfaces

+            params['network_interfaces'] = interfaces
         else:
-            params['subnet_id'] = vpc_subnet_id
             if vpc_subnet_id:
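Editor's note: the two NetworkInterfaceSpecification calls above differ only in whether private_ip_address is passed. A hedged refactor sketch that collapses them with a kwargs dict (my rewrite over the variables already in scope, not the committed code):

nic_kwargs = dict(subnet_id=vpc_subnet_id,
                  groups=group_id,
                  associate_public_ip_address=assign_public_ip)
if private_ip:
    # only add the key when a fixed private IP was requested
    nic_kwargs['private_ip_address'] = private_ip
interface = boto.ec2.networkinterface.NetworkInterfaceSpecification(**nic_kwargs)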
@@ -814,38 +846,88 @@ def create_instances(module, ec2, override_count=None):

             params['block_device_map'] = bdm

-        res = ec2.run_instances(**params)
-    except boto.exception.BotoServerError, e:
-        module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message))
-
-    instids = [ i.id for i in res.instances ]
-    while True:
-        try:
-            res.connection.get_all_instances(instids)
-            break
-        except boto.exception.EC2ResponseError, e:
-            if "<Code>InvalidInstanceID.NotFound</Code>" in str(e):
-                # there's a race between start and get an instance
-                continue
-            else:
-                module.fail_json(msg = str(e))
+        # check to see if we're using spot pricing first before starting instances
+        if not spot_price:
+            if assign_public_ip and private_ip:
+                params.update(dict(
+                    min_count          = count_remaining,
+                    max_count          = count_remaining,
+                    client_token       = id,
+                    placement_group    = placement_group,
+                ))
+            else:
+                params.update(dict(
+                    min_count          = count_remaining,
+                    max_count          = count_remaining,
+                    client_token       = id,
+                    placement_group    = placement_group,
+                    private_ip_address = private_ip,
+                ))
+
+            res = ec2.run_instances(**params)
+            instids = [ i.id for i in res.instances ]
+            while True:
+                try:
+                    ec2.get_all_instances(instids)
+                    break
+                except boto.exception.EC2ResponseError as e:
+                    if "<Code>InvalidInstanceID.NotFound</Code>" in str(e):
+                        # there's a race between start and get an instance
+                        continue
+                    else:
+                        module.fail_json(msg = str(e))
+        else:
+            if private_ip:
+                module.fail_json(
+                    msg='private_ip only available with on-demand (non-spot) instances')
+            if boto_supports_param_in_spot_request(ec2, 'placement_group'):
+                params['placement_group'] = placement_group
+            elif placement_group:
+                module.fail_json(
+                    msg="placement_group parameter requires Boto version 2.3.0 or higher.")
+
+            params.update(dict(
+                count = count_remaining,
+            ))
+            res = ec2.request_spot_instances(spot_price, **params)
+
+            # Now we have to do the intermediate waiting
+            if wait:
+                spot_req_inst_ids = dict()
+                spot_wait_timeout = time.time() + spot_wait_timeout
+                while spot_wait_timeout > time.time():
+                    reqs = ec2.get_all_spot_instance_requests()
+                    for sirb in res:
+                        if sirb.id in spot_req_inst_ids:
+                            continue
+                        for sir in reqs:
+                            if sir.id == sirb.id and sir.instance_id is not None:
+                                spot_req_inst_ids[sirb.id] = sir.instance_id
+                    if len(spot_req_inst_ids) < count:
+                        time.sleep(5)
+                    else:
+                        break
+                if spot_wait_timeout <= time.time():
+                    module.fail_json(msg = "wait for spot requests timeout on %s" % time.asctime())
+                instids = spot_req_inst_ids.values()
+    except boto.exception.BotoServerError, e:
+        module.fail_json(msg = "Instance creation failed => %s: %s" % (e.error_code, e.error_message))

     if instance_tags:
         try:
             ec2.create_tags(instids, instance_tags)
         except boto.exception.EC2ResponseError, e:
-            module.fail_json(msg = "%s: %s" % (e.error_code, e.error_message))
+            module.fail_json(msg = "Instance tagging failed => %s: %s" % (e.error_code, e.error_message))

     # wait here until the instances are up
-    this_res = []
     num_running = 0
     wait_timeout = time.time() + wait_timeout
     while wait_timeout > time.time() and num_running < len(instids):
-        res_list = res.connection.get_all_instances(instids)
-        if len(res_list) > 0:
-            this_res = res_list[0]
-            num_running = len([ i for i in this_res.instances if i.state=='running' ])
-        else:
+        res_list = ec2.get_all_instances(instids)
+        num_running = 0
+        for res in res_list:
+            num_running += len([ i for i in res.instances if i.state=='running' ])
+        if len(res_list) <= 0:
            # got a bad response of some sort, possibly due to
            # stale/cached data. Wait a second and then try again
            time.sleep(1)
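Editor's note: both waiting loops in this hunk (spot-request fulfilment and instance startup) share a poll-until-deadline shape. A generic, self-contained sketch of the pattern (names are mine, not the module's):

import time

def wait_until(predicate, timeout_seconds, poll_interval=5):
    # poll predicate() until it returns truthy or the deadline passes,
    # mirroring the module's "deadline = time.time() + timeout" loops
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval)
    return False

# usage sketch: wait up to 600 seconds for a hypothetical check
print(wait_until(lambda: True, 600))   # True on the first poll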
@@ -859,8 +941,14 @@ def create_instances(module, ec2, override_count=None):
         # waiting took too long
         module.fail_json(msg = "wait for instances running timeout on %s" % time.asctime())

-    for inst in this_res.instances:
-        running_instances.append(inst)
+    #We do this after the loop ends so that we end up with one list
+    for res in res_list:
+        running_instances.extend(res.instances)
+
+    # Enabled by default by Amazon
+    if not source_dest_check:
+        for inst in res.instances:
+            inst.modify_attribute('sourceDestCheck', False)

     instance_dict_array = []
     created_instance_ids = []
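Editor's note: the source_dest_check loop above iterates res.instances, i.e. only the reservation left over in res after the preceding for loop, so instances in earlier reservations would keep the default. A hedged fix sketch using the list the module just collected (my suggestion, not the committed code):

# Enabled by default by Amazon; disable it on every launched instance,
# not just those in the last reservation
if not source_dest_check:
    for inst in running_instances:
        inst.modify_attribute('sourceDestCheck', False)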
@@ -1020,13 +1108,15 @@ def main():
             group_id = dict(type='list'),
             zone = dict(aliases=['aws_zone', 'ec2_zone']),
             instance_type = dict(aliases=['type']),
+            spot_price = dict(),
             image = dict(),
             kernel = dict(),
-            count = dict(default='1'),
+            count = dict(type='int', default='1'),
             monitoring = dict(type='bool', default=False),
             ramdisk = dict(),
             wait = dict(type='bool', default=False),
             wait_timeout = dict(default=300),
+            spot_wait_timeout = dict(default=600),
             placement_group = dict(),
             user_data = dict(),
             instance_tags = dict(type='dict'),
@@ -1035,10 +1125,12 @@ def main():
             private_ip = dict(),
             instance_profile_name = dict(),
             instance_ids = dict(type='list'),
+            source_dest_check = dict(type='bool', default=True),
             state = dict(default='present'),
             exact_count = dict(type='int', default=None),
             count_tag = dict(),
             volumes = dict(type='list'),
+            ebs_optimized = dict(),
         )
     )
@@ -22,24 +22,6 @@ short_description: create or destroy an image in ec2, return imageid
 description:
      - Creates or deletes ec2 images. This module has a dependency on python-boto >= 2.5
 options:
-  ec2_url:
-    description:
-      - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Must be specified if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used
-    required: false
-    default: null
-    aliases: []
-  aws_secret_key:
-    description:
-      - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used.
-    required: false
-    default: null
-    aliases: [ 'ec2_secret_key', 'secret_key' ]
-  aws_access_key:
-    description:
-      - AWS access key. If not set then the value of the AWS_ACCESS_KEY environment variable is used.
-    required: false
-    default: null
-    aliases: ['ec2_access_key', 'access_key' ]
   instance_id:
     description:
       - instance id of the image to create
@@ -101,17 +83,9 @@ options:
     required: false
     default: null
     aliases: []
-  validate_certs:
-    description:
-      - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
-    required: false
-    default: "yes"
-    choices: ["yes", "no"]
-    aliases: []
-    version_added: "1.5"

 requirements: [ "boto" ]
 author: Evan Duffield <eduffield@iacquire.com>
+extends_documentation_fragment: aws
 '''

 # Thank you to iAcquire for sponsoring development of this module.
Some files were not shown because too many files have changed in this diff.