* This adds support for the CommandRunner to handle executing commands on
the remote device.
* It also changes the waitfor argument to wait_for to remain compatible
with other modules and adds an alias for waitfor (see the argument_spec sketch below).
* Restricts commands to show commands only when check mode is specified.
* add version_added to wait_for doc string
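A minimal sketch of the backwards-compatible rename, assuming a typical Ansible argument_spec (the surrounding options are illustrative):

```python
# Illustrative only: keep the old parameter name working via an alias.
argument_spec = dict(
    commands=dict(type='list', required=True),
    wait_for=dict(type='list', aliases=['waitfor']),  # old 'waitfor' still accepted
)
```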
The IAM group modules were not receiving the `module` object, but they
use `module.fail_json()` in their exception handlers. This patch passes
through the module object so the real errors from boto are exposed,
rather than errors about "NoneType has no method `fail_json`".
$source check causes:
FAILED! => {"changed": false, "failed": true, "msg": "A parameter cannot be found that matches parameter name 'Source'."}
$Params.Remove causes:
FAILED! => {"changed": false, "failed": true, "msg": "Method invocation failed because [System.Management.Automation.PSCustomObject] does not contain a method named 'Remove'."}
* Change documented options for os_networks_facts
os_network_facts currently lists 'network' as an available option, taking the Name or ID. In Ansible 2.0.2 to 2.2.0, this is not valid. Options 'name' and 'id' should be used instead.
* Update os_networks_facts.py
* Update os_networks_facts.py
Set version_added to the only accepted value
* Update os_networks_facts.py
Removed inappropriate 'ID' parameter
When `version` is not specified, it defaults to "HEAD". "HEAD" is not a
remote tag, and it's not listed in the output of get_branches(), so we'd
keep repo_updated at the default value (None) and then return early with
changed=True in --check mode, even when before == after.
Fixes #3024.
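A minimal illustration of the intended check-mode behaviour (not the module's actual code): `changed` should come from comparing the before/after revisions rather than from an early return.

```python
# Illustrative only: report a change only when the revisions actually differ.
if module.check_mode:
    changed = (before is not None and before != after)
    module.exit_json(changed=changed, before=before, after=after)
```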
Ceph Object Gateway (Ceph RGW) is an object storage interface built on top of
librados to provide applications with a RESTful gateway to Ceph Storage
Clusters:
http://docs.ceph.com/docs/master/radosgw/
This patch adds the required bits to use the RGW S3 RESTful API properly.
Signed-off-by: Javier M. Mellid <jmunhoz@igalia.com>
Prior to Python 2.5, SystemExit was a subclass of Exception.
In Py2.4, this is causing extra error output on valid sys.exit(0).
(Toshio) Call sys.exit from inside of the SystemExit exception handler so py2.4 and py2.5+ behaviour matches
* Fixing compile-time errors regarding a) exception handling for Python 3 in util, b) problematic octal usage (fixed), and c) print json_dump -> print(json_dump(xyz)), et al.
* This code was not Python 2.4 compliant. Octal codes and exception handling are now working with Py 2.4, 2.6, & 3.5.
* Fixing formatting (or rather reverting a non-2.4-compatible change). Works in compile & runtime checking.
* a) Revert to using print to sys.stderr instead of fail_json; b) fixed var name in the exception handler.
* Python 3 compatible print (print >>sys.stderr will generate a TypeError - now uses sys.stderr.write instead).
The "Developing Modules" documentation states:
Include a minimum of dependencies if possible. If there are
dependencies, document them at the top of the module file, and have
the module raise JSON error messages when the import fails.
When docker_service runs on a remote host without PyYAML it crashes with
ImportError.
This patch raises a JSON error message when import fails, but only if
the PyYAML module is actually used. It's only needed when the
"definition" parameter is given.
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
The example for delete=yes does not specify recursive although it is
required. In addition, the wording for the delete option is confusing
about where files are really deleted from. This should clarify that.
* Update GitHub templates to reflect ansible/ansible
Update the GitHub templates to what is used for some time on ansible/ansible
For more information, see ansible/ansible#15961
* Small improvement from ansible/ansible
* Improve the unzip output scraping
Ensure we capture the complete filename (also when it includes spaces).
Drop lines that do not conform (in length) to what we expect (e.g. header/footer).
This fixes #3813
* Fix how split() works
* remove unused variables
* fetch branch name instead of HEAD
fix #3782, which was introduced by f1bacc1d3f9578f26d4ae2f66112cbb2509a7fe8
* disable git depth option for old git versions
fixes #3782
git support for `--depth` did not fully work in old git versions (before 1.8.2);
fall back to full clones/fetches on those versions
Reading the entire tar file into memory can result in out-of-memory
conditions such as this traceback:
Traceback (most recent call last):
File "/tmp/ansible_YELTSu/ansible_module_docker_image.py", line 486, in load_image
self.client.load_image(image_data)
File "/usr/local/lib/python2.7/dist-packages/docker/api/image.py", line 147, in load_image
res = self._post(self._url("/images/load"), data=data)
...
File "/usr/lib/python2.7/httplib.py", line 997, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 848, in _send_output
msg += message_body
MemoryError
Luckily docker-py's load_image(), which calls requests post(), accepts a
file-like object instead of a string. Pass in the file object to avoid
reading the full file into memory. This allows larger tar files to load
successfully.
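A minimal sketch of the streaming approach (the path variable is illustrative):

```python
# Pass a file object so docker-py streams the tar instead of holding it in memory.
image_tar = open(load_path, 'rb')   # load_path: path to the image tar (illustrative)
try:
    self.client.load_image(image_tar)
finally:
    image_tar.close()
```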
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
* This moves the lines in the code that parse the `env` and `env_file` options for docker to the end of the `__init__()` function.
This is needed because the `_check_capabilities` function needs both a working `self.client` and a proper `self.docker_py_versioninfo`.
`_check_capabilities` is used by `ensure_capabilities` which is, in turn, used by `get_environment`
This means that before this commit, the environment variables could not be loaded because both `self.client` and `self.docker_py_versioninfo` were not set at that time.
This commit fixes that by putting the environment variable parsing after those two.
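A simplified sketch of the resulting `__init__()` ordering (helper names are illustrative):

```python
def __init__(self, module):
    self.module = module
    self.client = self.get_client(module)                 # illustrative helper
    self.docker_py_versioninfo = self.get_versioninfo()   # illustrative helper
    # env/env_file parsing last, once client and versioninfo are available
    self.env = self.get_environment(module.params.get('env'),
                                    module.params.get('env_file'))
```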
The default pagination is every 100 items with a maximum of 1000 from
Amazon. This properly uses the marker returned by Amazon to concatenate
the various pages from the results.
This fixes #2440.
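A generic sketch of the marker-based pagination loop (the call name and response fields are illustrative, not boto's exact API):

```python
def fetch_all(paginated_call):
    # Keep requesting pages until Amazon reports no more results.
    items, marker = [], None
    while True:
        page = paginated_call(marker=marker, max_items=1000)
        items.extend(page['items'])
        if not page.get('is_truncated'):
            return items
        marker = page['marker']   # ask for the next page
```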
* Fixing exception handling for Python. Does not need to be compatible with Python 2.4 because boto is Python 2.6 and above.
* Fixing compile-time errors regarding exception handling for Python 3.5.
This does not need to be compatible with Python 2.4 because Boto is Python 2.6 and above.
Fix KeyError: 'prepared' while installing dependencies using deb=<file>.deb
This error shows up when --diff was not passed and the deb file has dependencies not yet installed.
Closes #3752.
Currently, when writing user's crontab, ansible calls
crontab <file> -u <user>
This is incorrect according to crontab(1) on both FreeBSD and Linux,
which suggest that file argument should be the last.
At least on FreeBSD, this leads to incorrect cron module behavior, which
writes to root's crontab instead of the user's.
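A small sketch of the corrected invocation, with the file argument placed last (argument building is illustrative):

```python
cmd = ['crontab']
if user:
    cmd.extend(['-u', user])
cmd.append(path)            # the crontab file must come last
rc, out, err = module.run_command(cmd)
```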
Fall back to unzip if zipfile fails and hope that unzip can deal with it
(sites have an easier time upgrading the unzip utility than all of
python).
https://bugs.python.org/issue3997
Fixes #3560
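A rough sketch of the fallback, assuming illustrative src/dest variables:

```python
import zipfile

try:
    names = zipfile.ZipFile(src).namelist()
except zipfile.BadZipfile:
    # zipfile gave up; hope the unzip utility can handle the archive
    rc, out, err = module.run_command(['unzip', '-o', src, '-d', dest])
    if rc != 0:
        module.fail_json(msg="Failed to unzip %s: %s" % (src, err))
```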
packaging/language/pip.py:
virtualenv option:
Mention that virtualenv is created if it does not exist.
(Explicit is better than implicit.)
Mention other relevant options.
notes:
initialized -> created
Wrap long lines.
This is to address this error:
fatal: [site]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to connect to S3: Region does not seem to be available for awsmodule boto.s3. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"}
Commit 0dd58e9 changed the logic so an exception is thrown (by
`connect_to_aws`) before the `s3 is None` check is performed. This
changes the `None` check to a catch so the old logic can compensate.
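A sketch of the described change, catching the exception so the existing fallback logic still runs (exception class and surrounding code are abbreviated/illustrative):

```python
try:
    s3 = connect_to_aws(boto.s3, location, **aws_connect_kwargs)
except AnsibleAWSError:
    s3 = None   # let the old `s3 is None` handling compensate
```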
This fixes passing the update variable to str()
so that it avoids the exception when ops.dc.read()
returns a dictionary which contains non-string keys.
This is due to the fact that some of the key types in the
OpenSwitch schema are actually defined as integers,
and the ops.dc declarative config module encodes those
as integers inside the dictionary. This may be
the right encoding from the schema point of view,
but someone needs to convert it to a string
somewhere, as JSON keys should be strings.
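A generic illustration of the key problem (not the module's exact change): JSON object keys must be strings, while the schema can yield integer keys.

```python
raw = {1: 'admin', 'name': 'vlan1'}                        # integer key from the schema
normalized = dict((str(k), v) for (k, v) in raw.items())   # {'1': 'admin', 'name': 'vlan1'}
```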
In the mysql_user module, login_host is defined as "localhost". Setting this to localhost also fixes Ubuntu 16.04 support.
To make it more consistent in the future, the params in all mysql modules should move to module utils. I'll take care of that.
Also fixed a few other things along the way.
- httpd removed from control_binaries
- check for enabled module after running a2enmod/a2dismod
- fail if the user has no permission to run the control_binary
- reduce code duplication
* Detection of handler depends on the wrong handler failing to list the contents of the tarfile.
Use explicit compression types with the python tarfile library to
achieve that.
* bytearray isn't available in python2.4
* unarchive: use Python's tarfile module for tar listing
fixes https://github.com/ansible/ansible/issues/11348
Depending on the currently active locale, `tar`'s file listing can end up
spitting out backslash-escaped characters. Unfortunately, when that happens,
we end up with double-escaped backslashes, giving us a wrong path,
making our action fail.
We could try un-double-escaping our paths, but that would be complicated
and, I think, error-prone. The easiest way forward seemed to simply use
the `tarfile` module.
Why use it only for listing? Because the `unarchive` module also
supports the `extra_opts` option, and supporting this would require
us to mimic `tar`'s interface.
For listing files, however, I don't think that the loss of `extra_opts`
support causes problems (well, I hope so).
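A small sketch of listing members with an explicit compression mode (the function and mode handling are illustrative):

```python
import tarfile

def list_tar_members(path, compression=''):
    # '' for plain tar, 'gz' or 'bz2' for compressed archives
    archive = tarfile.open(path, 'r:' + compression)
    try:
        return archive.getnames()
    finally:
        archive.close()
```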
* unarchive: re-add xz decompression support
Following the previous change to use Python's `tarfile` module for tar file
listing, we lost `xz` decompression support. This commit re-adds it by
adding a special case in `TarXzArchive` that pre-decompresses the source
file.
* Adding docker_container
* If state absent, stop the container before attempting to remove. Fixed status running check.
* If container absent, stop before removing. Fix container status check.
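A rough sketch of that ordering with docker-py (the lookup and state handling are simplified/illustrative):

```python
info = client.inspect_container(name)
if info['State'].get('Running'):
    client.stop(info['Id'])          # stop first so removal does not fail
client.remove_container(info['Id'])
```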