The lvol module uses different logic in check mode to decide whether a change is needed, and this logic is *only* based on a size check. During a normal run, however, it is the lvreduce or lvextend tool that decides whether a change is performed (i.e. whether the requested and existing sizes differ). So while in check mode the module reports a change, in a real run it does not in fact change anything and reports ok.
One solution would be to reimplement the exact size-comparison logic of lvextend and lvreduce, but we opted to use the `--test` option of each command to verify whether a change would be made. In effect, check mode and run mode now use the exact same logic and reach the same conclusion.
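A minimal sketch of the idea (not the module's actual code; the helper name and the exact message checked are assumptions):
```
# Hypothetical helper: let lvextend itself decide whether a resize would
# change anything, so check mode and run mode agree.
def resize_would_change(module, lv_path, size):
    # module is an AnsibleModule; --test makes lvextend a dry run
    rc, out, err = module.run_command(['lvextend', '--test', '-L', size, lv_path])
    # assumption: rc == 0 and no "matches existing size" message means a
    # real run would actually resize the volume
    return rc == 0 and 'matches existing size' not in out + err
```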
Instead of doing an unpack, deliberately specify which parameters you
want to use. This allows us to flexibly add more parameters to the
f5_argument_spec without having to rewrite all the modules that use
it.
Functionally this commit changes nothing; it just provides a
different way of accessing the module's parameters.
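A hedged illustration of the difference (parameter names are examples, not an exhaustive list):
```
# Before: everything was unpacked positionally from f5_argument_spec,
# so adding a key to the shared spec broke every caller.
# After: each module names only the parameters it actually uses.
def connection_params(module):
    p = module.params
    return p['server'], p['user'], p['password'], p['validate_certs']
```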
* refactor zypper module
Cleanup:
* remove mention of old_zypper (no longer supported)
* requirement goes up to zypper 1.0, SLES 11.0, openSUSE 11.1
* allows use of newer features (XML output)
* already done for zypper_repository
* use zypper instead of rpm to get old version information, based on work by @jasonmader
* don't use rpm, zypper can do everything itself
* run zypper only twice, first to determine current state, then to apply changes
New features:
* determine change by parsing zypper xmlout
* determine failure by checking return code
* allow simultaneous installation/removal of packages (using '-' and '+' prefixes); see the sketch after this list
* allows swapping out alternatives without removing packages that
depend on them
* implement checkmode, using zypper --dry-run
* implement diffmode
* implement 'name=* state=latest' and 'name=* state=latest type=patch'
* add force parameter, handed to zypper to allow downgrade or change of vendor/architecture
Fixes/Replaces:
* fixes #1627, give changed=False on installed patches
* fixes #2094, handling URLs for packages
* fixes #1461, fixes #546, allow state=latest name='*'
* fixes #299, changed=False on second install; actually this was fixed earlier, but it is explicitly tested now
* fixes #1824, add type=application
* fixes #1256, install rpm from path, this is done by passing URLs and paths directly to zypper
* fix typo in package_update_all
* minor fixes
* remove commented code block
* bump version added to 2.2
* deal with zypper return codes 103 and 106
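A hedged sketch of how the '+'/'-' prefixes from the list above can end up in a single zypper transaction (the helper name and exact global options are illustrative):
```
def build_install_command(names, force=False):
    # '+pkg' asks zypper to install, '-pkg' to remove; a bare name installs.
    # Both kinds can be mixed in one call, i.e. one transaction.
    cmd = ['zypper', '--non-interactive', '--xmlout', 'install']
    if force:
        cmd.append('--force')
    cmd.extend(names)
    return cmd

# e.g. build_install_command(['+postfix', '-sendmail'])
```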
* Add git_config module
This module can be used for reading and writing git configuration at all
three scopes (local, global and system). It supports --diff and --check
out of the box.
This module is based on the following gist:
https://gist.github.com/mgedmin/b38c74e2d25cb4f47908
I tidied it up and added support for the following:
- Reading values on top of writing them
- Reading and writing values at any scope
The original author is credited in the documentation for the module.
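Roughly, the scope parameter maps onto git's own flags; a simplified sketch (not the module's exact internals):
```
def build_git_config_args(name, value=None, scope='local'):
    # scope is one of 'local', 'global' or 'system'
    args = ['git', 'config', '--' + scope]
    if value is None:
        args += ['--get', name]      # read the current value
    else:
        args += [name, value]        # write a new value
    return args

# build_git_config_args('user.email', scope='global')
# -> ['git', 'config', '--global', '--get', 'user.email']
```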
* Respond to review feedback
- Improve documentation by adding choices for parameters, requirements
for module, and add missing description for scope parameter.
- Fail gracefully when git is not installed (followed example of puppet
module).
- Remove trailing whitespace.
* Change repo parameter to type 'path'
This ensures that all paths are automatically expanded appropriately.
* Set locale to C before running commands to ensure consistent error messages
This is important to ensure error message parsing occurs correctly.
* Adjust comment
As is done in other ansible modules, this adds the __main__ check
to the module so that the module code itself can be used as a library.
For instance, when testing the code.
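For reference, the pattern being added is the standard Python entry-point guard:
```
# Running the file executes main(); importing it (e.g. from a test) does not.
if __name__ == '__main__':
    main()
```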
It's not particularly obvious that removing an application will remove it
from ufw's own state, potentially leaving ports open on your box if you
upload your configuration.
Whilst this applies to a lot of things in Ansible, firewall rules might
cross some sort of line that justifies such a warning in this instance.
Signed-off-by: Chris Lamb <chris@chris-lamb.co.uk>
1) Removed kubectl functionality. We'll move that into a different
module in the future. Also removed post/put/patch/delete options,
as they are not Ansible best practice.
2) Expanded error handling in areas where tracebacks were most likely,
based on bad data from users, etc.
3) Added an 'insecure' option and made the password param optional, to
enable the use of the local insecure port.
4) Allowed the data (both inline and from the file) to support multiple
items via a list. This is common in YAML files where multiple docs
are used to create/remove multiple resources in one shot.
5) General bug fixing.
* Remove support for ancient zypper versions
Even SLES11 has zypper 1.x.
* zypper_repository: don't silently ignore repo changes
So far when a repo URL changes this got silently ignored (leading to
incorrect package installations) due to this code:
elif 'already exists. Please use another alias' in stderr:
changed = False
Removing this reveals that we correctly detect that a repo definition
has changed (via repo_subset), but we report it as a nonexistent repo
rather than as a change, which currently makes us bail out silently in
the above statement.
To fix this, distinguish between non-existent and modified repos and
remove the repo first in case of modifications (since zypper has no
force option to overwrite it and 'zypper mr' uses different arguments).
To do this we have to identify a repo by name, alias or URL.
* Don't fail on empty values
This unbreaks deleting repositories
* refactor zypper_repository module
* add properties enabled and priority
* allow changing of one property and correctly report changed
* allow overwrite of multiple repositories by alias and URL
* cleanup of unused code and more structuring
* respect enabled option
* make zypper_repository conform to python2.4
* allow repo deletion only by alias
* check for non-existent url field and use alias instead
* remove empty notes and aliases
* add version_added for priority and overwrite_multiple
* add version requirement on zypper and distribution
* zypper 1.0 is enough and exists
* make suse versions note, not requirement
based on comment by @alxgu
* add vmware maintenance mode support
* changed version number in documentation
* updated version_added to 2.0 since CI is failing
* changed version to 2.0 due to CI - error asking for 2.1
* added RETURN
* updated formatting of return values and added some to clarify actions taken
* Support for masquerade settings
Ability to enable and disable masquerade settings from ansible via:
- firewalld: mapping=masquerade state=disabled permanent=true zone=dmz
Placeholder added (mapping) to support masquerade and port_forward
choices initially - port_forward not implemented yet.
* Permanent and Immediate zone handling differentiated
* Corrected naming abstraction for masquerading functionality
Removed mapping tag with port_forward choices - not applicable!
* Added version info for new masquerade option
Pull Request #2017 failing due to missing version info
* Add SQS queue policy attachment functionality
SQS queue has no attribute 'Policy' until one is attached, so this special
case must be handled uniquely
SQS queue Policy can now be passed in as json
container_config:
- "lxc.network.ipv4.gateway=auto"
- "lxc.network.ipv4=192.0.2.1"
might try to override lxc.network.ipv4.gateway in the second entry as both
start with "lxc.network.ipv4".
Use a regular expression to find a line that contains (optional)
whitespace and an = after the key.
Signed-off-by: Evgeni Golov <evgeni@golov.de>
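A minimal illustration of the regular expression described above (the helper name is illustrative):
```
import re

def line_has_key(line, key):
    # match only the exact key followed by optional whitespace and '=',
    # so 'lxc.network.ipv4' does not also match 'lxc.network.ipv4.gateway'
    return re.match(r'\s*%s\s*=' % re.escape(key), line) is not None

# line_has_key('lxc.network.ipv4.gateway=auto', 'lxc.network.ipv4') -> False
# line_has_key('lxc.network.ipv4 = 192.0.2.1', 'lxc.network.ipv4')  -> True
```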
Before, the following would produce four entries:
container_config:
- "lxc.network.flags=up"
- "lxc.network.flags =up"
- "lxc.network.flags= up"
- "lxc.network.flags = up"
let's strip the whitespace and insert only one "lxc.network.flags = up"
into the final config
Signed-off-by: Evgeni Golov <evgeni@golov.de>
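A small sketch of that normalization (illustrative, not the module's exact code):
```
def normalize(entry):
    # all four spellings of the same option collapse to one canonical line
    key, _, value = entry.partition('=')
    return '%s = %s' % (key.strip(), value.strip())

# {normalize(e) for e in ['lxc.network.flags=up', 'lxc.network.flags =up',
#                         'lxc.network.flags= up', 'lxc.network.flags = up']}
# -> set(['lxc.network.flags = up'])
```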
The previous version of my regexp did not take into account packages
such as 'p5-Perl-Tidy' or 'p5-Test-Output', so use a greedy match up to
the last occurrence of '-' for matching the package. This regex has
been extensively tested using all packages as provided by pkgsrc-2016Q1[1].
Footnotes:
[1] http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/?only_with_tag=pkgsrc-2016Q1
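A simplified version of that greedy match (the real module's pattern differs, but the idea is the same):
```
import re

def split_name_version(package):
    # greedy (.+) consumes up to the last '-', so names that themselves
    # contain '-' (p5-Perl-Tidy, p5-Test-Output, ...) are handled
    m = re.match(r'^(.+)-([^-]+)$', package)
    return m.group(1), m.group(2)

# split_name_version('p5-Perl-Tidy-20160302') -> ('p5-Perl-Tidy', '20160302')
```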
Since user_key and app_token are used for authentication, I
suspect both of them should be kept secret.
According to the API manual, https://pushover.net/api,
priority goes from -2 to 2, so the argument should be constrained.
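The corresponding argument_spec changes might look roughly like this (option names taken from the text above; defaults and value types are assumptions):
```
argument_spec = dict(
    user_key=dict(required=True, no_log=True),
    app_token=dict(required=True, no_log=True),
    priority=dict(default='0', choices=['-2', '-1', '0', '1', '2']),
)
```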
- make path to pkgin a global and stop passing it around; it's not going
to change while ansible is running
- add support for several new options:
* upgrade
* full_upgrade
* force
* clean
- allow for update_cache to be run in the same task as upgrading/installing
packages instead of needing a separate task for that
Only a small issue in results.
In case the type is ingress, we rely on the IP address, but in the results we also return the network.
Resolving the IP address works without zone params. If the IP address is not located in the default zone and the zone param is not set,
the network won't be found because the default zone was used for the network listing query.
However, since the network param is not used for type ingress, we skip returning the network in the results.
At the moment, this only works when 'enable' is equal to 'yes' or 'no'.
While I'm on it, I also fixed a typo in the example and added a required
parameter.
* VMware datacenter module rewritten so it doesn't hold the pyvmomi context and objects in the Ansible module object
fixed exception handling
added datacenter destroy result, moved checks
changed wrong value
wrong value again... need some sleep
* check_mode fixes
* state defaults to present, default changed to true
* module check fixes
Note that since cpanm version 1.6926 its messages are sent to stdout,
whereas previously they were sent to stderr.
Also there is no need to initialize out_cpanm and err_cpanm and
check for their truthiness as module.run_command() and str.find()
take care of that.
* added stdout and stderr outputs
Added stdout and stderr outputs of the results from composer, as the current msg output strips \n and so is very hard to read when debugging
* using stdout for fail_json
using stdout for fail_json so we get the stdout_lines array
with the default umask tar will create a world-readable archive of the
container, which may contain sensitive data
Signed-off-by: Evgeni Golov <evgeni@golov.de>
* do not use a predictable filename for the LXC attach script
* don't use predictable filenames for LXC attach script logging
* don't set a predictable archive_path
this should prevent symlink attacks which could result in
* data corruption
* data leakage
* privilege escalation
otherwise deploying user containers fails, as these require information
from ~/.config/lxc/default.conf, which the LXC tools will load if no
--config was supplied
Signed-off-by: Evgeni Golov <evgeni@golov.de>
This change adds a note to the win_scheduled_task module
docs that indicates Windows Server 2012 or later is required.
This is because the module relies on the Get-ScheduledTask
cmdlet, which is a part of the Server 2012 OS. Previous
versions, like Server 2008, simply can't work with this
module.
The range_search() API was added to the shade library in version
1.5.0 so let's check for that and let the user know they need to
upgrade if they try to use it.
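A hedged sketch of such a guard (assuming shade exposes __version__, as the OpenStack modules commonly rely on):
```
from distutils.version import StrictVersion

def check_range_search_support(module, shade):
    # fail with a clear message instead of an AttributeError deep inside shade
    if StrictVersion(shade.__version__) < StrictVersion('1.5.0'):
        module.fail_json(msg='shade 1.5.0 or newer is required to use range_search')
```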
Addition of an os_ironic_inspect module to leverage the OpenStack
Baremetal Inspector add-on to ironic, or an ironic driver's out-of-band
hardware introspection, if supported and configured.
The manual check to see if get_bin_path() returned anything is
redundant, because we pass True to the required parameter of
get_bin_path(). This automatically causes the task to fail if the pacman
binary isn't available. Therefore, the code within the if statement
being removed is never called.
-e or --execute [1] allows executing a specific piece of Puppet code,
such as a class.
For example, in puppet you would run:
puppet apply -e 'include ::mymodule'
In Ansible this becomes:
puppet: execute='include ::mymodule'
[1] http://docs.puppetlabs.com/puppet/latest/reference/man/apply.html#OPTIONS
win_unzip fails to extract files when either src or dest contains
complex paths such as "..\..\" or "C:\\Program Files" (double slashes).
Fix this by fetching absolute path of both before invoking CopyHere
method.
Set type int for the various port parameters (and so avoid converting them later)
Set no_log=True for the login_password
Verify that db is an int, to avoid a conversion
Do a sorted comparison of the list of security groups supplied via `module.params.get('security_groups')` and the list of security groups fetched via `get_sec_group_list(eni.groups)`. This fixes an incorrect "The specified address is already in use" error if the order of security groups in those lists differs.
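In essence the fix boils down to an order-insensitive comparison, something like:
```
def security_groups_differ(wanted, current):
    # compare membership, not ordering
    return sorted(wanted) != sorted(current)
```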
I changed the logic here to always use the 'netsh ... show rule' keywords as keys for the $fwsettings map, while the translation (e.g. Enabled -> enable) is performed when invoking the 'netsh ... add rule' command.
I tested rule creation, and rule creation when the rule already existed, on Windows Server 2012.
Currently the module doesn't explicitly close the file handle. This
wraps the reading of the private key in a try/finally block to ensure
the file is properly closed.
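A minimal illustration of the try/finally wrapping described above (names are illustrative):
```
def read_private_key(path):
    key_file = open(path, 'rb')
    try:
        return key_file.read()
    finally:
        key_file.close()   # closed even if read() raises
```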
When passing a package version that parses as a number (e.g. `1.9`), the version should be converted to a string before being concatenated to the package name.
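A hedged sketch of the fix (the separator and helper name are illustrative; the point is the explicit str() conversion):
```
def full_package_name(name, version=None):
    if version is not None:
        # YAML may hand the version over as a float (e.g. 1.9), so convert
        # explicitly before concatenating
        return name + '-' + str(version)
    return name

# full_package_name('nginx', 1.9) -> 'nginx-1.9'
```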
Add exit_json code to exit successfully when you want to delete an already
deleted host.
Without this, the playbook fails with
`Specify at least one group for creating host`
which is not the correct message.
New module to retrieve facts about existing instance flavors.
By default, facts on all available flavors will be returned.
This can be narrowed by naming a flavor or specifying criteria
about flavor RAM or VCPUs.
- original parameter comment was probably a copy&paste error
- new comment highlights that firewall rules can be
added or removed by altering this parameter
Session_id is unused in update_session, changed is always explicitly
set in all exit_json calls, and consul_client.session.destroy returns True
or False, which is unused later (nor checked)
TRACE:
while parsing a block mapping
in "<string>", line 33, column 13:
description: resulting state of ...
^
expected <block end>, but found ','
in "lxc_container.RETURN", line 419, column 53:
... "/tmp/test-container-config.tar",
ERROR: RETURN is not valid YAML. Line 419 column 53
- "action" style invoking is a legacy way to call modules
- the examples were updated to the typical style of calling complex
modules:
ovirt:
parameter1: value1
parameter2: value2
...
The os_project module instantiates the openstack cloud object
by passing the module params kwargs.
As the params contain a key named 'domain_id', this is used
for domain in the OpenStack connection, instead of the domain value
the user specifies on the OSCC clouds.yaml or OpenStack envvars.
This fix corrects this by popping the 'domain_id' key, so we
keep the value but it's not passed later in module.params.
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0 exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_vm_vss_dvs_migrate module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Migrate VCSA to vDS
local_action:
module: vmware_vm_vss_dvs_migrate
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
vm_name: "{{ hostname }}"
dvportgroup_name: Management
```
Module Testing
```
TASK [Migrate VCSA to vDS] *****************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:260
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" )
localhost PUT /tmp/tmpkzD4pF TO /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"dvportgroup_name": "Management", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root", "vm_name": "cscvcatmp001"}, "module_name": "vmware_vm_vss_dvs_migrate"}, "result": null}
```
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0 exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvs_portgroup module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create Management portgroup
local_action:
module: vmware_dvs_portgroup
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
portgroup_name: Management
switch_name: dvSwitch
vlan_id: "{{ hostvars[groups['foundation_esxi'][0]].mgmt_vlan_id }}"
num_ports: 120
portgroup_type: earlyBinding
state: present
```
Module Testing
```
TASK [Create Management portgroup] *********************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:17
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" )
localhost PUT /tmp/tmpeQ8M1U TO /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"hostname": "172.27.0.100", "num_ports": 120, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "portgroup_name": "Management", "portgroup_type": "earlyBinding", "state": "present", "switch_name": "dvSwitch", "username": "root", "vlan_id": 2700}, "module_name": "vmware_dvs_portgroup"}, "result": "None"}
```
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0 exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_cluster module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create Cluster
local_action:
module: vmware_cluster
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
datacenter_name: "{{ mgmt_vdc }}"
cluster_name: "{{ mgmt_cluster }}"
enable_ha: True
enable_drs: True
enable_vsan: True
```
Module testing
```
TASK [Create Cluster] **********************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:188
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" )
localhost PUT /tmp/tmpAJfdPb TO /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "enable_drs": true, "enable_ha": true, "enable_vsan": true, "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_cluster"}}
```
win_uri uses "Invoke-WebRequest" under the covers, which apparently
uses Internet Explorer to parse a webpage. The problem is if a user
has never run Internet Explorer, it will be unable to do that. The
workaround for this is to set the "-UseBasicParsing" flag.
The only advantage to having the Internet Explorer parsed page is
that you can then access the DOM as if it was a powershell
argument. That doesn't seem super useful for Ansible to be able
to do, so I set the default to be "-UseBasicParsing".
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0 exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvswitch module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create dvswitch
local_action:
module: vmware_dvswitch
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
datacenter_name: "{{ mgmt_vdc }}"
switch_name: dvSwitch
mtu: 1500
uplink_quantity: 2
discovery_proto: lldp
discovery_operation: both
state: present
```
Module Testing
```
TASK [Create dvswitch] *********************************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" )
localhost PUT /tmp/tmptb3e2c TO /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"datacenter_name": "Test-Lab", "discovery_operation": "both", "discovery_proto": "lldp", "hostname": "172.27.0.100", "mtu": 1500, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "uplink_quantity": 2, "username": "root"}, "module_name": "vmware_dvswitch"}, "result": "'vim.dvs.VmwareDistributedVirtualSwitch:dvs-9'"}
```
dnf: name=PACKAGE state=latest is responsible for two use cases:
- to install a package if not already installed.
- to update the package to the latest if already installed.
The latter use case is not handled properly as base.upgrade does not
throw dnf.exceptions.MarkingError if a package is not installed.
Setting base.conf.best = True ensures a package is installed or
updated to the latest when calling base.install.
Sign-off: jsilhan@redhat.com
Sign-off: jchaloup@redhat.com
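A rough sketch of what that means in terms of the dnf Python API (assuming a plainly configured dnf.Base; error handling omitted):
```
import dnf

# With conf.best set, install() covers both "not installed yet" and
# "upgrade to latest".
base = dnf.Base()
base.conf.best = True
base.read_all_repos()
base.fill_sack()
base.install('PACKAGE')
base.resolve()
base.download_packages(base.transaction.install_set)
base.do_transaction()
```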
While returning puppet logs as ansible stdout is useful in some cases,
there are also cases where it's more destructive than helpful. For
those, local logging to syslog so that the ansible logging makes sense
is very useful.
This defaults to stdout so that behavior does not change for people.
The bigip_api method was changed in the module_utils function definition
to include the validate_certs option but the bigip_virtual_server module
was not updated accordingly. This patch updates the method so that the
error message below is not returned to the user:
received exception: bigip_api() takes exactly 4 arguments (3 given)
This moves the validation of properties to the zfs command itself. The
properties and their choices were not really correct anyway due to
differences between OpenZFS and Solaris/ZFS.
The patch module has a few missing items, and inconsistencies, in its
documentation. A few of which are addressed here.
Within Ansible documentation, the choices for boolean values are
commonly 'yes' and 'no'. We standardise the options on that.
'remote_src' documentation uses 'False' and 'True' for its documentation,
so these have been updated in both the choices and default.
'src' documentation refers to 'remote_src', so is updated to use
the 'no' choice.
'backup' did not describe its options and default at all, so we add
them.
'binary' default used 'False', but specified the type as 'bool' which is
implicitly documented as 'yes'/'no', so we make that 'no' as well.
cloudstack: cs_instance: fix do not require name to be set to avoid clashes
Require one of display_name or name. If both are given, name is used as the identifier.
cloudstack: fix name is not case insensitive
cloudstack: cs_template: implement state=extracted
Update f5 validate_certs functionality to do the right thing on multiple python versions
This requires the implementation in the module_utils code here
https://github.com/ansible/ansible/pull/13667 to function
fixed domain_id to actually be supported
also added domain as an alias
alt fixes #1437
Simplify the code and remove use_unsafe_shell=True
While there is no security issue with this shell snippet, it
is better to not rely on shell and avoid use_unsafe_shell.
Fix for issue #1074. Now able to create a volume without replicas.
Improved fix for #1074. Both None and '' transform to fqdn.
Fix for ansible-modules-extras issue #1080
This prevents failing when a playbook describes a volume deletion and
is launched more than once.
Without this fix, if you run the playbook a second time, it will fail.
Loop compatibility for dry run exception handling
Route table deletion dry run handler
Fixing regression in propagating_vgw_ids default value
Adjusting truthiness of changed attribute for route manipulation
Updating propagating_vgw_ids default in docstring
One inconvenience this addresses is the fact that
you have to pass the user's tags even if you just want to
add a permission rule
Signed-off-by: Marian Rusu <rusumarian91@gmail.com>
The `rabbitmqctl list_users` command will list the user's last login time
which does not include `\t` character. This is causing a ValueError exception
when attempting to split a user and its tags from the command output. This
fix will check for a `\t` in the current line of the output before splitting.
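A simplified sketch of the parsing guard (the function name is illustrative):
```
def parse_user_tags(output):
    users = {}
    for line in output.splitlines():
        if '\t' not in line:
            continue            # e.g. the last-login line mentioned above
        name, tags = line.split('\t', 1)
        users[name] = tags
    return users
```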
Values for boolean types were being unconditionally treated as strings
(by calling `.lower()`), thus breaking value parsing for actual boolean
and integer objects.
It looks like the bug was introduced in:
- 130bd670d82cc55fa321021e819838e07ff10c08
Fixes #709.
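The shape of the fix is roughly (illustrative, not the module's exact code):
```
def normalize_value(value):
    # only strings get case-folded; booleans and integers pass through
    if isinstance(value, str):
        return value.lower()
    return value
```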
Some do not use the json module directly so don't need import json.
Some needed to fall back to simplejson, with no traceback if neither was installed.
Fixes #1298
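The fallback referred to is the usual import pattern, roughly:
```
try:
    import json
except ImportError:
    try:
        import simplejson as json
    except ImportError:
        # neither is available; the module reports this via fail_json
        # instead of raising an ImportError traceback
        json = None
```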
Add default value
Rename argument
Explicit verification of relative bower path
Add example
Old keyword name used in example
BUGFIX: tilde expansion actually useless on relative paths
Modify relative_execpath default value as suggested
Added version_added for relative_execpath
Update for last few comments on the bug report
* version to 2.1 since this feature enhancement will now go into 2.1
* set path and relative_execpath type to path
* Set default value of path to None
Update: query_package documentation
Fix: Number of packages to update was one too high
because of counting the '\n'
Fix: Pacman was reinstalling state=latest packages,
even when it was unable to load the remote version
cloudstack: cs_volume: fix not usable in older cloudstack versions
affects CCP 4.3.0.2 , but not ACS / CCP 4.5.1
closes #1321
cloudstack: cs_volume: fix unable to create volumes with the same name on multiple zones
cloudstack: cs_volume: use type bool and fix python3 support
Infra has been keeping a local copy of this waiting for ansible 2 to
release. In getting ready for ansible 2 (and our ability to delete our
local copy of the file), I noticed we had a couple of minor cleanups.
Also, the timeout command is there to improve life and work around puppet
deficiencies. However, it's not working around deficiencies on systems
that do not have the timeout command if we blindly use it.
The puppet specific timeout options are more complex and out of scope of
this.
Issue: #1273
- cs_instance: fix VM not updated when states stopped, started or restarted are given
A missing VM will be created, but an existing one was not updated. This fixes the lack of consistency.
- cs_instance: fix user data can not be cleared
- cs_instance: fix deleted VM not recovered on state=present
Instead of waiting for up to a certain number of retries we set a high
timeout and only re-check every five seconds. Certain services can
take a minute or more to start and we want to avoid wasting resources
by polling too often.
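Conceptually, the polling looks like this (a sketch; names and the timeout value are illustrative):
```
import time

def wait_for_status(get_status, wanted, timeout=300):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_status() == wanted:
            return True
        time.sleep(5)       # re-check every five seconds, not every retry tick
    return False
```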
@mpeters reported that we're not checking that the named service is
actually there after a reload. And that sometimes monit doesn't actually
return anything at all after a reload.
If there are already ongoing actions for a process managed by monit, the
module would exit unsuccessfully. It could also give false positives,
because it did not determine whether the service was started/stopped
when it was in a pending state; the pending action might be turning the
service off even though the requested action was to start it.
For example "Running - pending stop" would be regarded as the service
running and "state=enabled" would do nothing.
This will make Ansible wait for the state to finalize, or a timeout decided
by the new `max_retries` option, before it decides what to do.
This fixes issue #244.
* Remove leading module parameter on open_url call as it's no longer used
by module_utils.urls.open_url
* Force basic auth otherwise vsphere will just return a 401
If a bucket is being created in us-east-1, the module passed
'us-east-1' to boto's s3.create_bucket method rather than
Location.DEFAULT (an empty string). This caused boto to generate
invalid XML which AWS was unable to interpret.
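A hedged sketch of the special case, using boto's Location constants:
```
from boto.s3.connection import Location

def creation_location(region):
    # us-east-1 must be passed as Location.DEFAULT (an empty string);
    # passing the literal region name makes boto emit invalid XML
    if region in (None, '', 'us-east-1'):
        return Location.DEFAULT
    return region
```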
This patch adds a new module which allows adding and removing YUM
repository definitions. The module implements all repository options
as described in the `yum.conf` manual page.