The eos terminal plugin did not correctly catch the error message
returned when trying to configure more than one OSPF instance. This
change updates the terminal plugin to catch that scenario.
* windows: add #AnsibleRequires to set whether a module requires a module or a specific version
* fix up pep8 issues
* changed psversion to use the actual ps Requires -Version syntax
* missed the check on #Requires -Version
* fix #Requires module extensions
* module_utils #Requires should not have .psm1 extension if "real" Powershell will ever execute them
* updated validate-modules to enforce this
* added check to disallow multi-module syntax on Ansible.ModuleUtils #Requires
* Start using ClientRequestTokens in event lists
* Include request token in all requests that support it (basically all but check mode/changeset)
* Update placebo recordings
* Add comments for CRQ popping
* nosh system module: fixes and improvements
documentation:
* fleshed out and fixed to better follow the official guidelines
consistency:
* the following facts will now always be returned on success: name,
service_path, enabled, preset, user, status
* state is only returned when the state option is used
* state and status will be null if the service is not loaded by the end
of the task
* [nosh]: PEP8 fix
* Rebase with update of remote repository
* Add Example
* Reference to example
* Fix error with colon (ansibot saw a YAML value, not a string)
* Change inventory mode to manual
add link to the Zabbix inventory documentation
* Fix:
The test ansible-test sanity --test pep8 [?] failed with the following error:
lib/ansible/modules/monitoring/zabbix_host.py:532:1: E302 expected 2 blank lines, found 1
The test ansible-test sanity --test validate-modules [?] failed with the following error:
lib/ansible/modules/monitoring/zabbix_host.py:0:0: E309 version_added for new option (inventory_zabbix) should be 2.5. Currently 2.4
* Handle timezone updates on Ubuntu 16.04+ on containers
Although Ubuntu 16.04 will use timedatectl by default,
containers without a working timedatectl need to use the
old method.
A bug in Ubuntu for the old method means having to write
a nasty hack
https://bugs.launchpad.net/ubuntu/+source/tzdata/+bug/1554806
* Add tests for timezones
Ensure timezone changes work across various OSs
* added win_audit_rule with integration test
* Updated integration testing to target files as well as directories
and registry keys. Split testing files apart to be more organized.
Updated powershell for better handling when targeting file objects
and optimized a bit. Removed duplicated sections that got there from a
previous merge I think.
* Decided to make all the fact names the same in integration testing.
Seemed like there would be less chance of accidentally using the wrong
variable when copy/pasting that way, and not much upside to having
unique names.
Did final cleanup and fixed a few errors in the integration testing.
* Fixed a bug where results was displaying a wrong value
Fixed a bug where removal was failing if multiple rules existed due to
inheritance from higher level objects.
* Resolved issue with unhandled error when the user didn't have permissions
for Get-Acl.
Changed from setauditrule to addauditrule, see comment in script for reasoning.
Fixed state absent to be able to remove multiple entries if they exist.
* fixed docs issue
* updated to fail, rather than warn, if inheritance_rule is invalid when defining a file
* firewalld: don't reference undefined variable in error case
Signed-off-by: Adam Miller <maxamillion@fedoraproject.org>
* firewalld: don't set exception as var and not use it
Signed-off-by: Adam Miller <maxamillion@fedoraproject.org>
* Move compare_policies and hashable_policy functions into module_utils/ec2
* Use compare_policies which is compatible with python 2 and 3.
* rename function to indicate internal use
* s3_bucket: don't set changed to false if it has had the chance to be changed to true already.
This code originated in module_utils/basic.py which was BSD licensed.
In moving it and making it applicable to other pieces of code that were
using similar functions, I added onto it a little.
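For illustration, a minimal sketch of the order-independent comparison idea behind compare_policies; the helpers below are illustrative, not the exact module_utils/ec2 code:
```
import json


def _hashable_policy(policy):
    """Recursively convert a policy document into a hashable, order-independent form."""
    if isinstance(policy, dict):
        return tuple(sorted((k, _hashable_policy(v)) for k, v in policy.items()))
    if isinstance(policy, list):
        return tuple(sorted(_hashable_policy(item) for item in policy))
    return str(policy)


def compare_policies(current, new):
    """Return True if the two policy documents differ, ignoring key and element order."""
    return _hashable_policy(current) != _hashable_policy(new)


a = json.loads('{"Statement": [{"Action": ["s3:Get*", "s3:List*"], "Effect": "Allow"}]}')
b = json.loads('{"Statement": [{"Effect": "Allow", "Action": ["s3:List*", "s3:Get*"]}]}')
print(compare_policies(a, b))  # False: same policy, only the ordering differs
```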
Module allows you to wait for a bigip device to be
"ready" for configuration. This module will wait for things like
the device coming online as well as the REST API and MCPD being
ready.
If any of the above is not online and ready, no configuration
can be made.
* Add nosh service manager module
* based on the `svc`, `systemd`, `runit` and proposed `rc_service`
modules
* uses the high-level 'system-control' command and assumes nosh-native
interfaces though it should work with daemontools-style service scanning
* assumes a single service name is provided
* Metadata fixes
* Added "author" and "version_added"
* fixed the RETURN yaml
* PEP8 fixes
* fixed spacing issue
The current code flow precludes the use of the policy_path module
parameter that's documented. It's actually called policy_file in the
code.
What's worse is that the policy_file branch actually tries to open the
file named by the policy parameter, even though policy and policy_file
are marked as mutually-exclusive.
This change fixes the logic bug in policy_file and updates the
documentation to reference policy_file. The old parameter policy_path
is provided as an alias.
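A minimal sketch of the intended argument handling; the surrounding module logic here is illustrative, not the actual module code:
```
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            policy=dict(type='json'),
            # documented name, with the old name kept as an alias
            policy_file=dict(type='path', aliases=['policy_path']),
        ),
        mutually_exclusive=[['policy', 'policy_file']],
    )

    if module.params.get('policy_file'):
        # open the file named by policy_file, not the value of policy
        with open(module.params['policy_file']) as f:
            policy_doc = f.read()
    else:
        policy_doc = module.params.get('policy')

    module.exit_json(changed=False, policy=policy_doc)


if __name__ == '__main__':
    main()
```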
* Support 'termination protection' for cloudformation stacks
- Pass in the stack_name and desired termination protection state to update_termination_protection
* Fix for failing cloudformation unit test
* Check if cfn has update_termination_protection attr
* Use hasattr to test if cfn supports update_termination_protection
* termination_protection shouldn't prevent update_stack call for existing stacks
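A minimal sketch of the hasattr guard, assuming a boto3 cloudformation client; the helper name is illustrative:
```
import boto3


def update_termination_protection(cfn, stack_name, desired_state):
    """Toggle termination protection only when the installed botocore supports it."""
    if not hasattr(cfn, 'update_termination_protection'):
        # older botocore: skip rather than failing the whole task
        return
    cfn.update_termination_protection(
        StackName=stack_name,
        EnableTerminationProtection=desired_state,
    )


cfn = boto3.client('cloudformation')
update_termination_protection(cfn, 'my-stack', True)
```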
in ServiceNow
Remove "updated" as a option for state, per review from bcoca. Update
examples section, and tested.
Update metadata to 1.1
Rip out some more instances of updated from documentation.
Update for ansible 2.5 first version
* better cleanup on task results display
callbacks get 'clean' copy of result objects
moved cleanup into result object itself
removed now redundant callback cleanup
moved no_log tests
* moved import as per feedback
- added `role_arn` to the "role example" example
- removed the irrelevant parameters to the "role example" example
- updated comment on one of the examples
- removed the last example as it was a duplicate of "role example" example
- some other minor changes
In this refactor we moved to the most recent coding standards for
both F5 and Ansible. Many bugs were fixed and some features were
also added (such as ipv6 support).
New conventions for ansible warrant fixes to accommodate those
in bigip_partition.
This patch also includes an import fix that can raise an error when
Ansible unit tests run
This adds a new type of vault-password script (a 'client') that takes advantage of and enhances the
multiple vault password support.
If a vault password script basename ends with the name '-client', consider it a vault password script client.
A vault password script 'client' just means that the script will take a '--vault-id' command line arg.
The previous vault password script (as invoked by --vault-password-file pointing to an executable) takes
no args and returns the password on stdout. But it doesn't know anything about --vault-id or multiple vault
passwords.
The new 'protocol' of the vault password script takes a cli arg ('--vault-id') so that it can look up that specific
vault-id and return its password.
Since existing vault password scripts don't know the new 'protocol', a way to distinguish password scripts
that do understand the protocol was needed. The convention now is to consider password scripts that are
named like 'something-client.py' (and executable) to be vault password client scripts.
The new client scripts get invoked with the '--vault-id' they were requested for. An example:
ansible-playbook --vault-id my_vault_id@contrib/vault/vault-keyring-client.py some_playbook.yml
That will cause the 'contrib/vault/vault-keyring-client.py' script to be invoked as:
contrib/vault/vault-keyring-client.py --vault-id my_vault_id
The previous vault-keyring.py password script was extended to become vault-keyring-client.py. It uses
the python 'keyring' module to request secrets from various backends. The plain 'vault-keyring.py' script
would determine which key id and keyring name to use based on values that had to be set in ansible.cfg.
So it was also limited to one keyring name.
The new vault-keyring-client.py will request the secret for the vault id provided via the '--vault-id' option.
The script can be used without config and can be used for multiple keyring ids (and keyrings).
On success, a vault password client script will print the password to stdout and exit with a return code of 0.
If the 'client' script can't find a secret for the --vault-id, the script will exit with return code of 2 and print an error to stderr.
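As an illustration, a minimal vault password client script following the protocol above (it would need a name ending in '-client', e.g. my-vault-pass-client.py); the keyring service name used here is an assumption, not what vault-keyring-client.py actually uses:
```
#!/usr/bin/env python
import argparse
import sys

import keyring


def main():
    parser = argparse.ArgumentParser(description='minimal vault password client sketch')
    parser.add_argument('--vault-id', dest='vault_id', required=True)
    args = parser.parse_args()

    # 'ansible-vault' as the keyring service name is an assumption for this sketch
    secret = keyring.get_password('ansible-vault', args.vault_id)
    if secret is None:
        sys.stderr.write('No secret found for vault-id %s\n' % args.vault_id)
        return 2

    sys.stdout.write('%s\n' % secret)
    return 0


if __name__ == '__main__':
    sys.exit(main())
```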
* documentation was not inline with other Ansible modules
* Python 3 specific imports were missing
* monitor_type is no longer required when creating a new pool; it is now the default.
* A new monitor_type choice of "single" was added for a more intuitive way to specify "a single monitor". It uses "and_list" underneath, but provides additional checks to ensure that you are specifying only a single monitor.
* host and port arguments have been deprecated for now. Please use bigip_pool_member instead.
* 'partition' field was missing from documentation.
* A note that "python 2.7 or greater is required" has been added for those who were not aware that this applies for ALL F5 modules.
* Unit tests were fixed to support the above module
* Correct usage for shutil.rmtree
Fix adds correct usage of shutil.rmtree in git module
Fixes: #31225
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* Include archive tests so they get run
* Use new include syntax
* Cleanup syntax on git tests
- use multi-line YAML
- remove unneeded {{ }} around vars in conditionals
- remove unneeded quotes
- add task file name to task names for easier troubleshooting when things fail
* Make archive tests work for RHEL/CentOS 6
The older versions of Jinja2 in RHEL/CentOS 6 required assertion tasks using the map filter to be skipped.
The older version of git required gzip compression to be skipped on RHEL/CentOS 6.
* Account for ansible_distribution_major_version missing
* Adding a cli transport option for the bigip_command module.
* Fixing keyerror when using other f5 modules. Adding version_added for new option in bigip_command.
* Removing local connection check because the F5 tasks can be delegated to any host that has the libraries for REST.
* Using the network_common load_provider.
* Adding unit test to cover cli transport and updating previous unit test to ensure cli was not called.
* new module: AIX rootvg backup image using mksysb
This module is simple but very useful for AIX system
administrators. Easy to construct playbooks to generate
and manage rootvg backups using mksysb tool.
* added module_check, pep8, non-written convention
- implemented module_check;
- fixed some pep8 and unwritten-convention issues
* removed parameters as global variables and doc
Moved global parameter variables inside main()
Better doc format for mentioned files
* wait_for: treat broken connections as "unready"
We have observed the following condition while waiting for hosts:
```
Traceback (most recent call last):
File "/var/folders/f8/23xp00654plcv2b2tcc028680000gn/T/ansible_8hxm4_/ansible_module_wait_for.py", line 585, in <module>
main()
File "/var/folders/f8/23xp00654plcv2b2tcc028680000gn/T/ansible_8hxm4_/ansible_module_wait_for.py", line 535, in main
s.shutdown(socket.SHUT_RDWR)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 57] Socket is not connected
```
This appears to happen while the host is still starting; we believe something is
accepting our connection but immediately resetting it. In these cases, we'd
prefer to continue waiting instead of immediately failing the play.
This patch has been applied locally for some time, and we have seen no adverse
effects.
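A minimal sketch of the behaviour change (names are illustrative, not the actual wait_for code): a connection that is accepted but immediately reset is treated as "not ready yet" rather than a failure:
```
import socket


def port_is_ready(host, port, connect_timeout=5):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(connect_timeout)
    try:
        s.connect((host, port))
        s.shutdown(socket.SHUT_RDWR)
    except socket.error:
        # refused, or accepted and immediately reset while the host is still
        # starting up: keep waiting instead of failing the play
        return False
    finally:
        s.close()
    return True
```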
* wait_for: fixup change
We were missing an import and a space after the `#`
##### SUMMARY
Creating the modules for enos from Lenovo.
##### ISSUE TYPE
- New Module Pull Request
##### COMPONENT NAME
<!--- Name of the module/plugin/module/task -->
lib/ansible/modules/network/enos/__init__.py
##### ANSIBLE VERSION
ansible 2.5.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules',
u'/usr/share/ansible/plugins/modules']
ansible python module location =
/usr/local/lib/python2.7/dist-packages/ansible-2.5.0-py2.7.egg/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4]
##### ADDITIONAL INFORMATION
This is an effort to create modules for enos based switches from Lenovo.
* Create action file enos_facts.py
* Update and rename enos_facts.py to enos.py
* Taking chances on dealing with unstable issues
* Removing blank space / blank line
An incorrect removal of a conditional resulted in include_tasks falling
through to the old static detection mechanism incorrectly. This restores
the previous conditional check.
Fixes #31593
This change makes the PluginLoader use the DEFAULT_INVENTORY_PLUGIN_PATH setting.
Inventory plugins were only being loaded from the 'inventory_plugins' folder of the current directory,
as well as from the ansible-provided inventory plugins (e.g. `/path/to/site-packages/ansible/plugins/inventory`).
* zabbix_host: added zabbix host property (description)
* zabbix_host: fixed error E309 version_added for new option (description) should be 2.4
* zabbix_host: deleted unwanted else for update description
* lib/ansible/modules/monitoring/zabbix_host.py: increased version_added to 2.5 for option description
* lib/ansible/modules/monitoring/zabbix_host.py: allow to change the description
* lib/ansible/modules/monitoring/zabbix_host.py: added new lines back to fix pep8 issues
* Initial CD-ROM support
* create cdrom bugfix
* Improving CDROM change detection and fixing template creation bug
Running MarkAsTemplate on an existing template will fail with an error
* Better change detection for guest ID
Should only mark a change in case it actually changes
* Adding integration tests
* Pep8 compliance fixes
* Adding CDROM support, including iso, client and none types
* Updating added release version for CDROM option
* Fix rollback in junos_config
Fixes #30778
* Call `load_configuration` with rollback id in case
the id is given as input
* Pass rollback id to `get_diff()` to fetch diff from device
* Fix unit test
* fixed module generation
added missing lookup page
point to plugins when plugins
made modules singular
add display for verbose and debug messages
nicer templating, changed generation order for ref
corrected links
moved most of lookup docs to plugin section
* Copy edits
* Fixed typos
* Clarified wording
- old functionality is still available via direct lookup use; the following are equivalent
with_nested: [[1,2,3], ['a','b','c']]
loop: "{{lookup('nested', [1,2,3], ['a','b','c'])}}"
- avoid squashing with 'loop:'
- fixed test to use new internal attributes
- removed most of 'lookup docs' as these now reside in the plugins
* Module option metadata are extra arguments rather than S3 object metadata: update ExtraArgs variable.
* Remove hyphens from ExtraArgs to maintain backwards compatibility
* Map lowercase extra args to CamelCase
* Maintain backwards compatibility by guessing at content type rather than always defaulting to binary/octet-stream.
* Fix ExtraArgs for non-hyphenated options
* Simplify logic
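A minimal sketch of the key mapping and content-type guessing described above; which keys the s3 module actually accepts this way is an assumption here:
```
import mimetypes


def build_extra_args(metadata, filename):
    """Turn lowercase/hyphenated metadata options into boto3-style ExtraArgs keys."""
    extra = {}
    for key, value in metadata.items():
        # 'content-encoding' / 'content_encoding' -> 'ContentEncoding'
        camel = ''.join(part.capitalize() for part in key.replace('-', '_').split('_'))
        extra[camel] = value

    if 'ContentType' not in extra:
        # guess rather than always defaulting to binary/octet-stream
        guessed, _ = mimetypes.guess_type(filename)
        if guessed:
            extra['ContentType'] = guessed
    return extra


print(build_extra_args({'content-encoding': 'gzip'}, 'report.json'))
# {'ContentEncoding': 'gzip', 'ContentType': 'application/json'}
```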
* Remove sysctl entry when state=absent
* Cleanup sysctl integration test syntax
* Correct grammar on error message
* Add sysctl integration test for state=absent
* [rpm_key] Fix to import first key on the system
Fixes: #31483
* [rpm_key] removed unsafe_shell and "throwaway" underscore
* [rpm_key] adding test to add the first key on system
* Added warning for 'force' option
* Changed 'profiles' type to list
* Changed 'interfacetypes' type to list
* Added deprecation warning and fixed doc
* updated force parameter
* win_become: Added support to become a service user
* fixes for linting
* changes to get local and network service working
* fixed linting issues again
* pleasing pep8
- new module: ssm_parameter_store
- new lookup: ssm
* lookup module ssm - adjust error message
* Pacify pylint erroring on botocore not found
* adjust to version 2.5
this should allow users to control how they want the playbook dirs inspected
for additional vars; the default now reverts to 2.3 behaviour (top).
corrected paths order
minor doc reword
* Properly handle user selection of `None` as vars_files
In a playbook, if a user has a playbook like:
```
- hosts: localhost
connection: local
vars_files:
tasks:
- ....
```
Then `vars_files` will be None, and cause a `TypeError` in the vars manager when it
tries to iterate over them. To avoid this, I changed the getter to either send
back the vars files from the user, or an empty list when the user passed
`None`.
* Only replace None with an empty list, not all falsey values
* Catch error when vars_files isn't iterable
* Move whole `for` loop into try/except and catch TypeError
* Line length
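A minimal sketch of the getter change plus the defensive loop; the class and attribute names are illustrative, not the actual Play/variable manager code:
```
class Play(object):
    def __init__(self, vars_files=None):
        self._vars_files = vars_files

    def get_vars_files(self):
        # only replace None with an empty list, not every falsey value
        if self._vars_files is None:
            return []
        return self._vars_files


def load_vars_files(play):
    loaded = []
    try:
        for vars_file in play.get_vars_files():
            loaded.append(vars_file)
    except TypeError:
        raise ValueError('vars_files must be a list of file names, got %r' % (play.get_vars_files(),))
    return loaded


print(load_vars_files(Play(vars_files=None)))  # []
```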
this was abandoned early on the manager side but it seems we left it behind on the plugin side.
more flexible extensions with the yaml plugin
validate data correctly for yaml/constructed
fixed issue with only adding one child to keyed groups: the group only got the host that forced its creation
Fixes #31382, fixes #31365
* Addition of TCP protocol to ELB target group as target groups support HTTP/S and TCP now
* Fixup stickiness type so that it checks if the current_tg has the stickiness_type key in the dict, as TCP ones do not
Trying to associate an already-associated ElasticIP was failing.
This is however supported by the `boto` method that is used
under the hood, `associate_address`:
To quote `boto` documentation:
```
This option to allow an Elastic IP address that is already
associated with another network interface or instance to be
re-associated with the specified instance or interface.
```
This defaults to False, both per backwards-compatibility
and to mirror the boto default value.
Fixes #27385
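A minimal sketch of passing the boto flag through; the connection setup and helper name are illustrative:
```
import boto.ec2


def associate_eip(conn, instance_id, allocation_id=None, allow_reassociation=False):
    # allow_reassociation defaults to False, mirroring boto and keeping the
    # previous behaviour for existing playbooks
    return conn.associate_address(instance_id=instance_id,
                                  allocation_id=allocation_id,
                                  allow_reassociation=allow_reassociation)


conn = boto.ec2.connect_to_region('us-east-1')
associate_eip(conn, 'i-0123456789abcdef0', allocation_id='eipalloc-12345678',
              allow_reassociation=True)
```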
* Set desired capacity to min_size if no instances exist
* Improve readability of if/then clause
* Only update null desired_capacity to min_size on initial create
Any future updates to the ASG will be able to reference the existing
capacity.
* Make ansible_selinux facts a consistent type
Rather than returning a bool if the Python library is missing, return a dict with one key containing a message explaining there is no way to tell the status of SELinux on the system because the Python library is not present.
* Fix unit test
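A minimal sketch of the consistent fact shape; the exact key name and message in the facts code are assumptions here:
```
def selinux_facts():
    facts = {'selinux': {}}
    try:
        import selinux
    except ImportError:
        # previously this was just `facts['selinux'] = False`
        facts['selinux'] = {'status': 'Missing selinux Python library'}
        return facts

    if selinux.is_selinux_enabled():
        facts['selinux'] = {'status': 'enabled',
                            'policyvers': selinux.security_policyvers()}
    else:
        facts['selinux'] = {'status': 'disabled'}
    return facts
```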
This reverts commit f8005d2737.
The fix needs to be rethought, as it applies only to newer git versions,
and use of the env shell breaks with non 'bourne compatible' shells.
* Allow any_errors_fatal to be set in playbook.
* Default to the config file value for any_errors_fatal only if it isn't already provided.
* add _get_attr method
Make sure that example in docs is usable:
# Remove storage domain
- ovirt_storage_domains:
state: absent
name: mystorage_domain
format: true
Without this PR the data_center and host parameters were required when we wanted to
remove a storage domain.
Also fixes a regression when trying to remove a detached
storage domain.
The following patch fixes a regression when trying to remove a detached
storage domain.
As part of the remove process the ovirt_storage_domains module first
tries to move the domain to maintenance and detach it.
In case of removing a detached storage domain with no DC attached to it,
the maintenance process will fail with a 404 (not exists) exception when
trying to fetch the DC using an empty Guid.
The fix is to return None in the case of a detached
storage domain.
* Add update_only parameter for yum module
When using latest, `update_only: yes` will ensure that only existing
packages are updated and no additional packages are installed.
* Update yum.py
Update version added for `update_only` parameter to 2.5
* add unit tests for update_only flag in yum module
* Add new lines to end of config file lines
* Properly write out selinux config file
Change module behavior to not always report a change but warn if a reboot is needed and return reboot_required.
Improve the output messages.
Add strip parameter to get_file_lines utility to help with parsing the selinux config file.
* Add return documentation
* Add integration tests for selinux module
* Use consistent capitalization for SELinux
* Use atomic_move in selinux module
* Don't copy the config file initially
There's no need to make a copy just for reading.
* Put message after set_config_policy in case the change fails
* Add aliases to selinux tests
* win_become: move error handling to Ansible outside of shell
* trimmed the output so double newlines don't get set
* added test for non-zero exit code
* missed issue URL on test
* changed exit to SetShouldExit
The /etc/os-release based distro detection doesn't
seem to work for Ubuntu 10.04 (no /etc/os-release?).
So it was testing the next case which was /etc/lsb-release to
see if it is 'Mandriva'. Since the check for existence of
(/etc/lsb-release, Mandriva) was the first non-empty dist
file match, 'ansible_distribution' was being set to 'Mandriva'
expecting to be corrected by the data from the dist file content.
But since the dist file parsing for Mandriva didn't match for
Ubuntu 10.04 /etc/lsb-release _and_ there is no Debian specific
lsb-release check, 'ansible_distribution' stayed at 'Mandriva'
and the dist file checking loop keeps going and eventually off
the end of the list before finding a better match.
Adding a debian/ubuntu specific check for /etc/lsb-release after
the debian os-release sets the info correctly and stops further
checking of dist files.
Fixes #30693
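A minimal sketch of the Debian-family /etc/lsb-release fallback described above; the helper name is illustrative, not the actual facts code:
```
import os


def parse_debian_lsb_release(path='/etc/lsb-release'):
    """Return (distribution, release) from lsb-release, or None if it doesn't apply."""
    if not os.path.exists(path):
        return None
    info = {}
    with open(path) as f:
        for line in f:
            if '=' in line:
                key, _, value = line.strip().partition('=')
                info[key] = value.strip('"')
    if info.get('DISTRIB_ID') not in ('Ubuntu', 'Debian'):
        return None
    return info.get('DISTRIB_ID'), info.get('DISTRIB_RELEASE')
```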
'distribution' facts were being set after checking
the existence of the dist file, and then being set
again with more detail after they were successfully parsed.
But if the dist file was not successfully parsed and
matched the required names, the loop continues
without resetting the earlier set facts. This is
how 'Mandriva' would end up being the 'distribution'
fact for unrelated cases (it would find /etc/lsb-release,
set distro to 'Mandriva', then fail to parse/match and
continue the loop. If no other checks worked, 'Mandriva'
would stick).
* parse_dist_file_NA should check 'name' not distro for NA
parse_distribution_file_NA was checking the incoming
'distribution' fact to be 'NA', but the fact itself can
be specific at that point ('KDE Neon', for example); the
check should really be whether the 'name' it was passed is NA.
* for matches on OS_RELEASE_ALIAS (ie, 'Archlinux') do
not continue if the dist file content doesn't match. Previously
it had to because of the 'Mandriva' bug mentioned above.
This is a more general fix for #30693 than #30723
Fixes #30693
Related to #30600
In cli.CLI.unfrack_path callback, special case if the
value of '--output' is '-', and avoid expanding
it to a full path.
vault cli already has special cases for '-', so it
just needs to get the original value to work.
Fixes #30550
get_config would use ConfigManager.get_ini_value which does not
exist. What we are meant to use is
ansible.config.manager.get_ini_config_value and this method does not
expect a list, only a dictionary with a section and a key.
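For reference, a minimal sketch of calling the helper with a dict; the ini content here is illustrative:
```
import configparser  # Python 3 for this sketch

from ansible.config.manager import get_ini_config_value

parser = configparser.ConfigParser()
parser.read_string('[defaults]\nremote_user = deploy\n')

# a dict with a section and a key, not a list of entries
value = get_ini_config_value(parser, {'section': 'defaults', 'key': 'remote_user'})
print(value)  # deploy
```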
This PR addresses two issues:
1. The hg module was added to command module's check_command list,
so if someone runs hg directly from the command module, the command
module would warn the user "Consider using hg module rather than running hg".
We address this by removing hg from the list.
2. We added a new note to tell users the push feature will be addressed
in issue #31156.
* Added support to retrieving LIG resources in HPE OneView
* Fixing copyright header according to review
* Swapping out config for full credentials in parameter for documentation
* Added support to retrieving Enclosures in HPE OneView
- Added unit tests
* Updated version_added to 2.5
* Changing return type of enclosure_script to string
* Fixing copyright header according to review
* Replaced config for credentials in parameters for documentation
Fix adds a new module 'vmware_guest_powerstate' to manage
power states of virtual machine.
Fixes: #30371
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
As part of the absent state of the ovirt_storage_domains module,
the pre_remove method tries to move the storage domain to
maintenance and detach it.
In case a destroy of a storage domain is being called there is no need
for those operations since the destroy might be merely a DB operation.
vm_username and vm_password are required parameters in
vmware_vm_shell. Fix adds changes to documentation as well.
Fixes: #28266
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* module_utils.urls - Encode the proxy connect as binary
Under Python3 the sendall method expects binary not a string.
Prior to this change the below exception was being thrown:
Traceback (most recent call last):
File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 1044, in fetch_url
client_key=client_key, cookies=cookies)
File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 951, in open_url
r = urllib_request.urlopen(*urlopen_args)
File "/opt/blue-python/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/opt/blue-python/3.6/lib/python3.6/urllib/request.py", line 524, in open
req = meth(req)
File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 729, in http_request
s.sendall((self.CONNECT_COMMAND % (self.hostname, self.port)).decode())
AttributeError: 'str' object has no attribute 'decode'
Encoding the value is in line with the lines below (Proxy-Authorization etc) which are being sent as binary.
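A minimal sketch of the fix; the function and constant here are illustrative, not the actual module_utils.urls handler:
```
CONNECT_COMMAND = 'CONNECT %s:%s HTTP/1.0\r\n'


def send_proxy_connect(sock, hostname, port):
    # Under Python 3 sendall() wants bytes, so encode the request line, in line
    # with the Proxy-Authorization lines that are already sent as binary
    sock.sendall((CONNECT_COMMAND % (hostname, port)).encode())
    sock.sendall(b'\r\n')
```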