* Add new Fedora docker images with Python 3.
* Use consistent env var for lookup test.
* Fix testing of virtualenv with Python 3.
* Fix docker_secret tests on Fedora 26.
* Add Python 3 support to Fedora postgresql test.
* Add Python 3 support to Fedora mysql tests.
* Fix uri test server for Python 3 on Fedora.
* Fix iso_extract test for Python 3 on Fedora.
* Add Python 3 support for Fedora to openssl tests.
* Fix dnf group test for Python 3 on Fedora.
* Use force with user deletion in become test.
* Reimplement iso_extract using 7zip (not requiring root)
One of the drawbacks of the original implementation is that it required
root for mounting/unmounting the ISO image.
This is no longer needed, as we now use 7zip to extract files from the
ISO (see the usage sketch after this entry).
* Fall back to using mount/umount if 7zip not found
As discussed with others.
Also improved integration tests.
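A minimal usage sketch of the 7zip-based extraction described above; the image path, destination, and file name are hypothetical:

  - iso_extract:
      image: /tmp/example.iso        # hypothetical ISO path
      dest: /tmp/extracted           # hypothetical destination directory
      files:
        - isolinux/isolinux.cfg      # hypothetical file inside the ISO

Because extraction goes through 7zip rather than a loop mount, no root privileges are needed; if 7zip is not available, the module falls back to mount/umount.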
The test assumes the node has the hostname set as the inventory_hostname_short.
That's not the case in our CI, where the inventory_hostname is a UUID
returned by the openstack dynamic inventory.
We are getting this error message:
"Advertisement-interval should be greater than or equal to four times the tx-delay".
Changing transmit delay to 2 meets that constraint.
* Add aggregate for junos modules and sub spec validation
* aggregate support of junos modules
* aggregate sub spec validation
* relevant changes to junos integration test
* junos module boilerplate changes
* Add new boilerplate for junos modules
* Fix CI issues
* Add 'cacheable' param to set_fact action and module.
Used just like a normal set_fact, except facts set with cacheable: true
will be stored in the fact cache if fact caching is enabled (see the
example after this entry). set_fact normally only sets facts in the
non_persistent_fact_cache, so they are lost between invocations.
* update set_fact docs
* use 'ansible_facts_cacheable' in module/actions result
* pop fact cacheable related items out of args/results
We don't want to use the 'ansible_facts_cacheable' result item
or the 'cacheable' arg as actual facts, so pop them out of the
dicts.
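A minimal playbook sketch of the cacheable behaviour described above; the fact name and value are hypothetical:

  - set_fact:
      frontend_port: 8080    # hypothetical fact set by the play
      cacheable: true        # persisted to the fact cache when fact caching is enabled

Without cacheable: true the fact only lives in the non-persistent fact cache and is lost between runs.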
Previously gather_subset=['!all'] would still gather the
minimal set of facts, and there was no way to collect no facts.
Excluding 'min' in gather_subset excludes the
minimal_gather_subset facts as well.
gather_subset=['!all', '!min'] will collect no facts
This also lets explicitly added gather_subsets override excludes.
gather_subset=['pkg_mgr', '!all', '!min'] will collect only the pkg_mgr
fact.
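A sketch of the new behaviour using the setup module; the task layout is illustrative:

  - setup:
      gather_subset:
        - '!all'
        - '!min'       # collects no facts at all

  - setup:
      gather_subset:
        - pkg_mgr
        - '!all'
        - '!min'       # collects only the pkg_mgr fact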
Create a preserved_copy function in basic.py to preserve file ownership.
* Add a test for template preserved backup
* Use a script to get the random names
* bytes to strings
* Remove dump of hostvars
* Stop being fancy and create a testuser instead
* Fix pep8
* set file attributes
* Pass the correct data to set_attributes_if_different
* Use -j instead -b and pass the attributes as a string instead of a list
* remove debugging message
* Use shell to softly set the attr
Fixes #24408
Got removed in arg parsing updates. Now added back in
setup_vault_secrets().
The default value for DEFAULT_VAULT_PASSWORD_FILE was also
set to '~' for some reason; changed to no default.
Add integration tests.
* ios implementation for net_interface
* ios_interface implementation
* ios_interface integration test
* net_interface integration test for ios and other refactor
* Update boilerplate and minor refactor
* Add 2.0-2.3 facts api compat (ansible_facts(), get_all_facts())
These are intended to provide compatibility for modules that
use 'ansible.module_utils.facts.ansible_facts' and
'ansible.module_utils.facts.get_all_facts' from 2.0-2.3 facts
API.
Fixes #25686
Some related changes/fixes needed to provide the compat api:
* rm ansible.constants import from module_utils.facts.compat
Just use a hard-coded default for gather_subset/gather_timeout
instead of trying to load it from non-existent config if the
module params don't include it.
* include 'external' collectors in compat ansible_facts()
* Add facter/ohai back to the valid collector classes
facter/ohai had gotten removed from the default_collectors
class used as the default list for all_collector_classes by
setup.py and compat.py
That made gather_subset=['facter'] fail.
* iosxr implementation for net_interface
* iosxr_interface implementation
* Add integration test
* iosxr_interface integration test
* net_interface integration test for iosxr
* update boilerplate
* Add tests for group in a VPC
* Improve ec2_group output and documentation
Update ec2_group to provide full security group information
Add RETURN documentation to match
* Fix ec2_group creation within a VPC
Ensure VPC ID gets passed when creating security group
* Add test for auto creating SG
* Fix ec2_group auto group creation
* Add backoff to describe_security_groups
Getting LimitExceeded from describe_security_groups is definitely
possible (source: me), so add backoff to increase the likelihood of
success.
To ensure that all `describe_security_group` calls are backed off,
remove implicit ones that use `ec2.SecurityGroup`. From there,
the decision to remove the `ec2` boto3 resource and rely on the client
alone makes good sense.
* Tidy up auto created security group
Add resource_prefix to auto created security group and delete
it in the `always` section.
Use YAML argument form for all module parameters
* win_service: added support for paused services (see the example after this entry)
* change pausable service for local computers
* more fixes for older hosts
* sigh
* skip pause tests for Server 2008 as it relies on the service
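A minimal sketch of the paused-service support; the service name is hypothetical and must refer to a service that can be paused:

  - win_service:
      name: SomePausableService    # hypothetical service name
      state: paused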
* set output_dir_expanded using module result
'path' values are expanded using 'expandvars' too
* foo.txt is located in 'files' directory
* Use 'role_path' and 'connection: local' for local paths
'{{ role_path }}/tmp' is used for generated paths
* Use local connection with local paths
/tmp/ansible-test-abs-link and /tmp/ansible-test-abs-link-dir are
defined by targets/copy/files/subdir/subdir1/ansible-test-abs-link
and targets/copy/files/subdir/subdir1/ansible-test-abs-link-dir links.
* task names: add a suffix when same name is reused
* Check that item exists before checking file mode
so the error message is more explicit when the item doesn't exist
* Use output_dir_expanded only when necessary
* Enforce remote_user when root is required
* Fix remote path
* Use different local & remote user
this is useful when controller and managed hosts are identical
* Checks must not expect output of tested module to be right
* Use a temporary directory on the controller
* Use sha1 & md5 filters instead of hardcoded values
* Use 'remote_dir' for directory on managed host
* Workaround tempfile error on OS X
Error was:
temp_path = tempfile.mkdtemp(prefix='ansible_')
AttributeError: 'module' object has no attribute 'mkdtemp'
* initial commit for win_group_member module
* fix variable name change for split_adspath
* correct ordering of examples/return data to match documentation verbiage
* change tests setup/teardown to use a new group rather than an inbuilt group
* prepare_ovs call gather facts
As we are no longer using run_ovs_integration_tests.yml we need to
explicitly gather facts so we can call the correct package manager.
* typo
Absolute path trailing slash handling in absolute directories
find_needle() isn't passing a trailing slash through verbatim. Since
copy uses that to determine if it should copy a directory or just the
files inside of it, we have to detect that and restore it after calling
find_needle()
Fixes #27439
Fixes #13243
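A sketch of the trailing-slash behaviour this fix preserves; the paths are hypothetical:

  - copy:
      src: /srv/app/conf.d/    # trailing slash: only the contents of conf.d are copied
      dest: /etc/app/conf.d/

  - copy:
      src: /srv/app/conf.d     # no trailing slash: the conf.d directory itself is copied into dest
      dest: /etc/app/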
** Add --vault-id to name/identify multiple vault passwords
Use --vault-id to indicate id and path/type
--vault-id=prompt # prompt for default vault id password
--vault-id=myorg@prompt # prompt for a vault_id named 'myorg'
--vault-id=a_password_file # load ./a_password_file for default id
--vault-id=myorg@a_password_file # load file for 'myorg' vault id
vault_id's are created implicitly for existing --vault-password-file
and --ask-vault-pass options.
Vault ids are just for UX purposes and bookkeeping. Only the vault
payload and the password bytestring are needed to decrypt a
vault blob.
Replace passing password around everywhere with
a VaultSecrets object.
If we specify a vault_id, mention that in password prompts
Specifying multiple --vault-password-file options will
now try each until one works.
** Rev vault format in a backwards compatible way
The 1.2 vault format adds the vault_id to the header line
of the vault text. This is backwards compatible with older
versions of ansible. Old versions will just ignore it and
treat it as the default (and only) vault id.
Note: only 2.4+ supports multiple vault passwords, so while
earlier ansible versions can read the vault-1.2 format, it
does not make them magically support multiple vault passwords.
use 1.1 format for 'default' vault_id
Vaulted items that need to include a vault_id will be
written in 1.2 format.
If we set a new DEFAULT_VAULT_IDENTITY, then the default will
use version 1.2
vault will only use a vault_id if one is specified. So if none
is specified and C.DEFAULT_VAULT_IDENTITY is 'default'
we use the old format.
** Changes/refactors needed to implement multiple vault passwords
raise exceptions on decrypt fail, check vault id early
split out parsing the vault plaintext envelope (with the
sha/original plaintext) to _split_plaintext_envelope()
some cli fixups for specifying multiple paths in
the unfrack_paths optparse callback
fix py3 dict.keys() 'dict_keys object is not indexable' error
pluralize cli.options.vault_password_file -> vault_password_files
pluralize cli.options.new_vault_password_file -> new_vault_password_files
pluralize cli.options.vault_id -> cli.options.vault_ids
** Add a config option (vault_id_match) to force vault id matching.
With vault_id_match=True and an ansible
vault that provides a vault_id, decryption will require
a matching vault_id (via
--vault-id=my_vault_id@password_file, for example).
In other words, if the config option is true, then only
the vault secrets with matching vault ids are candidates for
decrypting a vault. If the option is false (the default), then
all of the provided vault secrets will be selected.
If a user doesn't want every vault secret to be tried against
every piece of vault content, they can enable this option.
Note: The vault id used for the match is not encrypted or
cryptographically signed. It is just a label/id/nickname used
for referencing a specific vault secret.
* Revert "Update conventions in azure modules"
This reverts commit 30a688d8d3.
* Revert "Allow specific __future__ imports in modules"
This reverts commit 3a2670e0fd.
* Revert "Fix wildcard import in galaxy/token.py"
This reverts commit 6456891053.
* Revert "Fix one name in module error due to rewritten VariableManager"
This reverts commit 87a192fe66.
* Revert "Disable pylint check for names existing in modules for test data"
This reverts commit 6ac683ca19.
* Revert "Allow ini plugin to load file using other encoding than utf8."
This reverts commit 6a57ad34c0.
- New option for ini plugins: encoding
- Add a new 'encoding' option to _get_file_contents
- Use the 'replace' option in test/runner/lib/util.py when calling decode on
stdout/stderr output, in case the diff contains non-utf8 sequences
* changed collection arg to aggregate on 2.4 network modules
* replace users with aggregate in eos_user, junos_user, nxos_user
* added version_added to places where we replaced users with aggregate in the docs
* fix ios_static_route test
* update tests to reference aggregate instead of collection/users
* Nuage module and unit tests with requested changes
* Cleanup of imports
* Adding check on python version
* Adding try/except wrappers around imports
* Cleanup of requirements and adding integration tests
* Using pypi package for simulator
* Cleanup of requirements and adding integration tests
* Adding aliases for integration tests
* Adding module to import sanity test skip list
* Revert "Adding module to import sanity test skip list"
This reverts commit eab23af8c5ca7c503af63c05610b5db66d31fae4.
* Adding check for importlib and cleanup of requirements
The crypto namespace contains the openssl modules. It currently has no
integration testing.
This commit aims to add integration tests for the crypto namespace.
This will make it easier to spot breaking changes in the future.
These tests currently apply to the following modules (a usage sketch follows the list):
* openssl_privatekey
* openssl_publickey
* openssl_csr
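A minimal sketch of one of the covered modules, openssl_privatekey; the key path is hypothetical:

  - openssl_privatekey:
      path: /etc/ssl/private/example.pem    # hypothetical key path
      size: 4096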
* vmware_host: Small fixes and docs updates
This PR includes:
- A fix to no longer require a datacenter folder for adding a host
- Documentation improvements
- Ensure imports are specific
* Update vmware_host
The fix adds the following:
* Update logic in vmware_host
* Update example documentation
* Added test case for vmware_host
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
Sometimes MacOSX's pwd doesn't return an expanded path. Not sure why,
but this test is still valid if we expand it via a playbook filter, so
go ahead and do that.
* We need a directory walker that can handle symlinks, empty directories,
and some other odd needs. This commit contains a directory walker that
can do all that. The walker returns information about the files in the
directories that we can then use to implement different strategies for
copying the files to the remote machines.
* Add a local_follow parameter to copy that follows local symlinks
('follow' applies to remote symlinks); see the example after this entry
* Refactor the copying of files out of run into its own method
* Add new integration tests for copy
Fixes #24949
Fixes #21513
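A sketch of the new local_follow parameter; the paths are hypothetical:

  - copy:
      src: files/tree-with-symlinks/    # hypothetical local directory containing symlinks
      dest: /opt/app/
      local_follow: yes                 # dereference symlinks on the controller; 'follow' covers remote symlinks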
* add unit test: nested dynamic includes
* nested dynamic includes: avoid AnsibleFileNotFound error
Error was:
Unable to retrieve file contents
Could not find or access 'include2.yml'
Before 8f758204cf, at the end of the
'path_dwim_relative' method, the 'search' variable contained, amongst
other paths,
'/tmp/roles/testrole/tasks/tasks/included.yml' and
'/tmp/roles/testrole/tasks/included.yml'.
The commit mentioned above removed the last one, even though the method
docstring specifies 'with or without explicitly named dirname subdirs'.
* add integration test: nested includes
* Ensure that include_role properly fires handlers
include_role needs to ensure that any handlers included
with the role are added to the _notified_handler and
_listening_handler lists of the TaskQueueManager, otherwise
it fails when trying to run the handler.
Additionally, the handler needs to be added to the
PlayIterator's `_uuid_cache` or it fails after running
the handler
Add more uuid debug statements - this code was hard
to debug with existing debug statements, so add more
uuid information at little additional output cost.
Fixes #18411
* Add tests for include_role handlers
Tests for #18411
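A minimal sketch of the scenario fixed above, assuming a hypothetical role named 'myrole' whose tasks notify a handler defined in its handlers/main.yml:

  - hosts: all
    tasks:
      - include_role:
          name: myrole    # hypothetical role; its notified handlers now fire correctly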
* Only use `git verify-tag` when verifying annotated tags
The command `git verify-tag` only applies to annotated tags. When
verifying lightweight tags, which are more similar to non-moving
branches, one has to use `git verify-commit` instead.
Using ':' as a separator is appropriate since that is one of the
characters not allowed in a Git reference name.
See also https://www.kernel.org/pub/software/scm/git/docs/git-check-ref-format.html
* Improve testing of the Git module's gpg verification
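A sketch of the module-level usage that exercises this path; the repository URL and tag are hypothetical:

  - git:
      repo: https://example.com/project.git    # hypothetical repository
      dest: /srv/project
      version: v1.0.0          # if this is a lightweight tag, 'git verify-commit' is used
      verify_commit: yes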
* Add default description string to vyos_interface
* If `state=up` it should remove the `disable` configuration
for the interface. However, if no other interface parameter is configured,
this ends up deleting the interface itself, which is not the desired
behaviour. Hence a default description field is added to avoid such
scenarios (see the example after this entry).
* Minor changes
* Add default description to aggregate
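A minimal sketch of the case described above; the interface name is hypothetical:

  - vyos_interface:
      name: eth1       # hypothetical interface
      state: up        # clears 'disable'; the default description keeps the interface from being deleted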
The 'admin' word was being masked by Ansible as a potential credential.
Let's just use network-operator, since here we are just testing that
we can create users in aggregate.
* Use Boto3 for ec2_group
Currently boto doesn't support ipv6. To support ipv6 in ec2_group, we need boto3.
boto3 has significant API changes, which required more refactoring of the ec2_group module.
Added additional integration test to test_ec2_group role.
* Follow the Ansible standards for boto3
Fixed imports. Use the Ansible boto3 exception handling with camel_dict_to_snake_dict.
Refactored the calls to authorize/revoke ingress and egress.
* Removed dependency on the ipaddress module
Added a new parameter called cidr_ipv6 for specifying
IPv6 addresses, in line with how boto3 handles IPv6 addresses
(see the example after this entry).
* Updated integration test
* Added ipv6 integration test for ec2_group
* Set purge_rules to false for integration test
* Fixed import statements
Added example for ipv6.
Removed defining HAS_BOTO3 variable and import HAS_BOTO3 from ec2.
Cleaned up import statements.
* Fixed exception handling
* Add IAM permissions for ec2_group tests
The AuthorizeSecurityGroupEgress permission was missing and is necessary for the latest tests
* Wrapped botocore import in try/except block
Import just botocore to be more similar to other modules
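A sketch of the new cidr_ipv6 rule parameter; the group name, VPC id, and IPv6 prefix are placeholder values:

  - ec2_group:
      name: example-sg                   # hypothetical group name
      description: example security group
      vpc_id: vpc-0123456789abcdef0      # hypothetical VPC id
      rules:
        - proto: tcp
          from_port: 443
          to_port: 443
          cidr_ipv6: "2001:db8::/32"     # documentation prefix used as a placeholder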
* junos implementation for net_l3_interface module
* junos_l3_interface implementation
* junos_l3_interface integration test
* net_l3_interface integration test for junos
* Fix module name typo
* Once smoketest is implemented in CI, this will run a much smaller set of Windows module integration tests for changes to module_utils or connection subsystems
* win_secedit: Added module with tests/diff mode
* fixed up test issues
* Added missing return value
* change for win_secedit based on review
* updated win_security_policy examples for rename
These integration tests were used for testing the exact behaviour of
Ansible for YAML-style syntax and key=value syntax.
This includes fixes to win_shortcut (as `src` can be a URL too)
* win_regedit: rewrite to support edge cases and fix issues
* fix up byte handling of single bytes and minor doc fix
* removed unused method
* updated with requested changes
* vyos implementation for net_interface module
* vyos_interface implementation module
* vyos_interface integration test
* net_interface integration test for vyos
* Change collection to aggregate
* Fix the editable condition in the pip module (#19028)
* Add editable to tests
The default changed to False, so editable: True now needs to be set
explicitly in tests.
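A sketch of a test task that now sets editable explicitly; the VCS URL is hypothetical:

  - pip:
      name: "git+https://github.com/example/project#egg=project"    # hypothetical VCS URL
      editable: true    # must be set explicitly now that the default is False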