* Add new Fedora docker images with Python 3.
* Use consistent env var for lookup test.
* Fix testing of virtualenv with Python 3.
* Fix docker_secret tests on Fedora 26.
* Add Python 3 support to Fedora postgresql test.
* Add Python 3 support to Fedora mysql tests.
* Fix uri test server for Python 3 on Fedora.
* Fix iso_extract test for Python 3 on Fedora.
* Add Python 3 support for Fedora to openssl tests.
* Fix dnf group test for Python 3 on Fedora.
* Use force with user deletion in become test.
* Reimplement iso_extract using 7zip (not requiring root)
One of the drawbacks of the original implementation was that it required root for mounting/unmounting the ISO image.
That is no longer needed, as we now use 7zip to extract files from the ISO.
* Fall back to using mount/umount if 7zip not found
As discussed with others.
Also improved integration tests.
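For reference, a minimal iso_extract task now looks roughly like this (image path and file names are hypothetical):

```yaml
- name: Extract a couple of files from an ISO without mounting it
  iso_extract:
    image: /tmp/example.iso      # hypothetical path to the ISO
    dest: /tmp/iso_contents      # hypothetical destination directory
    files:
      - isolinux/isolinux.cfg
      - isolinux/grub.conf
```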
The test assumes the node has the hostname set as the inventory_hostname_short.
That's not the case in our CI, where the inventory_hostname is a UUID returned
by the OpenStack dynamic inventory.
We are getting this error message:
"Advertisement-interval should be greater than or equal to four times the tx-delay".
Changing the transmit delay to 2 meets that constraint, since the advertisement interval then only needs to be at least 8 (4 × 2).
* Add aggregate for junos modules and sub spec validation
* aggregate support of junos modules
* aggregate sub spec validation
* relevant changes to junos integration test
* junos module boilerplate changes
* Add new boilerplate for junos modules
* Fix CI issues
* Add 'cacheable' param to set_fact action and module.
Used just like set_fact, except facts set with cacheable: true
will be stored in the fact cache if fact caching is enabled.
set_fact normally only sets facts in the non_persistent_fact_cache, so they
are lost between invocations.
* update set_fact docs
* use 'ansible_facts_cacheable' in module/actions result
* pop fact cacheable related items out of args/results
We don't want to use the 'ansible_facts_cacheable' result item
or 'cacheable' arg as actual facts, so pop them out of the
dicts.
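A minimal sketch of the new option, assuming a fact cache plugin (e.g. jsonfile) is enabled in ansible.cfg:

```yaml
- name: Set a fact that survives between playbook runs
  set_fact:
    deploy_flag: true   # hypothetical fact
    cacheable: true     # without this, the fact only lives in the non-persistent cache
```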
Previously, gather_subset=['!all'] would still gather the
minimal set of facts, and there was no way to collect no facts at all.
The 'min' specifier in gather_subset makes it possible to exclude
the minimal_gather_subset facts as well:
gather_subset=['!all', '!min'] will collect no facts.
This also lets explicitly added gather_subsets override excludes.
gather_subset=['pkg_mgr', '!all', '!min'] will collect only the pkg_mgr
fact.
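For example (a sketch based on the description above, using gather_subset as a play keyword):

```yaml
- hosts: all
  gather_subset:
    - '!all'
    - '!min'      # collect no facts at all
  tasks: []

- hosts: all
  gather_subset:
    - 'pkg_mgr'   # an explicit subset overrides the excludes
    - '!all'
    - '!min'
  tasks: []
```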
Create preserved_copy function in basic.py to preserve file ownership.
* Add a test for template preserved backup
* Use a script to get the random names
* bytes to strings
* Remove dump of hostvars
* Stop being fancy and create a testuser instead
* Fix pep8
* set file attributes
* Pass the correct data to set_attributes_if_different
* Use -j instead of -b and pass the attributes as a string instead of a list
* remove debugging message
* Use shell to softly set the attr
Fixes #24408
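As an illustration of the behaviour the test covers, a backup created by template should now keep the original file's ownership (file names hypothetical):

```yaml
- name: Template a config file, keeping a backup that preserves ownership/attributes
  template:
    src: foo.cfg.j2     # hypothetical template in the role's templates dir
    dest: /etc/foo.cfg
    backup: yes         # the backup copy should keep the original owner/group
```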
Got removed in arg parsing updates. Now added back in
setup_vault_secrets().
The default value for DEFAULT_VAULT_PASSWORD_FILE was also
set to '~' for some reason; changed it to have no default.
Add integration tests.
* ios implementation for net_interface
* ios_interface implementation
* ios_interface integration test
* net_interface integration test for ios and other refactor
* Update boilerplate and minor refactor
* Add 2.0-2.3 facts api compat (ansible_facts(), get_all_facts())
These are intended to provide compatibility for modules that
use 'ansible.module_utils.facts.ansible_facts' and
'ansible.module_utils.facts.get_all_facts' from 2.0-2.3 facts
API.
Fixes #25686
Some related changes/fixes needed to provide the compat api:
* rm ansible.constants import from module_utils.facts.compat
Just use a hard-coded default for gather_subset/gather_timeout
instead of trying to load it from non-existent config if the
module params don't include it.
* include 'external' collectors in compat ansible_facts()
* Add facter/ohai back to the valid collector classes
facter/ohai had gotten removed from the default_collectors
class used as the default list for all_collector_classes by
setup.py and compat.py
That made gather_subset['facter'] fail.
* iosxr implementation for net_interface
* iosxr_interface implementation
* Add integration test
* iosxr_interface integration test
* net_interface integration test for iosxr
* update boilerplate
* Add tests for group in a VPC
* Improve ec2_group output and documentation
Update ec2_group to provide full security group information
Add RETURN documentation to match
* Fix ec2_group creation within a VPC
Ensure VPC ID gets passed when creating security group
* Add test for auto creating SG
* Fix ec2_group auto group creation
* Add backoff to describe_security_groups
Getting LimitExceeded from describe_security_groups is definitely
possible (source: me) so add backoff to increase likelihood of
success.
To ensure that all `describe_security_group` calls are backed off,
remove implicit ones that use `ec2.SecurityGroup`. From there,
the decision to remove the `ec2` boto3 resource and rely on the client
alone makes good sense.
* Tidy up auto created security group
Add resource_prefix to auto created security group and delete
it in the `always` section.
Use YAML argument form for all module parameters
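For reference, a security group task inside a VPC looks roughly like this (IDs and CIDRs are hypothetical):

```yaml
- name: Create a security group in a VPC
  ec2_group:
    name: "{{ resource_prefix }}-vpc-sg"
    description: test group inside a VPC
    vpc_id: vpc-0123456789abcdef0   # hypothetical VPC ID
    rules:
      - proto: tcp
        from_port: 80
        to_port: 80
        cidr_ip: 10.0.0.0/8
    state: present
  register: sg_result
```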
* win_service: added support for paused services
* change pausable service for local computers
* more fixes for older hosts
* sigh
* skip pause tests for Server 2008 as it relies on the service
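A sketch of the new state, assuming the target service actually supports pausing (service name hypothetical):

```yaml
- name: Pause a pausable service
  win_service:
    name: SomePausableService   # hypothetical service name
    state: paused
```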
* set output_dir_expanded using module result
'path' values are expanded using 'expandvars' too
* foo.txt is located in 'files' directory
* Use 'role_path' and 'connection: local' for local paths
'{{ role_path }}/tmp' is used for generated paths
* Use local connection with local paths
/tmp/ansible-test-abs-link and /tmp/ansible-test-abs-link-dir are
defined by targets/copy/files/subdir/subdir1/ansible-test-abs-link
and targets/copy/files/subdir/subdir1/ansible-test-abs-link-dir links.
* task names: add a suffix when same name is reused
* Check that item exists before checking file mode
so the error message is more explicit when the item doesn't exist
* Use output_dir_expanded only when necessary
* Enforce remote_user when root is required
* Fix remote path
* Use different local & remote user
this is useful when controller and managed hosts are identical
* Checks must not expect output of tested module to be right
* Use a temporary directory on the controller
* Use sha1 & md5 filters instead of hardcoded values
* Use 'remote_dir' for directory on managed host
* Work around tempfile error on OS X
Error was:
temp_path = tempfile.mkdtemp(prefix='ansible_')
AttributeError: 'module' object has no attribute 'mkdtemp'
* initial commit for win_group_member module
* fix variable name change for split_adspath
* correct ordering of examples/return data to match documentation verbiage
* change tests setup/teardown to use a new group rather than an inbuilt group
* prepare_ovs: call gather facts
As we are no longer using run_ovs_integration_tests.yml, we need to
explicitly gather facts so we can call the correct package manager.
* typo
Trailing slash handling for absolute directory paths
find_needle() isn't passing a trailing slash through verbatim. Since
copy uses that to determine whether it should copy a directory or just the
files inside of it, we have to detect the slash and restore it after calling
find_needle().
Fixes #27439
Fixes #13243
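The distinction being preserved looks like this (paths hypothetical):

```yaml
# Trailing slash: copy only the contents of /tmp/src into /tmp/dest
- copy:
    src: /tmp/src/
    dest: /tmp/dest

# No trailing slash: copy the /tmp/src directory itself into /tmp/dest
- copy:
    src: /tmp/src
    dest: /tmp/dest
```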
** Add --vault-id to name/identify multiple vault passwords
Use --vault-id to indicate id and path/type
--vault-id=prompt # prompt for default vault id password
--vault-id=myorg@prompt # prompt for a vault_id named 'myorg'
--vault-id=a_password_file # load ./a_password_file for default id
--vault-id=myorg@a_password_file # load file for 'myorg' vault id
vault_ids are created implicitly for existing --vault-password-file
and --ask-vault-pass options.
Vault ids are just for UX purposes and bookkeeping. Only the vault
payload and the password bytestring are needed to decrypt a
vault blob.
Replace passing password around everywhere with
a VaultSecrets object.
If we specify a vault_id, mention it in the password prompts.
Specifying multiple --vault-password-file options will
now try each until one works.
** Rev vault format in a backwards compatible way
The 1.2 vault format adds the vault_id to the header line
of the vault text. This is backwards compatible with older
versions of ansible. Old versions will just ignore it and
treat it as the default (and only) vault id.
Note: only 2.4+ supports multiple vault passwords, so while
earlier ansible versions can read the vault-1.2 format, it
does not make them magically support multiple vault passwords.
use 1.1 format for 'default' vault_id
Vaulted items that need to include a vault_id will be
written in 1.2 format.
If we set a new DEFAULT_VAULT_IDENTITY, then the default will
use version 1.2
vault will only use a vault_id if one is specified. So if none
is specified and C.DEFAULT_VAULT_IDENTITY is 'default'
we use the old format.
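For illustration, the header line of a vaulted payload in the two formats ('myorg' standing in for an example vault id label):

```
$ANSIBLE_VAULT;1.1;AES256
$ANSIBLE_VAULT;1.2;AES256;myorg
```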
** Changes/refactors needed to implement multiple vault passwords
raise exceptions on decrypt fail, check vault id early
split out parsing the vault plaintext envelope (with the
sha/original plaintext) to _split_plaintext_envelope()
some cli fixups for specifying multiple paths in
the unfrack_paths optparse callback
fix py3 dict.keys() 'dict_keys object is not indexable' error
pluralize cli.options.vault_password_file -> vault_password_files
pluralize cli.options.new_vault_password_file -> new_vault_password_files
pluralize cli.options.vault_id -> cli.options.vault_ids
** Add a config option (vault_id_match) to force vault id matching.
With 'vault_id_match=True' and an ansible
vault that provides a vault_id, decryption will require
a matching vault_id (provided via
--vault-id=my_vault_id@password_file, for example).
In other words, if the config option is true, then only
the vault secrets with matching vault ids are candidates for
decrypting a vault. If the option is false (the default), then
all of the provided vault secrets will be tried.
If a user doesn't want all vault secrets to be tried to
decrypt any vault content, they can enable this option.
Note: The vault id used for the match is not encrypted or
cryptographically signed. It is just a label/id/nickname used
for referencing a specific vault secret.
* Revert "Update conventions in azure modules"
This reverts commit 30a688d8d3.
* Revert "Allow specific __future__ imports in modules"
This reverts commit 3a2670e0fd.
* Revert "Fix wildcard import in galaxy/token.py"
This reverts commit 6456891053.
* Revert "Fix one name in module error due to rewritten VariableManager"
This reverts commit 87a192fe66.
* Revert "Disable pylint check for names existing in modules for test data"
This reverts commit 6ac683ca19.
* Revert "Allow ini plugin to load file using other encoding than utf8."
This reverts commit 6a57ad34c0.
- New option for ini plugins: encoding
- Add a new option encoding to _get_file_contents
- Use the replace option in test/runner/lib/util.py when calling decode on stdout/err
output when the diff has non-utf8 sequences
* changed collection arg to aggregate on 2.4 network modules
* replace users with aggregate in eos_user, junos_user, nxos_user
* added version_added to places where we replaced users with aggregate in the docs
* fix ios_static_route test
* update tests to reference aggregate instead of collection/users
* Nuage module and unit tests with requested changes
* Cleanup of imports
* Adding check on python version
* Adding import try and catch wrappers
* Cleanup of requirements and adding integration tests
* Using pypi package for simulator
* Cleanup of requirements and adding integration tests
* Adding aliases for integration tests
* Adding module to import sanity test skip list
* Revert "Adding module to import sanity test skip list"
This reverts commit eab23af8c5ca7c503af63c05610b5db66d31fae4.
* Adding check for importlib and cleanup of requirements
The crypto namespace contains the openssl modules. It has no integration
testing as of now.
This commit aims to add integration tests for the crypto namespace.
This will make it easier to spot breaking changes in the future.
These tests currently apply to:
* openssl_privatekey
* openssl_publickey
* openssl_csr
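The added coverage exercises tasks roughly like these (paths and key size are hypothetical):

```yaml
- name: Generate a private key
  openssl_privatekey:
    path: /tmp/ansible_test_privatekey.pem   # hypothetical path
    size: 2048

- name: Generate the matching public key
  openssl_publickey:
    path: /tmp/ansible_test_publickey.pem
    privatekey_path: /tmp/ansible_test_privatekey.pem
```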
* vmware_host: Small fixes and docs updates
This PR includes:
- A fix to no longer require a datacenter folder for adding a host
- Documentation improvements
- Ensure imports are specific
* Update vmware_host
The fix adds the following:
* Update logic in vmware_host
* Update example documentation
* Added test case for vmware_host
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
Sometimes MacOSX's pwd doesn't return an expanded path. Not sure why,
but this test is still valid if we expand it via a playbook filter, so
go ahead and do that.
* We need a directory walker that can handle symlinks, empty directories,
and some other odd needs. This commit contains a directory walker that
can do all that. The walker returns information about the files in the
directories that we can then use to implement different strategies for
copying the files to the remote machines.
* Add local_follow parameter to copy that follows local symlinks (follow
is for remote symlinks)
* Refactor the copying of files out of run into its own method
* Add new integration tests for copy
Fixes #24949, Fixes #21513
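A sketch of the new parameter, assuming a local source tree that contains symlinks (paths hypothetical):

```yaml
- name: Copy a local directory, dereferencing symlinks on the controller
  copy:
    src: files/subdir/    # hypothetical local directory containing symlinks
    dest: /tmp/copy_dest
    local_follow: yes     # follows symlinks on the local (controller) side
    follow: no            # 'follow' still only applies to remote symlinks
```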
* add unit test: nested dynamic includes
* nested dynamic includes: avoid AnsibleFileNotFound error
Error was:
Unable to retrieve file contents
Could not find or access 'include2.yml'
Before 8f758204cf, at the end of
the 'path_dwim_relative' method, the 'search' variable contained, amongst
other paths:
'/tmp/roles/testrole/tasks/tasks/included.yml' and
'/tmp/roles/testrole/tasks/included.yml'.
The commit mentioned before removed the last one, even though the method's
docstring specifies 'with or without explicitly named dirname subdirs'.
* add integration test: nested includes
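The layout exercised by the test is roughly the following (file names taken from the error above; the exact task contents are assumed):

```yaml
# roles/testrole/tasks/main.yml
- include: included.yml

# roles/testrole/tasks/included.yml
- include: include2.yml   # must resolve relative to the role's tasks dir

# roles/testrole/tasks/include2.yml
- debug:
    msg: nested include reached
```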