The tests rely on setting the lldp IP on the management interface.
However, the IP discovered is the private IP of the node, and tests
require accessing it via Nodepool node public IP.
Removing that test for now to get CI green again; we'll reassess once we
release 2.4.
There's been a change in the persistent connection framework that switches
the playbook timeout (which corresponds to the 'timeout' param) to command_timeout.
While we fix this and restore the functionality, let's put command_timeout
in place to avoid CI being red.
* Import original unmodified upstream version
This is another attempt to get the xml module upstream.
https://github.com/cmprescott/ansible-xml/
This is the original file from upstream,
without commit 1e7a3f6b6e2bc01aa9cebfd80ac5cd4555032774
* Add additional changes required for upstreaming
This PR includes the following changes:
- Clean up of DOCUMENTATION
- Rename "ensure" parameter to "state" parameter (kept alias)
- Added EXAMPLES
- Remove explicit type-cast using str() for formatting
- Clean up AnsibleModule parameter handling
- Retained Python 2.4 compatibility
- PEP8 compliancy
- Various fixes as suggested by abadger during first review
This fixes cmprescott/ansible-xml#108
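As a rough illustration of the rename-with-alias mentioned above (the spec below is abbreviated and illustrative, not the module's full argument_spec):
```
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(
        argument_spec=dict(
            path=dict(type='path'),
            xpath=dict(type='str'),
            # 'ensure' was renamed to 'state'; keeping the old name as an
            # alias means existing playbooks continue to work unchanged.
            state=dict(type='str', default='present',
                       choices=['absent', 'present'], aliases=['ensure']),
        ),
        supports_check_mode=True,
    )
    module.exit_json(changed=False, state=module.params['state'])

if __name__ == '__main__':
    main()
```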
* Added original integration tests
There is some room for improvement wrt. idempotency and check-mode
testing.
* Some tests depend on lxml v3.0alpha1 or higher
We are now expecting lxml v2.3.0 or higher.
We skip tests if lxml is too old.
Plus small fix.
* Relicense to GPLv3+ header
All past contributors have agreed to relicense this module to GPLv2+, and GPLv3 specifically.
See: https://github.com/cmprescott/ansible-xml/issues/113
This fixes cmprescott/ansible-xml#73
* Fix small typo in integration tests
* Python 3 support
This PR also includes:
- Python 3 support
- Documentation fixes
- Check-mode fixes and improvements
- Bugfix in check-mode support
- Always return xmlstring, even if there's no change
- Check for lxml 2.3.0 or newer
* Add return values
* Various fixes after review
* rm unneeded parens following assert
* rm unused parse_vaulttext_envelope from yaml.constructor
* No longer need index/enumerate over vault_ids
* rm unnecessary else
* rm unused VaultCli.secrets
* rm unused vault_id arg on VaultAES.decrypt()
pylint: Unused argument 'vault_id'
pylint: Unused parse_vaulttext_envelope imported from ansible.parsing.vault
pylint: Unused variable 'index'
pylint: Unnecessary parens after 'assert' keyword
pylint: Unnecessary "else" after "return" (no-else-return)
pylint: Attribute 'editor' defined outside __init__
* use 'dummy' for unused variables instead of _
Based on pylint unused variable warnings.
Existing code uses '_' for this, but that is old
and busted. The hot new thing is 'dummy'. It
is so fetch.
Except for where we get warnings for reusing
the 'dummy' var name inside of a list comprehension.
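For reference, the preferred pattern looks like this (trivial example, not taken from the tree):
```
# Unused values from unpacking get the name 'dummy' rather than '_',
# since '_' is being reserved for i18n.
rc, dummy, err = (0, 'ignored stdout', '')
print(rc, err)

# Same idea for a loop variable that is never used.
for dummy in range(3):
    print('retrying')
```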
* Add super().__init__ call to PromptVaultSecret.__init__
pylint: __init__ method from base class 'VaultSecret' is not called (super-init-not-called)
* Make FileVaultSecret.read_file a regular method again
The base class read_file() doesn't need self but
the subclasses do.
Rm now unneeded loader arg to read_file()
* Fix err msg string literal that had no effect
pylint: String statement has no effect
The indent on the continuation of the msg_format was wrong
so the second half was dropped.
There was also no need to join() filename (copy/paste from
original with a command list I assume...)
* Use local cipher_name in VaultEditor.edit_file not instance
pylint: Unused variable 'cipher_name'
pylint: Unused variable 'b_ciphertext'
Use the local cipher_name returned from parse_vaulttext_envelope()
instead of the instance self.cipher_name var.
Since there is only one valid cipher_name either way, it was
equivalent, but it will not be once there are more valid cipher_names
* Rm unused b_salt arg on VaultAES256._encrypt*
pylint: Unused argument 'b_salt'
Previously the methods computed the keys and iv themselves
so needed to be passed in the salt, but now the key/iv
are built before and passed in so b_salt arg is not used
anymore.
* rm redundant import of call from subprocess
pylint: Imports from package subprocess are not grouped
Use it via the subprocess module now instead of a direct
import.
* self._bytes is set in super init now, rm dup
* Make FileVaultSecret.read_file() -> _read_file()
_read_file() is details of the implementation of
load(), so now 'private'.
* Changed rpm-keyid extraction and verification method
* minor style fixes
* fixed rpm key deletion, added integration test for mono key, fixed wording in integration tests
* moved aws elasticache module to boto3
* fixed error and improved code
* implemented requested changes
* now checking for missing boto3 packages in a better way (a sketch of the guard follows after this list)
* now dynamically setting the default port depending on the engine if it is not set
* moved standard import in front of ansible ones
* now case insensitive in regards to engine name
* removed superfluous spaces
* now checking for None in the correct way
* removed elasticache module from exceptions to pep8 testing
* removed hardcoded default ports and letting aws decide if no port is given
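The boto3 availability check mentioned above is the usual import-guard pattern; a minimal sketch (the module body shown is illustrative only):
```
from ansible.module_utils.basic import AnsibleModule

try:
    import boto3
    import botocore
    HAS_BOTO3 = True
except ImportError:
    HAS_BOTO3 = False

def main():
    module = AnsibleModule(argument_spec=dict(name=dict(type='str', required=True)))
    # Fail cleanly at runtime instead of crashing with an ImportError.
    if not HAS_BOTO3:
        module.fail_json(msg='boto3 and botocore are required for this module')
    module.exit_json(changed=False)

if __name__ == '__main__':
    main()
```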
Updates ec2_lc module to use boto3. Adds parameters:
instance_id
placement_tenancy
Also added a second example using instance_id and updated the docs with the new parameters.
We are reserving the _ identifier for i18n work. Code should use the
identifier dummy for dummy variables instead.
This test is currently skipped as someone needs to generate the list of
files which are currently out of compliance before this can be turned
on.
This PR includes:
- Documentation improvements (mostly related to boolean defaults)
- Make PEP8 compliant
- Ensure imports are specific
- Few cosmetic changes (sort lists, casing, punctuation)
* Add new Fedora docker images with Python 3.
* Use consistent env var for lookup test.
* Fix testing of virtualenv with Python 3.
* Fix docker_secret tests on Fedora 26.
* Add Python 3 support to Fedora postgresql test.
* Add Python 3 support to Fedora mysql tests.
* Fix uri test server for Python 3 on Fedora.
* Fix iso_extract test for Python 3 on Fedora.
* Add Python 3 support for Fedora to openssl tests.
* Fix dnf group test for Python 3 on Fedora.
* Use force with user deletion in become test.
* Reimplement iso_extract using 7zip (not requiring root)
So one of the drawbacks of the original implementation is that it required root for mounting/unmounting the ISO image.
This is now no longer needed as we use 7zip for extracting files from the ISO.
* Fall back to using mount/umount if 7zip not found
As discussed with others.
Also improved integration tests.
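Roughly, the detection-and-fallback flow looks like the sketch below; the helper name and the exact 7z invocation are assumptions, not the module's actual code.
```
import os
import shutil
import tempfile

def extract_files(module, iso_path, dest, files):
    # Prefer 7zip: it reads files straight out of the image, so no mount
    # and therefore no root privileges are needed.
    binary = module.get_bin_path('7z') or module.get_bin_path('7za')
    if binary:
        cmd = [binary, 'x', iso_path, '-o%s' % dest] + list(files)
        return module.run_command(cmd, check_rc=True)

    # Fall back to loop-mounting the image, which does require root.
    mount = module.get_bin_path('mount', required=True)
    umount = module.get_bin_path('umount', required=True)
    mount_point = tempfile.mkdtemp(prefix='ansible_iso_')
    module.run_command([mount, '-o', 'loop,ro', iso_path, mount_point], check_rc=True)
    try:
        for name in files:
            shutil.copy(os.path.join(mount_point, name), dest)
    finally:
        module.run_command([umount, mount_point], check_rc=True)
        os.rmdir(mount_point)
```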
This PR includes:
- RETURN information (since the difference between status_code and
status was confusing)
- Improvements to parameter definition (and docs)
- PEP8 compliancy
Fix 'module' object is not callable
* rhn_register: fix Python 3 compatibility
* rhn_register: update requirements
* rhn_register: add unit tests
* Add missing method name
* use a dedicated line for XML related requirements
* rhn_register: drop support for Python 2.4
* rhn_register unit tests: fix Python 3 compatibility
* refactor in order to check order of the requests
* Fix for issue ansible/ansible#27715
* Also fixing mutually exclusive check
* Updating subspec checks
These changes take into account a spec with all features enabled and do
the following tests for subspecs:
1. Test proper specs
2. Test Alias
3. Test missing required param
4. Test mutually exclusive params
5. Test required if params
6. Test required one of params
7. Test required together params
8. Test required if params with a default value
9. Test basic subspec params
10. Test invalid subspec params (a minimal sub-spec sketch follows below)
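To make the sub-spec behaviour concrete, here is a minimal sketch of an argument_spec that exercises it; the parameter names and the 'elements' keyword follow later Ansible conventions and are illustrative only.
```
from ansible.module_utils.basic import AnsibleModule

def main():
    # Every dict passed in 'aggregate' is validated against this sub-spec,
    # so defaults, choices, aliases and required flags apply per entry.
    element_spec = dict(
        name=dict(type='str', required=True),
        description=dict(type='str', aliases=['descr']),
        duplex=dict(type='str', choices=['full', 'half', 'auto']),
        state=dict(type='str', default='present', choices=['present', 'absent']),
    )
    module = AnsibleModule(
        argument_spec=dict(
            aggregate=dict(
                type='list', elements='dict', options=element_spec,
                # Conditional checks can be attached to the sub-spec as well.
                required_together=[['description', 'duplex']],
            ),
        ),
        supports_check_mode=True,
    )
    module.exit_json(changed=False, aggregate=module.params['aggregate'])

if __name__ == '__main__':
    main()
```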
* adds new filter plugins for network use cases
* adds parse_cli filter
* adds parse_cli_textfsm filter
* adds Template class to network_common
* adds conditional function to network_common
* fix up PEP8 issues
The test assumes the node has the hostname set as the inventory_hostname_short.
That's not the case in our CI, where the inventory_hostname is a UUID returned
by the openstack dynamic inventory.
We are getting this error message:
"Advertisement-interval should be greater than or equal to four times the tx-delay".
Changing transmit delay to 2 meets that constraint.
* Add aggregate for junos modules and sub spec validation
* aggregate support of junos modules
* aggregate sub spec validation
* relevant changes to junos integration test
* junos module boilerplate changes
* Add new boilerplate for junos modules
* Fix CI issues
* Create get_exception and wildcard import code-smell tests
* Add more detail to boilerplate and no-basestring descriptions
* Remove the no-list-cmp test as the pylint undefined-variable test covers it
* s3_bucket: fix policy sorting for python3 so strings are evaluated as less than tuples.
Add tests to ensure this behavior is maintained.
* Fix s3_bucket comparison function to work on both Python 3.5 and 3.6
* s3_bucket: document that cmp_to_key is used for python 2.7.
Add another test for s3_bucket to compare policies of different sizes.
* fix pep8
* Work around code-smell grepping by not using the word 'cmp'.
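The comparison trick is roughly the following (illustrative only, not the exact helper shipped in module_utils):
```
from functools import cmp_to_key

def _py23_cmp(a, b):
    # Python 3 refuses to order mixed types, so order by type name first;
    # 'str' < 'tuple' keeps the Python 2 behaviour of strings sorting
    # before tuples, then compare values within a type.
    if type(a) != type(b):
        return -1 if type(a).__name__ < type(b).__name__ else 1
    return (a > b) - (a < b)

# Works the same on 2.7 and 3.x:
mixed = [('s3:GetObject',), 's3:PutObject', 's3:ListBucket']
print(sorted(mixed, key=cmp_to_key(_py23_cmp)))
# ['s3:ListBucket', 's3:PutObject', ('s3:GetObject',)]
```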
* New module for managing AWS Datapipelines
* Supports create/activate/deactivate and deletion
* Handles idempotent creation by embedding the version in the
uniqueId field
* Waits for requested state to be reached, as Botocore doesn't
have waiters yet for datapipelines
* rename module, fix imports, add tags option, improve exit_json results, fix a couple bugs, add a TODO so I don't forget
fix pep8
allow timeout to be used for pipeline creation
make .format syntax uniform
fix pep8
fix exception handling
allow pipeline to be modified, refactor, add some comments, remove unnecessary imports
the pipeline may not stay in the activated state for long
remove datapipeline version option
change a loop to a list comprehension
create idempotence by hashing the options given to the module minus the objects (which can be modified)
small bugfix
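The idempotent-creation idea above (hashing the options given to the module, minus the objects) looks roughly like this; the excluded keys are assumptions for the sketch:
```
import hashlib
import json

def build_unique_id(module):
    # Hash the module options, minus the pipeline objects (which can change
    # without making this a "different" pipeline).
    params = dict(module.params)
    for key in ('objects', 'timeout'):
        params.pop(key, None)
    # sort_keys gives a stable string regardless of dict ordering, and
    # hexdigest() guarantees the uniqueId is a plain string on py2 and py3.
    blob = json.dumps(params, sort_keys=True)
    return hashlib.md5(blob.encode('utf-8')).hexdigest()
```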
* data_pipeline unittests
make unittests pep8
fix bug in unittests
* remove exception handling that serves no purpose
* Fix python3 incompatibilities in datapipeline tests and add placebo fixture maybe_sleep for faster tests
Fix python3 incompatibilities in data_pipeline build_unique_id()
Don't delete a pipeline in diff_pipeline() because it's unexpected
Don't use time.time() because it causes an issue with placebo testing
re-recorded tests
fix pep8 in data_pipeline
Remove disable_rollback from tests
Make sure unique identifier is a string
re-record tests
* improve documentation and add another example
* use a placebo fixture instead of redundant code in tests
fix tests for PLACEBO_RECORD=false
* Fix data_pipeline docs
use isinstance instead of type()
fix documentation
* fix documentation
* Remove use of undefined variable from data_pipeline module and fix license
* fix copyright header
Fix adds missing imports and boilerplate for proxysql.
It also removes get_exception calls in favor of native exception handling.
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* Initial commit for integration of HPE OneView resources with Ansible Core. Adding FC Network and FC Network Fact modules and unit tests, and OneView base class for all OV resources.
* Add 'cacheable' param to set_fact action and module.
Used just like set_fact, except facts set with cacheable: true
will be stored in the fact cache if fact caching is enabled.
set_fact normally only sets facts in the non_persistent_fact_cache, so they
are lost between invocations.
* update set_facts docs
* use 'ansible_facts_cacheable' in module/actions result
* pop fact cacheable related items out of args/results
We don't want to use the 'ansible_facts_cacheable' result item
or 'cacheable' arg as actual facts, so pop them out of the
dicts.
previously gather_subset=['!all'] would still gather the
min set of facts, and there was no way to collect no facts.
The 'min' specifier in gather_subset is equivalent to the
minimal_gather_subset facts, so '!min' excludes those facts as well.
gather_subset=['!all', '!min'] will collect no facts
This also lets explicitly added gather_subsets override excludes.
gather_subset=['pkg_mgr', '!all', '!min'] will collect only the pkg_mgr
fact.
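The selection semantics can be summarised with this sketch (not the actual collector code):
```
def resolve_gather_subset(gather_subset, all_subsets, minimal_subsets):
    # Rough sketch of the documented semantics.
    all_subsets = set(all_subsets)
    minimal_subsets = set(minimal_subsets)
    selected = all_subsets | minimal_subsets
    if '!all' in gather_subset:
        selected = set(minimal_subsets)
    if '!min' in gather_subset:
        selected -= minimal_subsets
    for spec in gather_subset:
        if spec.startswith('!') and spec not in ('!all', '!min'):
            selected.discard(spec[1:])
    # Explicitly named subsets always win over the excludes above.
    explicit = set(s for s in gather_subset if not s.startswith('!'))
    return selected | explicit

print(resolve_gather_subset(['!all'], {'network', 'pkg_mgr'}, {'platform'}))
# {'platform'}  -> '!all' alone still collects the minimal facts
print(resolve_gather_subset(['!all', '!min'], {'network', 'pkg_mgr'}, {'platform'}))
# set()         -> no facts at all
print(resolve_gather_subset(['pkg_mgr', '!all', '!min'], {'network', 'pkg_mgr'}, {'platform'}))
# {'pkg_mgr'}   -> explicit include overrides the excludes
```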
* Add module cv_server_provision for integration with Arista CloudVision Portal.
* Doc update.
* Remove shebang from test file. Update short description with company and product name.
* Update exception syntax to Python3 style.
* Remove blank line between imports.
* Remove newlines from RETURN documentation.
* Add cvprac to unittest requirements.
* Update unittest format. Add a few additional tests.
* Mock exceptions from cvprac so the library is not needed for unittests.
* Mock cvprac imports.
* Update unit tests to support python 3.5.
* Mock full cvprac library for unittests.
* Update Jinja2 import to pass updated CI checks.
* Update cvprac imports format for new CI tests.
* Add __metaclass__ and __future__.
Create preserved_copy function in basic.py to preserve file ownership.
* Add a test for template preserved backup
* Use a script to get the random names
* bytes to strings
* Remove dump of hostvars
* Stop being fancy and create a testuser instead
* Fix pep8
* set file attributes
* Pass the correct data to set_attributes_if_different
* Use -j instead of -b and pass the attributes as a string instead of a list
* remove debugging message
* Use shell to softly set the attr
Fixes #24408
When parsing a vaulttext blob, use .splitlines()
instead of split(b'\n') to handle \n newlines and
windows style \r\n (CRLF) new lines.
The vaulttext envelope at this point is just the header line
and a hexlify()'ed blob, so CRLF is a valid newline here.
Fixes #22914
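A simplified illustration of why splitlines() is safe here (this is not the real parse_vaulttext_envelope signature):
```
from binascii import unhexlify

def _split_envelope(b_vaulttext):
    # The envelope is one header line plus a hexlify()'ed payload, so
    # accepting \r\n as well as \n is safe, and splitlines() handles both
    # where split(b'\n') would leave a trailing \r on every line.
    b_lines = b_vaulttext.splitlines()
    b_header = b_lines[0]          # e.g. b'$ANSIBLE_VAULT;1.1;AES256'
    b_payload = b''.join(b_lines[1:])
    return b_header, unhexlify(b_payload)

# Both of these parse to the same payload:
sample_lf = b'$ANSIBLE_VAULT;1.1;AES256\n6465616462656566\n'
sample_crlf = sample_lf.replace(b'\n', b'\r\n')
assert _split_envelope(sample_lf) == _split_envelope(sample_crlf)
```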
We set the ansible_ssh_user and ansible_ssh_pass on the Junos
group. However, that has lower precedence than group_vars.
Commenting out the group_vars so we have the creds for all Nodepool nodes
within the inventory.
Got removed in arg parsing updates. Now added back in
setup_vault_secrets().
The default value for DEFAULT_VAULT_PASSWORD_FILE was also
set to '~' for some reason; changed it to have no default.
Add integration tests.
* Added new module interfaces_file
* interfaces_file: added unit tests
* interfaces_file: added golden files for unit tests
* interfaces_file: moved to system modules
* interfaces_file: fixed code formatting and convention issues
* ios implementation for net_interface
* ios_interface implementation
* ios_interface integration test
* net_interface integration test for ios and other refactor
* Update boilerplate and minor refactor
* Add 2.0-2.3 facts api compat (ansible_facts(), get_all_facts())
These are intended to provide compatibility for modules that
use 'ansible.module_utils.facts.ansible_facts' and
'ansible.module_utils.facts.get_all_facts' from 2.0-2.3 facts
API.
Fixes #25686
Some related changes/fixes needed to provide the compat api:
* rm ansible.constants import from module_utils.facts.compat
Just use a hard coded default for gather_subset/gather_timeout
instead of trying to load it from non-existent config if the
module params don't include it.
* include 'external' collectors in compat ansible_facts()
* Add facter/ohai back to the valid collector classes
facter/ohai had gotten removed from the default_collectors
class used as the default list for all_collector_classes by
setup.py and compat.py
That made gather_subset['facter'] fail.
* Add aggregate parameter validation
aggregate parameter validation will support checking each individual dict
to resolve conditions for aliases, no_log, mutually_exclusive,
required, type check, values, required_together, required_one_of
and required_if conditions in argspec. It will also set default values.
eg:
tasks:
  - name: Configure interface attribute with aggregate
    net_interface:
      aggregate:
        - {name: ge-0/0/1, description: test-interface-1, duplex: full, state: present}
        - {name: ge-0/0/2, description: test-interface-2, active: False}
      purge: Yes
    register: response
Usage:
```
from ansible.module_utils.network_common import AggregateCollection
transform = AggregateCollection(module)
param = transform(module.params.get('aggregate'))
```
Aggregate adds support for the `purge` parameter, which instructs the module
to remove resources from the remote device that haven't been explicitly
defined in aggregate. This is not supported by with_* iterators
Also, it improves performance compared to with_* iteration for network devices
that have separate candidate and running datastores.
For with_* iteration the sequence of operations is
load-config-1 (candidate db) -> commit (running db) -> load_config-2
(candidate db) -> commit (running db) ...
With aggregate the sequence of operation is
load-config-1 (candidate db) -> load-config-2 (candidate db) -> commit
(running db)
As commit is executed only once per task for aggregate, it has a
huge performance benefit for large configurations.
* Fix CI issues
* Fix review comments
* Add support for options validation for aliases, no_log,
mutually_exclusive, required, type check, value check,
required_together, required_one_of and required_if
conditions in sub-argspec.
* Add unit test for options in argspec.
* Reverted aggregate implementation.
* Minor change
* Add multi-level argspec support
* Multi-level argspec support with module's top most
conditionals options.
* Fix unit test failure
* Add parent context in errors for sub options
* Resolve merge conflict
* Fix CI issue
* Make camel_to_snake work on capitalized plurals
`TargetGroupARNs` should become `target_group_arns`, not
`target_group_ar_ns`
Promote `camel_to_snake` to a top-level function but prefix
it with an underscore.
Add tests for improved `_camel_to_snake` function.
Reduce use of `re.compile` as it makes no sense when the
compilation result is not reused.
* Remove unused LooseVersion check
* Fix PLURALs case for camel_to_snake
Also renamed EXPECTED_CAMELIZATION to EXPECTED_SNAKIFICATION
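One way the plural case can be handled (illustrative, not necessarily the exact upstream regex):
```
import re

def _camel_to_snake(name):
    # Fold a capitalised plural such as "ARNs" into "Arns" first, so the
    # generic rules below do not split it into "ar_ns".
    folded = re.sub(r'([A-Z]+)(s)(?=[A-Z]|$)',
                    lambda m: m.group(1).title() + m.group(2), name)
    step1 = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', folded)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', step1).lower()

print(_camel_to_snake('TargetGroupARNs'))    # target_group_arns
print(_camel_to_snake('LoadBalancerNames'))  # load_balancer_names
```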
* iosxr implementation for net_interface
* iosxr_interface implementation
* Add integration test
* iosxr_interface integration test
* net_interface integration test for iosxr
* update boilerplate
* Add tests for group in a VPC
* Improve ec2_group output and documentation
Update ec2_group to provide full security group information
Add RETURN documentation to match
* Fix ec2_group creation within a VPC
Ensure VPC ID gets passed when creating security group
* Add test for auto creating SG
* Fix ec2_group auto group creation
* Add backoff to describe_security_groups
Getting LimitExceeded from describe_security_groups is definitely
possible (source: me) so add backoff to increase likelihood of
success.
To ensure that all `describe_security_group` calls are backed off,
remove implicit ones that use `ec2.SecurityGroup`. From there,
the decision to remove the `ec2` boto3 resource and rely on the client
alone makes good sense.
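The backoff is conceptually just a retry loop around the client call; a generic sketch (the module may use a shared retry helper instead):
```
import time
from botocore.exceptions import ClientError

def describe_security_groups_with_backoff(client, **kwargs):
    # Retry throttling errors with exponential backoff.
    delay = 1
    for attempt in range(5):
        try:
            return client.describe_security_groups(**kwargs)
        except ClientError as e:
            code = e.response.get('Error', {}).get('Code', '')
            if 'LimitExceeded' not in code or attempt == 4:
                raise
            time.sleep(delay)
            delay *= 2
```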
* Tidy up auto created security group
Add resource_prefix to auto created security group and delete
it in the `always` section.
Use YAML argument form for all module parameters
* win_service: added support for paused services
* change pausable service for local computers
* more fixes for older hosts
* sigh
* skip pause tests for Server 2008 as it relies on the service
* set output_dir_expanded using module result
'path' values are expanded using 'expandvars' too
* foo.txt is located in 'files' directory
* Use 'role_path' and 'connection: local' for local paths
'{{ role_path }}/tmp' is used for generated paths
* Use local connection with local paths
/tmp/ansible-test-abs-link and /tmp/ansible-test-abs-link-dir are
defined by targets/copy/files/subdir/subdir1/ansible-test-abs-link
and targets/copy/files/subdir/subdir1/ansible-test-abs-link-dir links.
* task names: add a suffix when same name is reused
* Check that item exists before checking file mode
so the error message is more explicit when the item doesn't exist
* Use output_dir_expanded only when necessary
* Enforce remote_user when root is required
* Fix remote path
* Use different local & remote user
this is useful when controller and managed hosts are identical
* Checks must not expect output of tested module to be right
* Use a temporary directory on the controller
* Use sha1 & md5 filters instead of hardcoded values
* Use 'remote_dir' for directory on managed host
* Workaround tempfile error on OS X
Error was:
temp_path = tempfile.mkdtemp(prefix='ansible_')
AttributeError: 'module' object has no attribute 'mkdtemp'
* initial commit for win_group_member module
* fix variable name change for split_adspath
* correct ordering of examples/return data to match documentation verbiage
* change tests setup/teardown to use new group rather than an inbuilt group
* ACI module_utils library for ACI modules
This PR includes:
- the ACI argument_spec
- an aci_login function
- an experimental aci_request function
- an aci_response function
- included the ACI team
* New prototype using ACIModule
This PR includes:
- A new ACIModule object with various useful methods
* prepare_ovs: call gather facts
As we are no longer using run_ovs_integration_tests.yml we need to
explicitly gather facts so we can call the correct package manager.
* typo
* Enable specific tests (this lets us disable a group and then
enable a particular test inside of it)
* Comment out tests in the enable and disable files
Made the following changes:
* Removed wildcard imports
* Replaced long form of GPL header with short form
* Removed get_exception usage
* Added from __future__ boilerplate
* Adjust division operator to // where necessary
For the following files:
* web_infrastructure modules
* system modules
* linode, lxc, lxd, atomic, cloudscale, dimensiondata, ovh, packet,
profitbricks, pubnub, smartos, softlayer, univention modules
* compat dirs (disabled as it's used intentionally)
Absolute path trailing slash handling in absolute directories
find_needle() isn't passing a trailing slash through verbatim. Since
copy uses that to determine if it should copy a directory or just the
files inside of it, we have to detect that and restore it after calling
find_needle()
Fixes #27439
Fixes #13243
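The detect-and-restore step amounts to something like this sketch (the wrapper name is hypothetical; _find_needle() is the existing action-plugin helper):
```
import os

def find_needle_keeping_slash(action_plugin, dirname, needle):
    # 'src: dir/' means "copy the contents", 'src: dir' means "copy the
    # directory itself", so remember whether the user-supplied path ended
    # with a separator and restore it, since _find_needle() normalises it away.
    had_trailing_slash = needle.endswith(os.path.sep)
    found = action_plugin._find_needle(dirname, needle)
    if had_trailing_slash and not found.endswith(os.path.sep):
        found += os.path.sep
    return found
```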
** Add --vault-id to name/identify multiple vault passwords
Use --vault-id to indicate id and path/type
--vault-id=prompt # prompt for default vault id password
--vault-id=myorg@prompt # prompt for a vault_id named 'myorg'
--vault-id=a_password_file # load ./a_password_file for default id
--vault-id=myorg@a_password_file # load file for 'myorg' vault id
vault_id's are created implicitly for existing --vault-password-file
and --ask-vault-pass options.
Vault ids are just for UX purposes and bookkeeping. Only the vault
payload and the password bytestring are needed to decrypt a
vault blob.
Replace passing password around everywhere with
a VaultSecrets object.
If we specify a vault_id, mention that in password prompts
Specifying multiple --vault-password-files will
now try each until one works
** Rev vault format in a backwards compatible way
The 1.2 vault format adds the vault_id to the header line
of the vault text. This is backwards compatible with older
versions of ansible. Old versions will just ignore it and
treat it as the default (and only) vault id.
Note: only 2.4+ supports multiple vault passwords, so while
earlier ansible versions can read the vault-1.2 format, it
does not make them magically support multiple vault passwords.
use 1.1 format for 'default' vault_id
Vaulted items that need to include a vault_id will be
written in 1.2 format.
If we set a new DEFAULT_VAULT_IDENTITY, then the default will
use version 1.2
vault will only use a vault_id if one is specified. So if none
is specified and C.DEFAULT_VAULT_IDENTITY is 'default'
we use the old format.
** Changes/refactors needed to implement multiple vault passwords
raise exceptions on decrypt fail, check vault id early
split out parsing the vault plaintext envelope (with the
sha/original plaintext) to _split_plaintext_envelope()
some cli fixups for specifying multiple paths in
the unfrack_paths optparse callback
fix py3 dict.keys() 'dict_keys object is not indexable' error
pluralize cli.options.vault_password_file -> vault_password_files
pluralize cli.options.new_vault_password_file -> new_vault_password_files
pluralize cli.options.vault_id -> cli.options.vault_ids
** Add a config option (vault_id_match) to force vault id matching.
With 'vault_id_match=True' and an ansible
vault that provides a vault_id, decryption will require
a matching vault_id (via
--vault-id=my_vault_id@password_file, for example).
In other words, if the config option is true, then only
the vault secrets with matching vault ids are candidates for
decrypting a vault. If option is false (the default), then
all of the provided vault secrets will be selected.
If a user doesn't want every vault secret to be tried against
every piece of vault content, they can enable this option.
Note: The vault id used for the match is not encrypted or
cryptographically signed. It is just a label/id/nickname used
for referencing a specific vault secret.
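The matching rule boils down to a filter over the provided secrets; a rough sketch (the function name and tuple layout are illustrative):
```
def candidate_secrets(vault_secrets, envelope_vault_id, vault_id_match):
    # vault_secrets: list of (vault_id_label, secret) tuples in the order
    # they were supplied on the command line or in config.
    # The label is not encrypted or signed; it is only bookkeeping, so this
    # filter is about convenience, not security.
    if vault_id_match:
        return [(label, secret) for label, secret in vault_secrets
                if label == envelope_vault_id]
    # Default behaviour: try every provided secret until one decrypts.
    return list(vault_secrets)
```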