* Fix vault --ask-vault-pass with no tty
2.4.0 added a check for isatty() that would skip setting up interactive
vault password prompts if not running on a tty.
But... getpass.getpass() will fall back to reading from stdin if
it gets that far without a tty. Since 2.4.0 skipped the interactive
prompts / getpass.getpass() in that case, it would never get a chance
to fall back to stdin.
So if 'echo $VAULT_PASSWORD | ansible-playbook --ask-vault-pass site.yml'
was run without a tty (i.e., from a jenkins job or via the vagrant
ansible provisioner), the 2.4 behavior was different than 2.3. 2.4
would never read the password from stdin, resulting in a vault password
error like:
ERROR! Attempting to decrypt but no vault secrets found
The fix is to always set up the interactive password prompts (based
on getpass.getpass()) for --ask-vault-pass or --vault-id @prompt and
let getpass sort out the tty/stdin fallback.
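As a rough illustration of why deferring to getpass works, here is a minimal
sketch (not the actual cli code) of the prompt path; getpass.getpass() prompts
on the controlling tty when one exists and otherwise falls back to reading a
line from stdin:

    import getpass
    import sys

    def ask_vault_pass(prompt='Vault password: '):
        # getpass handles the isatty() decision itself, so the caller does not
        # need its own tty check before setting up the prompt.
        try:
            return getpass.getpass(prompt)
        except EOFError:
            # stdin was closed before a password was written to it
            return None

    if __name__ == '__main__':
        # Works interactively and via: echo "$VAULT_PASSWORD" | python prompt.py
        password = ask_vault_pass()
        sys.stdout.write('got a password of length %d\n' % len(password or ''))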
* Update test_prompt_no_tty to expect prompt with no tty
We do call the PromptSecret class if there is no tty, but
we are back to expecting it to read from stdin in that case.
* Fix logic for when to auto-prompt vault pass
If --ask-vault-pass is used, then pretty much always
prompt.
If it is not used, then prompt if there are no other
vault ids provided and 'auto_prompt==True'.
Fixes vagrant bug https://github.com/hashicorp/vagrant/issues/9033
Fixes #30993
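A condensed sketch of the prompt decision described above (function and
argument names are illustrative, not the actual CLI internals):

    def should_prompt_for_vault_pass(ask_vault_pass, vault_ids, auto_prompt):
        # --ask-vault-pass (or --vault-id @prompt) always wins.
        if ask_vault_pass:
            return True
        # Otherwise only auto-prompt when no other vault ids were provided.
        return auto_prompt and not vault_ids

    # should_prompt_for_vault_pass(True,  ['dev@passfile'], auto_prompt=False) -> True
    # should_prompt_for_vault_pass(False, ['dev@passfile'], auto_prompt=True)  -> False
    # should_prompt_for_vault_pass(False, [],               auto_prompt=True)  -> True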
* Adding git_command module and its UT file
* Changing Author Name and removing 2 blank lines
* Removing blank lines
* Adding enos_config and its UT files
* Removing config module as I am allowed to have only one module per PR
* Address Ganesh's review comments
* Address John's review comments on enos_command.py
* Address more of John's review comments
* Add mtu option to nxos_interface (feature idea)
Signed-off-by: Trishna Guha <trishnaguha17@gmail.com>
* Add unit test for mtu feature
Signed-off-by: Trishna Guha <trishnaguha17@gmail.com>
* Better handling of malformed vault data envelope
If an embedded vaulted variable ('!vault' in yaml)
had an invalid format, it would eventually cause
an error for seemingly unrelated reasons.
"Invalid" meaning not valid hexlify (extra chars,
non-hex chars, etc).
For ex, if a host_vars file had invalid vault format
variables, on py2, it would cause an error like:
'ansible.vars.hostvars.HostVars object' has no
attribute u'broken.example.com'
Depending on where the invalid vault is, it could
also cause "VARIABLE IS NOT DEFINED!". The behavior
can also change if ansible-playbook is py2 or py3.
Root cause is errors from binascii.unhexlify() not
being handled consistently.
Fix is to add an AnsibleVaultFormatError exception and
raise it on any unhexlify() errors and to handle it
properly elsewhere.
Add a _unhexlify() that try/excepts around a binascii.unhexlify()
and raises an AnsibleVaultFormatError on invalid vault data.
This is so the same exception type is always raised for this
case. Previously it was different between py2 and py3.
binascii.unhexlify() raises a binascii.Error if the hexlified
blobs in a vault data blob are invalid.
On py2, binascii.Error is a subclass of Exception.
On py3, binascii.Error is a subclass of ValueError.
When decrypting the content of vault encrypted variables,
if a binascii.Error is raised it propagates up to
playbook.base.Base.post_validate(). post_validate()
handles TypeError and ValueError exceptions but not
base Exception subclasses (like py2 binascii.Error).
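A minimal sketch of the _unhexlify() wrapper idea (the real helper lives in
the vault code and its exception type sits in Ansible's error hierarchy;
details here are illustrative):

    import binascii

    class AnsibleVaultFormatError(Exception):
        """Raised when vault data is not valid hexlified content."""

    def _unhexlify(b_data):
        try:
            return binascii.unhexlify(b_data)
        except (binascii.Error, TypeError) as exc:
            # binascii.Error differs between py2 and py3, so normalize it into
            # a single vault-specific exception that callers can handle.
            raise AnsibleVaultFormatError('Vault format unhexlify error: %s' % exc)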
* Add a display.warning on vault format errors
* Unit tests for _unhexlify, parse_vaulttext*
* Add intg test cases for invalid vault formats
Fixes #28038
* implements jsonrpc message passing for ansible-connection
* implements more generic mechanism for persistent connections
* starts persistent connection in task_executor if enabled and supported
* supports using network_cli as top level connection plugin
* enhances logging for persistent connection to stdout
* Update action plugins
* Fix Python3 RPC
* Fix Junos bytes<-->str issues
* supports using netconf as top level connection plugin
* Error message when running netconf on an unsupported platform
* Update tests
* Fix `authorize: yes` for `connection: local`
* Handle potentially JSON data in terminal
* Add clarifying detail if possible on ConnectionError
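For context, a bare-bones sketch of JSON-RPC 2.0 style request/response
framing like the connection process passes over its socket (a simplification,
not the actual ansible-connection wire code):

    import json

    def jsonrpc_request(method, request_id, *args, **kwargs):
        # Encode a method call as a JSON-RPC 2.0 request string.
        return json.dumps({
            'jsonrpc': '2.0',
            'method': method,
            'params': {'args': args, 'kwargs': kwargs},
            'id': request_id,
        })

    def jsonrpc_response(request_id, result=None, error=None):
        msg = {'jsonrpc': '2.0', 'id': request_id}
        if error is not None:
            msg['error'] = error    # e.g. {'code': -32601, 'message': 'Method not found'}
        else:
            msg['result'] = result
        return json.dumps(msg)

    # Example exchange:
    # -> {"jsonrpc": "2.0", "method": "exec_command", "params": {...}, "id": 1}
    # <- {"jsonrpc": "2.0", "id": 1, "result": "..."}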
* Squashing all commits to one as suggested by John
* Adding Unit test method for the module enos_facts.py
* PEP8 and pylint issues addressed
* Trying again to remove blank line. Some scripts are required for this.
* Bug Fixing for interfaces
* Editing for over indenting issue
* E203 whitespace before ','
* Update enos.py
Added warnings argument to check_args method
* Update enos_facts.py
Added warnings to check_args method
* When getting the stack events we need to consider the case where we don't have a ClientRequestToken. Fixes #32396
* Adding tests for the case when the ClientRequestToken is not present in the stack creation.
* Renaming the stack that the test for Client Request Token requires so it won't cause collisions with the basic test.
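A hedged boto3 sketch of the defensive lookup (the event shape follows the
CloudFormation DescribeStackEvents API; the surrounding module code is
simplified):

    import boto3

    def events_for_token(stack_name, token=None, region='us-east-1'):
        """Return stack events, optionally filtered by ClientRequestToken."""
        client = boto3.client('cloudformation', region_name=region)
        events = client.describe_stack_events(StackName=stack_name)['StackEvents']
        if token is None:
            return events
        # Older operations may not carry a ClientRequestToken at all, so use
        # .get() rather than indexing into the event dict.
        return [ev for ev in events if ev.get('ClientRequestToken') == token]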
* Ensure include_role unit tests check something
This is not the case: get_tasks_vars doesn't yield
* Fix include_role unit tests
Since e609618274, include_role is not
static anymore.
* Add some tests for iptables
* Fix remove bug (check for chain removal was called twice)
* Add me as maintainer
* Fix PEP8
* Doc: Give more information on issue #18988
* Fix #18988 and test it
* Fix doc (thanks Pillou)
* enable PEP8 check for iptables
This patch addresses a number of issues, large and small, that were
identified by users in the downstream repo.
* formatting of some code
* specific option combinations leading to errors
* missing includes for unit tests
* ios_logging: Fix typo in documentation
* ios_logging: Fix traceback when setting buffered destination without size
When the size parameter is not configured while configuring the buffered
destination, a traceback occurs because validate_size expects the
parameter to be an int. Explicitly converting the value to int makes the
check work in every case.
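The fix amounts to casting before range-checking, roughly (a sketch, not the
exact module code; the bounds shown here are illustrative):

    def validate_size(value):
        # 'value' arrives from module params and may be a string or None
        if value is None:
            return 4096                      # default used for the buffered dest
        value = int(value)                   # explicit cast avoids the traceback
        if not 4096 <= value <= 4294967295:
            raise ValueError('size must be between 4096 and 4294967295')
        return value

    print(validate_size('16384'))   # -> 16384
    print(validate_size(None))      # -> 4096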
* ios_logging: Update size parameter documentation
Update the documentation of the size parameter to reflect the current behaviour
of setting a default of 4096 for the buffered destination.
* ios_logging: Add unit test
Add unit test for ios_logging testing the behaviour clarified in the previous
commits.
* ios_logging: Fix python 2.6 compliance
This module's purpose is to specifically manage the ssl keys. It
is essentially the key component of the bigip_ssl_certificate module.
The modules were separated and the key portion deprecated from
bigip_ssl_certificate in favor of this module.
* - Adds iosxr_netconf module to configure netconf service on Cisco
IOS-XR devices
* - Adds Integration test for module
- Handles diff return from load_config
* - Adds unit test for iosxr_netconf module
* Start using ClientRequestTokens in event lists
* Include request token in all reqs that support it (basically all but check mode/changeset)
* Update placebo recordings
* Add comments for CRQ popping
Module allows you to wait for a bigip device to be
"ready" for configuration. This module will wait for things like
the device coming online as well as the REST API and MCPD being
ready.
Until all of the above are online and ready, no configuration
can be made.
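The waiting behaviour boils down to a timed polling loop; a generic sketch
(the endpoint, credentials and readiness check below are placeholders, not the
module's actual logic):

    import time
    import requests

    def wait_for_ready(host, user, password, timeout=300, delay=10):
        """Poll the device's REST API until it answers, or give up after timeout seconds."""
        url = 'https://%s/mgmt/tm/sys/ready' % host     # placeholder endpoint
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                resp = requests.get(url, auth=(user, password), verify=False, timeout=10)
                if resp.status_code == 200:
                    return True                         # REST API is answering
            except requests.exceptions.RequestException:
                pass                                    # still booting; keep waiting
            time.sleep(delay)
        return False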
* better cleanup on task results display
callbacks get 'clean' copy of result objects
moved cleanup into result object itself
removed now redundant callback cleanup
moved no_log tests
* moved import as per feedback
In this refactor we moved to the most recent coding standards for
both F5 and Ansible. Many bugs were fixed and some features were
also added (such as ipv6 support).
New conventions for ansible warrant fixes to accommodate those
in bigip_partition.
This patch also includes a fix for an import that can raise an error
when Ansible unit tests run
This adds a new type of vault-password script (a 'client') that takes advantage of and enhances the
multiple vault password support.
If a vault password script basename ends with the name '-client', consider it a vault password script client.
A vault password script 'client' just means that the script will take a '--vault-id' command line arg.
The previous vault password script (as invoked by --vault-password-file pointing to an executable) takes
no args and returns the password on stdout. But it doesn't know anything about --vault-id or multiple vault
passwords.
The new 'protocol' of the vault password script takes a cli arg ('--vault-id') so that it can look up that specific
vault-id and return its password.
Since existing vault password scripts don't know the new 'protocol', a way to distinguish password scripts
that do understand the protocol was needed. The convention now is to consider password scripts that are
named like 'something-client.py' (and executable) to be vault password client scripts.
The new client scripts get invoked with the '--vault-id' they were requested for. An example:
ansible-playbook --vault-id my_vault_id@contrib/vault/vault-keyring-client.py some_playbook.yml
That will cause the 'contrib/vault/vault-keyring-client.py' script to be invoked as:
contrib/vault/vault-keyring-client.py --vault-id my_vault_id
The previous vault-keyring.py password script was extended to become vault-keyring-client.py. It uses
the python 'keyring' module to request secrets from various backends. The plain 'vault-keyring.py' script
would determine which key id and keyring name to use based on values that had to be set in ansible.cfg.
So it was also limited to one keyring name.
The new vault-keyring-client.py will request the secret for the vault id provided via the '--vault-id' option.
The script can be used without config and can be used for multiple keyring ids (and keyrings).
On success, a vault password client script will print the password to stdout and exit with a return code of 0.
If the 'client' script can't find a secret for the --vault-id, the script will exit with return code of 2 and print an error to stderr.
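A stripped-down sketch of what such a '-client' script can look like (the
real contrib/vault/vault-keyring-client.py has more options; the
'ansible-vault' keyring service name and the script name here are assumptions
for the example):

    #!/usr/bin/env python
    # my-vault-pass-client.py  (hypothetical; the basename must end in '-client')
    import argparse
    import sys

    import keyring   # third-party 'keyring' module

    def main():
        parser = argparse.ArgumentParser(description='vault password client script')
        parser.add_argument('--vault-id', dest='vault_id', required=True,
                            help='name of the vault secret to look up')
        args = parser.parse_args()

        # Look the secret up under an assumed 'ansible-vault' service name,
        # keyed by the requested vault id.
        secret = keyring.get_password('ansible-vault', args.vault_id)
        if secret is None:
            sys.stderr.write('ERROR: no secret found for vault id %s\n' % args.vault_id)
            return 2         # per the convention described above
        sys.stdout.write('%s\n' % secret)
        return 0

    if __name__ == '__main__':
        sys.exit(main())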
* documentation was not inline with other Ansible modules
* Python 3 specific imports were missing
* monitor_type is no longer required when creating a new pool; it is now the default.
* A new monitor_type choice of "single" was added for a more intuitive way to specify "a single monitor". It uses "and_list" underneath, but provides additional checks to ensure that you are specifying only a single monitor.
* host and port arguments have been deprecated for now. Please use bigip_pool_member instead.
* 'partition' field was missing from documentation.
* A note that "python 2.7 or greater is required" has been added for those who were not aware that this applies for ALL F5 modules.
* Unit tests were fixed to support the above module
* Adding a cli transport option for the bigip_command module.
* Fixing keyerror when using other f5 modules. Adding version_added for new option in bigip_command.
* Removing local connection check because the F5 tasks can be delegated to any host that has the libraries for REST.
* Using the network_common load_provider.
* Adding unit test to cover cli transport and updating previous unit test to ensure cli was not called.
* Fix rollback in junos_config
Fixes#30778
* Call `load_configuration` with rollback id in case
the id is given as input
* Pass rollback id to `get_diff()` to fetch diff from device
* Fix unit test
- old functionality is still available via direct lookup use; the following are equivalent:
with_nested: [[1,2,3], ['a','b','c']]
loop: "{{lookup('nested', [1,2,3], ['a','b','c'])}}"
- avoid squashing with 'loop:'
- fixed test to use new internal attributes
- removed most of the 'lookup docs' as these now reside in the plugins
* Make ansible_selinux facts a consistent type
Rather than returning a bool if the Python library is missing, return a dict with one key containing a message explaining there is no way to tell the status of SELinux on the system because the Python library is not present.
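Roughly, the collector now always returns a dict (a sketch; the exact keys
and message in the real fact code may differ):

    try:
        import selinux                  # provided by libselinux-python
        HAVE_SELINUX = True
    except ImportError:
        HAVE_SELINUX = False

    def collect_selinux_facts():
        if not HAVE_SELINUX:
            # Previously this was just False; a dict keeps the fact's type stable.
            return {'status': 'Missing selinux Python library'}
        facts = {'status': 'enabled' if selinux.is_selinux_enabled() else 'disabled'}
        if facts['status'] == 'enabled':
            facts['policyvers'] = selinux.security_policyvers()
        return facts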
* Fix unit test
The /etc/os-release based distro detection doesn't
seem to work for Ubuntu 10.04 (no /etc/os-release?).
So it was testing the next case which was /etc/lsb-release to
see if it is 'Mandriva'. Since the check for existence of
(/etc/lsb-release, Mandriva) was the first non-empty dist
file match, 'ansible_distribution' was set to 'Mandriva',
expecting to be corrected by the data from the dist file content.
But since the dist file parsing for Mandriva didn't match the
Ubuntu 10.04 /etc/lsb-release _and_ there is no Debian specific
lsb-release check, 'ansible_distribution' stayed at 'Mandriva'
and the dist file checking loop kept going, eventually running off
the end of the list without finding a better match.
Adding a Debian/Ubuntu specific check for /etc/lsb-release after
the Debian os-release check sets the info correctly and stops further
checking of dist files.
Fixes #30693
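For reference, a small sketch of what a Debian/Ubuntu specific
/etc/lsb-release check can look like (the real fact code wires this into its
dist-file loop rather than standing alone):

    import os

    def parse_lsb_release(path='/etc/lsb-release'):
        """Return e.g. {'DISTRIB_ID': 'Ubuntu', 'DISTRIB_RELEASE': '10.04', ...}."""
        info = {}
        if not os.path.exists(path):
            return info
        with open(path) as f:
            for line in f:
                key, sep, value = line.strip().partition('=')
                if sep:
                    info[key] = value.strip('"')
        return info

    lsb = parse_lsb_release()
    if lsb.get('DISTRIB_ID') == 'Ubuntu':
        # Trust lsb-release for the name/version and stop falling through to
        # the Mandriva (and later) dist file checks.
        distribution = 'Ubuntu'
        distribution_version = lsb.get('DISTRIB_RELEASE', 'NA')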
* Added support for retrieving LIG resources in HPE OneView
* Fixing copyright header according to review
* Swapping out config for full credentials in parameter for documentation
* Added support for retrieving Enclosures in HPE OneView
- Added unit tests
* Updated version_added to 2.5
* Changing return type of enclosure_script to string
* Fixing copyright header according to review
* Replaced config for credentials in parameters for documentation
* Fix fact failures caused by ordering of collectors
Some fact collectors need info collected by other facts.
(For example, service_mgr needs to know 'ansible_system'.)
This info is passed to the Collector.collect method via
the 'collected_facts' info.
But the order the fact collectors were running in was
not fixed, so collectors like service_mgr could
run before the PlatformFactCollector ('ansible_system', etc.),
so the 'ansible_system' fact would not exist yet.
Depending on the collector and the deps, this can result
in incorrect behavior and wrong or missing facts.
To make the ordering of the collectors more consistent
and predictable, the code that builds that list is now
driven by the order of collectors in default_collectors.py,
and the rest of the code tries to preserve it.
* Flip the loops when building collector names
Iterate over the ordered default_collectors list,
selecting collectors for the final list in order, instead
of driving it from the unordered collector_names set.
This lets the list returned by select_collector_classes
stay in the same order as default_collectors.collectors.
For collectors that have implicit deps on other fact collectors,
the default collectors can be ordered to include those early.
* default_collectors.py now uses a handful of sub-lists of
collectors that can be ordered in default_collectors.collectors.
Fixes #30753
Fixes #30623
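In effect the selection loop now looks something like this (a simplified
sketch of the ordering idea, not the actual module_utils code):

    def select_collector_classes(collector_names, default_collectors):
        selected = []
        # Drive the loop from the ordered default list, not from the unordered
        # set of requested names, so the result keeps the default ordering and
        # collectors with implicit deps (platform, distribution, ...) run first.
        for collector_class in default_collectors:
            if collector_class.name in collector_names:
                selected.append(collector_class)
        return selected

    class _C(object):                 # stand-in for a fact collector class
        def __init__(self, name):
            self.name = name

    defaults = [_C('platform'), _C('distribution'), _C('service_mgr'), _C('pkg_mgr')]
    print([c.name for c in select_collector_classes({'pkg_mgr', 'platform'}, defaults)])
    # -> ['platform', 'pkg_mgr']  (default order preserved)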
* Use vault_id when encrypting via vault-edit
On the encryption stage of
'ansible-vault edit --vault-id=someid@passfile somefile',
the vault id was not being passed to encrypt() so the files were
always saved with the default vault id in the 1.1 version format.
When trying to edit that file a second time, also with a --vault-id,
the file would be decrypted with the secret associated with the
provided vault-id, but since the encrypted file had no vault id
in the envelope there would be no match for 'default' secrets.
(Only the vault id from the envelope was included in the potential
matches, so the vault id actually used to decrypt was not.)
If that list was empty, there would be an IndexError when trying
to encrypt the changed file. This would result in the displayed
error:
ERROR! Unexpected Exception, this is probably a bug: list index out of range
Fix is two parts:
1) use the vault id when encrypting from edit
2) when matching the secret to use for encrypting after edit,
include the vault id that was used for decryption and not just
the vault id (or lack of vault id) from the envelope.
Add unit tests for #30575 and intg tests for 'ansible-vault edit'.
Fixes #30575
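A sketch of the matching change (names here are illustrative, not the real
VaultEditor/VaultLib API):

    def match_encrypt_secret(secrets, envelope_vault_id, used_vault_id):
        """secrets: ordered list of (vault_id, secret) tuples."""
        # Consider both the id from the envelope (may be 'default' or missing)
        # and the id whose secret actually decrypted the file during this edit.
        candidates = [(vid, sec) for vid, sec in secrets
                      if vid in (envelope_vault_id, used_vault_id)]
        if not candidates:
            # A clear error instead of the old 'list index out of range'
            raise ValueError('No secret found to encrypt with')
        return candidates[0]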
* Fix 'distribution' fact for ArchLinux
Allow empty wasn't breaking out of the process_dist_files
loop, so an empty /etc/arch-release would continue searching
and eventually try /etc/os-release. The os-release parsing
works, but the distro name there is 'Arch Linux', which does
not match the 2.3 behavior of 'Archlinux'.
Add an OS_RELEASE_ALIAS map for the cases where we need to get
the distro name from os-release but use an alias.
We can't include 'Archlinux' in SEARCH_STRING because a name match on its keys
but without a match on the content causes a fallback to using the first
whitespace-separated item from the file content as the name.
For os-release, that is in form 'NAME=Arch Linux'
With os-release returning the right name, this also supports the
case where there is no /etc/arch-release, but there is a /etc/os-release
Fixes #30600
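The alias map itself is tiny; conceptually (a sketch of the idea, the real
fact code hooks this into its dist-file parsing):

    import os

    # Map NAME= values from /etc/os-release to the name 2.3 reported.
    OS_RELEASE_ALIAS = {
        'Arch Linux': 'Archlinux',
    }

    def distro_from_os_release(path='/etc/os-release'):
        if not os.path.exists(path):
            return None
        with open(path) as f:
            for line in f:
                if line.startswith('NAME='):
                    name = line.strip().split('=', 1)[1].strip('"')
                    return OS_RELEASE_ALIAS.get(name, name)
        return None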
* pep8 and comment cleanup
* Fix pkg_mgr fact on OpenBSD
Add an OpenBSDPkgMgrFactCollector that hardcodes pkg_mgr
to 'openbsd_pkg'. The ansible collector will choose the
OpenBSD collector if the system is OpenBSD and the 'Generic'
one otherwise.
This removes the PkgMgrFactCollector's dependency on the
'system' fact being in collected_facts, which also
avoids ordering issues (if the pkg mgr fact is collected
before the system fact...).
Fixes #30623
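The shape of the change, condensed (a sketch; the real collectors subclass
BaseFactCollector and carry more metadata):

    import platform

    class PkgMgrFactCollector(object):
        name = 'pkg_mgr'

        def collect(self, module=None, collected_facts=None):
            # generic detection (apt, yum, dnf, ...) would live here
            return {'pkg_mgr': 'unknown'}

    class OpenBSDPkgMgrFactCollector(PkgMgrFactCollector):
        def collect(self, module=None, collected_facts=None):
            # No need to consult collected_facts['ansible_system']; this
            # collector is only selected on OpenBSD in the first place.
            return {'pkg_mgr': 'openbsd_pkg'}

    collector = (OpenBSDPkgMgrFactCollector() if platform.system() == 'OpenBSD'
                 else PkgMgrFactCollector())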
* Fix nxos provider transport warning issue
* Add default value of transport arg in provider spec
* Remove default value of transport arg in top level spec
This ensures the deprecation warning is seen only in case transport
is given as a top level arg in the task
* Refactor nxos modules to reference transport value from provider
spec
* Fix unit test
* Remove transport arg assignment in nxos action plugin
* As assigning the transport value is handled in the provider spec,
top level task arg assignment is no longer required
* Add Routing Engine Facts
- Map routing engine output information to routing_engines facts dict.
- Add fact 'has_2RE', which is a quick way to determine how many REs
the chassis has.
* Fix a typo
* Fix more typos
* Add slot number to routing_engine dict
* Add facts about the installed chassis modules
* Fix typo
* Fixed another typo
* Fix Path
* Change path again.
* More Typos
* Add some debugging
* Add additional information for hardware components.
- Return information about the Routing Engines.
- Return a fact to easily determine if the device
has two routing engines.
- Return information about the hardware modules.
* Addressed PEP8 standard failures.
* Add unit test fixtures.
* Rename fixture.
* Fix unit test failures.
- Rename the fixture file to what the unit test expects.
- Strip out junos namespace attributes.
* Scrubbed the routing engine serial numbers.
* Add unit test facts for new tests.
- Add unit test for ansible_net_routing_engines fact
- Add unit test for ansible_net_modules fact
- Add unit test for ansible_net_has_2RE
* Fixed spacing.
* Don't ask for password confirm on 'ansible-vault edit'
This is to match the 2.3 behavior on:
ansible-vault edit encrypted_file.yml
Previously, the above command would consider that a 'new password'
scenario and prompt accordingly, ie:
$ ansible-vault edit encrypted_file.yml
New Password:
Confirm New Password:
The bug was caused by 'create_new_password' being used for
the 'edit' action. This also caused the previous implicit 'auto prompt'
to get triggered and prompt the user.
Fix is to make auto prompt explicit in the calling code to handle
the 'edit' case where we want to auto prompt but we do not want
to request a password confirm.
Fixes #30491
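The distinction boils down to prompting without a confirmation for 'edit'
(a sketch with illustrative names, not the actual cli code):

    import getpass

    def prompt_vault_password(confirm_new=False):
        # 'edit' prompts once for the existing password; 'create'/'encrypt'
        # ask for a new password and confirm it.
        prompt = 'New Vault password: ' if confirm_new else 'Vault password: '
        password = getpass.getpass(prompt)
        if confirm_new:
            confirm = getpass.getpass('Confirm New Vault password: ')
            if password != confirm:
                raise ValueError('Passwords do not match')
        return password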
Unit tests are sometimes run without network connectivity in build
systems. Make that work correctly by mocking out _get_url_data with the
expected return value.
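For example, a test in that style might look like this hedged sketch (the
helper and module names are stand-ins, not the actual code under test):

    import unittest
    from unittest import mock   # on py2 this is the external 'mock' package

    def _get_url_data(url):
        """Stand-in for the helper the real module uses to hit the network."""
        raise RuntimeError('no network in the build system')

    def fetch_and_parse(url):
        data, ok = _get_url_data(url)
        return data.upper() if ok else None

    class TestWithoutNetwork(unittest.TestCase):
        @mock.patch('%s._get_url_data' % __name__,
                    return_value=('expected payload', True))
        def test_parses_payload_offline(self, mock_get):
            # The unit under test never touches the network.
            self.assertEqual(fetch_and_parse('https://example.com/data'),
                             'EXPECTED PAYLOAD')
            mock_get.assert_called_once_with('https://example.com/data')

    if __name__ == '__main__':
        unittest.main()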