This is part of the 2.2 refactor to extract the Cli class into a
separate module. This renames netcmd to netcli, which is consistent
with the network shared module implementations.
This removes top-level functions from the ios module and moves them
into the specific modules. This update also includes some cleanup
of the Cli transport.
This restructure moves the Cli object to netcmd and includes a roll-up
of minor bugfix updates to CommandRunner:
* CommandRunner now only allows one instance of a command in the stack and
raises an exception if a duplicate command is detected
* CommandRunner now caches returns based on command and output
* CommandRunner is now responsible for creating Command instances
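A rough sketch of the behaviour these bullets describe; the class layout and the connection API used here are assumptions for illustration, not the actual netcli code:

```python
# Illustrative only: class and attribute names are assumptions, not the real API.
class DuplicateCommandError(Exception):
    pass


class Command(object):
    def __init__(self, command, output='text'):
        self.command = command
        self.output = output
        self.response = None


class CommandRunner(object):
    def __init__(self, connection):
        self.connection = connection
        self._commands = {}   # (command, output) -> Command, enforces uniqueness
        self._responses = {}  # (command, output) -> cached response

    def add_command(self, command, output='text'):
        key = (command, output)
        if key in self._commands:
            raise DuplicateCommandError('duplicate command detected: %s' % command)
        # the runner, not the caller, builds the Command instance
        self._commands[key] = Command(command, output)

    def run(self):
        for key, cmd in self._commands.items():
            if key not in self._responses:
                # hypothetical connection API; responses are cached per (command, output)
                self._responses[key] = self.connection.run_commands([cmd.command])[0]
                cmd.response = self._responses[key]
        return self._responses
```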
test/units/plugins/action/test_action.py had code
for handling a bug in Python 3.4's mock_open that
causes errors when reading binary data.
This was moved to compat/tests/mock.py so other tests
can use it by default.
This update will now remove any keys from results that are created using
private names. Private names are identified by a double underscore (__)
on either side of the key name.
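For illustration, the filtering rule could be pictured like this (the helper name is hypothetical, not the actual implementation):

```python
def strip_private_keys(results):
    """Drop any result key wrapped in double underscores, e.g. '__internal__'."""
    return dict(
        (key, value) for key, value in results.items()
        if not (key.startswith('__') and key.endswith('__'))
    )

# strip_private_keys({'changed': True, '__internal__': 1}) == {'changed': True}
```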
* actions/unarchive: fix unarchive from remote url
Currently unarchive from remote url does not work because the core
unarchive module was updated to support 'remote_src' [1], but the
unarchive action plugin was not updated for this. This causes failures
because the action plugin assumes it needs to copy a file to the
remote server, but in the case of downloading a file from a remote
url a local file does not exist, so an error occurs when the file is
not found.
[1] https://github.com/ansible/ansible-modules-core/commit/467516e
* test_unarchive: fix test with wrong remote_src use
The non-ascii filenames test had improperly set remote_src=yes even
though it was actually copying the file from the local machine (i.e.
the file did not already exist remotely). This test was passing
until the remote_src behavior of unarchive was fixed in 276550f.
The calculation for max_fail_percentage was moved into the linear
strategy a while back, and works better there in the strategy layer
rather than at the PBE layer. This patch removes it from the PBE layer
and tweaks the logic controlling whether or not the next batch is run.
Fixes #15954
Fixes #10779
Refactor some of the block device, mount point, and
mtab/fstab facts collection for Linux for better
performance on systems with lots of block devices.
Instead of invoking 'lsblk' for every entry in mtab,
invoke it once, then map the results to mtab entries.
Change the args used for invoking 'findmnt', since the
previous combination of args conflicted and would
always fail on some systems depending on version.
Add test cases for facts Hardware()/Network()/Virtual() classes
__new__ method and verify they create the proper subclass based
on the platform.system() results.
Split out all the 'invoke some command and grab its output'
bits related to Linux mount paths into their own methods so
it is easier to mock them in unit tests.
Fix the DragonFly* classes that did not define a 'platform'
class attribute. This caused FreeBSD systems to potentially
get the DragonFly* subclasses incorrectly. In practice it
didn't matter much since the DragonFly* subclasses duplicated
the FreeBSD ones. Actual DragonFly systems would end up with
the generic Hardware() etc. instead of the DragonFly* classes.
Fix Hardware.__new__() on PY3; passing args to __new__
would cause "object() takes no parameters" errors. So
check for PY3 and just call __new__ without the args.
See
https://hg.python.org/cpython/file/44ed0cd3dc6d/Objects/typeobject.c#l2818
for some explanation.
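Roughly, the guard described above looks like this (a sketch, not the exact facts.py source):

```python
import sys


class Hardware(object):
    platform = 'Generic'

    def __new__(cls, *args, **kwargs):
        # object.__new__ on Python 3 refuses extra arguments, so only
        # forward them on Python 2.
        if sys.version_info[0] >= 3:
            return super(Hardware, cls).__new__(cls)
        return super(Hardware, cls).__new__(cls, *args, **kwargs)
```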
The flag new_pb_basedir is not being utilized in Inventory._get_hostgroup_vars,
leading to the situation where an inventory with no playbook basedir set will
read host/group vars from the $CWD, regardless of the inventory and/or playbook
relative location. This patch corrects that by not using the playbook basedir
if it is unset (None).
This patch also corrects a bug in which the VariableManager would accumulate
host/group vars files, which could lead to incorrect vars files being used when
playbooks are run from different directories containing their own group/host vars
directories.
Fixes #16953
Copying the TaskInclude task (which is the parent) before loading the blocks
makes the code much more simple and clean, and fixes a bug introduced during
the performance improvement changes (and specifically the change which moved
things to a single-parent model).
Fixes #17064
Since we introduced static includes in 2.1, this broke the functionality
where a notify could be sent to a named include statement, triggering all
handlers contained within the include. This patch fixes that by adding a
search through the parents of a handler for any TaskIncludes which match.
Fixes #15915
We want to NOT consider the async task as failed if the result is
not parsed, which was the intent of:
https://github.com/ansible/ansible/pull/16458
However, the logic doesn't actually do that because we default
the 'parsed' value to True. It should default to False so that
we continue waiting, as intended.
Instead of immediately returning a failed code (indicating a break in
the play execution), we internally 'or' that failure code with the result
(now an integer flag instead of a boolean) so that we can properly handle
the rescue/always portions of blocks and still remember that the break
condition was hit.
Fixes #16937
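Conceptually, the run result becomes a set of bit flags that are OR-ed together instead of short-circuiting; a small sketch with assumed flag names:

```python
RUN_OK = 0
RUN_ERROR = 1
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 4


def combine_results(task_results):
    """OR per-task flags together so a break condition is remembered
    while rescue/always sections are still given a chance to run."""
    result = RUN_OK
    for rc in task_results:
        result |= rc
    return result

# combine_results([RUN_OK, RUN_FAILED_HOSTS, RUN_OK]) == RUN_FAILED_HOSTS
```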
* Introduce new 'filetree' lookup plugin
The new "filetree" lookup plugin makes it possible to recurse over a tree of files within the task loop. This makes it possible to e.g. template a complete tree of files to a target system with little effort while retaining permissions and ownership.
The module supports directories, files and symlinks.
The item dictionary consists of:
- src
- root
- path
- mode
- state
- owner
- group
- seuser
- serole
- setype
- selevel
- uid
- gid
- size
- mtime
- ctime
EXAMPLES:
Here is an example of how we use with_filetree within a role:
```yaml
- name: Create directories
  file:
    path: /web/{{ item.path }}
    state: directory
    mode: '{{ item.mode }}'
    owner: '{{ item.owner }}'
    group: '{{ item.group }}'
    force: yes
  with_filetree: web/
  when: item.state == 'directory'

- name: Template complete tree
  file:
    src: '{{ item.src }}'
    dest: /web/{{ item.path }}
    state: 'link'
    mode: '{{ item.mode }}'
    owner: '{{ item.owner }}'
    group: '{{ item.group }}'
  with_filetree: web/
  when: item.state == 'link'

- name: Template complete tree
  template:
    src: '{{ item.src }}'
    dest: /web/{{ item.path }}
    mode: '{{ item.mode }}'
    owner: '{{ item.owner }}'
    group: '{{ item.group }}'
    force: yes
  with_filetree: web/
  when: item.state == 'file'
```
SPECIAL USE:
Some of these properties have special uses:
- root: Makes it possible to filter by original location
- path: Is the relative path to root
- uid, gid: Make it possible to force-create by exact id, rather than by name
- size, mtime, ctime: Make it possible to filter out files by size, mtime or ctime
TODO:
- Add snippets to documentation
* Small fixes for Python 3
* Return the portion of the file’s mode that can be set by os.chmod()
And remove the exists=True, which is redundant.
* Use lstat() instead of stat() since we support symlinks
* Avoid a few possible stat() calls
* Bring in line with v1.9 and hybrid plugin
* Remove glob module since we no longer use it
* Included suggestions from @RussellLuo
- Two blank lines will be better. See PEP 8
- I think `if props is not None` is more conventional 😄
* Support failed pwd/grp lookups
* Implement first-found functionality in the path-order
* when including statically, make sure that all parents were also included
statically (issue #16990)
* properly resolve nested static include paths
* print a message when a file is statically included
Fixes #16990
When unittesting this we found that the platform-selecting class
hierarchies weren't working in all cases. If the subclass was directly
created (i.e. LinuxHardware()), then it would use its inherited __new__()
to try to create itself. The inherited __new__ would look for
subclasses and end up calling its own __new__() again. This would
recurse endlessly. The new code detects when we want to find a subclass
to create (when the base class is used, i.e. Hardware()) vs. when to
create the class itself (when the subclass is used, i.e. LinuxHardware()).
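A simplified version of that detection (the class layout is trimmed down for illustration and is not the exact facts.py source):

```python
import platform
import sys


class Hardware(object):
    platform = 'Generic'

    def __new__(cls, *args, **kwargs):
        # Only pick a platform-specific subclass when the base class itself is
        # instantiated; creating LinuxHardware() directly must not re-enter
        # this selection logic and recurse.
        if cls is Hardware:
            for subclass in cls.__subclasses__():
                if subclass.platform == platform.system():
                    cls = subclass
                    break
        if sys.version_info[0] >= 3:
            return super(Hardware, cls).__new__(cls)
        return super(Hardware, cls).__new__(cls, *args, **kwargs)


class LinuxHardware(Hardware):
    platform = 'Linux'
```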
Rather than repeatedly searching for tasks by uuid via iterating over
all known blocks, cache the tasks when they are added to the PlayIterator
so the lookup becomes a simple key check in a dict.
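The idea can be sketched roughly like this (attribute names are assumptions based on the description, not the actual PlayIterator code):

```python
class TaskCache(object):
    """Keep a uuid -> task mapping so lookups avoid walking every block."""

    def __init__(self):
        self._uuid_cache = {}

    def add_tasks(self, tasks):
        # called as blocks are added to the iterator
        for task in tasks:
            self._uuid_cache[task._uuid] = task

    def get_task(self, uuid):
        # a plain dict lookup instead of iterating over all known blocks
        return self._uuid_cache.get(uuid)
```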
It is possible that a block is copied prior to validation, in which case
some fields (like when) which should be something other than a string might
not be. Using validate() in copy() is relatively harmless and ensures the
blocks are in the proper structure.
This also cleans up some of the finalized logic from an earlier commit and
adds similar logic for validated.
Fixes #17018
After post_validate() is called on an object, there should be no
need to continue looking up at parent attributes. This patch adds a
new flag (_finalized) which is set to True at the end of post_validate,
and getattr will not look beyond its own attributes from that point on.
* Allow to make the jsonfile cache files pretty (indented and sorted)
Since the json cache files are condensed, it is not very practical to look for something in them. Having indented/sorted cache files makes debugging and playbook/inventory development a lot easier to do.
I made it configurable in case people would object to the performance hit this would have, but honestly, they should probably be looking at other cache plugins instead IMO.
* Removed the config option and documentation changes
* Query lookup plugin
* Add license and docstrings
* Add python3-ish imports
* Change query plugin type from lookup to filter
* Switch from dq to jsonpath_rw
* Add integration test for query filter
* Rename query filter to json_query
* Add jsonpath-rw
* Rename query filter to json_query
* Switch query implementation from jsonpath-rw to jmespath
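Under the hood the filter hands the expression to the jmespath library; a minimal standalone illustration (the data here is made up):

```python
import jmespath

data = {"domains": [{"name": "a.example.com", "cluster": "east"},
                    {"name": "b.example.com", "cluster": "west"}]}

# json_query-style lookup: prints ['a.example.com', 'b.example.com']
print(jmespath.search("domains[*].name", data))
```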
Run setfacl/chown/chmod on each temp dir and file.
This fixes temp file permissions handling on platforms such as FreeBSD
which always return success when using find -exec. This is done by
eliminating the use of find when setting up temp files and directories.
Additionally, tests that now pass on FreeBSD have been enabled for CI.
Due to the way we load plugins, internally to Python there can be issues when
the debug strategy is loaded after the linear strategy. To work around this,
we're changing the import line for the linear strategy to avoid the problem.
Related to #16825
uri:
follow_redirects: no
Will lead yaml to set follow_redirects=False. This is problematic when
the module parameter is not a boolean value but a string. For instance:
follow_redirects = dict(required=False, default='safe', choices=['all', 'safe', 'none', 'yes', 'no']),
Our parameter validation code ends up getting follow_redirects="False"
instead of "no". The 100% fix is for the user to quote their strings in
playbooks like:
uri:
follow_redirects: "no"
But we can fix quite a few common cases by trying to switch "False" back
into the string that it was specified as. We only do this if there is
only one correct choices value that could have been specified. In the
follow_redirects example, a value of "True" only maps back to "yes" and
a value of "False" only maps back to "no" so we can do this. If choices
also contained "on" and "off" then we couldn't map back safely and would
need to force the module author to change the module to handle this
case.
Fixes parts of the following PRs:
* https://github.com/ansible/ansible-modules-core/pull/4220
* https://github.com/ansible/ansible-modules-extras/pull/2593
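A hedged sketch of the reverse mapping described above; the helper name and the truthy/falsy sets are illustrative, and the real logic lives in the AnsibleModule parameter checking:

```python
def restore_bool_choice(value, choices):
    """Map a YAML-coerced boolean back to the one string choice it could
    have come from; leave it alone if the mapping would be ambiguous."""
    if not isinstance(value, bool):
        return value
    truthy = [c for c in choices if c in ('yes', 'on', 'true', '1')]
    falsy = [c for c in choices if c in ('no', 'off', 'false', '0')]
    candidates = truthy if value else falsy
    if len(candidates) == 1:
        return candidates[0]
    return value  # e.g. both 'yes' and 'on' allowed: cannot map back safely

# restore_bool_choice(False, ['all', 'safe', 'none', 'yes', 'no']) == 'no'
```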
* These can still race when multiple ansible processes are created at
the same time.
* Reverse the order of expanduser and expandvars in unfrakpath() so that
tildes in environment variables will be handled.
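In other words, environment variables are expanded first so that a tilde stored inside one still gets resolved; a small sketch (the function name here is only illustrative):

```python
import os


def unfrack_sketch(path):
    # expandvars first: with MYDIR='~/work', '$MYDIR' -> '~/work',
    # then expanduser resolves the tilde -> '/home/user/work'
    return os.path.normpath(os.path.expanduser(os.path.expandvars(path)))
```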
When a task result has an empty results list, the
list should be ignored when determining the results
of `_check_key`. Here the empty list is treated the
same as a non-existent list.
This fixes a bug that manifests itself with squashed
items - namely the task result contains the correct
value for the key, but an empty results list. The
empty results list was treated as zero failures
when deciding which handler to call - so the task
shows as a success in the output, but is deemed to
have failed when deciding whether to continue.
This also demonstrates a mismatch between task
result processing and play iteration.
A test is also added for this case, but it would not
have caught the bug - because the bug is really in
the display, and not the success/failure of the
task (visually the test is more accurate).
Fixes ansible/ansible-modules-core#4214
This feature changes the scalar value of `serial:` to a list, which
allows users to specify a list of values, so batches can be ramped
up (commonly called "canary" setups):
- hosts: all
  serial: [1, 5, 10, "100%"]
  tasks:
    ...
* Revert "There can be only one localhost"
This reverts commit 5f1bbb4fcd.
this broke several usages of localhost, see #16882, #16898 and #16886
* ensure there is only 1 localhost
fixes #16886, #16882 and #16898
- make sure localhost exists before returning it
- optimized host caching
- ensure we always return a host object
The module level function defs for gcdns_connect() and
gce_connect() provide a default arg for 'provider' that
references into the libcloud module. If the libcloud
modules were not installed, the gce/gcdns python modules
would throw ImportError.
Let the provider arg default to None and if not provided,
set it to the default libcloud.compute.types.Provider.*
value if the modules are installed.
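The pattern being described, sketched with hypothetical names (module.fail_json is the usual AnsibleModule error path):

```python
def gce_connect_sketch(module, provider=None):
    # Import inside the function so the file can be imported without libcloud.
    try:
        from libcloud.compute.types import Provider
        from libcloud.compute.providers import get_driver
    except ImportError:
        module.fail_json(msg='libcloud with GCE support is required for this module')
    if provider is None:
        # Default resolved only after the import succeeded, never at def time.
        provider = Provider.GCE
    return get_driver(provider)
```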
The lack of a comma caused the statement to always raise a
`TypeError`, because Python interpreted `value (list, tuple, dict)` as a call
to value with the arguments list, tuple, and dict.
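A minimal reproduction of the difference the comma makes:

```python
value = "example"

# Broken: without the comma this is parsed as value(list, tuple, dict),
# which raises TypeError ("'str' object is not callable") every time.
# isinstance(value (list, tuple, dict))

# Fixed: the comma makes the second argument a tuple of types.
print(isinstance(value, (list, tuple, dict)))  # False for a plain string
```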
This is a refactoring of the existing GCE utility module to support other projects on Google Cloud Platform.
The previous gce.py module was hard-coded specifically for GCE, and attempting to use it with other projects in GCP failed.
See https://github.com/ansible/ansible/pull/15918#issuecomment-220165913 for more detail.
This has also been an issue for others in the past, although they've handled it by simply
duplicating some of the logic of gce.py in their own modules.
- The existing gce.py module was renamed to gcp.py, and modified to remove any
imports or other code that refers to libcloud.compute or GCE (the GCE_* params were
retained for compatibility). I also renamed the gce_connect function to gcp_connect,
and modified the function signature to make supplying a provider, driver, and agent
information mandatory.
- A new gce.py module was created to handle connectivity to GCE. It imports the
appropriate libcloud.compute providers and drivers, and then passes them on
to gcp_connect in gcp.py. The constants and function signatures are the same
as the old gce.py, so compatibility with existing modules is retained.
- A new gcdns.py module was created to support PR ansible/ansible-modules-extras#2252
for two new Google Cloud DNS modules, and to demonstrate support for a non-GCE
Google Cloud service. It follows the same basic structure as the new gce.py module,
but imports from libcloud.dns instead.
I'm not sure why that would be desirable -- we really want __version__
to come from the controller whereas importing will come from the client
node. If it turns out there was a reason to do that, please be sure to
use an exception handler that catches all exceptions instead of only
catching ImportError:
```
try:
    from ansible.release import __version__, __author__
except:
    __version__ = [...]
```
Fixes #16523
* switch cwd to basedir of task
This restores previous behaviour in pre 2.0 and allows for 'local type' plugins
and actions to have a more predictable relative path.
fixes #14489
* removed FIXME since prev commit 'fixes' this
* fix tests, now they need a loader (thanks jimi!)
* add check_mode option for tasks
includes example testcases for the template module
* extend check_mode option
* replace always_run, see also proposal rename_always_run
* rename always_run where used and add deprecation warning
* add some documentation
* have check_mode overwrite always_run
* use unique template name to prevent conflicts
test_check_mode was right before, but failed due to using the same filename as other roles
* still mention always_run in the docs
* set deprecation of always_run to version 2.4
* fix rst style
* expand documentation on per-task check mode
Now systemd will run even if the service module is invoked with parameters that it does not support;
these will be removed before invoking systemd and a warning will be issued.
This facility will work for any new service modules.
A simple import of cryptography can throw several types of errors. For example,
if `setuptools` is less than cryptography's minimum requirement of 11.3, then
this import of cryptography will throw a VersionConflict here. An earlier case
threw a DistributionNotFound exception.
An optional dependency should not stop ansible. If the error is more than
an ImportError, log a warning, so that errors can be fixed in ansible or
elsewhere.
This bug was introduced in 3ced6d3, where getting vars from a role
did not follow the dep chain. This was originally hidden by the fact
that we got vars twice (from the block and from the roles directly).
Fixes #16729
* fixed lookup search path
added ansible_search_path var that contains the proper list, in order
removed roledir var, which was only used by first_found; the rest used role_path
added a needle function for lookups that mirrors the action plugin one, so now
both types of plugins use the same pathing.
* added missing os import
* renamed as per feedback
* fixed missing rename in first_found
* also fixed first_found
* fixed import to match new error class
* fixed getattr ref
* moved tests from filters to actual jinja2 tests
also removed some unused declarations and imports
* split tests into their own docs
removed isnan as jinja2's existing 'number' test already covers the same case
added missing docs for several tests
* updated as per feedback
2e003adb added the ability for tasks using any_errors_fatal to fail
when there were unreachable hosts. However that patch used the running
unreachable hosts data rather than the results from the current task,
which causes failures when any run_once or BYPASS_HOST_LOOP task is hit
after an unreachable host causes a failure. This patch corrects that by
using the current set of results to determine if any hosts were
unreachable during the last task only.
Fixes ansible/ansible-modules-core#4160
This adds an action plugin that will allow config and template modules
to be merged into a single module. Once completed this will supersede
the net_template action plugin.
* add load_config() for loading a set of configuration commands
* add load_candidate() function for loading a candidate config
* updates shared module to provide NetworkModule instead of get_module
* fixes Cli transport implementation for 2.2 refactor
* updates ios documentation fragments with new options
* diff functions now split out for easier troubleshooting
* added dumps() function to serialize config objects to strings
* difference() can now expand all blocks instead of just singular blocks
includes changes from PR ansible/ansible#16636 and refactors for the
NetworkModule changes
new features
* ios now supports transport=restcon with additional arguments
* ModuleStub refactored into common network shared module
* import temporary get_module() function (to be removed prior to 2.2 final)
This is a temporary change to keep the get_module() function until all
of the network module refactoring is completed to avoid breaking them
in devel. The get_module() function should not be used and will be
removed before 2.2 final.
* Update IOS with new NetworkModule
* Remove redundant EOS code
* `authorize` can get rolled into NetCli
* Fix up IOS to where EOS is.
* Update IOSXR for NetworkModule
* collections is unnecessary
Since Ansiballz, we no longer need to import basic directly into
a new-style module. Some modules, like the Networking modules, may
import basic in their own module_utils files and the module will import
that specialized module_util file rather than basic.
* Don't treat parsing problems as async task timeout
If there is a problem reading/writing the status file that manifests as
not being able to parse the data, that doesn't mean the task timed out;
it means there was likely a temporary problem. Move on and
keep polling for success. The only things that should cause the async
status to not be parseable are bugs in the async_runner.
* Add comment explaining not bailing out of loop
* Return different error when result is unparseable
* Remove extraneous else
* Instead of rebuilding the handler list all over the place, we now
compile the handlers at the point the play is post-validated so that
the view of the play in the PlayIterator contains the definitive list
* Assign the dep_chain to the handlers as they're compiling, just as we
do for regular tasks
* Clean up the logic used to find a given handler, which is greatly
simplified by the above changes
Fixes #15418
The `boto3_conn` function requires a module argument, and calls
`module.fail_json` if the connection doesn't receive enough arguments.
In non-module settings like inventory scripts, there is no module to be
passed.
The `boto3_inventory_conn` function takes the same arguments except for
`module`, and both call _boto3_conn which doesn't require a module be
passed.
* fixes lots of bugs in the get_config function so it performs correctly
* refactors load_config into load_candidate
* adds load_config function to convert commands to NetworkConfig
The Command object can now store the response from executing the command
to allow it to be retrieved later by command name. This change updates
the Command instance with the response before returning.
This adds a new method that will return the output from a specified
command that has already been executed by the CommandRunner. The new
method, get_command, takes a single argument which is the full name
of the command to retrieve.
6eefc11c converted task.loop_control into an object, but while the other
callers were updated to use .loop_var instead of .get('loop_var'), this
site was overlooked.
This can be reproduced by including with loop_control a file that does
set_fact; a simple regression test along these lines is included.
This fix prevents a broken pipe exception from occurring when password-less
SSH is configured and the sshpass process exits and closes the pipe before
the password is written to the pipe.
We want to update host vars for all hosts (even those that might
have failed), and in the case of a refresh_inventory, the code has
a stale restrictions list at this point anyway.
* smarter function to figure out relative paths
takes list of paths in order of relevance to current task
and does the dwim magic on them
* shared function for action plugins using new dwim
unify path construction and error info/messaging
made include and role non-exclusive
corrected order and now smarter about tasks
includes inside roles are currently broken as they don't provide the correct role data
made dirname a full match to avoid corner cases
* migrated action plugins to new dwim function
reported plugins to use exceptions instead of info
* clarified needle
In the case of using YAML anchors/aliases, YAML actually uses references
to the duplicated object so any modifications to the original impacts
later uses of the object.
Fixes #13575
* Lookup unencrypted password must not include salt
* Integration test lookup: remove previous directory
* Test that lookup password doesn't return salt
* Lookup password: test behavior with empty encrypt parameter
Closes #16189
* Remove unnecessary copying of values from parents to role deps, as
this can cause problems when roles have multiple parents (or the same
parents with different params specified through deps)
* Since we're already checking the dep chain in the block for role
things (which every task in a role should have), it is not necessary
to check the role directly in case it improperly grabs something
Fixes #14438
Our custom encoder for the to_json filter was simply returning the
object if it was not a HostVars object, leading in some cases to a
TypeError when the data contained an undefined variable. This led
to an odd error message being propagated up, so we now properly catch
this as an undefined variable error.
Fixes #15610
Again, as we're carrying failed/unreachable hosts forward from play to play via
internal structures, we need to remember which ones had previously failed so that
unrelated host failures don't inflate the numbers for a given serial batch in the
PlaybookExecutor causing a premature exit.
Fixes #16364
The listen statement on handlers should have supported a list, however
it was broken in the revision of the pub/sub feature based on the handler
revamp. This patch corrects the bug, so this works again:
- name: some handler
  ...
  listen:
    - some target
    - another target
Fixes #16378
* add new module network
* move EOS to NetworkModule
* shell.py Python 3.x compatibility
* implements the Command class through the connection for eos
This implements a new Command class that specifies the cli command
and output format. This removes the need to batch commands through
the connection
* initial add of netcmd module
Due to the fact that roles may be instantiated with different sets of
params (multiple inclusions of the same role or via role dependencies),
simply tracking notified handlers by name does not work. This patch
changes the way we track handler notifications by using the handler
object itself instead of just the name, allowing for multiple internal
instances. Normally this would be bad, but we also modify the way we
search for handlers by first looking at the notifying tasks dependency
chain (ensuring that roles find their own handlers first) and then at
the main list of handlers, using the first match it finds.
This patch also modifies the way we setup the internal list of handlers,
which should allow us to correctly identify if a notified handler exists
more easily.
Fixes #15084
This removes the extra layer of quotes around values in the 'args' file.
These quotes were there before the pipes.quote() call was added, but
were not removed, resulting in too much quoting.
Manifests as the following stack trace
File "/usr/local/Cellar/ansible/2.0.1.0/libexec/lib/python2.7/site-packages/ansible/utils/display.py", line 259, in error
new_msg = u"ERROR! " + msg
TypeError: coercing to Unicode: need string or buffer, AnsibleParserError found
This makes Ansible no longer set LC_ALL for remote systems. It is up to
the individual modules to set LC_ALL if they need it for screenscraping
the output from a program.
This is the 2.2 followup for #15138
Problem: When setting the file permissions on the remote server for
unprivileged users ansible expects that a chown will fail for unprivileged
users. For some systems (e.g. HP-UX) this is not the case.
Solution: Change the order how ansible sets the remote permissions.
* If the remote_user sudo's to an unprivileged user then we attempt to
grant the unprivileged user access via file system acls.
* If granting file system acls fails we try to change the owner of the
file with chown which only works in case the remote_user is privileged
or the remote systems allows chown calls by unprivileged users (e.g.
HP-UX)
* If the chown fails we can set the file to be world readable so that
the second unprivileged user can read the file. Since this could allow
other users to get access to private information, we only do this if
ansible is configured with "allow_world_readable_tmpfiles" in the
ansible.cfg.
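The fallback order can be summarised in a short sketch; the command invocations are simplified and the helper names are made up:

```python
import subprocess


def _run(cmd):
    """Return True if the command exists and exits 0."""
    try:
        return subprocess.call(cmd) == 0
    except OSError:
        return False


def grant_unprivileged_access(path, become_user, allow_world_readable=False):
    """Try file system ACLs first, then chown, then world-readable as a last resort."""
    if _run(['setfacl', '-m', 'u:%s:r-x' % become_user, path]):
        return 'acl'
    if _run(['chown', become_user, path]):
        return 'chown'
    if allow_world_readable and _run(['chmod', 'a+r', path]):
        # only when allow_world_readable_tmpfiles is enabled in ansible.cfg
        return 'world_readable'
    return 'failed'
```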
When the PYTHONPATH is an empty string python will treat it as though
the cwd is in the PYTHONPATH. This can be undesirable. So make sure we
delete PYTHONPATH from the environment altogether in this case.
Fixes #16195
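The fix amounts to dropping the variable rather than passing an empty value through; roughly:

```python
import os

env = dict(os.environ)
# An empty PYTHONPATH makes Python treat the current working directory as
# importable, so remove the variable entirely instead of forwarding ''.
if env.get('PYTHONPATH') == '':
    del env['PYTHONPATH']
```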
Symlinks inside of the chroot were failing because we weren't able to
determine if they were pointing to a real file or not. We could write
some complicated code to walk the symlink path taking into account where
the root of the tree is but that could be fragile. Since this is just
a sanity check, instead we just assume that the chroot is fine if we
find that /bin/sh in the chroot is a symlink. Can revisit if it turns
out that many chroots have a /bin/sh that's a broken symlink.
Fixes #16097
The junos network module will now properly use the ssh key file if its
passed from the playbook to authenticate to the remote device. Prior
to this commit, the ssh keyfile was ignored.
When setuptools installs a python module (as is done via python setup.py
install) It puts the module into a subdirectory of site-packages and
then creates an entry in easy-install.pth to load that directory. This
makes it difficult for Ansiballz to function correctly as the .pth file
overrides the sys.path that the wrapper constructs. Using
sitecustomize.py fixes this because sitecustomize overrides the
directories handled in .pth files.
Fixes #16187
AIX ssh does not seem to like compression, moved it to ssh_args
to allow making it configurable. Note that those using ssh_args
already will need to add it explicitly to keep compression.
* Give a module the possibility to known its own name
This is useful for logging and reporting and fixes the longstanding problem with syslog-messages:
May 30 15:50:11 moria ansible-<stdin>: Invoked with ...
now becomes:
Jun 1 17:32:03 moria ansible-copy: Invoked with ...
This fixes #15830
* Rename the internal name from module.ansible_module_name to module._name
* Fix: create retry_files_save_path if it doesn't exist
Ansible documentation states that retry_files_save_path directory will be
created if it does not already exist. It currently doesn't, so this patch
fixes it :)
* Use makedirs_safe to ensure thread-safe dir creation
@bcoca suggested to use the makedirs_safe helper function :)
The changes to exclude implicit localhosts from group patterns exposed
the bug that we sometimes create multiple implicit localhosts, which
caused some bugs with things like includes, where the host was used as
an entry into a dict, so having multiple meant that the incorrect host
(with a different uuid) was found and includes were not executed for
implicit localhosts.
This allows the PlaybookExecutor to receive more information regarding
what happened internal to the TaskQueueManager and strategy, to determine
things like whether or not the play iteration should stop.
Fixes #15523
The nxos cli provider would not properly handle ssh key files passed
from the playbook task. The ssh_keyfile argument is now properly
passed to the ssh authentication method
This fix addresses the bug reported in #3862.
Also updates doc on variable precedence, as it was incorrect for the
order of play vars/vars_prompt/vars_files in relation to set_fact and
registered variables.
Fixes #14702, Fixes #14826
Since we now use the PlayIterator to carry forward failures from previous
play executions, in the event that some hosts which had previously failed
are not in the current inventory we now create a stub state instead of
raising an error.
An exception was raised when trying to use ssh-agent for authentication to
ios devices. This fix enables ssh-agent and enables the use of password-protected
ssh keys. There is one additional fix to capture authentication
exceptions nicely.
* Port urls.py to python3
Fixes (largely normalizing byte vs text strings) for python3
* Rework what we do with attributes that aren't set already.
* Comments
Has already been transferred as a tempfile.
This fixes the error in https://github.com/ansible/ansible/issues/16125
but there may be higher level issues that should be fixed as well (other
modules might be able to cause status fields like failed and changed to
return a censored string instead of a bool). So leaving 16125 open for
now.
If someone runs:
ansible all -m file state=present
the error message is "Missing target hosts", which is misleading, since
the target hosts are there; the problem is the missing '-a'.
* In the VariableManager, we were not properly tracking if a file
had already been loaded, so we continuously append data to the end
of the list there for host and group vars, meaning large sets of data
are duplicated multiple times
* In the inventory, we were merging the host/group vars with the vars
local to the host needlessly, as the VariableManager already handles that.
This leads to needless duplication of the data and makes combining the
vars in VariableManager take even longer.
The output of 'ansible-galaxy info' was formatting the
'galaxy_info' key with one char per line.
Previously, when building the output string, for items in
role_info that had a dict for a value, the label for
its key ('galaxy_info' for example) was being added to
the text list in addition to being appended. Only
the append is needed.
Also added a unit test in test/units/cli/test_galaxy.py,
but skip it on py3 until galaxy is py3 compatible.
fixes #15177
Ansible excessively checks the file system for the potential presence of
`group_vars` and `host_vars` files.
For large numbers of groups this leads to combinatorial performance
issues.
This commit generates a set of group_vars and host_vars filenames using
`os.listdir()` in every possible location and then checks against the sets
before making a stat of the file system.
Also included in this commit is caching of the base directory lookup
for the inventory.
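A rough sketch of the approach (the directory layout and helper name are illustrative, not the actual inventory code):

```python
import os


def candidate_vars_files(basedir, group_and_host_names):
    """List each vars directory once and check names against that set,
    instead of stat()ing every possible group_vars/host_vars path."""
    found = []
    for subdir in ('group_vars', 'host_vars'):
        path = os.path.join(basedir, subdir)
        try:
            entries = set(os.listdir(path))
        except OSError:
            continue
        for name in group_and_host_names:
            for candidate in (name, name + '.yml', name + '.yaml', name + '.json'):
                if candidate in entries:
                    found.append(os.path.join(path, candidate))
    return found
```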
This makes it possible to use anything other than a list (e.g., a
tuple, or dict.keys() in py3k) for argument_spec choices. It also
improves the error messages if you don't use a list type.
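For example, an argument_spec along these lines (hypothetical parameter names) now works, since choices only needs to be iterable:

```python
LEVELS = {'low': 1, 'high': 2}

argument_spec = dict(
    state=dict(choices=('present', 'absent')),   # a tuple works
    level=dict(choices=LEVELS.keys()),           # so do dict keys, including py3 views
)
# A non-iterable value such as choices=True now yields a clear error message
# instead of failing obscurely during validation.
```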
Child blocks (whether nested or via includes) don't get a copy of the
dependency chain, so the above method should be used to ensure the block
looks at its parents dep chain.
Fixes #15996
* re-add the service action plugin; it was removed because it created unexpected fact gathering and there are no split service plugins that would make this useful (yet)
Revert "removed action plugin as service facts and separate modules don't work yet and this forces gathering facts"
This reverts commit 7368030651.
* now only does minimal fact gathering
This class can be used by F5 modules for raising exceptions.
This should be used to handle known errors and raise them so
that they can be printed in the fail_json method.
The common Exception class built-in should not be used because
it hides tracebacks that are necessary to have when debugging
problems with the module.
* Catch DistributionNotFound when pycrypto is absent
On Solaris 11, module `pkg_resources` throws `DistributionNotFound` on import if `cryptography` is installed but `pycrypto` is not. This change causes that situation to be handled gracefully.
I'm not using Paramiko or Vault, so my understanding is that I don't
need `pycrypto`. I could install `pycrypto` to make the error go away, but:
- The latest released version of `pycrypto` doesn't build cleanly on Solaris (https://github.com/dlitz/pycrypto/issues/184).
- Solaris includes an old version of GMP that triggers warnings every time Ansible runs (https://github.com/ansible/ansible/issues/6941). I notice that I can silence these warnings with `system_warnings` in `ansible.cfg`, but not installing `pycrypto` seems like a safer solution.
* Ignore only `pkg_resources.DistributionNotFound`, not other exceptions.
With some earlier changes, continuing to forward failed hosts on
to the iterator with each TQM run() call was causing plays with
max_fail_pct set to fail, as hosts which failed in previous plays
were counting those old failures against the % calculation.
Also changed the linear strategy's calculation to use the internal
failed list, rather than the iterator, as this now represents the
hosts failed during the current run only.
This change makes it so we know when it is safe to get rid of the module
(when we stop supporting python2.4) and makes it easier for us to find
code that is using the functions in there to update.
If needed, we'll create a pycompat26 and pycompat27 as well. These
files are for functions that are needed on that python version to write
portable code. So python-2.4 compatible modules may need code in
pycompat24, python26+ modules may need code in pycompat26, etc. If
a function is needed in multiple python versions, we should implement it
in an internal common file and use import to put it in the namespace for
each pycompatXY module.
As noted in the comment, the TQM may be used for more than one play. As such,
after creating the new PlayIterator object it is necessary to mark any failed
hosts from previous calls to run() as failed in the iterator, so they are
properly skipped during any future calls to run().
Since the pyrax website says that only python 2.7 is tested,
I do not think it is worth aiming for python 2.4 compatibility
for the various rackspace modules.
Since this is now the default package manager, it got moved
to another location on NetBSD:
netbsd# type pkgin
pkgin is a tracked alias for /usr/pkg/bin/pkgin
netbsd# uname -a
NetBSD netbsd.example.org 6.1.4 NetBSD 6.1.4 (GENERIC) amd64
But since the package manager is also used outside of NetBSD, we
have to keep the /opt/local path too.
The change is needed to support multiple include statements
inside a jinja2 template file, as in the '{% include ['another.j2'] %}'
statement. I need this capability, as the OpenSwitch `switch` role needs
to handle multiple *.j2 files, and supporting the include statement
inside a jinja2 file is essential; otherwise I would need to combine multiple
template files into a single file, which easily causes conflicts
between developers working on different parts of the template, ports
and interfaces.
Since it depends on libcloud, and libcloud's requirements have included python 2.6
since libcloud 0.4.0 (https://libcloud.apache.org/about.html), which
was released in 2011 Q2, and GCE drivers were added in 2013,
we can't run a libcloud version with GCE support on 2.4.
Since the modules can use a paramiko transport (ergo
python 2.4 syntax), we need to keep compat with 2.4 and python 3,
so we need to use the get_exception trick, even if the various juniper
libraries are not compatible with 2.4.
It currently fails with
ansible/module_utils/facts.py\", line 357, in get_service_mgr_facts\r\nKeyError: 'distribution'\r\n"
Since self.facts['distribution'] is used after, we need to make sure
this is set by default and if needed, corrected somewhere for Linux.
* more robust hashi_vault module, and allow querying specific field in secret-dict
* allow fetching entire secret dict with trailing ':'
* process comment by bcoca for PR #13690
Initialize facts['distribution'] with self.system so that this fact does
not remain uninitialized on systems_platform_working platforms (FreeBSD,
OpenBSD).
Fixes #15841
Prior to this patch, the retry/until logic would fail any task that
succeeded if it took all of the allotted retries to succeed. This patch
reworks the retry/until logic to make things more simple and clear.
Fixes #15697
When using run_once, there is only one dict of facts so passing that
to the VariableManager results in the fact cache containing the same
dictionary reference for all hosts in inventory. This patch fixes that
by making sure we pass a copy of the facts dict to VariableManager.
Fixes #14279
* Update GCE module to use JSON credentials
* Ensure minimum libcloud version when using JSON credentials for GCE
* Relax language around libcloud requirements
In the free strategy, we mark a host as blocked when it has work to do
(the PlayIterator returns a task) to prevent multiple tasks from being sent
to the host. However, we check for role duplicates after setting the blocked
flag, but were not clearing that when the task was skipped leading to an
infinite loop. This patch corrects that by clearing the blocked flag when
the task is skipped.
Fixes #15681
Previously the changed code was necessary, however it is now problematic
as we've started using the is_failed() method in other places in the code.
Additional changes at the strategy layer should make this safe to remove
now.
Fixes #15625
Fixes #15745
Applies conditional forwarding to all tasks/roles within the included playbook.
The existing line only applies forwarded conditionals to the main Task block, and misses pre_, post_, and roles.
Typo:
Made a selection mistake when I copied over the one-line change.
In VariableManager, we fetch the params specifically in the next step,
so including them in the prior step is unnecessary and could lead to things
being overridden in an improper order.
In Block, we should not be getting the params for the role as they are
included earlier via the VariableManager.
Fixes #14411
This patch adds the port argument as a valid parameter to the f5_spec.
This argument is supported in bigsuds version 1.0.4 and greater, so
this patch uses the __version__ variable of the bigsuds module to
determine when the port value should be honored by the module.