template/__init__.py imported unsafe_proxy from vars which caused
vars/__init__.py to load. vars/__init__.py in turn needed
template/__init__.py, creating a circular import. Loading unsafe_proxy
from another location breaks the cycle.
Just after release of 2.0.0 (in 2.0.0.1) we had a change to the API of
callbacks without bumping the API version. We added the playbook to the
arguments passed to the callbacks.
This wasn't in the Tower callback at the time. In order to prevent
breaking that callback we added a temporary hack to inspect the
callback's API to decide if we needed to call it with arguments or not.
We scheduled the hack for removal in January 2017. Since that's now
past, removing the hack.
Change signed off by matburt on the Tower side.
* fixes #22441
* fixes #22655
* moves all env handling into the exec wrapper; this should work for everything but raw, which is consistent with non-Windows.
* Update module_utils.six to latest
The version of six we could use on the module side had been held back
to 1.4.x because of Python 2.4 compatibility. Now that our minimum is
Python 2.6, we can update to the latest version of six in module_utils
and get rid of the second copy in lib/ansible/compat.
We can diff non-utf8 files (as part of copy, for instance) but when we
try to turn the bytes into text for display, the characters cause
a traceback. Since diff output is only informational, we can replace
those problematic bytes with replacement characters. We do not want to
do this to other fields because those fields may be used inside of the
playbook (for templating another variable or matching in a conditional).
Fixes #21803 Fixes #21804
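A minimal standard-library illustration of the replacement-character
approach (not the exact Ansible helper):
```
# undecodable byte 0xE9 in otherwise utf-8 diff output
diff_bytes = b'--- before\n+++ after\n-caf\xe9\n+cafe\n'

# errors='replace' substitutes U+FFFD instead of raising
# UnicodeDecodeError, which is fine here because diff output is
# informational only
diff_text = diff_bytes.decode('utf-8', errors='replace')
print(diff_text)
```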
Fix 'task name is not templated in retry callback'
Add a task_name property to TaskResult that knows to
check in TaskResult._task_fields.
Add integration test for v2_retry_runner callback
Fixes #18236
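A minimal sketch of the property, assuming _task_fields is the dict of
templated fields captured on the worker side (the real constructor takes
more arguments):
```
class TaskResult(object):
    def __init__(self, task, task_fields):
        self._task = task
        self._task_fields = task_fields

    @property
    def task_name(self):
        # prefer the templated name captured in _task_fields; fall back
        # to the raw (possibly untemplated) name on the task object
        name = self._task_fields.get('name')
        if name:
            return name
        return self._task.name
```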
* removes unused code
* removes module_utils/local.py
* removes plugins/action/network.py
* removes action_handler from connection plugins
* removes code to use action_handler in task_executor
* updates action plugins to subclass from normal
In 159aa26b36
the result of _get_serialized_batches when no hosts were matched
changed from [[]] to []. Thus, the v2_playbook_on_no_hosts_matched
callback would not fire. Change the check so we get the error message
again. Fixes #17706
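The corrected check amounts to an emptiness test rather than a
comparison against [[]] (sketch; surrounding names from the message
above):
```
# inside PlaybookExecutor.run(), roughly:
batches = self._get_serialized_batches(new_play)
if len(batches) == 0:
    self._tqm.send_callback('v2_playbook_on_no_hosts_matched')
```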
- centralized skipping
- also fixed module name broken by previous refactor
- let action modules handle async processing
- moved async into base action class's module exec
- action plugins can now run final action as async
- actually skip copy if base skips
- fixed normal for new paths
- ensure internal stat is never async
- default poll to 10 as per docs
- added hint for callback fix on poll
- restructured late tmp, now a pipeline query
- moving action handler to connection as networking does
- fixed network assumption that invocation is always passed
- centralized key cleanup, normalized internal var
- _suppress_tmpdir_delete now in _ansible_xxx and gets removed from results
- delay internal key removal till after we use them
- nicer tmp removing, using existing methods
- moved cleanup tmp flag to making tmp func
* Make the module_utils path configurable
* Add a config value to define the path to site module_utils files
* Treat module_utils that have no source as an error
* Make an integration test for module_utils envvar working
* Add documentation for the ANSIBLE_MODULE_UTILS config option/envvar
* Add it to the sample ansible.cfg
* Add it to intro_configuration.
* Also modify intro_configuration to place envvars on equal footing with
the config options (will need to document the envvar names in the
future)
* Also add the ANSIBLE_LIBRARY use case from
https://github.com/ansible/ansible/issues/15432 so we can close out
that bug.
This version just gets the relevant paths from PluginLoader and then
uses the existing imp.find_module() calls in the AnsiballZ code to load
the proper module_utils.
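A rough sketch of that resolution step, assuming search_paths is the
list obtained from PluginLoader:
```
import imp

def find_module_utils(name, search_paths):
    # try each configured module_utils directory in order; imp.find_module
    # raises ImportError when the name is absent from the given paths
    for path in search_paths:
        try:
            # returns (open file, pathname, description)
            return imp.find_module(name, [path])
        except ImportError:
            continue
    raise ImportError("module_utils %r not found" % name)
```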
Modify PluginLoader to optionally omit subdirectories (module_utils
needs to operate on top level dirs, not on subdirs because it has
a hierarchical namespace whereas all other plugins use a flat
namespace).
Rename snippet* variables to module_utils*
Add a small number of unittests for recursive_finder
Add a larger number of integration tests to demonstrate that
module_utils is working.
Whitelist module-style shebang in test target library dirs
Prefix module_data variable with b_ to be clear that it holds bytes data
Rather than trying to enumerate tasks or track an ever changing cur_role
flag in PlayIterator, this change simply sets a flag on the last block in
the list of blocks returned by Role.compile(). The PlayIterator then checks
for that flag when the cur_block number is incremented, and marks the role
as complete if the given host had any tasks run in that role.
Fixes #20224
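In sketch form, with the marker attribute and bookkeeping names assumed
for illustration:
```
# Role.compile() flags its final block (attribute name assumed):
blocks = role.compile(play)
if blocks:
    blocks[-1]._eor = True  # "end of role" marker

# the PlayIterator, when incrementing cur_block past a flagged block,
# marks the role complete for hosts that actually ran tasks in it:
if getattr(block, '_eor', False) and host_ran_tasks_in_role:
    role._completed[host.name] = True  # per-host bookkeeping, sketched
```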
On Ubuntu the scriptdir gets placed into sys.path. This makes some
modules (copy) fail because the ansible module gets loaded instead of
the stdlib copy module. So we remove scriptdir there. Unfortunately,
the scriptdir code uses abspath(). When pipelining, abspath() has to
find the cwd. On OSX, finding the cwd when that directory is not
executable by the user raises an OSError. Since OSX does not suffer
from the scriptdir problem we're able to just skip scriptdir handling if
we get that exception.
Fixes #19729
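A sketch of the resulting guard; the OSError branch covers the OSX case
described above:
```
import os
import sys

try:
    # abspath() may call os.getcwd(); on OSX that raises OSError when
    # the current directory is not executable by the user
    scriptdir = os.path.dirname(os.path.abspath(sys.argv[0]))
except OSError:
    # OSX does not suffer from the scriptdir problem, so skip it
    scriptdir = None

if scriptdir is not None and scriptdir in sys.path:
    sys.path.remove(scriptdir)
```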
custom stats can be per run or per host, also aggregate or not
set_stats action plugin as reference implementation
added doc stub
display stats in callback
made custom stats showing configurable
When the same role is listed consecutively in a play, the previous role
completion detection failed to mark it as complete as it only checked to
see if the role changed.
This patch addresses that by also keeping track of which task in the role
we are on, so that even if the same role is encountered during later passes
the task number will be less than or equal to the last noted task position.
Related to #15409
Since we no longer use a post-validated task in _process_pending_results, we
need to be sure to template fields used in original_task as they are raw and
may contain variables.
This patch also moves the handler tracking to be per-uuid, not per-object.
Doing it per-object had implications for the above due to the fact that the
copy of the original task is now being used, so the only sure way is to track
based on the uuid instead.
Fixes #18289
Connection plugins can define a default action plugin to use by providing
an action_handler instance variable. This will override the default
action plugin, normal.
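In pattern form (the ConnectionBase import path is real; everything else
is elided or illustrative):
```
from ansible.plugins.connection import ConnectionBase

class Connection(ConnectionBase):
    transport = 'my_transport'   # illustrative
    action_handler = 'network'   # overrides the default 'normal' action plugin
```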
Some machines have system clocks which can fall behind (for instance,
a host without a CMOS battery like Raspberry Pi). When managing those
machines we have to workaround the fact that the zip format does not
handle file timestamps before 1980. The workaround is to substitute in
the timestamp from the controller instead of from the managed machine.
Fixes #18640
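The workaround amounts to something like this standard-library sketch
(not the verbatim wrapper code):
```
import os
import time
import zipfile

def add_with_safe_timestamp(zf, path, arcname, controller_date_time):
    # the zip format cannot represent timestamps before 1980; if the
    # managed host's clock has fallen behind, substitute the
    # controller's timestamp instead
    mtime = time.localtime(os.stat(path).st_mtime)
    if mtime.tm_year < 1980:
        date_time = controller_date_time  # e.g. time.localtime()[:6] on the controller
    else:
        date_time = mtime[:6]
    info = zipfile.ZipInfo(arcname, date_time=date_time)
    with open(path, 'rb') as f:
        zf.writestr(info, f.read())
```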
Previous changes addressed a corner case, which unfortunately introduced
another bug. This patch adds a new flag to the host state (did_rescue) which
is set to true when the rescue portion of a block completes. This flag is
then checked in _check_failed_state() when the fail_state != FAILED_NONE.
This led to the discovery of another bug - current strategies are not advancing
hosts to ITERATING_COMPLETE after doing a peek at the next task, leaving the
host state in the run_state of the final task. To address this, before gathering
the list of failed hosts in StrategyBase.run(), a final pass through the iterator
for all hosts is done to ensure each host is in its final state. This way, no
strategy derived from StrategyBase has to worry about it and it's handled.
Fixes #17983
In order to support legacy plugins, the following two method signatures
are allowed for `CallbackBase.v2_playbook_on_start`:
def v2_playbook_on_start(self):
def v2_playbook_on_start(self, playbook):
Previously, the logic to handle this divergence checked to see if the
callback plugin being called supported an argument named `playbook`
in its `v2_playbook_on_start` method. This was fragile in a few ways:
- if a plugin author did not use the literal `playbook` to name their
method argument, their plugin would not be called correctly
- if a plugin author wrapped their `v2_playbook_on_start` method and
by doing so changed the argspec to no longer expose an argument
with that literal name, their plugin would not be called correctly
In order to continue to support both types of callback for backwards
compatibility while making the call more robust for plugin authors,
the logic can be reversed in order to have a positive check for the old
method signature instead of a positive check for the new one.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
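Roughly, the new dispatch looks like this sketch; the real call site
lives in the callback send path:
```
import inspect

def send_playbook_on_start(plugin, playbook):
    method = plugin.v2_playbook_on_start
    # positive check for the OLD zero-argument signature; any other
    # argspec gets the playbook, no matter what the author named the
    # parameter
    if inspect.getargspec(method).args == ['self']:
        method()
    else:
        method(playbook)
```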
The PlayIterator was written without nested roles in mind, but since
include_role can nest them we need to check to see if we've moved into
a new role which is a child via nesting.
Fixes #18026
When using hostvars to get extra connection-specific vars for connection
plugins, use this raw lookup to avoid prematurely templating all of the
hostvar data (triggering unnecessary lookups).
Fixes #17024
* Pass the absolute path to dirname when assigning basedir
If no path is specified when calling the playbook, os.path.dirname(playbook_path) returns ''
This will cause failure when creating the retry file.
Fixes #17456
* Updated to use os.path.dirname(os.path.abspath())
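The difference is easy to see in isolation:
```
import os

playbook_path = 'site.yml'  # invoked with no directory component
print(os.path.dirname(playbook_path))                   # '' -> retry file path breaks
print(os.path.dirname(os.path.abspath(playbook_path)))  # e.g. '/home/user/playbooks'
```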
While doing evil things with action plugins, I hit a code path in which
the mkdir here was failing due to lack of parent dir. Changing this to
makedirs made everything happy. Now, I'd obviously like to understand
why the parent dir exists in some places and not others - but I could
not find anywhere that C.DEFAULT_LOCAL_TMP is ensured to be created.
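The change is essentially the following sketch (path and mode are
illustrative stand-ins for C.DEFAULT_LOCAL_TMP):
```
import os

local_tmp = os.path.expanduser('~/.ansible/tmp')  # stands in for C.DEFAULT_LOCAL_TMP
if not os.path.exists(local_tmp):
    # makedirs() creates missing parent directories; a bare mkdir()
    # fails when the parent does not exist yet
    os.makedirs(local_tmp, 0o700)
```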
* Add a new config option to cache the check for controlpersist on the
control machine.
Fixes #15844
* Remove the option and make the behavior the default
* Make the check for controlpersist cache its status per-ssh executable
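A sketch of the per-executable memoization; the probe itself is
hypothetical here:
```
_CONTROLPERSIST_CACHE = {}  # ssh executable path -> bool

def has_controlpersist(ssh_executable, probe):
    # run the (slow) probe once per ssh binary; different executables
    # on the control machine may differ in ControlPersist support
    if ssh_executable not in _CONTROLPERSIST_CACHE:
        _CONTROLPERSIST_CACHE[ssh_executable] = probe(ssh_executable)
    return _CONTROLPERSIST_CACHE[ssh_executable]
```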
We couldn't copy to_unicode, to_bytes, to_str into module_utils because
of licensing. So once we created module_utils versions, we had two sets
of functions that did the same things but had different implementations.
To remedy that, this change removes the ansible.utils.unicode versions
of those functions.
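The module_utils implementations are the ones that remain, renamed along
the way (to_unicode becomes to_text, to_str becomes to_native):
```
from ansible.module_utils._text import to_bytes, to_native, to_text
```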
* dynamic role_include
* more fixes for dynamic include roles
* set play from iterator when dynamic
* changes from jimi-c
* avoid modules that break ad hoc
TODO: should really be a config
* adds squashing to objects, which allows them to be squashed down
to a final "view" before post_validate to avoid expensive evaluations
of parent attributes
* attempt #11 to role_include
* fixes from jimi-c
* do not override load_data, move all to load
* removed debugging
* implemented tasks_from parameter, must break cache
* fixed issue with cache and tasks_from
* make resolution of tasks_from prioritize literal
* avoid role dependency dedupe when include_role
* fixed role deps and handlers are now loaded
* simplified code, enabled k=v parsing
used example from jimi-c
* load role defaults for task when include_role
* fixed issue with tasks_from overriding all subdirs
* corrected priority order of main candidates
* made tasks_from a more generic interface to roles
* fix block inheritance and handler order
* allow vars: clause into included role
* pull vars already processed vs from raw data
* fix from jimi-c for blocks I broke
* added back append for dynamic includes
* only allow for basename in from parameter
* fix for docs when no default
* fixed notes
* added include_role to changelog
ran task_executor through python-modernize and then made changes to the
code pointed out by it:
* Most places where we looped through dict.keys() changed to
for key in dict:
Using keys() in python2 creates a list() of keys. For iterating, we
can iterate over the dict itself and we'll be handed back each key.
In python3, doing it this way does not create a new list and thus is
more memory efficient.
* In one place, use:
for key in list(dict.keys()):
because we're deleting elements from the dictionary inside of the
loop. So we really do need to iterate over a separate list of the
keys to avoid modifying the dictionary that we're iterating over.
(Fixes Python3 bug)
* In one place, change the order of an if-elif-else tree so that the
most frequent cases are evaluated first. (Optimization)
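Both patterns side by side:
```
task_vars = {'a': 1, '_ansible_tmp': 2}

# iterating only: no key list needs to be materialized
for key in task_vars:
    print(key, task_vars[key])

# deleting while iterating: snapshot the keys first, otherwise python3
# raises RuntimeError for mutating the dict during iteration
for key in list(task_vars.keys()):
    if key.startswith('_ansible_'):
        del task_vars[key]
```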
The calculation for max_fail_percentage was moved into the linear
strategy a while back, and works better there in the strategy layer
rather than at the PBE layer. This patch removes it from the PBE layer
and tweaks the logic controlling whether or not the next batch is run.
Fixes #15954
The flag new_pb_basedir is not being utilized in Inventory._get_hostgroup_vars,
leading to the situation where an inventory with no playbook basedir set will
read host/group vars from the $CWD, regardless of the inventory and/or playbook
relative location. This patch corrects that by not using the playbook basedir
if it is unset (None).
This patch also corrects a bug in which the VariableManager would accumulate
host/group vars files, which could lead to incorrect vars files being used when
playbooks are run from different directories containing their own group/host vars
directories.
Fixes #16953
We want to NOT consider the async task as failed if the result is
not parsed, which was the intent of:
https://github.com/ansible/ansible/pull/16458
However, the logic doesn't actually do that because we default
the 'parsed' value to True. It should default to False so that
we continue waiting, as intended.
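In effect, the corrected polling test is (sketch):
```
def async_result_failed(async_result):
    # 'parsed' must default to False: an unparseable status file means
    # "keep waiting", not "the task failed"
    return async_result.get('parsed', False) and async_result.get('failed', False)
```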
Instead of immediately returning a failed code (indicating a break in
the play execution), we internally 'or' that failure code with the result
(now an integer flag instead of a boolean) so that we can properly handle
the rescue/always portions of blocks and still remember that the break
condition was hit.
Fixes #16937
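The flag values mirror the TaskQueueManager run constants and are
combined with a bitwise OR (sketch):
```
RUN_OK = 0
RUN_ERROR = 1
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 4
RUN_FAILED_BREAK_PLAY = 8

result = RUN_OK
result |= RUN_FAILED_HOSTS   # remember the failure...
result |= RUN_OK             # ...while rescue/always results still accumulate

if result & RUN_FAILED_HOSTS:
    print('a host failed at some point during the run')
```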
Rather than repeatedly searching for tasks by uuid via iterating over
all known blocks, cache the tasks when they are added to the PlayIterator
so the lookup becomes a simple key check in a dict.
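A sketch of the cache; method names follow the description, details
simplified:
```
class PlayIterator(object):
    def __init__(self):
        self._task_uuid_cache = {}

    def cache_block_tasks(self, block):
        # called whenever a block is added; indexes every task by uuid
        for task in block.block + block.rescue + block.always:
            self._task_uuid_cache[task._uuid] = task

    def get_original_task(self, host, task):
        # formerly a walk over all known blocks; now one dict lookup
        return self._task_uuid_cache.get(task._uuid)
```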
When a task result has an empty results list, the
list should be ignored when determining the results
of `_check_key`. Here the empty list is treated the
same as a non-existent list.
This fixes a bug that manifests itself with squashed
items - namely the task result contains the correct
value for the key, but an empty results list. The
empty results list was treated as zero failures
when deciding which handler to call - so the task
shows as a success in the output, but is deemed to
have failed when deciding whether to continue.
This also demonstrates a mismatch between task
result processing and play iteration.
A test is also added for this case, but it would not
have caught the bug - because the bug is really in
the display, and not the success/failure of the
task (visually the test is more accurate).
Fixes ansible/ansible-modules-core#4214
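A sketch of the corrected aggregation (truthiness of the list rather
than mere presence of the key):
```
def _check_key(self, key):
    # an empty 'results' list now falls through to the top-level key,
    # exactly like a missing 'results' list
    if self._result.get('results', []):
        return any(res.get(key, False)
                   for res in self._result['results']
                   if isinstance(res, dict))
    return self._result.get(key, False)
```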
This feature changes the scalar value of `serial:` to a list, which
allows users to specify a list of values, so batches can be ramped
up (commonly called "canary" setups):
- hosts: all
serial: [1, 5, 10, "100%"]
tasks:
...
I'm not sure why that would be desirable -- we really want __version__
to come from the controller whereas importing will come from the client
node. If it turns out there was a reason to do that, please be sure to
use an exception handler that catches all exceptions instead of only
catching ImportError:
```
try:
from ansible.release import __version__, __author__
except:
__version__ = [...]
```
Fixes #16523
* fixed lookup search path
added ansible_search_path var that contains the proper list, in order
removed roledir var, which was only used by first_found; the rest used role_path
added a needle function for lookups that mirrors the action plugin one; now
both types of plugins use the same pathing.
* added missing os import
* renamed as per feedback
* fixed missing rename in first_found
* also fixed first_found
* fixed import to match new error class
* fixed getattr ref
Since Ansiballz, we no longer need to import basic directly into
a new-style module. Some modules, like the Networking modules, may
import basic in their own module_utils files and the module will import
that specialized module_util file rather than basic.
* Don't treat parsing problems as async task timeout
If there is a problem reading/writing the status file that manifests as
not being able to parse the data, that doesn't mean the task timed out,
it means there was likely a temporary problem. Move on and
keep polling for success. The only things that should cause the async
status to not be parseable are bugs in the async_runner.
* Add comment explaining not bailing out of loop
* Return different error when result is unparseable
* Remove extraneous else
* Instead of rebuilding the handler list all over the place, we now
compile the handlers at the point the play is post-validated so that
the view of the play in the PlayIterator contains the definitive list
* Assign the dep_chain to the handlers as they're compiling, just as we
do for regular tasks
* Clean up the logic used to find a given handler, which is greatly
simplified by the above changes
Fixes #15418
6eefc11c converted task.loop_control into an object, but while the other
callers were updated to use .loop_var instead of .get('loop_var'), this
site was overlooked.
This can be reproduced by including with loop_control a file that does
set_fact; a simple regression test along these lines is included.
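The overlooked site changes along these lines (sketch):
```
# in the include-handling code, roughly:
# before (loop_control was a dict):
#     loop_var = included_file._task.loop_control.get('loop_var', 'item')
# after 6eefc11c (loop_control is an object):
loop_var = 'item'
if included_file._task.loop_control:
    loop_var = included_file._task.loop_control.loop_var or 'item'
```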
Again, as we're carrying failed/unreachable hosts forward from play to play via
internal structures, we need to remember which ones had previously failed so that
unrelated host failures don't inflate the numbers for a given serial batch in the
PlaybookExecutor causing a premature exit.
Fixes #16364
The listen statement on handlers should have supported a list, however
it was broken in the revision of the pub/sub feature based on the handler
revamp. This patch corrects the bug, so this works again:
- name: some handler
...
listen:
- some target
- another target
Fixes #16378
Due to the fact that roles may be instantiated with different sets of
params (multiple inclusions of the same role or via role dependencies),
simply tracking notified handlers by name does not work. This patch
changes the way we track handler notifications by using the handler
object itself instead of just the name, allowing for multiple internal
instances. Normally this would be bad, but we also modify the way we
search for handlers by first looking at the notifying tasks dependency
chain (ensuring that roles find their own handlers first) and then at
the main list of handlers, using the first match it finds.
This patch also modifies the way we setup the internal list of handlers,
which should allow us to correctly identify if a notified handler exists
more easily.
Fixes #15084
When setuptools installs a python module (as is done via python setup.py
install) it puts the module into a subdirectory of site-packages and
then creates an entry in easy-install.pth to load that directory. This
makes it difficult for Ansiballz to function correctly as the .pth file
overrides the sys.path that the wrapper constructs. Using
sitecustomize.py fixes this because sitecustomize overrides the
directories handled in .pth files.
Fixes #16187
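A sketch of the trick; the generated contents are illustrative:
```
import os

def write_sitecustomize(temp_dir, module_dir):
    # site.py imports sitecustomize only after .pth files are
    # processed, so a path inserted here outranks the easy-install.pth
    # entries
    with open(os.path.join(temp_dir, 'sitecustomize.py'), 'w') as f:
        f.write('import sys\nsys.path.insert(0, %r)\n' % module_dir)
```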
* Fix: create retry_files_save_path if it doesn't exist
Ansible documentation states that retry_files_save_path directory will be
created if it does not already exist. It currently doesn't, so this patch
fixes it :)
* Use makedirs_safe to ensure thread-safe dir creation
@bcoca suggested to use the makedirs_safe helper function :)
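With the helper, the fix reduces to something like:
```
import os
from ansible.utils.path import makedirs_safe

# makedirs_safe tolerates the directory being created concurrently by
# parallel workers; path and mode shown here are illustrative
makedirs_safe(os.path.expanduser('~/.ansible-retry'), mode=0o750)
```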
This allows the PlaybookExecutor to receive more information regarding
what happened internal to the TaskQueueManager and strategy, to determine
things like whether or not the play iteration should stop.
Fixes #15523
Since we now use the PlayIterator to carry forward failures from previous
play executions, in the event that some hosts which had previously failed
are not in the current inventory we now create a stub state instead of
raising an error.
* Port urls.py to python3
Fixes (largely normalizing byte vs text strings) for python3
* Rework what we do with attributes that aren't set already.
* Comments
With some earlier changes, continuing to forward failed hosts on
to the iterator with each TQM run() call was causing plays with
max_fail_pct set to fail, as hosts which failed in previous plays
were counting those old failures against the % calculation.
Also changed the linear strategy's calculation to use the internal
failed list, rather than the iterator, as this now represents the
hosts failed during the current run only.
As noted in the comment, the TQM may be used for more than one play. As such,
after creating the new PlayIterator object it is necessary to mark any failed
hosts from previous calls to run() as failed in the iterator, so they are
properly skipped during any future calls to run().