* Add update parameter in junos_config module which supports
configuration actions like merge, replace and overwrite.
* Add support for replace along with the update
argument
Since we no longer use a post-validated task in _process_pending_results, we
need to be sure to template fields used in original_task as they are raw and
may contain variables.
This patch also moves the handler tracking to be per-uuid, not per-object.
Doing it per-object had implications for the above, because a copy of the
original task is now being used, so the only reliable way is to track
based on the uuid instead.
Fixes #18289
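A minimal sketch of what per-uuid tracking looks like (names here are illustrative, not the exact strategy code):
```
# Hypothetical sketch: key notified hosts by the handler's uuid so that a
# templated copy of the handler task still maps to the same entry.
notified_handlers = {}  # handler._uuid -> list of host names

def notify(handler_task, host):
    # handler_task may be a fresh copy of the original task; its uuid stays
    # stable even though the object identity does not.
    notified_handlers.setdefault(handler_task._uuid, []).append(host.name)
```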
If the plugin version expected is, say '1.20', then specifying it
as...
version: 1.20
... will make the YAML parser interpret it as a float, and the
value obtained by the module will be 1.2 instead of 1.20, which
will cause the wrong version of the module to be downloaded.
This patch updates the docs so that users don't face this issue.
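A quick way to see the problem, sketched with PyYAML (which Ansible's YAML parsing is built on):
```
import yaml

# Unquoted, the scalar is parsed as a float and the trailing zero is lost.
print(yaml.safe_load("version: 1.20"))    # {'version': 1.2}

# Quoting the value keeps the literal string the module expects.
print(yaml.safe_load("version: '1.20'"))  # {'version': '1.20'}
```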
* Fix #5839: Add 'update' parameter in junos_config module
Add update parameter in junos_config module which supports
configuration actions like merge, replace and overwrite.
* Fix documentation issue
* Address review comment to add replace argument
Make the replace and update arguments mutually
exclusive; replace is kept for backward
compatibility.
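Roughly, the description above implies argument handling along these lines; the argspec details are illustrative rather than a copy of the module:
```
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(
    argument_spec=dict(
        # 'update' selects the configuration action.
        update=dict(choices=['merge', 'replace', 'overwrite']),
        # 'replace' is kept only for backward compatibility.
        replace=dict(type='bool'),
    ),
    # A task may set one or the other, never both.
    mutually_exclusive=[('update', 'replace')],
)

# Resolve the effective action, defaulting to a plain merge.
action = module.params['update'] or ('replace' if module.params['replace'] else 'merge')
```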
Previously, packages were installed one at a time in a loop. This caused
a couple of problems.
First, it was a performance issue - pacman would have to perform all of
its checks once per package. This is unnecessarily costly, especially
when you're trying to install several related packages at the same time.
Second, if a package you're trying to install depends on a virtual
package that is provided by several different packages (such as the
"libgl" package on Arch) and you aren't also installing something that
provides that virtual package at the same time, pacman will produce an
interactive prompt to allow the user to select a relevant package. This
is obviously incompatible with how ansible operates. Yes, this problem
could be avoided by installing packages in a different order, but the
order of installation shouldn't matter, and there may be situations
where it is not possible to control the order of installation.
With this refactoring, all of the above problems are avoided. The code
will now work out all of the packages that need to be installed from any
configured repositories and any packages that need to be installed from
local files, and then install all the repository packages in one go and
then all of the local file packages in one go.
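A simplified sketch of the batching idea (function and variable names are illustrative, not the module's exact code):
```
def install_packages(module, pacman_path, packages):
    # Split the requested set into repository packages and local package
    # files, then hand each group to pacman in a single transaction so it
    # can resolve shared dependencies (and virtual packages) on its own.
    file_pkgs = [p for p in packages if p.endswith('.pkg.tar.xz')]
    repo_pkgs = [p for p in packages if p not in file_pkgs]

    if repo_pkgs:
        module.run_command("%s -S --noconfirm %s" % (pacman_path, " ".join(repo_pkgs)),
                           check_rc=True)
    if file_pkgs:
        module.run_command("%s -U --noconfirm %s" % (pacman_path, " ".join(file_pkgs)),
                           check_rc=True)
```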
This is a redesign of how plugins call _remote_checksum().
- _remote_stat() has been modified to report the real error as
AnsibleError
- Action plugin **unarchive** calls _remote_stat() directly instead of
_remote_checksum()
- Action plugin **unarchive** also handles the exceptions directly
- Ensure get_exception() returns native text
Two other action plugins, **template** and **fetch**, also do a remote checksum.
In **template** we already call _remote_stat(), just like we now do for
unarchive; in **fetch** we still call _remote_checksum() and make the
exact same mistake the unarchive plugin did, so that one could use a
redesign as well.
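A rough sketch of the new flow in the unarchive action plugin (the call signature is illustrative), with the error handled where the plugin can report it properly:
```
from ansible.errors import AnsibleError

try:
    # Ask for the remote file info directly; any transport or privilege
    # escalation problem now surfaces as an AnsibleError carrying the real
    # error text instead of a generic "python isn't present" message.
    remote_stat = self._remote_stat(dest, all_vars=task_vars)  # signature illustrative
except AnsibleError as e:
    result['failed'] = True
    result['msg'] = str(e)
    return result
```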
This fixes #19494
Before:
```
[dag@moria ansible.testing]$ ansible-playbook -v test137.yml
Using /home/dag/home-made/ansible.testing/ansible.cfg as config file
PLAY [localhost]
******************************************************************************************************
TASK [unarchive]
******************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg":
"python isn't present on the system. Unable to compute checksum"}
PLAY RECAP
******************************************************************************************************
localhost : ok=0 changed=0 unreachable=0
failed=1
```
After:
```
[dag@moria ansible.testing]$ ansible-playbook -v test137.yml
Using /home/dag/home-made/ansible.testing/ansible.cfg as config file
PLAY [localhost]
*************************************************************************************************************
TASK [unarchive]
*************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg":
"Failed to get information on remote file (/tmp/): sudo: unknown user:
foobar\nsudo: unable to initialize policy plugin\n"}
PLAY RECAP
*******************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0
failed=1
```
* Update system/user.py module.
Add the ability to add real system users with the next free system uid (< 500) on macOS.
* Improve syntax in system/user.py module.
Remove a complex if/else line and replace it with a simple comparison that yields the same boolean value.
* Remove "True" comparison in user.py.
Remove the comparison to True, as it is not PEP 8 conformant.
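A hedged sketch of the "next free system uid" lookup (system accounts on macOS live below uid 500; the helper name is illustrative):
```
import pwd

def next_free_system_uid(max_system_uid=500):
    # Collect uids already taken by existing accounts and pick the first
    # free one below the macOS system-account boundary.
    used = set(entry.pw_uid for entry in pwd.getpwall())
    for uid in range(1, max_system_uid):
        if uid not in used:
            return uid
    raise RuntimeError("no free system uid below %d" % max_system_uid)
```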
* Add new parameters to taskdefinition module - network_mode and task_role_arn
* Add version_added field for doco
* Change version_added parameter to 2.3
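Roughly what the new options look like in the module's argument spec (a sketch; the default and choices are assumptions based on the ECS API, not a copy of the module):
```
argument_spec = dict(
    # New in 2.3: Docker networking mode used by the task's containers.
    network_mode=dict(required=False, default='bridge',
                      choices=['bridge', 'host', 'none'], type='str'),
    # New in 2.3: IAM role assumed by the containers in this task.
    task_role_arn=dict(required=False, type='str'),
)
```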
For devices that do not support multiplexing, we cannot automatically
determine the network os. This removes the os guess static method
from the terminal plugin. For these devices, the network_os
value must be configured.
It's possible to compress packages using several different compression
methods, or to leave them uncompressed. Previously, the pacman module only
supported files compressed using xz. This update ensures that all
compression types currently supported by pacman are supported by the
Ansible pacman module.
The list of supported compression methods at the time of writing can be
found here:
https://git.archlinux.org/pacman.git/tree/scripts/makepkg.sh.in#n747
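A sketch of matching any supported package extension instead of hard-coding .pkg.tar.xz (the extension list follows the makepkg options linked above, and is otherwise an assumption):
```
import re

# Package files may be uncompressed tarballs or compressed with any of the
# methods makepkg supports at the time of writing.
PKG_FILE_RE = re.compile(r'\.pkg\.tar(\.(gz|bz2|xz|lrz|lzo|Z))?$')

def is_package_file(name):
    return PKG_FILE_RE.search(name) is not None
```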
This fix ensures that if there are specific module errors (in our case
the python interpreter was not found) then command and shell return a
proper error.
It also fixes a few other imperfections that we noticed during
troubleshooting:
- Return the real RC if it is available
- Improve a dictionary evaluation using .get()
- Return an RC of -1 if it is unknown (instead of returning 0)
This fixes #18846
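The gist of the RC handling, as a one-line sketch:
```
# Use the module result's real return code when present and fall back to -1
# (not 0) when it is unknown, so a failed run never looks successful.
rc = result.get('rc', -1)
```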
This fix ensures a proper error is shown when a group_vars file cannot
be parsed correctly. Without this patch you get:
```
[dag@moria ansible.testing]$ ansible-playbook test132.yml
ERROR! Unexpected Exception: dictionary update sequence element #0 has length 1; 2 is required
to see the full traceback, use -vvv
```
With this patch you get:
```
[dag@moria ansible.testing]$ ansible-playbook test132.yml
ERROR! Problem parsing file '/home/dag/home-made/ansible.testing/group_vars/test135': line 1, column 1
```
This fixes #18843
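A sketch of the kind of wrapping that produces the friendlier message, assuming the vars loading raises AnsibleParserError naming the offending file (details are illustrative):
```
from ansible.errors import AnsibleParserError

try:
    data = loader.load_from_file(path)
except Exception as e:
    # Surface the offending group_vars file instead of letting a bare
    # "Unexpected Exception" bubble up from deep inside the vars code.
    raise AnsibleParserError("Problem parsing file '%s': %s" % (path, e))
```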
Sudoers is a great example of how you can avoid locking yourself
out. But sshd is at least as important, to avoid syntax errors causing a
lot of grief. So I think it deserves a spot in this list :-)
Currently this function delegates to the standard NetworkModule,
whose run_commands function takes no arguments (other than self).
This change directs the call to the connection's cli method to run the command
directly on the device.
A connection plugin can define the default action plugin to use by providing
an action_handler instance variable. This will override the default
action plugin, normal.
* adds new error AnsibleModuleExit to handle module returns
* adds new action plugin network for attaching connection to network modules
* adds new shared module local to receive connection
* splits out function to update task_args with common updates
This commit provides a mechanism for running local modules that require
a connection object for interactive commands typically implemented for
network devices. It provides a way to locally import modules (post fork)
and run them using exception handling to exit.
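A minimal sketch of the mechanism, with illustrative names:
```
# Connection plugin side: advertise a preferred action plugin.
class Connection(object):          # stands in for ConnectionBase
    action_handler = 'network'

# Executor side: fall back to the 'normal' action plugin when the
# connection does not declare a preference.
def pick_action_plugin(connection):
    return getattr(connection, 'action_handler', 'normal')
```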
* Fix bug #5328 apache module loading
Currently, the apache2_module module parses apache configs
for correctness when enabling or disabling apache2 modules.
This behavior introduced a conflict condition when transitioning
between mpm modules, such as mpm_worker and mpm_event.
This change accounts for the specific error condition raised
by ``apachectl -M``:
``AH00534: apache2: Configuration error: No MPM loaded.``
When loading or unloading a module with a name that contains 'mpm_',
apache2_module will ignore the error raised by apachectl if stderr
contains 'AH00534'.
Fixes #5328
* Add AH00534 warning
* Added changes from PR #5629
* Modified ignore_configcheck behavior
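A sketch of the check described above (variable names are illustrative):
```
rc, stdout, stderr = module.run_command("%s -M" % apachectl_path)
if rc != 0:
    # Switching between MPMs legitimately leaves apache without an MPM
    # loaded, which apachectl reports as AH00534; ignore only that case
    # when toggling an mpm_* module, instead of failing outright.
    if ignore_configcheck and 'AH00534' in stderr and 'mpm_' in module_name:
        warnings.append("No MPM loaded; continuing because ignore_configcheck is set.")
    else:
        module.fail_json(msg="Error executing %s: %s" % (apachectl_path, stderr))
```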
* Code smell test for iteritems and itervalues
* Change the keydict object in authorized_keys so it doesn't throw a false positive
keydict is a bad data structure anyway. We don't use the iteritems and
itervalues methods so just disable them so that the code-smell tests do
not trigger on it.
* Change release templates so they work with py3
The process to poll for data in the stdout and/or stderr pipes during a
low-level command execution was repetitive. Factoring this out into a
function DRYs out the code.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
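Conceptually, the factored-out helper looks something like this (a sketch, not the exact code):
```
import os
import select

def read_available(pipe, timeout=0.001, chunk=4096):
    # Return whatever is currently waiting on the pipe without blocking, so
    # stdout and stderr can share one polling helper instead of two copies
    # of the same loop.
    ready, _, _ = select.select([pipe], [], [], timeout)
    if not ready:
        return b''
    return os.read(pipe.fileno(), chunk)
```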
For the comparisons that need to be done, the result of this map call
needs to be converted to a list, because the six import in ansible changes
the behavior of map to return an iterator instead of a list.
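In short (illustrative values):
```
from six.moves import map  # an iterator on both py2 and py3

version_string = "2.7.13"
# An iterator cannot be indexed or compared like a list, so materialize it
# before doing the comparisons.
parts = list(map(int, version_string.split('.')))
assert parts >= [2, 6]
```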
* Fix UnboundLocalError remote_head in git
Fixes #5505
The use of remote_head was a leftover of #4562.
remote_head is not necessary, since the repo is unchanged anyway and
after is set correctly.
Further changes:
* Set changed=True and msg once local_mods are detected and reset.
* Remove need_fetch that is always True (due to previous if) to improve
clarity
* Don't exit early for local_mods but run submodules update and
switch_version
* Add test for git with local modifications
* Enable tests on python 3 for uri
* Added one more node type to SAFE_NODES in the safe_eval module.
ast.USub represents the unary minus operator. This is necessary for
parsing some unusual but still valid JSON files during testing
with Python 3.
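For example, a negative literal is parsed as a unary minus applied to a number, so ast.USub has to be whitelisted for safe_eval to accept it:
```
import ast

tree = ast.parse("-1", mode='eval')
node = tree.body
# -1 is represented as UnaryOp(op=USub(), operand=<number 1>), so USub must
# be in SAFE_NODES for the expression to pass the whitelist.
print(type(node).__name__, type(node.op).__name__)  # UnaryOp USub
```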
* Rebase of https://github.com/ansible/ansible-modules-extras/pull/708
708 was full of extraneous merge commits interwoven with commits to
implement the feature. In the end the only way I could clean this up
in reasonable time was to just take a regular diff between the PR and
the base. This lost the history of intermediate commits but I've
preserved attribution to @dayton967 via git's --author field.
Although I preserved the logic of the PR, there were a few additional
things that I cleaned up:
* Fixed import of email.mime.multipart
* Used the argspec to set port and timeout to integers instead of having
ad hoc code inside of the module.
* Used argspec's choices for secure instead of ad hoc code inside of the
module.
* Removed some unused variables
* Made secure_state a python boolean instead of using 0 and 1
* Used secure with string comparisons instead of turning it into an
integer code. This is much more readable.
* Fixed catching of SMTPExceptions (SMTPException wasn't imported
directly so it needed to use the smtplib namespace.)
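For instance, the port/timeout typing and the secure choices now live in the argument spec rather than in ad hoc conversion code (a sketch, not the module's exact spec):
```
argument_spec = dict(
    port=dict(default=25, type='int'),
    timeout=dict(default=20, type='int'),
    # Letting the argspec validate the value replaces the hand-rolled
    # integer codes the old code mapped these strings onto.
    secure=dict(default='try', choices=['always', 'never', 'try', 'starttls']),
)
```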
* Fix synchronize retries
The synchronize module munges its task args on every invocation of
run(). This was problematic because the munged data was not fit for use
by a second pass of the synchronize module. Correct this by using a copy
of the task args on every invocation of run() so that the original args
are not affected.
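The core of the fix is just operating on a copy (sketched here; the munging shown is illustrative):
```
# Munge a private copy so a retry of the task starts from the original,
# untouched arguments.
_tmp_args = self._task.args.copy()
_tmp_args['mode'] = _tmp_args.get('mode', 'push')
```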
Local testing using this playbook seems to confirm that things work as
expected:
- hosts: all
  tasks:
    - delay: 2
      register: task_result
      retries: 1
      until: task_result.rc == 0
      synchronize:
        dest: /tmp/out
        mode: pull
        src: /tmp/nonexistent/
fixes #18281
* Update synchronization fixture assertions
When we started operating on a copy of the task args, the test assertions
were no longer asserting things about the munged state but about the
pristine state. Convert the copy of the task args to a class member so that
it can be compared against later in testing and update the assertions to
check this munged copy.
* Shuffle objects around for cleaner testing
Attach the temporary args dict to the task rather than the action as
this makes updating the existing tests cleaner.
The overwrite parameter is forcibly set to false, meaning a module
passing that parameter will have no effect. The overwrite facility
is necessary to ensure that conflicting options can be written to the
configuration (which, in replace mode, they cannot).
This change ensures that if overwrite is set, it will not be changed
to False in the logic.
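In other words, the caller's choice should be passed through rather than clobbered; a tiny sketch:
```
# Before: the module-supplied value was silently discarded.
# overwrite = False
# After: respect what the module passed in.
overwrite = module.params.get('overwrite', False)
```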
* Fixes #18663. Bad handling of existing config in dellos9 module.
The dellos9 module doesn't correctly build the internal
structures used to represent the existing config of the managed
network device. This leads to changes being applied every time the
playbook is run, even if the existing config is the same as the
one you are trying to push to the device.
This problem probably also exists in the dellos6 and dellos10
modules, but I only fixed it in the dellos9 module.
The fix modifies two methods. The first one is `get_config`,
where the return clause didn't work correctly when the flow
doesn't enter the `if` block. In that case the `contents`
variable is not an array and this should be handled.
The second fix is in the `get_sublevel_config` method. In this
case the indentation whitespace of the parents must be rebuilt,
because later functions and methods require it to correctly handle
the comparisons used to check whether changes should be pushed
to the device.
* Fixes #18663 for dellos10 module with the same patches as dellos9.
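A sketch of the first fix, handling the case where get_config falls through the `if` block and `contents` is a plain string rather than a list (NetworkConfig and the config helper are assumed here; details are illustrative):
```
def get_config(module):
    contents = module.params['config']
    if not contents:
        contents = module.config.get_config()   # assumed helper on the network module
    if isinstance(contents, list):
        contents = '\n'.join(contents)          # normalize: callers expect text
    return NetworkConfig(indent=1, contents=contents)
```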
mkstemp() returns a tuple containing an OS-level handle to an open file
(as would be returned by os.open()) and the absolute pathname of that
file, in that order.
This patch makes sure that the fd opened by tempfile.mkstemp() is
re-used and closed properly.
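In practice that means something like:
```
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # Re-use the already-open descriptor instead of opening the path a
    # second time (which would leak fd).
    with os.fdopen(fd, 'wb') as f:
        f.write(b'payload')
finally:
    os.remove(path)
```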
* added alpha version of the 'sorcery' module
* fully conforming YAML
* use bundled check for executables
* - codex_list(): use commands instead of checksums to get sorcery version and verify codex equality
  - renamed: manage_depends() -> match_depends(); tocast -> cast_queue, todispel -> dispel_queue, needs_recast -> depends_ok; SORCERY_LOG -> SORCERY_LOG_DIR, SORCERY_STATE -> SORCERY_STATE_DIR
  - removed: SORCERY_VERSION_FILE and CODEX
  - added commentary to match_depends() and manage_spells()
  - fixed a bug where the dependency line for a previously existing dependency was dropped
  - fixed a bug where depends were not fixed for the 'latest' state
  - simplified several code constructions
* cleaned up some docs
* do not use separate message for Codex update, rely on the 'changed' status instead
* use built-in list conversion (_check_type_list()) for spells
* corrected spell name extraction from list in match_depends()
* avoid non-matching dependencies line duplication in depends file
* added more complex playbook example
* tiny stylistic fix for docs
* replaced ternary construction with a regular statement
* replaced yet another ternary construction with a regular statement
* enable Python 2.4 compatibility by splitting try-finally block
* enable Python 2.4 compatibility by replacing 'with' statement with try-except+try-finally blocks
* unify spells' assignment
* replaced one regex with startswith()
* go Ansible 2.1
* added dummy RETURN template
* go Ansible 2.2
* better clarify permissions' requirements
* - updated copyright years
  - fixed rebuild command bug
  - re-used run_command_environ_update dict for env var management
* handle Python 3.5
* Revert "handle Python 3.5"
This reverts commit 33a5a0eb64c1193318298e111f063cdd5f93b73a.
* handle Python 3.5 (2nd try)
* go Ansible 2.3
* clarity++
* For realz this time
* Fix tempfile.mkstemp (#2)
* back to square one, removing temp file from the mix
* Adding temp back
* Adding tuple back
* Adding another tuple back
* Trying to get around weird Jenkins behavior of blowing up when both .hpi and .jpi files are found
* Incorporating PR feedback
* Delete .hpi file instead of backing it up, some basic clean up
* Moving file deletion to the right location
* Blank lines. They always get me.
* Allow re-using an existing template when updating a stack by not passing 'template' or 'template_url'. This is a big one for me as our deploy process creates a new stack and then modifies the old one; to avoid changing the resources inside the old one, we have had to avoid using the Ansible module and use the AWS CLI instead in order to pass `--use-previous-template`.
* Split create and update logic into separate functions
* Remove dead `update` variable
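A rough sketch of the resulting update path, assuming a boto3 CloudFormation client (the module's actual parameter handling is more involved):
```
import boto3

def update_stack(stack_name, template_body=None, template_url=None, parameters=None):
    cfn = boto3.client('cloudformation')
    kwargs = dict(StackName=stack_name, Parameters=parameters or [])
    if template_body:
        kwargs['TemplateBody'] = template_body
    elif template_url:
        kwargs['TemplateURL'] = template_url
    else:
        # Neither 'template' nor 'template_url' was given: keep whatever
        # template the stack was last deployed with.
        kwargs['UsePreviousTemplate'] = True
    return cfn.update_stack(**kwargs)
```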