The use-case here is that certain actions can be taken based on information in /proc/cmdline.
A practical example in our case is that we have a play at the end of the provisioning phase that reboots the system. Since we don't want to accidentally reboot (or restart the network on) a production machine, we need a way to distinguish an Anaconda post-install environment (sshd in a chroot) from a normally booted system.
---
- name: reboot
  hosts: all
  tasks:
  - action: command init 6
    only_if: "not '${ansible_cmdline.BOOT_IMAGE}'.startswith('$')"
A practical problem here is that we cannot simply check whether the variable is set or empty: if BOOT_IMAGE is not on the kernel command line, the template is left unexpanded, so the expression sees the literal string '${ansible_cmdline.BOOT_IMAGE}' rather than an empty string (hence the startswith('$') check above):
---
- name: reboot
  hosts: all
  tasks:
  - action: command init 6
    only_if: "'${ansible_cmdline.BOOT_IMAGE}'"
If ansible_cmdline were a string, a simple only_if: "'${ansible_cmdline}'.find(' BOOT_IMAGE=') != -1" would have been an option, but still not very "beautiful" :-/
This implementation uses shlex.split() and split(sep, maxsplit=1).
This allows the use of ~ in the chdir argument of the command module.
I know the latter change is absolutely necessary, as the first change
alone was not sufficient. It may be that the first change fixes shell
and the second fixes command.
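A minimal sketch of that parsing, assuming a hypothetical parse_module_args() helper (the os.path.expanduser() step is also an assumption, since ~ has to be expanded somewhere):

import os
import shlex

def parse_module_args(raw):
    # shlex.split() tokenizes with shell-like quoting rules;
    # split('=', 1) keeps everything after the first '=' intact,
    # so a value like chdir=~/builds is not mangled.
    params = {}
    for token in shlex.split(raw):
        if '=' in token:
            key, value = token.split('=', 1)
            params[key] = value
    return params

args = parse_module_args('chdir=~/builds /usr/bin/make test')
workdir = os.path.expanduser(args['chdir'])  # assumed expansion step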
Added required as an optional argument to get_bin_path(). It defaults
to False. Updated the following modules to use required=True when
calling get_bin_path(): apt_repository, easy_install, group, pip,
supervisorctl, and user.
Also removed _find_supervisorctl() from the supervisorctl module and
updated _is_running() to no longer need it.
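The commit only changes the signature; a sketch of how such a helper plausibly behaves (the body here is an assumption, and a real module would report failure via fail_json() rather than exiting):

import os

def get_bin_path(arg, required=False):
    # Walk PATH looking for an executable named `arg`; with
    # required=True a missing binary becomes a hard failure.
    for directory in os.environ.get('PATH', '').split(os.pathsep):
        candidate = os.path.join(directory, arg)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    if required:
        raise SystemExit('Failed to find required executable %s' % arg)
    return None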
Will manage values of SELinux booleans on a host. Options are name
(name of the boolean), state (on or off), and persistent (on or off).
persistent defaults to no.
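Under the hood this presumably wraps setsebool; a rough sketch, with the exact invocation being an assumption:

import subprocess

def set_seboolean(name, state, persistent=False):
    # setsebool flips the boolean at runtime; -P also writes the
    # policy so the value survives a reboot.
    cmd = ['setsebool']
    if persistent:
        cmd.append('-P')
    cmd.append('%s=%s' % (name, 'on' if state else 'off'))
    return subprocess.call(cmd) == 0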
* Migrated easy_install, pip, service, setup, and user.
* Updated the fail_json message in apt_repository.
* Fixed easy_install to not hardcode the location of virtualenv in
  /usr/local/bin/.
* Made handling of virtualenv more consistent between easy_install and
  pip.
Most of it worked already, except for the enable parameter, because it
tried to use chkconfig, which only sees SysV services. Now we first
look for systemctl and use that if it exists.
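A sketch of that ordering, reusing the get_bin_path() helper sketched earlier (the exact commands are assumptions):

import subprocess

def enable_service(name, enable=True):
    # Prefer systemctl, which sees native systemd units as well as
    # SysV scripts; fall back to chkconfig only when it is absent.
    systemctl = get_bin_path('systemctl')
    if systemctl:
        cmd = [systemctl, 'enable' if enable else 'disable',
               '%s.service' % name]
    else:
        chkconfig = get_bin_path('chkconfig', required=True)
        cmd = [chkconfig, name, 'on' if enable else 'off']
    return subprocess.call(cmd) == 0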
This takes started, stopped and restarted.
started returns when connecting is possible, stopped when connecting
is not possible. restarted first waits for connecting to become
impossible, then returns when it is possible again.
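All three states reduce to polling a single TCP connect check; a minimal sketch (the helper name is assumed):

import socket

def is_connectable(host, port, timeout=3.0):
    # "started" polls until this returns True, "stopped" until it
    # returns False, and "restarted" waits for False and then True.
    try:
        conn = socket.create_connection((host, port), timeout)
        conn.close()
        return True
    except socket.error:
        return False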
Use a different method to query for current
privileges at the table and database level.
This method is more robust if newer privileges
are added in future versions and also supports the
ALL wildcard.
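One robust way to do such a query is to ask information_schema for privilege names instead of decoding ACL strings; this sketch is an assumption about the approach, not code from the module:

def get_table_privileges(cursor, user, table):
    # information_schema.role_table_grants lists each granted
    # privilege by name, so privilege types added in future
    # PostgreSQL versions appear without any decoding changes.
    cursor.execute(
        "SELECT privilege_type FROM information_schema.role_table_grants"
        " WHERE grantee = %s AND table_name = %s",
        (user, table))
    return set(row[0] for row in cursor.fetchall())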
The fail_on_user option can be used to silently ignore the case where
the user cannot be removed because of remaining privilege dependencies
on other objects in the database. By default it will fail, so that
this new behavior won't surprise unsuspecting users.
The postgresql_user module has several drawbacks:
* No granularity for privileges
* PostgreSQL semantics force working on one
  database at a time, at least for tables. This
  means that a single call can't remove all the
  privileges for a user, and a user can't be
  removed until all of their privileges are
  removed, forcing a module failure with no way
  to work around the issue.
Changes:
* Added the ability to specify granular privileges
  for the database and for tables within it (one
  possible format is sketched below)
* Report if the user was removed, and add an option
  to disable failing if the user is not removed.
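As an illustration of granular privileges, a sketch that parses a compact privilege string; the format shown ("CONNECT/mytable:SELECT,INSERT") is an assumption for the example:

def parse_privs(priv_string):
    # Split "CONNECT/mytable:SELECT,INSERT" into database-level
    # privileges and per-table privilege lists.
    db_privs, table_privs = [], {}
    for item in priv_string.split('/'):
        if ':' in item:
            table, privs = item.split(':', 1)
            table_privs[table] = privs.split(',')
        else:
            db_privs.append(item)
    return db_privs, table_privs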
Adds a thirsty option controlling whether to download the file every
time. With thirsty=yes the file is fetched on every run and replaced
only if its contents changed; with thirsty=no the download is skipped
when the destination already exists. The default is thirsty=no.
mpdehaan requested in ansible/ansible#795 that globals be removed.
The response was to remove the lines containing the word 'global' but
not the actual use of the global variables, which makes the module
break silently. Updated to use local variables.
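A hypothetical example (not from the module) of why deleting only the global statement breaks things silently:

found = False

def scan_broken(items):
    # With the 'global found' line deleted, this assignment creates
    # a new local variable; the module-level 'found' never changes,
    # so callers checking it see stale state and nothing errors out.
    found = any(items)

def scan_fixed(items):
    # Returning a local value avoids module-level state entirely.
    return any(items)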
When initializing a connection with psycopg2, the keyword arguments
must be missing entirely (not merely empty) in order to get the
default values. So we pass a dictionary as kwargs and include only the
keywords whose module parameters have a non-empty value.
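A sketch of that pattern with illustrative parameter names:

import psycopg2

module_params = {'login_host': '', 'login_user': 'postgres',
                 'login_password': '', 'port': ''}
params_map = {'login_host': 'host', 'login_user': 'user',
              'login_password': 'password', 'port': 'port'}

# Keep only the keywords that were actually set, so psycopg2 applies
# its own defaults (e.g. the local Unix socket) for the rest.
kw = dict((params_map[k], v) for k, v in module_params.items()
          if v != '')
db_connection = psycopg2.connect(database='postgres', **kw)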
It is possible that folks without yum-utils installed, but with
rhn-plugin installed and without any RHN certificates, will still see
an error msg. They have 3 options:
1. remove rhn-plugin
2. enable some channels with RHN certs
3. install yum-utils