* Move self._tqm.load_callbacks() earlier to ensure that v2_on_playbook_start can fire
* Pass the playbook instance to v2_on_playbook_start
* Add a _file_name instance attribute to the playbook
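As a minimal sketch (using the names from this commit's text; the surrounding plumbing is elided), a callback hook can now see the playbook object and its new attribute:

    class CallbackModule(object):
        def v2_on_playbook_start(self, playbook):
            # the playbook instance is now passed in, so callbacks can
            # read its new _file_name attribute
            print('playbook started: %s' % playbook._file_name)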
At its most basic, this is nothing more than an array or hash lookup,
but when used in conjunction with map, it is very useful. For example,
while constructing an "ssh-keyscan …" command to update known_hosts on
all hosts in a group, one can get a list of IP addresses with:
groups['x']|map('extract', hostvars, 'ec2_ip_address')|list
This returns hostvars[a].ec2_ip_address, hostvars[b].ec2_ip_address, and
so on. You can even specify an array of keys for a recursive lookup, and
mix string and integer keys depending on what you're looking up:
['localhost']|map('extract', hostvars, ['vars','group_names',0])|first
== hostvars['localhost']['vars']['group_names'][0]
== 'ungrouped'
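A rough Python sketch of the filter's semantics (illustrative only, not the actual implementation):

    def extract(item, container, morekeys=None):
        # basic form: a plain array or hash lookup
        value = container[item]
        # recursive form: walk a list of string/integer keys
        if morekeys is not None:
            keys = morekeys if isinstance(morekeys, list) else [morekeys]
            for key in keys:
                value = value[key]
        return value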
Includes documentation and tests.
The comment was copied verbatim from lib/plugins/strategy/linear.py and
makes no sense in free.py, where we have no noop tasks.
Also update the debug messages.
This patch fixes an issue with the common args dict in the eapi shared
module. This patch is required for the eapi shared module to be properly
imported and therefore should be applied to all instances.
This commit changes the way modules create an instance of AnsibleModule to
now use a common function, eapi_module. This function will automatically
append the common argument spec to the module argument_spec. Module
arguments can override the common module arguments.
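A hedged sketch of the shape of that function (the argument names here are illustrative; the real common spec lives in the eapi shared module):

    from ansible.module_utils.basic import AnsibleModule

    def eapi_module(argument_spec=None, **kwargs):
        # common eapi arguments shared by every module (illustrative)
        spec = dict(
            host=dict(required=True),
            username=dict(),
            password=dict(no_log=True),
        )
        # merge the module-specific arguments afterwards, so they can
        # override the common ones
        if argument_spec:
            spec.update(argument_spec)
        return AnsibleModule(argument_spec=spec, **kwargs)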
Pipelining is a *significant* performance benefit, because each task can
be completed with a single SSH connection (vs. one ssh connection at the
start to mkdir, plus one sftp and one ssh per task).
Pipelining is disabled by default in Ansible because it conflicts with
the use of sudo when 'Defaults requiretty' is set in /etc/sudoers (as it
is on Red Hat), and with su (which always requires a tty).
We can (and already do) make sudo/su happy by using "ssh -t" to allocate
a tty, but then the python interpreter goes into interactive mode and is
unhappy with module source being written to its stdin, per the following
comment from connections/ssh.py:
# we can only use tty when we are not pipelining the modules.
# piping data into /usr/bin/python inside a tty automatically
# invokes the python interactive-mode but the modules are not
# compatible with the interactive-mode ("unexpected indent"
# mainly because of empty lines)
Instead of the (current) drastic solution of turning off pipelining when
we use a tty, we can use a tty but prevent the Python interpreter from
switching to interactive mode. The easiest way to do this is to make its
stdin *not* be a tty, e.g. with cat|python.
This works, but there's a problem: ssh will ignore -t if its input isn't
really a tty. So we could open a pseudo-tty and use that as ssh's stdin,
but if we then write Python source into it, it's all echoed back to us
(because we're a tty). So we have to use -tt to force tty allocation; in
that case, however, ssh puts the tty into "raw" mode (~ICANON), so there
is no good way for the process on the other end to detect EOF on stdin.
So if we do:
echo -e "print('hello world')\n"|ssh -tt someho.st "cat|python"
…it hangs forever, because cat keeps on reading input even after we've
closed our pipe into ssh's stdin. We can get around this by writing a
special __EOF__ marker after writing in_data, and doing this:
echo -e "print('hello world')\n__EOF__\n"|ssh -tt someho.st "sed -ne '/__EOF__/q' -e p|python"
This works fine, but in fact I use a clever python one-liner by mgedmin
to achieve the same effect without depending on sed (at the expense of a
much longer command line, alas; Python really isn't one-liner-friendly).
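For illustration, a long-form Python equivalent of that marker-draining wrapper might look like this (the real version is compressed onto the ssh command line):

    import sys

    source = []
    for line in sys.stdin:
        # stop at the marker so we see "EOF" even though the raw-mode
        # tty never delivers one
        if line.strip() == '__EOF__':
            break
        source.append(line)
    exec(''.join(source))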
We also enable pipelining by default as a consequence.
Since all the --ask-pass options end up triggering the same code
and are functionally equivalent, ignore them when checking for
privilege escalation conflicts. This allows using -K when --become-method=su,
and so on.
The secret_key parameter in particular can contain non-ASCII characters and
will throw an error if such a string is passed as a byte str.
Potential fix for #13303
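A minimal illustration of one way that error arises on Python 2 (the value is hypothetical):

    # coercing a byte str containing non-ascii bytes triggers an
    # implicit ascii decode on Python 2, raising UnicodeDecodeError
    secret_key = b'p\xc3\xa4ss'      # UTF-8 bytes for u'p\xe4ss'
    secret_key.encode('utf-8')       # Python 2: UnicodeDecodeError
    secret_key.decode('utf-8')       # safe: yields a unicode string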
It is natural that an argument_spec with choices=BOOLEAN accepts
boolean literals (True, False), though the current implementation
allows only strings or ints.
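One possible shape of the fix (BOOLEAN's contents here are an assumption, as is the helper name):

    # assumption: BOOLEAN enumerates the accepted string spellings
    BOOLEAN = ['yes', 'on', '1', 'true', 'no', 'off', '0', 'false']

    def normalize_choice(value):
        # map literal True/False onto accepted spellings so the
        # choices check passes for boolean literals too
        if value is True:
            return 'true'
        if value is False:
            return 'false'
        return value

    assert normalize_choice(True) in BOOLEAN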
* StandardError doesn't exist in python3
* Because it is the root of the builtin exceptions, we can't catch it
separately from the builtin exceptions
* It doesn't tell us anything about the error being thrown, as it's too
generic
This ssh shared module is used for building modules that require an
interactive shell environment, such as those needed to connect
to network devices
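As a hypothetical sketch of the kind of wrapper such a module provides (class and method names are illustrative, not the real API):

    import paramiko

    class InteractiveShell(object):
        def __init__(self, host, username, password, port=22):
            self.client = paramiko.SSHClient()
            self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            self.client.connect(host, port=port, username=username,
                                password=password, look_for_keys=False)
            # request a pty-backed channel so the device presents its CLI
            self.channel = self.client.invoke_shell()

        def send(self, command, prompt='#'):
            # run one command and read until the device prompt returns
            self.channel.send(command + '\n')
            output = ''
            while not output.rstrip().endswith(prompt):
                output += self.channel.recv(1024).decode('utf-8', 'replace')
            return output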
If we request escalation with a password, we start in expecting_prompt
state. If the escalation then succeeds without the password, i.e., the
become_success response arrives, we must explicitly move into the next
state (awaiting_escalation, which immediately goes into ready_to_send),
so that we no longer try to apply the timeout.
Otherwise, we would leak the success notification and eventually
time out. But if the module response did arrive before the timeout
expired, the "process has already exited" test would do the right
thing by accident (which is why it didn't fail more often).
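In sketch form (state names follow this message; the connection plugin's event loop is elided):

    def handle_become_success(state):
        # if escalation succeeded before any password was needed, move
        # on explicitly so the prompt timeout no longer applies;
        # awaiting_escalation immediately becomes ready_to_send
        if state == 'expecting_prompt':
            state = 'awaiting_escalation'
        return state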
Fixes #13289
This was caused by accessing the cache using the passed-in mod_type
rather than the suffix that we calculate with knowledge of whether this
is a module or a non-module plugin.
Previously, we were filtering the task list on tags for each host
that was including the file, based on the idea that the variables
had to include the host information. However, the top-level task
filtering is play-context only, and the same should apply to the
included tasks. Tags cannot and should not be based on hostvars.
Looks like there are two pattern caches that need to be cleared for this to work; added the second one.
Added integration tests for add_host to prevent future regressions.
Looks like someone forgot to create an instance of undefined here; we were returning the undefined type object, which broke all the undefined checks.
Added an integration test around add_host that will catch this (separate PR to follow)
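The bug in miniature (illustrative, using jinja2's Undefined for concreteness):

    from jinja2 import Undefined

    def lookup(name, variables):
        if name not in variables:
            # bug: returning the type object itself, i.e. 'return
            # Undefined', defeats every undefined check downstream;
            # the fix is to return an instance
            return Undefined(name=name)
        return variables[name]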
This callback plugin will generate json objects to be sent to the
logentries service for auditing/debugging purposes.
To use:
Add this to your ansible.cfg file in the defaults block
[defaults]
callback_plugins = ./callback_plugins
callback_stdout = logentries
callback_whitelist = logentries
Copy the callback plugin into the callback_plugins directory
Either set the environment variables
export LOGENTRIES_API=data.logentries.com
export LOGENTRIES_PORT=10000
export LOGENTRIES_ANSIBLE_TOKEN=dd21fc88-f00a-43ff-b977-e3a4233c53af
Or create a logentries.ini config file that sits next to the plugin with the following contents
[logentries]
api = data.logentries.com
port = 10000
tls_port = 20000
use_tls = no
token = dd21fc88-f00a-43ff-b977-e3a4233c53af
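For orientation, a skeleton of a callback plugin of this kind might look like the following (transport and config handling are elided; hook names follow the v2 callback API):

    from ansible.plugins.callback import CallbackBase

    class CallbackModule(CallbackBase):
        CALLBACK_VERSION = 2.0
        CALLBACK_TYPE = 'notification'
        CALLBACK_NAME = 'logentries'

        def v2_runner_on_ok(self, result):
            # serialize the task result to json and ship it off
            self.emit(self._dump_results(result._result))

        def emit(self, record):
            # send the record to the logentries endpoint (left as a
            # sketch; the real plugin handles tokens, tls, etc.)
            pass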
It was set to match the SSH connect timeout. Unfortunately, the two
would race when ssh failed to connect, and the connect timeout usually
lost the race. This led to some misleading error messages.
Fixes #12916
Error reporting was broken for GCE modules; pprint didn't work with exceptions, so you'd always get "Unexpected response: {}" instead of the real error.
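One way such output arises (illustrative): pretty-printing an exception's attribute dict, which is empty, rather than the exception itself:

    import pprint

    try:
        raise ValueError('quota exceeded in region us-central1')
    except ValueError as e:
        # vars(e) is {} for a plain exception, so the message is lost
        print('Unexpected response: %s' % pprint.pformat(vars(e)))
        # -> Unexpected response: {}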