We were logging the command to be executed many times, which made debug
logs very hard to read. Now we do it only once.
Also makes the logged ssh command line cut-and-paste-able (the lack of
which has confused a number of people by now; the problem being that we
pass the command as a single argument to execve(), so it doesn't need an
extra level of quoting as it does when you try to run it by hand).
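For illustration, a minimal sketch of the quoting involved (the command and
helper here are made up for the example, not the plugin's actual code):

    import shlex

    # The remote command is handed to ssh as a single execve() argument, so it
    # carries no shell quoting of its own; for logging we add the quoting a
    # user would need in order to paste the line into a terminal.
    cmd = ['ssh', '-C', 'somehost', 'echo $HOME && whoami']
    print(' '.join(shlex.quote(part) for part in cmd))
    # prints: ssh -C somehost 'echo $HOME && whoami'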
Pipelining is a *significant* performance benefit, because each task can
be completed with a single SSH connection (vs. one ssh connection at the
start to mkdir, plus one sftp and one ssh per task).
Pipelining is disabled by default in Ansible because it conflicts with
the use of sudo if 'Defaults requiretty' is set in /etc/sudoers (as it
is on Red Hat) and su (which always requires a tty).
We can (and already do) make sudo/su happy by using "ssh -t" to allocate
a tty, but then the python interpreter goes into interactive mode and is
unhappy with module source being written to its stdin, per the following
comment from connections/ssh.py:
# we can only use tty when we are not pipelining the modules.
# piping data into /usr/bin/python inside a tty automatically
# invokes the python interactive-mode but the modules are not
# compatible with the interactive-mode ("unexpected indent"
# mainly because of empty lines)
Instead of the (current) drastic solution of turning off pipelining when
we use a tty, we can instead use a tty but suppress the behaviour of the
Python interpreter to switch to interactive mode. The easiest way to do
this is to make its stdin *not* be a tty, e.g. with cat|python.
This works, but there's a problem: ssh will ignore -t if its input isn't
really a tty. So we could open a pseudo-tty and use that as ssh's stdin,
but if we then write Python source into it, it's all echoed back to us
(because we're a tty). So we have to use -tt to force tty allocation; in
that case, however, ssh puts the tty into "raw" mode (~ICANON), so there
is no good way for the process on the other end to detect EOF on stdin.
So if we do:
echo -e "print('hello world')\n"|ssh -tt someho.st "cat|python"
…it hangs forever, because cat keeps on reading input even after we've
closed our pipe into ssh's stdin. We can get around this by writing a
special __EOF__ marker after writing in_data, and doing this:
echo -e "print('hello world')\n__EOF__\n"|ssh -tt someho.st "sed -ne '/__EOF__/q' -e p|python"
This works fine, but in fact I use a clever python one-liner by mgedmin
to achieve the same effect without depending on sed (at the expense of a
much longer command line, alas; Python really isn't one-liner-friendly).
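For reference, a sketch of that kind of stdin filter in expanded form
(illustrative; not necessarily the exact one-liner from the commit). Collapsed
onto one line and wrapped in python -c '...', it plays the same role as the
sed command above:

    import sys

    # Copy stdin through to stdout until the __EOF__ marker appears, then
    # stop, so the downstream python sees a clean end of input even though
    # the tty never delivers EOF.
    for line in sys.stdin:
        if line.strip() == '__EOF__':
            break
        sys.stdout.write(line)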
We also enable pipelining by default as a consequence.
If we request escalation with a password, we start in expecting_prompt
state. If the escalation then succeeds without the password, i.e., the
become_success response arrives, we must explicitly move into the next
state (awaiting_escalation, which immediately goes into ready_to_send),
so that we no longer try to apply the timeout.
Otherwise, we would leak the success notification and eventually
timeout. But if the module response did arrive before the timeout
expired, the "process has already exited" test would do the right
thing by accident (which is why it didn't fail more often).
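A minimal sketch of the intended transition (state names follow the
awaiting_prompt/awaiting_escalation/ready_to_send/awaiting_exit machine; the
structure is illustrative, not the actual ssh.py code):

    STATES = ['awaiting_prompt', 'awaiting_escalation',
              'ready_to_send', 'awaiting_exit']

    def next_state(state, saw_become_success):
        # If escalation has already succeeded, move straight to ready_to_send
        # so the prompt timeout no longer applies, rather than relying on the
        # "process has already exited" check to save us.
        if STATES[state] in ('awaiting_prompt', 'awaiting_escalation') and saw_become_success:
            return STATES.index('ready_to_send')
        return state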
Fixes #13289
It was set to match the SSH connect timeout. Unfortunately, they would
race when ssh fails to connect, and the connect timeout usually failed.
This led to some misleading error messages.
Fixes #12916
I PR'd a change to pywinrm to allow server certs to be ignored; but it's only on the SSL transport (which we were previously ignoring). For this to work more generally, we're also now pulling the named ansible_winrm_* args from the merged set of host/group vars, not just host_vars.
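A hedged sketch of collecting those prefixed options (the function and
variable names are illustrative, not the plugin's API):

    def winrm_connection_args(merged_vars):
        # Pull every ansible_winrm_* variable out of the merged host/group
        # vars and strip the prefix so it can be passed through to pywinrm.
        prefix = 'ansible_winrm_'
        return {k[len(prefix):]: v for k, v in merged_vars.items()
                if k.startswith(prefix)}

    # e.g. {'ansible_winrm_server_cert_validation': 'ignore'}
    #   -> {'server_cert_validation': 'ignore'}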
Also remove the condition that bypassed setting the user when it matched the
current user. This makes it possible to force the user (and thereby ignore
.ssh/config) even when it is set to the same value as the current user, while
still honouring .ssh/config for the current user when nothing is specified.
When using 'local' connections, privilege escalation would fail if
ansible_ssh_user was set in the current context to the same value as
become_user.
This commit ensures that for 'local' connections we reset remote_user to
the local username.
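A minimal sketch of the idea, assuming a PlayContext-like object (not the
exact plugin code):

    import getpass

    def reset_local_user(play_context):
        # For the 'local' connection, the user we run as is always the local
        # account, so ignore any ansible_ssh_user inherited from the context.
        play_context.remote_user = getpass.getuser()
        return play_context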
This fixes #12782.
This is because we pass the whole dd command string into the shell that's
running in the contained environment, rather than running it directly from
Python via subprocess without a shell.
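A hedged sketch of what that looks like (the paths, buffer size, and chroot
invocation are illustrative):

    import shlex
    import subprocess

    BUFSIZE = 65536  # illustrative

    def put_file_via_dd(chroot_dir, in_path, out_path):
        # The whole dd invocation is passed as one string to a shell inside
        # the contained environment, rather than exec'd directly.
        dd_cmd = 'dd of=%s bs=%d' % (shlex.quote(out_path), BUFSIZE)
        with open(in_path, 'rb') as in_file:
            p = subprocess.Popen(['chroot', chroot_dir, '/bin/sh', '-c', dd_cmd],
                                 stdin=in_file,
                                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            p.communicate()
        return p.returncode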
The earlier code behaved exactly as though this default had been set,
but it was actually handled as a(n unnecessary) special case inside the
connection plugin, rather than set as an explicit default.
If the default is overridden either in ansible.cfg or the environment,
the new code will continue to work (in fact, it won't know or care,
since it just uses the value set in the PlayContext).
This is submitted as a separate commit for easier review to address
backwards-compatibility concerns.
Using set_host_overrides() in the connection plugin to access the ssh
argument variables from the inventory didn't see group_vars/host_vars
settings, as noted earlier. Instead, we can set the correct values in
the PlayContext, which has access to all command-line options, task
settings, and variables.
The only downside of doing so is that the source of the settings is no
longer available in ssh.py, and therefore can't be logged. But the code
is simpler, and it actually works.
This change was suggested by @jimi-c in response to the FIXME in the
earlier commit.
Now we have the following ways to set additional arguments:
1. [ssh_connection]ssh_args in ansible.cfg: global setting, prepended to
every command line for ssh/scp/sftp. Overrides default ControlPersist
settings.
2. ansible_ssh_common_args inventory variable. Appended to every command
line for ssh/scp/sftp. Used in addition to ssh_args, if set above, or
the default settings.
3. ansible_{sftp,scp,ssh}_extra_args inventory variables. Appended to
every command line for the relevant binary only. Used in addition to
#1 and #2, if set above, or the default settings.
4. Using the --ssh-common-args or --{sftp,scp,ssh}-extra-args command
line options (which are overridden by #2 and #3 above).
This preserves backwards compatibility (for ssh_args in ansible.cfg),
but also permits global settings (e.g. ProxyCommand via _common_args) or
ssh-specific options (e.g. -R via ssh_extra_args).
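An illustrative sketch of how the layers stack up into one command line (the
function and argument names here are descriptive, not the plugin's own):

    import shlex

    def build_ssh_args(ssh_args, common_args, extra_args):
        cmd = ['ssh']
        cmd += shlex.split(ssh_args or '')      # 1. [ssh_connection]ssh_args (or defaults)
        cmd += shlex.split(common_args or '')   # 2. ansible_ssh_common_args
        cmd += shlex.split(extra_args or '')    # 3/4. ansible_ssh_extra_args / --ssh-extra-args
        return cmd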
Fixes #12576
* Remove extraneous imports
* Fix some error handling
* Enable pipelining
* Disable su since it doesn't work
* Add error message when installed docker is not recent enough to
support this plugin
* Move nested functions to class level
* Make transport a class attribute
* Make exec_command, put_file and fetch_file more robust
* Disable su as it's not currently working 100% (and was disabled in v1).
* Move BUFSIZE out of the class to match other connection plugins
* _connect shouldn't return self.
This is also peripheral to what _build_command needs, can be improved
and tested independently, and so makes more sense in a separate method.
This commit doesn't change any functionality (and I've verified that it
works with the various combinations: control_path set in ansible.cfg,
ssh_args adding or not adding ControlMaster/ControlPersist, etc.).
Also get pipelining working for people who look to chroot as an example
for their own connection plugins
Note: In the latest v2 API, action handles become but chroot doesn't
reliably handle become. Maybe we need to add a has_become attribute
so that the action can display an appropriate error.
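Purely as an illustration of that suggestion (no such attribute exists in the
plugin API at this point):

    class Connection(object):     # stand-in for the real ConnectionBase subclass
        transport = 'chroot'
        has_become = False        # hypothetical flag the action layer could check
                                  # before attempting privilege escalation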
There doesn't appear to be anything that actually uses tmp_path in the
connection plugins so we don't need to pass that in to exec_command.
That change also means that we don't need to pass tmp_path around in
many places in the action plugins any more. There may be more cleanup
that can be done there as well (the action plugin's public run() method
takes tmp as a keyword arg but that may not be necessary).
As a side effect of this patch, some potential problems with chmod in the
patch, assemble, copy, and template modules have been fixed (those modules
called _remote_chmod() with their parameters in the wrong order; removing
the tmp parameter fixed them).
The process is already gone, so there's not going to be any new data
showing up on its stderr; we only want to make sure that we haven't
missed something that was already written. So polling once is enough.
This change is motivated by an ssh oddity: when ControlPersist is
enabled, the first (i.e. master) connection goes into the background; we
see EOF on its stdout and the process exits, but we never see EOF on its
stderr. So if we ran a command like this:
ANSIBLE_SSH_PIPELINING=1 ansible -T 30 -vvv somehost -u someuser -m command -a whoami
We would first do select([stdout,stderr], timeout) and read the command
module output, then select([stdout,stderr], timeout) again and read EOF
on stdout, then select([stderr], timeout) AGAIN (though the process has
exited), and select() would wait for the full timeout before returning
rfd=[], and then we would exit. The use of a very short timeout in the
code masked the underlying problem (that we don't see EOF on stderr).
It's always preferable to call select() with a long timeout so that the
process doesn't use any CPU until one of the events it's interested in
happens (and then select will return independent of elapsed time).
(A long timeout value means "if nothing happens, sleep for up to <x>";
omitting the timeout value means "if nothing happens, sleep forever";
specifying a zero timeout means "don't sleep at all", i.e. poll for
events and return immediately.)
This commit uses a long timeout, but explicitly detects the condition
where we've seen EOF on stdout and the process has exited, but we have
not seen EOF on stderr. If and only if that happens, it reruns select()
with a short timeout (in practice it could just exit at that point, but
I chose to be extra cautious). As a result, we end up calling select()
far less often, and use less CPU while waiting, but don't sleep for a
long time waiting for something that will never happen.
Note that we don't omit the timeout to select() altogether because if
we're waiting for an escalation prompt, we DO want to give up with an
error after some time. We also don't set exceptfds, because we're not
actually acting on any notifications of exceptional conditions.
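A self-contained sketch of that polling strategy (descriptive names, not the
actual _run() code):

    import os
    import select

    def drain_output(p, stdout_fd, stderr_fd, timeout):
        # p is the ssh subprocess; stdout_fd/stderr_fd are raw descriptors,
        # e.g. p.stdout.fileno() and p.stderr.fileno().
        output = {stdout_fd: b'', stderr_fd: b''}
        rpipes = [stdout_fd, stderr_fd]
        while rpipes:
            # If the process has exited and stdout has already seen EOF,
            # stderr may never deliver EOF (the ControlPersist master case),
            # so poll briefly for anything already written instead of
            # blocking for the full timeout.
            quick = p.poll() is not None and stdout_fd not in rpipes
            rfd, _, _ = select.select(rpipes, [], [], 0.001 if quick else timeout)
            if not rfd:
                break                    # nothing arrived within the timeout
            for fd in rfd:
                chunk = os.read(fd, 9000)
                if chunk:
                    output[fd] += chunk
                else:
                    rpipes.remove(fd)    # EOF on this pipe
        return output[stdout_fd], output[stderr_fd]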
Without this, we could execute «ssh -q ...» and call select(), which
would timeout after the default 10s, and only then send initial data.
(This is a relic of the earlier change where we always ran ssh with
-vvv, so the situation where it would sit quietly never happened in
practice; but this would have been the right thing to do even then.)
The event loop (even after it was brought into one place in _run in the
previous commit) was hard to follow. The states and transitions weren't
clear or documented, and the privilege escalation code was non-blocking
while the rest was blocking.
Now we have a state machine with four states: awaiting_prompt,
awaiting_escalation, ready_to_send (initial data), and awaiting_exit.
The actions in each state and the transitions between them are clearly
documented.
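In outline (illustrative, not the plugin's actual code), the states and the
choice of starting state look roughly like this:

    STATES = ['awaiting_prompt',      # waiting for the privilege escalation prompt
              'awaiting_escalation',  # password sent; waiting for success or failure
              'ready_to_send',        # escalation done or not needed; send initial data
              'awaiting_exit']        # everything written; drain output until ssh exits

    def initial_state(become, expects_prompt):
        if become and expects_prompt:
            return STATES.index('awaiting_prompt')
        if become:
            return STATES.index('awaiting_escalation')
        return STATES.index('ready_to_send')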
The check_incorrect_password() method no longer checks for empty strings
(since they will always match), and check_become_success() uses equality
rather than a substring match to avoid thinking an echoed command is an
indication of successful escalation. Also adds a check_missing_password
connection method to detect the error from sudo -n/doas -n.
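Hedged sketches of those checks (the function names and marker strings are
approximations, not the exact plugin code):

    def check_become_success(output, success_key):
        # Exact line match, so a command line that merely echoes the key back
        # is not mistaken for successful escalation.
        return any(line.strip() == success_key for line in output.splitlines())

    def check_incorrect_password(output, incorrect_msg):
        # An empty configured message would match everything, so require content.
        return bool(incorrect_msg) and incorrect_msg in output

    def check_missing_password(output):
        # sudo -n / doas -n print an error instead of prompting; the exact
        # marker strings here are illustrative.
        lowered = output.lower()
        return ('a password is required' in lowered or
                'authorization required' in lowered)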
The main exec_command/put_file/fetch_file methods now _build_command and
call _run to handle input from/output to the ssh process. The purpose is
to bring connection handling together in one place so that the locking
doesn't have to be split across functions.
Note that this doesn't change the privilege escalation and connection IO
code at all—just puts it all into one function.
Most of the changes are just moving code from one place to another (e.g.
from _connect to _build_command, from _exec_command and _communicate to
_run), but there are some other notable changes:
1. We test for the existence of sshpass the first time we need to use
password authentication, and remember the result.
2. We set _persistent in _build_command if we're using ControlPersist,
for later use in close(). (The detection could be smarter.)
3. Some apparently inadvertent inconsistencies between put_file and
fetch_file (e.g. argument quoting, sftp -b use) have been removed.
Also reorders functions into a logical sequence, removes unused imports
and functions, etc.
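For item 1 above, a hedged sketch of remembering the sshpass check (the
module-level cache shown is illustrative):

    import subprocess

    _SSHPASS_AVAILABLE = None

    def sshpass_available():
        # Run the check only once, the first time password authentication is
        # actually needed, and remember the answer.
        global _SSHPASS_AVAILABLE
        if _SSHPASS_AVAILABLE is None:
            try:
                p = subprocess.Popen(['sshpass'],
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE,
                                     stderr=subprocess.PIPE)
                p.communicate()
                _SSHPASS_AVAILABLE = True
            except OSError:
                _SSHPASS_AVAILABLE = False
        return _SSHPASS_AVAILABLE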
Aside: the high-level EXEC/PUT/FETCH description should really be logged
from ConnectionBase, while individual subclasses log transport-specific
details.
* Add exception handling when running PowerShell modules to provide exception message and stack trace.
* Enable strict mode for all PowerShell modules and internal commands.
* Update common PowerShell code to fix strict mode errors.
* Fix an issue with Set-Attr where it would not replace an existing property if already set.
* Add tests for exception handling using modified win_ping modules.