If someone adds ssh_args = " " to their .ansible.cfg, it will result in a strange failure later:
<server.example.org> ESTABLISH CONNECTION FOR USER: misc
<server.example.org> REMOTE_MODULE ping
<server.example.org> EXEC ['ssh', '-C', '-tt', '-q', ' ', '-o', 'KbdInteractiveAuthentication=no',
'-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no',
'-o', 'ConnectTimeout=10', 'server.example.org', "/bin/sh -c 'mkdir -p /tmp/ansible-tmp-1397947711.21-5932460998838
&& chmod a+rx /tmp/ansible-tmp-1397947711.21-5932460998838 && echo /tmp/ansible-tmp-1397947711.21-5932460998838'"]
server.example.org | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the
command using -vvvv, which will enable SSH debugging output to help diagnose the issue
The root cause is the empty string between -q and -o, which ends up breaking the mkdir invocation.
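A minimal sketch of a guard against this, assuming the args are split with shlex (illustrative, not the actual Ansible source):

```python
import shlex

def clean_ssh_args(raw):
    # shlex.split(" ") already yields [], but other inputs can still
    # produce whitespace-only tokens (e.g. a quoted space), so filter
    # those out before appending them to the ssh command line
    return [tok for tok in shlex.split(raw or "") if tok.strip()]

assert clean_ssh_args(" ") == []
assert clean_ssh_args('-o ControlMaster=auto " "') == ["-o", "ControlMaster=auto"]
```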
If a delegated host was not found in the inventory, the private_key_file
specified for the primary host was not used.
This change allows running playbooks without having to define any inventory
at all, and using the same ssh private key for both the primary host and
the delegated one.
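For illustration (host names and key path are made up), a play like the following can now delegate to a host that is absent from the inventory while reusing the key given on the command line:

```yaml
# run with: ansible-playbook play.yml -i "primary.example.org," --private-key ~/.ssh/id_rsa
- hosts: all
  tasks:
    - name: step executed on a host that is not in the inventory
      command: /bin/true
      delegate_to: helper.example.org
```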
Enable the use of executable commands that take command line options with
the localhost command runner. These commands require parsing the base
executable out of the command string before handing the command to subprocess.
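As a hedged sketch of that parsing step (not the actual runner code), only the first token of the command string is the executable whose existence needs checking:

```python
import shlex

def base_executable(command):
    # "/usr/bin/make -j4 all" -> "/usr/bin/make"
    return shlex.split(command)[0]

assert base_executable("/usr/bin/make -j4 all") == "/usr/bin/make"
```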
Any other module is able to detect a dark host, but raw was treating 255
as a return code from the module execution rather than from the connection
attempt. This change allows 255 to be treated as a connection failure
when using the raw module.
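ssh reserves exit status 255 for its own errors, which is what makes this distinction possible; a hedged sketch of the resulting interpretation (function name is illustrative):

```python
def interpret_raw_return_code(rc):
    # 255 comes from ssh itself, i.e. the connection attempt failed and
    # the host is dark; anything else is the command's own exit status
    if rc == 255:
        return "unreachable"
    return "ok" if rc == 0 else "failed"
```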
The jid will now also contain the PID of the async_wrapper process,
so that each unique jid from each host is tracked rather than just
relying on one global jid per task.
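A minimal sketch of the idea (the jid format shown is an assumption, not necessarily what async_wrapper emits): appending the wrapper's PID keeps jids unique even when two tasks draw the same random component.

```python
import os
import random

def make_jid():
    # illustrative format: <random component>.<async_wrapper pid>
    return "%d.%d" % (random.randint(0, 10 ** 12), os.getpid())
```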
Fixes #5582
* Added capability to support multiple keys, so clients from different
machines can connect to a single daemon instance
* Any activity on the daemon will cause the timeout to extend, so the daemon must be idle for the full number of minutes before it will auto-shutdown (a sketch of this behavior follows the list)
* Various other small fixes to remove some redundancy
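A hedged sketch of the idle-timeout behavior referenced above (class and names are illustrative, not the daemon's actual code): every request pushes the shutdown deadline out by the full window.

```python
import threading
import time

class IdleTimeout:
    def __init__(self, minutes, shutdown):
        self.window = minutes * 60
        self.shutdown = shutdown
        self.deadline = time.time() + self.window
        thread = threading.Thread(target=self._watch)
        thread.daemon = True
        thread.start()

    def touch(self):
        # call on any daemon activity; restarts the full idle window
        self.deadline = time.time() + self.window

    def _watch(self):
        while time.time() < self.deadline:
            time.sleep(1)
        self.shutdown()
```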
Fixes #5171
Add support for checking hosts against the global known hosts files.
The effect of this: before this fix, if hosts were recorded in the global known hosts files but not in the per-user known_hosts file, those hosts would execute sequentially. This PR augments the check so that all of the known hosts will execute in parallel.
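A hedged sketch of the broadened check, assuming the conventional OpenSSH file locations (hashed known_hosts entries are skipped rather than matched in this sketch):

```python
import os

KNOWN_HOSTS_FILES = [
    os.path.expanduser("~/.ssh/known_hosts"),   # per-user file
    "/etc/ssh/ssh_known_hosts",                 # global files
    "/usr/local/etc/ssh/ssh_known_hosts",
]

def host_is_known(host):
    for path in KNOWN_HOSTS_FILES:
        if not os.path.exists(path):
            continue
        with open(path) as f:
            for line in f:
                if line.strip() and not line.startswith("|"):
                    # first field is a comma-separated host pattern list
                    if host in line.split()[0].split(","):
                        return True
    return False
```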
##### Issue Type:
Bugfix Pull Request
##### Ansible Version:
ansible 1.4.3
##### Environment:
N/A
##### Summary:
We are using a wrapper python script to run ansible-playbook. We use subprocess to execute it and print stdout as and when it is written. The problem is that when we use pause, it doesn't display the prompt string, because raw_input does not flush stdout before reading from stdin.
It looks like a dirty fix to add "\n" to the prompt string, but I don't see any other way to overcome this. If anyone has a better fix, please propose/suggest it.
##### Steps To Reproduce:
```yaml
#File: test_play.yml
- name: Test
  hosts: $nodes
  gather_facts: false
  tasks:
    - name: Waiting for User
      local_action: pause prompt="Do you want to continue (yes/no)? "
```
```python
#!/usr/bin/env python
#File: test.py
import shlex, subprocess

def run_process(process):
    process = process.encode("utf-8")
    command = shlex.split(process)
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in iter(p.stdout.readline, b''):
        print line,

cmd = "/usr/bin/python -u /usr/bin/ansible-playbook -i hosts.txt test_play.yml -e 'nodes=local'"
run_process(cmd)
```
```
shell $ python test.py
```
##### Expected Results:
```
PLAY [Test] *******************************************************************
TASK: [Waiting for User] ******************************************************
[localhost]
Do you want to continue (yes/no)? :
```
##### Actual Results:
```
PLAY [Test] *******************************************************************
TASK: [Waiting for User] ******************************************************
[localhost]
```
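A minimal sketch of the fix described in the summary (Python 2 to match the report above, and not necessarily the exact patch): flush stdout before blocking on raw_input so the prompt reaches the reading process.

```python
import sys

def prompt_user(prompt):
    sys.stdout.write(prompt)
    sys.stdout.flush()   # push the prompt out before blocking on stdin
    return raw_input()
```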
Create a lookup plugin named dict that can be used to loop over hashes.
It converts a dict into a list of key-value pairs, with attributes named
"key" and "value." Also adds a brief explanation and simple example to
the docs.
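For illustration (variable names are made up), looping over a hash with this plugin looks like:

```yaml
vars:
  users:
    alice: { uid: 1001 }
    bob:   { uid: 1002 }
tasks:
  - name: show each key/value pair
    debug: msg="{{ item.key }} has uid {{ item.value.uid }}"
    with_dict: users
```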
Signed-off-by: Kent R. Spillner <kspillner@acm.org>
Bugfixes:
* the remote_src param was not being converted to a boolean correctly,
resulting in it never being used by the module as the default behavior
was remote_src=True (issue #5581)
* the remote_src param was not listed in the generic file params, leading
to a failure when the above bug regarding remote_src was fixed
* the delimiter should always end with a newline to ensure that the file
fragments do not run together on one line
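A one-line sketch of the delimiter rule from the last bullet (helper name is illustrative):

```python
def normalize_delimiter(delimiter):
    # guarantee a trailing newline so fragments never run together
    return delimiter if delimiter.endswith("\n") else delimiter + "\n"
```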
Fixes #5581
When content is processed and found to be valid JSON, it is decoded into a dict. To write it out to a file, we need to encode it back into a string.
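A hedged sketch of that round trip (names are illustrative):

```python
import json

def content_as_text(content):
    # content may have been decoded into a dict; serialize it back to a
    # string before writing it out to the destination file
    if isinstance(content, dict):
        return json.dumps(content)
    return content
```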
Addresses GH-5914.
There is a bit going on with the changes here. Most of them are cleanup of files so that they line up with the standard files.
PR #5136 was merged into the current devel and brought up to working order. A few bug fixes had to be made to get the code to test correctly. Thanks out to @pib!
Issue #5431 could not be confirmed, as the module behaved as expected with a sudo user.
Tests were added via a playbook with archive files to verify functionality.
All tests pass cleanly, including custom playbooks across multiple Linux and Solaris systems.
It turns out that some of the assumptions in #5885 were slightly off. The previous fix relied on a call to the module to create a tmp_path, which is insufficient: there are a few cases where we need the tmp directory to exist before we make the module call. If we don't have a tmp_path before we do a recursive call, or when we find a file that does not match the remote md5 hash, we need to create a tmp directory. We are also now more precise about when we will need to clean up the remote tmp_path.
Break out of the read while loop once the process has ended and the pipes
are empty; otherwise we do another select() that waits out the whole
timeout.
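A hedged sketch of that exit condition (not the actual runner code): keep select()ing while data may still arrive, and stop as soon as the child has exited and the pipe is drained.

```python
import os
import select

def drain_stdout(proc, timeout=10):
    out = b""
    while True:
        ready, _, _ = select.select([proc.stdout], [], [], timeout)
        if ready:
            chunk = os.read(proc.stdout.fileno(), 4096)
            if not chunk:
                break            # EOF: the pipe is closed, child is done
            out += chunk
        elif proc.poll() is not None:
            # nothing readable and the child has exited: stop now rather
            # than waiting out another full timeout in the next select()
            break
    return out
```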
The copy action_plugin is not easy to read. Part of this commit is taking that file, restructuring it, and adding comments. No functionality changed in how it interacts with the world.
The fix for #5739 relied on the assumption that a cleanup 'rm -rf' happens at the end of the copy loop. That was not the case before: we made a bunch of tmp directories and hoped they would end up being cleaned up. Now we just use the tmp directory that the runner provides, and clean up inline if a single file is being copied, or after the loop if it is a recursive copy.
As part of this, we ended up having to change the runner to provide a flag so that we could short-circuit the inline tmp directory removal. The flag defaults to True, so it will not change the behavior of other modules being called.
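A hedged sketch of that flag (the parameter name is an assumption; the real runner code differs): module execution removes the remote tmp path inline unless the caller opts out, as the recursive copy loop now does.

```python
def execute_module(run_module, cleanup, tmp_path, delete_remote_tmp=True):
    # run the module out of the remote tmp path, then optionally remove
    # that path inline; a recursive copy passes delete_remote_tmp=False
    # and removes the directory itself once the whole loop is done
    result = run_module(tmp_path)
    if delete_remote_tmp:
        cleanup(tmp_path)
    return result
```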