Reported by Rumen:
TASK: [fail FAIL] *************************************************************
skipping: [hostname.com]
failed: [hostname.com] => {"failed": true}
msg: Failed as requested from task
Traceback (most recent call last):
  File "/usr/local/bin/ansible-playbook", line 268, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/usr/local/bin/ansible-playbook", line 208, in main
    pb.run()
  File "/Library/Python/2.7/site-packages/ansible/playbook/__init__.py", line 262, in run
    if not self._run_play(play):
  File "/Library/Python/2.7/site-packages/ansible/playbook/__init__.py", line 580, in _run_play
    if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count):
TypeError: object of type 'NoneType' has no len()
Adds original_transport attribute to Runner to track what the original
transport was before it is changed to 'accelerate'.
If original_transport is paramiko, ParamikoConnection is used; otherwise it
falls back to SSHConnection as before.
The play was only checking for the presence of the keyword in the
YAML datastructure, not the value of the field, so something
like variable substitution always caused the play to be accelerated.
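A small illustration of the fix (not ansible's exact code, which uses its own boolean helper): decide from the templated value, not from mere key presence.

def wants_acceleration(play_ds):
    # after variable substitution the value may arrive as a string like "false"
    value = play_ds.get('accelerate', False)
    if isinstance(value, str):
        return value.strip().lower() in ('yes', 'true', '1', 'on')
    return bool(value)

assert wants_acceleration({'accelerate': 'false'}) is False  # old code: key present -> accelerated
assert wants_acceleration({'accelerate': True}) is True
assert wants_acceleration({}) is False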
* Default variables are now fed directly into roles, just like the
other variables, so that roles see their unique values rather
than those set at the global level.
* Role dependency duplicates are now determined by checking the params used
when specifying them as dependencies rather than just on the name of the
role. For example, the following would be included twice without having
to specify "allow_duplicates: true":
dependencies:
- { role: foo, x: 1 }
- { role: foo, x: 2 }
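A rough sketch of the new duplicate check (not ansible's code): a dependency is only skipped when both the role name and its parameters match one that was already included.

included = set()

def should_include(role_name, params):
    key = (role_name, tuple(sorted(params.items())))
    if key in included:
        return False
    included.add(key)
    return True

assert should_include('foo', {'x': 1})       # first foo with x=1
assert should_include('foo', {'x': 2})       # same role, different params: included again
assert not should_include('foo', {'x': 1})   # exact duplicate: skipped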
Caveats:
* requiretty must be disabled in the sudoers config
* asking for a password doesn't work yet, so any sudoers users must
be configured with NOPASSWD
* if not starting the daemon as root, the user running the daemon
must have sudoers entries to allow them to run the command as the
target sudo_user
Still needs:
* chunked file transfer/receive
* should probably move all send/recv operations to separate
functions to reduce code duplication
* initial connection setup over ssh? or do we handle that in runner?
This is based somewhat loosely on how Keyczar does things. Their
implementation does things in a much more generic way to allow for more
variance in how the cipher is created, but since we're only using one
key type most of our values are hard-coded. They also add a header to
their messages, which I am not doing (don't see the need for it
currently).
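A sketch of such a hard-coded scheme, for illustration only: the 'cryptography' package, the HMAC choice and the key size here are my assumptions rather than what ansible itself uses, but the shape (IV + CBC ciphertext + integrity tag, no Keyczar-style header) matches the description above.

import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(key, data):
    block = algorithms.AES.block_size // 8        # 16 bytes
    pad = block - len(data) % block               # PKCS#7-style padding
    data += bytes([pad]) * pad
    iv = os.urandom(block)
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    msg = iv + enc.update(data) + enc.finalize()
    return msg + hmac.new(key, msg, hashlib.sha256).digest()   # no header, just a tag

ciphertext = encrypt(os.urandom(32), b"ping")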
Files were being created in /tmp, but will now be created in $HOME/.ansible/cp/
Addresses CVE-2013-4259: ansible uses a socket with predictable filename in /tmp
The 'always_run' task clause allows one to execute a task even in
check mode.
While here, implement Runner.noop_on_check() to check whether a runner
really should execute its task, with respect to the check mode option
and the 'always_run' clause.
Also add the optional 'jinja2' argument to check_conditional():
it allows passing this function a Jinja2 expression without exposing
the 'jinja2_compare' implementation mechanism.
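Not the actual implementation, just the decision Runner.noop_on_check() encodes: in check mode a task is a no-op unless it sets 'always_run'.

def noop_on_check(check_mode, always_run):
    return check_mode and not always_run

assert noop_on_check(True, False)         # --check: do nothing
assert not noop_on_check(True, True)      # --check plus always_run: execute
assert not noop_on_check(False, False)    # normal run: execute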
For some reason, ssh seems to ask for a password even when
PasswordAuthentication is set to no; adding PreferredAuthentications
with the 2 options removed does the trick.
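A hedged sketch of the resulting ssh options; exactly which authentication methods are left in the list is my assumption, not stated in the commit.

common_args = [
    "-o", "PasswordAuthentication=no",
    # keep ssh from ever prompting, even if the server offers interactive methods
    "-o", "PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
]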
Tests `test_playbook_undefined_varsX_fail` check if ansible detects
undefined variables when `error_on_undefined_vars` is enabled. These
tests fail without "Improve behavior with error_on_undefined_vars
enabled" patch.
Tests `test_playbook_undefined_varsX_ignore` check if ansible ignores
undefined variables when `error_on_undefined_vars` is disabled.
Also modify PlayBook._run_task_internal() so error_on_undefined_vars is
testable.
Pass fail_on_undefined flag to recursive calls to `template` function,
so more undefined variables are detected.
This works only for Jinja2-style variables; undefined legacy variables are
never detected.
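A standalone illustration (not ansible's code) of what fail_on_undefined maps to in Jinja2: StrictUndefined makes missing variables raise instead of rendering as empty strings.

from jinja2 import Environment, StrictUndefined, Undefined
from jinja2.exceptions import UndefinedError

def template(source, variables, fail_on_undefined=False):
    env = Environment(undefined=StrictUndefined if fail_on_undefined else Undefined)
    return env.from_string(source).render(**variables)

print(template("{{ missing }}", {}))                    # renders as "" and is silently ignored
try:
    template("{{ missing }}", {}, fail_on_undefined=True)
except UndefinedError as e:
    print("caught:", e)                                 # 'missing' is undefined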
Previously, hostvars would only expose a keys() list of the hosts that had
been seen so far; however, you could explicitly access a host by name and
get its content that way. This prevented template code from safely accessing
information about other hosts whenever any limiters/tags were in use.
Additionally, the object was inconsistent between hostvars['myhost'] access
and [x[1] for x in hostvars.items() if x[0] == 'myhost'] access; this was
due to the original derivation from the dict object: .items() would be
handled by dict.items(), using the passed-in setup_cache values without
using the actual lookup content.
This patch rebases the class implementation to a py2.6 dictmixin, fixing
those issues and restoring behaviour to match what the docs claim.
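A minimal sketch of the idea, using collections.abc.Mapping as a modern stand-in for the py2.6 DictMixin; the lookup callable is hypothetical. Deriving from the mixin routes .keys(), .items() and iteration through __getitem__, so the lazy per-host lookup is always used.

from collections.abc import Mapping

class HostVars(Mapping):
    def __init__(self, setup_cache, host_list, lookup):
        self._cache = setup_cache        # vars gathered so far
        self._hosts = host_list          # every host in inventory, not just those seen yet
        self._lookup = lookup            # callable doing the real per-host lookup

    def __getitem__(self, host):
        if host not in self._cache:
            self._cache[host] = self._lookup(host)
        return self._cache[host]

    def __iter__(self):
        return iter(self._hosts)

    def __len__(self):
        return len(self._hosts)

hv = HostVars({}, ['web1', 'db1'], lambda host: {'name': host})
assert list(hv.keys()) == ['web1', 'db1']
assert dict(hv.items())['db1'] == {'name': 'db1'}    # items() goes through the lookup too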
For link-local addresses, it is sometimes necessary to append the
interface to use for the ipv6 address. This patch extends the ipv6
regex to allow for '%ifnameX' at the end.
See https://bugzilla.redhat.com/show_bug.cgi?id=136852 for more info
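Illustrative only, a much simplified pattern; the point is the optional '%<interface>' zone-id suffix at the end (the real regex in ansible is more elaborate).

import re

IPV6_WITH_ZONE = re.compile(r'^[0-9a-fA-F:]+(%[a-zA-Z0-9_.-]+)?$')

assert IPV6_WITH_ZONE.match('fe80::1%eth0')    # link-local address with interface
assert IPV6_WITH_ZONE.match('2001:db8::1')     # plain address still matches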
Due to various inconsistencies of ssh and sftp regarding ipv6 and
ipv4 handling, some special arguments must be passed, and the
ipv6 must be passed in a specific format.
Testing with an ipv6 address:
ansible -u misc -i '[2002::c23e]:22,' '*' -m ping
fails due to ':' being parsed as the ip/port separator, as for ipv4.
This commit adds support for properly parsing 2002::c23 and the
bracket notation [2002::ce]:2222.
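A rough sketch of the host/port split described above (not the actual inventory parser):

def parse_host(entry):
    if entry.startswith('['):              # bracketed ipv6: [addr]:port
        addr, _, rest = entry[1:].partition(']')
        return addr, rest.lstrip(':') or None
    if entry.count(':') == 1:              # ipv4 or hostname with a port
        addr, port = entry.split(':')
        return addr, port
    return entry, None                     # bare ipv6 address or plain hostname

assert parse_host('[2002::ce]:2222') == ('2002::ce', '2222')
assert parse_host('2002::c23e') == ('2002::c23e', None)
assert parse_host('192.0.2.1:22') == ('192.0.2.1', '22')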
The block that added the original list of roles was indented too far,
and was only being reached if a role had dependencies. This prevented
roles without dependencies from being added to the list of roles.
Credit goes to looped for reporting and diagnosing the issue.
Fixes #3686
Dependencies are enabled by adding a new directory/file named
meta/main.yml to the role. The format of the dependencies is:
dependencies:
- { role: foo, x: 1, y: 2 }
- { role: bar, x: 3, y: 4 }
...
Dependencies inherit variables as they are seen at the time of the
dependency inclusion. For example, if foo(x=1, y=2) has a dependency
on bar(x=3,z=4), then bar will have variables (x=3,y=2,z=4).
Different roles can have dependencies on the same role, and this variable
inheritance allows generic roles to be reused quite easily; a short sketch
of this expansion follows the example below.
For example:
Role 'car' has the following dependencies:
dependencies:
- { role: wheel, n: 1 }
- { role: wheel, n: 2 }
- { role: wheel, n: 3 }
- { role: wheel, n: 4 }
Role 'wheel' has the following dependencies:
dependencies:
- { role: tire }
- { role: brake }
The role 'car' is then used as follows:
- { role: car, type: honda }
And tasks/main.yml in each role simply contains the following:
- name: {{ type }} whatever {{ n }}
  command: echo ''
TASK: [honda tire 1]
TASK: [honda brake 1]
TASK: [honda wheel 1]
TASK: [honda tire 2]
TASK: [honda brake 2]
TASK: [honda wheel 2]
TASK: [honda tire 3]
TASK: [honda brake 3]
TASK: [honda wheel 3]
TASK: [honda tire 4]
TASK: [honda brake 4]
TASK: [honda wheel 4]
TASK: [I'm a honda] <- (this is in roles/car/tasks/main.yml)
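A toy model of the expansion (not ansible's code): dependency parameters layer on top of the including role's variables, depth-first, with dependencies before the role itself, which yields exactly the task order shown above.

def expand(role, inherited, deps_of):
    merged = dict(inherited, **role.get('params', {}))
    out = []
    for dep in deps_of.get(role['name'], []):
        out += expand(dep, merged, deps_of)
    out.append((role['name'], merged))
    return out

deps_of = {'car': [{'name': 'wheel', 'params': {'n': n}} for n in (1, 2, 3, 4)],
           'wheel': [{'name': 'tire'}, {'name': 'brake'}]}
plan = expand({'name': 'car', 'params': {'type': 'honda'}}, {}, deps_of)
# tire, brake, wheel for n=1..4 (each seeing type=honda and its own n), then car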
The idea is that a given plugin should not be called in some
specific cases, and the callback should decide that by itself.
Having a way to globally disable it is much cleaner than
disabling every method one by one on the plugin side.
My use case is fedora-infrastructure, which cannot be run
from a git checkout since it tries to connect to the message bus;
another case would be bootstrapping infrastructure, or running
the code on a test server without having all the callback
infrastructure set up.
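A sketch of the dispatch side of this (the 'disabled' attribute name is how I recall ansible's convention; treat it as illustrative): every callback hook simply skips plugins that have disabled themselves.

def fire(callbacks, method, *args, **kwargs):
    for cb in callbacks:
        if getattr(cb, 'disabled', False):
            continue                      # the plugin opted out globally
        hook = getattr(cb, method, None)
        if callable(hook):
            hook(*args, **kwargs)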
by ensuring all basedirs, plugin paths and extra
paths are handled as absolute paths and are checked
so that no duplicates are added.
This fixes the corner case where e.g. the user has
an additional plugin path configured to a dir
relative to his playbooks or inventory location,
which also matches the _plugin subdir relative to
one of the basedirs in the play.
For most plugins this doesn't show up as an obvious issue,
except for callback_plugins, which might fire more
than once. Other plugins (inventory and template
plugins) might unnecessarily be run twice.
e.g. ansible.cfg has
callback_plugins = ./plays/callback_plugins
and plays/ contains a playbook file:
.
├── ansible.cfg
├── inventory
└── plays
├── callback_plugins
│ └── timestamp.py
└── site.yml
modified: lib/ansible/utils/plugins.py
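A simplified version of the idea in lib/ansible/utils/plugins.py (not the actual code): every candidate directory is made absolute before the duplicate check, so './plays/callback_plugins' and the same directory reached via a basedir collapse into a single entry.

import os

def add_paths(existing, candidates):
    for d in candidates:
        d = os.path.realpath(d)           # absolute, symlink-free form
        if d not in existing:
            existing.append(d)
    return existing

paths = add_paths([], ['./plays/callback_plugins', 'plays/callback_plugins'])
assert len(paths) == 1                    # the relative spellings collapse to one dir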
The previous patch was reverted due to an issue with the results not
always being a dictionary (they're sometimes a unicode string,
e.g. when with_items is used with yum). This minor change corrects
that by checking for a dict object.
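The guard this adds, in spirit:

def is_failed(result):
    # only dict-shaped results can carry a 'failed' key; with_items may hand
    # back a plain (unicode) string for some modules such as yum
    return isinstance(result, dict) and bool(result.get('failed'))

assert is_failed({'failed': True})
assert not is_failed(u'package foo installed')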
to ensure consistent behavior, hosts should look like this:
hosts: webservers:&boston:!rack42
So when applying the host selectors, run the plain patterns (those
without "&" or "!") first, then the &s, then the !s.
Closes #3500
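An illustration of that ordering rule (not the actual inventory code): plain patterns are applied first, then the '&' intersections, then the '!' exclusions, regardless of how they were written.

def ordered_selectors(pattern):
    parts = pattern.split(':')
    plain = [p for p in parts if not p.startswith(('&', '!'))]
    inter = [p for p in parts if p.startswith('&')]
    excl  = [p for p in parts if p.startswith('!')]
    return plain + inter + excl

assert ordered_selectors('webservers:!rack42:&boston') == ['webservers', '&boston', '!rack42']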
If SELinux is enabled and mcstrans is running, daemons are restarted on each
run. After further debugging, it turns out that ansible compares the untranslated
level 's0' with the translated level 'SystemLow' when mcstrans is running,
which triggers a handler since this is considered a change.
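A hedged sketch of the direction of the fix: compare the raw (untranslated) context so a running mcstransd cannot make 's0' look different from 'SystemLow'; whether ansible uses exactly this libselinux call is my assumption.

import selinux

def raw_file_context(path):
    rc, context = selinux.lgetfilecon_raw(path)   # raw form, e.g. '...:s0', never 'SystemLow'
    if rc < 0:
        raise OSError("failed to read the SELinux context of %s" % path)
    return context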
and the _meta hash contains "hostvars", don't call --host hostname for any
elements; just serve them directly, as a performance enhancement for external
inventory scripts with a large number of hosts.
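An example of the output shape that triggers this: when a top-level "_meta" with "hostvars" is present, ansible never runs the script again with --host <hostname>.

import json

inventory = {
    "webservers": {"hosts": ["web1", "web2"]},
    "_meta": {
        "hostvars": {
            "web1": {"http_port": 80},
            "web2": {"http_port": 8080},
        }
    },
}
print(json.dumps(inventory))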
Added support for an optional init method for action modules like rsync that need to alter the connection and other inject data before it is established.
Treat errno 13 (permission denied) as one of the special cases in
atomic_move.
This type of error can occur when sudo'ing to a non-root user.
Fixes #3705
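A rough sketch of the special-casing (not ansible's exact atomic_move): when the rename is refused with EACCES (errno 13), fall back to a copy instead of failing outright.

import errno, os, shutil

def atomic_move(src, dest):
    try:
        os.rename(src, dest)
    except OSError as e:
        if e.errno == errno.EACCES:       # permission denied, e.g. sudo'ed to a non-root user
            shutil.copy2(src, dest)
            os.unlink(src)
        else:
            raise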
Since ansible 1.2, it has been possible to place a host_vars
directory in the same directory as a playbook, making it possible
to keep host_vars local to that playbook. However, due to
python's os.path.dirname, an invocation such as:
$ ansible-playbook pb.yml
...would not pick up the host_vars, as os.path.dirname("pb.yml")
returns "", unlike the unix command dirname, which would return
".". Substituting "pb.yml" on the command line with "./pb.yml"
does the trick, but is not always intuitive. This patch
solves the problem until python solves issue18547 [1].
[1] http://bugs.python.org/issue18547
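The behavior in question, and the usual guard for it:

import os

assert os.path.dirname("pb.yml") == ""        # no directory component at all
assert os.path.dirname("./pb.yml") == "."     # the explicit prefix works
basedir = os.path.dirname("pb.yml") or "."    # treat "" as the current directory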