
multiple spelling error changes

Carlos E. Garcia 2014-04-29 10:41:05 -04:00
parent 9a6998aa17
commit 7f5dd5e85d
48 changed files with 72 additions and 72 deletions


@ -137,7 +137,7 @@ New modules:
* system: at
* utilities: assert
Other notable changes (many new module params & bugfixes may not not listed):
Other notable changes (many new module params & bugfixes may not be listed):
* no_reboot is now defaulted to "no" in the ec2_ami module to ensure filesystem consistency in the resulting AMI.
* sysctl module overhauled
@ -197,7 +197,7 @@ Highlighted new features:
* Added do-until feature, which can be used to retry a failed task a specified number of times with a delay in-between the retries.
* Added failed_when option for tasks, which can be used to specify logical statements that make it easier to determine when a task has failed, or to make it easier to ignore certain non-zero return codes for some commands.
* Added the "subelement" lookup plugin, which allows iteration of the keys of a dictionary or items in a list.
* Added the capability to use either paramiko or ssh for the inital setup connection of an accelerated playbook.
* Added the capability to use either paramiko or ssh for the initial setup connection of an accelerated playbook.
* Automatically provide advice on common parser errors users encounter.
* Deprecation warnings are now shown for legacy features: when_integer/etc, only_if, include+with_items, etc. Can be disabled in ansible.cfg
* The system will now provide helpful tips around possible YAML syntax errors increasing ease of use for new users.
@ -267,7 +267,7 @@ Misc changes (all module additions/fixes may not listed):
* Added a -vvvv level, which will show SSH client debugging information in the event of a failure.
* Includes now support the more standard syntax, similar to that of role includes and dependencies.
* Changed the `user:` parameter on plays to `remote_user:` to prevent confusion with the module of the same name. Still backwards compatible on play parameters.
* Added parameter to allow the fetch module to skip the md5 validation step ('validate_md5=false'). This is usefull when fetching files that are actively being written to, such as live log files.
* Added parameter to allow the fetch module to skip the md5 validation step ('validate_md5=false'). This is useful when fetching files that are actively being written to, such as live log files.
* Inventory hosts are used in the order they appear in the inventory.
* in hosts: foo[2-5] type syntax, the iterators now are zero indexed and the last index is non-inclusive, to match Python standards.
* There is now a way for a callback plugin to disable itself. See osx_say example code for an example.
@ -526,7 +526,7 @@ Modules added:
* packages: redhat_subscription: manage Red Hat subscription usage
* packages: rhn_register: basic RHN registration
* packages: zypper (SuSE)
* database: postgresql_priv: manages postgresql priveledges
* database: postgresql_priv: manages postgresql privileges
* networking: bigip_pool: load balancing with F5s
* networking: ec2_elb: add and remove machines from ec2 elastic load balancers
* notification: hipchat: send notification events to hipchat
@ -568,7 +568,7 @@ Bugfixes and Misc Changes:
* private_ip parameter added to the ec2 module
* $FILE and $PIPE now tolerate unicode
* various plugin loading operations have been made more efficient
* hostname now uses platform.node versus socket.gethostname to be more consistant with Unix 'hostname'
* hostname now uses platform.node versus socket.gethostname to be more consistent with Unix 'hostname'
* fix for SELinux operations on Unicode path names
* inventory directory locations now ignore files with .ini extensions, making hybrid inventory easier
* copy module in check-mode now reports back correct changed status when used with force=no
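The hostname entry above can be checked directly from a Python prompt; this is just the two standard-library calls the changelog compares, nothing Ansible-specific.

import platform
import socket

# the value the hostname module now uses, which tracks what the Unix
# 'hostname' command prints
print("platform.node():      %s" % platform.node())
# the call it used before
print("socket.gethostname(): %s" % socket.gethostname())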
@ -608,8 +608,8 @@ the variable is still registered for the host, with the attribute skipped: True.
* localhost and 127.0.0.1 are now fuzzy matched in inventory (are now more or less interchangeable)
* AIX improvements/fixes for users, groups, facts
* lineinfile now does atomic file replacements
* fix to not pass PasswordAuthentication=no in the config file unneccessarily for SSH connection type
* for for authorized_key on Debian Squeeze
* fix to not pass PasswordAuthentication=no in the config file unnecessarily for SSH connection type
* for authorized_key on Debian Squeeze
* fixes for apt_repository module reporting changed incorrectly on certain repository types
* allow the virtualenv argument to the pip module to be a pathname
* service pattern argument now correctly read for BSD services
@ -1035,7 +1035,7 @@ Module changes:
* setup module now detects interfaces with aliases
* better handling of VM guest type detection in setup module
* new module boilerplate code to check for mutually required arguments, arguments required together, exclusive args
* add pattern= as a paramter to the service module (for init scripts that don't do status, or do poor status)
* add pattern= as a parameter to the service module (for init scripts that don't do status, or do poor status)
* various fixes to mysql & postresql modules
* added a thirsty= option (boolean, default no) to the get_url module to decide to download the file every time or not
* added a wait_for module to poll for ports being open


@ -89,7 +89,7 @@ required. You're now live!
Reporting A Bug
---------------
Ansible practices responsible disclosure - if this is a security related bug, email security@ansible.com instead of filing a ticket or posting to the Google Group and you will recieve a prompt response.
Ansible practices responsible disclosure - if this is a security related bug, email security@ansible.com instead of filing a ticket or posting to the Google Group and you will receive a prompt response.
Bugs should be reported to [github.com/ansible/ansible](http://github.com/ansible/ansible) after
signing up for a free github account. Before reporting a bug, please use the bug/issue search
@ -138,7 +138,7 @@ affecting a smaller number of users.
Since we place a strong emphasis on testing and code review, it may take a few months for a minor feature to get merged.
Don't worry though -- we'll also take periodic sweeps through the lower priority queues and give
them some attention as well, particularly in the area of new module changes. So it doesn't neccessarily
them some attention as well, particularly in the area of new module changes. So it doesn't necessarily
mean that we'll be exhausting all of the higher-priority queues before getting to your ticket.
Release Numbering


@ -13,7 +13,7 @@
* helper function to return a node containing the
* search summary for a given text. keywords is a list
* of stemmed words, hlwords is the list of normal, unstemmed
* words. the first one is used to find the occurance, the
* words. the first one is used to find the occurrence, the
* latter for highlighting it.
*/


@ -338,7 +338,7 @@ and guidelines:
* In the event of failure, a key of 'failed' should be included, along with a string explanation in 'msg'. Modules that raise tracebacks (stacktraces) are generally considered 'poor' modules, though Ansible can deal with these returns and will automatically convert anything unparseable into a failed result. If you are using the AnsibleModule common Python code, the 'failed' element will be included for you automatically when you call 'fail_json'.
* Return codes from modules are not actually not signficant, but continue on with 0=success and non-zero=failure for reasons of future proofing.
* Return codes from modules are not actually not significant, but continue on with 0=success and non-zero=failure for reasons of future proofing.
* As results from many hosts will be aggregated at once, modules should return only relevant output. Returning the entire contents of a log file is generally bad form.
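For context, a minimal sketch of the failure convention the guideline above describes, using the AnsibleModule helper; the module, its 'path' argument, and the check itself are made up for illustration.

from ansible.module_utils.basic import AnsibleModule

def main():
    # hypothetical argument spec for a toy module
    module = AnsibleModule(argument_spec=dict(path=dict(required=True)))
    path = module.params['path']
    if not path.startswith('/'):
        # fail_json sets the 'failed' key for us; only 'msg' has to be supplied
        module.fail_json(msg="path must be absolute, got: %s" % path)
    # on success, return only the relevant keys rather than raw log output
    module.exit_json(changed=False, path=path)

if __name__ == '__main__':
    main()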


@ -580,7 +580,7 @@ and less information has to be shared with remote hosts.
Orchestration in the Rackspace Cloud
++++++++++++++++++++++++++++++++++++
Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other pice of software in an environment. Complex deployments might have previously required manaul manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additioanl nodes contingent on the current number of running nodes, or the configuration of a clustered applicaiton dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:
Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other pice of software in an environment. Complex deployments might have previously required manaul manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additioanl nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:
* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed


@ -37,7 +37,7 @@ from ansible import errors
def detect_range(line = None):
'''
A helper function that checks a given host line to see if it contains
a range pattern descibed in the docstring above.
a range pattern described in the docstring above.
Returnes True if the given line contains a pattern, else False.
'''
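A toy stand-in for the helper documented above, only to show the True/False contract; the real pattern matching lives in Ansible's inventory code, and the bracketed range syntax used here (e.g. foo[2-5], as mentioned earlier in this commit) is an assumption.

import re

def detect_range(line=None):
    # treat a host line as containing a range if it has a [start-end] or
    # [start:end] section somewhere in it (illustrative pattern only)
    return bool(line) and re.search(r'\[\d+[-:]\d+\]', line) is not None

assert detect_range("web[1-3].example.com")
assert not detect_range("db.example.com")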


@ -241,7 +241,7 @@ class RhsmPools(object):
def _load_product_list(self):
"""
Loads list of all availaible pools for system in data structure
Loads list of all available pools for system in data structure
"""
args = "subscription-manager list --available"
rc, stdout, stderr = self.module.run_command(args, check_rc=True)


@ -93,7 +93,7 @@ class PlayBook(object):
transport: how to connect to hosts that don't specify a transport (local, paramiko, etc)
callbacks output callbacks for the playbook
runner_callbacks: more callbacks, this time for the runner API
stats: holds aggregrate data about events occuring to each host
stats: holds aggregrate data about events occurring to each host
sudo: if not specified per play, requests all plays use sudo mode
inventory: can be specified instead of host_list to use a pre-existing inventory object
check: don't change anything, just try to detect some potential changes
@ -195,7 +195,7 @@ class PlayBook(object):
utils.plugins.push_basedir(basedir)
for play in playbook_data:
if type(play) != dict:
raise errors.AnsibleError("parse error: each play in a playbook must be a YAML dictionary (hash), recieved: %s" % play)
raise errors.AnsibleError("parse error: each play in a playbook must be a YAML dictionary (hash), received: %s" % play)
if 'include' in play:
# a playbook (list of plays) decided to include some other list of plays


@ -253,7 +253,7 @@ class Task(object):
if len(incompatibles) > 1:
raise errors.AnsibleError("with_(plugin), and first_available_file are mutually incompatible in a single task")
# make first_available_file accessable to Runner code
# make first_available_file accessible to Runner code
if self.first_available_file:
self.module_vars['first_available_file'] = self.first_available_file


@ -783,7 +783,7 @@ class Runner(object):
if actual_transport == 'accelerate':
# for accelerate, we stuff both ports into a single
# variable so that we don't have to mangle other function
# calls just to accomodate this one case
# calls just to accommodate this one case
actual_port = [actual_port, self.accelerate_port]
elif actual_port is not None:
actual_port = int(template.template(self.basedir, actual_port, inject))


@ -349,7 +349,7 @@ def template_from_string(basedir, data, vars, fail_on_undefined=False):
"Make sure your variable name does not contain invalid characters like '-'."
)
else:
raise errors.AnsibleError("an unexpected type error occured. Error was %s" % te)
raise errors.AnsibleError("an unexpected type error occurred. Error was %s" % te)
return res
except (jinja2.exceptions.UndefinedError, errors.AnsibleUndefinedVariable):
if fail_on_undefined:


@ -201,7 +201,7 @@ class ElbManager:
self.module.fail_json(msg=msg % (self.instance_id, lb))
if instance_state.state == awaited_state:
# Check the current state agains the initial state, and only set
# Check the current state against the initial state, and only set
# changed if they are different.
if (initial_state is None) or (instance_state.state != initial_state.state):
self.changed = True


@ -137,7 +137,7 @@ EXAMPLES = '''
resource_tags: { "Environment":"Development" }
region: us-west-2
# Full creation example with subnets and optional availability zones.
# The absence or presense of subnets deletes or creates them respectively.
# The absence or presence of subnets deletes or creates them respectively.
local_action:
module: ec2_vpc
state: present
@ -400,7 +400,7 @@ def create_vpc(module, vpc_conn):
# indempotent is to basically build all the route tables as
# defined, track the route table ids, and then run through the
# remote list of route tables and delete any that we didn't
# create. This shouldn't interupt traffic in theory, but is the
# create. This shouldn't interrupt traffic in theory, but is the
# only way to really work with route tables over time that I can
# think of without using painful aws ids. Hopefully boto will add
# the replace-route-table API to make this smoother and


@ -317,7 +317,7 @@ def handle_create(module, gs, bucket, obj):
if bucket_check(module, gs, bucket):
module.exit_json(msg="Bucket already exists.", changed=False)
else:
module.exit_json(msg="Bucket created succesfully", changed=create_bucket(module, gs, bucket))
module.exit_json(msg="Bucket created successfully", changed=create_bucket(module, gs, bucket))
if bucket and obj:
if bucket_check(module, gs, bucket):
if obj.endswith('/'):
@ -362,10 +362,10 @@ def main():
if mode == 'put':
if not src or not object:
module.fail_json(msg="When using PUT, src, bucket, object are mandatory paramters")
module.fail_json(msg="When using PUT, src, bucket, object are mandatory parameters")
if mode == 'get':
if not dest or not object:
module.fail_json(msg="When using GET, dest, bucket, object are mandatory paramters")
module.fail_json(msg="When using GET, dest, bucket, object are mandatory parameters")
if obj:
obj = os.path.expanduser(module.params['object'])


@ -168,7 +168,7 @@ def _set_tenant_id(module):
_os_tenant_id = tenant.id
break
if not _os_tenant_id:
module.fail_json(msg = "The tenant id cannot be found, please check the paramters")
module.fail_json(msg = "The tenant id cannot be found, please check the parameters")
def _get_net_id(neutron, module):


@ -139,7 +139,7 @@ def _set_tenant_id(module):
_os_tenant_id = tenant.id
break
if not _os_tenant_id:
module.fail_json(msg = "The tenant id cannot be found, please check the paramters")
module.fail_json(msg = "The tenant id cannot be found, please check the parameters")
def _get_router_id(module, neutron):


@ -140,7 +140,7 @@ def _set_tenant_id(module):
_os_tenant_id = tenant.id
break
if not _os_tenant_id:
module.fail_json(msg = "The tenant id cannot be found, please check the paramters")
module.fail_json(msg = "The tenant id cannot be found, please check the parameters")
def _get_router_id(module, neutron):


@ -169,7 +169,7 @@ def _set_tenant_id(module):
_os_tenant_id = tenant.id
break
if not _os_tenant_id:
module.fail_json(msg = "The tenant id cannot be found, please check the paramters")
module.fail_json(msg = "The tenant id cannot be found, please check the parameters")
def _get_net_id(neutron, module):
kwargs = {


@ -351,7 +351,7 @@ def download(module, cf, container, src, dest, structure):
def delete(module, cf, container, src, dest):
""" Delete specific objects by proving a single file name or a
comma-separated list to src OR dest (but not both). Ommitting file name(s)
comma-separated list to src OR dest (but not both). Omitting file name(s)
assumes the entire container is to be deleted.
"""
objs = None


@ -443,7 +443,7 @@ def main():
if bucketrtn is True:
module.exit_json(msg="Bucket already exists.", changed=False)
else:
module.exit_json(msg="Bucket created succesfully", changed=create_bucket(module, s3, bucket))
module.exit_json(msg="Bucket created successfully", changed=create_bucket(module, s3, bucket))
if bucket and obj:
bucketrtn = bucket_check(module, s3, bucket)
if obj.endswith('/'):


@ -7,7 +7,7 @@ short_description: Runs a local script on a remote node after transferring it
description:
- "The M(script) module takes the script name followed by a list of
space-delimited arguments. "
- "The local script at path will be transfered to the remote node and then executed. "
- "The local script at path will be transferred to the remote node and then executed. "
- "The given script will be processed through the shell environment on the remote node. "
- "This module does not require python on the remote system, much like
the M(raw) module. "


@ -198,7 +198,7 @@ def main():
pass
mode = module.params['slave_mode']
#Check if we ahve all the data
#Check if we have all the data
if mode == "slave": # Only need data if we want to be slave
if not master_host:
module.fail_json(
@ -235,7 +235,7 @@ def main():
else:
# Do the stuff
# (Check Check_mode before commands so the commands aren't evaluated
# if not necesary)
# if not necessary)
if mode == "slave":
if module.check_mode or\
set_slave_mode(r, master_host, master_port):
@ -281,7 +281,7 @@ def main():
# Do the stuff
# (Check Check_mode before commands so the commands aren't evaluated
# if not necesary)
# if not necessary)
if mode == "all":
if module.check_mode or flush(r):
module.exit_json(changed=True, flushed=True)


@ -77,7 +77,7 @@ options:
required: false
default: null
description:
- DEPRECATED. The acl to set or remove. This must always be quoted in the form of '<etype>:<qualifier>:<perms>'. The qualifier may be empty for some types, but the type and perms are always requried. '-' can be used as placeholder when you do not care about permissions. This is now superceeded by entity, type and permissions fields.
- DEPRECATED. The acl to set or remove. This must always be quoted in the form of '<etype>:<qualifier>:<perms>'. The qualifier may be empty for some types, but the type and perms are always requried. '-' can be used as placeholder when you do not care about permissions. This is now superseded by entity, type and permissions fields.
author: Brian Coca
notes:


@ -165,7 +165,7 @@ def main():
res = {}
if key is None and state in ['present','absent']:
module.fail_json(msg="%s needs a key paramter" % state)
module.fail_json(msg="%s needs a key parameter" % state)
# All xattr must begin in user namespace
if key is not None and not re.match('^user\.',key):


@ -145,7 +145,7 @@ class AristaInterface(object):
""" This method will return a dictionary with the attributes of the
physical ethernet interface resource specified in interface_id.
The physcial ethernet interface resource has the following
stucture:
structure:
{
"interface_id": <interface_id>,


@ -154,7 +154,7 @@ class AristaL2Interface(object):
def get(self):
""" This method will return a dictionary with the attributes of the
layer 2 interface resource specified in interface_id. The layer
2 interface resource has the following stucture:
2 interface resource has the following structure:
{
"interface_id": <interface_id>,


@ -143,7 +143,7 @@ class AristaLag(object):
def get(self):
""" This method will return a dictionary with the attributes of the
lag interface resource specified in interface_id. The lag
interface resource has the following stucture:
interface resource has the following structure:
{
"interface_id": <interface_id>,


@ -234,7 +234,7 @@ class AristaVlan(object):
def get(self):
""" This method will return a dictionary with the attributes of the
VLAN resource identified in vlan_id. The VLAN resource has the
following stucture:
following structure:
{
"vlan_id": <vlan_id>,


@ -101,7 +101,7 @@ options:
default: none
port:
description:
- port address part op the ipport definition. Tyhe default API
- port address part op the ipport definition. The default API
setting is 0.
required: false
default: none


@ -101,7 +101,7 @@ options:
default: none
port:
description:
- port address part op the ipport definition. Tyhe default API
- port address part op the ipport definition. The default API
setting is 0.
required: false
default: none


@ -24,7 +24,7 @@ description:
options:
account_email:
description:
- "Account email. If ommitted, the env variables DNSIMPLE_EMAIL and DNSIMPLE_API_TOKEN will be looked for. If those aren't found, a C(.dnsimple) file will be looked for, see: U(https://github.com/mikemaccana/dnsimple-python#getting-started)"
- "Account email. If omitted, the env variables DNSIMPLE_EMAIL and DNSIMPLE_API_TOKEN will be looked for. If those aren't found, a C(.dnsimple) file will be looked for, see: U(https://github.com/mikemaccana/dnsimple-python#getting-started)"
required: false
default: null
@ -36,7 +36,7 @@ options:
domain:
description:
- Domain to work with. Can be the domain name (e.g. "mydomain.com") or the numeric ID of the domain in DNSimple. If ommitted, a list of domains will be returned.
- Domain to work with. Can be the domain name (e.g. "mydomain.com") or the numeric ID of the domain in DNSimple. If omitted, a list of domains will be returned.
- If domain is present but the domain doesn't exist, it will be created.
required: false
default: null


@ -98,7 +98,7 @@ options:
description:
- The character set of email being sent
default: 'us-ascii'
requred: false
required: false
"""
EXAMPLES = '''


@ -286,7 +286,7 @@ class RhsmPools(object):
def _load_product_list(self):
"""
Loads list of all availaible pools for system in data structure
Loads list of all available pools for system in data structure
"""
args = "subscription-manager list --available"
rc, stdout, stderr = self.module.run_command(args, check_rc=True)


@ -145,7 +145,7 @@ class Group(object):
class SunOS(Group):
"""
This is a SunOS Group manipulation class. Solaris doesnt have
This is a SunOS Group manipulation class. Solaris doesn't have
the 'system' group concept.
This overrides the following methods from the generic class:-


@ -57,7 +57,7 @@ def apply_change(targetState, name, encoding):
"""Create or remove locale.
Keyword arguments:
targetState -- Desired state, eiter present or absent.
targetState -- Desired state, either present or absent.
name -- Name including encoding such as de_CH.UTF-8.
encoding -- Encoding such as UTF-8.
"""
@ -76,7 +76,7 @@ def apply_change_ubuntu(targetState, name, encoding):
"""Create or remove locale.
Keyword arguments:
targetState -- Desired state, eiter present or absent.
targetState -- Desired state, either present or absent.
name -- Name including encoding such as de_CH.UTF-8.
encoding -- Encoding such as UTF-8.
"""


@ -533,7 +533,7 @@ class LinuxService(Service):
# if the job status is still not known check it by status output keywords
if self.running is None:
# first tranform the status output that could irritate keyword matching
# first transform the status output that could irritate keyword matching
cleanout = status_stdout.lower().replace(self.name.lower(), '')
if "stop" in cleanout:
self.running = False


@ -69,17 +69,17 @@ class CallbackModule(object):
def runner_on_error(self, host, msg):
sender = '"Ansible: %s" <root>' % host
subject = 'Error: %s' % msg.strip('\r\n').split('\n')[0]
body = 'An error occured for host ' + host + ' with the following message:\n\n' + msg
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + msg
mail(sender=sender, subject=subject, body=body)
def runner_on_unreachable(self, host, res):
sender = '"Ansible: %s" <root>' % host
if isinstance(res, basestring):
subject = 'Unreachable: %s' % res.strip('\r\n').split('\n')[-1]
body = 'An error occured for host ' + host + ' with the following message:\n\n' + res
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + res
else:
subject = 'Unreachable: %s' % res['msg'].strip('\r\n').split('\n')[0]
body = 'An error occured for host ' + host + ' with the following message:\n\n' + \
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + \
res['msg'] + '\n\nA complete dump of the error:\n\n' + str(res)
mail(sender=sender, subject=subject, body=body)
@ -87,9 +87,9 @@ class CallbackModule(object):
sender = '"Ansible: %s" <root>' % host
if isinstance(res, basestring):
subject = 'Async failure: %s' % res.strip('\r\n').split('\n')[-1]
body = 'An error occured for host ' + host + ' with the following message:\n\n' + res
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + res
else:
subject = 'Async failure: %s' % res['msg'].strip('\r\n').split('\n')[0]
body = 'An error occured for host ' + host + ' with the following message:\n\n' + \
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + \
res['msg'] + '\n\nA complete dump of the error:\n\n' + str(res)
mail(sender=sender, subject=subject, body=body)


@ -368,7 +368,7 @@ or environment variables (DO_CLIENT_ID and DO_API_KEY)'''
def load_droplet_variables_for_host(self):
'''Generate a JSON reponse to a --host call'''
'''Generate a JSON response to a --host call'''
host = self.to_safe(str(self.args.host))
if not host in self.index['host_to_droplet']:


@ -196,7 +196,7 @@ def setup():
write_stderr(e)
sys.exit(1)
# Enviroment Variables
# Environment Variables
env_base_url = os.environ.get('DOCKER_HOST')
env_version = os.environ.get('DOCKER_VERSION')
env_timeout = os.environ.get('DOCKER_TIMEOUT')


@ -112,7 +112,7 @@
#
#
#
# The Docker inventory plugin provides several enviroment variables that
# The Docker inventory plugin provides several environment variables that
# may be overridden here. This configuration file always takes precedence
# over environment variables.
#


@ -193,7 +193,7 @@ if __name__ == '__main__':
)
except Exception, e:
client = None
#print >> STDERR "Unable to login (only cache avilable): %s", str(e)
#print >> STDERR "Unable to login (only cache available): %s", str(e)
# acitually do the work
if hostname is None:


@ -164,7 +164,7 @@
that:
- "file11_result.uid == 1235"
- name: fail to create soft link to non existant file
- name: fail to create soft link to non existent file
file: src=/noneexistant dest={{output_dir}}/soft2.txt state=link force=no
register: file12_result
ignore_errors: true
@ -174,7 +174,7 @@
that:
- "file12_result.failed == true"
- name: force creation soft link to non existant
- name: force creation soft link to non existent
file: src=/noneexistant dest={{output_dir}}/soft2.txt state=link force=yes
register: file13_result


@ -64,7 +64,7 @@
stat: path={{ checkout_dir }}/.git/branches
register: branches
- name: assert presense of tags/trunk/branches
- name: assert presence of tags/trunk/branches
assert:
that:
- "tags.stat.isdir"


@ -62,7 +62,7 @@
- debug: var=tags
- debug: var=branches
- name: assert presense of tags/trunk/branches
- name: assert presence of tags/trunk/branches
assert:
that:
- "tags.stat.isreg"


@ -63,11 +63,11 @@
# now remove it to test uninstallation of a package we are sure is installed
- name: now uninstall so we can see that a change occured
- name: now uninstall so we can see that a change occurred
pip: name={{ pip_test_package }} state=absent
register: absent2
- name: assert a change occured on uninstallation
- name: assert a change occurred on uninstallation
assert:
that:
- "absent2.changed"


@ -77,7 +77,7 @@
stat: path={{ checkout_dir }}/branches
register: branches
- name: assert presense of tags/trunk/branches
- name: assert presence of tags/trunk/branches
assert:
that:
- "tags.stat.isdir"


@ -133,7 +133,7 @@ class TestSynchronize(unittest.TestCase):
def test_synchronize_action_vagrant(self):
""" Verify the action plugin accomodates the common
""" Verify the action plugin accommodates the common
scenarios for vagrant boxes. """
runner = FakeRunner()


@ -1,4 +1,4 @@
# order of groups, children, and vars is not signficant
# order of groups, children, and vars is not significant
# so this example mixes them up for maximum testing
[nc:children]