
Module deprecation: docs, scheme and tests (#34100)

Enforce module deprecation.
After a module has reached the end of its deprecation cycle we will replace it with a docs-only stub.

* Replace deprecated modules with docs-only stubs
* Using a module past its deprecation cycle gives a meaningful message (see examples below)
* Enforce documentation.deprecation dict via `schema.py`
* Update `ansible-doc` and web docs to display documentation.deprecation
* Document that structure in `dev_guide`
* Ensure that all modules starting with `_` have a `deprecation:` block
* Ensure `deprecation:` block is only used on modules that start with `_`
* `removed_in`: a string which represents the release in which this module needs **deleting**
* CHANGELOG.md and porting_guide_2.5.rst list removed modules as well as alternatives
* CHANGELOG.md links to porting guide index

To ensure that meaningful messages are given to users who try to use a module at the end of its deprecation cycle, we require the module to contain:
```python
if __name__ == '__main__':
    removed_module()
```
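As a rough sketch of the resulting docs-only stub (the module name, metadata values, and deprecation details here are hypothetical, not part of this commit), the file would retain only its documentation plus the `removed_module()` call:

```python
#!/usr/bin/python
# Hypothetical docs-only stub for a module past its deprecation cycle.
# The file itself would be renamed to start with an underscore,
# e.g. _old_example_module.py

ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['deprecated'],
                    'supported_by': 'community'}

DOCUMENTATION = '''
---
module: old_example_module
short_description: Example module kept only as a documentation stub
version_added: "2.0"
deprecated:
  removed_in: "2.9"
  why: Replaced by a newer module with broader functionality.
  alternative: Use M(new_example_module) instead.
'''

from ansible.module_utils.common.removed import removed_module

# Invoking the stub now fails with a meaningful "module has been removed" message
if __name__ == '__main__':
    removed_module()
```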
John R Barker 2018-01-30 12:23:52 +00:00 committed by GitHub
parent 7c83f006c0
commit a23c95023b
66 changed files with 241 additions and 4438 deletions


@@ -5,6 +5,8 @@ Ansible Changes By Release
 ## 2.5 "TBD" - ACTIVE DEVELOPMENT
+[Porting Guide](http://docs.ansible.com/ansible/devel/porting_guides.html)
 ### Major Changes
 * Removed the previously deprecated 'accelerate' mode and all associated keywords and code.
 * New simpler and more intuitive 'loop' keyword for task loops. The ``with_<lookup>`` loops will be deprecated in the near future and eventually removed.
@@ -13,7 +15,7 @@ Ansible Changes By Release
   currently on by default, in the future it will be off.
 * Add a configuration file to filter modules that a site administrator wants to exclude from being used.
-### Deprecations
+### Deprecations (to be removed in 2.9)
 * Previously deprecated 'hostfile' config settings have been 're-deprecated' as previously code did not warn about deprecated configuration settings.
 * Using Ansible provided Jinja tests as filters is deprecated and will be removed in Ansible 2.9
 * `stat` and `win_stat` have deprecated `get_md5` and the `md5` return value
@@ -36,6 +38,9 @@ Ansible Changes By Release
 * nxos_ip_interface module is deprecated in Ansible 2.5. Use nxos_l3_interface module instead.
 * nxos_portchannel module is deprecated in Ansible 2.5. Use nxos_linkagg module instead.
 * nxos_switchport module is deprecated in Ansible 2.5. Use nxos_l2_interface module instead.
+* ec2_ami_find has been deprecated, use ec2_ami_facts.
+  See [Porting Guide](http://docs.ansible.com/ansible/devel/porting_guides.html) for more information
 ### Minor Changes
 * added a few new magic vars corresponding to configuration/command line options:
@@ -57,14 +62,16 @@ Ansible Changes By Release
 * Task debugger functionality was moved into `StrategyBase`, and extended to allow explicit invocation from use of the `debugger` keyword.
   The `debug` strategy is still functional, and is now just a trigger to enable this functionality
+#### Deprecated Modules (to be removed in 2.9):
+* ec2_ami_find: replaced by ec2_ami_facts
 #### Removed Modules (previously deprecated):
-* accelerate
+* accelerate.
 * boundary_meter: There was no deprecation period for this but the hosted
   service it relied on has gone away so the module has been removed.
-  https://github.com/ansible/ansible/issues/29387
+  [#29387](https://github.com/ansible/ansible/issues/29387)
+* cl_ : cl_interface, cl_interface_policy, cl_bridge, cl_img_install, cl_ports, cl_license, cl_bond. Use `nclu` instead
+* docker, use docker_container and docker_image instead.
+* ec2_vpc.
+* ec2_ami_search, use ec2_ami_facts instead.
+* nxos_mtu, use nxos_system's `system_mtu` option. To specify an interfaces MTU use nxos_interface.
 ### New Plugins


@@ -201,9 +201,17 @@ The following fields can be used and are all required unless specified otherwise
 :author:
   Name of the module author in the form ``First Last (@GitHubID)``. Use a multi-line list if there is more than one author.
 :deprecated:
-  If this module is deprecated, detail when that happened, and what to use instead, e.g.
-  `Deprecated in 2.3. Use M(whatmoduletouseinstead) instead.`
-  Ensure `CHANGELOG.md` is updated to reflect this.
+  If a module is deprecated it must be:
+
+  * Mentioned in ``CHANGELOG``
+  * Referenced in the ``porting_guide_x.y.rst``
+  * File should be renamed to start with an ``_``
+  * ``ANSIBLE_METADATA`` must contain ``status: ['deprecated']``
+  * The following values must be set:
+
+    :removed_in: A `string`, such as ``"2.9"``, which represents the version of Ansible in which this module will be replaced with a docs-only module stub.
+    :why: Optional string used to detail why this has been removed.
+    :alternative: Inform users what they should do instead, e.g. ``Use M(whatmoduletouseinstead) instead.``
 :options:
   One per module argument:
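As a minimal illustration of the three values above (the alternative module name is hypothetical; the real examples later in this commit, such as ec2_ami_find, follow the same shape):

```python
DOCUMENTATION = '''
deprecated:
  removed_in: "2.9"
  why: Replaced by a newer module with broader functionality.
  alternative: Use M(new_example_module) instead.
'''
```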


@@ -63,8 +63,8 @@ Errors
 **1xx** **Locations**
 101 Interpreter line is not ``#!/usr/bin/python``
 102 Interpreter line is not ``#!powershell``
-103 Did not find a call to ``main()``
+103 Did not find a call to ``main()`` (or ``removed_module()`` in the case of deprecated & docs only modules)
-104 Call to ``main()`` not the last line
+104 Call to ``main()`` not the last line (or ``removed_module()`` in the case of deprecated & docs only modules)
 105 GPLv3 license header not found
 106 Import found before documentation variables. All imports must appear below
     ``DOCUMENTATION``/``EXAMPLES``/``RETURN``/``ANSIBLE_METADATA``


@@ -76,7 +76,17 @@ Modules removed
 The following modules no longer exist:
-* None
+* :ref:`nxos_mtu <nxos_mtu>` use :ref:`nxos_system <nxos_system>`'s ``system_mtu`` option or :ref:`nxos_interface <nxos_interface>` instead
+* :ref:`cl_interface_policy <cl_interface_policy>` use :ref:`nclu <nclu>` instead
+* :ref:`cl_bridge <cl_bridge>` use :ref:`nclu <nclu>` instead
+* :ref:`cl_img_install <cl_img_install>` use :ref:`nclu <nclu>` instead
+* :ref:`cl_ports <cl_ports>` use :ref:`nclu <nclu>` instead
+* :ref:`cl_license <cl_license>` use :ref:`nclu <nclu>` instead
+* :ref:`cl_interface <cl_interface>` use :ref:`nclu <nclu>` instead
+* :ref:`cl_bond <cl_bond>` use :ref:`nclu <nclu>` instead
+* :ref:`ec2_vpc <ec2_vpc>` use :ref:`ec2_vpc_net <ec2_vpc_net>` along with supporting modules :ref:`ec2_vpc_igw <ec2_vpc_igw>`, :ref:`ec2_vpc_route_table <ec2_vpc_route_table>`, :ref:`ec2_vpc_subnet <ec2_vpc_subnet>`, :ref:`ec2_vpc_dhcp_options <ec2_vpc_dhcp_options>`, :ref:`ec2_vpc_nat_gateway <ec2_vpc_nat_gateway>`, :ref:`ec2_vpc_nacl <ec2_vpc_nacl>` instead.
+* :ref:`ec2_ami_search <ec2_ami_search>` use :ref:`ec2_ami_facts <ec2_ami_facts>` instead
+* :ref:`docker <docker>` use :ref:`docker_container <docker_container>` and :ref:`docker_image <docker_image>` instead
 Deprecation notices
 -------------------


@@ -34,7 +34,7 @@ DEPRECATED
 ----------
 {# use unknown here? skip the fields? #}
-:In: version: @{ deprecated['version'] | default('') | string | convert_symbols_to_format }@
+:Removed in Ansible: version: @{ deprecated['removed_in'] | default('') | string | convert_symbols_to_format }@
 :Why: @{ deprecated['why'] | default('') | convert_symbols_to_format }@
 :Alternative: @{ deprecated['alternative'] | default('')| convert_symbols_to_format }@


@@ -468,7 +468,7 @@ class DocCLI(CLI):
 if 'deprecated' in doc and doc['deprecated'] is not None and len(doc['deprecated']) > 0:
     text.append("DEPRECATED: \n")
     if isinstance(doc['deprecated'], dict):
-        text.append("\tReason: %(why)s\n\tScheduled removal: Ansible %(version)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated'))
+        text.append("\tReason: %(why)s\n\tWill be removed in: Ansible %(removed_in)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated'))
     else:
         text.append("%s" % doc.pop('deprecated'))
     text.append("\n")
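For a module carrying the new-style metadata (for example ec2_ami_find below), the branch above would render output along these lines; the exact wrapping depends on the rest of the ``ansible-doc`` text layout:

```
DEPRECATED:
	Reason: Various AWS modules have been combined and replaced with M(ec2_ami_facts).
	Will be removed in: Ansible 2.9
	Alternatives: Use M(ec2_ami_facts) instead.
```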


@@ -16,7 +16,10 @@ DOCUMENTATION = r'''
 module: ec2_ami_find
 version_added: '2.0'
 short_description: Searches for AMIs to obtain the AMI ID and other information
-deprecated: Deprecated in 2.5. Use M(ec2_ami_facts) instead.
+deprecated:
+  removed_in: "2.9"
+  why: Various AWS modules have been combined and replaced with M(ec2_ami_facts).
+  alternative: Use M(ec2_ami_facts) instead.
 description:
 - Returns list of matching AMIs with AMI ID, along with other useful information
 - Can search AMIs with different owners


@@ -16,7 +16,10 @@ DOCUMENTATION = '''
 ---
 module: ec2_ami_search
 short_description: Retrieve AWS AMI information for a given operating system.
-deprecated: "Use M(ec2_ami_find) instead."
+deprecated:
+  removed_in: "2.2"
+  why: Various AWS modules have been combined and replaced with M(ec2_ami_facts).
+  alternative: Use M(ec2_ami_find) instead.
 version_added: "1.6"
 description:
 - Look up the most recent AMI on AWS for a given operating system.
@@ -84,124 +87,7 @@ EXAMPLES = '''
 key_name: mykey
 '''
-import csv
+from ansible.module_utils.common.removed import removed_module
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url
SUPPORTED_DISTROS = ['ubuntu']
AWS_REGIONS = ['ap-northeast-1',
'ap-southeast-1',
'ap-northeast-2',
'ap-southeast-2',
'ap-south-1',
'ca-central-1',
'eu-central-1',
'eu-west-1',
'eu-west-2',
'sa-east-1',
'us-east-1',
'us-east-2',
'us-west-1',
'us-west-2',
"us-gov-west-1"]
def get_url(module, url):
""" Get url and return response """
r, info = fetch_url(module, url)
if info['status'] != 200:
# Backwards compat
info['status_code'] = info['status']
module.fail_json(**info)
return r
def ubuntu(module):
""" Get the ami for ubuntu """
release = module.params['release']
stream = module.params['stream']
store = module.params['store']
arch = module.params['arch']
region = module.params['region']
virt = module.params['virt']
url = get_ubuntu_url(release, stream)
req = get_url(module, url)
reader = csv.reader(req, delimiter='\t')
try:
ami, aki, ari, tag, serial = lookup_ubuntu_ami(reader, release, stream,
store, arch, region, virt)
module.exit_json(changed=False, ami=ami, aki=aki, ari=ari, tag=tag,
serial=serial)
except KeyError:
module.fail_json(msg="No matching AMI found")
def lookup_ubuntu_ami(table, release, stream, store, arch, region, virt):
""" Look up the Ubuntu AMI that matches query given a table of AMIs
table: an iterable that returns a row of
(release, stream, tag, serial, region, ami, aki, ari, virt)
release: ubuntu release name
stream: 'server' or 'desktop'
store: 'ebs', 'ebs-io1', 'ebs-ssd' or 'instance-store'
arch: 'i386' or 'amd64'
region: EC2 region
virt: 'paravirtual' or 'hvm'
Returns (ami, aki, ari, tag, serial)"""
expected = (release, stream, store, arch, region, virt)
for row in table:
(actual_release, actual_stream, tag, serial,
actual_store, actual_arch, actual_region, ami, aki, ari,
actual_virt) = row
actual = (actual_release, actual_stream, actual_store, actual_arch,
actual_region, actual_virt)
if actual == expected:
# aki and ari are sometimes blank
if aki == '':
aki = None
if ari == '':
ari = None
return (ami, aki, ari, tag, serial)
raise KeyError()
def get_ubuntu_url(release, stream):
url = "https://cloud-images.ubuntu.com/query/%s/%s/released.current.txt"
return url % (release, stream)
def main():
arg_spec = dict(
distro=dict(required=True, choices=SUPPORTED_DISTROS),
release=dict(required=True),
stream=dict(required=False, default='server',
choices=['desktop', 'server']),
store=dict(required=False, default='ebs',
choices=['ebs', 'ebs-io1', 'ebs-ssd', 'instance-store']),
arch=dict(required=False, default='amd64',
choices=['i386', 'amd64']),
region=dict(required=False, default='us-east-1', choices=AWS_REGIONS),
virt=dict(required=False, default='paravirtual',
choices=['paravirtual', 'hvm']),
)
module = AnsibleModule(argument_spec=arg_spec)
distro = module.params['distro']
if distro == 'ubuntu':
ubuntu(module)
else:
module.fail_json(msg="Unsupported distro: %s" % distro)
 if __name__ == '__main__':
-    main()
+    removed_module()


@@ -15,7 +15,10 @@ DOCUMENTATION = '''
 ---
 module: ec2_remote_facts
 short_description: Gather facts about ec2 instances in AWS
-deprecated: Deprecated in 2.4. Use M(ec2_instance_facts) instead.
+deprecated:
+  removed_in: "2.8"
+  why: Replaced with boto3 version.
+  alternative: Use M(ec2_instance_facts) instead.
 description:
 - Gather facts about ec2 instances in AWS
 version_added: "2.0"


@@ -18,10 +18,11 @@ short_description: configure AWS virtual private clouds
 description:
 - Create or terminates AWS virtual private clouds. This module has a dependency on python-boto.
 version_added: "1.4"
-deprecated: >-
-  Deprecated in 2.3. Use M(ec2_vpc_net) along with supporting modules including
-  M(ec2_vpc_igw), M(ec2_vpc_route_table), M(ec2_vpc_subnet), M(ec2_vpc_dhcp_options),
-  M(ec2_vpc_nat_gateway), M(ec2_vpc_nacl).
+deprecated:
+  removed_in: "2.5"
+  why: Replaced by dedicated modules.
+  alternative: Use M(ec2_vpc_net) along with supporting modules including M(ec2_vpc_igw), M(ec2_vpc_route_table), M(ec2_vpc_subnet),
+    M(ec2_vpc_dhcp_options), M(ec2_vpc_nat_gateway), M(ec2_vpc_nacl).
 options:
 cidr_block:
 description:
@@ -159,605 +160,7 @@ EXAMPLES = '''
 # the delete will fail until those dependencies are removed.
 '''
-import time
+from ansible.module_utils.common.removed import removed_module
try:
import boto
import boto.ec2
import boto.vpc
from boto.exception import EC2ResponseError
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import connect_to_aws, ec2_argument_spec, get_aws_connection_info
def get_vpc_info(vpc):
"""
Retrieves vpc information from an instance
ID and returns it as a dictionary
"""
return({
'id': vpc.id,
'cidr_block': vpc.cidr_block,
'dhcp_options_id': vpc.dhcp_options_id,
'region': vpc.region.name,
'state': vpc.state,
})
def find_vpc(module, vpc_conn, vpc_id=None, cidr=None):
"""
Finds a VPC that matches a specific id or cidr + tags
module : AnsibleModule object
vpc_conn: authenticated VPCConnection connection object
Returns:
A VPC object that matches either an ID or CIDR and one or more tag values
"""
if vpc_id is None and cidr is None:
module.fail_json(
msg='You must specify either a vpc_id or a cidr block + list of unique tags, aborting'
)
found_vpcs = []
resource_tags = module.params.get('resource_tags')
# Check for existing VPC by cidr_block or id
if vpc_id is not None:
found_vpcs = vpc_conn.get_all_vpcs(None, {'vpc-id': vpc_id, 'state': 'available', })
else:
previous_vpcs = vpc_conn.get_all_vpcs(None, {'cidr': cidr, 'state': 'available'})
for vpc in previous_vpcs:
# Get all tags for each of the found VPCs
vpc_tags = dict((t.name, t.value) for t in vpc_conn.get_all_tags(filters={'resource-id': vpc.id}))
# If the supplied list of ID Tags match a subset of the VPC Tags, we found our VPC
if resource_tags and set(resource_tags.items()).issubset(set(vpc_tags.items())):
found_vpcs.append(vpc)
found_vpc = None
if len(found_vpcs) == 1:
found_vpc = found_vpcs[0]
if len(found_vpcs) > 1:
module.fail_json(msg='Found more than one vpc based on the supplied criteria, aborting')
return (found_vpc)
def routes_match(rt_list=None, rt=None, igw=None):
"""
Check if the route table has all routes as in given list
rt_list : A list if routes provided in the module
rt : The Remote route table object
igw : The internet gateway object for this vpc
Returns:
True when there provided routes and remote routes are the same.
False when provided routes and remote routes are different.
"""
local_routes = []
remote_routes = []
for route in rt_list:
route_kwargs = {
'gateway_id': None,
'instance_id': None,
'interface_id': None,
'vpc_peering_connection_id': None,
'state': 'active'
}
if route['gw'] == 'igw':
route_kwargs['gateway_id'] = igw.id
elif route['gw'].startswith('i-'):
route_kwargs['instance_id'] = route['gw']
elif route['gw'].startswith('eni-'):
route_kwargs['interface_id'] = route['gw']
elif route['gw'].startswith('pcx-'):
route_kwargs['vpc_peering_connection_id'] = route['gw']
else:
route_kwargs['gateway_id'] = route['gw']
route_kwargs['destination_cidr_block'] = route['dest']
local_routes.append(route_kwargs)
for j in rt.routes:
remote_routes.append(j.__dict__)
match = []
for i in local_routes:
change = "false"
for j in remote_routes:
if set(i.items()).issubset(set(j.items())):
change = "true"
match.append(change)
if 'false' in match:
return False
else:
return True
def rtb_changed(route_tables=None, vpc_conn=None, module=None, vpc=None, igw=None):
"""
Checks if the remote routes match the local routes.
route_tables : Route_tables parameter in the module
vpc_conn : The VPC connection object
module : The module object
vpc : The vpc object for this route table
igw : The internet gateway object for this vpc
Returns:
True when there is difference between the provided routes and remote routes and if subnet associations are different.
False when both routes and subnet associations matched.
"""
# We add a one for the main table
rtb_len = len(route_tables) + 1
remote_rtb_len = len(vpc_conn.get_all_route_tables(filters={'vpc_id': vpc.id}))
if remote_rtb_len != rtb_len:
return True
for rt in route_tables:
rt_id = None
for sn in rt['subnets']:
rsn = vpc_conn.get_all_subnets(filters={'cidr': sn, 'vpc_id': vpc.id})
if len(rsn) != 1:
module.fail_json(
msg='The subnet {0} to associate with route_table {1} '
'does not exist, aborting'.format(sn, rt)
)
nrt = vpc_conn.get_all_route_tables(filters={'vpc_id': vpc.id, 'association.subnet-id': rsn[0].id})
if not nrt:
return True
else:
nrt = nrt[0]
if not rt_id:
rt_id = nrt.id
if not routes_match(rt['routes'], nrt, igw):
return True
continue
else:
if rt_id == nrt.id:
continue
else:
return True
return True
return False
def create_vpc(module, vpc_conn):
"""
Creates a new or modifies an existing VPC.
module : AnsibleModule object
vpc_conn: authenticated VPCConnection connection object
Returns:
A dictionary with information
about the VPC and subnets that were launched
"""
id = module.params.get('vpc_id')
cidr_block = module.params.get('cidr_block')
instance_tenancy = module.params.get('instance_tenancy')
dns_support = module.params.get('dns_support')
dns_hostnames = module.params.get('dns_hostnames')
subnets = module.params.get('subnets')
internet_gateway = module.params.get('internet_gateway')
route_tables = module.params.get('route_tables')
vpc_spec_tags = module.params.get('resource_tags')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
changed = False
# Check for existing VPC by cidr_block + tags or id
previous_vpc = find_vpc(module, vpc_conn, id, cidr_block)
if previous_vpc is not None:
changed = False
vpc = previous_vpc
else:
changed = True
try:
vpc = vpc_conn.create_vpc(cidr_block, instance_tenancy)
# wait here until the vpc is available
pending = True
wait_timeout = time.time() + wait_timeout
while wait and wait_timeout > time.time() and pending:
try:
pvpc = vpc_conn.get_all_vpcs(vpc.id)
if hasattr(pvpc, 'state'):
if pvpc.state == "available":
pending = False
elif hasattr(pvpc[0], 'state'):
if pvpc[0].state == "available":
pending = False
# sometimes vpc_conn.create_vpc() will return a vpc that can't be found yet by vpc_conn.get_all_vpcs()
# when that happens, just wait a bit longer and try again
except boto.exception.BotoServerError as e:
if e.error_code != 'InvalidVpcID.NotFound':
raise
if pending:
time.sleep(5)
if wait and wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="wait for vpc availability timeout on %s" % time.asctime())
except boto.exception.BotoServerError as e:
module.fail_json(msg="%s: %s" % (e.error_code, e.error_message))
# Done with base VPC, now change to attributes and features.
# Add resource tags
vpc_tags = dict((t.name, t.value) for t in vpc_conn.get_all_tags(filters={'resource-id': vpc.id}))
if not set(vpc_spec_tags.items()).issubset(set(vpc_tags.items())):
new_tags = {}
for (key, value) in set(vpc_spec_tags.items()):
if (key, value) not in set(vpc_tags.items()):
new_tags[key] = value
if new_tags:
vpc_conn.create_tags(vpc.id, new_tags)
# boto doesn't appear to have a way to determine the existing
# value of the dns attributes, so we just set them.
# It also must be done one at a time.
vpc_conn.modify_vpc_attribute(vpc.id, enable_dns_support=dns_support)
vpc_conn.modify_vpc_attribute(vpc.id, enable_dns_hostnames=dns_hostnames)
# Process all subnet properties
if subnets is not None:
if not isinstance(subnets, list):
module.fail_json(msg='subnets needs to be a list of cidr blocks')
current_subnets = vpc_conn.get_all_subnets(filters={'vpc_id': vpc.id})
# First add all new subnets
for subnet in subnets:
add_subnet = True
subnet_tags_current = True
new_subnet_tags = subnet.get('resource_tags', {})
subnet_tags_delete = []
for csn in current_subnets:
if subnet['cidr'] == csn.cidr_block:
add_subnet = False
# Check if AWS subnet tags are in playbook subnet tags
existing_tags_subset_of_new_tags = (set(csn.tags.items()).issubset(set(new_subnet_tags.items())))
# Check if subnet tags in playbook are in AWS subnet tags
new_tags_subset_of_existing_tags = (set(new_subnet_tags.items()).issubset(set(csn.tags.items())))
if existing_tags_subset_of_new_tags is False:
try:
for item in csn.tags.items():
if item not in new_subnet_tags.items():
subnet_tags_delete.append(item)
subnet_tags_delete = [key[0] for key in subnet_tags_delete]
delete_subnet_tag = vpc_conn.delete_tags(csn.id, subnet_tags_delete)
changed = True
except EC2ResponseError as e:
module.fail_json(msg='Unable to delete resource tag, error {0}'.format(e))
# Add new subnet tags if not current
if new_tags_subset_of_existing_tags is False:
try:
changed = True
create_subnet_tag = vpc_conn.create_tags(csn.id, new_subnet_tags)
except EC2ResponseError as e:
module.fail_json(msg='Unable to create resource tag, error: {0}'.format(e))
if add_subnet:
try:
new_subnet = vpc_conn.create_subnet(vpc.id, subnet['cidr'], subnet.get('az', None))
new_subnet_tags = subnet.get('resource_tags', {})
if new_subnet_tags:
# Sometimes AWS takes its time to create a subnet and so using new subnets's id
# to create tags results in exception.
# boto doesn't seem to refresh 'state' of the newly created subnet, i.e.: it's always 'pending'
# so i resorted to polling vpc_conn.get_all_subnets with the id of the newly added subnet
while len(vpc_conn.get_all_subnets(filters={'subnet-id': new_subnet.id})) == 0:
time.sleep(0.1)
vpc_conn.create_tags(new_subnet.id, new_subnet_tags)
changed = True
except EC2ResponseError as e:
module.fail_json(msg='Unable to create subnet {0}, error: {1}'.format(subnet['cidr'], e))
# Now delete all absent subnets
for csubnet in current_subnets:
delete_subnet = True
for subnet in subnets:
if csubnet.cidr_block == subnet['cidr']:
delete_subnet = False
if delete_subnet:
try:
vpc_conn.delete_subnet(csubnet.id)
changed = True
except EC2ResponseError as e:
module.fail_json(msg='Unable to delete subnet {0}, error: {1}'.format(csubnet.cidr_block, e))
# Handle Internet gateway (create/delete igw)
igw = None
igw_id = None
igws = vpc_conn.get_all_internet_gateways(filters={'attachment.vpc-id': vpc.id})
if len(igws) > 1:
module.fail_json(msg='EC2 returned more than one Internet Gateway for id %s, aborting' % vpc.id)
if internet_gateway:
if len(igws) != 1:
try:
igw = vpc_conn.create_internet_gateway()
vpc_conn.attach_internet_gateway(igw.id, vpc.id)
changed = True
except EC2ResponseError as e:
module.fail_json(msg='Unable to create Internet Gateway, error: {0}'.format(e))
else:
# Set igw variable to the current igw instance for use in route tables.
igw = igws[0]
else:
if len(igws) > 0:
try:
vpc_conn.detach_internet_gateway(igws[0].id, vpc.id)
vpc_conn.delete_internet_gateway(igws[0].id)
changed = True
except EC2ResponseError as e:
module.fail_json(msg='Unable to delete Internet Gateway, error: {0}'.format(e))
if igw is not None:
igw_id = igw.id
# Handle route tables - this may be worth splitting into a
# different module but should work fine here. The strategy to stay
# idempotent is to basically build all the route tables as
# defined, track the route table ids, and then run through the
# remote list of route tables and delete any that we didn't
# create. This shouldn't interrupt traffic in theory, but is the
# only way to really work with route tables over time that I can
# think of without using painful aws ids. Hopefully boto will add
# the replace-route-table API to make this smoother and
# allow control of the 'main' routing table.
if route_tables is not None:
rtb_needs_change = rtb_changed(route_tables, vpc_conn, module, vpc, igw)
if route_tables is not None and rtb_needs_change:
if not isinstance(route_tables, list):
module.fail_json(msg='route tables need to be a list of dictionaries')
# Work through each route table and update/create to match dictionary array
all_route_tables = []
for rt in route_tables:
try:
new_rt = vpc_conn.create_route_table(vpc.id)
new_rt_tags = rt.get('resource_tags', None)
if new_rt_tags:
vpc_conn.create_tags(new_rt.id, new_rt_tags)
for route in rt['routes']:
route_kwargs = {}
if route['gw'] == 'igw':
if not internet_gateway:
module.fail_json(
msg='You asked for an Internet Gateway '
'(igw) route, but you have no Internet Gateway'
)
route_kwargs['gateway_id'] = igw.id
elif route['gw'].startswith('i-'):
route_kwargs['instance_id'] = route['gw']
elif route['gw'].startswith('eni-'):
route_kwargs['interface_id'] = route['gw']
elif route['gw'].startswith('pcx-'):
route_kwargs['vpc_peering_connection_id'] = route['gw']
else:
route_kwargs['gateway_id'] = route['gw']
vpc_conn.create_route(new_rt.id, route['dest'], **route_kwargs)
# Associate with subnets
for sn in rt['subnets']:
rsn = vpc_conn.get_all_subnets(filters={'cidr': sn, 'vpc_id': vpc.id})
if len(rsn) != 1:
module.fail_json(
msg='The subnet {0} to associate with route_table {1} '
'does not exist, aborting'.format(sn, rt)
)
rsn = rsn[0]
# Disassociate then associate since we don't have replace
old_rt = vpc_conn.get_all_route_tables(
filters={'association.subnet_id': rsn.id, 'vpc_id': vpc.id}
)
old_rt = [x for x in old_rt if x.id is not None]
if len(old_rt) == 1:
old_rt = old_rt[0]
association_id = None
for a in old_rt.associations:
if a.subnet_id == rsn.id:
association_id = a.id
vpc_conn.disassociate_route_table(association_id)
vpc_conn.associate_route_table(new_rt.id, rsn.id)
all_route_tables.append(new_rt)
changed = True
except EC2ResponseError as e:
module.fail_json(
msg='Unable to create and associate route table {0}, error: '
'{1}'.format(rt, e)
)
# Now that we are good to go on our new route tables, delete the
# old ones except the 'main' route table as boto can't set the main
# table yet.
all_rts = vpc_conn.get_all_route_tables(filters={'vpc-id': vpc.id})
for rt in all_rts:
if rt.id is None:
continue
delete_rt = True
for newrt in all_route_tables:
if newrt.id == rt.id:
delete_rt = False
break
if delete_rt:
rta = rt.associations
is_main = False
for a in rta:
if a.main:
is_main = True
break
try:
if not is_main:
vpc_conn.delete_route_table(rt.id)
changed = True
except EC2ResponseError as e:
module.fail_json(msg='Unable to delete old route table {0}, error: {1}'.format(rt.id, e))
vpc_dict = get_vpc_info(vpc)
created_vpc_id = vpc.id
returned_subnets = []
current_subnets = vpc_conn.get_all_subnets(filters={'vpc_id': vpc.id})
for sn in current_subnets:
returned_subnets.append({
'resource_tags': dict((t.name, t.value) for t in vpc_conn.get_all_tags(filters={'resource-id': sn.id})),
'cidr': sn.cidr_block,
'az': sn.availability_zone,
'id': sn.id,
})
if subnets is not None:
# Sort subnets by the order they were listed in the play
order = {}
for idx, val in enumerate(subnets):
order[val['cidr']] = idx
# Number of subnets in the play
subnets_in_play = len(subnets)
returned_subnets.sort(key=lambda x: order.get(x['cidr'], subnets_in_play))
return (vpc_dict, created_vpc_id, returned_subnets, igw_id, changed)
def terminate_vpc(module, vpc_conn, vpc_id=None, cidr=None):
"""
Terminates a VPC
module: Ansible module object
vpc_conn: authenticated VPCConnection connection object
vpc_id: a vpc id to terminate
cidr: The cidr block of the VPC - can be used in lieu of an ID
Returns a dictionary of VPC information
about the VPC terminated.
If the VPC to be terminated is available
"changed" will be set to True.
"""
vpc_dict = {}
terminated_vpc_id = ''
changed = False
vpc = find_vpc(module, vpc_conn, vpc_id, cidr)
if vpc is not None:
if vpc.state == 'available':
terminated_vpc_id = vpc.id
vpc_dict = get_vpc_info(vpc)
try:
subnets = vpc_conn.get_all_subnets(filters={'vpc_id': vpc.id})
for sn in subnets:
vpc_conn.delete_subnet(sn.id)
igws = vpc_conn.get_all_internet_gateways(
filters={'attachment.vpc-id': vpc.id}
)
for igw in igws:
vpc_conn.detach_internet_gateway(igw.id, vpc.id)
vpc_conn.delete_internet_gateway(igw.id)
rts = vpc_conn.get_all_route_tables(filters={'vpc_id': vpc.id})
for rt in rts:
rta = rt.associations
is_main = False
for a in rta:
if a.main:
is_main = True
if not is_main:
vpc_conn.delete_route_table(rt.id)
vpc_conn.delete_vpc(vpc.id)
except EC2ResponseError as e:
module.fail_json(
msg='Unable to delete VPC {0}, error: {1}'.format(vpc.id, e)
)
changed = True
vpc_dict['state'] = "terminated"
return (changed, vpc_dict, terminated_vpc_id)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
cidr_block=dict(),
instance_tenancy=dict(choices=['default', 'dedicated'], default='default'),
wait=dict(type='bool', default=False),
wait_timeout=dict(default=300),
dns_support=dict(type='bool', default=True),
dns_hostnames=dict(type='bool', default=True),
subnets=dict(type='list'),
vpc_id=dict(),
internet_gateway=dict(type='bool', default=False),
resource_tags=dict(type='dict', required=True),
route_tables=dict(type='list'),
state=dict(choices=['present', 'absent'], default='present'),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
)
if not HAS_BOTO:
module.fail_json(msg='boto required for this module')
state = module.params.get('state')
region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module)
# If we have a region specified, connect to its endpoint.
if region:
try:
vpc_conn = connect_to_aws(boto.vpc, region, **aws_connect_kwargs)
except boto.exception.NoAuthHandlerFound as e:
module.fail_json(msg=str(e))
else:
module.fail_json(msg="region must be specified")
igw_id = None
if module.params.get('state') == 'absent':
vpc_id = module.params.get('vpc_id')
cidr = module.params.get('cidr_block')
(changed, vpc_dict, new_vpc_id) = terminate_vpc(module, vpc_conn, vpc_id, cidr)
subnets_changed = None
elif module.params.get('state') == 'present':
# Changed is always set to true when provisioning a new VPC
(vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn)
module.exit_json(changed=changed, vpc_id=new_vpc_id, vpc=vpc_dict, igw_id=igw_id, subnets=subnets_changed)
 if __name__ == '__main__':
-    main()
+    removed_module()


@@ -19,7 +19,10 @@ short_description: create or terminate a virtual machine in azure
 description:
 - Creates or terminates azure instances. When created optionally waits for it to be 'running'.
 version_added: "1.7"
-deprecated: "Use M(azure_rm_virtualmachine) instead."
+deprecated:
+  removed_in: "2.8"
+  why: Replaced with various dedicated Azure modules.
+  alternative: M(azure_rm_virtualmachine)
 options:
 name:
 description:


@@ -20,7 +20,10 @@ description:
 version_added: "2.3"
 author:
 - René Moser (@resmo)
-deprecated: Deprecated in 2.4. Use M(cs_instance_nic_secondaryip) instead.
+deprecated:
+  removed_in: "2.8"
+  why: New module created.
+  alternative: Use M(cs_instance_nic_secondaryip) instead.
 options:
 vm:
 description:

File diff suppressed because it is too large


@@ -14,7 +14,10 @@ DOCUMENTATION = '''
 ---
 module: kubernetes
 version_added: "2.1"
-deprecated: In 2.5 use M(k8s_raw) instead.
+deprecated:
+  removed_in: "2.9"
+  why: This module used the oc command line tool, whereas M(k8s_raw) goes over the REST API.
+  alternative: Use M(k8s_raw) instead.
 short_description: Manage Kubernetes resources
 description:
 - This module can manage Kubernetes resources on an existing cluster using


@@ -17,7 +17,10 @@ ANSIBLE_METADATA = {
 DOCUMENTATION = """
 author:
 - "Kenneth D. Evensen (@kevensen)"
-deprecated: In 2.5 use M(openshift_raw) instead.
+deprecated:
+  removed_in: "2.9"
+  why: This module used the oc command line tool, whereas M(openshift_raw) goes over the REST API.
+  alternative: Use M(openshift_raw) instead.
 description:
 - This module allows management of resources in an OpenShift cluster. The
   inventory host can be any host with network connectivity to the OpenShift


@@ -18,7 +18,10 @@ version_added: "1.1"
 short_description: Manages Citrix NetScaler entities
 description:
 - Manages Citrix NetScaler server and service entities.
-deprecated: In 2.4 use M(netscaler_service) and M(netscaler_server) instead.
+deprecated:
+  removed_in: "2.8"
+  why: Replaced with Citrix maintained version.
+  alternative: Use M(netscaler_service) and M(netscaler_server) instead.
 options:
 nsc_host:
 description:


@@ -19,7 +19,10 @@ module: cl_bond
 version_added: "2.1"
 author: "Cumulus Networks (@CumulusNetworks)"
 short_description: Configures a bond port on Cumulus Linux
-deprecated: Deprecated in 2.3. Use M(nclu) instead.
+deprecated:
+  removed_in: "2.5"
+  why: The M(nclu) module is designed to be easier to use for individuals who are new to Cumulus Linux by exposing the NCLU interface in an automatable way.
+  alternative: Use M(nclu) instead.
 description:
 - Configures a bond interface on Cumulus Linux To configure a bridge port
   use the cl_bridge module. To configure any other type of interface use the
@@ -209,281 +212,7 @@ msg:
 sample: "interface bond0 config updated"
 '''
-import os
+from ansible.module_utils.common.removed import removed_module
import re
import tempfile
from ansible.module_utils.basic import AnsibleModule
# handy helper for calling system calls.
# calls AnsibleModule.run_command and prints a more appropriate message
# exec_path - path to file to execute, with all its arguments.
# E.g "/sbin/ip -o link show"
# failure_msg - what message to print on failure
def run_cmd(module, exec_path):
(_rc, out, _err) = module.run_command(exec_path)
if _rc > 0:
if re.search('cannot find interface', _err):
return '[{}]'
failure_msg = "Failed; %s Error: %s" % (exec_path, _err)
module.fail_json(msg=failure_msg)
else:
return out
def current_iface_config(module):
# due to a bug in ifquery, have to check for presence of interface file
# and not rely solely on ifquery. when bug is fixed, this check can be
# removed
_ifacename = module.params.get('name')
_int_dir = module.params.get('location')
module.custom_current_config = {}
if os.path.exists(_int_dir + '/' + _ifacename):
_cmd = "/sbin/ifquery -o json %s" % (module.params.get('name'))
module.custom_current_config = module.from_json(
run_cmd(module, _cmd))[0]
def build_address(module):
# if addr_method == 'dhcp', don't add IP address
if module.params.get('addr_method') == 'dhcp':
return
_ipv4 = module.params.get('ipv4')
_ipv6 = module.params.get('ipv6')
_addresslist = []
if _ipv4 and len(_ipv4) > 0:
_addresslist += _ipv4
if _ipv6 and len(_ipv6) > 0:
_addresslist += _ipv6
if len(_addresslist) > 0:
module.custom_desired_config['config']['address'] = ' '.join(
_addresslist)
def build_vids(module):
_vids = module.params.get('vids')
if _vids and len(_vids) > 0:
module.custom_desired_config['config']['bridge-vids'] = ' '.join(_vids)
def build_pvid(module):
_pvid = module.params.get('pvid')
if _pvid:
module.custom_desired_config['config']['bridge-pvid'] = str(_pvid)
def conv_bool_to_str(_value):
if isinstance(_value, bool):
if _value is True:
return 'yes'
else:
return 'no'
return _value
def conv_array_to_str(_value):
if isinstance(_value, list):
return ' '.join(_value)
return _value
def build_generic_attr(module, _attr):
_value = module.params.get(_attr)
_value = conv_bool_to_str(_value)
_value = conv_array_to_str(_value)
if _value:
module.custom_desired_config['config'][
re.sub('_', '-', _attr)] = str(_value)
def build_alias_name(module):
alias_name = module.params.get('alias_name')
if alias_name:
module.custom_desired_config['config']['alias'] = alias_name
def build_addr_method(module):
_addr_method = module.params.get('addr_method')
if _addr_method:
module.custom_desired_config['addr_family'] = 'inet'
module.custom_desired_config['addr_method'] = _addr_method
def build_vrr(module):
_virtual_ip = module.params.get('virtual_ip')
_virtual_mac = module.params.get('virtual_mac')
vrr_config = []
if _virtual_ip:
vrr_config.append(_virtual_mac)
vrr_config.append(_virtual_ip)
module.custom_desired_config.get('config')['address-virtual'] = \
' '.join(vrr_config)
def add_glob_to_array(_bondmems):
"""
goes through each bond member if it sees a dash add glob
before it
"""
result = []
if isinstance(_bondmems, list):
for _entry in _bondmems:
if re.search('-', _entry):
_entry = 'glob ' + _entry
result.append(_entry)
return ' '.join(result)
return _bondmems
def build_bond_attr(module, _attr):
_value = module.params.get(_attr)
_value = conv_bool_to_str(_value)
_value = add_glob_to_array(_value)
if _value:
module.custom_desired_config['config'][
'bond-' + re.sub('_', '-', _attr)] = str(_value)
def build_desired_iface_config(module):
"""
take parameters defined and build ifupdown2 compatible hash
"""
module.custom_desired_config = {
'addr_family': None,
'auto': True,
'config': {},
'name': module.params.get('name')
}
for _attr in ['slaves', 'mode', 'xmit_hash_policy',
'miimon', 'lacp_rate', 'lacp_bypass_allow',
'lacp_bypass_period', 'lacp_bypass_all_active',
'min_links']:
build_bond_attr(module, _attr)
build_addr_method(module)
build_address(module)
build_vids(module)
build_pvid(module)
build_alias_name(module)
build_vrr(module)
for _attr in ['mtu', 'mstpctl_portnetwork', 'mstpctl_portadminedge'
'mstpctl_bpduguard', 'clag_id',
'lacp_bypass_priority']:
build_generic_attr(module, _attr)
def config_dict_changed(module):
"""
return true if 'config' dict in hash is different
between desired and current config
"""
current_config = module.custom_current_config.get('config')
desired_config = module.custom_desired_config.get('config')
return current_config != desired_config
def config_changed(module):
"""
returns true if config has changed
"""
if config_dict_changed(module):
return True
# check if addr_method is changed
return module.custom_desired_config.get('addr_method') != \
module.custom_current_config.get('addr_method')
def replace_config(module):
temp = tempfile.NamedTemporaryFile()
desired_config = module.custom_desired_config
# by default it will be something like /etc/network/interfaces.d/swp1
final_location = module.params.get('location') + '/' + \
module.params.get('name')
final_text = ''
_fh = open(final_location, 'w')
# make sure to put hash in array or else ifquery will fail
# write to temp file
try:
temp.write(module.jsonify([desired_config]))
# need to seek to 0 so that data is written to tempfile.
temp.seek(0)
_cmd = "/sbin/ifquery -a -i %s -t json" % (temp.name)
final_text = run_cmd(module, _cmd)
finally:
temp.close()
try:
_fh.write(final_text)
finally:
_fh.close()
def main():
module = AnsibleModule(
argument_spec=dict(
slaves=dict(required=True, type='list'),
name=dict(required=True, type='str'),
ipv4=dict(type='list'),
ipv6=dict(type='list'),
alias_name=dict(type='str'),
addr_method=dict(type='str',
choices=['', 'dhcp']),
mtu=dict(type='str'),
virtual_ip=dict(type='str'),
virtual_mac=dict(type='str'),
vids=dict(type='list'),
pvid=dict(type='str'),
mstpctl_portnetwork=dict(type='bool'),
mstpctl_portadminedge=dict(type='bool'),
mstpctl_bpduguard=dict(type='bool'),
clag_id=dict(type='str'),
min_links=dict(type='int', default=1),
mode=dict(type='str', default='802.3ad'),
miimon=dict(type='int', default=100),
xmit_hash_policy=dict(type='str', default='layer3+4'),
lacp_rate=dict(type='int', default=1),
lacp_bypass_allow=dict(type='int', choices=[0, 1]),
lacp_bypass_all_active=dict(type='int', choices=[0, 1]),
lacp_bypass_priority=dict(type='list'),
lacp_bypass_period=dict(type='int'),
location=dict(type='str',
default='/etc/network/interfaces.d')
),
mutually_exclusive=[['lacp_bypass_priority', 'lacp_bypass_all_active']],
required_together=[['virtual_ip', 'virtual_mac']]
)
# if using the jinja default filter, this resolves to
# create an list with an empty string ['']. The following
# checks all lists and removes it, so that functions expecting
# an empty list, get this result. May upstream this fix into
# the AnsibleModule code to have it check for this.
for k, _param in module.params.items():
if isinstance(_param, list):
module.params[k] = [x for x in _param if x]
_location = module.params.get('location')
if not os.path.exists(_location):
_msg = "%s does not exist." % (_location)
module.fail_json(msg=_msg)
return # for testing purposes only
ifacename = module.params.get('name')
_changed = False
_msg = "interface %s config not changed" % (ifacename)
current_iface_config(module)
build_desired_iface_config(module)
if config_changed(module):
replace_config(module)
_msg = "interface %s config updated" % (ifacename)
_changed = True
module.exit_json(changed=_changed, msg=_msg)
 if __name__ == '__main__':
-    main()
+    removed_module()


@@ -19,7 +19,10 @@ module: cl_bridge
 version_added: "2.1"
 author: "Cumulus Networks (@CumulusNetworks)"
 short_description: Configures a bridge port on Cumulus Linux
-deprecated: Deprecated in 2.3. Use M(nclu) instead.
+deprecated:
+  removed_in: "2.5"
+  why: The M(nclu) module is designed to be easier to use for individuals who are new to Cumulus Linux by exposing the NCLU interface in an automatable way.
+  alternative: Use M(nclu) instead.
 description:
 - Configures a bridge interface on Cumulus Linux To configure a bond port
   use the cl_bond module. To configure any other type of interface use the
@@ -157,258 +160,7 @@ msg:
 sample: "interface bond0 config updated"
 '''
-import os
+from ansible.module_utils.common.removed import removed_module
import re
import tempfile
from ansible.module_utils.basic import AnsibleModule
# handy helper for calling system calls.
# calls AnsibleModule.run_command and prints a more appropriate message
# exec_path - path to file to execute, with all its arguments.
# E.g "/sbin/ip -o link show"
# failure_msg - what message to print on failure
def run_cmd(module, exec_path):
(_rc, out, _err) = module.run_command(exec_path)
if _rc > 0:
if re.search('cannot find interface', _err):
return '[{}]'
failure_msg = "Failed; %s Error: %s" % (exec_path, _err)
module.fail_json(msg=failure_msg)
else:
return out
def current_iface_config(module):
# due to a bug in ifquery, have to check for presence of interface file
# and not rely solely on ifquery. when bug is fixed, this check can be
# removed
_ifacename = module.params.get('name')
_int_dir = module.params.get('location')
module.custom_current_config = {}
if os.path.exists(_int_dir + '/' + _ifacename):
_cmd = "/sbin/ifquery -o json %s" % (module.params.get('name'))
module.custom_current_config = module.from_json(
run_cmd(module, _cmd))[0]
def build_address(module):
# if addr_method == 'dhcp', don't add IP address
if module.params.get('addr_method') == 'dhcp':
return
_ipv4 = module.params.get('ipv4')
_ipv6 = module.params.get('ipv6')
_addresslist = []
if _ipv4 and len(_ipv4) > 0:
_addresslist += _ipv4
if _ipv6 and len(_ipv6) > 0:
_addresslist += _ipv6
if len(_addresslist) > 0:
module.custom_desired_config['config']['address'] = ' '.join(
_addresslist)
def build_vids(module):
_vids = module.params.get('vids')
if _vids and len(_vids) > 0:
module.custom_desired_config['config']['bridge-vids'] = ' '.join(_vids)
def build_pvid(module):
_pvid = module.params.get('pvid')
if _pvid:
module.custom_desired_config['config']['bridge-pvid'] = str(_pvid)
def conv_bool_to_str(_value):
if isinstance(_value, bool):
if _value is True:
return 'yes'
else:
return 'no'
return _value
def build_generic_attr(module, _attr):
_value = module.params.get(_attr)
_value = conv_bool_to_str(_value)
if _value:
module.custom_desired_config['config'][
re.sub('_', '-', _attr)] = str(_value)
def build_alias_name(module):
alias_name = module.params.get('alias_name')
if alias_name:
module.custom_desired_config['config']['alias'] = alias_name
def build_addr_method(module):
_addr_method = module.params.get('addr_method')
if _addr_method:
module.custom_desired_config['addr_family'] = 'inet'
module.custom_desired_config['addr_method'] = _addr_method
def build_vrr(module):
_virtual_ip = module.params.get('virtual_ip')
_virtual_mac = module.params.get('virtual_mac')
vrr_config = []
if _virtual_ip:
vrr_config.append(_virtual_mac)
vrr_config.append(_virtual_ip)
module.custom_desired_config.get('config')['address-virtual'] = \
' '.join(vrr_config)
def add_glob_to_array(_bridgemems):
"""
goes through each bridge member if it sees a dash add glob
before it
"""
result = []
if isinstance(_bridgemems, list):
for _entry in _bridgemems:
if re.search('-', _entry):
_entry = 'glob ' + _entry
result.append(_entry)
return ' '.join(result)
return _bridgemems
def build_bridge_attr(module, _attr):
_value = module.params.get(_attr)
_value = conv_bool_to_str(_value)
_value = add_glob_to_array(_value)
if _value:
module.custom_desired_config['config'][
'bridge-' + re.sub('_', '-', _attr)] = str(_value)
def build_desired_iface_config(module):
"""
take parameters defined and build ifupdown2 compatible hash
"""
module.custom_desired_config = {
'addr_family': None,
'auto': True,
'config': {},
'name': module.params.get('name')
}
for _attr in ['vlan_aware', 'pvid', 'ports', 'stp']:
build_bridge_attr(module, _attr)
build_addr_method(module)
build_address(module)
build_vids(module)
build_alias_name(module)
build_vrr(module)
for _attr in ['mtu', 'mstpctl_treeprio']:
build_generic_attr(module, _attr)
def config_dict_changed(module):
"""
return true if 'config' dict in hash is different
between desired and current config
"""
current_config = module.custom_current_config.get('config')
desired_config = module.custom_desired_config.get('config')
return current_config != desired_config
def config_changed(module):
"""
returns true if config has changed
"""
if config_dict_changed(module):
return True
# check if addr_method is changed
return module.custom_desired_config.get('addr_method') != \
module.custom_current_config.get('addr_method')
def replace_config(module):
temp = tempfile.NamedTemporaryFile()
desired_config = module.custom_desired_config
# by default it will be something like /etc/network/interfaces.d/swp1
final_location = module.params.get('location') + '/' + \
module.params.get('name')
final_text = ''
_fh = open(final_location, 'w')
# make sure to put hash in array or else ifquery will fail
# write to temp file
try:
temp.write(module.jsonify([desired_config]))
# need to seek to 0 so that data is written to tempfile.
temp.seek(0)
_cmd = "/sbin/ifquery -a -i %s -t json" % (temp.name)
final_text = run_cmd(module, _cmd)
finally:
temp.close()
try:
_fh.write(final_text)
finally:
_fh.close()
def main():
module = AnsibleModule(
argument_spec=dict(
ports=dict(required=True, type='list'),
name=dict(required=True, type='str'),
ipv4=dict(type='list'),
ipv6=dict(type='list'),
alias_name=dict(type='str'),
addr_method=dict(type='str',
choices=['', 'dhcp']),
mtu=dict(type='str'),
virtual_ip=dict(type='str'),
virtual_mac=dict(type='str'),
vids=dict(type='list'),
pvid=dict(type='str'),
mstpctl_treeprio=dict(type='str'),
vlan_aware=dict(type='bool'),
stp=dict(type='bool', default='yes'),
location=dict(type='str',
default='/etc/network/interfaces.d')
),
required_together=[
['virtual_ip', 'virtual_mac']
]
)
# if using the jinja default filter, this resolves to
# create an list with an empty string ['']. The following
# checks all lists and removes it, so that functions expecting
# an empty list, get this result. May upstream this fix into
# the AnsibleModule code to have it check for this.
for k, _param in module.params.items():
if isinstance(_param, list):
module.params[k] = [x for x in _param if x]
_location = module.params.get('location')
if not os.path.exists(_location):
_msg = "%s does not exist." % (_location)
module.fail_json(msg=_msg)
return # for testing purposes only
ifacename = module.params.get('name')
_changed = False
_msg = "interface %s config not changed" % (ifacename)
current_iface_config(module)
build_desired_iface_config(module)
if config_changed(module):
replace_config(module)
_msg = "interface %s config updated" % (ifacename)
_changed = True
module.exit_json(changed=_changed, msg=_msg)
 if __name__ == '__main__':
-    main()
+    removed_module()


@@ -19,7 +19,10 @@ module: cl_img_install
 version_added: "2.1"
 author: "Cumulus Networks (@CumulusNetworks)"
 short_description: Install a different Cumulus Linux version.
-deprecated: Deprecated in 2.3. The image slot system no longer exists in Cumulus Linux.
+deprecated:
+  removed_in: "2.5"
+  why: The image slot system no longer exists in Cumulus Linux.
+  alternative: n/a
 description:
 - install a different version of Cumulus Linux in the inactive slot. For
   more details go the Image Management User Guide at
@@ -103,213 +106,7 @@ msg:
 sample: "interface bond0 config updated"
 '''
-import re
+from ansible.module_utils.common.removed import removed_module
from ansible.module_utils.basic import AnsibleModule, platform
from ansible.module_utils.six.moves.urllib import parse as urlparse
def check_url(module, url):
parsed_url = urlparse(url)
if len(parsed_url.path) > 0:
sch = parsed_url.scheme
if (sch == 'http' or sch == 'https' or len(parsed_url.scheme) == 0):
return True
module.fail_json(msg="Image Path URL. Wrong Format %s" % (url))
return False
def run_cl_cmd(module, cmd, check_rc=True):
try:
(rc, out, err) = module.run_command(cmd, check_rc=check_rc)
except Exception as e:
module.fail_json(msg=e.strerror)
# trim last line as it is always empty
ret = out.splitlines()
return ret
def get_slot_info(module):
slots = {}
slots['1'] = {}
slots['2'] = {}
active_slotnum = get_active_slot(module)
primary_slotnum = get_primary_slot_num(module)
for _num in range(1, 3):
slot = slots[str(_num)]
slot['version'] = get_slot_version(module, str(_num))
if _num == int(active_slotnum):
slot['active'] = True
if _num == int(primary_slotnum):
slot['primary'] = True
return slots
def get_slot_version(module, slot_num):
lsb_release = check_mnt_root_lsb_release(slot_num)
switch_firm_ver = check_fw_print_env(module, slot_num)
_version = module.sw_version
if lsb_release == _version or switch_firm_ver == _version:
return _version
elif lsb_release:
return lsb_release
else:
return switch_firm_ver
def check_mnt_root_lsb_release(slot_num):
_path = '/mnt/root-rw/config%s/etc/lsb-release' % (slot_num)
try:
lsb_release = open(_path)
lines = lsb_release.readlines()
for line in lines:
_match = re.search('DISTRIB_RELEASE=([0-9a-zA-Z.]+)', line)
if _match:
return _match.group(1).split('-')[0]
except:
pass
return None
def check_fw_print_env(module, slot_num):
cmd = None
if platform.machine() == 'ppc':
cmd = "/usr/sbin/fw_printenv -n cl.ver%s" % (slot_num)
fw_output = run_cl_cmd(module, cmd)
return fw_output[0].split('-')[0]
elif platform.machine() == 'x86_64':
cmd = "/usr/bin/grub-editenv list"
grub_output = run_cl_cmd(module, cmd)
for _line in grub_output:
_regex_str = re.compile('cl.ver' + slot_num + r'=([\w.]+)-')
m0 = re.match(_regex_str, _line)
if m0:
return m0.group(1)
def get_primary_slot_num(module):
cmd = None
if platform.machine() == 'ppc':
cmd = "/usr/sbin/fw_printenv -n cl.active"
return ''.join(run_cl_cmd(module, cmd))
elif platform.machine() == 'x86_64':
cmd = "/usr/bin/grub-editenv list"
grub_output = run_cl_cmd(module, cmd)
for _line in grub_output:
_regex_str = re.compile(r'cl.active=(\d)')
m0 = re.match(_regex_str, _line)
if m0:
return m0.group(1)
def get_active_slot(module):
try:
cmdline = open('/proc/cmdline').readline()
except:
module.fail_json(msg='Failed to open /proc/cmdline. ' +
'Unable to determine active slot')
_match = re.search(r'active=(\d+)', cmdline)
if _match:
return _match.group(1)
return None
def install_img(module):
src = module.params.get('src')
_version = module.sw_version
app_path = '/usr/cumulus/bin/cl-img-install -f %s' % (src)
run_cl_cmd(module, app_path)
perform_switch_slot = module.params.get('switch_slot')
if perform_switch_slot is True:
check_sw_version(module)
else:
_changed = True
_msg = "Cumulus Linux Version " + _version + " successfully" + \
" installed in alternate slot"
module.exit_json(changed=_changed, msg=_msg)
def switch_slot(module, slotnum):
_switch_slot = module.params.get('switch_slot')
if _switch_slot is True:
app_path = '/usr/cumulus/bin/cl-img-select %s' % (slotnum)
run_cl_cmd(module, app_path)
def determine_sw_version(module):
_version = module.params.get('version')
_filename = ''
# Use _version if user defines it
if _version:
module.sw_version = _version
return
else:
_filename = module.params.get('src').split('/')[-1]
_match = re.search(r'\d+\W\d+\W\w+', _filename)
if _match:
module.sw_version = re.sub(r'\W', '.', _match.group())
return
_msg = 'Unable to determine version from file %s' % (_filename)
module.exit_json(changed=False, msg=_msg)
def check_sw_version(module):
slots = get_slot_info(module)
_version = module.sw_version
perform_switch_slot = module.params.get('switch_slot')
for _num, slot in slots.items():
if slot['version'] == _version:
if 'active' in slot:
_msg = "Version %s is installed in the active slot" \
% (_version)
module.exit_json(changed=False, msg=_msg)
else:
_msg = "Version " + _version + \
" is installed in the alternate slot. "
if 'primary' not in slot:
if perform_switch_slot is True:
switch_slot(module, _num)
_msg = _msg + \
"cl-img-select has made the alternate " + \
"slot the primary slot. " +\
"Next reboot, switch will load " + _version + "."
module.exit_json(changed=True, msg=_msg)
else:
_msg = _msg + \
"Next reboot will not load " + _version + ". " + \
"switch_slot keyword set to 'no'."
module.exit_json(changed=False, msg=_msg)
else:
if perform_switch_slot is True:
_msg = _msg + \
"Next reboot, switch will load " + _version + "."
module.exit_json(changed=False, msg=_msg)
else:
_msg = _msg + \
'switch_slot set to "no". ' + \
'No further action to take'
module.exit_json(changed=False, msg=_msg)
def main():
module = AnsibleModule(
argument_spec=dict(
src=dict(required=True, type='str'),
version=dict(type='str'),
switch_slot=dict(type='bool', default=False),
),
)
determine_sw_version(module)
_url = module.params.get('src')
check_sw_version(module)
check_url(module, _url)
install_img(module)
if __name__ == '__main__': if __name__ == '__main__':
main() removed_module()
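The two hunks above show the pattern this commit applies to every legacy module: the free-text `deprecated:` line in DOCUMENTATION becomes a structured block, and the module body is reduced to a docs-only stub. As a consolidated sketch of that documentation shape (the module name and field values below are placeholders, not taken from any real module):

```python
DOCUMENTATION = '''
---
module: _example_legacy_module
short_description: Illustration of the structured deprecation block
deprecated:
  removed_in: "2.5"   # release in which the module becomes a docs-only stub
  why: Short explanation of why the module is being retired.
  alternative: Use M(replacement_module) instead.
description:
  - Real modules keep their original description here.
'''
```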

View file

@ -20,7 +20,10 @@ version_added: "2.1"
author: "Cumulus Networks (@CumulusNetworks)" author: "Cumulus Networks (@CumulusNetworks)"
short_description: Configures a front panel port, loopback or short_description: Configures a front panel port, loopback or
management port on Cumulus Linux. management port on Cumulus Linux.
deprecated: Deprecated in 2.3. Use M(nclu) instead. deprecated:
removed_in: "2.5"
why: The M(nclu) module is designed to be easier to use for individuals who are new to Cumulus Linux by exposing the NCLU interface in an automatable way.
alternative: Use M(nclu) instead.
description: description:
- Configures a front panel, sub-interface, SVI, management or loopback port - Configures a front panel, sub-interface, SVI, management or loopback port
on a Cumulus Linux switch. For bridge ports use the cl_bridge module. For on a Cumulus Linux switch. For bridge ports use the cl_bridge module. For
@ -202,249 +205,7 @@ msg:
sample: "interface bond0 config updated" sample: "interface bond0 config updated"
''' '''
import os from ansible.module_utils.common.removed import removed_module
import re
import tempfile
from ansible.module_utils.basic import AnsibleModule
# handy helper for calling system calls.
# calls AnsibleModule.run_command and prints a more appropriate message
# exec_path - path to file to execute, with all its arguments.
# E.g "/sbin/ip -o link show"
# failure_msg - what message to print on failure
def run_cmd(module, exec_path):
(_rc, out, _err) = module.run_command(exec_path)
if _rc > 0:
if re.search('cannot find interface', _err):
return '[{}]'
failure_msg = "Failed; %s Error: %s" % (exec_path, _err)
module.fail_json(msg=failure_msg)
else:
return out
def current_iface_config(module):
# due to a bug in ifquery, have to check for presence of interface file
# and not rely solely on ifquery. when bug is fixed, this check can be
# removed
_ifacename = module.params.get('name')
_int_dir = module.params.get('location')
module.custom_current_config = {}
if os.path.exists(_int_dir + '/' + _ifacename):
_cmd = "/sbin/ifquery -o json %s" % (module.params.get('name'))
module.custom_current_config = module.from_json(
run_cmd(module, _cmd))[0]
def build_address(module):
# if addr_method == 'dhcp', don't add IP address
if module.params.get('addr_method') == 'dhcp':
return
_ipv4 = module.params.get('ipv4')
_ipv6 = module.params.get('ipv6')
_addresslist = []
if _ipv4 and len(_ipv4) > 0:
_addresslist += _ipv4
if _ipv6 and len(_ipv6) > 0:
_addresslist += _ipv6
if len(_addresslist) > 0:
module.custom_desired_config['config']['address'] = ' '.join(
_addresslist)
def build_vids(module):
_vids = module.params.get('vids')
if _vids and len(_vids) > 0:
module.custom_desired_config['config']['bridge-vids'] = ' '.join(_vids)
def build_pvid(module):
_pvid = module.params.get('pvid')
if _pvid:
module.custom_desired_config['config']['bridge-pvid'] = str(_pvid)
def build_speed(module):
_speed = module.params.get('speed')
if _speed:
module.custom_desired_config['config']['link-speed'] = str(_speed)
module.custom_desired_config['config']['link-duplex'] = 'full'
def conv_bool_to_str(_value):
if isinstance(_value, bool):
if _value is True:
return 'yes'
else:
return 'no'
return _value
def build_generic_attr(module, _attr):
_value = module.params.get(_attr)
_value = conv_bool_to_str(_value)
if _value:
module.custom_desired_config['config'][
re.sub('_', '-', _attr)] = str(_value)
def build_alias_name(module):
alias_name = module.params.get('alias_name')
if alias_name:
module.custom_desired_config['config']['alias'] = alias_name
def build_addr_method(module):
_addr_method = module.params.get('addr_method')
if _addr_method:
module.custom_desired_config['addr_family'] = 'inet'
module.custom_desired_config['addr_method'] = _addr_method
def build_vrr(module):
_virtual_ip = module.params.get('virtual_ip')
_virtual_mac = module.params.get('virtual_mac')
vrr_config = []
if _virtual_ip:
vrr_config.append(_virtual_mac)
vrr_config.append(_virtual_ip)
module.custom_desired_config.get('config')['address-virtual'] = \
' '.join(vrr_config)
def build_desired_iface_config(module):
"""
take parameters defined and build ifupdown2 compatible hash
"""
module.custom_desired_config = {
'addr_family': None,
'auto': True,
'config': {},
'name': module.params.get('name')
}
build_addr_method(module)
build_address(module)
build_vids(module)
build_pvid(module)
build_speed(module)
build_alias_name(module)
build_vrr(module)
for _attr in ['mtu', 'mstpctl_portnetwork', 'mstpctl_portadminedge',
'mstpctl_bpduguard', 'clagd_enable',
'clagd_priority', 'clagd_peer_ip',
'clagd_sys_mac', 'clagd_args']:
build_generic_attr(module, _attr)
def config_dict_changed(module):
"""
return true if 'config' dict in hash is different
between desired and current config
"""
current_config = module.custom_current_config.get('config')
desired_config = module.custom_desired_config.get('config')
return current_config != desired_config
def config_changed(module):
"""
returns true if config has changed
"""
if config_dict_changed(module):
return True
# check if addr_method is changed
return module.custom_desired_config.get('addr_method') != \
module.custom_current_config.get('addr_method')
def replace_config(module):
temp = tempfile.NamedTemporaryFile()
desired_config = module.custom_desired_config
# by default it will be something like /etc/network/interfaces.d/swp1
final_location = module.params.get('location') + '/' + \
module.params.get('name')
final_text = ''
_fh = open(final_location, 'w')
# make sure to put hash in array or else ifquery will fail
# write to temp file
try:
temp.write(module.jsonify([desired_config]))
# need to seek to 0 so that data is written to tempfile.
temp.seek(0)
_cmd = "/sbin/ifquery -a -i %s -t json" % (temp.name)
final_text = run_cmd(module, _cmd)
finally:
temp.close()
try:
_fh.write(final_text)
finally:
_fh.close()
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True, type='str'),
ipv4=dict(type='list'),
ipv6=dict(type='list'),
alias_name=dict(type='str'),
addr_method=dict(type='str',
choices=['', 'loopback', 'dhcp']),
speed=dict(type='str'),
mtu=dict(type='str'),
virtual_ip=dict(type='str'),
virtual_mac=dict(type='str'),
vids=dict(type='list'),
pvid=dict(type='str'),
mstpctl_portnetwork=dict(type='bool'),
mstpctl_portadminedge=dict(type='bool'),
mstpctl_bpduguard=dict(type='bool'),
clagd_enable=dict(type='bool'),
clagd_priority=dict(type='str'),
clagd_peer_ip=dict(type='str'),
clagd_sys_mac=dict(type='str'),
clagd_args=dict(type='str'),
location=dict(type='str',
default='/etc/network/interfaces.d')
),
required_together=[
['virtual_ip', 'virtual_mac'],
['clagd_enable', 'clagd_priority',
'clagd_peer_ip', 'clagd_sys_mac']
]
)
# if using the jinja default filter, this resolves to
# create an list with an empty string ['']. The following
# checks all lists and removes it, so that functions expecting
# an empty list, get this result. May upstream this fix into
# the AnsibleModule code to have it check for this.
for k, _param in module.params.items():
if isinstance(_param, list):
module.params[k] = [x for x in _param if x]
_location = module.params.get('location')
if not os.path.exists(_location):
_msg = "%s does not exist." % (_location)
module.fail_json(msg=_msg)
return # for testing purposes only
ifacename = module.params.get('name')
_changed = False
_msg = "interface %s config not changed" % (ifacename)
current_iface_config(module)
build_desired_iface_config(module)
if config_changed(module):
replace_config(module)
_msg = "interface %s config updated" % (ifacename)
_changed = True
module.exit_json(changed=_changed, msg=_msg)
if __name__ == '__main__': if __name__ == '__main__':
main() removed_module()

View file

@ -19,7 +19,10 @@ module: cl_interface_policy
version_added: "2.1" version_added: "2.1"
author: "Cumulus Networks (@CumulusNetworks)" author: "Cumulus Networks (@CumulusNetworks)"
short_description: Configure interface enforcement policy on Cumulus Linux short_description: Configure interface enforcement policy on Cumulus Linux
deprecated: Deprecated in 2.3. Use M(nclu) instead. deprecated:
removed_in: "2.5"
why: The M(nclu) module is designed to be easier to use for individuals who are new to Cumulus Linux by exposing the NCLU interface in an automatable way.
alternative: Use M(nclu) instead.
description: description:
- This module affects the configuration files located in the interfaces - This module affects the configuration files located in the interfaces
folder defined by ifupdown2. Interfaces port and port ranges listed in the folder defined by ifupdown2. Interfaces port and port ranges listed in the
@ -64,82 +67,8 @@ msg:
type: string type: string
sample: "interface bond0 config updated" sample: "interface bond0 config updated"
''' '''
import os
import re
from ansible.module_utils.basic import AnsibleModule
# get list of interface files that are currently "configured".
# doesn't mean actually applied to the system, but most likely are
def read_current_int_dir(module):
module.custom_currentportlist = os.listdir(module.params.get('location'))
# take the allowed list and convert it to into a list
# of ports.
def convert_allowed_list_to_port_range(module):
allowedlist = module.params.get('allowed')
for portrange in allowedlist:
module.custom_allowedportlist += breakout_portrange(portrange)
def breakout_portrange(prange):
_m0 = re.match(r'(\w+[a-z.])(\d+)?-?(\d+)?(\w+)?', prange.strip())
# no range defined
if _m0.group(3) is None:
return [_m0.group(0)]
else:
portarray = []
intrange = range(int(_m0.group(2)), int(_m0.group(3)) + 1)
for _int in intrange:
portarray.append(''.join([_m0.group(1),
str(_int),
str(_m0.group(4) or '')
]
)
)
return portarray
# deletes the interface files
def unconfigure_interfaces(module):
currentportset = set(module.custom_currentportlist)
allowedportset = set(module.custom_allowedportlist)
remove_list = currentportset.difference(allowedportset)
fileprefix = module.params.get('location')
module.msg = "remove config for interfaces %s" % (', '.join(remove_list))
for _file in remove_list:
os.unlink(fileprefix + _file)
# check to see if policy should be enforced
# returns true if policy needs to be enforced
# that is delete interface files
def int_policy_enforce(module):
currentportset = set(module.custom_currentportlist)
allowedportset = set(module.custom_allowedportlist)
return not currentportset.issubset(allowedportset)
def main():
module = AnsibleModule(
argument_spec=dict(
allowed=dict(type='list', required=True),
location=dict(type='str', default='/etc/network/interfaces.d/')
),
)
module.custom_currentportlist = []
module.custom_allowedportlist = []
module.changed = False
module.msg = 'configured port list is part of allowed port list'
read_current_int_dir(module)
convert_allowed_list_to_port_range(module)
if int_policy_enforce(module):
module.changed = True
unconfigure_interfaces(module)
module.exit_json(changed=module.changed, msg=module.msg)
from ansible.module_utils.common.removed import removed_module
if __name__ == '__main__': if __name__ == '__main__':
main() removed_module()

View file

@ -18,8 +18,11 @@ DOCUMENTATION = '''
module: cl_license module: cl_license
version_added: "2.1" version_added: "2.1"
author: "Cumulus Networks (@CumulusNetworks)" author: "Cumulus Networks (@CumulusNetworks)"
short_description: Install licenses fo Cumulus Linux short_description: Install licenses for Cumulus Linux
deprecated: Deprecated in 2.3. deprecated:
why: The M(nclu) module is designed to be easier to use for individuals who are new to Cumulus Linux by exposing the NCLU interface in an automatable way.
removed_in: "2.5"
alternative: Use M(nclu) instead.
description: description:
- Installs a Cumulus Linux license. The module reports no change of status - Installs a Cumulus Linux license. The module reports no change of status
when a license is installed. when a license is installed.
@ -100,43 +103,7 @@ msg:
sample: "interface bond0 config updated" sample: "interface bond0 config updated"
''' '''
from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.common.removed import removed_module
CL_LICENSE_PATH = '/usr/cumulus/bin/cl-license'
def install_license(module):
# license is not installed, install it
_url = module.params.get('src')
(_rc, out, _err) = module.run_command("%s -i %s" % (CL_LICENSE_PATH, _url))
if _rc > 0:
module.fail_json(msg=_err)
def main():
module = AnsibleModule(
argument_spec=dict(
src=dict(required=True, type='str'),
force=dict(type='bool', default=False)
),
)
# check if license is installed
# if force is enabled then set return code to nonzero
if module.params.get('force') is True:
_rc = 10
else:
(_rc, out, _err) = module.run_command(CL_LICENSE_PATH)
if _rc == 0:
module.msg = "No change. License already installed"
module.changed = False
else:
install_license(module)
module.msg = "License installation completed"
module.changed = True
module.exit_json(changed=module.changed, msg=module.msg)
if __name__ == '__main__': if __name__ == '__main__':
main() removed_module()

View file

@ -19,7 +19,10 @@ module: cl_ports
version_added: "2.1" version_added: "2.1"
author: "Cumulus Networks (@CumulusNetworks)" author: "Cumulus Networks (@CumulusNetworks)"
short_description: Configure Cumulus Switch port attributes (ports.conf) short_description: Configure Cumulus Switch port attributes (ports.conf)
deprecated: Deprecated in 2.3. Use M(nclu) instead. deprecated:
removed_in: "2.5"
why: The M(nclu) module is designed to be easier to use for individuals who are new to Cumulus Linux by exposing the NCLU interface in an automatable way.
alternative: Use M(nclu) instead.
description: description:
- Set the initial port attribute defined in the Cumulus Linux ports.conf, - Set the initial port attribute defined in the Cumulus Linux ports.conf,
file. This module does not do any error checking at the moment. Be careful file. This module does not do any error checking at the moment. Be careful
@ -77,139 +80,8 @@ msg:
type: string type: string
sample: "interface bond0 config updated" sample: "interface bond0 config updated"
''' '''
import os
import re
import tempfile
import shutil
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
PORTS_CONF = '/etc/cumulus/ports.conf'
def hash_existing_ports_conf(module):
module.ports_conf_hash = {}
if not os.path.exists(PORTS_CONF):
return False
try:
existing_ports_conf = open(PORTS_CONF).readlines()
except IOError as e:
_msg = "Failed to open %s: %s" % (PORTS_CONF, to_native(e))
module.fail_json(msg=_msg)
return # for testing only should return on module.fail_json
for _line in existing_ports_conf:
_m0 = re.match(r'^(\d+)=(\w+)', _line)
if _m0:
_portnum = int(_m0.group(1))
_speed = _m0.group(2)
module.ports_conf_hash[_portnum] = _speed
def generate_new_ports_conf_hash(module):
new_ports_conf_hash = {}
convert_hash = {
'speed_40g_div_4': '40G/4',
'speed_4_by_10g': '4x10G',
'speed_10g': '10G',
'speed_40g': '40G'
}
for k in module.params.keys():
port_range = module.params[k]
port_setting = convert_hash[k]
if port_range:
port_range = [x for x in port_range if x]
for port_str in port_range:
port_range_str = port_str.replace('swp', '').split('-')
if len(port_range_str) == 1:
new_ports_conf_hash[int(port_range_str[0])] = \
port_setting
else:
int_range = map(int, port_range_str)
portnum_range = range(int_range[0], int_range[1] + 1)
for i in portnum_range:
new_ports_conf_hash[i] = port_setting
module.new_ports_hash = new_ports_conf_hash
def compare_new_and_old_port_conf_hash(module):
ports_conf_hash_copy = module.ports_conf_hash.copy()
module.ports_conf_hash.update(module.new_ports_hash)
port_num_length = len(module.ports_conf_hash.keys())
orig_port_num_length = len(ports_conf_hash_copy.keys())
if port_num_length != orig_port_num_length:
module.fail_json(msg="Port numbering is wrong. \
Too many or two few ports configured")
return False
elif ports_conf_hash_copy == module.ports_conf_hash:
return False
return True
def make_copy_of_orig_ports_conf(module):
if os.path.exists(PORTS_CONF + '.orig'):
return
try:
shutil.copyfile(PORTS_CONF, PORTS_CONF + '.orig')
except IOError as e:
_msg = "Failed to save the original %s: %s" % (PORTS_CONF, to_native(e))
module.fail_json(msg=_msg)
return # for testing only
def write_to_ports_conf(module):
"""
use tempfile to first write out config in temp file
then write to actual location. may help prevent file
corruption. Ports.conf is a critical file for Cumulus.
Don't want to corrupt this file under any circumstance.
"""
temp = tempfile.NamedTemporaryFile()
try:
try:
temp.write('# Managed By Ansible\n')
for k in sorted(module.ports_conf_hash.keys()):
port_setting = module.ports_conf_hash[k]
_str = "%s=%s\n" % (k, port_setting)
temp.write(_str)
temp.seek(0)
shutil.copyfile(temp.name, PORTS_CONF)
except IOError as e:
module.fail_json(msg="Failed to write to %s: %s" % (PORTS_CONF, to_native(e)))
finally:
temp.close()
def main():
module = AnsibleModule(
argument_spec=dict(
speed_40g_div_4=dict(type='list'),
speed_4_by_10g=dict(type='list'),
speed_10g=dict(type='list'),
speed_40g=dict(type='list')
),
required_one_of=[['speed_40g_div_4',
'speed_4_by_10g',
'speed_10g',
'speed_40g']]
)
_changed = False
hash_existing_ports_conf(module)
generate_new_ports_conf_hash(module)
if compare_new_and_old_port_conf_hash(module):
make_copy_of_orig_ports_conf(module)
write_to_ports_conf(module)
_changed = True
_msg = "/etc/cumulus/ports.conf changed"
else:
_msg = 'No change in /etc/ports.conf'
module.exit_json(changed=_changed, msg=_msg)
from ansible.module_utils.common.removed import removed_module
if __name__ == '__main__': if __name__ == '__main__':
main() removed_module()

View file

@ -24,7 +24,10 @@ DOCUMENTATION = '''
--- ---
module: nxos_ip_interface module: nxos_ip_interface
version_added: "2.1" version_added: "2.1"
deprecated: Deprecated in 2.5. Use M(nxos_l3_interface) instead. deprecated:
removed_in: "2.9"
why: Replaced with common C(*_l3_interface) network modules.
alternative: Use M(nxos_l3_interface) instead.
short_description: Manages L3 attributes for IPv4 and IPv6 interfaces. short_description: Manages L3 attributes for IPv4 and IPv6 interfaces.
description: description:
- Manages Layer 3 attributes for IPv4 and IPv6 interfaces. - Manages Layer 3 attributes for IPv4 and IPv6 interfaces.

View file

@ -25,7 +25,10 @@ DOCUMENTATION = '''
module: nxos_mtu module: nxos_mtu
extends_documentation_fragment: nxos extends_documentation_fragment: nxos
version_added: "2.2" version_added: "2.2"
deprecated: Deprecated in 2.3 use M(nxos_system)'s C(mtu) option. deprecated:
removed_in: "2.5"
why: Replaced with common C(*_system) network modules.
alternative: Use M(nxos_system)'s C(system_mtu) option. To specify an interface's MTU use M(nxos_interface).
short_description: Manages MTU settings on Nexus switch. short_description: Manages MTU settings on Nexus switch.
description: description:
- Manages MTU settings on Nexus switch. - Manages MTU settings on Nexus switch.
@ -121,264 +124,8 @@ changed:
type: boolean type: boolean
sample: true sample: true
''' '''
from ansible.module_utils.network.nxos.nxos import load_config, run_commands
from ansible.module_utils.network.nxos.nxos import nxos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
def execute_show_command(command, module):
if 'show run' not in command:
output = 'json'
else:
output = 'text'
cmds = [{
'command': command,
'output': output,
}]
body = run_commands(module, cmds)
return body
def flatten_list(command_lists):
flat_command_list = []
for command in command_lists:
if isinstance(command, list):
flat_command_list.extend(command)
else:
flat_command_list.append(command)
return flat_command_list
def get_mtu(interface, module):
command = 'show interface {0}'.format(interface)
mtu = {}
body = execute_show_command(command, module)
try:
mtu_table = body[0]['TABLE_interface']['ROW_interface']
mtu['mtu'] = str(
mtu_table.get('eth_mtu',
mtu_table.get('svi_mtu', 'unreadable_via_api')))
mtu['sysmtu'] = get_system_mtu(module)['sysmtu']
except KeyError:
mtu = {}
return mtu
def get_system_mtu(module):
command = 'show run all | inc jumbomtu'
sysmtu = ''
body = execute_show_command(command, module)
if body:
sysmtu = str(body[0].split(' ')[-1])
try:
sysmtu = int(sysmtu)
except:
sysmtu = ""
return dict(sysmtu=str(sysmtu))
def get_commands_config_mtu(delta, interface):
CONFIG_ARGS = {
'mtu': 'mtu {mtu}',
'sysmtu': 'system jumbomtu {sysmtu}',
}
commands = []
for param, value in delta.items():
command = CONFIG_ARGS.get(param, 'DNE').format(**delta)
if command and command != 'DNE':
commands.append(command)
command = None
mtu_check = delta.get('mtu', None)
if mtu_check:
commands.insert(0, 'interface {0}'.format(interface))
return commands
def get_commands_remove_mtu(delta, interface):
CONFIG_ARGS = {
'mtu': 'no mtu {mtu}',
'sysmtu': 'no system jumbomtu {sysmtu}',
}
commands = []
for param, value in delta.items():
command = CONFIG_ARGS.get(param, 'DNE').format(**delta)
if command and command != 'DNE':
commands.append(command)
command = None
mtu_check = delta.get('mtu', None)
if mtu_check:
commands.insert(0, 'interface {0}'.format(interface))
return commands
def get_interface_type(interface):
if interface.upper().startswith('ET'):
return 'ethernet'
elif interface.upper().startswith('VL'):
return 'svi'
elif interface.upper().startswith('LO'):
return 'loopback'
elif interface.upper().startswith('MG'):
return 'management'
elif interface.upper().startswith('MA'):
return 'management'
elif interface.upper().startswith('PO'):
return 'portchannel'
else:
return 'unknown'
def is_default(interface, module):
command = 'show run interface {0}'.format(interface)
try:
body = execute_show_command(command, module)[0]
if body == 'DNE':
return 'DNE'
else:
raw_list = body.split('\n')
if raw_list[-1].startswith('interface'):
return True
else:
return False
except (KeyError):
return 'DNE'
def get_interface_mode(interface, intf_type, module):
command = 'show interface {0}'.format(interface)
mode = 'unknown'
interface_table = {}
body = execute_show_command(command, module)
try:
interface_table = body[0]['TABLE_interface']['ROW_interface']
except (KeyError, AttributeError, IndexError):
return mode
if intf_type in ['ethernet', 'portchannel']:
mode = str(interface_table.get('eth_mode', 'layer3'))
if mode in ['access', 'trunk']:
mode = 'layer2'
elif mode == 'routed':
mode = 'layer3'
elif intf_type in ['loopback', 'svi']:
mode = 'layer3'
return mode
def main():
argument_spec = dict(
mtu=dict(type='str'),
interface=dict(type='str'),
sysmtu=dict(type='str'),
state=dict(choices=['absent', 'present'], default='present'),
)
argument_spec.update(nxos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
required_together=[['mtu', 'interface']],
supports_check_mode=True)
warnings = list()
check_args(module, warnings)
interface = module.params['interface']
mtu = module.params['mtu']
sysmtu = module.params['sysmtu']
state = module.params['state']
if sysmtu and (interface or mtu):
module.fail_json(msg='Proper usage-- either just use the sysmtu param '
'or use interface AND mtu params')
if interface:
intf_type = get_interface_type(interface)
if intf_type != 'ethernet':
if is_default(interface, module) == 'DNE':
module.fail_json(msg='Invalid interface. It does not exist '
'on the switch.')
existing = get_mtu(interface, module)
else:
existing = get_system_mtu(module)
if interface and mtu:
if intf_type == 'loopback':
module.fail_json(msg='Cannot set MTU for loopback interface.')
mode = get_interface_mode(interface, intf_type, module)
if mode == 'layer2':
if intf_type in ['ethernet', 'portchannel']:
if mtu not in [existing['sysmtu'], '1500']:
module.fail_json(msg='MTU on L2 interfaces can only be set'
' to the system default (1500) or '
'existing sysmtu value which is '
' {0}'.format(existing['sysmtu']))
elif mode == 'layer3':
if intf_type in ['ethernet', 'portchannel', 'svi']:
if ((int(mtu) < 576 or int(mtu) > 9216) or
((int(mtu) % 2) != 0)):
module.fail_json(msg='Invalid MTU for Layer 3 interface'
'needs to be an even number between'
'576 and 9216')
if sysmtu:
if ((int(sysmtu) < 576 or int(sysmtu) > 9216 or
((int(sysmtu) % 2) != 0))):
module.fail_json(msg='Invalid MTU- needs to be an even '
'number between 576 and 9216')
args = dict(mtu=mtu, sysmtu=sysmtu)
proposed = dict((k, v) for k, v in args.items() if v is not None)
delta = dict(set(proposed.items()).difference(existing.items()))
changed = False
end_state = existing
commands = []
if state == 'present':
if delta:
command = get_commands_config_mtu(delta, interface)
commands.append(command)
elif state == 'absent':
common = set(proposed.items()).intersection(existing.items())
if common:
command = get_commands_remove_mtu(dict(common), interface)
commands.append(command)
cmds = flatten_list(commands)
if cmds:
if module.check_mode:
module.exit_json(changed=True, commands=cmds)
else:
changed = True
load_config(module, cmds)
if interface:
end_state = get_mtu(interface, module)
else:
end_state = get_system_mtu(module)
if 'configure' in cmds:
cmds.pop(0)
results = {}
results['proposed'] = proposed
results['existing'] = existing
results['end_state'] = end_state
results['updates'] = cmds
results['changed'] = changed
results['warnings'] = warnings
module.exit_json(**results)
from ansible.module_utils.common.removed import removed_module
if __name__ == '__main__': if __name__ == '__main__':
main() removed_module()

View file

@ -25,7 +25,10 @@ DOCUMENTATION = '''
module: nxos_portchannel module: nxos_portchannel
extends_documentation_fragment: nxos extends_documentation_fragment: nxos
version_added: "2.2" version_added: "2.2"
deprecated: Deprecated in 2.5. Use M(nxos_linkagg) instead. deprecated:
removed_in: "2.9"
why: Replaced with common C(*_linkagg) network modules.
alternative: Use M(nxos_linkagg) instead.
short_description: Manages port-channel interfaces. short_description: Manages port-channel interfaces.
description: description:
- Manages port-channel specific configuration parameters. - Manages port-channel specific configuration parameters.

View file

@ -25,7 +25,10 @@ DOCUMENTATION = '''
module: nxos_switchport module: nxos_switchport
extends_documentation_fragment: nxos extends_documentation_fragment: nxos
version_added: "2.1" version_added: "2.1"
deprecated: Use M(nxos_l2_interface) instead. deprecated:
removed_in: "2.9"
why: Replaced with generic version.
alternative: Use M(nxos_l2_interface) instead.
short_description: Manages Layer 2 switchport interfaces. short_description: Manages Layer 2 switchport interfaces.
description: description:
- Manages Layer 2 interfaces - Manages Layer 2 interfaces

View file

@ -30,7 +30,10 @@ author: "Luigi Mori (@jtschichold), Ivan Bojer (@ivanbojer)"
version_added: "2.3" version_added: "2.3"
requirements: requirements:
- pan-python - pan-python
deprecated: In 2.4 use M(panos_nat_rule) instead. deprecated:
removed_in: "2.8"
why: M(panos_nat_rule) uses next generation SDK (PanDevice).
alternative: Use M(panos_nat_rule) instead.
options: options:
ip_address: ip_address:
description: description:
@ -143,7 +146,7 @@ RETURN = '''
''' '''
ANSIBLE_METADATA = {'metadata_version': '1.1', ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'], 'status': ['deprecated'],
'supported_by': 'community'} 'supported_by': 'community'}

View file

@ -20,7 +20,7 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>. # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'metadata_version': '1.1', ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'], 'status': ['deprecated'],
'supported_by': 'community'} 'supported_by': 'community'}
@ -35,7 +35,10 @@ description:
traffic is applied, the more specific rules must precede the more general ones. traffic is applied, the more specific rules must precede the more general ones.
author: "Ivan Bojer (@ivanbojer)" author: "Ivan Bojer (@ivanbojer)"
version_added: "2.3" version_added: "2.3"
deprecated: In 2.4 use M(panos_security_rule) instead. deprecated:
removed_in: "2.8"
why: Renamed to M(panos_security_rule) to align with API calls and UI object references; the new module also adds support for the PanDevice SDK.
alternative: Use M(panos_security_rule) instead.
requirements: requirements:
- pan-python can be obtained from PyPi U(https://pypi.python.org/pypi/pan-python) - pan-python can be obtained from PyPi U(https://pypi.python.org/pypi/pan-python)
- pandevice can be obtained from PyPi U(https://pypi.python.org/pypi/pandevice) - pandevice can be obtained from PyPi U(https://pypi.python.org/pypi/pandevice)

View file

@ -16,11 +16,14 @@ ANSIBLE_METADATA = {'metadata_version': '1.1',
DOCUMENTATION = ''' DOCUMENTATION = '''
--- ---
module: accelerate module: accelerate
removed: True
short_description: Enable accelerated mode on remote node short_description: Enable accelerated mode on remote node
deprecated: "Use SSH with ControlPersist instead." deprecated:
removed_in: "2.4"
why: Replaced by ControlPersist
alternative: Use SSH with ControlPersist instead.
removed: True
description: description:
- This module has been removed, this file is kept for historicaly documentation purposes - This module has been removed, this file is kept for historical documentation purposes.
- This modules launches an ephemeral I(accelerate) daemon on the remote node which - This modules launches an ephemeral I(accelerate) daemon on the remote node which
Ansible can use to communicate with nodes at high speed. Ansible can use to communicate with nodes at high speed.
- The daemon listens on a configurable port for a configurable amount of time. - The daemon listens on a configurable port for a configurable amount of time.
@ -77,3 +80,8 @@ EXAMPLES = '''
tasks: tasks:
- command: /usr/bin/anything - command: /usr/bin/anything
''' '''
from ansible.module_utils.common.removed import removed_module
if __name__ == '__main__':
removed_module()
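The accelerate change above is the full end-of-cycle shape in one place: DOCUMENTATION keeps the structured `deprecated:` block plus `removed: True`, all executable logic is gone, and the entry point calls `removed_module()`. A minimal sketch of such a stub (module name and text are placeholders; the helper import is the one used throughout this commit):

```python
#!/usr/bin/python
# Docs-only stub kept after a module reaches the end of its deprecation cycle.

DOCUMENTATION = '''
---
module: _example_removed_module
short_description: Illustration of a docs-only deprecation stub
deprecated:
  removed_in: "2.4"
  why: Placeholder explanation.
  alternative: Use M(replacement_module) instead.
removed: True
description:
  - This module has been removed; this file is kept for historical documentation purposes.
'''

EXAMPLES = '''
'''

from ansible.module_utils.common.removed import removed_module

if __name__ == '__main__':
    removed_module()
```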

View file

@ -20,8 +20,9 @@ author: Ansible Core Team (@ansible)
module: include module: include
short_description: Include a play or task list short_description: Include a play or task list
deprecated: deprecated:
The include action was too confusing, dealing with both plays and tasks, being both dynamic and static. This module removed_in: "2.8"
will be removed in version 2.8. As alternatives use M(include_tasks), M(import_playbook), M(import_tasks). why: The include action was too confusing, dealing with both plays and tasks, being both dynamic and static. This module will be removed in version 2.8.
alternative: Use M(include_tasks), M(import_playbook), M(import_tasks).
description: description:
- Includes a file with a list of plays or tasks to be executed in the current playbook. - Includes a file with a list of plays or tasks to be executed in the current playbook.
- Files with a list of plays can only be included at the top level. Lists of tasks can only be included where tasks - Files with a list of plays can only be included at the top level. Lists of tasks can only be included where tasks

View file

@ -29,7 +29,10 @@ DOCUMENTATION = r'''
--- ---
module: win_msi module: win_msi
version_added: '1.7' version_added: '1.7'
deprecated: In 2.4 and will be removed in 2.8, use M(win_package) instead. deprecated:
removed_in: "2.8"
why: The win_msi module has a number of issues; the M(win_package) module is easier to maintain and use.
alternative: Use M(win_package) instead.
short_description: Installs and uninstalls Windows MSI files short_description: Installs and uninstalls Windows MSI files
description: description:
- Installs or uninstalls a Windows MSI file that is already located on the - Installs or uninstalls a Windows MSI file that is already located on the

View file

@ -105,15 +105,6 @@
failed_modules: "{{ failed_modules }} + [ 'nxos_vxlan_vtep_vni' ]" failed_modules: "{{ failed_modules }} + [ 'nxos_vxlan_vtep_vni' ]"
test_failed: true test_failed: true
- block:
- include_role:
name: nxos_mtu
when: "limit_to in ['*', 'nxos_mtu']"
rescue:
- set_fact:
failed_modules: "{{ failed_modules }} + [ 'nxos_mtu' ]"
test_failed: true
- block: - block:
- include_role: - include_role:
name: nxos_system name: nxos_system

View file

@ -1,2 +0,0 @@
destructive
posix/ci/group1

View file

@ -1,23 +0,0 @@
-----BEGIN CERTIFICATE-----
MIID3TCCAsWgAwIBAgIJAPczDjnFOjH/MA0GCSqGSIb3DQEBCwUAMIGEMQswCQYD
VQQGEwJVUzELMAkGA1UECAwCTkMxDzANBgNVBAcMBkR1cmhhbTEQMA4GA1UECgwH
QW5zaWJsZTEfMB0GA1UEAwwWZG9ja2VydGVzdC5hbnNpYmxlLmNvbTEkMCIGCSqG
SIb3DQEJARYVdGt1cmF0b21pQGFuc2libGUuY29tMB4XDTE1MDMxNzIyMjc1OVoX
DTQyMDgwMjIyMjc1OVowgYQxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJOQzEPMA0G
A1UEBwwGRHVyaGFtMRAwDgYDVQQKDAdBbnNpYmxlMR8wHQYDVQQDDBZkb2NrZXJ0
ZXN0LmFuc2libGUuY29tMSQwIgYJKoZIhvcNAQkBFhV0a3VyYXRvbWlAYW5zaWJs
ZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDIk4D0+QY3obQM
I/BPmI4pFFu734HHz98ce6Qat7WYiGUHsnt3LHw2a6zMsgP3siD1zqGHtk1IipWR
IwZbXm1spww/8YNUEE8wbXlLGI8IPUpg2J7NS2SdYIuN/TrQMqCUt7fFb+7OQjaH
RtR0LtXhP96al3E8BR9G6AiS67XuwdTL4vrXLUWISjNyF2Vj7xQsp8KRrq0qnXhq
pefeBi1fD9DG5f76j3s8lqGiOg9FHegvfodonNGcqE16T/vBhQcf+NjenlFvR2Lh
3wb/RCo/b1IhZHKNx32fJ/WpiKXkrLYFvwtIWtLw6XIwwarc+n7AfGqKnt4h4bAG
a+5aNnlFAgMBAAGjUDBOMB0GA1UdDgQWBBRZpu6oomSlpCvy2VgOHbWwDwVl1jAf
BgNVHSMEGDAWgBRZpu6oomSlpCvy2VgOHbWwDwVl1jAMBgNVHRMEBTADAQH/MA0G
CSqGSIb3DQEBCwUAA4IBAQCqOSFzTgQDww5bkNRCQrg7lTKzXW9bJpJ5NZdTLwh6
b+e+XouRH+lBe7Cnn2RTtuFYVfm8hQ1Ra7GDM3v2mJns/s3zDkRINZMMVXddzl5S
M8QxsFJK41PaL9wepizslkcg19yQkdWJQYPDeFurlFvwtakhZE7ttawYi5bFkbCd
4fchMNBBmcigpSfoWb/L2lK2vVKBcfOdUl+V6k49lpf8u7WZD0Xi2cbBhw17tPj4
ulKZaVNdzj0GFfhpQe/MtDoqxStRpHamdk0Y6fN+CvoW7RPDeVsqkIgCu30MOFuG
A53ZtOc3caYRyGYJtIIl0Rd5uIApscec/6RGiFX6Gab8
-----END CERTIFICATE-----

View file

@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAyJOA9PkGN6G0DCPwT5iOKRRbu9+Bx8/fHHukGre1mIhlB7J7
dyx8NmuszLID97Ig9c6hh7ZNSIqVkSMGW15tbKcMP/GDVBBPMG15SxiPCD1KYNie
zUtknWCLjf060DKglLe3xW/uzkI2h0bUdC7V4T/empdxPAUfRugIkuu17sHUy+L6
1y1FiEozchdlY+8ULKfCka6tKp14aqXn3gYtXw/QxuX++o97PJahojoPRR3oL36H
aJzRnKhNek/7wYUHH/jY3p5Rb0di4d8G/0QqP29SIWRyjcd9nyf1qYil5Ky2Bb8L
SFrS8OlyMMGq3Pp+wHxqip7eIeGwBmvuWjZ5RQIDAQABAoIBAQCVOumfWgf+LBlB
TxvknKRoe/Ukes6cU1S0ZGlcV4KM0i4Y4/poWHiyJLqUMX4yNB3BxNL5nfEyH6nY
Ki74m/Dd/gtnJ9GGIfxJE6pC7Sq9/pvwIjtEkutxC/vI0LeJX6GKBIZ+JyGN5EWd
sF0xdAc9Z7+/VR2ygj0bDFgUt7rMv6fLaXh6i5Ms0JV7I/HkIi0Lmy9FncJPOTjP
/Wb3Rj5twDppBqSiqU2JNQHysWzNbp8nzBGeR0+WU6xkWjjGzVyQZJq4XJQhqqot
t+v+/lF+jObujcRxPRStaA5IoQdmls3l+ubkoFeNp3j6Nigz40wjTJArMu/Q9xQ5
A+kHYNgBAoGBAPVNku0eyz1SyMM8FNoB+AfSpkslTnqfmehn1GCOOS9JPimGWS3A
UlAs/PAPW/H/FTM38eC89GsKKVV8zvwkERNwf+PIGzkQrJgYLxGwoflAKsvFoQi9
PVbIn0TBDZ3TWyNfGul62fEgNen4B46d7kG6l/C3p9eKKCo3sCBgWl8FAoGBANFS
n9YWyAYmHQAWy5R0YeTsdtiRpZWkB0Is9Jr8Zm/DQDNnsKgvXw//qxuWYMi68teK
6o8t5mgDQNWBu3rXrU73f8mMVJNmzSHFbyQEyFOJ9yvI5qMRbJfvdURUje6d3ZUw
G7olKjX0fec4cAG7hbT8sMDvIbnATdhh3VppiEVBAoGBAJKidJnaNpPJ0MkkOTK4
ypOikFWLT4ZtsYsDxiiR3A0wM0CPVu/Kb2oN+oVmKQhX+0xKvQQi79iskljP6ss+
pBaCwXBgRiWumf2xNzHT7H8apHp7APBAb1JZSxvGa2VU2r4iM+wty+of3xqlcZ8H
OU2BRSJYJrTpmWjjMR2pe1whAoGAfMTbMSlzIPcm4h60SlD06Rdp370xDfkvumpB
gwBfrs6bPgjYa+eQqmCjBValagDFL2VGWwHpDKajxqAFuDtGuoMcUG6tGw9zxmWA
0d9n6SObiSW/FAQWzpmVNJ2R3GGM6pg6bsIoXvDU+zXQzbeRA0h7swTW/Xl67Teo
UXQGHgECgYEAjckqv2e39AgBvjxvj9SylVbFNSERrbpmiIRH31MnAHpTXbxRf7K+
/79vUsRfQun9F/+KVfjUyMqRj0PE2tS4ATIjqQsa18RCB4mAE3sNsKz8HbJfzIFq
eEqAWmURm6gRmLmaTMlXS0ZtZaw/A2Usa/DJumu9CsfBu7ZJbDnrQIY=
-----END RSA PRIVATE KEY-----

View file

@ -1 +0,0 @@
D96F3E552F279F46

View file

@ -1 +0,0 @@
testdocker:$apr1$6cYd3tA9$4Dc9/I5Z.bl8/br8O/6B41

View file

@ -1,21 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDYTCCAkkCCQDZbz5VLyefRjANBgkqhkiG9w0BAQUFADCBhDELMAkGA1UEBhMC
VVMxCzAJBgNVBAgMAk5DMQ8wDQYDVQQHDAZEdXJoYW0xEDAOBgNVBAoMB0Fuc2li
bGUxHzAdBgNVBAMMFmRvY2tlcnRlc3QuYW5zaWJsZS5jb20xJDAiBgkqhkiG9w0B
CQEWFXRrdXJhdG9taUBhbnNpYmxlLmNvbTAgFw0xNTAzMTcyMjMxNTBaGA8yMjg4
MTIzMDIyMzE1MFowXjELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAk5DMQ8wDQYDVQQH
DAZEdXJoYW0xEDAOBgNVBAoMB0Fuc2libGUxHzAdBgNVBAMMFmRvY2tlcnRlc3Qu
YW5zaWJsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC7WpI3
QuuARgPufAA0JkGCGIUNWqFyTEngOWvBVEuk5TnDB4x78OCE9j7rr75OxZaSc6Y7
oFTl+hhlgt6sqj+GXehgCHLA97CCc8eUqGv3bwdIIg/hahCPjEWfYzocX1xmUdzN
6klbV9lSO7FGSuk7W4DNga/weRfZmVoPi6jqTvx0tFsGrHVb1evholUKpxaOEYQZ
2NJ22+UXpUyVzN/mw5TAGNG0/yR7sIgCjKYCsYF8k79SfNDMJ1VcCPy3aag45jaz
WoA+OIJJFRkAaPSM5VtnbGBv/slpDVaKfl2ei7Ey3mKx1b7jYMzRz07Gw+zqr1gJ
kBWvfjR7ioxXcN7jAgMBAAEwDQYJKoZIhvcNAQEFBQADggEBAJyF24tCq5R8SJto
EMln0m9dMoJTC5usaBYBUMMe6hV2ikUGaXVDIqY+Yypt1sIcjGnLRmehJbej8iS7
4aypuLc8Fgb4CvW+gY3I3W1iF7ZxIN/4yr237Z9KH1d1uGi+066Sk94OCXlqgsb+
RzU6XOg+PMIjYC/us5VRv8a2qfjIA8getR+19nP+hR6NgIQcEyRKG2FmhkUSAwd8
60FhpW4UmPQmn0ErZmRwdp2hNPj5g3my5iOSi7DzdK4CwZJAASOoWsbQIxP0k4JE
PMo7Ad1YxXlOvNWIA8FLMkRsq3li6KJ17WBdEYgFeuxWpf1/x1WA+WpwEIfC5cuR
A5LkaNI=
-----END CERTIFICATE-----

View file

@ -1,17 +0,0 @@
-----BEGIN CERTIFICATE REQUEST-----
MIICozCCAYsCAQAwXjELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAk5DMQ8wDQYDVQQH
DAZEdXJoYW0xEDAOBgNVBAoMB0Fuc2libGUxHzAdBgNVBAMMFmRvY2tlcnRlc3Qu
YW5zaWJsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC7WpI3
QuuARgPufAA0JkGCGIUNWqFyTEngOWvBVEuk5TnDB4x78OCE9j7rr75OxZaSc6Y7
oFTl+hhlgt6sqj+GXehgCHLA97CCc8eUqGv3bwdIIg/hahCPjEWfYzocX1xmUdzN
6klbV9lSO7FGSuk7W4DNga/weRfZmVoPi6jqTvx0tFsGrHVb1evholUKpxaOEYQZ
2NJ22+UXpUyVzN/mw5TAGNG0/yR7sIgCjKYCsYF8k79SfNDMJ1VcCPy3aag45jaz
WoA+OIJJFRkAaPSM5VtnbGBv/slpDVaKfl2ei7Ey3mKx1b7jYMzRz07Gw+zqr1gJ
kBWvfjR7ioxXcN7jAgMBAAGgADANBgkqhkiG9w0BAQsFAAOCAQEAoPgw9dlA3Ys2
oahtr2KMNFnHnab6hUr/CuDIygkOft+MCX1cPXY1c0R72NQq42TjAFO5UnriJ0Jg
rcWgBAw8TCOHH77ZWawQFjWWoxNTy+bfXNJ002tzc4S/A4s8ytcFQN7E2irbGtUB
ratVaE+c6RvD/o48N4YLUyJbJK84FZ1xMnJI0z5R6XzDWEqYbobzkM/aUWvDTT9F
+F9H5W/3sIhNFVGLygSKbhgrb6eaC8R36fcmTRfYYdT4GrpXFePoZ4LJGCKiiaGV
p8gZzYQ9xjRYDP2OUMacBDlX1Mu5IJ2SCfjavD1hMhB54tWiiw3CRMJcNMql7ob/
ZHH8UDMqgA==
-----END CERTIFICATE REQUEST-----

View file

@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAu1qSN0LrgEYD7nwANCZBghiFDVqhckxJ4DlrwVRLpOU5wweM
e/DghPY+66++TsWWknOmO6BU5foYZYLerKo/hl3oYAhywPewgnPHlKhr928HSCIP
4WoQj4xFn2M6HF9cZlHczepJW1fZUjuxRkrpO1uAzYGv8HkX2ZlaD4uo6k78dLRb
Bqx1W9Xr4aJVCqcWjhGEGdjSdtvlF6VMlczf5sOUwBjRtP8ke7CIAoymArGBfJO/
UnzQzCdVXAj8t2moOOY2s1qAPjiCSRUZAGj0jOVbZ2xgb/7JaQ1Win5dnouxMt5i
sdW+42DM0c9OxsPs6q9YCZAVr340e4qMV3De4wIDAQABAoIBABjczxSIS+pM4E6w
o/JHtV/HUzjPcydQ2mjoFdWlExjB1qV8BfeYoqLibr0mKFIZxH6Q3FmDUGDojH5E
HLq7KQzyv1inJltXQ1Q8exrOMu22DThUVNksEyCJk9+v8lE7km59pJiq46s8gDl6
dG8Il+TporEi6a820qRsxlfTx8m4EUbyPIhf2e2wYdqiscLwj49ZzMs3TFJxN3j4
lLP3QDHz9n8q+XXpUT9+rsePe4D4DVVRLhg8w35zkys36xfvBZrI+9SytSs+r1/e
X4gVhxeX9q3FkvXiw1IDGPr0l5X7SH+5zk7JWuLfFbNBK02zR/Bd2OIaYAOmyIFk
ZzsVfokCgYEA8Cj04S32Tga7lOAAUEuPjgXbCtGYqBUJ/9mlMHJBtyl4vaBRm1Z3
1YQqlL3yGM1F6ZStPWs86vsVaScypr7+RnmQ/uPjz1g2jNI9vomqRkzpzd8/bBwW
J3FCaKFIfl9uQx4ac7piAYdhNXswjQ7Kzn5xgG24i8EkUm6+UxarA38CgYEAx7X+
qOVT+kA5WU1EDIc2x3Au0PhNIXiHOGRLW0MC7Vy1xBrgxfVrz6J8flBXOxmWYjRq
3dFiHA9S7WPQStkgTjzE91sthLefJ8DKXE4IrRkvYXIIX8DqkcFxTHS/OzckTcK/
z79jNOPYA1s+z2jzgd24sslXbqxNz1LqZ/PlRp0CgYEAik8cEF72/aK0/x0uMRAD
IcjPiGCDKTHMq3M9xjPXEtQofBTLSsm2g9n05+qodY4qmEYOq1OKJs3pW8C+U/ek
2xOB5Ll75lqoN9uQwZ3o2UnMUMskbG+UdqyskTNpW5Y8Gx1IIKQTc0vzOOi0YlhF
hjydw1ftM1dNQsgShimE3aMCgYEAwITwFk7kcoTBBBZY+B7Mrtu1Ndt3N0HiUHlW
r4Zc5waNbptefVbF9GY1zuqR/LYA43CWaHj1NAmNrqye2diPrPwmADHUInGEqqTO
LsdG099Ibo6oBe6J8bJiDwsoYeQZSiDoGVPtRcoyraGjXfxVaaac6zTu5RCS/b53
m3hhWH0CgYAqi3x10NpJHInU/zNa1GhI9UVJzabE2APdbPHvoE/yyfpCGhExiXZw
MDImUzc59Ro0pCZ9Bk7pd5LwdjjeJXih7jaRZQlPD1BeM6dKdmJps1KMaltOOJ4J
W0FE34E+Kt5JeIix8zmhxgaAU9NVilaNx5tI/D65Y0inMBZpqedrtg==
-----END RSA PRIVATE KEY-----

View file

@ -1,40 +0,0 @@
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary
upstream docker-registry {
server localhost:5000;
}
server {
listen 8080;
server_name dockertest.ansible.com;
ssl on;
ssl_certificate /etc/pki/tls/certs/dockertest.ansible.com.crt;
ssl_certificate_key /etc/pki/tls/private/dockertest.ansible.com.key;
proxy_set_header Host $http_host; # required for Docker client sake
proxy_set_header X-Real-IP $remote_addr; # pass on real client IP
client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads
# required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
chunked_transfer_encoding on;
location / {
# let Nginx know about our auth file
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/docker-registry.htpasswd;
proxy_pass http://docker-registry;
}
location /_ping {
auth_basic off;
proxy_pass http://docker-registry;
}
location /v1/_ping {
auth_basic off;
proxy_pass http://docker-registry;
}
}

View file

@ -1,20 +0,0 @@
# test code for the service module
# (c) 2014, James Cammarata <jcammarata@ansible.com>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
dependencies:
- prepare_tests

View file

@ -1,5 +0,0 @@
- name: Install docker packages (apt)
apt:
state: present
# Note: add docker-registry when available
name: docker.io,python-docker,netcat-openbsd,nginx

View file

@ -1,16 +0,0 @@
- name: Install docker packages (rht family)
package:
state: present
name: docker-io,docker-registry,python-docker-py,nginx
- name: Install netcat (Fedora)
package:
state: present
name: nmap-ncat
when: ansible_distribution == 'Fedora' or (ansible_os_family == 'RedHat' and ansible_distribution_version is version(7, '>='))
- name: Install netcat (RHEL)
package:
state: present
name: nc
when: ansible_distribution != 'Fedora' and (ansible_os_family == 'RedHat' and ansible_distribution_version is version(7, '<'))

View file

@ -1,58 +0,0 @@
- name: Start docker daemon
service:
name: docker
state: started
- name: Download busybox image
docker:
image: busybox
state: present
pull: missing
- name: Run a small script in busybox
docker:
image: busybox
state: reloaded
pull: always
command: "nc -l -p 2000 -e xargs -n1 echo hello"
detach: True
- name: Get the docker container ip
set_fact: container_ip="{{docker_containers[0].NetworkSettings.IPAddress}}"
- name: Try to access the server
shell: "echo 'world' | nc {{ container_ip }} 2000"
register: docker_output
- name: check that the script ran
assert:
that:
- "'hello world' in docker_output.stdout_lines"
- name: Run a script that sets environment in busybox
docker:
image: busybox
state: reloaded
pull: always
env:
TEST: hello
command: '/bin/sh -c "nc -l -p 2000 -e xargs -n1 echo $TEST"'
detach: True
- name: Get the docker container ip
set_fact: container_ip="{{docker_containers[0].NetworkSettings.IPAddress}}"
- name: Try to access the server
shell: "echo 'world' | nc {{ container_ip }} 2000"
register: docker_output
- name: check that the script ran
assert:
that:
- "'hello world' in docker_output.stdout_lines"
- name: Remove containers
shell: "docker rm -f $(docker ps -aq)"
- name: Remove all images from the local docker
shell: "docker rmi -f $(docker images -q)"

View file

@ -1,22 +0,0 @@
#- include: docker-setup-rht.yml
# when: ansible_distribution in ['Fedora']
#- include: docker-setup-rht.yml
# Packages on RHEL and CentOS 7 are broken, broken, broken. Revisit when
# they've got that sorted out
# CentOS 6 currently broken by conflicting files in python-backports and python-backports-ssl_match_hostname
#when: ansible_distribution in ['RedHat', 'CentOS'] and ansible_lsb.major_release|int == 6
# python-docker isn't available until 14.10. Revist at the next Ubuntu LTS
#- include: docker-setup-debian.yml
# when: ansible_distribution in ['Ubuntu']
#- include: docker-tests.yml
# Add other distributions as the proper packages become available
# when: ansible_distribution in ['Fedora']
#- include: docker-tests.yml
# when: ansible_distribution in ['RedHat', 'CentOS'] and ansible_lsb.major_release|int == 6
#- include: registry-tests.yml
# Add other distributions as the proper packages become available
# when: ansible_distribution in ['Fedora']

View file

@ -1,188 +0,0 @@
- name: Configure a private docker registry
service:
name: docker-registry
state: started
- name: Retrieve busybox image from docker hub
docker:
image: busybox
state: present
pull: missing
- name: Get busybox image id
shell: "docker images | grep busybox | awk '{ print $3 }'"
register: image_id
- name: Tag docker image into the local registry
command: "docker tag {{ image_id.stdout_lines[0] }} localhost:5000/mine"
- name: Push docker image into the private registry
command: "docker push localhost:5000/mine"
- name: Remove all images from the local docker
shell: "docker rmi -f {{image_id.stdout_lines[0]}}"
- name: Get number of images in docker
command: "docker images"
register: docker_output
# docker prints a header so the header should be all that's present
- name: Check that there are no images in docker
assert:
that:
- "{{ docker_output.stdout_lines| length }} <= 1 "
- name: Retrieve the image from private docker registry
docker:
image: "localhost:5000/mine"
state: present
pull: missing
insecure_registry: True
- name: Run a small script in the new image
docker:
image: "localhost:5000/mine"
state: reloaded
pull: always
command: "nc -l -p 2000 -e xargs -n1 echo hello"
detach: True
insecure_registry: True
- name: Get the docker container id
shell: "docker ps | grep mine | awk '{ print $1 }'"
register: container_id
- name: Get the docker container ip
shell: "docker inspect {{ container_id.stdout_lines[0] }} | grep IPAddress | awk -F '\"' '{ print $4 }'"
register: container_ip
- name: Pause a few moments because docker is not reliable
pause:
seconds: 40
- name: Try to access the server
shell: "echo 'world' | nc {{ container_ip.stdout_lines[0] }} 2000"
register: docker_output
- name: check that the script ran
assert:
that:
- "'hello world' in docker_output.stdout_lines"
- name: Remove containers
shell: "docker rm -f $(docker ps -aq)"
- shell: docker images -q
- name: Remove all images from the local docker
shell: "docker rmi -f $(docker images -q)"
- name: Get number of images in docker
command: "docker images"
register: docker_output
- name: Check that there are no images in docker
assert:
that:
- "{{ docker_output.stdout_lines| length }} <= 1"
#
# Private registry secured with an SSL proxy
#
- name: Set selinux to allow docker to connect to nginx
seboolean:
name: docker_connect_any
state: yes
- name: Set selinux to allow nginx to connect to docker
seboolean:
name: httpd_can_network_connect
state: yes
- name: Setup nginx with a user/password
copy:
src: docker-registry.htpasswd
dest: /etc/nginx/docker-registry.htpasswd
- name: Setup nginx with a config file
copy:
src: nginx-docker-registry.conf
dest: /etc/nginx/conf.d/nginx-docker-registry.conf
- name: Setup nginx docker cert
copy:
src: dockertest.ansible.com.crt
dest: /etc/pki/tls/certs/dockertest.ansible.com.crt
- name: Setup nginx docker key
copy:
src: dockertest.ansible.com.key
dest: /etc/pki/tls/private/dockertest.ansible.com.key
- name: Setup the ca keys
copy:
src: devdockerCA.crt
dest: /etc/pki/ca-trust/source/anchors/devdockerCA.crt
- name: Update the ca bundle
command: update-ca-trust extract
- name: Restart docker daemon
service:
name: docker
state: restarted
- name: Start nginx
service:
name: nginx
state: restarted
- name: Add domain name to hosts
lineinfile:
line: "127.0.0.1 dockertest.ansible.com"
dest: /etc/hosts
state: present
- name: Start a container after getting it from a secured private registry
docker:
image: dockertest.ansible.com:8080/mine
registry: dockertest.ansible.com:8080
username: "testdocker"
password: "testdocker"
state: running
command: "nc -l -p 2000 -e xargs -n1 echo hello"
detach: True
- name: Get the docker container id
shell: "docker ps | grep mine | awk '{ print $1 }'"
register: container_id
- name: Get the docker container ip
shell: "docker inspect {{ container_id.stdout_lines[0] }} | grep IPAddress | awk -F '\"' '{ print $4 }'"
register: container_ip
- name: Pause a few moments because docker is not reliable
pause:
seconds: 40
- name: Try to access the server
shell: "echo 'world' | nc {{ container_ip.stdout_lines[0] }} 2000"
register: docker_output
- name: check that the script ran
assert:
that:
- "'hello world' in docker_output.stdout_lines"
- name: Remove containers
shell: "docker rm $(docker ps -aq)"
- name: Remove all images from the local docker
shell: "docker rmi -f $(docker images -q)"
- name: Remove domain name to hosts
lineinfile:
line: "127.0.0.1 dockertest.ansible.com"
dest: /etc/hosts
state: absent

View file

@ -1,2 +0,0 @@
cloud/aws
posix/ci/cloud/group4/aws

View file

@ -1,2 +0,0 @@
---
# defaults file for test_ec2_vpc

View file

@ -1,3 +0,0 @@
dependencies:
- prepare_tests
- setup_ec2

View file

@ -1,2 +0,0 @@
---
# tasks file for test_ec2_vpc

View file

@ -1,2 +0,0 @@
---
# vars file for test_ec2_vpc

View file

@ -1,2 +0,0 @@
---
testcase: "*"

View file

@ -1,2 +0,0 @@
dependencies:
- prepare_nxos_tests

View file

@ -1,33 +0,0 @@
---
- name: collect common cli test cases
find:
paths: "{{ role_path }}/tests/common"
patterns: "{{ testcase }}.yaml"
connection: local
register: test_cases
- name: collect cli test cases
find:
paths: "{{ role_path }}/tests/cli"
patterns: "{{ testcase }}.yaml"
connection: local
register: cli_cases
- set_fact:
test_cases:
files: "{{ test_cases.files }} + {{ cli_cases.files }}"
- name: set test_items
set_fact: test_items="{{ test_cases.files | map(attribute='path') | list }}"
- name: run test cases (connection=network_cli)
include: "{{ test_case_to_run }} ansible_connection=network_cli connection={}"
with_items: "{{ test_items }}"
loop_control:
loop_var: test_case_to_run
- name: run test case (connection=local)
include: "{{ test_case_to_run }} ansible_connection=local connection={{ cli }}"
with_first_found: "{{ test_items }}"
loop_control:
loop_var: test_case_to_run

View file

@ -1,3 +0,0 @@
---
- { include: cli.yaml, tags: ['cli'] }
- { include: nxapi.yaml, tags: ['nxapi'] }

View file

@ -1,27 +0,0 @@
---
- name: collect common nxapi test cases
find:
paths: "{{ role_path }}/tests/common"
patterns: "{{ testcase }}.yaml"
connection: local
register: test_cases
- name: collect nxapi test cases
find:
paths: "{{ role_path }}/tests/nxapi"
patterns: "{{ testcase }}.yaml"
connection: local
register: nxapi_cases
- set_fact:
test_cases:
files: "{{ test_cases.files }} + {{ nxapi_cases.files }}"
- name: set test_items
set_fact: test_items="{{ test_cases.files | map(attribute='path') | list }}"
- name: run test cases (connection=local)
include: "{{ test_case_to_run }} ansible_connection=local connection={{ nxapi }}"
with_items: "{{ test_items }}"
loop_control:
loop_var: test_case_to_run

View file

@ -1,70 +0,0 @@
---
- debug: msg="START connection={{ ansible_connection }}/set_mtu.yaml"
- debug: msg="Using provider={{ connection.transport }}"
when: ansible_connection == "local"
- set_fact: testint="{{ nxos_int1 }}"
- name: setup
nxos_config:
lines:
- no switchport
- no mtu
parents: "interface {{ testint }}"
match: none
provider: "{{ connection }}"
- name: configure interface mtu
nxos_mtu:
interface: "{{ testint }}"
mtu: 2000
provider: "{{ connection }}"
register: result
- assert:
that:
- "result.changed == true"
- name: verify interface mtu
nxos_mtu:
interface: "{{ testint }}"
mtu: 2000
provider: "{{ connection }}"
register: result
- assert:
that:
- "result.changed == false"
- name: configure invalid (odd) interface mtu
nxos_mtu:
interface: "{{ testint }}"
mtu: 2001
provider: "{{ connection }}"
register: result
ignore_errors: yes
- assert:
that:
- "result.failed == true"
- name: configure invalid (large) mtu setting
nxos_mtu:
interface: "{{ testint }}"
mtu: 100000
provider: "{{ connection }}"
register: result
ignore_errors: yes
- assert:
that:
- "result.failed == true"
- name: teardown
nxos_config:
lines: no mtu
parents: "interface {{ testint }}"
match: none
provider: "{{ connection }}"
- debug: msg="END connection={{ ansible_connection }}/set_mtu.yaml"

View file

@ -1,60 +0,0 @@
---
- debug: msg="START connection={{ ansible_connection }}/sysmtu.yaml"
- debug: msg="Using provider={{ connection.transport }}"
when: ansible_connection == "local"
- name: setup
nxos_config:
lines: no system jumbomtu
match: none
provider: "{{ connection }}"
- name: configure system mtu
nxos_mtu:
sysmtu: 2000
provider: "{{ connection }}"
register: result
- assert:
that:
- "result.changed == true"
- name: verify system mtu
nxos_mtu:
sysmtu: 2000
provider: "{{ connection }}"
register: result
- assert:
that:
- "result.changed == false"
- name: configure invalid (odd) system mtu
nxos_mtu:
sysmtu: 2001
provider: "{{ connection }}"
register: result
ignore_errors: yes
- assert:
that:
- "result.failed == true"
- name: configure invalid (large) system mtu setting
nxos_mtu:
sysmtu: 10000
provider: "{{ connection }}"
register: result
ignore_errors: yes
- assert:
that:
- "result.failed == true"
- name: teardown
nxos_config:
lines: no system jumbomtu
match: none
provider: "{{ connection }}"
- debug: msg="END connection={{ ansible_connection }}/sysmtu.yaml"

View file

@ -427,7 +427,7 @@ lib/ansible/modules/network/ordnance/ordnance_facts.py E322
 lib/ansible/modules/network/panos/panos_nat_rule.py E322
 lib/ansible/modules/network/panos/panos_sag.py E322
 lib/ansible/modules/network/panos/panos_sag.py E323
-lib/ansible/modules/network/panos/panos_security_policy.py E322
+lib/ansible/modules/network/panos/_panos_security_policy.py E322
 lib/ansible/modules/network/panos/panos_security_rule.py E322
 lib/ansible/modules/network/radware/vdirect_commit.py E321
 lib/ansible/modules/network/radware/vdirect_file.py E321

View file

@ -505,7 +505,14 @@ class ModuleValidator(Validator):
         return min(linenos)
 
-    def _find_main_call(self):
+    def _find_main_call(self, look_for="main"):
+        """ Ensure that the module ends with:
+            if __name__ == '__main__':
+                main()
+        OR, in the case of modules that are in the docs-only deprecation phase
+            if __name__ == '__main__':
+                removed_module()
+        """
         lineno = False
         if_bodies = []
         for child in self.ast.body:
@ -547,13 +554,13 @@ class ModuleValidator(Validator):
             if isinstance(child, ast.Expr):
                 if isinstance(child.value, ast.Call):
                     if (isinstance(child.value.func, ast.Name) and
-                            child.value.func.id == 'main'):
+                            child.value.func.id == look_for):
                         lineno = child.lineno
                         if lineno < self.length - 1:
                             self.reporter.error(
                                 path=self.object_path,
                                 code=104,
-                                msg='Call to main() not the last line',
+                                msg=('Call to %s() not the last line' % look_for),
                                 line=lineno
                             )
@ -561,7 +568,7 @@ class ModuleValidator(Validator):
             self.reporter.error(
                 path=self.object_path,
                 code=103,
-                msg='Did not find a call to main'
+                msg=('Did not find a call to %s()' % look_for)
             )
 
         return lineno or 0
@ -1220,10 +1227,18 @@ class ModuleValidator(Validator):
             )
             return
 
+        end_of_deprecation_should_be_docs_only = False
         if self._python_module():
             doc_info, docs = self._validate_docs()
 
-        if self._python_module() and not self._just_docs():
+            # See if current version => deprecated.removed_in, ie, should be docs only
+            if 'deprecated' in docs and docs['deprecated'] is not None:
+                removed_in = docs.get('deprecated')['removed_in']
+                strict_ansible_version = StrictVersion('.'.join(ansible_version.split('.')[:2]))
+                end_of_deprecation_should_be_docs_only = strict_ansible_version >= removed_in
+                # FIXME if +2 then file should be empty? - maybe add this only in the future
+
+        if self._python_module() and not self._just_docs() and not end_of_deprecation_should_be_docs_only:
             self._validate_argument_spec(docs)
             self._check_for_sys_exit()
             self._find_blacklist_imports()
@ -1239,11 +1254,14 @@ class ModuleValidator(Validator):
             self._find_ps_docs_py_file()
             self._check_gpl3_header()
 
-        if not self._just_docs():
+        if not self._just_docs() and not end_of_deprecation_should_be_docs_only:
             self._check_interpreter(powershell=self._powershell_module())
             self._check_type_instead_of_isinstance(
                 powershell=self._powershell_module()
            )
+        if end_of_deprecation_should_be_docs_only:
+            # Ensure that `if __name__ == '__main__':` calls `removed_module()` which ensure that the module has no code in
+            main = self._find_main_call('removed_module')
 
 
 class PythonPackageValidator(Validator):
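
The two new checks above work together: once the running Ansible version (compared via `StrictVersion`) reaches `deprecated.removed_in`, validation insists that the module is documentation only and that its `if __name__ == '__main__':` block calls `removed_module()` rather than `main()`. Below is a minimal sketch of such a docs-only stub; the module name, the deprecation details and the `removed_module` import path are illustrative assumptions, not content lifted from this commit.

```python
#!/usr/bin/python
# Hypothetical docs-only stub, e.g. lib/ansible/modules/network/example/_example_mtu.py
# (module name, author and deprecation details are invented for illustration).

ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['deprecated'],
                    'supported_by': 'community'}

DOCUMENTATION = '''
---
module: example_mtu
short_description: Manage MTU settings (deprecated example).
description:
  - Manage MTU settings on an example platform.
version_added: "2.2"
author: "Jane Doe (@janedoe)"
deprecated:
  removed_in: "2.5"
  why: Replaced by a consolidated platform module.
  alternative: Use M(example_system) instead.
'''

EXAMPLES = '''
'''

RETURN = '''
'''

# Assumed helper location; it fails the task with a message pointing at the alternative.
from ansible.module_utils.common.removed import removed_module

if __name__ == '__main__':
    removed_module()
```

With this layout `_find_main_call('removed_module')` finds the call on the last line, and the `deprecated:` block satisfies the schema change in the next file.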

View file

@ -83,13 +83,32 @@ def return_schema(data):
     )
 
 
+def deprecation_schema():
+    deprecation_schema_dict = {
+        # Only list branches that are deprecated or may have docs stubs in
+        # Deprecation cycle changed at 2.4 (though not retroactively)
+        # 2.3 -> removed_in: "2.5" + n for docs stub
+        # 2.4 -> removed_in: "2.8" + n for docs stub
+        Required('removed_in'): Any("2.2", "2.3", "2.4", "2.5", "2.8", "2.9"),
+        Required('why'): Any(*string_types),
+        Required('alternative'): Any(*string_types),
+        'removed': Any(True),
+    }
+    return Schema(
+        deprecation_schema_dict,
+        extra=PREVENT_EXTRA
+    )
+
+
 def doc_schema(module_name):
+    deprecated_module = False
+
     if module_name.startswith('_'):
         module_name = module_name[1:]
-    return Schema(
-        {
+        deprecated_module = True
+    doc_schema_dict = {
         Required('module'): module_name,
-        'deprecated': Any(*string_types),
         Required('short_description'): Any(*string_types),
         Required('description'): Any(list_string_types, *string_types),
         Required('version_added'): Any(float, *string_types),
@ -99,7 +118,16 @@ def doc_schema(module_name):
         'todo': Any(None, list_string_types, *string_types),
         'options': Any(None, *list_dict_option_schema),
         'extends_documentation_fragment': Any(list_string_types, *string_types)
-        },
+    }
+
+    if deprecated_module:
+        deprecation_required_scheme = {
+            Required('deprecated'): Any(deprecation_schema()),
+        }
+        doc_schema_dict.update(deprecation_required_scheme)
+
+    return Schema(
+        doc_schema_dict,
         extra=PREVENT_EXTRA
     )
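
To make the effect of `deprecation_schema()` concrete, here is a small standalone sketch that exercises an equivalent schema with voluptuous directly. The sample documents are invented, and plain `str` stands in for the `Any(*string_types)` used in the real file.

```python
# Standalone sketch of the deprecation schema behaviour (assumes the voluptuous package).
from voluptuous import PREVENT_EXTRA, Any, MultipleInvalid, Required, Schema

deprecation_schema = Schema(
    {
        Required('removed_in'): Any("2.2", "2.3", "2.4", "2.5", "2.8", "2.9"),
        Required('why'): str,          # simplified from Any(*string_types)
        Required('alternative'): str,  # simplified from Any(*string_types)
        'removed': Any(True),
    },
    extra=PREVENT_EXTRA,
)

# A well-formed block passes validation silently.
deprecation_schema({
    'removed_in': "2.8",
    'why': "Functionality moved into a newer module.",
    'alternative': "Use M(newer_module) instead.",
})

# A block that omits a required key (or adds an unknown one) raises MultipleInvalid.
try:
    deprecation_schema({'removed_in': "2.8", 'why': "Functionality moved."})
except MultipleInvalid as exc:
    print(exc)  # e.g. "required key not provided @ data['alternative']"
```

Because `doc_schema()` only merges `Required('deprecated')` into the dict when the module name starts with an underscore, ordinary modules are unaffected while deprecated ones must carry a complete block.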

View file

@ -1,2 +1 @@
 lib/ansible/modules/utilities/logic/async_status.py
-lib/ansible/modules/utilities/helper/_accelerate.py

View file

@ -1,20 +0,0 @@
import collections
import os
import unittest

from ansible.modules.cloud.docker._docker import get_split_image_tag


class DockerSplitImageTagTestCase(unittest.TestCase):

    def test_trivial(self):
        self.assertEqual(get_split_image_tag('test'), ('test', 'latest'))

    def test_with_org_name(self):
        self.assertEqual(get_split_image_tag('ansible/centos7-ansible'), ('ansible/centos7-ansible', 'latest'))

    def test_with_tag(self):
        self.assertEqual(get_split_image_tag('test:devel'), ('test', 'devel'))

    def test_with_tag_and_org_name(self):
        self.assertEqual(get_split_image_tag('ansible/centos7-ansible:devel'), ('ansible/centos7-ansible', 'devel'))